# Machine Learning for Predictive Deployment of UAVs with Multiple Access

Linyan Lu1, Zhaohui Yang1, Mingzhe Chen2, Zelin Zang3, and Mohammad Shikh-Bahaei1. 1Centre for Telecommunications Research, Department of Engineering, King’s College London, London WC2B 4BG, UK. 2Chinese University of Hong Kong, Shenzhen, 518172, China, and also with the Department of Electrical Engineering, Princeton University, Princeton, NJ, 08544, USA. 3College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310027, China.

###### Abstract

In this paper, a machine learning based deployment framework of unmanned aerial vehicles (UAVs) is studied. In the considered model, UAVs are deployed as flying base stations (BSs) to offload heavy traffic from ground BSs. Due to the time-varying traffic distribution, a long short-term memory (LSTM) based prediction algorithm is introduced to predict the future cellular traffic. To predict the user service distribution, a KEG algorithm, which is a joint K-means and expectation maximization (EM) algorithm based on a Gaussian mixture model (GMM), is proposed for determining the service area of each UAV. Based on the predicted traffic, the optimal UAV positions are derived and three multi-access techniques are compared so as to minimize the total transmit power. Simulation results show that the proposed method can reduce up to 24% of the total power consumption compared to the conventional method without traffic prediction. Besides, rate splitting multiple access (RSMA) requires lower transmit power than frequency division multiple access (FDMA) and time division multiple access (TDMA).

###### Index Terms: UAV Deployment, LSTM, K-means, EM, GMM, RSMA

## I Introduction

As user demands for communication services grow dramatically, traditional base stations (BSs) cannot meet the required cellular traffic demand, which can lead to a bottleneck in cellular communication [1, 2, 3]. Recently, there has been growing interest in the study of unmanned aerial vehicle (UAV) communication due to its excellent attributes of versatility, maneuverability and flexibility [4]. UAVs are gradually playing an important role in wireless communications, as they can offer low-cost and efficient wireless connectivity for devices. UAVs acting as flying BSs are one of the most important research topics in UAV communication. Through location adjustment, obstacle avoidance, and line-of-sight (LoS) link reinforcement, UAVs are able to offload data traffic from overloaded BSs and increase the connectivity of the wireless network so as to improve the communication throughput, coverage, and energy efficiency [5]. Therefore, utilizing UAVs is a feasible and beneficial option to ensure the connectivity of the wireless communication network by meeting the surging data demands. For efficient and rapid dispatch of UAVs, the prediction of potential hotspot areas plays a crucial role in helping network operators acquire information on the occurrence and degree of congestion in advance, reducing the overall network communication delay [6, 7, 8]. Machine learning techniques are useful tools that can efficiently predict the distribution of future traffic data [9, 10, 11, 12, 13, 14, 15]. With such predictions, the target locations of UAVs can be specified beforehand and the deployment can be more intelligent and on-demand.
There are a number of existing works investigating the applications of UAV deployment in communication. In UAV deployment, UAV service areas and their optimal placements are two critical factors [16, 17, 18, 19]. In [16], UAV altitudes and 2-D locations are optimized based on circle packing theory. UAV placements and service area boundaries are then determined separately for power efficiency in [17]. To dispatch UAVs efficiently, several intelligent and practical methods have been proposed. In [18], the functional split scheme selection of each UAV is jointly optimized with UAV deployment. In [19], an adaptive deployment scheme is introduced to optimize the UAV displacement direction so that the time-dependent traffic of mobile users can be served in real time. In UAV communication, machine learning techniques are also applied to improve system performance [20, 21, 22]. In [23], a machine learning assisted cell selection method is proposed for drones in cellular networks. In [20], in order to provide more efficient downlink service, a learning approach based on the weighted expectation maximization (WEM) algorithm is used to predict the downlink user traffic demand; meanwhile, a traffic load contract based on contract theory is introduced. The main contribution of this work is a predictive UAV deployment scheme for UAV-assisted networking with ML approaches. Our key contributions include:

* • The cellular traffic is predicted by a BP neural network model according to the analysis of previous data. Then, a joint K-means and EM algorithm based on a Gaussian mixture model (KEG) is proposed to divide the entire service area into clusters serving as UAV service areas according to temporal and spatial patterns [24]. UAV locations are optimized so as to minimize the total transmit power of the whole UAV swarm network. Three different access techniques are compared in the simulation.
* • The results show that the proposed UAV deployment framework can reduce the overall transmit power by over 23.85% compared to the conventional method. Besides, it is also shown that rate splitting multiple access (RSMA) can decrease the total power by up to 35.5% and 66.4% compared to frequency division multiple access (FDMA) and time division multiple access (TDMA), respectively.

## II System Model and Problem Formulation

Given a time-dependent UAV-assisted wireless network, a group of ground users is distributed in a geographical area $C$. A set $\bm{I}$ of $I$ UAVs assists a set $\bm{J}$ of $J$ ground BSs by offloading cellular traffic for congestion alleviation, so the ground users in a time-variant hotspot area communicate with UAVs over air-ground links when the ground BSs are overloaded. It is assumed that the height of all users and BSs is negligible compared to that of the UAVs. Besides, each UAV has directional antennas, so the transmissions of different UAVs do not interfere with each other. For convenient elaboration, we refer to the ground users served by a UAV as _aerial users_ and to the service area of a UAV $i$ as an _aerial cell_, denoted $C_{i}$. In order to treat the communication of all users fairly, the aerial cells are supposed to completely cover the entire area without any overlaps.
Because UAVs have limited energy, they should be deployed efficiently.

### II-A Air-ground Model

We assume that a ground user $j\in\bm{J}$ is located at $(x,y)\in C$ and a UAV $i\in\bm{I}$ is located at $(x_{i},y_{i},h_{i})$ with aerial cell $C_{i}$, so the uplink received power between a UAV $i$ and a ground user $j$ is calculated as: $P_{r,ij}[dB]=P_{ij}[dB]+G_{ij}[dB]-L_{ij}[dB]-r_{ij}[dB],$ (1) where $P_{ij}$ is the transmit power, $G_{ij}$ is the antenna gain, $L_{ij}$ is the free space path loss, and $r_{ij}$ is the excessive path loss with a Gaussian distribution that depends on the category of the link. For mathematical tractability, we assume that the beam alignment of all ground-air links is perfect and that all UAV antenna gains are the same. Thus, $G_{ij}$ can be a constant $G$. The free space path loss has a specific formula: $L_{ij}[dB]=10n\log(\frac{4\pi f_{c}d_{ij}}{c}),$ (2) where $n\geq 2$ is the path loss exponent, $f_{c}$ is the system carrier frequency, $c$ is the speed of light, and $d_{ij}=[(x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}+h_{i}^{2}]^{1/2}$ is the distance between UAV $i$ and user $j$ [24]. In general, air-to-ground transmission is separated into two main categories: the line-of-sight (LoS) link and the non-line-of-sight (NLoS) link. The NLoS link suffers a higher excessive path loss owing to shadowing and blockage. Figure 1 illustrates these two links.

Figure 1: The LoS Link and the NLoS Link of UAV Transmission

The excessive path losses of these two links can be expressed as ${r_{i}^{LoS}}\sim{\cal N}(\mu_{LoS},\sigma_{LoS}^{2})$ and ${r_{i}^{NLoS}}\sim{\cal N}(\mu_{NLoS},\sigma_{NLoS}^{2})$, respectively. The probability of occurrence of the LoS link is similar to a sigmoid function [25]: $p_{ij}^{LoS}=\frac{1}{1+a\exp{[b(a-\theta_{ij})]}},$ (3) where $a$ and $b$ are environment constant coefficients and $\theta_{ij}=\sin^{-1}(h_{i}/d_{ij})$ is the elevation angle between UAV $i$ and user $j$, so the probability of the NLoS link is $p_{ij}^{NLoS}=1-p_{ij}^{LoS}$ [24]. Therefore, the average excessive path loss in the uplink transmission is: $r_{ij}={r_{i}^{LoS}}p_{ij}^{LoS}+{r_{i}^{NLoS}}p_{ij}^{NLoS}$ (4) To this end, the uplink data rate between UAV $i$ and user $j$ can be expressed as: $R_{ij}=W_{ij}\log_{2}(\frac{P_{r,ij}}{N_{0}}+1)\ (bits/s)$ (5) where $W_{ij}$ is the bandwidth allocated between UAV $i$ and user $j$, $N_{0}=n_{0}W_{ij}$ is the power of the additive white Gaussian noise and $n_{0}$ is its average power spectral density. For a tractable formulation, each UAV is assumed to offer sufficient overall bandwidth and all transmit bandwidths are assumed to be a constant value $W$.
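To make the air-ground model concrete, the following minimal Python sketch evaluates the LoS probability of (3), the average excessive path loss of (4) and the resulting uplink rate of (5) for a single UAV-user pair. It is only an illustration of the formulas above: the environment constants `a` and `b` and all numeric inputs are placeholder values, not parameters taken from this paper.

```python
import math

def uplink_rate(uav_xyz, user_xy, p_tx_dbm, a=9.61, b=0.16,
                mu_los_db=3.0, mu_nlos_db=23.0, g_db=10.0,
                fc=5e9, n=2.0, w=1e6, n0_dbm_hz=-140.0):
    """Average uplink rate (bits/s) of one UAV-user pair, Eqs. (1)-(5)."""
    xi, yi, hi = uav_xyz
    xj, yj = user_xy
    d = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + hi ** 2)  # 3-D distance
    theta = math.degrees(math.asin(hi / d))                   # elevation angle
    p_los = 1.0 / (1.0 + a * math.exp(b * (a - theta)))       # Eq. (3)
    r_db = p_los * mu_los_db + (1 - p_los) * mu_nlos_db       # Eq. (4), mean loss
    l_db = 10 * n * math.log10(4 * math.pi * fc * d / 3e8)    # Eq. (2)
    p_rx_dbm = p_tx_dbm + g_db - l_db - r_db                  # Eq. (1)
    noise_dbm = n0_dbm_hz + 10 * math.log10(w)                # N0 = n0 * W
    snr = 10 ** ((p_rx_dbm - noise_dbm) / 10)
    return w * math.log2(1 + snr)                             # Eq. (5)

# Example: UAV at 200 m altitude, user roughly 112 m away on the ground
print(uplink_rate((0.0, 0.0, 200.0), (100.0, 50.0), p_tx_dbm=20.0))
```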
### II-B Multi-access Modes

In this paper, three different multi-access techniques are considered: rate splitting multiple access (RSMA), frequency division multiple access (FDMA) and time division multiple access (TDMA) [26]. We take two users as an example for the theoretical formulas, and let $R_{1}$ and $R_{2}$ be the actual data rates of the two users and $g_{1}$ and $g_{2}$ be their channel gains, which include the antenna gain, the free space path loss and the excessive path loss. RSMA is a multi-access mode with coding and decoding techniques; the constraints for maximizing the sum-rate of users in uplink transmission are shown below [27, 26, 28, 29, 30, 31, 32, 33, 34]: $\begin{cases}R_{1}\leq W\log_{2}(1+g_{1}P_{1}/(Wn_{0}))\\\ R_{2}\leq W\log_{2}(1+g_{2}P_{2}/(Wn_{0}))\\\ R_{1}+R_{2}\leq W\log_{2}(1+(g_{1}P_{1}+g_{2}P_{2})/(Wn_{0}))\end{cases}$ (6) FDMA allocates the BS bandwidth among users to avoid interference between transmissions in the same area, where $b_{1}$ and $b_{2}$ are the weight coefficients: $\begin{cases}R_{1}\leq Wb_{1}\log_{2}(1+g_{1}P_{1}/(Wn_{0}b_{1}))\\\ R_{2}\leq Wb_{2}\log_{2}(1+g_{2}P_{2}/(Wn_{0}b_{2}))\\\ b_{1}+b_{2}=1,b_{1}\geq 0,b_{2}\geq 0\end{cases}$ (7) Similarly, TDMA assigns partial time periods to users, who can then use the whole bandwidth: $\begin{cases}R_{1}\leq Wb_{1}\log_{2}(1+g_{1}P_{1}/(Wn_{0}))\\\ R_{2}\leq Wb_{2}\log_{2}(1+g_{2}P_{2}/(Wn_{0}))\\\ b_{1}+b_{2}=1,b_{1}\geq 0,b_{2}\geq 0\end{cases}$ (8) In the simulation and analysis section, we compare these three multi-access modes to choose the best one for minimizing the total transmit power.

### II-C Uplink Transmit Power Computation

Network operators need to assign UAVs to traffic congestion areas so as to offload the heavy traffic from busy BSs. For UAVs, continuous movement consumes too much power. Thus, we need to analyze the traffic offloading situation. A data set is presented as a matrix $\bm{D}$: $\bm{D}=\\{D^{t}_{d}(x,y)\mid t\in\\{T,...,24T\\},d\in\\{1,...,8\\},(x,y)\in C\\}$ (9) where $T$ is a period of one hour, $t$ is the time moment and $d$ is the day. $D^{t}_{d}(x,y)$ represents the amount of cellular traffic offloaded from a BS located at $(x,y)$ in a period of $T$ on day $d$ [24]. In this paper, for the convenience of simulation and analysis, we assume that all cellular traffic of the BSs is totally offloaded to UAVs. Since the positions of mobile users are uncertain and most mobile users only move around a single BS within an hour, we assume that all ground receivers have the same positions as their nearest BSs. After obtaining the future traffic information predicted by the ML methods based on the matrix $\bm{D}$, the required average data rate within an aerial cell $C_{i}$ in a period of $T$ is given. Communication can be ensured only when the communication throughput of the UAV is not less than the demanded data rate. Therefore, the communication condition is formulated as: $\iint_{C_{i}}R_{i}(x,y)dxdy\geq\frac{1}{T}\iint_{C_{i}}D^{t}_{d}(x,y)dxdy$ (10) where $R_{i}(x,y)$ is the maximum data rate of the overall transmission between a group of ground users in a ground cell with a BS at $(x,y)$ and a UAV $i$. We can simplify this condition as $R_{i}(x,y)\geq D^{t}_{d}(x,y)/T$ (11) Let $D^{t}_{d}(x,y)/T=\alpha(x,y)$, where $\alpha(x,y)$ is the minimal required average data rate. Combining (5) and (11), the minimum transmit power that should be provided by a UAV is: $P_{i}^{min}(x,y)=(2^{\frac{\alpha(x,y)}{W}}-1)n_{0}WL_{i}(x,y)r_{i}(x,y)/G$ (12) where both $L_{i}(x,y)$ and $r_{i}(x,y)$ are taken between the UAV $i$ and the group of ground users in the ground cell of the BS at $(x,y)$; $L_{i}(x,y)$ is the free space path loss and $r_{i}(x,y)$ is the excessive path loss. This equation provides a target basis for UAV location optimization.
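As a quick illustration of (12), the sketch below converts the dB quantities to linear scale and computes the minimum transmit power a UAV must provide to meet a required average rate $\alpha(x,y)$. The distance, rate demand and average excessive loss used in the example are invented for illustration.

```python
import math

def min_transmit_power_w(alpha_bps, d, r_db, w=1e6,
                         n0_dbm_hz=-140.0, g_db=10.0, fc=5e9):
    """Minimum UAV transmit power (W) from Eq. (12)."""
    l_lin = (4 * math.pi * fc * d / 3e8) ** 2    # free-space loss (n = 2), linear
    r_lin = 10 ** (r_db / 10)                    # average excessive loss, linear
    g_lin = 10 ** (g_db / 10)                    # antenna gain, linear
    n0 = 10 ** (n0_dbm_hz / 10) / 1000           # noise density in W/Hz
    return (2 ** (alpha_bps / w) - 1) * n0 * w * l_lin * r_lin / g_lin

# Example: 1 Mbit/s demand at 250 m with 5 dB average excessive loss
print(min_transmit_power_w(1e6, d=250.0, r_db=5.0))
```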
Based on this system model, the following subsections develop the proposed ML-based predictive UAV scheduling scheme. According to the real data set in City Cellular Traffic Map [35] and the characteristics of cellular data traffic, we make some rational assumptions for the sake of efficient UAV deployment. Because humans have a certain pace of life with periodic activity, the change of cellular data traffic has a repetitive pattern in daily life [36]. Thus, we assume that the cellular traffic amount has a specific distribution in the same hour across different days and that the data of each hour in the same day is independent. To this end, the real data set can be classified into 24 independent models, and we assume each single model follows a Gaussian mixture model, which is explained in Section II-E. The logical procedure diagram of UAV predictive deployment is shown in Figure 2. At first, the acquired real data set is preprocessed to get the cellular traffic amount of every hour in the first 5 days $\\{D^{t}_{d}\\}_{d=1,t=T}^{5,24T}$ and the topology information of every BS $\\{\bm{x}_{n}\\}_{n=1}^{N}=\\{(x_{n},y_{n})\\}_{n=1}^{N}$, where $x_{n}$ is the relative longitude of the $n^{th}$ BS and $y_{n}$ is its relative latitude. Then, a BP neural network model for cellular traffic amount prediction is developed to predict the hourly cellular amount of the $6^{th}$ day. At the same time, a joint K-means and EM algorithm relying on a GMM is created for aerial cell classification (ground user clustering); the point cluster label $\\{l_{n}\\}_{n=1}^{N}$ of every single point $\bm{x}_{n}$ is obtained, and the points $\bm{x}_{n}$ with the same label value constitute an aerial cell. Then, the optimal UAV locations $\\{\bm{x}_{i}\\}_{i=1}^{K}=\\{(x_{i},y_{i})\\}_{i=1}^{K}$ for minimizing the total transmit power $P_{min}$ are derived according to the system model introduced above. The purpose of the whole process is to achieve on-demand, power-efficient, low-latency UAV-aided network services.

Figure 2: The Logical Procedure Diagram of UAV Predictive Deployment

### II-D Cellular Demand Prediction

In this part, a simple BP neural network model is utilized to predict the future cellular demand in an hour. Neural networks come in a variety of categories, and the BP neural network is the most basic one. The theory is introduced below.

#### II-D1 The Neuron Model

Referring to biological neural systems, the basic component of neural networks is the neuron model. In a biological neural system, every neuron is connected to other neurons. A neuron accepts chemical messages as input from other connected neurons, and its electrical potential changes; if the input is large enough that the potential exceeds a threshold, the neuron is activated and sends chemical messages to other neurons. The famous McCulloch and Pitts (M-P) neuron model shown in Figure 3 is based on the above description and can be formulated in mathematical form: $y=f(\bm{w}\bm{x}-\theta)$ (13) where $\bm{x}=\\{x_{i}\\}_{i=1}^{n}$ is the input of the neuron, $\bm{w}=\\{w_{i}\\}_{i=1}^{n}$ are the corresponding weights, $\theta$ is an activation threshold, and $f(.)$ is an activation function with multiple possible forms. The step function is the ideal activation function, but it is discontinuous and not smooth, so it is inconvenient for differentiation. Hence, in most cases we use a canonical example, the sigmoid function, which squashes a large range of input values into outputs between 0 and 1.
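A minimal sketch of the M-P neuron of (13) with a sigmoid activation; the weights, input and threshold below are arbitrary illustrative numbers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mp_neuron(x, w, theta):
    """McCulloch-Pitts neuron, Eq. (13): y = f(w . x - theta)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) - theta)

print(mp_neuron(x=[0.5, 1.0], w=[0.8, -0.3], theta=0.1))
```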
The step function and the sigmoid function are shown in Figure 4.

Figure 3: M-P Neuron Model

Figure 4: The Step Function and the Sigmoid Function

When enough training data are given, the weights and threshold can be obtained by learning. The threshold can be seen as a dummy node with fixed input $1$, so the learning of weights and threshold can be unified. For a single training sample $(x,y)$, the current output is $\hat{y}$ for the input $x$. The weight update can be written as: $w_{i}\leftarrow w_{i}+\bigtriangleup w_{i}$ (14) $\bigtriangleup w_{i}=\eta(y-\hat{y})x_{i}$ (15) where $\eta\in(0,1)$ is the learning rate. In this way, the weights are adjusted until $y=\hat{y}$. A single functional neuron's learning ability is very limited, so hidden layers are added to compose a multi-layer feedforward neural network.

#### II-D2 Error Backpropagation

For multi-layer networks, backpropagation (BP) is one of the most successful learning algorithms. The BP algorithm is used not only in multi-layer feedforward networks, but also in other networks such as recurrent neural networks. The basic structure of the neural network consists of an input layer and an output layer with multiple hidden layers between them. For a clear and concise description, the number of hidden layers is set to $1$. The input layer is $\bm{x}=\\{x_{i}\\}_{i=1}^{d}$, the hidden layer is $\bm{b}=\\{b_{h}\\}_{h=1}^{q}$, and the output layer is $\hat{\bm{y}}=\\{\hat{y}_{j}\\}_{j=1}^{l}$. The weight between input node $i$ and hidden node $h$ is $v_{ih}$, and the weight between hidden node $h$ and output node $j$ is $w_{hj}$. The variable symbols and the three-layer BP network model are shown in Figure 5, where the weights are also represented.

Figure 5: Multilayer Network Sketch

For the $h^{th}$ hidden node and the $j^{th}$ output node, the inputs and outputs are: $\begin{cases}\alpha_{h}=\sum_{i=1}^{d}v_{ih}x_{i}\\\ b_{h}=f(\alpha_{h}-\gamma_{h})\\\ \beta_{j}=\sum_{h=1}^{q}w_{hj}b_{h}\\\ \hat{y}_{j}=f(\beta_{j}-\theta_{j})\end{cases}$ (16) Then the mean square error can be obtained as follows, where the factor $1/2$ is for convenience in the subsequent calculations: $E=\frac{1}{2}\sum_{j=1}^{l}(\hat{y}_{j}-y_{j})^{2}$ (17) Because the BP algorithm is based on gradient descent, parameters are adjusted along the negative gradient.
Taking the output layer gradient as an example, the weight change can be written as $\bigtriangleup w_{hj}=-\eta\frac{\partial E}{\partial w_{hj}}$ (18) The chain rule then gives $\frac{\partial E}{\partial w_{hj}}=\frac{\partial E}{\partial\hat{y}_{j}}\cdot\frac{\partial\hat{y}_{j}}{\partial\beta_{j}}\cdot\frac{\partial\beta_{j}}{\partial w_{hj}}$ (19) $\frac{\partial\beta_{j}}{\partial w_{hj}}=b_{h}$ (20) The intermediate gradient variable is $g_{j}=-\frac{\partial E}{\partial\hat{y}_{j}}\cdot\frac{\partial\hat{y}_{j}}{\partial\beta_{j}}=\hat{y}_{j}(1-\hat{y}_{j})(y_{j}-\hat{y}_{j})$ (21) Similarly, the gradient for the hidden layer is $e_{h}=-\frac{\partial E}{\partial b_{h}}\cdot\frac{\partial b_{h}}{\partial\alpha_{h}}=b_{h}(1-b_{h})\sum_{j=1}^{l}w_{hj}g_{j}$ (22) Finally, we obtain the updates of the weights and thresholds for the hidden and output layers: $\begin{cases}\bigtriangleup w_{hj}=\eta g_{j}b_{h}\\\ \bigtriangleup\theta_{j}=-\eta g_{j}\\\ \bigtriangleup v_{ih}=\eta e_{h}x_{i}\\\ \bigtriangleup\gamma_{h}=-\eta e_{h}\end{cases}$ (23) Note that the aim of the BP algorithm is to minimize the accumulated error over all training samples. The workflow of the BP algorithm is presented in Algorithm 1. The stopping condition is that the iteration count has reached its maximum or the error $E$ is smaller than a minimum threshold.

Algorithm 1 BP Algorithm [37]
0: A training data set $\bm{S}=\\{(\bm{x}_{k},\bm{y}_{k})\\}^{m}_{k=1}$, a learning rate $\eta$.
Randomly initialize all weights and thresholds $w$, $\theta$, $v$, $\gamma$
repeat
for all $(\bm{x}_{k},\bm{y}_{k})\in\bm{S}$ do
Calculate the current output $\hat{y}_{k}$ using Equation (16)
Calculate the intermediate gradient variables $g$ and $e$ using Equations (21) and (22)
Update the weights and thresholds $w$, $\theta$, $v$, $\gamma$ using Equation (23)
end for
until the stopping condition is reached
Output: weights and thresholds $w$, $\theta$, $v$, $\gamma$

However, the BP algorithm has some inadequacies. Overfitting is a common problem because of the powerful representation capacity of BP networks; early stopping and regularization are often used to prevent it. The limitation of local minima is another problem; simulated annealing, genetic algorithms and stochastic gradient descent are usually utilized to address it [37].
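The update rules (16)-(23) translate directly into code. Below is a minimal NumPy sketch of per-sample BP training for one hidden layer; the hidden width, learning rate, epoch count and XOR toy data are arbitrary choices for illustration, not values from this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, Y, q=10, eta=0.1, epochs=5000, seed=0):
    """One-hidden-layer BP network following Eqs. (16)-(23)."""
    rng = np.random.default_rng(seed)
    d, l = X.shape[1], Y.shape[1]
    V = rng.normal(0, 0.5, (d, q)); gamma = np.zeros(q)  # input -> hidden
    W = rng.normal(0, 0.5, (q, l)); theta = np.zeros(l)  # hidden -> output
    for _ in range(epochs):
        for x, y in zip(X, Y):
            b = sigmoid(x @ V - gamma)             # hidden outputs, Eq. (16)
            y_hat = sigmoid(b @ W - theta)         # network output, Eq. (16)
            g = y_hat * (1 - y_hat) * (y - y_hat)  # output gradient, Eq. (21)
            e = b * (1 - b) * (W @ g)              # hidden gradient, Eq. (22)
            W += eta * np.outer(b, g); theta -= eta * g  # updates, Eq. (23)
            V += eta * np.outer(x, e); gamma -= eta * e
    return V, gamma, W, theta

# Toy example: learn XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)
V, gamma, W, theta = train_bp(X, Y)
print(sigmoid(sigmoid(X @ V - gamma) @ W - theta).round(2))
```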
### II-E Ground User Clustering

Ground user clustering is a key step in UAV deployment, as it defines the partition of the UAV aerial cells. In order to satisfy the fairness and globality of the division, we adopt a KEG algorithm to implement the service area classification. To show the practicability of the algorithm, we use the topology information part of City Cellular Traffic Map [35] as the real data set.

#### II-E1 The K-means Algorithm

The K-means algorithm is one of the most basic non-hierarchical iterative clustering algorithms, belonging to unsupervised learning. Its goal is to cluster data points with very low inter-cluster similarity and very high intra-cluster similarity [38]. Similarity is usually measured by the distances between data points. Given a data set $\bm{X}=\\{\bm{x}_{n}\\}^{N}_{n=1}$ composed of $N$ $M$-dimensional variables $\bm{x}_{n}$, we divide this data set into $K$ clusters, each with a centroid. At first, we set the cluster centroids to be $K$ $M$-dimensional vectors $\\{\bm{\mu}_{k}\\}^{K}_{k=1}$. These vectors are usually initialized by randomly taking $K$ points from the given data set $\bm{X}$. To realize the algorithm's goal, each data point should be as close as possible to its own cluster centroid and as far as possible from the other cluster centroids. Thus, we define the point cluster label $\bm{r}_{nk}$, which indicates the cluster that the data point $\bm{x}_{n}$ belongs to: $\bm{r}_{nk}=\begin{cases}1&\text{if $k=\mathop{\arg\min}_{i}\ \|\bm{x}_{n}-\bm{\mu}_{i}\|$}\\\ 0&\text{otherwise}\end{cases}$ (24) Then, every data point is assigned to its nearest cluster and the point cluster labels are updated after the distances between each data point and the cluster centroids are calculated. Finally, the cluster centroids are updated according to the point labels. The specific K-means algorithm is shown as Algorithm 2.

Algorithm 2 K-means Algorithm [38]
0: The cluster number $K$, the data point set $\bm{X}=\\{\bm{x}_{n}\\}^{N}_{n=1}$.
1: Initialize $\\{\bm{\mu}_{k}\\}^{K}_{k=1}$ as $K$ variables chosen from $\bm{X}$ randomly, initialize $\bm{r}_{nk}$ as an $N\times K$ all-zero matrix
2: repeat
3: for all $\bm{x}_{n}\in X$ do
4: Allocate each data point $\bm{x}_{n}$ to cluster $k^{*}=\mathop{\arg\min}_{i}\ \|\bm{x}_{n}-\bm{\mu}_{i}\|$
5: Update point labels $\bm{r}_{nk}(n,k^{*})=1$
6: Calculate cluster centroids $\bm{\mu}_{k}=\sum\bm{r}_{nk}\bm{x}_{n}/\sum\bm{r}_{nk}$, $k=1,...,K$
7: end for
8: until $\\{\bm{\mu}_{k}\\}^{K}_{k=1}$ no longer changes.
Output: the point cluster labels $\bm{r}_{nk}$, the cluster centroids $\\{\bm{\mu}_{k}\\}^{K}_{k=1}$

On the one hand, although the K-means algorithm is able to cluster data points, it cannot find the latent variables corresponding to the observed data. On the other hand, the principle of K-means is simple and easy to implement in simulation, and thanks to its fast convergence and good clustering effect, it can serve as a practical data initializer for other, more complex algorithms.
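For reference, a compact NumPy version of Algorithm 2; the cluster count and the random BS coordinates in the example are illustrative only.

```python
import numpy as np

def kmeans(X, K, iters=100, seed=0):
    """Plain K-means (Algorithm 2): returns point labels and centroids."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), K, replace=False)]  # init centroids from data
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - mu[None], axis=2), axis=1)
        new_mu = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                           else mu[k] for k in range(K)])
        if np.allclose(new_mu, mu):  # centroids no longer change
            break
        mu = new_mu
    return labels, mu

# Example: cluster 50 random BS coordinates into 3 aerial cells
X = np.random.default_rng(1).random((50, 2))
labels, mu = kmeans(X, K=3)
print(mu)
```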
#### II-E2 The EM Algorithm

The EM algorithm recognizes the important role of latent variables in a joint distribution. Its goal is to acquire maximum likelihood solutions for models with latent variables [38]. In a mixture model, given a sample data set $\bm{X}=\\{\bm{x}_{n}\\}^{N}_{n=1}$ with an unknown latent variable set $\bm{Z}=\\{\bm{z}_{n}\\}^{N}_{n=1}$, we want to find a suitable parameter set $\bm{\theta}$ that well describes the joint distribution $p(\bm{X}|\bm{\theta})=\sum_{\bm{Z}}p(\bm{X},\bm{Z}|\bm{\theta})$ [38]. However, the observed variable set $\bm{X}$ and the latent variable set $\bm{Z}$ are determined by the parameter set $\bm{\theta}$, and we are only given the incomplete data set $\bm{X}$ without $\bm{Z}$, so we cannot directly obtain the optimal parameter set $\bm{\theta}$. To facilitate the analysis, a log likelihood function is defined as: ${\cal{L}}(\bm{\theta})=\ln{\\{p(\bm{X}|\bm{\theta})\\}}=\ln{\\{\sum_{\bm{Z}}p(\bm{X},\bm{Z}|\bm{\theta})\\}}$ (25) and a posterior distribution $p(\bm{Z}|\bm{X},\bm{\theta})$ over the latent variable set $\bm{Z}$ is introduced. Thus, the goal becomes maximizing the likelihood function $p(\bm{X}|\bm{\theta})$ with respect to $\bm{\theta}$. Each EM iteration consists of two steps: the Expectation step (E step) and the Maximization step (M step). In the E step, we evaluate the posterior probability $p(\bm{Z}|\bm{X},\bm{\theta})$; in the M step, we maximize the expected log likelihood to update the parameter set $\bm{\theta}$ [38]. The iteration stops when the log likelihood function converges. The specific algorithm is shown in Algorithm 3.

Algorithm 3 EM Algorithm [38]
0: The observed variable set $\bm{X}$
1: Initialize the parameter set $\bm{\theta}_{old}$
2: repeat
3: E step: Calculate the posterior probability $p(\bm{Z}|\bm{X},\bm{\theta}_{old})$
4: M step: Calculate $\bm{\theta}_{new}=\mathop{\arg\max}_{\bm{\theta}}\ \sum_{\bm{Z}}p(\bm{Z}|\bm{X},\bm{\theta}_{old})\ln{p(\bm{X},\bm{Z}|\bm{\theta})}$
5: Update the parameter set $\bm{\theta}$, $\bm{\theta}_{old}\leftarrow\bm{\theta}_{new}$
6: until The log likelihood function ${\cal{L}}(\bm{\theta})$ converges.
Output: the posterior probability $p(\bm{Z}|\bm{X},\bm{\theta}_{old})$, the parameter set $\bm{\theta}_{old}$

The EM algorithm also performs well when some observed variable values are missing: the observed variable distribution can be acquired by marginalizing over the missing values in the joint variable distribution. In this case, sensor data with some values missing can still be processed well. Therefore, in the scenario of ground user clustering, the EM algorithm is a useful method to find the latent variables in the data set and classify users in a fair manner.

#### II-E3 The KEG Algorithm

In this paper, the cellular traffic distribution is complex and time-varying, but a GMM, a linear superposition of Gaussian components, has the remarkable ability to represent rich data distributions. We model the cellular traffic distribution by the GMM as: $p(\bm{X})=\sum_{k=1}^{K}\pi_{k}{\cal{N}}(\bm{X}|\bm{\mu}_{k},\bm{\sigma}_{k})$ (26) where $\bm{X}=\\{\bm{x}_{n}\\}^{N}_{n=1}$ is the topology information of the whole area and $\bm{x}_{n}$ is a data point with $M$ dimensions. $p(.)$ denotes a probability function, $K$ is the number of Gaussian components and $k\in\\{1,...,K\\}$ indexes a specific component; the mixing coefficients $\pi_{k}\in[0,1]$ satisfy $\sum_{k=1}^{K}\pi_{k}=1$, $\bm{\mu}=\\{\bm{\mu}_{k}\\}^{K}_{k=1}$ are the $M$-dimensional mean values corresponding to the cluster centroids, and $\bm{\sigma}=\\{\bm{\sigma}_{k}\\}^{K}_{k=1}$ are the covariances. Besides, in a GMM the latent variables are discrete. To this end, we introduce the KEG algorithm based on the K-means algorithm and the EM algorithm. In the KEG algorithm, the EM part aims to find the discrete latent variables and the suitable parameters for analyzing the data distribution and clustering the data set. Even if a data set is incomplete with some values missing, the EM part can still process it in a suitable manner. In the EM part, the value of the log likelihood function increases as the number of iterations rises; when the log likelihood function no longer changes, the current parameters are the ones we want. But since this algorithm usually needs many iterations to converge, the K-means part is utilized as a data initializer to provide appropriate and rational initial values.
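Before stating the full algorithm, the following sketch implements the KEG loop (E step, cluster assignment, M step, log-likelihood check), reusing the `kmeans` helper from the previous sketch as the initializer. The small diagonal term added to each covariance is a standard numerical-stability trick and is not part of the algorithm description.

```python
import numpy as np
from scipy.stats import multivariate_normal

def keg(X, K, iters=200, tol=1e-6, seed=0):
    """KEG sketch: K-means initialization followed by GMM-EM (Algorithm 4)."""
    _, mu = kmeans(X, K, seed=seed)              # K-means part as initializer
    D = X.shape[1]
    sigma = np.stack([np.eye(D)] * K)            # identity covariances
    pi = np.full(K, 1.0 / K)                     # uniform mixing coefficients
    ll_old = -np.inf
    for _ in range(iters):
        # E step: responsibilities gamma_nk (step 4)
        dens = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], sigma[k])
                         for k in range(K)], axis=1)
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M step: parameter updates (step 6)
        Nk = gamma.sum(axis=0)
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            Xc = X - mu[k]
            sigma[k] = (gamma[:, k, None] * Xc).T @ Xc / Nk[k] + 1e-6 * np.eye(D)
        pi = Nk / Nk.sum()
        ll = np.log(dens.sum(axis=1)).sum()      # log likelihood (step 8)
        if ll - ll_old < tol:                    # convergence check (step 10)
            break
        ll_old = ll
    return gamma.argmax(axis=1), mu, sigma, pi   # cluster labels l_n (step 5)
```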
The integrated KEG algorithm is demonstrated in Algorithm 4.

Algorithm 4 KEG algorithm
0: The topology data set $\bm{X}=\\{\bm{x}_{n}\\}^{N}_{n=1}$, the clustering number $K$
1: Set a variable $D$ as the dimension of $\bm{x}_{n}$; initialize the means $\bm{\mu}_{k}$ as $1$-by-$D$ matrices using the K-means algorithm, the covariances $\bm{\sigma}_{k}$ as $D$-by-$D$ identity matrices and the mixing coefficients $\pi_{k}=1/K$, $\forall k\in\\{1,...,K\\}$.
2: repeat
3: for all $k\in\\{1,...,K\\}$ do
4: E step: Compute the posterior probability of all $\bm{x}_{n}$ by $\gamma_{nk}=\pi_{k}{\cal{N}}(\bm{x}_{n}|\bm{\mu}_{k},\bm{\sigma}_{k})/\sum_{i=1}^{K}\pi_{i}{\cal{N}}(\bm{x}_{n}|\bm{\mu}_{i},\bm{\sigma}_{i})$
5: For every $\bm{x}_{n}$, allocate it to the cluster $l_{n}=\mathop{\arg\max}_{k}\gamma_{nk}$
6: M step: Calculate the new parameters $\bm{\mu}_{k}^{new}=\sum_{n=1}^{N}\gamma_{nk}\bm{x}_{n}/\sum_{n=1}^{N}\gamma_{nk}$, $\bm{\sigma}_{k}^{new}=\sum_{n=1}^{N}\gamma_{nk}(\bm{x}_{n}-\bm{\mu}_{k}^{new})(\bm{x}_{n}-\bm{\mu}_{k}^{new})^{T}/\sum_{n=1}^{N}\gamma_{nk}$, $\pi_{k}^{new}=\sum_{n=1}^{N}\gamma_{nk}/\sum_{k=1}^{K}\sum_{n=1}^{N}\gamma_{nk}$
7: end for
8: Calculate the log likelihood using $\bm{\mu}_{k}^{new}$, $\bm{\sigma}_{k}^{new}$ and $\pi_{k}^{new}$, for $k\in\\{1,...,K\\}$: ${\cal{L}}(\bm{\mu},\bm{\sigma},\bm{\pi})=\ln{p(\bm{X}|\bm{\mu},\bm{\sigma},\bm{\pi})}=\sum_{n=1}^{N}\ln{\\{\sum_{k=1}^{K}\pi_{k}{\cal{N}}(\bm{x}_{n}|\bm{\mu}_{k},\bm{\sigma}_{k})\\}}$
9: Update the parameters as $\bm{\mu}_{k}\leftarrow\bm{\mu}_{k}^{new}$, $\bm{\sigma}_{k}\leftarrow\bm{\sigma}_{k}^{new}$, $\pi_{k}\leftarrow\pi_{k}^{new}$
10: until The log likelihood function ${\cal{L}}(\bm{\mu},\bm{\sigma},\bm{\pi})$ has converged.
Output: the parameters $\\{\pi_{k}$, $\bm{\mu}_{k}$, $\bm{\sigma}_{k}\\}_{k=1}^{K}$, the cluster labels $\\{l_{n}\\}_{n=1}^{N}$.

When the parameters are obtained, the predicted cellular traffic amount data is used as input; only the $4^{th}$ and $5^{th}$ steps of Algorithm 4 are executed, and we obtain the cluster labels $\\{l_{n}\\}_{n=1}^{N}$ of the data points $\\{\bm{x}_{n}\\}^{N}_{n=1}$, indicating which cluster each data point belongs to.

### II-F UAV Location Optimization

After determining the aerial cells using the KEG algorithm, the next aim is to select an optimal location for every UAV so that the minimum transmit power can be obtained. Whether the UAVs operate on a high altitude platform or a low altitude platform, we assume that all UAVs fly at the same altitude $h$, and we can formulate this problem as: $\min_{x_{i},y_{i}}\ \ P_{i}=Q\iint_{C_{i}}A_{i}(x,y)d_{i}^{2}(x,y)r_{i}(x,y)dxdy$ (27) where $Q=(\frac{4\pi f_{c}}{c})^{2}\frac{Wn_{0}}{G}$ does not depend on the BS locations $(x,y)$ in the service area, $A_{i}(x,y)=2^{\frac{D^{t}(x,y)}{TW}}-1$ captures the BS traffic distribution, where $D^{t}(x,y)$ denotes the cellular data amount of the BS at $(x,y)$ in an hour of the predicted day, $d_{i}^{2}(x,y)$ is the squared distance between UAV $i$ and the BS at $(x,y)$, and $r_{i}(x,y)$ is also related to $d_{i}^{2}(x,y)$ through the LoS link probability. According to Theorem 1 of [17], the function yielding $P_{i}^{min}$ is convex, and the optimal UAV locations can be calculated as: $x_{i}^{*}=\frac{\iint_{C_{i}}xA_{i}(x,y)dxdy}{\iint_{C_{i}}A_{i}(x,y)dxdy}$ (28) $y_{i}^{*}=\frac{\iint_{C_{i}}yA_{i}(x,y)dxdy}{\iint_{C_{i}}A_{i}(x,y)dxdy}$ (29) where the condition $h_{i}^{2}\gg(x-x_{i})^{2}+(y-y_{i})^{2}$ or $h_{i}^{2}\ll(x-x_{i})^{2}+(y-y_{i})^{2}$ must be satisfied [17]. At last, we accumulate the transmit power of all operating UAVs to get the minimum total power for transmission: $P_{min}=\sum_{i}P_{i}^{min}$ (30)
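Since the traffic is concentrated at discrete BS locations, the integrals in (28)-(29) reduce to weighted sums. A minimal sketch follows; the BS coordinates and demand numbers are invented for illustration.

```python
import numpy as np

def uav_position(bs_xy, traffic, T=3600.0, W=1e6):
    """Optimal UAV position for one aerial cell, Eqs. (28)-(29).

    The integrals are evaluated as sums over the discrete BS locations,
    weighted by A_i(x, y) = 2**(D_t / (T * W)) - 1.
    """
    A = 2.0 ** (np.asarray(traffic, float) / (T * W)) - 1.0
    bs_xy = np.asarray(bs_xy, float)
    return (A[:, None] * bs_xy).sum(axis=0) / A.sum()

# Example: one cluster of 4 BSs with hourly traffic demands in bits
bs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
demand = [2e9, 1e9, 1e9, 4e9]
print(uav_position(bs, demand))
```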
## III Simulation and Analysis

### III-A Illustration of Simulation Process

For the simulation, we consider two scenarios: one is the entire area given by the raw data, and the other is a limited area with relative longitude from $111.055$ to $111.07$ degrees and relative latitude from $13.03$ to $13.05$ degrees. The limited area serves as a contrast to the entire area. For a specific comparison, we classify both the entire area and the limited area into $8$ clusters. The specific parameter values used in the simulation are shown in Table I [39]. Moreover, to obtain rational simulation results, we assume that all BSs have a basic cellular traffic amount of $500$ bytes for basic operation, so $500$ is added to all cellular amounts obtained from the raw data.

TABLE I: Simulation Parameters

Symbol | Description | Value
---|---|---
$f_{c}$ | Carrier frequency | 5 GHz
$n_{0}$ | Noise power spectral density | -140 dBm/Hz
$\mu_{LoS}$ | Excessive path loss for LoS link | 3 dB
$\mu_{NLoS}$ | Excessive path loss for NLoS link | 23 dB
$W$ | Bandwidth | 1 MHz
$h$ | UAV’s altitude | 200 m
$G$ | Antenna gain | 10 dB

For the BP neural network part, we use the _nftool_ toolbox built into MATLAB to implement the cellular traffic amount prediction. We assume that every BS has the same cellular traffic distribution, and set the input to be the cellular traffic amounts of five consecutive days and the output to be the amount of the following day. The training data take the form $(\\{\bm{D}^{t}_{d}\\}_{d=1}^{5},\bm{D}^{t}_{d=6})$, where the input layer is the cellular amount of the first $5$ days and the output layer is the cellular amount of the $6^{th}$ day. We choose _trainrp_ as the training function and give the neural network 2 hidden layers with $20$ and $10$ neurons, respectively; the network structure is shown in Figure 6. We then train on the data to find suitable parameters for our neural network model. The user interface of the neural network training is shown in Figure 7.

Figure 6: Neural Network Structure

Figure 7: Neural Network Training

After training, the input layer $\bm{x}$ is set to $\\{\bm{D}^{t}_{d}\\}_{d=2}^{6}$, and the resulting output layer is the prediction we want to obtain. Furthermore, the ratio of training data is $80\%$, validation data $10\%$ and testing data $10\%$. The training data are used to update the weights, the validation data to detect and avoid overfitting as early as possible, and the testing data to examine the performance of the neural network. In addition, all cellular data amounts are normalized using the min-max normalization method so that every value lies between $0$ and $1$. Then, for the application of the KEG algorithm, we give two stopping conditions for the iterations: the first is a minimum threshold on the parameter error between two neighboring iterations, and the second is a maximum number of iterations. If either condition is met, the algorithm ends.

### III-B The Comparison of Three Multi-access Techniques

In the proposed UAV deployment framework, RSMA is adopted in the uplink transmission as an excellent multiple access option. FDMA and TDMA are mature technologies, but RSMA, with its excellent robustness and energy efficiency, enjoys great popularity in the new generation of communications. Figure 8 compares the performance of RSMA, FDMA and TDMA. For all three multi-access techniques, the total power decreases as the bandwidth increases. RSMA is the best, with the lowest power consumption at all bandwidths; FDMA is second best and better than TDMA. RSMA can reduce the power by 35.3% compared with FDMA and by 66.4% compared with TDMA. Therefore, in this paper, we use RSMA for uplink transmission.
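For intuition on this comparison, the sketch below computes the minimum total transmit power that satisfies the rate constraints (6)-(8) for a two-user example. All rates, channel gains and the noise density are invented illustrative numbers, and for TDMA the peak powers of the two slots are summed, which is one of several possible conventions.

```python
import numpy as np

W, N0 = 1e6, 1e-17  # bandwidth (Hz) and noise density (W/Hz), placeholders

def rsma_power(R1, R2, g1, g2):
    """Minimum total power meeting the RSMA rate region of Eq. (6)."""
    c1 = (2 ** (R1 / W) - 1) * W * N0
    c2 = (2 ** (R2 / W) - 1) * W * N0
    cs = (2 ** ((R1 + R2) / W) - 1) * W * N0
    # Load the extra sum-rate requirement onto the stronger channel.
    if g1 >= g2:
        return (cs - c2) / g1 + c2 / g2
    return c1 / g1 + (cs - c1) / g2

def fdma_power(R1, R2, g1, g2):
    """Minimum total power under FDMA, Eq. (7), via a grid search over b1."""
    b1 = np.linspace(0.01, 0.99, 99)
    P1 = (2 ** (R1 / (W * b1)) - 1) * W * N0 * b1 / g1
    P2 = (2 ** (R2 / (W * (1 - b1))) - 1) * W * N0 * (1 - b1) / g2
    return (P1 + P2).min()

def tdma_power(R1, R2, g1, g2):
    """Minimum total power under TDMA, Eq. (8), via a grid search over b1."""
    b1 = np.linspace(0.01, 0.99, 99)
    P1 = (2 ** (R1 / (W * b1)) - 1) * W * N0 / g1
    P2 = (2 ** (R2 / (W * (1 - b1))) - 1) * W * N0 / g2
    return (P1 + P2).min()

R1, R2, g1, g2 = 2e6, 1e6, 1e-8, 3e-9
for name, f in [("RSMA", rsma_power), ("FDMA", fdma_power), ("TDMA", tdma_power)]:
    print(name, f(R1, R2, g1, g2))
```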
Figure 8: Comparison of Three Multi-access Modes

### III-C The Simulation Results of the Limited Area and the Entire Area

In order to verify the usefulness and robustness of the KEG algorithm, we run the simulation on the data of both the limited area and the entire area. The comparison results of the K-means part are shown in Figure 9 and those of the EM part in Figures 10 and 11.

Figure 9: K-means for the Limited Area

Figure 10: EM for the Entire Area

Figure 11: EM for the Limited Area

In Figures 9, 10 and 11, the scattered points are the locations of the BSs. A group of points with the same color represents a cluster, which is the aerial cell of a UAV. The black crosses denote the centroids of the clusters. As we can see, for the K-means part, both the entire area and the limited area have distinct service cell boundaries and the centroids lie inside the clusters of the corresponding colors. For the EM part, however, clusters can contain one another, and the centroids may not appear inside the clusters of their own colors. The situation for the limited area is much better than that for the entire area.

### III-D The Effect of the Proposed UAV Scheduling Framework

The main goal of UAV deployment in this paper is to minimize the total transmit power in the uplink transmission. Based on this goal, we divide the area into ground user clusters for aerial cell classification and determine the optimal locations of the UAVs. Figure 12 shows the schematic sketch of the UAV deployment for the entire area and the limited area.

Figure 12: Deployment Results

As shown in Figure 12, the number of UAVs for the limited area is $8$, but for the entire area it changes to $7$, because the system with the proposed framework judges that $7$ UAVs are enough to carry the offloaded cellular data amount. This decision is made during the K-means clustering, because the initial centroid of the merged cluster is placed at an unfavorable position, leading to low intra-cluster similarity.

Figure 13: Power Comparison for the Limited Area

Figure 14: Power Comparison for the Entire Area

We then use the total transmit power of four different schemes to evaluate the performance of the proposed framework. The experimental schemes are: a scheme with KEG and location optimization, a scheme without KEG but with location optimization, a scheme with KEG but without location optimization, and a scheme with neither KEG nor location optimization. For the limited area, the power comparison of the four schemes is presented in Figure 13. In general, the scheme with KEG and location optimization consumes the least total transmit power, while the scheme with neither KEG nor location optimization performs worst. The remaining two schemes have similar performance overall. The scheme with KEG and location optimization reduces power consumption by 24% compared with the worst one. For the entire area, the four schemes are contrasted in Figure 14. The scheme without KEG but with location optimization has the best effect on system performance. Next, the scheme with KEG but without location optimization is better than the one with location optimization but without KEG. The scheme with neither KEG nor location optimization is the worst. The scheme with KEG and location optimization reduces power consumption by 0.47% compared with the worst one. Based on the above simulation results, the proposed framework is not suitable when only a few UAVs serve the entire area.
The number of UAVs is too small to carry the full traffic of an entire city. Still, even in this situation, the scheme with KEG and location optimization contributes to reducing the total transmit power consumption. Our UAV deployment framework performs well for relatively small areas below dozens of square kilometers, especially for areas whose cellular data amount distribution is approximately a GMM.

## IV Conclusion

In this paper, we have investigated UAV location optimization in an uplink system. To effectively optimize the UAV locations, we first predict the cellular traffic and cluster the ground users with a joint K-means and EM algorithm based on a GMM. With the predicted traffic distribution, the optimal locations for the UAVs are obtained accordingly. Simulation results show that RSMA reduces the total power consumption compared to FDMA and TDMA.

## References

* [1] W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems: Applications, trends, technologies, and open research problems,” _arXiv preprint arXiv:1902.10265_ , 2019.
* [2] J. Lyu, Y. Zeng, and R. Zhang, “Uav-aided offloading for cellular hotspot,” vol. 17, no. 6. IEEE, 2018, pp. 3988–4001.
* [3] M. Chen, H. V. Poor, W. Saad, and S. Cui, “Wireless communications for collaborative federated learning in the internet of things,” _arXiv preprint arXiv:2006.02499_ , 2020.
* [4] M. Mozaffari, W. Saad, M. Bennis, Y.-H. Nam, and M. Debbah, “A tutorial on uavs for wireless networks: Applications, challenges, and open problems.” IEEE, 2019.
* [5] M. Mamdouh, M. A. Elrukhsi, and A. Khattab, “Securing the internet of things and wireless sensor networks via machine learning: A survey,” in _2018 International Conference on Computer and Applications (ICCA)_. IEEE, 2018, pp. 215–218.
* [6] J. Zhang, X. Zhu, and Z. Zhou, “Design of time delayed control systems in uav using model based predictive algorithm,” in _2010 2nd International Asia Conference on Informatics in Control, Automation and Robotics (CAR 2010)_ , vol. 1. IEEE, 2010, pp. 269–272.
* [7] Z. Li, M. Chen, C. Pan, N. Huang, Z. Yang, and A. Nallanathan, “Joint trajectory and communication design for secure uav networks,” _IEEE Commun. Lett._ , vol. 23, no. 4, pp. 636–639, April 2019.
* [8] Z. Yang, C. Pan, M. Shikh-Bahaei, W. Xu, M. Chen, M. Elkashlan, and A. Nallanathan, “Joint altitude, beamwidth, location, and bandwidth optimization for UAV-enabled communications,” _IEEE Commun. Lett._ , vol. 22, no. 8, pp. 1716–1719, Aug. 2018.
* [9] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Artificial neural networks-based machine learning for wireless networks: A tutorial,” _IEEE Commun. Surveys Tut._ , vol. 21, no. 4, pp. 3039–3071, Fourthquarter 2019.
* [10] P. Dong, H. Zhang, G. Y. Li, I. S. Gaspar, and N. NaderiAlizadeh, “Deep cnn-based channel estimation for mmwave massive mimo systems,” _IEEE J. Sel. Topics Signal Process._ , vol. 13, no. 5, pp. 989–1000, 2019.
* [11] Y. Shi, K. Yang, T. Jiang, J. Zhang, and K. B. Letaief, “Communication-efficient edge ai: Algorithms and systems,” _arXiv preprint arXiv:2002.09668_ , 2020.
* [12] G. Jia, Z. Yang, H.-K. Lam, J. Shi, and M. Shikh-Bahaei, “Channel assignment in uplink wireless communication using machine learning approach,” _arXiv preprint arXiv:2001.03952_ , 2020.
* [13] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, “A joint learning and communications framework for federated learning over wireless networks,” _arXiv preprint arXiv:1909.07972_ , 2019.
* [14] Z. Yang, M. Chen, W. Saad, C. S.
Hong, and M. Shikh-Bahaei, “Energy efficient federated learning over wireless communication networks,” _arXiv preprint arXiv:1911.02417_ , 2019. * [15] Y. Wang and V. Friderikos, “Caching as an image characterization problem using deep convolutional neural networks,” _arXiv preprint arXiv:1907.07263_ , 2019\. * [16] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Efficient deployment of multiple unmanned aerial vehicles for optimal wireless coverage,” vol. 20, no. 8. IEEE, 2016, pp. 1647–1650. * [17] ——, “Optimal transport theory for power-efficient deployment of unmanned aerial vehicles,” in _2016 IEEE international conference on communications (ICC)_. IEEE, 2016, pp. 1–6. * [18] L. Wang and S. Zhou, “Energy-efficient uav deployment with flexible functional split selection,” in _2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)_. IEEE, 2018, pp. 1–5. * [19] Z. Wang, L. Duan, and R. Zhang, “Adaptive deployment for uav-aided communication networks,” _IEEE Transactions on Wireless Communications_ , vol. 18, no. 9, pp. 4531–4543, 2019. * [20] Q. Zhang, W. Saad, M. Bennis, X. Lu, M. Debbah, and W. Zuo, “Predictive deployment of uav base stations in wireless networks: Machine learning meets contract theory,” _arXiv preprint arXiv:1811.01149_ , 2018. * [21] Y. Wang, M. Chen, Z. Yang, T. Luo, and W. Saad, “Deep learning for optimal deployment of uavs with visible light communications,” _arXiv preprint arXiv:1912.00752_ , 2019. * [22] M. Chen, M. Mozaffari, W. Saad, C. Yin, M. Debbah, and C. S. Hong, “Caching in the sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-of-experience,” _IEEE J. Sel. Areas Commun._ , vol. 35, no. 5, pp. 1046–1061, May 2017. * [23] S. Q. Zhang, F. Xue, N. A. Himayat, S. Talwar, and H. Kung, “A machine learning assisted cell selection method for drones in cellular networks,” in _2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)_. IEEE, 2018, pp. 1–5. * [24] Q. Zhang, M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Machine learning for predictive on-demand deployment of uavs for wireless communications,” in _2018 IEEE Global Communications Conference (GLOBECOM)_. IEEE, 2018, pp. 1–6. * [25] A. Al-Hourani, S. Kandeepan, and S. Lardner, “Optimal lap altitude for maximum coverage,” _IEEE Wireless Communications Letters_ , vol. 3, no. 6, pp. 569–572, 2014. * [26] Z. Yang, M. Chen, W. Saad, W. Xu, and M. Shikh-Bahaei, “Sum-rate maximization of uplink rate splitting multiple access (RSMA) communication,” _arXiv preprint arXiv:1906.04092_ , 2019. * [27] Z. Yang, M. Chen, W. Saad, and M. Shikh-Bahaei, “Optimization of rate allocation and power control for rate splitting multiple access (RSMA),” _arXiv preprint arXiv:1903.08068_ , 2019. * [28] B. Clerckx, H. Joudeh, C. Hao, M. Dai, and B. Rassouli, “Rate splitting for MIMO wireless networks: A promising PHY-layer strategy for LTE evolution,” _IEEE Commun. Mag._ , vol. 54, no. 5, pp. 98–105, May 2016. * [29] Y. Mao, B. Clerckx, and V. O. K. Li, “Rate-splitting multiple access for downlink communication systems: Bridging, generalizing, and outperforming SDMA and NOMA,” _EURASIP J. Wireless Commun. Network._ , vol. 2018, no. 1, pp. 1–54, May 2018. * [30] ——, “Rate-splitting for multi-user multi-antenna wireless information and power transfer,” _arXiv preprint arXiv:1902.07851_ , 2019. * [31] Y. Mao, B. Clerckx, and V. O. K. 
Li, “Rate-splitting for multi-antenna non-orthogonal unicast and multicast transmission: Spectral and energy efficiency analysis,” _IEEE Trans. Commun._ , to appear, 2019. * [32] ——, “Energy efficiency of rate-splitting multiple access, and performance benefits over SDMA and NOMA,” in _Proc. IEEE Int. Symp. Wireless Commun. Sys._ , Lisbon, Portugal, Aug. 2018, pp. 1–5. * [33] M. Dai and B. Clerckx, “Multiuser millimeter wave beamforming strategies with quantized and statistical CSIT,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 11, pp. 7025–7038, Nov. 2017. * [34] B. Clerckx, Y. Mao, R. Schober, and H. V. Poor, “Rate-splitting unifying SDMA, OMA, NOMA, and multicasting in MISO broadcast channel: A simple two-user rate analysis,” _arXiv preprint arXiv:1906.04474_ , 2019\. * [35] X. Chen, Y. Jin, S. Qiang, W. Hu, and K. Jiang, “Analyzing and modeling spatio-temporal dependence of cellular traffic at city scale,” in _2015 IEEE International Conference on Communications (ICC)_. IEEE, 2015, pp. 3585–3591. * [36] U. Paul, A. P. Subramanian, M. M. Buddhikot, and S. R. Das, “Understanding traffic dynamics in cellular data networks,” in _2011 Proceedings IEEE INFOCOM_. IEEE, 2011, pp. 882–890. * [37] Z. Zhou, “Machine learning,” in _Bioinformatics (Oxford, England)_ , 2010\. * [38] C. M. Bishop, “Pattern recognition and machine learning.” springer, 2006. * [39] Z. Yang, M. Chen, W. Saad, W. Xu, M. Shikh-Bahaei, H. V. Poor, and S. Cui, “Energy-efficient wireless communications with distributed reconfigurable intelligent surfaces,” 2020.
# Metric-Based Imitation Learning Between Two Dissimilar Anthropomorphic Robotic Arms

Marcus Ebner von Eschenbach1,2, Binyamin Manela1, Jan Peters2, Armin Biess1. *This work was supported in part by the Helmsley Charitable Trust through the Agricultural, Biological and Cognitive Robotics Initiative and the Israel Science Foundation (grant no. 1627/17). 1Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Be’er Sheva 84105, Israel. 2Intelligent Autonomous Systems Group, Technical University Darmstadt, 64289 Darmstadt, Germany.

###### Abstract

The development of autonomous robotic systems that can learn from human demonstrations to imitate a desired behavior - rather than being manually programmed - has huge technological potential. One major challenge in imitation learning is the correspondence problem: how to establish corresponding states and actions between the expert and learner, when the embodiments of the agents are different (morphology, dynamics, degrees of freedom, etc.). Many existing approaches in imitation learning circumvent the correspondence problem, for example, kinesthetic teaching or teleoperation, which are performed on the robot. In this work we explicitly address the correspondence problem by introducing a distance measure between dissimilar embodiments. This measure is then used as a loss function for static pose imitation and as a feedback signal within a model-free deep reinforcement learning framework for dynamic movement imitation between two anthropomorphic robotic arms in simulation. We find that the measure is well suited for describing the similarity between embodiments and for learning imitation policies by distance minimization.

## I INTRODUCTION

(a) (b) Figure 1: Static pose imitation between dissimilar anthropomorphic robotic arms using a 7-DOF-Panda expert. (a) 4-DOF-Panda learner; (b) 3-DOF-Panda learner. Dissimilar robots are generated by locking DOFs (expert is shown on the left, the learner on the right, locked joints in red).

Approaches to imitation learning in robotics have delivered huge successes ranging from helicopter acrobatics [1], high-speed arm skills [2], haptic control [3, 4], gestures [5], manipulation [6, 7, 8] to legged locomotion [9, 8]. The machine learning algorithms that make imitation learning possible are well studied and have recently been summarized [10]. Surprisingly, despite all of these impressive successes in the acquisition of new robot motor skills, fundamental research questions of central importance in imitation learning have remained open for decades. Among such core questions is the correspondence problem: how can one agent (the learner or imitator) produce a behavior that is similar - in some aspect - to behavior it perceives in another agent (the expert or demonstrator), given that the two agents obey different kinematics and dynamics (body morphology, degrees of freedom (DOFs), constraints, joints and actuators, torque limits), i.e., occupy different state spaces [11]? Existing algorithmic approaches to imitation learning can be divided into two groups: behavioral cloning (BC) and inverse reinforcement learning (IRL), or inverse optimal control (IOC); both can be further subdivided into model-based and model-free approaches depending on whether the system dynamics is available or not [10]. BC and IRL make different assumptions about the correspondence of learner and expert.
In BC, a mapping from states to actions is generated from the demonstrations using supervised learning methods. This mapping can then be used by the learner to reproduce similar behavior, provided that the embodiments of expert and learner are alike; otherwise the method will fail due to lack of correspondence. Successful implementations of model-based BC algorithms have been obtained for a hitting-a-ball task with an underactuated robot [12], playing video games [13] and controlling a UAV through a real forest [14]. Model-free BC algorithms have been implemented for autonomous RC helicopter flight using expert demonstrations [1] and for learning tasks such as tennis swings [15], ball-paddling [2], human-robot collaborative motions in tool-handover tasks [16], autonomous knot-tying using a surgical robot [17] and character animation [18]. In an IRL framework, the learner infers a reward function for a given task from expert demonstrations of the task. The underlying assumption is that the reward function is a parsimonious and portable representation of the task, which can be transferred and generalized to agents with different embodiments. Thus, IRL implicitly resolves the correspondence problem but has the disadvantage of being computationally expensive and requiring reinforcement learning in an inner loop. IRL has been implemented mostly in a model-based approach for tasks such as learning to drive a car in a simulator [19] and path planning [20, 21, 22]. A few model-free IRL algorithms have been proposed and used to learn policies of robot motion [4, 23], [24]. The correspondence problem results in the following question: what action sequence in the learner is required to produce behavior that is similar to the expert, given that learner and expert have different embodiments and given that a measure of similarity $d$ is defined. If we denote the state-action pairs of the expert and learner as $(\bm{s}_{t},\bm{a}_{t})$ and $(\bm{\hat{s}}_{t},\bm{\hat{a}}_{t})$, respectively, with $t=1,2,\dots T$, then we can formulate the correspondence problem in its simplest form as follows: for a given set of demonstrations ${\cal{D}}=\\{\bm{s}_{1},\bm{a}_{1},\dots,\bm{a}_{T-1},\bm{s}_{T}\\}_{i=1}^{N}$ find actions $\bm{\hat{a}}_{t},t=1,2,\dots T$, so that $\bm{\hat{s}}_{t}$ is similar to $\bm{s}_{t}$ for all $t=1,2,\dots T$, where similarity (negative loss) is defined as $\sum_{t=1}^{T}d(\bm{s}_{t},\bm{\hat{s}}_{t})~{}\rightarrow~{}\mbox{min}$. Note that the states depend on the actions via the system dynamics; thus, $\bm{{s}}_{t+1}=\bm{f}(\bm{{s}}_{t},\bm{{a}}_{t})$ and $\hat{\bm{s}}_{t+1}=\hat{\bm{f}}(\hat{\bm{s}}_{t},\hat{\bm{a}}_{t})$, where $\bm{f}$ and $\hat{\bm{f}}$ describe the system dynamics of expert and learner, respectively.
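To make the objective concrete, here is a minimal sketch of the trajectory-level loss $\sum_{t=1}^{T}d(\bm{s}_{t},\bm{\hat{s}}_{t})$ with a placeholder Euclidean distance; the measure actually used in this work, defined in Sec. III, operates on frames rather than raw state vectors.

```python
import numpy as np

def imitation_loss(expert_states, learner_states, d=None):
    """Total dissimilarity sum_t d(s_t, s_hat_t) of two state trajectories."""
    if d is None:
        d = lambda s, s_hat: np.linalg.norm(s - s_hat)  # Euclidean placeholder
    return sum(d(s, s_hat) for s, s_hat in zip(expert_states, learner_states))

# Two toy trajectories of T = 3 two-dimensional states
expert = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.3]])
learner = np.array([[0.0, 0.1], [0.4, 0.2], [0.9, 0.4]])
print(imitation_loss(expert, learner))
```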
model-based approaches. (3) The demonstrations are often given in the form ${\cal{D}}=\\{\bm{s}_{1},\dots,\bm{s}_{T}\\}$, i.e., the actions of the expert are not available. (4) The states of the expert may be only partially observable, for instance, if the environment is observed by cameras. The states must then be inferred from observations $\bm{o}_{t}$ only. In this work we study imitation tasks between two dissimilar anthropomorphic robot arms, which are generated by locking degrees of freedom (DOFs) in the learner. Throughout this work we assume that the dynamics of the learner are not available to the learning agent; thus, we are in a model-free setting. We first introduce our definition of an embodiment state and provide a distance measure to assess the similarity between embodiments. This distance measure is then used to imitate static poses using neural networks (Fig. 1) and as a feedback signal for movement imitation using reinforcement learning. ## II RELATED WORK Metric approaches to the correspondence problem in imitation have been developed in a series of studies [25, 26, 27, 28, 29, 30]. In these studies, the correspondence problem was formulated in state-action space with separate metrics for states and actions. Simple global metrics based on the Hamming norm, Euclidean distance and infinity norm ($L_{p}$-norms) were used to measure the similarity between expert and learner. Another approach to the correspondence problem is to explicitly learn the forward dynamics of the learner. The actions of the learner are then adapted to the given demonstrations by using the learned forward dynamics. Within this framework, Englert et al. [12] have used the Kullback-Leibler divergence as similarity measure to compare trajectory distributions from an expert and a robot learner. Similarly, Grimes et al. [31, 32] have used Gaussian Mixture Models to learn a forward model and infer optimal actions of the learner. The model has been used to transfer human motions to a humanoid robot. ## III METHODS ### III-A Definition of an embodiment In this study an embodiment consists of a chain of links, which are represented by frames that are attached to each link. Frames are commonly used in robotics to describe the orientation and translation (pose) of rigid bodies with respect to a laboratory frame. Frames are elements of the special Euclidean group $SE(3)$, which is a non-Euclidean manifold and group (Lie group). Frames can be represented as homogeneous matrices defined as $\displaystyle\bm{T}=\begin{bmatrix}\bm{R}&\bm{p}\\\ \bm{0}^{T}&1\end{bmatrix}\,,$ (1) where $\bm{R}\in SO(3)$ is a rotation matrix ($\bm{R}\bm{R}^{T}=\bm{R}^{T}\bm{R}=\bm{I},\det\bm{R}=1$) and $\bm{p}\in\mathbb{R}^{3}$ is a column vector, describing the orientation and translation of a frame, respectively, with respect to a reference frame. For simplicity we write $\bm{T}=[\bm{R},\bm{p}]$. The inverse is then defined as $\bm{T}^{-1}=[\bm{R}^{T},-\bm{R}^{T}\bm{p}]$. The configuration space of an embodiment consisting of $n$ links with attached frames can be described by an element of the direct product space $SE(3)^{n}=SE(3)\times SE(3)\times\dots\times SE(3)$ ($n$ copies). The velocity of a frame is described by a twist, which encodes the rotational and translational velocity of the rigid body. A twist is an element of the Lie-algebra $se(3)$, which defines a vector space.
The velocity of the embodiment consisting of $n$ frames is described by an element of the direct product space $se(3)^{n}=se(3)\times se(3)\times\dots\times se(3)$ ($n$ copies). A twist can be represented by $4\times 4$ matrices of the form $\displaystyle{\cal{\bm{V}}}^{b}=\bm{T}^{-1}\dot{\bm{T}}=\begin{bmatrix}[{\bm{\omega}}^{b}]&\bm{v}^{b}\\\ \bm{0}^{T}&0\end{bmatrix}$ (2) or $\displaystyle{\cal{\bm{V}}}^{s}=\dot{\bm{T}}\bm{T}^{-1}=\begin{bmatrix}[{\bm{\omega}}^{s}]&\bm{v}^{s}\\\ \bm{0}^{T}&0\end{bmatrix}\;,$ (3) where (2) defines the body twist and (3) the spatial twist. The notation $[\cdot]$ denotes a skew symmetric $3\times 3$ matrix composed of the components of the angular velocity $\bm{\omega}=[\omega_{1},\omega_{2},\omega_{3}]^{T}$, that is $\displaystyle[\bm{\omega}]=-[\bm{\omega}]^{T}=\begin{bmatrix}0&-\omega_{3}&\omega_{2}\\\ \omega_{3}&0&-\omega_{1}\\\ -\omega_{2}&\omega_{1}&0\end{bmatrix}\,.$ (4) Specifically, $[\bm{\omega}^{b}]=\bm{R}^{T}\dot{\bm{R}}\in\mathbb{R}^{3\times 3}$ and $\bm{v}^{b}=\bm{R}^{T}\dot{\bm{p}}\in\mathbb{R}^{3}$ define the angular velocity and translational velocity of the origin with respect to the base frame, respectively, both expressed in coordinates of the body frame. A similar but less intuitive physical interpretation can be given to the spatial twist. For simplicity we write ${\cal{\bm{V}}}=[\bm{\omega},\bm{v}]$. The joint of the first link is always attached to the origin of the base frame, which serves as a reference frame in which all comparisons will be performed. Each joint rotates around one axis. The forward kinematic map is a map from joint angles ${\bm{q}=[q_{1},q_{2},\dots,q_{n}]^{T}}$ to frames $\bm{q}\rightarrow\bm{T}(\bm{q})$, where $n$ denotes the number of DOFs. For simplicity we first consider a planar manipulator with $n$ DOFs. We assume that all links have cylindrical shape and constant mass density. We attach a frame to each link $i$ with its origin at a distance of $r_{i}$ from joint $i-1$, $i=1,\dots,n,$ and with the $x$-axis that is pointing along the link direction. The transformation from link frame $i$ to the base frame $0$ can then be described by a product of matrix exponentials $\displaystyle\bm{T}_{0i}(q_{1},q_{2},\dots,q_{i})=e^{q_{1}{{\bm{S}}}_{1}}e^{q_{2}{{\bm{S}}}_{2}}\dots e^{q_{i}{{\bm{S}}}_{i}}\bm{M}_{i}\,,$ (5) where $\displaystyle\bm{M}_{i}=\begin{bmatrix}1&0&0&r_{i}\\\ 0&1&0&0\\\ 0&0&1&0\\\ 0&0&0&1\end{bmatrix}\,,\;\;{{\bm{S}}}_{i}=\begin{bmatrix}[{\bm{n}}_{i}]&-\bm{n}_{i}\times\bm{q}_{i}\cr 0&0\cr\end{bmatrix},$ (6) for $i=1,\dots,n$. The homogeneous matrix $\bm{M}_{i}$ describes a constant shift of the frame $i$ by $r_{i}$ along the $x$-axis. The screw $S_{i}\in se(3)$ is a $4\times 4$ matrix and describes the rotation axis of the revolute joint $i$. Here $\bm{n}_{i}$ denotes a unit vector in the direction of the joint axis $i$, $\bm{q}_{i}$ is a vector from the base to any point on the joint axis $i$ (both expressed in coordinates with respect to the base frame) and $\times$ denotes the vector cross product. Common choices for $r_{i}$ are $r_{i}=l_{i}/2$, which corresponds to attaching frames to the center of mass (COM) of each link (our choice) and $r_{i}=l_{i}$, which corresponds to attaching frames to the end of each link. Joint angles and joint velocities ${\dot{\bm{q}}=[\dot{q}_{1},\dot{q}_{2},\dots,\dot{q}_{n}]^{T}}$ determine the twists, thus $(\bm{q},\dot{\bm{q}})\rightarrow{\cal{\bm{V}}}(\bm{q},\dot{\bm{q}})$. 
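To make (5) and (6) concrete, the COM frames of a planar chain can be computed with a short numpy/scipy sketch (this illustration and its function names are ours, not the paper's code; we take $\bm{M}_{i}$ as the home pose of frame $i$, i.e., a shift by $\sum_{j<i}l_{j}+r_{i}$ along the $x$-axis, with $r_{i}=l_{i}/2$):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Skew-symmetric matrix [w] of eq. (4)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def screw(n, p):
    """4x4 screw matrix S of eq. (6): revolute joint with unit axis n through point p."""
    S = np.zeros((4, 4))
    S[:3, :3] = skew(n)
    S[:3, 3] = -np.cross(n, p)
    return S

def planar_fk(q, lengths):
    """COM frames T_0i of eq. (5) for a planar chain with r_i = l_i / 2."""
    z = np.array([0.0, 0.0, 1.0])                    # all joint axes point along z
    frames, prod, joint_x = [], np.eye(4), 0.0
    for qi, li in zip(q, lengths):
        S = screw(z, np.array([joint_x, 0.0, 0.0]))  # joint i sits at x = sum of previous links
        prod = prod @ expm(qi * S)                   # running product e^{q1 S1} ... e^{qi Si}
        M = np.eye(4)
        M[0, 3] = joint_x + li / 2.0                 # home pose of the frame attached to link i
        frames.append(prod @ M)
        joint_x += li
    return frames

frames = planar_fk([1.5, -1.5], [1.0, 1.0])          # a two-link chain
print(frames[1][:3, 3])                              # origin of the frame of link 2
```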
The (body) twists follow from (2) and (5) as ($i=1,\dots,n$) $\displaystyle{\cal{\bm{V}}}_{0i}=\bm{T}^{-1}_{0i}(q_{1},q_{2},\dots,q_{i})\cdot\dot{\bm{T}}_{0i}(q_{1},q_{2},\dots,q_{i})\,,$ (7) which can be determined recursively, leading to $\displaystyle{\cal{\bm{V}}}_{0i}=\mbox{Ad}_{[\bm{M}_{i}e^{\bm{S}_{i}q_{i}}]^{-1}}({\cal{\bm{V}}}_{0i-1})+\bm{S}_{i}\dot{q}_{i}\,,$ (8) where the adjoint map $\mbox{Ad}_{\bm{T}}({\cal{\bm{V}}})$ for frame $\bm{T}=[\bm{R},\bm{p}]$ and twist ${\cal{\bm{V}}}=[\bm{\omega},\bm{v}]$ is defined as $\mbox{Ad}_{\bm{T}}({\cal{\bm{V}}})=[\bm{R}\bm{\omega},\bm{p}\times\bm{R}\bm{\omega}+\bm{R}\bm{v}]\in se(3)$ and ${\cal{\bm{V}}}_{00}:=\bm{0}$ [33]. Note that from the (body) twists ${\cal{\bm{V}}}_{0i}=[\bm{\omega}_{0i},\bm{v}_{0i}]$ the angular velocity and translational velocity of frame $\bm{T}_{0i}=[\bm{R}_{0i},\bm{p}_{0i}]$ with respect to the base frame can be easily obtained by $\displaystyle\bm{\omega}_{0i}^{s}=\bm{R}_{0i}\bm{\omega}_{0i}\,,$ (9) $\displaystyle\dot{\bm{p}}_{0i}=\bm{R}_{0i}\bm{v}_{0i}\,.$ (10) For further details on frames and twists we refer to [34]. After these derivations we can define the state of an embodiment $\bm{s}$ as the state of all frames $\displaystyle\bm{s}=(\bm{s}_{1},\dots,\bm{s}_{i},\dots,\bm{s}_{n})=(\bm{T}_{01},{\cal{\bm{V}}}_{01},\dots\,,\bm{T}_{0i},{\cal{\bm{V}}}_{0i},\dots\,,\bm{T}_{0n},{\cal{\bm{V}}}_{0n})\,,$ (11) where we suppressed the arguments and $\bm{s}_{i}=(\bm{T}_{0i},{\cal{\bm{V}}}_{0i})\in\bm{{\cal{S}}}_{i}$ denotes the state of frame $i$. Note that via the forward kinematic map, which is assumed to be known in this work, an embodiment state is fully determined by its joint angles $\bm{q}$ and joint velocities $\dot{\bm{q}}$, i.e., $\bm{s}=\bm{s}(\bm{q},\dot{\bm{q}})$. A special case of (11) is obtained when ignoring rotational information. In this case, an embodiment can be described by the position and (translational) velocity of a set of candidate points, defined by the origin of each frame (by setting $\bm{R}=\bm{I}$ in (1), (2)). The embodiment state is then described by $\displaystyle\bm{s}=(\bm{x}_{1},\bm{v}_{1},\dots\,,\bm{x}_{i},\bm{v}_{i},\dots\,,\bm{x}_{n},\bm{v}_{n})\,,$ (12) where $\bm{s}_{i}=(\bm{x}_{i},\bm{v}_{i})$ denotes the state vector of candidate point $i$. The definition of an embodiment in terms of frames/twists and candidate points is generic and can be applied to any robot. A disadvantage of using frames is that they are not elements of a vector space, but define a non-Euclidean manifold. ### III-B Similarity between embodiments Figure 2: Pose imitation task between planar manipulators. (a) Demonstrator pose; (b, c) Learner poses. Learner (b) generates perfect imitation if similarity is defined by candidate points, e.g., end-effector position. However, for a similarity measure based on frames, the two embodiments do not resemble each other. Learner (c) – consisting only of two links – provides better imitation than learner (b) if similarity is measured between frames attached to each link.
Similarity of embodiments can be assessed by defining a distance measure $d$ between frames/twists or candidate points, that is $d(\bm{s}_{i},\hat{\bm{s}}_{j}):\bm{{\cal{S}}}_{i}\times\hat{\bm{{\cal{S}}}}_{j}\mapsto\mathbb{R}^{+}_{0}\,,$ where $\bm{s}_{i}\in\bm{{\cal{S}}}_{i}$ and $\hat{\bm{s}}_{j}\in\hat{\bm{{\cal{S}}}}_{j}$ come from different state spaces. We first consider the distance $\displaystyle d(\bm{T}_{i},\hat{\bm{T}}_{j})=\alpha_{tr}d_{tr}(\bm{T}_{i},\hat{\bm{T}}_{j})+\alpha_{rot}d_{rot}(\bm{T}_{i},\hat{\bm{T}}_{j})$ (13) between two frames, $\bm{T}_{i}=[\bm{R}_{i},\bm{p}_{i}]$ and $\hat{\bm{T}}_{j}=[\hat{\bm{R}}_{j},\hat{\bm{p}}_{j}]$. The distance consists of a translational and rotational part, which are weighted with factors $\alpha_{tr}$ and $\alpha_{rot}$. The weights can be either constants or functions of other variables. For the translational part we take the Euclidean distance between the two frame origins $\displaystyle d_{tr}(\bm{T}_{i},\hat{\bm{T}}_{j})=\lVert\bm{p}_{i}-\hat{\bm{p}}_{j}\rVert\,.$ (14) There are various ways to define the rotational distance between frames. We choose to take the angle between the unit vectors pointing along the $x$-axes of the frames, i.e., into the directions of the links. Thus, we define $\displaystyle\beta=\arccos(\bm{e}_{x}^{i}\cdot\hat{\bm{e}}_{x}^{j})$ (15) leading to values in the interval $[0,\pi]$. This definition results in numerical problems when performing gradient descent because the derivative of the $\arccos$-function is $\displaystyle\frac{d}{dx}\arccos x=-\frac{1}{\sqrt{1-x^{2}}}\,.$ (16) To avoid singularities, a modified rotational distance can be defined by shifting the negated $\cos\beta$ into the interval $[0,\pi]$ as $\displaystyle d_{rot}(\bm{T}_{i},\hat{\bm{T}}_{j})=\frac{\pi}{2}(1-\cos\beta)=\frac{\pi}{2}(1-\bm{e}_{x}^{i}\cdot\hat{\bm{e}}_{x}^{j})\,.$ (17) Note that the direction of the $x$-axis of frame $\bm{T}_{0i}=[\bm{R}_{0i},\bm{p}_{0i}]$ with respect to the laboratory frame can be easily extracted as the first column of the rotation matrix $\bm{R}_{0i}$. The distance measure introduced in (13), (14) and (17) includes only the static pose of the embodiment, and frames might be considered similar even though they move in different directions. To also include motion information in the distance measure, the twists of the frames need to be taken into consideration. For dynamic motion imitation we therefore augment the distance measure between two states $\bm{s}_{i}=(\bm{T}_{i},{\cal{\bm{V}}}_{i})\in\bm{{\cal{S}}}_{i}$ and $\hat{\bm{s}}_{j}=(\hat{\bm{T}}_{j},{\hat{\cal{\bm{V}}}}_{j})\in\hat{\bm{{\cal{S}}}}_{j}$ by including the translational and angular velocity (9) and (10), that is $\displaystyle d(\bm{s}_{i},\hat{\bm{s}}_{j})=\alpha_{tr}d_{tr}+\alpha_{rot}d_{rot}+\alpha_{v}d_{v}+\alpha_{\omega}d_{\omega}\,,$ (18) with $\displaystyle d_{v}=\lVert\dot{\bm{p}}_{i}-\dot{\hat{\bm{p}}}_{j}\rVert\;\;\;\;\mbox{and}\;\;\;\;d_{\omega}=\lVert\bm{\omega}^{s}_{i}-\hat{\bm{\omega}}^{s}_{j}\rVert\,.$ (19) Note that the distance measure between two states $\bm{s}_{i}$ and $\hat{\bm{s}}_{j}$ can be extended over all state spaces by defining the sum of all mutual distances $\displaystyle d(\bm{s},\hat{\bm{s}})=\sum_{i=1}^{n}\sum_{j=1}^{\hat{n}}d(\bm{s}_{i},\hat{\bm{s}}_{j})\,.$ (20) In the next section a weighted version of (20) will be introduced by incorporating link correspondences.
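The static measure (13), (14), (17) and its dynamic extension (18), (19) translate directly into code; the following minimal numpy sketch is our own illustration (variable names are not from the paper):

```python
import numpy as np

def frame_distance(T_i, T_j, a_tr=1.0, a_rot=1.0):
    """Static frame distance (13) = weighted translational (14) + rotational (17) parts."""
    d_tr = np.linalg.norm(T_i[:3, 3] - T_j[:3, 3])   # Euclidean distance of the origins
    ex_i, ex_j = T_i[:3, 0], T_j[:3, 0]              # x-axes: first columns of the rotations
    d_rot = 0.5 * np.pi * (1.0 - ex_i @ ex_j)        # singularity-free form of eq. (17)
    return a_tr * d_tr + a_rot * d_rot

def state_distance(s_i, s_j, a_tr, a_rot, a_v, a_om):
    """Dynamic distance (18); each state is a tuple (T, p_dot, omega_s), cf. (9), (10), (19)."""
    (T_i, pdot_i, om_i), (T_j, pdot_j, om_j) = s_i, s_j
    return (frame_distance(T_i, T_j, a_tr, a_rot)
            + a_v * np.linalg.norm(pdot_i - pdot_j)
            + a_om * np.linalg.norm(om_i - om_j))
```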
### III-C Link Correspondences To measure similarity between links of two embodiments, we first define how correspondence between links of different embodiments can be established. Embodiments may differ in the number and length of links, and thus a one-to-one assignment between links is often not possible. To establish correspondence between links of different embodiments with possibly different overall size, we first rescale each embodiment by the sum of its link lengths $L$, resulting in a chain length of $1$. To establish correspondence, we assign a weight to every possible link-pair combination. Thus, for two embodiments 1 and 2 with $n$ and $\hat{n}$ links, respectively, link correspondence can be represented by a correspondence matrix $\bm{W}\in\mathbb{R}^{n\times\hat{n}}$. Irrelevant combinations result in zero or close-to-zero entries and higher values indicate higher relevance. Each row of the correspondence matrix contains the correspondence weights of one link of embodiment 1 to all links of embodiment 2, where the highest value indicates which link is the most relevant among all links of embodiment 2. The elements of the correspondence matrix can either be calculated as a function of embodiment states or pre-calculated for a pair of embodiments, independent of their current state. State-Dependent Assignment. For state-dependent calculations of the correspondence matrix $\bm{W}(\bm{s},\hat{\bm{s}})$, weights are calculated using the distance measure between frames: the smaller the distance, the higher the weight for this pair of frames. To obtain the correspondence matrix $\bm{W}(\bm{s},\hat{\bm{s}})$, the mutual distance matrix $\bm{D}^{\prime}(\bm{s},\hat{\bm{s}})=({D}^{\prime}(\bm{s}_{i},\hat{\bm{s}}_{j}))\,,i=1\dots n,j=1\dots\hat{n},$ between all links of the two embodiments is computed using the distance measure in (13). A correspondence matrix can be generated by replacing the smallest element of each row of $\bm{D}^{\prime}(\bm{s},\hat{\bm{s}})$ with 1 and all other elements with 0, resulting in a binary matrix $\bm{W}_{12}(\bm{s},\hat{\bm{s}})$ that assigns exactly one link of embodiment 2 to each link of embodiment 1 with a weight 1. The same operation can be applied for each column of $\bm{D}^{\prime}(\bm{s},\hat{\bm{s}})$, resulting in $\bm{W}_{21}$. Adding $\bm{W}_{12}$ to $\bm{W}_{21}$ results in a correspondence matrix $\bm{W}$. A correspondence matrix that only uses the minimum for each row and each column is very selective and ignores the fact that more than one link of the other embodiment may lie at a similar distance and should be taken into consideration. This effect can be mitigated by applying a softmax function to the rows and columns of the correspondence matrix, after multiplying with a constant factor $\xi<0$ to find soft minima instead of maxima and to adjust the distinctness of the minimum. ### III-D Calculating Distance Between Embodiments We define the distance between embodiments as the element-wise multiplication of the distance matrix with the correspondence matrix, i.e., $\bm{D}(\bm{s},\hat{\bm{s}})=\bm{W}(\bm{s},\hat{\bm{s}})\circ\bm{D}^{\prime}(\bm{s},\hat{\bm{s}})$, where $\circ$ denotes the Hadamard product. Only distances between corresponding link pairs remain because non-corresponding pairs are weighted with zero or near-zero values. To obtain one single scalar number, the mean of all entries of the resulting matrix is taken. Matrix norms, such as the Frobenius norm, can also be used.
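A sketch of the state-dependent correspondence matrix with soft minima and of the resulting scalar distance of Sec. III-D (the softening factor $\xi<0$ follows the text; the function names are ours):

```python
import numpy as np

def soft_min(D, xi=-10.0, axis=1):
    """Softmax of xi * D with xi < 0, i.e., a soft arg-min along the given axis."""
    Z = np.exp(xi * D - np.max(xi * D, axis=axis, keepdims=True))
    return Z / np.sum(Z, axis=axis, keepdims=True)

def correspondence_matrix(D, xi=-10.0):
    """State-dependent W: soft row-wise assignment W_12 plus soft column-wise W_21."""
    return soft_min(D, xi, axis=1) + soft_min(D, xi, axis=0)

def embodiment_distance(D, W):
    """Sec. III-D: mean of the elementwise (Hadamard) product of W and D'."""
    return np.mean(W * D)
```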
For the evaluation of the correspondence matrix and the distance matrix, suitable weights, $\alpha_{tr}$ and $\alpha_{rot}$, need to be chosen. Different settings are possible here. The pseudo-code for calculating the distance measure is shown in Algorithm 1. Algorithm 1 Calculating the Distance Measure. Function distance_measure($\bm{s}(\bm{q})$, $\hat{\bm{s}}(\hat{\bm{q}})$, $\alpha_{tr}$, $\alpha_{rot}$): 1. Calculate $\\{\bm{T}_{i}\\}_{i=1}^{n},\\{\hat{\bm{T}}_{j}\\}_{j=1}^{\hat{n}}$ from $\bm{q}$, $\hat{\bm{q}}$ using forward kinematics. 2. $\bm{D}\leftarrow d(\bm{T},\hat{\bm{T}},\alpha_{tr},\alpha_{rot})$ for each $(\bm{T},\hat{\bm{T}})$ in $\\{\bm{T}_{i}\\}_{i=1}^{n}\times\\{\hat{\bm{T}}_{j}\\}_{j=1}^{\hat{n}}$. 3. Either calculate the correspondence matrix $\bm{W}(\bm{s},\hat{\bm{s}})$ or use a static correspondence matrix $\bm{W}$. 4. $\bm{D}_{W}\leftarrow\bm{D}\circ\bm{W}$. 5. $\overline{\bm{D}_{W}}\leftarrow\frac{1}{n\hat{n}}\sum_{i=1}^{n}\sum_{j=1}^{\hat{n}}D_{W,ij}$. 6. Return $\overline{\bm{D}_{W}}$. ## IV RESULTS We studied static and dynamic imitation tasks in simulation between two dissimilar embodiments using the previously derived distance measure. First, we present results of static pose imitation tasks between planar manipulators with different links using gradient descent. Second, we examined whether a neural network can learn the optimal static pose between two planar manipulators and, furthermore, between two Franka Emika Panda robots for a given expert pose. In addition, we investigated how well the learner generalized to poses it has not seen during training. Third, we present results for a dynamic imitation task between two Franka Emika Panda robotic arms. For this purpose, a simulation environment was built in the physics simulator Gazebo. In all simulations, we assumed that the learner has no knowledge of the robot dynamics. ### IV-A Static Pose Imitation Task In a static pose imitation task, the optimal pose of the learner, $\hat{\bm{q}}^{\ast}$, is obtained for a given pose of the expert, $\bm{q}$, by minimization of the distance measure $\displaystyle\hat{\bm{q}}^{\ast}=\operatorname*{arg\,min}_{\hat{\bm{q}}}\;d(\bm{s}(\bm{q}),\hat{\bm{s}}(\hat{\bm{q}}))\,.$ (21) Minimizing the distance function is a nonlinear optimization problem for which generally no analytical solution exists, in particular for embodiments with a large number of links. Using mathematical libraries such as TensorFlow, the gradient of the distance function can be computed and local minima can be found numerically via gradient descent. Instead of trying to solve the optimization problem repeatedly for each input, we can learn a mapping from joint angles of the expert to joint angles of the learner $\bm{f}_{\bm{\theta}}:\bm{q}\longrightarrow\hat{\bm{q}}$, where $\bm{q}\in[-\pi,\pi]^{n}$ and $\hat{\bm{q}}\in[-\pi,\pi]^{\hat{n}}$. The function $\bm{f}_{\bm{\theta}}$ can be approximated by a neural network with weight parameters $\bm{\theta}$. The distance measure was implemented as a computation graph in TensorFlow [35]. Figure 3: (a-c) Effects of using different distance weighting factors on pose imitation between two planar manipulators (expert: blue, learner: orange): (a) $\alpha_{tr}=0.5,\;\alpha_{rot}=1.0$; (b) $\alpha_{tr}=3.0,\;\alpha_{rot}=1.0$; (c) $\alpha_{tr}=1.0,\;\alpha_{rot}=1.0$. (d-f) Distance function between planar manipulators with two links.
(d) State-dependent weight matrix with $\alpha_{tr}=1.5,\alpha_{rot}=1.0$; (e) state-dependent weight matrix with distance-dependent weighting factors; (f) state-independent weight matrix, considering only the rotational distance between corresponding links ($\alpha_{tr}=0$). The distance is plotted over all possible joint angles of one embodiment, while the other manipulator remains fixed at $\bm{q}=[1.5,-1.5]$. Comparing Link Correspondences. Before training the neural network, we analyzed the behavior of the distance function for a simple toy model. Fig. 3a-c show an imitation task between planar manipulators with two links. The distance was measured using a state-dependent correspondence matrix $\bm{W}$ with varying weight factors $\alpha_{tr}$, $\alpha_{rot}$. Each pose of the learner was found via gradient descent. The choice of weight factors clearly has a strong influence on the quality of the result. Balancing between translational and rotational weights is challenging. One possibility to overcome this difficulty may be to simply use the translational distance as the translational weight and to redefine the rotational weight by subtracting the translational distance from its maximum value and rescaling from $[0,\pi]$ to $[0,2]$. This way, translational and rotational distance are in the same value range, i.e., $\alpha_{tr}=d_{tr},\quad\alpha_{rot}=\frac{2}{\pi}(2-d_{tr}).$ (22) The maximum distance results from the fact that both embodiments are normalized to be in a sphere of radius $1$ or diameter $2$, which is the maximum Euclidean distance between two points in this sphere. Equation (22) ensures that the sum of $\alpha_{tr}$ and $\alpha_{rot}$ is always $2$, excluding the transformation factor $2/\pi$. The problem with the above approaches is that there exist local minima in the distance function, as can be observed in Fig. 3(d) and 3(e) for two-link manipulators. Fig. 3(f), on the other hand, shows that when considering only the rotational distance between frames (setting $\alpha_{tr}=0$) and using a static, precalculated correspondence matrix, only one minimum remains. This approach is less flexible but more robust and resulted in parallel alignment of corresponding links. Therefore, all following experiments were conducted using this distance measure. Pose Imitation Mapping Using a Neural Network. We next implemented a neural network to map joint angles of the expert to corresponding joint angles of the learner, leading to a more efficient method for static pose imitation than conducting a gradient descent search for each state. To generate this nonlinear map, network parameters $\bm{\theta^{*}}=\operatorname*{arg\,min}_{\bm{\theta}}\sum_{i=1}^{N}d(\bm{s}(\bm{q}_{i}),\hat{\bm{s}}(f_{\bm{\theta}}(\bm{q}_{i})))\,,$ (23) need to be determined, which minimize the distance between the states of the expert and the states of the learner for a given training set $\\{\bm{q}_{1},\bm{q}_{2},\dots,\bm{q}_{N}\\}$, where the map $\bm{f}_{\bm{\theta}}(\bm{q}_{i})$ is represented by a neural network and $\bm{\theta}$ are the network parameters. The training dataset $\\{\bm{q}_{1},\bm{q}_{2},\dots,\bm{q}_{N}\\}$ can be generated randomly because it contains only expert angles that do not need to be labeled. The network structure consists of three hidden layers of size $32$ with LReLU activation functions. The output layer uses tanh as activation function, resulting in output values in $[-1,1]$. These values can then be mapped to angular values in $[-\pi,\pi]$.
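The described architecture (three hidden layers of 32 LReLU units, tanh output scaled to $[-\pi,\pi]$) and the objective (23) can be sketched in TensorFlow/Keras as follows; this is our own sketch (the paper used TensorFlow but does not list its code), and `distance_measure` is assumed to be implemented with differentiable TF ops:

```python
import numpy as np
import tensorflow as tf

def make_pose_net(n_expert, n_learner):
    """Map expert joint angles to learner joint angles in [-pi, pi]."""
    inp = tf.keras.Input(shape=(n_expert,))
    x = inp
    for _ in range(3):                                # three hidden layers of size 32
        x = tf.keras.layers.LeakyReLU()(tf.keras.layers.Dense(32)(x))
    out = tf.keras.layers.Dense(n_learner, activation="tanh")(x)
    out = tf.keras.layers.Lambda(lambda t: np.pi * t)(out)  # [-1, 1] -> [-pi, pi]
    return tf.keras.Model(inp, out)

def train_step(net, opt, q_batch, distance_measure):
    """One minibatch SGD step on the unsupervised objective (23)."""
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(distance_measure(q_batch, net(q_batch)))
    grads = tape.gradient(loss, net.trainable_variables)
    opt.apply_gradients(zip(grads, net.trainable_variables))
    return loss
```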
Having generated a training dataset and another dataset for validation, the network was trained using a minibatch-based stochastic gradient descent method. After dividing the training set into minibatches, the update step is performed for each of the minibatches. Afterwards, the whole training set is shuffled and the procedure repeated. Figure 4: Imitation between two planar manipulators using a neural network. (a-b) Expert: 7-DOF, learner: 4-DOF; (c-d) Expert: 4-DOF, learner: 7-DOF. We trained a neural network to find a mapping from joint angles of a 7-DOF expert manipulator to angles of a 4-DOF learner manipulator and another network for the same pair of manipulators but with switched expert/learner roles. Each time we used a training set of 1024 expert demonstrations, dividing it into 32 minibatches in each episode. Fig. 4 shows the learned poses of the trained network for given expert angles that were not included in the training set. Pose Imitation Between Three-Dimensional Embodiments. We next applied the method from the previous section to two Franka Emika Panda robotic arms in simulation. Dissimilar embodiments were generated by locking individual DOFs of the learner to $q_{i}=0$. For example, to simulate a four-link-Panda robot with four DOFs, joints 3, 6, and 7 were locked. To simulate a three-link-Panda robot, additionally joint 2 was locked (see Fig. 1). We first studied static pose imitation between identical embodiments with all 7 DOFs enabled (Video 1: https://youtu.be/UPZclkFoFXQ). Fig. 1(a) shows an example of how the trained network solves the task between dissimilar embodiments (expert: 7 DOFs, learner: 4 DOFs). The locked joints 6 and 7 (indicated in red) lie at the very end of the embodiment and therefore do not contribute much to the overall configuration of the embodiment, in contrast to the locked joint 3 at the center of the embodiment. In Fig. 1(b), the learner only has 3 DOFs. The learner tries to establish similarity by rotating joint 4, as the second joint is locked. The results can be seen in Video 2: https://youtu.be/BmFH6Nr9F1Y. The experiments have shown that training a neural network with a distance-based loss function worked reasonably well for static pose imitation. Local minima in the distance function and over-fitting on the training set posed some problems. While the former is more difficult to solve, the latter can be addressed by increasing the size of the training set and stopping the training process at a suitable time. Another possibility to improve the network’s performance may lie in the structure and training of the network. We used the same structure for all pose imitation tasks without employing techniques that decrease the probability of over-fitting, such as dropout or regularization [36]. ### IV-B Dynamic Motion Imitation In this section, we apply a reinforcement learning algorithm for motion imitation by using the online distance measure between embodiments as a feedback signal. Reinforcement learning has shown great success in learning motor tasks; see, for example, [37, 38]. We study the transfer of motions from one Panda robot with up to seven DOFs to another one in simulation. As before, different embodiments are generated by locking DOFs in the learner. We assume that the dynamics of the robots are unavailable (model-free) and that the learner is controlled by joint torques $\bm{\tau}$. Consequently, the agent needs to control the joint positions but also, implicitly, learn the robot dynamics. Simulation Environment.
The manufacturer of the Panda robot (Franka Emika) provided a good integration of the robot into the ROS ecosystem, which we augmented by the Gazebo physics simulator. Unfortunately, no exact inertia values for the Panda were provided, which were needed to simulate the dynamics. The CoR-Lab group from the Universität Bielefeld published some estimates of inertia values on their GitHub repository (https://github.com/corlab/cogimon-gazebo-models/blob/master/franka/robots/panda_arm.xacro). We used these estimates and manually adjusted them by using the guide given in the Gazebo manual. The simulated robot is controlled via joint torque commands. To create trajectories of the expert, simple PID-controllers were configured for each joint. Due to the lack of sophisticated controllers and to facilitate the task for the RL agent, gravity was turned off in the simulation. RL Environment and Agent. The next task consisted of implementing the reinforcement learning agent and the interface for interaction with the simulation. The state space consisted of the expert and learner states; thus, the environment’s state $\bm{s}$ is defined by the tuple $\bm{s}_{t}=\\{\bm{s}(\bm{q}_{t},\dot{\bm{q}}_{t}),\hat{\bm{s}}(\hat{\bm{q}}_{t},\dot{\hat{\bm{q}}}_{t})\\}$. The actions are given by the torque commands of the learner, $\hat{\bm{a}}_{t}=\hat{\bm{\tau}}_{t}$, subject to the torque limits for each corresponding joint. The control of the expert is not observable by the RL agent. For training and testing, multiple random trajectories of similar duration were recorded. The step size was set to $\Delta t=0.1$ s and a step consisted of the following transitions: the agent executed an action in the environment by sending a torque command to the learner. The torques were then applied for the duration of the simulation time $\Delta t$. The simulation then paused and returned the next observed state $s_{t+1}$ together with the reward $r_{t+1}$, which was calculated from the state as the negative distance measure between the embodiment states. To train the agent, we used Proximal Policy Optimization (PPO) [39], which is a state-of-the-art actor-critic DRL algorithm. One big advantage of PPO is its robustness with respect to hyperparameter tuning. We employed the GPU-optimized version (PPO2) of the Stable-Baselines repository, based on OpenAI’s implementations. Simulation 1: Imitation of a Single Trajectory. We first tested whether motion imitation using reinforcement learning is feasible by transferring a single trajectory between two Panda robots with all 7 DOFs activated, i.e., expert and learner had identical embodiments. Both expert and learner started in their zero pose, in which all joint angles and joint velocities are set to zero. The trajectory of the expert was recorded off-line by moving each joint to an arbitrarily chosen goal position. The environment was reset whenever the trajectory ended. Each trajectory had a total duration of 5 s, which leads to 50 steps per episode. The discount factor was set to $\gamma=0.4$, implying that the agent acted myopically. This value was chosen because high values led to very slow training progress. The weights for the frame-distance function were set to $\alpha_{tr}=0.0,\;\alpha_{rot}=1.0,\;\alpha_{v}=0.001,\;\alpha_{\omega}=0.01$ in all simulation experiments. Training time was about $7.5$ hours on a desktop computer using GPU-accelerated computations.
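The interaction loop just described can be summarized in a Gym-style environment and trained with Stable-Baselines' PPO2 and the myopic discount $\gamma=0.4$; the sketch below is our own (`sim` stands for a hypothetical Gazebo wrapper and `distance` for the measure of Sec. III evaluated on expert and learner states):

```python
import numpy as np
import gym
from gym import spaces

class ImitationEnv(gym.Env):
    """Sketch of the RL interface of Sec. IV-B; names are ours, not the paper's code."""
    def __init__(self, sim, expert_traj, distance, tau_max, dt=0.1):
        super().__init__()
        self.action_space = spaces.Box(-tau_max, tau_max, dtype=np.float32)
        obs_dim = 2 * expert_traj.shape[1]            # expert state + learner state
        self.observation_space = spaces.Box(-np.inf, np.inf, (obs_dim,), dtype=np.float32)
        self.sim, self.traj, self.distance, self.dt = sim, expert_traj, distance, dt

    def reset(self):
        self.t = 0
        self.sim.reset()                              # both robots to the zero pose
        return self._obs()

    def step(self, tau):
        self.sim.apply_torques(tau, self.dt)          # torques held for dt = 0.1 s
        self.t += 1
        reward = -self.distance(self.traj[self.t], self.sim.state())
        done = self.t >= len(self.traj) - 1           # 50 steps of 0.1 s = 5 s episodes
        return self._obs(), reward, done, {}

    def _obs(self):
        return np.concatenate([self.traj[self.t], self.sim.state()]).astype(np.float32)

# Training with Stable-Baselines, as named in the text:
# from stable_baselines import PPO2
# model = PPO2("MlpPolicy", ImitationEnv(...), gamma=0.4)
# model.learn(total_timesteps=500000)
```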
The simulation showed that the learned trajectory resembles the expert’s trajectory very closely and that imitation of motions is possible using a reinforcement learning framework with a distance-related reward function. The results can be seen in Video 3: https://youtu.be/dLN314VJTHg. Simulation 2: Generalization Between Trajectories. Figure 5: Generalization capabilities of two different agents. Shown is the distance measure for two 7-DOF-Panda learners while imitating the same, previously unseen trajectory. One learner (“Specialized”) was trained on a single trajectory, whereas the other (“Generalized”) was trained on 124 different trajectories. It was examined next whether the agent can imitate trajectories it had not seen before. For this purpose, we trained the agent with different amounts of training data (one vs. 124 trajectories). Trajectories were recorded as before, but each time with different final poses. The trajectory of the agent trained on a single trajectory barely resembles the trajectory of the expert. The imitation of the agent trained on the larger data set is not perfect, but resembles the trajectories from the expert more closely (see Video 4: https://youtu.be/2Q7jiY9DRUg). This improvement shows in a significantly smaller distance between expert and learner along the trajectory (see Fig. 5). Simulation 3: Imitation Between Dissimilar Embodiments. In the next experiment, we studied motion transfer between dissimilar Panda robots. Towards this goal, we trained the agent again on the same 124 trajectories, but this time the learner had only 4 or 3 DOFs, respectively. As the learning robot was restricted in its DOFs, some trajectories could not be imitated well, resulting in higher values of the distance measure. We found that the restricted learner moved its links in similar directions as the expert, but the restrictions prevented a more similar imitation. Examples can be seen in Video 5: https://youtu.be/Fytw8sz0pG0. ## V CONCLUSIONS Our main contributions with this work are threefold: First, we have introduced a definition of embodiment states in terms of frames/twists and candidate points. Second, we have provided a distance measure between dissimilar embodiments using correspondences between frames of expert and learner. Third, we have applied this distance measure to static pose and movement imitation tasks between manipulators. All tasks have been performed in simulation. In all experiments we could show that the agent was able to learn the imitation task, even though no dynamic model was provided to the learner. The framework that we have developed is generic and flexible and not limited to our choice of parameters, distance measures and types of robots. Depending on the correspondence matrix calculation, the topology of the embodiments is not crucial. Possibly even free topologies, such as swarms of flying objects, could be compared and brought into similarity. ## References * [1] P. Abbeel, A. Coates, and A. Y. Ng, “Autonomous helicopter aerobatics through apprenticeship learning,” _The International Journal of Robotics Research_, vol. 29, no. 13, pp. 1608–1639, 2010. * [2] J. Kober and J. Peters, “Imitation and reinforcement learning,” _IEEE Robotics & Automation Magazine_, vol. 17, no. 2, pp. 55–62, 2010. * [3] P. Kormushev, S. Calinon, and D. G. Caldwell, “Imitation learning of positional and force skills demonstrated via kinesthetic teaching and haptic input,” _Advanced Robotics_, vol. 25, no. 5, pp. 581–603, 2011. * [4] A. Boularias, J.
Kober, and J. Peters, “Relative entropy inverse reinforcement learning,” in _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_, 2011, pp. 182–189. * [5] S. Calinon, F. D’halluin, E. L. Sauser, D. G. Caldwell, and A. G. Billard, “Learning and reproduction of gestures by imitation,” _IEEE Robotics & Automation Magazine_, vol. 17, no. 2, pp. 44–54, 2010. * [6] T. Asfour, P. Azad, F. Gyarfas, and R. Dillmann, “Imitation learning of dual-arm manipulation tasks in humanoid robots,” _International Journal of Humanoid Robotics_, vol. 5, no. 02, pp. 183–202, 2008. * [7] M. Lopes, F. S. Melo, and L. Montesano, “Affordance-based imitation learning in robots,” in _2007 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2007, pp. 1015–1021. * [8] N. Ratliff, J. A. Bagnell, and S. S. Srinivasa, “Imitation learning for locomotion and manipulation,” in _2007 7th IEEE-RAS International Conference on Humanoid Robots_. IEEE, 2007, pp. 392–397. * [9] R. Chalodhorn, D. B. Grimes, K. Grochow, and R. P. Rao, “Learning to walk by imitation in low-dimensional subspaces,” _Advanced Robotics_, vol. 24, no. 1-2, pp. 207–232, 2010. * [10] T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, J. Peters, _et al._, “An algorithmic perspective on imitation learning,” _Foundations and Trends® in Robotics_, vol. 7, no. 1-2, pp. 1–179, 2018. * [11] A. Billard, S. Calinon, and R. Dillmann, “Learning from humans,” _Springer Handbook of Robotics_, pp. 1995–2014, 2016. * [12] P. Englert, A. Paraschos, M. P. Deisenroth, and J. Peters, “Probabilistic model-based imitation learning,” _Adaptive Behavior_, vol. 21, no. 5, pp. 388–403, 2013. * [13] S. Ross, G. Gordon, and D. Bagnell, “A reduction of imitation learning and structured prediction to no-regret online learning,” in _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_, 2011, pp. 627–635. * [14] S. Ross, N. Melik-Barkhudarov, K. S. Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert, “Learning monocular reactive UAV control in cluttered natural environments,” in _2013 IEEE International Conference on Robotics and Automation_, 2013. * [15] A. J. Ijspeert, J. Nakanishi, and S. Schaal, “Movement imitation with nonlinear dynamical systems in humanoid robots,” in _Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292)_, vol. 2. IEEE, 2002, pp. 1398–1403. * [16] G. Maeda, M. Ewerton, G. Neumann, R. Lioutikov, and J. Peters, “Phase estimation for fast action recognition and trajectory generation in human–robot collaboration,” _The International Journal of Robotics Research_, vol. 36, no. 13-14, pp. 1579–1594, 2017. * [17] T. Osa, K. Harada, N. Sugita, and M. Mitsuishi, “Trajectory planning under different initial conditions for surgical task automation by learning from demonstration,” in _2014 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2014, pp. 6507–6513. * [18] X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne, “Deepmimic: Example-guided deep reinforcement learning of physics-based character skills,” _ACM Transactions on Graphics (TOG)_, vol. 37, no. 4, pp. 1–14, 2018. * [19] P. Abbeel and A. Y. Ng, “Apprenticeship learning via inverse reinforcement learning,” in _Proceedings of the Twenty-First International Conference on Machine Learning_, 2004. * [20] N. D. Ratliff, J. A. Bagnell, and M. A.
Zinkevich, “Maximum margin planning,” in _Proceedings of the 23rd International Conference on Machine Learning_, 2006, pp. 729–736. * [21] D. Silver, J. Bagnell, and A. Stentz, “Learning from demonstration for autonomous navigation in complex unstructured terrain,” _The International Journal of Robotics Research_, vol. 29, pp. 1565–1592, 2010. * [22] B. Ziebart, A. Maas, J. Bagnell, and A. Dey, “Maximum entropy inverse reinforcement learning,” in _Proc. AAAI_. AAAI Press, 2008, pp. 1433–1438. * [23] C. Finn, S. Levine, and P. Abbeel, “Guided cost learning: Deep inverse optimal control via policy optimization,” in _Proceedings of the 33rd International Conference on Machine Learning_, 2016. * [24] J. Ho, J. Gupta, and S. Ermon, “Model-free imitation learning with policy optimization,” in _International Conference on Machine Learning_, 2016, pp. 2760–2769. * [25] C. L. Nehaniv and K. Dautenhahn, “Like me? - Measures of correspondence and imitation,” _Cybernetics and Systems_, vol. 32, pp. 11–51, 2001. * [26] A. Alissandrakis, C. Nehaniv, and K. Dautenhahn, “Imitating with ALICE: Learning to imitate corresponding actions across dissimilar embodiments,” _IEEE Transactions on Systems, Man, & Cybernetics, Part A: Systems and Humans_, vol. 32, pp. 482–496, 2002. * [27] A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn, “Do as I do: Correspondences across different robotic embodiments,” in _Proc. 5th German Workshop on Artificial Life_, Lübeck, 2002. * [28] A. Alissandrakis, C. L. Nehaniv, K. Dautenhahn, and J. Saunders, “Achieving corresponding effects on multiple robotic platforms: Imitating in context using different effect metrics,” in _Proceedings of the Third International Symposium on Imitation in Animals and Artifacts_. AISB, 2005. * [29] A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn, “Correspondence mapping induced state and action metrics for robotic imitation,” _IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)_, vol. 37, pp. 299–307, 2007. * [30] C. L. Nehaniv and K. E. Dautenhahn, _Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions_. Cambridge University Press, 2007. * [31] D. B. Grimes, R. Chalodhorn, and R. P. Rao, “Dynamic imitation in a humanoid robot through nonparametric probabilistic inference,” in _Robotics: Science and Systems_, 2006, pp. 199–206. * [32] D. B. Grimes and R. P. Rao, “Learning actions through imitation and exploration: Towards humanoid robots that learn from humans,” in _Creating Brain-Like Intelligence_. Springer, 2009, pp. 103–138. * [33] F. Park, J. Bobrow, and S. Ploen, “A Lie group formulation of robot dynamics,” _The International Journal of Robotics Research_, vol. 14, no. 6, pp. 609–618, 1995. * [34] K. M. Lynch and F. C. Park, _Modern Robotics_. Cambridge University Press, 2017. * [35] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in _12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)_. USENIX Association, 2016, pp. 265–283. * [36] C. C. Aggarwal, _Neural Networks and Deep Learning_. Springer, 2018, vol. 10. * [37] N. Heess, D. TB, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang, S.
Eslami, _et al._, “Emergence of locomotion behaviours in rich environments,” _arXiv preprint arXiv:1707.02286_, 2017. * [38] O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, _et al._, “Learning dexterous in-hand manipulation,” _The International Journal of Robotics Research_, vol. 39, no. 1, pp. 3–20, 2020. * [39] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” _arXiv preprint arXiv:1707.06347_, 2017.
# The Multi-granularity in Graph Revealed by a Generalized Leading Tree Shun Fu, Ji Xu ###### Abstract Networks exhibit hierarchical characteristics, and how to reveal these characteristics effectively is an open problem in the study of network structure. If a node is assigned to the community to which it belongs, how to assign that community to the higher-level community it belongs to is a further problem. In this paper, the density of data points is investigated from the perspective of the clustering task. The density of data points induces a hierarchical difference between them; combined with the distance between data points, a density-based leading tree can be constructed. In a graph structure, however, building a leading tree that reveals the hierarchical relationships of the nodes is still a problem. Building on the density-based tree construction method, this paper extends the leading tree model to the hierarchical structure of graph nodes, discusses how to measure the importance of graph nodes, and forms a leading tree that reveals the hierarchical structure of graph nodes and the dependencies between communities. Experiments were carried out on real data sets and a tree structure was formed. This graph leading tree reveals the hierarchical relationships in the graph structure well. ###### keywords: Granular computing, Network representation learning, Graph clustering ## 1 Introduction Graphs are widely used to model and display the relationships between objects in the world. On the other hand, the distribution of many data objects presents multi-level characteristics, such as the organization and management structures of human beings, the food chain of the earth, the taxonomy of species, and so on. In relational networks (such as social networks, academic networks, traffic networks, etc.), the distribution of nodes also presents multi-level characteristics. Studying each single node as an isolated object makes the problem very complicated. Instead, following human cognitive rules and multi-granularity cognitive methods for complex problems, one can study the objects in the graph at different levels. This raises several questions: which nodes in the network are at the higher levels and which are at the lower levels? Which nodes make up a community? And which communities form a larger community? Answering these questions requires looking at complex relational networks from a multi-granularity perspective, in which objects (nodes, edges, communities, etc.) can belong to different granularity levels. Therefore, it is important to reveal the multi-granularity, multi-level structure of the node objects in a network. ### 1.1 The hierarchical properties in networks (introduction and related works) ### 1.2 Hierarchical clustering (introduction and related works) Figure 1: Caption for subfigures (a)-(c). ### 1.3 The leading tree for clustering in Euclidean space For efficient and accurate clustering of general data points in datasets of arbitrary shape in Euclidean space, Rodriguez and Laio proposed the DPClust algorithm [1]. Xu [2] considered the cluster assignment process of the DPClust algorithm as a process of finding, for each data point, its parent node. Once the search for parent nodes is completed, a tree-like subordinate relationship is formed between the nodes.
As shown in Figure 1(a), a tree structure is formed in Euclidean space by the DPClust algorithm. The application of the leading tree to hierarchical clustering makes it easy to examine the multi-level community relationships in the distribution of data points at multiple granularities. As shown in Figure 1(b), the leading tree maps the data to a tree-shaped space. One can easily see that the points in the upper layers of the tree lie closer to the core of a cluster, while the points in the lower layers correspond to the edge points of a cluster. In other words, the mapping to the tree structure assigns the points in the original Euclidean space to granular layers. On the other hand, this tree structure can be decomposed into many subtrees, and small subtrees can be merged into large subtrees. This corresponds to a multi-level division of the clusters in the original Euclidean space. As shown in Figure 1(c), subtree a corresponds to cluster A, and a belongs to a larger subtree b, so at a coarser granularity, cluster A is a component of cluster B. For points in Euclidean space, DenPEHC [2] achieves fast and effective multi-level clustering, and a clear, intuitive disclosure of the multi-level dependencies of these points, through the construction of a density-based leading tree. But for data objects in the widely used graph spaces, how can we construct a multi-granularity tree structure that reveals such relationships? The construction of the leading tree in DenPEHC is carried out for data points in Euclidean space; it relies on the distance between data points and a density defined on the data points, so that each point is subordinate to (led by) the nearest point of higher density. We believe that the process of a data point finding its parent node in Euclidean space can be generalized to data objects in any space, such as graph data objects. The purpose of this paper is to discuss how to extend the idea of leading tree construction to data objects in graph space, so that this multi-granularity computing idea can reveal the hierarchical relationships of data distributed in a graph. ## 2 Contribution In this paper, we offer the following contributions: 1. A novel granular computing method is proposed to reveal the structural hierarchy of graph nodes. 2. A new tool and method is provided for graph clustering, which can effectively and intuitively reveal the structural hierarchy of graph data objects. 3. The leading tree, previously applicable only to Euclidean spaces, is expressed in a more general way and extended to graph data objects. 4. Measures of the importance of graph nodes are discussed to inspire further exploration of this problem. ## 3 Related works ### 3.1 Granular computing Granular computing is an umbrella concept which aims at revealing the hierarchical relationships between objects [3]. The granule is the basic unit of processing in granular computing. Granules can be defined in different ways, and they are flexible and scalable. … ### 3.2 Hierarchical Clustering ### 3.3 Graph Clustering ## 4 Problems ### 4.1 A generalized model of leading tree For any kind of object, or granule, one can define its importance and its distance from other granules of the same kind. With these two elements, a tree structure can be generated that reveals the membership and hierarchy of all granules in a set. This tree structure forms a generalized leading tree, not limited to data points in Euclidean space.
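A minimal sketch of this construction (our own illustration, not code from the paper): given an importance array and a pairwise distance matrix, each granule is led by the nearest granule of strictly higher importance, and the most important granule becomes the root:

```python
import numpy as np

def leading_tree(importance, dist):
    """Parent of each granule = nearest granule of higher importance; roots get -1."""
    n = len(importance)
    parent = np.full(n, -1)
    order = np.argsort(-np.asarray(importance))   # granules by descending importance
    for rank in range(1, n):
        i = order[rank]
        higher = order[:rank]                     # all granules more important than i
        parent[i] = higher[np.argmin(dist[i, higher])]
    return parent

# Toy usage with nine random granules; for a graph, the importance could be the
# vertex degree and dist the shortest-path length (Sections 5 and 6):
rng = np.random.default_rng(0)
imp = rng.random(9)
D = rng.random((9, 9)); D = (D + D.T) / 2         # symmetric toy distance matrix
print(leading_tree(imp, D))
```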
For different data objects, or granules, the importance needs to be either manually defined or machine-learned, depending on the needs of the problem. The distance between two objects also needs to be defined according to the needs of the problem. Figure 2: Two elements for GLT generation: table of importance and matrix of distance. Take a set of nine granules as an example. The distance matrix between them is shown in Figure 2. Based on the distance matrix and the importance table, the parent of each granule can be found. This generates the tree structure on the right side of the figure. With the tree structure, the dependencies between granules are clearly visible. Here, a granule can be a point in a Euclidean space, a vertex in a graph, a person in a collaborating organization, a community of vertices, and so on. The tree structure in Fig. 2 reflects the membership of the nine granules: granules 1 and 9 are led by granule 2, so 1, 2 and 9 constitute a small basic-level organization; they in turn are led by granule 7, along with granule 5, so 1, 2, 9, 5 and 7 constitute a higher-level organization centered on 7. These organizational levels are abstract concepts; if the granules are given concrete meaning, the meaning of these levels also becomes concrete, such as the community membership formed by nodes in a graph, the management structure of a company, or a wildlife food chain. ## 5 Distance and importance measuring in graphs As we saw in the previous section, the necessary elements for building a generalized granule leading tree are a distance (similarity) measure between granules and the importance of each granule under a certain criterion. In this section we extend the generalized leading tree defined in the previous section specifically to graph problems. In a set of data points in Euclidean space, the distance measure can be defined by the cosine distance between the data point vectors, and if the data points are to be clustered, the importance measure can be the density of a point. In order to extend the generalized leading tree model discussed in the previous section into a facility for revealing the hierarchical, multi-granularity properties of a graph, we need to define the distance between granules and an importance measure for granules in the graph. This section lists several measures and discusses the information each of them focuses on. In the experimental section, we will reveal the differences in the leading trees generated under different metrics and discuss the hierarchical information expressed by the different leading trees. ### 5.1 The importance measure by degree The importance of a node can be measured by its degree. Degree is the most direct indicator, counting the links between a vertex and other vertices. In most scenarios, degree reflects the influence of a vertex. For example, in a community, if a vertex has the largest degree, that vertex should have some unique properties; it may be the most socially active member of that community. ### 5.2 The importance measure by eigenvector centrality Eigenvector centrality is a kind of extension of degree—it looks at a combination of a node’s edges and the edges of that node’s neighbors. Eigenvector centrality cares if you are a hub, but it also cares how many hubs you are connected to. It’s calculated as a value from 0 to 1: the closer to one, the greater the centrality.
Eigenvector centrality is useful for understanding which nodes can get information to many other nodes quickly. If you know a lot of well-connected people, you could spread a message very efficiently. If you’ve used Google, then you’re already somewhat familiar with eigenvector centrality: the PageRank algorithm uses an extension of this formula to decide which webpages get to the top of its search results. ### 5.3 The importance measure by betweenness centrality Betweenness centrality is a bit different from the other two measures in that it doesn’t care about the number of edges any one node or set of nodes has. Betweenness centrality looks at all the shortest paths that pass through a particular node. To do this, it must first calculate every possible shortest path in the network, so keep in mind that betweenness centrality will take longer to calculate than the other centrality measures (though this is not an issue for datasets of moderate size). Betweenness centrality, which is also expressed on a scale of 0 to 1, is fairly good at finding nodes that connect two otherwise disparate parts of a network. If a node is the only thing connecting two clusters, every communication between those clusters has to pass through it. In contrast to a hub, this sort of node is often referred to as a broker. ### 5.4 The distance measure by the length of path In addition to the importance measures for vertices, we need to measure the distance between vertices in a graph for the GLT generation. The simplest way is the length of the path between two vertices. Another way is to use a similarity measure and subtract it from one to obtain the distance between vertices. The similarity measure can be the Jaccard coefficient, SimRank, PageRank, etc. ## 6 The graph leading tree model For a vertex $v$, among the set of vertices that have higher degree than $v$, we take the closest one (say, $u$) as the parent vertex of $v$. Proceeding in this way, one or more trees are formed, yielding an interpretable semantic tree or semantic forest that reveals hierarchical semantic concepts in the structure of the relational network. ## References * [1] A. Rodriguez, A. Laio, Clustering by fast search and find of density peaks, Science 344 (6191) (2014) 1492–1496. * [2] J. Xu, G. Wang, W. Deng, DenPEHC: Density peak based efficient hierarchical clustering, Information Sciences 373 (2016) 200–218. * [3] J. T. Yao, A. V. Vasilakos, W. Pedrycz, Granular computing: perspectives and challenges, IEEE Transactions on Cybernetics 43 (6) (2013) 1977–1989.
# On the Segre Invariant for Rank Two Vector Bundles on $\mathbb{P}^{2}$ L. Roa-Leguizamón Instituto de Física y Matemáticas Universidad Michoacana de San Nicolás de Hidalgo Edificio C3, Ciudad Universitaria C.P.58040 Morelia, Mich. México<EMAIL_ADDRESS>, H. Torres-López CONACyT - U. A. Matemáticas, U. Autónoma de Zacatecas Calzada Solidaridad entronque Paseo a la Bufa, C.P. 98000, Zacatecas, Zac. México<EMAIL_ADDRESS>and A. G. Zamora U. A. Matemáticas, U. Autónoma de Zacatecas Calzada Solidaridad entronque Paseo a la Bufa, C.P. 98000, Zacatecas, Zac. México<EMAIL_ADDRESS> ###### Abstract. We extend the concept of Segre’s Invariant to vector bundles on a surface $X$. For $X=\mathbb{P}^{2}$ we determine what numbers can appear as the Segre Invariant of a rank $2$ vector bundle with given Chern’s classes. The irreducibility of strata with fixed Segre’s invariant is proved and its dimensions are computed. Finally, we present applications to the Brill- Noether’s Theory for rank $2$ vector bundles on $\mathbb{P}^{2}.$ ###### Key words and phrases: moduli of vector bundles on surfaces, Segre invariant, stratification of the moduli space. This paper was partially supported by CONACyT Grant CB-257079. The first author acknowledges the financial support of Fondo Institucional de Fomento Regional para el Desarrollo Científico, Tecnológico y de Innovación, FORDECYT 265667. ## 1\. Introduction We work over the field of complex numbers $\mathbb{C}$. Given a coherent sheaf $F$ on a variety $X$ we write $H^{i}(F)$ instead of $H^{i}(X,F)$ and $h^{i}(F)$ for the corresponding dimension. Let $C$ be a non-singular irreducible complex projective curve of genus $g$. Let $E$ be a vector bundle of rank $n$ and degree $d$ over $C$. For any integer $m$, $1\leq m<n$ the $m-$Segre invariant is defined by $S_{m}(E):=\min\\{m.\deg\,E-n.\deg\,F\\}$ where the minimum is taken over the subbundles F of $E$ of rank m. This invariant induces a stratification of the moduli space $M(n,d)$ of stable vector bundles of rank $n$ and degree $d$ over $C$. This stratification has been studied by several authors (see for instance [8], [9], [2] and [17]) in order to get topological and geometric properties of $M(n,d)$. Given a non-singular, projective surface $X$, the moduli space $M_{X}(n,c_{1},c_{2})$ of rank $n$ vector bundles with Chern classes $c_{1}$ and $c_{2}$ was constructed by Maruyama [14]. However, relatively little is known about its geometry and subvarieties. The aim of this paper is to define the Segre invariant for rank $2$ vector bundles on surfaces, with emphasis in the case $X=\mathbb{P}^{2}$. Since this invariant defines a semicontinuous function on the families of vector bundles of rank $2$ on X, we get a stratification of the moduli space $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ into locally closed subvarieties $M_{\mathbb{P}^{2}}(2;c_{1},c_{2};s)$ of vector bundles with fixed Segre’s invariant $s$. Section 2 introduces the Segre invariant on surfaces and collects a number of results that will be subsequently used. Section 3 is the core of the paper. The main issue is to determine what numbers can appear as the Segre invariant of a vector bundle on $\mathbb{P}^{2}$ with fixed characteristic classes. The answer is given by: Theorem 3.1 1. (1) Let $c_{2}\geq 2$ and $k\in\mathbb{N}$. Then a vector bundle $E\in M_{\mathbb{P}^{2}}(2,0,c_{2})$ with $S(E)=2k$ exists if and only if $k^{2}+k\leq c_{2}$. 
Furthermore, $E$ fits in an exact sequence: $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\longrightarrow 0,$ with $Z\subset\mathbb{P}^{2}$ of codimension $2$, and $\mathcal{O}_{\mathbb{P}^{2}}(-k)\subset E$ maximal. 2. (2) Let $c_{2}\geq 1$ and $k\in\mathbb{N}$. Then a vector bundle $E\in M_{\mathbb{P}^{2}}(2,-1,c_{2})$ with $S(E)=2k-1$ exists if and only if $k^{2}\leq c_{2}$. Furthermore, $E$ fits in an exact sequence: $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k-1)\otimes I_{Z}\longrightarrow 0,$ with $Z\subset\mathbb{P}^{2}$ of codimension $2$, and $\mathcal{O}_{\mathbb{P}^{2}}(-k)\subset E$ maximal. The idea of the proof is to apply Serre's construction in order to obtain the desired extension and then to show that the line sub-bundle $\mathcal{O}_{\mathbb{P}^{2}}(-k)$ is, indeed, maximal. Note that, since Segre's invariant is invariant under tensor product with line bundles, it is sufficient to consider rank 2 vector bundles with $c_{1}(E)=0,-1$. Theorem 3.1 states in particular that the elements of $M_{\mathbb{P}^{2}}(2;c_{1},c_{2};s)$ are parameterized by extensions of stable vector bundles. Thus, the dimension of the stratum is obtained by "counting parameters" of such extensions. This idea is formalized in Section 4, the precise statement being: Theorem 4.1 1. (1) Let $c_{2}\geq 2$, $k\in\mathbb{N}$ such that $k^{2}+k\leq c_{2}$. Then $M_{\mathbb{P}^{2}}(2;0,c_{2};2k)$ is an irreducible variety of dimension: $\begin{cases}3c_{2}+k^{2}+3k-2,&\text{if $c_{2}>k^{2}+3k+1$}\\\ 4c_{2}-3,&\text{if $c_{2}\leq k^{2}+3k+1$.}\\\ \end{cases}$ 2. (2) Let $c_{2}\geq 1$, $k\in\mathbb{N}$ such that $k^{2}\leq c_{2}$. Then $M_{\mathbb{P}^{2}}(2;-1,c_{2};-1+2k)$ is an irreducible variety of dimension: $\begin{cases}3c_{2}+k^{2}+2k-4,&\text{if $c_{2}>k^{2}+2k$}\\\ 4c_{2}-4,&\text{if $c_{2}\leq k^{2}+2k$.}\\\ \end{cases}$ Next, observe that the existence of a morphism $\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E$ leads immediately to the existence of a global section of $E(k)$. Thus, Segre's invariant is naturally related to the existence of vector bundles admitting at least a section. This is a first connection with Brill-Noether Theory. Section 5 is devoted to investigating this connection. The main result is: Theorem 5.5 1. (1) Let $r,c_{2},k$ be integer numbers satisfying $r^{2}+2\leq c_{2}$ and $k<r$. Then, a vector bundle $E\in M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$ exists such that $h^{0}(E)\geq(r-k)^{2}+4(r-k)+3$. 2. (2) Let $r,c_{2},k$ be integer numbers satisfying $r^{2}-r+1\leq c_{2}$. Then, a vector bundle $E\in M_{\mathbb{P}^{2}}(2;2r-1,c_{2};2k-1)$ exists such that $h^{0}(E)\geq t$, where $t=\begin{cases}(r-k)^{2}+4(r-k)+3&\textit{ if }k\neq 1,\\\ r^{2}+r-1&\textit{ if }k=1.\end{cases}$ We finish the paper by computing a lower bound for the dimension of these Brill-Noether varieties. As a consequence of our results, the existence of nonempty Brill-Noether loci with negative Brill-Noether number can be deduced. ## 2\. Segre invariant of vector bundles on Surfaces We start this section by recalling the main results about vector bundles on surfaces that shall be used in the sequel. For a further treatment of the subject see [5] and [13]. Let $X$ be a smooth, irreducible complex projective surface and let $H$ be an ample divisor on $X$.
Let $\mathcal{E}$ be a torsion free coherent sheaf on $X$ with fixed Chern classes $c_{i}(\mathcal{E})\in H^{2i}(X,\mathbb{Z})$ for $i=1,2$. The $H$-slope of $\mathcal{E}$ is defined as the rational number $\mu_{H}(\mathcal{E}):=\frac{c_{1}(\mathcal{E}).H}{rk\,\mathcal{E}},$ where $c_{1}(\mathcal{E}).H$ is the degree of $\mathcal{E}$ with respect to $H$ and is denoted by $deg_{H}(\mathcal{E})$. ###### Definition 2.1. _A torsion free coherent sheaf $\mathcal{E}$ is $H$-stable (resp. $H$-semistable) if for every nonzero subsheaf $\mathcal{F}$ of smaller rank we have_ $\mu_{H}(\mathcal{F})<\mu_{H}(\mathcal{E})\text{ (resp. $\leq$). }$ Let $\mathcal{E}$ be a coherent sheaf on $X$. The dual of $\mathcal{E}$ is the sheaf $\mathcal{E}^{\vee}=\mathcal{H}om(\mathcal{E},\mathcal{O}_{X})$. If the natural map $\mathcal{E}\longrightarrow\mathcal{E}^{\vee\vee}$ of $\mathcal{E}$ to its double dual is an isomorphism, we say that $\mathcal{E}$ is reflexive. In particular, any locally free sheaf is reflexive. For more details related to reflexive sheaves see [11]. ###### Remark 2.2. _Let $E$ be a vector bundle on $X$. Then $E$ is $H$-stable (resp. $H$-semistable) if for every proper subbundle $F$ we have $\mu_{H}(F)<\mu_{H}(E)$ (resp. $\leq$). Indeed, let $\mathcal{F}\subset E$ be a proper subsheaf; then $\mathcal{F}$ is torsion free and a canonical embedding $\mathcal{F}\rightarrow\mathcal{F}^{\vee\vee}$ exists which fits in the following commutative diagram:_ $\begin{array}{ccc}\mathcal{F}&\longrightarrow&E\\\ \downarrow&&\downarrow\\\ \mathcal{F}^{\vee\vee}&\longrightarrow&E^{\vee\vee}.\end{array}$ _Since $\mathcal{F}^{\vee\vee}$ is a reflexive sheaf and $c_{1}(\mathcal{F})=c_{1}(\mathcal{F}^{\vee\vee})$, it follows that $E$ is $H$-stable (resp. $H$-semistable) if for any proper reflexive sheaf $\mathcal{F}$ we have $\mu_{H}(\mathcal{F})<\mu_{H}(E)$ (resp. $\leq$). Moreover, since the singular locus of the reflexive sheaf $\mathcal{F}$ has codimension greater than two, it is empty on the surface $X$, and $\mathcal{F}$ is a vector bundle. _ The following lemma allows us to establish a relationship between line subbundles of $E$ and extensions. ###### Lemma 2.3. [5, Chapter 2. Proposition 5.] Let $\phi:L\longrightarrow E$ be a sub-line bundle. Then a unique effective divisor $D$ on $X$ exists, such that the map $\phi$ factors through the inclusion $L\longrightarrow L\otimes\mathcal{O}_{X}(D)$ and such that $E/(L\otimes\mathcal{O}_{X}(D))$ is torsion free. From Lemma 2.3 it follows that if $L\subset E$ is maximal, then $E$ admits an extension $0\longrightarrow L\longrightarrow E\longrightarrow L^{\prime}\otimes I_{Z}\longrightarrow 0,$ where $L^{\prime}\otimes I_{Z}$ is torsion free and $I_{Z}$ denotes the ideal sheaf of a subscheme $Z$ of codimension $2$. Note that $\displaystyle c_{1}(E)$ $\displaystyle=c_{1}(L)+c_{1}(L^{\prime}),$ $\displaystyle c_{2}(E)$ $\displaystyle=c_{1}(L)\cdot c_{1}(L^{\prime})+l(Z),$ where $l(Z)$ denotes the length of $Z$. Now, we extend the concept of Segre's invariant for vector bundles on curves to vector bundles of rank $2$ on surfaces. ###### Definition 2.4. Let $H$ be an ample divisor on $X$. For a vector bundle $E$ of rank $2$ on $X$ we define the Segre invariant $S_{H}(E)$ by $S_{H}(E):=2\min\\{\mu_{H}(E)-\mu_{H}(L)\\},$ where the minimum is taken over all line subbundles $L$ of $E$.
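As a simple illustration of Definition 2.4, suppose $E=\mathcal{O}_{X}(D_{1})\oplus\mathcal{O}_{X}(D_{2})$ splits as a direct sum of line bundles. Any nonzero map $L\rightarrow E$ has a nonzero component $L\rightarrow\mathcal{O}_{X}(D_{i})$ for some $i$, so that $D_{i}-c_{1}(L)$ is effective and $c_{1}(L).H\leq\max\\{D_{1}.H,D_{2}.H\\}$, with the bound attained by the direct summand of larger degree. Hence $S_{H}(E)=(D_{1}+D_{2}).H-2\max\\{D_{1}.H,D_{2}.H\\}=-|D_{1}.H-D_{2}.H|\leq 0,$ in agreement with the fact that a decomposable bundle is never $H$-stable.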
Note that Definition 2.4 is equivalent to $S_{H}(E)=\min\\{c_{1}(E).H-2\,c_{1}(L).H\\},$ where again the minimum is taken over all line subbundles $L$ of $E$. Segre's invariant is always a finite number: in case $X=\mathbb{P}^{2}$ this follows from Serre's Vanishing Theorem; in general we have: ###### Lemma 2.5. Let $X$ be a projective smooth surface, $H$ an ample line bundle on $X$ and $E$ a rank $2$ vector bundle on $X$. Then, the set: $\\{L.H:L\subset E,L\text{ a line bundle}\\},$ is bounded from above. ###### Proof. Let $L_{1}$, $L_{2}$ be such that an exact sequence: $0\to L_{1}\to E\to L_{2}\otimes I_{Z}\to 0$ exists, with $Z$ a zero-cycle on $X$ (see Lemma 2.3 above). Consider $L\subset E$. If $L\subseteq L_{1}$, then $L.H\leq L_{1}.H$. If not, then the composition $L\to E\to L_{2}\otimes I_{Z}$ is nonzero, and in consequence $h^{0}(L_{2}\otimes I_{Z}\otimes L^{-1})\neq 0$. Hence $c_{1}(L_{2})-c_{1}(L)$ is effective and it follows that: $L.H\leq c_{1}(L_{2}\otimes I_{Z}).H.$ ∎ The term invariant is used because $S_{H}(E)=S_{H}(E\otimes L)$ for any line bundle $L\in Pic(X)$. By Remark 2.2, $E$ is $H$-stable (resp. $H$-semistable) if and only if $S_{H}(E)>0$ (resp. $\geq$). We say that $L\subset E$ is maximal if $S_{H}(E)=c_{1}(E).H-2c_{1}(L).H.$ The Segre invariant induces a stratification of the moduli space of vector bundles: ###### Lemma 2.6. Let $H$ be an ample divisor on $X$ and let $T$ be a variety. Let $\mathcal{E}$ be a vector bundle of rank $2$ on $X\times T$. The function $\displaystyle S_{H}:T$ $\displaystyle\longrightarrow$ $\displaystyle\mathbb{Z}$ $\displaystyle t$ $\displaystyle\longmapsto$ $\displaystyle S_{H}(\mathcal{E}_{t})$ is lower semicontinuous. ###### Proof. The semicontinuity follows as a slight generalization of the openness property of stability, for which the same proof works (see [14, Theorem 2.8]). ∎ The moduli space of $H$-stable vector bundles with fixed Chern classes $c_{1}$ and $c_{2}$ on $X$ was constructed in the 1970s by Maruyama (see [15]). We shall denote the moduli space of $H$-stable vector bundles of rank $n$ and Chern classes $c_{1}$ and $c_{2}$ on $X$ by $M_{X,H}(n;c_{1},c_{2})$. In case $X$ is a rational surface the dimension of the moduli space of rank $2$ vector bundles is $\dim M_{X,H}(2,c_{1},c_{2})=4c_{2}-c_{1}^{2}-3$. ## 3\. Segre Invariant for rank $2$ vector bundles on $\mathbb{P}^{2}$ In this section we study the Segre invariant for vector bundles of rank $2$ on $\mathbb{P}^{2}$. In this case a uniquely determined integer $k_{E}$ exists such that $c_{1}(E\otimes\mathcal{O}_{\mathbb{P}^{2}}(k_{E}))\in\\{-1,0\\}$. Namely $k_{E}=\begin{cases}-\frac{c_{1}(E)}{2},&\text{if $c_{1}(E)$ even}\\\ -\frac{c_{1}(E)+1}{2},&\text{if $c_{1}(E)$ odd.}\end{cases}$ Since $S(E)=S(E\otimes L)$ for any line bundle $L$ on $\mathbb{P}^{2}$, in the remainder of this section we assume that $E$ has degree $c_{1}\in\\{-1,0\\}$ and second Chern class $c_{2}$. Furthermore, by stability we mean stability with respect to $\mathcal{O}_{\mathbb{P}^{2}}(1)$. In fact, as $Pic\,(\mathbb{P}^{2})\cong\mathbb{Z}$, there is a unique notion of stability on $\mathbb{P}^{2}$. Let $F$ be a vector bundle on $\mathbb{P}^{2}$; by abuse of notation we will use $c_{1}(F)$ to denote the degree of $F$ with respect to $\mathcal{O}_{\mathbb{P}^{2}}(1)$ and we write $S(E)$ to denote the Segre invariant of the vector bundle $E$ of rank $2$ on $\mathbb{P}^{2}$. The following theorem is the main result of this paper. It gives necessary and sufficient conditions for the existence of a stable vector bundle $E$ of rank $2$, degree $c_{1}=0$ and $S(E)=2k$ (resp. $c_{1}=-1$ and $S(E)=-1+2k$).
###### Theorem 3.1. 1. (1) Let $c_{2}\geq 2$ and $k\in\mathbb{N}$. Then a vector bundle $E\in M_{\mathbb{P}^{2}}(2,0,c_{2})$ with $S(E)=2k$ exists if and only if $k^{2}+k\leq c_{2}$. Furthermore, $E$ fits in an exact sequence: $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\longrightarrow 0,$ with $Z\subset\mathbb{P}^{2}$ of codimension $2$ and $\mathcal{O}_{\mathbb{P}^{2}}(-k)\subset E$ maximal. 2. (2) Let $c_{2}\geq 1$ and $k\in\mathbb{N}$. Then a vector bundle $E\in M_{\mathbb{P}^{2}}(2,-1,c_{2})$ with $S(E)=2k-1$ exists if and only if $k^{2}\leq c_{2}$. Furthermore, $E$ fits in an exact sequence: $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k-1)\otimes I_{Z}\longrightarrow 0,$ with $Z\subset\mathbb{P}^{2}$ of codimension $2$ and $\mathcal{O}_{\mathbb{P}^{2}}(-k)\subset E$ maximal. We need some auxiliary results: ###### Lemma 3.2. [16, Lemma 1.2.5] Let $E$ be a vector bundle on $\mathbb{P}^{2}$ of rank 2 and first Chern class $c_{1}=0$ or $c_{1}=-1$. Then $E$ is stable if and only if $h^{0}(E)=0$. Serre's construction provides a method for constructing rank two vector bundles on a surface $X$ (see [13, Chapter 5] for more details): ###### Theorem 3.3. [13, Theorem 5.1.] Let $Z\subset X$ be a local complete intersection of codimension two in the projective non-singular surface $X$, and let $L$ and $M$ be line bundles on $X$. Then there exists an extension $0\longrightarrow L\longrightarrow E\longrightarrow M\otimes I_{Z}\longrightarrow 0$ such that $E$ is locally free if and only if the pair $(L^{-1}\otimes M\otimes\omega_{X},Z)$ satisfies the Cayley-Bacharach property: $\displaystyle(CB)\,\,\,$ if $\tilde{Z}\subset Z$ is a subscheme with $l({\tilde{Z}})=l(Z)-1$ and $\displaystyle\text{$s\in H^{0}(L^{-1}\otimes M\otimes\omega_{X})$ with $s|_{\tilde{Z}}=0$, then $s|_{Z}=0$}.$ ###### Lemma 3.4. Let $Z\subset\mathbb{P}^{2}$ be a zero-cycle in $\mathbb{P}^{2}$ such that $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(d)\otimes I_{Z})=0$. Consider a point $p\in Supp(Z)$ and write $Z={\tilde{Z}}+p$. Then, $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(d-2)\otimes I_{\tilde{Z}})=0.$ In particular, $(\mathcal{O}_{\mathbb{P}^{2}}(d-2),Z)$ satisfies (CB). ###### Proof. Suppose that $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(d-2)\otimes I_{\tilde{Z}})\neq 0$ and let $C_{1}$ be one of its elements different from zero. Consider the map $\displaystyle H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2)\otimes I_{p})$ $\displaystyle\rightarrow$ $\displaystyle H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(d)\otimes I_{Z}),$ $\displaystyle C$ $\displaystyle\mapsto$ $\displaystyle CC_{1}$ which is linear and injective. Since $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2)\otimes I_{p})=5$ we obtain a contradiction. ∎ ###### Proof. Proof of Theorem 3.1 1. (1) Assume $k^{2}+k\leq c_{2}$. Then a zero cycle $Z$, locally a complete intersection and of length $l(Z)=c_{2}+k^{2}$, exists such that $Z$ is not contained in any curve of degree $2k-1$; indeed, $l(Z)=c_{2}+k^{2}\geq k(2k+1)=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1))$ precisely when $k^{2}+k\leq c_{2}$, so a general choice of $l(Z)$ points works. By Lemma 3.4, the pair $(\mathcal{O}_{\mathbb{P}^{2}}(2k-3),Z)$ satisfies the Cayley-Bacharach property. Therefore (see Theorem 3.3), an extension (3.1) $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\longrightarrow 0$ exists where $E$ is locally free and has Chern classes $c_{1}(E)=0$ and $c_{2}(E)=c_{2}$.
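As a quick consistency check, the Chern class formulas recorded after Lemma 2.3 give $c_{1}(E)=-k+k=0$ and $c_{2}(E)=(-k)\cdot k+l(Z)=-k^{2}+(c_{2}+k^{2})=c_{2}$, as claimed.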
Moreover, since $Z$ is not contained in any curve of degree $2k-1$ it follows that $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z})=h^{0}(E)=0$. Therefore, by Lemma 3.2 the vector bundle $E$ is stable. Finally, we prove that $\mathcal{O}_{\mathbb{P}^{2}}(-k)$ is maximal. Let $\mathcal{O}_{\mathbb{P}^{2}}(-l)$ be a line bundle with $l<k$; twisting the exact sequence (3.1) by $\mathcal{O}_{\mathbb{P}^{2}}(l)$ we have $0\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k+l)\rightarrow E(l)\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(k+l)\otimes I_{Z}\rightarrow 0.$ Since $l<k$ it follows that: $h^{0}(E(l))=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(k+l)\otimes I_{Z})\leq h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z})=0.$ This implies that $\mathcal{O}_{\mathbb{P}^{2}}(-l)$ is not a subbundle of $E$ and thus $\mathcal{O}_{\mathbb{P}^{2}}(-k)$ is maximal. Conversely, assume that a rank $2$ bundle with Chern classes $c_{1}(E)=0$, $c_{2}(E)=c_{2}<k^{2}+k$ and $S(E)=2k$ exists. Therefore, we have an exact sequence (3.2) $0\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\rightarrow E\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\rightarrow 0,$ where $Z\subset\mathbb{P}^{2}$ is a local complete intersection of codimension $2$ with length $l(Z)=c_{2}+k^{2}$ and $\mathcal{O}_{\mathbb{P}^{2}}(-k)$ is maximal. Hence, $l(Z)=c_{2}+k^{2}<2k^{2}+k=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1))$ and $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z})\neq 0.$ From the exact sequence (3.2) we have: $0\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(-1)\rightarrow E(k-1)\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z}\rightarrow 0,$ which induces the long exact sequence: $0\rightarrow H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(-1))\rightarrow H^{0}(E(k-1))\rightarrow H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z})\rightarrow 0.$ Thus, $h^{0}(E(k-1))=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z})\neq 0,$ but this implies that $\mathcal{O}_{\mathbb{P}^{2}}(-k+1)$ is a subbundle of $E$, contradicting the maximality of $\mathcal{O}_{\mathbb{P}^{2}}(-k)$. 2. (2) The proof is quite analogous to the proof of $(1)$: take $Z\subset\mathbb{P}^{2}$ a local complete intersection of codimension 2 with length $l(Z)=c_{2}+k^{2}-k$ such that $Z$ is not contained in any curve of degree $2k-2$ with $k\neq 1$, noting that such $Z$ exists because $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-2))=2k^{2}-k\leq l(Z)=c_{2}+k^{2}-k$, since $k^{2}\leq c_{2}$. If $k=1$ the pair $(\mathcal{O}(-2),Z)$ satisfies Cayley-Bacharach and we conclude as in $(1)$. For the converse, we can proceed analogously to the proof of $(1)$ by noting that $l(Z)<h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-2))$ and in consequence $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-2)\otimes I_{Z})\neq 0$. ∎ ###### Corollary 3.5. 1. (1) Let $r,c_{2}$ be integer numbers and $k\in\mathbb{N}$ such that $c_{2}\geq r^{2}+2$. Then a vector bundle $E\in M_{\mathbb{P}^{2}}(2,2r,c_{2})$ with $S(E)=2k$ exists if and only if $k^{2}+k+r^{2}\leq c_{2}$. Furthermore, $E$ fits in an exact sequence $\displaystyle 0\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(r-k)\rightarrow E\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(r+k)\otimes I_{Z}\rightarrow 0,$ with $Z\subset\mathbb{P}^{2}$ of codimension $2$. 2. (2) Let $r,c_{2}$ be integer numbers and $k\in\mathbb{N}$ such that $c_{2}\geq r^{2}-r+1$. Then a vector bundle $E\in M_{\mathbb{P}^{2}}(2,2r-1,c_{2})$ with $S(E)=2k-1$ exists if and only if $k^{2}+r^{2}-r\leq c_{2}$. Furthermore, $E$ fits in an exact sequence $\displaystyle 0\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(r-k)\rightarrow E\rightarrow\mathcal{O}_{\mathbb{P}^{2}}(r+k-1)\otimes I_{Z}\rightarrow 0,$ with $Z\subset\mathbb{P}^{2}$ of codimension $2$. ###### Proof.
The proof follows from Theorem 3.1, the fact $S(E(-r))=S(E)$, and the formulas $\displaystyle c_{1}(E(-r))$ $\displaystyle=$ $\displaystyle c_{1}(E)-2r,$ $\displaystyle c_{2}(E(-r))$ $\displaystyle=$ $\displaystyle c_{2}(E)-rc_{1}(E)+r^{2}.$ ∎ ## 4\. A Stratification of the moduli space $M_{\mathbb{P}^{2}}(2,c_{1},c_{2})$ In this section we use the Segre invariant to induce a stratification of the moduli space $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ of stable vector bundles of rank $2$ and Chern classes $c_{1}$ and $c_{2}$ on $\mathbb{P}^{2}$. If $c_{1}=0$ and $c_{2}$ is odd (resp. $c_{1}=-1$ and $c_{2}$ even) Le Potier (see for instance [10, Theorem 14.6.2]) has shown that there exists a universal family $\mathcal{E}$ parameterized by $M_{\mathbb{P}^{2}}(2;0,c_{2})$ (resp. $M_{\mathbb{P}^{2}}(2;-1,c_{2})$). If $c_{1}=0$ and $c_{2}$ is even (resp. $c_{1}=-1$ and $c_{2}$ odd) working locally in the étale topology we can assume that there is a family $\mathcal{E}$ parameterized by $M_{\mathbb{P}^{2}}(2;0,c_{2})$ (resp. $M_{\mathbb{P}^{2}}(2;-1,c_{2})$). Let $\mathcal{E}$ be a family of rank $2$ vector bundles on $\mathbb{P}^{2}$ parameterized by $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$. By Lemma 2.6, the function $S:M_{\mathbb{P}^{2}}(2;c_{1},c_{2})\longrightarrow\mathbb{Z}$ induces a stratification of $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ into locally closed subsets $M_{\mathbb{P}^{2}}(2;c_{1},c_{2};s):=\\{E\in M_{\mathbb{P}^{2}}(2;c_{1},c_{2}):S(E)=s\\}$ according to the value of $s$. Without loss of generality we can assume that if $c_{1}=0$ (resp. $c_{1}=-1$) then $s=2k$ for some $k$, $0<k^{2}+k\leq c_{2}$ (resp. $s=-1+2k$ for some $k$, $k^{2}\leq c_{2}$). ###### Theorem 4.1. 1. (1) Let $c_{2}\geq 2$, $k\in\mathbb{N}$ such that $k^{2}+k\leq c_{2}$. Then $M_{\mathbb{P}^{2}}(2;0,c_{2};2k)$ is an irreducible variety of dimension: $\begin{cases}3c_{2}+k^{2}+3k-2,&\text{if $c_{2}>k^{2}+3k+1$}\\\ 4c_{2}-3,&\text{if $c_{2}\leq k^{2}+3k+1$.}\\\ \end{cases}$ 2. (2) Let $c_{2}\geq 1$, $k\in\mathbb{N}$ such that $k^{2}\leq c_{2}$. Then $M_{\mathbb{P}^{2}}(2;-1,c_{2};-1+2k)$ is an irreducible variety of dimension $\begin{cases}3c_{2}+k^{2}+2k-4,&\text{if $c_{2}>k^{2}+2k$}\\\ 4c_{2}-4,&\text{if $c_{2}\leq k^{2}+2k$.}\\\ \end{cases}$ ###### Proof. We only prove $(1)$, the proof of $(2)$ being quite analogous. Let $c_{2}\geq 2$, $k\in\mathbb{N}$ such that $k^{2}+k\leq c_{2}$. Let $Hilb^{l}(\mathbb{P}^{2})$ be the Hilbert scheme of zero-dimensional subschemes of length $l=c_{2}+k^{2}$ on $\mathbb{P}^{2}$ and let $\mathcal{I}_{\mathcal{Z}_{l}}$ be the ideal sheaf of the universal subscheme $\mathcal{Z}_{l}$ in $\mathbb{P}^{2}\times Hilb^{l}(\mathbb{P}^{2})$. Let $\mathcal{O}_{\mathbb{P}^{2}}(-k),\mathcal{O}_{\mathbb{P}^{2}}(k)$ be line bundles on $\mathbb{P}^{2}$. Let $p_{1}$, $p_{2}$ be the projections of $\mathbb{P}^{2}\times Hilb^{l}(\mathbb{P}^{2})$ on $\mathbb{P}^{2}$ and $Hilb^{l}(\mathbb{P}^{2})$ respectively. Consider on $\mathbb{P}^{2}\times Hilb^{l}(\mathbb{P}^{2})$ the sheaf $\mathcal{H}om(p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes\mathcal{I}_{\mathcal{Z}_{l}},p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(-k))$.
Taking higher direct image we obtain on $Hilb^{l}(\mathbb{P}^{2})$ the sheaf: $R^{1}p_{2*}\mathcal{H}om(p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes\mathcal{I}_{\mathcal{Z}_{l}},p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(-k)).$ From the semicontinuity Theorem [12, Theorem 12.8] we have that the set: $H:=\\{Z\in Hilb^{l}(\mathbb{P}^{2}):h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\otimes I_{Z})=0\\}$ is an open set of $Hilb^{l}(\mathbb{P}^{2})$ which is non-empty by Theorem 3.1. Restricting the sheaf $R^{1}p_{2*}\mathcal{H}om(p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes\mathcal{I}_{\mathcal{Z}_{l}},p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(-k))$ to $H$ we have that it is locally free because $H^{0}(\mathcal{H}om(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes\mathcal{I}_{\mathcal{Z}_{l}},\mathcal{O}_{\mathbb{P}^{2}}(-k)))\cong Hom(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes\mathcal{I}_{Z},\mathcal{O}_{\mathbb{P}^{2}}(-k))=0,$ and $\dim\,Ext^{2}(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(-k))=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\otimes I_{Z})=0$ for any $Z\in H$. Hence, the fiber over $Z\in H$ is $Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(-k))$. Consider on $H$ the projective bundle: $\mathbb{P}\Gamma:=\mathbb{P}R^{1}p_{2*}\mathcal{H}om(p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes\mathcal{I}_{\mathcal{Z}_{l}},p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(-k)).$ By [7, Lemma 3.2] there exists an exact sequence: (4.1) $0\to(id\times\pi)^{*}p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(-k)\otimes\mathcal{O}_{\mathbb{P}^{2}\times\mathbb{P}\Gamma}(1)\to\mathcal{E}\to(id\times\pi)^{*}(p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes\mathcal{I}_{\mathcal{Z}_{l}})\to 0$ on $\mathbb{P}^{2}\times\mathbb{P}\Gamma$ such that for each $p\in\mathbb{P}\Gamma$ the restriction $\mathcal{E}_{|_{p}}$ of $\mathcal{E}$ to $\mathbb{P}^{2}\times\\{p\\}$ is isomorphic to an extension $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\longrightarrow 0.$ Define the set $U:=\\{p\in\mathbb{P}\Gamma:\text{$\mathcal{E}_{|_{p}}$ is stable and $S(\mathcal{E}_{p})=2k$}\\}.$ From Theorem 3.1, the lower semicontinuity of the function $S$ and the fact that stability is an open condition we conclude that the set $U$ is non-empty and open in $\mathbb{P}\Gamma$. Restricting the sequence (4.1) to $\mathbb{P}^{2}\times U$ we have, from the universal property of the moduli space $M_{\mathbb{P}^{2}}(2;0,c_{2})$, a morphism $f_{s}:U\longrightarrow M_{\mathbb{P}^{2}}(2;0,c_{2})$ where $f_{s}(U)$ is precisely the stratum $M_{\mathbb{P}^{2}}(2;0,c_{2};2k)$. Hence, $M_{\mathbb{P}^{2}}(2;0,c_{2};2k)$, being the image of an irreducible variety under a morphism, is irreducible.
We can now determine the dimension of $M_{\mathbb{P}^{2}}(2;0,c_{2};2k)$: we have $\dim\,M_{\mathbb{P}^{2}}(2;0,c_{2};2k)=\dim\,U-\dim f_{s}^{-1}(E)$ for general $E$, and for such $E$ the fiber $f_{s}^{-1}(E)$ is identified with an open subset of $\mathbb{P}H^{0}(E(k))$, which parameterizes the sub-line bundles $\mathcal{O}_{\mathbb{P}^{2}}(-k)\subset E$ up to scalar. This dimension is equal to: $\dim\,M_{\mathbb{P}^{2}}(2;0,c_{2};2k)=\dim\,H+\dim\,Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(-k))-\dim\,\mathbb{P}H^{0}(E(k))-1.$ Since $H$ is an open set of $Hilb^{l}(\mathbb{P}^{2})$ we have: (4.2) $\displaystyle\dim\,M_{\mathbb{P}^{2}}(2;0,c_{2};2k)=$ $\displaystyle\dim\,Hilb^{l}(\mathbb{P}^{2})+\dim\,Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(-k))$ $\displaystyle-\dim\mathbb{P}H^{0}(E(k))-1.$ We now compute the values of $\dim\,Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(-k))\text{ and }\dim\,\mathbb{P}H^{0}(E(k)),$ where $Z\in H$. Note that by Serre duality $Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(-k))$ is canonically dual to $Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(-k),\mathcal{O}_{\mathbb{P}^{2}}(k-3)\otimes I_{Z})$. Since $\mathcal{O}_{\mathbb{P}^{2}}(-k)$ is locally free, then $Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(-k),\mathcal{O}_{\mathbb{P}^{2}}(k-3)\otimes I_{Z})\cong H^{1}(\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\otimes I_{Z}).$ By the exact sequence $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\otimes I_{Z}\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\longrightarrow\mathcal{O}_{Z}\longrightarrow 0$ we have that $h^{1}(\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\otimes I_{Z})=-h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-3))+h^{0}(\mathcal{O}_{Z}),$ because $Z\in H$, and therefore: (4.3) $h^{1}(\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\otimes I_{Z})=c_{2}-k^{2}+3k-1.$ Now, we compute $h^{0}(E(k))$. Since $E\in M_{\mathbb{P}^{2}}(2;0,c_{2};2k)$ it fits in an extension (4.4) $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\longrightarrow 0$ from which we get: $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}\longrightarrow E(k)\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(2k)\otimes I_{Z}\longrightarrow 0.$ Therefore, $h^{0}(E(k))=1+h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k)\otimes I_{Z}).$ It follows that for $Z\in H$ general: (4.5) $h^{0}(E(k))=\begin{cases}1,&\text{if $c_{2}>k^{2}+3k+1$}\\\ k^{2}+3k-c_{2}+2,&\text{if $c_{2}\leq k^{2}+3k+1.$}\end{cases}$ Substituting (4.3) and (4.5) into (4.2) we have $\displaystyle\dim\,M_{\mathbb{P}^{2}}(2;0,c_{2};2k)=\begin{cases}3c_{2}+k^{2}+3k-2,&\text{if $c_{2}>k^{2}+3k+1$}\\\ 4c_{2}-3,&\text{if $c_{2}\leq k^{2}+3k+1$}\\\ \end{cases}$ which proves the theorem. ∎ ###### Corollary 4.2. 1. (1) Let $c_{2},r\in\mathbb{Z}$ and $k\in\mathbb{N}$ such that $r^{2}+k^{2}+k\leq c_{2}$. Then $M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$ is an irreducible variety of dimension $\begin{cases}3c_{2}-3r^{2}+k^{2}+3k-2,&\text{if }c_{2}>r^{2}+k^{2}+3k+1\\\ 4c_{2}-4r^{2}-3,&\text{if }c_{2}\leq r^{2}+k^{2}+3k+1.\\\ \end{cases}$ 2. (2) Let $c_{2}$, $k\in\mathbb{N}$ and $r\in\mathbb{Z}$ such that $r^{2}-r+k^{2}\leq c_{2}$. Then $M_{\mathbb{P}^{2}}(2;2r-1,c_{2};2k-1)$ is an irreducible variety of dimension $\begin{cases}3c_{2}+3r-3r^{2}+k^{2}+2k-4,&\text{if $c_{2}>r^{2}-r+k^{2}+2k$}\\\ 4c_{2}+4r-4r^{2}-4,&\text{if $c_{2}\leq r^{2}-r+k^{2}+2k$.}\\\ \end{cases}$ ###### Proof.
For $(1)$, since $S(E)=S(E\otimes\mathcal{O}_{\mathbb{P}^{2}}(-r))$, it follows that the map $\displaystyle M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$ $\displaystyle\rightarrow$ $\displaystyle M_{\mathbb{P}^{2}}(2;0,c_{2}-r^{2};2k)$ $\displaystyle E$ $\displaystyle\mapsto$ $\displaystyle E\otimes\mathcal{O}_{\mathbb{P}^{2}}(-r)$ is an isomorphism. Part (2) follows analogously. ∎ ###### Corollary 4.3. 1. (1) Let $c_{2}\geq r^{2}+2$ and $k\in\mathbb{N}$ the only integer such that $r^{2}+k^{2}+k\leq c_{2}\leq r^{2}+k^{2}+3k+1$. Then the stratum $M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$ has the same dimension as the moduli space $M_{\mathbb{P}^{2}}(2;2r,c_{2})$. 2. (2) Let $c_{2}\geq r^{2}-r+1$ and $k\in\mathbb{N}$ the only integer such that $r^{2}-r+k^{2}\leq c_{2}\leq r^{2}-r+k^{2}+2k$. Then the stratum $M_{\mathbb{P}^{2}}(2;2r-1,c_{2};-1+2k)$ has the same dimension as the moduli space $M_{\mathbb{P}^{2}}(2;2r-1,c_{2})$. ###### Remark 4.4. 1. (1) The uniqueness of the number $k$ satisfying the above inequalities follows by elementary considerations. It is, indeed, equal to the largest $k$ such that $r^{2}+k^{2}+k\leq c_{2}$ in (1) and such that $r^{2}-r+k^{2}\leq c_{2}$ in part (2). 2. (2) It follows by semicontinuity that the stratum with maximal dimension is an open set in the moduli space of stable vector bundles. ## 5\. Applications to Brill-Noether Theory In this section, we use the previous results to study the non-emptiness of some Brill-Noether loci in the moduli space $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ of stable vector bundles of rank $2$ and fixed Chern classes $c_{1}$ and $c_{2}$ on $\mathbb{P}^{2}$. For any $t\geq 0$ the subvariety of $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ defined as $W^{t}(2;c_{1},c_{2}):=\\{E\in M_{\mathbb{P}^{2}}(2;c_{1},c_{2}):h^{0}(E)\geq t\\}$ is called the $t$-Brill-Noether locus of the moduli space $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ (or simply Brill-Noether locus if there is no confusion). The following theorem yields information about the variety $W^{t}(2;c_{1},c_{2})$; in particular it shows that $W^{t}(2;c_{1},c_{2})$ is a determinantal variety and gives a formula for the expected dimension. It was proved by Costa and Miró-Roig in [3] for every smooth projective variety of dimension $n$. ###### Theorem 5.1. [3, Corollary 2.8] Let $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ be the moduli space of stable vector bundles of rank $2$ on $\mathbb{P}^{2}$ with fixed Chern classes $c_{1}$, $c_{2}$. Then, for any $t\geq 0$, there exists a determinantal variety $W^{t}(2;c_{1},c_{2}):=\\{E\in M_{\mathbb{P}^{2}}(2;c_{1},c_{2}):h^{0}(E)\geq t\\}.$ Moreover, each non-empty irreducible component of $W^{t}(2;c_{1},c_{2})$ has dimension greater than or equal to the Brill-Noether number on $\mathbb{P}^{2}$ $\rho^{t}(2;c_{1},c_{2}):=4c_{2}-c_{1}^{2}-3-t\left(t-\frac{c_{1}^{2}}{2}-\frac{3c_{1}}{2}+c_{2}-2\right)$ and $W^{t+1}(2;c_{1},c_{2})\subset Sing(W^{t}(2;c_{1},c_{2}))$ whenever $W^{t}(2;c_{1},c_{2})\neq M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$. The following result allows us to establish a relationship between the Brill-Noether locus $W^{t}(2;c_{1},c_{2})$ and the different strata $M_{\mathbb{P}^{2}}(2;c_{1},c_{2};s)$. ###### Theorem 5.2. Let $r,k,c_{2}$ be integers. Assume $t=\frac{(r-k+2)(r-k+1)}{2}.$ 1. (1) Let $E\in M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$. Then $E\notin W^{1}(2;2r,c_{2})$ if $r<k$ and $E\in W^{t}(2;2r,c_{2})$ if $r\geq k$. Moreover, the Brill-Noether number $\rho^{t}(2,2r,c_{2})<\dim M_{\mathbb{P}^{2}}(2,2r,c_{2};2k)$ for $c_{2}\gg 0.$ 2. (2) Let $E\in M_{\mathbb{P}^{2}}(2;2r-1,c_{2};-1+2k)$.
Then $E\notin W^{1}(2;2r-1,c_{2})$ if $r<k$ and $E\in W^{t}(2;2r-1,c_{2})$ if $r\geq k$. Moreover, the Brill-Noether number $\rho^{t}(2,2r-1,c_{2})<\dim M_{\mathbb{P}^{2}}(2,2r-1,c_{2};2k-1)$ for $c_{2}\gg 0.$ ###### Proof. 1. (1) Since $E\in M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$, it follows that $E(-r)\in M_{\mathbb{P}^{2}}(2;0,c_{2}-r^{2};2k)$ because $S(E(-r))=S(E).$ By Theorem 3.1 there exists an extension (5.1) $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E(-r)\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\longrightarrow 0,$ where $Z\subset\mathbb{P}^{2}$ of codimension $2$ has length $l(Z)=k^{2}+c_{2}-r^{2}$ and $\mathcal{O}_{\mathbb{P}^{2}}(-k)$ is maximal. Note that $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z})=0$ since $\mathcal{O}_{\mathbb{P}^{2}}(-k)$ is maximal. From the extension (5.1) we get $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(r-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(r+k)\otimes I_{Z}\longrightarrow 0.$ In consequence, $E\notin W^{1}(2;2r,c_{2})$ if $r<k$, and $h^{0}(E)\geq h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r-k))=t$ if $r\geq k.$ The Brill-Noether number is given by $\displaystyle\rho^{t}(2;2r,c_{2})=4c_{2}-4r^{2}-3-t(t+c_{2}-2r^{2}-3r-2).$ Therefore, for $c_{2}\gg 0$ we have: (5.2) $\displaystyle\rho^{t}(2,2r,c_{2})<3c_{2}-3r^{2}+k^{2}+3k-2=\dim M_{\mathbb{P}^{2}}(2,2r,c_{2};2k).$ 2. (2) The proof proceeds analogously to the proof of $(1)$. ∎ Suppose that $c_{1}>0$ and the Euler-Poincaré characteristic $\chi=0$, i.e. $c_{2}=2+\frac{c_{1}^{2}+3c_{1}}{2}$. It is known that the moduli space $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ satisfies Weak Brill-Noether, that is, there exists $E\in M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ such that $h^{i}(E)=0$ for any $i$ (see [10], Theorem 18.1.1). Note that, by semicontinuity, if $E$ is any sheaf with no cohomology then the cohomology also vanishes for any general sheaf in $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ (cf. [7], [4]). Using the Segre invariant, we can give a different proof that $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ satisfies Weak Brill-Noether. The advantage in using Segre's invariant lies in the fact that the open set satisfying Weak Brill-Noether can be explicitly described as the open stratum of $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$. ###### Corollary 5.3. Suppose that $c_{1}>0$ and $c_{2}=2+\frac{c_{1}^{2}+3c_{1}}{2}$. Then the moduli space $M_{\mathbb{P}^{2}}(2,c_{1},c_{2})$ satisfies Weak Brill-Noether. ###### Proof. Since $\chi=0$ and $c_{1}>0$, it follows that $h^{2}(E)=0$ for any stable vector bundle with $c_{1}(E)=c_{1}$. Assume that $c_{1}=2r$ (resp. $c_{1}=2r-1$) with $r\in\mathbb{N}$. Since the Euler-Poincaré characteristic $\chi=0$, it follows that $c_{2}=2r^{2}+3r+2$ (resp. $c_{2}=2r^{2}+r+1$). Set $k=r+1$; note that $k$ is the largest integer such that $r^{2}+k^{2}+k\leq c_{2}$ (resp. $k^{2}+r^{2}-r\leq c_{2}$), then by Corollary 4.3 we have that the stratum $M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$ (resp. $M_{\mathbb{P}^{2}}(2;2r-1,c_{2};2k-1)$) is open and by Theorem 5.2 $h^{0}(E)=0$ for any $E\in M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$ (resp. $E\in M_{\mathbb{P}^{2}}(2;2r-1,c_{2};2k-1)$). ∎ ###### Remark 5.4. It is also known that under the conditions of Corollary 5.3 the complement of the maximum stratum is a reduced hypersurface ([6], Theorem 2). For larger values of $t$ further information can be obtained. For this we use the existence of special configurations of points: ###### Theorem 5.5. 1.
(1) Let $r,c_{2},k$ be integer numbers satisfying $r^{2}+2\leq c_{2}$ and $k<r$. Then, a vector bundle $E\in M_{\mathbb{P}^{2}}(2;2r,c_{2};2k)$ exists such that $h^{0}(E)\geq(r-k)^{2}+4(r-k)+3$. 2. (2) Let $r,c_{2},k$ be integer numbers satisfying $r^{2}-r+1\leq c_{2}$. Then, a vector bundle $E\in M_{\mathbb{P}^{2}}(2;2r-1,c_{2};2k-1)$ exists such that $h^{0}(E)\geq t$, where $t=\begin{cases}(r-k)^{2}+4(r-k)+3&\text{if $k\neq 1$},\\\ r^{2}+r-1&\text{if $k=1$.}\end{cases}$ ###### Proof. 1. (1) Let $Z^{\prime}$ be a reduced zero-cycle of length $l(Z^{\prime})=\dim\mathbb{P}H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1))$ such that $H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z^{\prime}})=\mathbb{C}.C$ for some curve $C$ of degree $2k-1$. Complete $Z^{\prime}$ to a zero cycle $\tilde{Z}=Z^{\prime}+Z^{\prime\prime}$ such that $Supp(Z^{\prime\prime})\subset C$ and $l(\tilde{Z})=c_{2}+k^{2}-r^{2}-1$, and set $Z=\tilde{Z}+p$ for $p$ some point not contained in $C$. Then it follows from an argument similar to the one used in the proof of Lemma 3.4 that the pair $(\mathcal{O}_{\mathbb{P}^{2}}(2k-3),Z)$ satisfies the Cayley-Bacharach property (see Theorem 3.3). Indeed, if $Z=Z_{0}+p_{0}$ and $0\neq C_{0}\in H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-3)\otimes I_{Z_{0}})$ exists, then the map: $H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2)\otimes I_{p_{0}})\to H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z}),$ $C\longmapsto CC_{0},$ must be injective. Therefore an extension: (5.3) $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-k)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(k)\otimes I_{Z}\longrightarrow 0$ exists with $E\in M_{\mathbb{P}^{2}}(2,0,c_{2}-r^{2})$. Since $Z-p\subset C$ we see that any curve of degree $r-k+1$ passing through $p$ gives rise to an element of $H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r+k)\otimes I_{Z})$. Thus, $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r+k)\otimes I_{Z})\geq h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r-k+1))-1=\frac{(r-k)^{2}+5(r-k)}{2}+2.$ In this way we obtain that $E(r)\in M_{\mathbb{P}^{2}}(2,2r,c_{2})$ and from the exact sequence (5.3): $h^{0}(E(r))=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r-k))+h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r+k)\otimes I_{Z})\geq(r-k)^{2}+4(r-k)+3,$ as desired. 2. (2) The proof for the case $k\neq 1$ follows analogously to the proof of $(1)$. Take $C\subset\mathbb{P}^{2}$ to be an irreducible curve of degree $2k-2$ and let $Z\subset\mathbb{P}^{2}$ be a set of distinct points of length $l(Z)=c_{2}+k^{2}-k-r^{2}+r$ such that $Z-\\{p\\}$ is contained in $C$ for some $p\in Z$ but $Z$ is not contained in $C$. Note that the pair $(\mathcal{O}_{\mathbb{P}^{2}}(2k-4),Z)$ satisfies the Cayley-Bacharach property and $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r+k-1)\otimes I_{Z})\geq h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r-k+1))-1=\frac{(r-k)^{2}+5(r-k)}{2}+2.$ From the exact sequence $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(r-k)\longrightarrow E(r)\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(r+k-1)\otimes I_{Z}\longrightarrow 0,$ we have: $h^{0}(E(r))=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r-k))+h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r+k-1)\otimes I_{Z})\geq(r-k)^{2}+4(r-k)+3.$ We now proceed to prove the case $k=1$. Let $L\subset\mathbb{P}^{2}$ be a line and let $Z\subset\mathbb{P}^{2}$ be a set of distinct points of length $l(Z)=c_{2}-r^{2}+r$ such that $Z-\\{p\\}$ is contained in $L$ for some $p\in Z$ but $Z$ is not contained in $L$.
Note that the pair $(\mathcal{O}_{\mathbb{P}^{2}}(-2),Z)$ satisfies the Cayley-Bacharach property, thus, there exists an extension (5.4) $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(-1)\longrightarrow E\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}\otimes I_{Z}\longrightarrow 0,$ with $c_{1}(E)=-1$ and $c_{2}(E)=c_{2}-r^{2}+r$. Moreover, $E$ is a stable vector bundle because $Z\neq\emptyset$. Note that $h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r)\otimes I_{Z})\geq h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r-1))-1=\frac{r(r+1)}{2}-1$ and $E(r)$ is a stable vector bundle with $c_{1}(E(r))=2r-1$ and $c_{2}(E(r))=c_{2}$. From the exact sequence (5.4) we get $0\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(r-1)\longrightarrow E(r)\longrightarrow\mathcal{O}_{\mathbb{P}^{2}}(r)\otimes I_{Z}\longrightarrow 0,$ and taking cohomology we have $h^{0}(E(r))=h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r-1))+h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(r)\otimes I_{Z})\geq r^{2}+r-1.$ ∎ Now that Theorem 5.5 has established the non-emptiness of some Brill-Noether loci, it is natural to search for a lower bound for their dimensions. ###### Theorem 5.6. 1. (1) Let $r,k,c_{2}\in\mathbb{N}$ such that $r\geq 2$, $3k^{2}-4k+r^{2}+2<c_{2}$ and $k<r$. Let $t=(r-k)^{2}+4(r-k)+3$. Then, $\dim\,W^{t}(2;2r,c_{2})\geq\begin{cases}2c_{2}+2k^{2}-2r^{2}+4k-2,&\text{if $c_{2}>k^{2}+3k+r^{2}+1$}\\\ k^{2}+3c_{2}+k-r^{2}-3,&\text{if $c_{2}\leq k^{2}+3k+r^{2}+1$.}\end{cases}$ 2. (2) Let $r,k,c_{2}\in\mathbb{N}$ such that $r\geq 2$ and $3k^{2}-7k+r^{2}-r+5<c_{2}$ and $k<r$, and set $t=\begin{cases}(r-k)^{2}+4(r-k)+3,&\text{if $k\neq 1$}\\\ r^{2}+r-1,&\text{if $k=1$}.\end{cases}$ Then, $\dim\,W^{t}(2;2r-1,c_{2})\geq\begin{cases}2c_{2}+2k^{2}-2r^{2}+2k+2r-4,&\text{if }c_{2}>k^{2}-2k+r^{2}-r\\\ 3c_{2}+k^{2}-3r^{2}+3r-4,&\text{if }c_{2}\leq k^{2}-2k+r^{2}-r,\end{cases}$ if $k\neq 1$, and $\dim\,W^{t}(2;2r-1,c_{2})\geq 3c_{2}-3r^{2}+3r+1,$ if $k=1$. ###### Proof. We only prove $(1)$; the proof of $(2)$ follows by similar arguments. Let $l=k^{2}+c_{2}-r^{2}$ and let $Hilb^{\tilde{l}}(\mathbb{P}^{2})$ be the Hilbert scheme of zero-dimensional subschemes of length $\tilde{l}=l-1$. Let $\mathcal{I}\subset\mathbb{P}H^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1))$ be the subset of irreducible curves of degree $2k-1$ and consider the variety: $I:=\\{(p,\tilde{Z},C)\in\mathbb{P}^{2}\times Hilb^{\tilde{l}}(\mathbb{P}^{2})\times\mathcal{I}:p\notin C\,,\tilde{Z}\subset C\\}.$ For every $C\in\mathcal{I}$ its fiber $p_{3}^{-1}(C)$ under the projection onto the third factor is the irreducible variety $(\mathbb{P}^{2}-C)\times S^{\tilde{l}}C$, which has the same dimension for every $C$. Let $I_{0}\subset I$ be an irreducible component such that $p_{3}:I_{0}\to\mathcal{I}$ is dominant and $\dim I_{0}=\dim I=k(2k+1)+l$. Let $p_{12}:I_{0}\to\mathbb{P}^{2}\times Hilb^{\tilde{l}}(\mathbb{P}^{2})$ be the projection onto the first two factors. Because of the choice of $l$ and $c_{2}$, we have that $\tilde{l}>(2k-1)^{2}$. Thus $p_{12}$ is injective and therefore it is a birational morphism. In this way, an open set $\mathcal{U}\subset p_{12}(I_{0})$ exists such that $\dim I_{0}=\dim\mathcal{U}$.
Consider the finite morphism $\displaystyle\phi:\mathbb{P}^{2}\times Hilb^{\tilde{l}}(\mathbb{P}^{2})$ $\displaystyle\longrightarrow Hilb^{l}(\mathbb{P}^{2})$ $\displaystyle(p,\tilde{Z})$ $\displaystyle\longmapsto p+\tilde{Z},$ and the set (5.5) $H\cap\phi(\mathcal{U}),$ where $H:=\\{Z\in Hilb^{l}(\mathbb{P}^{2}):h^{0}(\mathcal{O}_{\mathbb{P}^{2}}(2k-1)\otimes I_{Z})=0\\}.$ Note that by the proof of Theorem 3.1 the set (5.5) is non-empty; moreover $\phi(\mathcal{U})\subset\phi(p_{12}(I))\subset H$. Therefore, $H\cap\phi(\mathcal{U})=\phi(\mathcal{U})$. We can now proceed analogously to the proof of Theorem 4.1. Consider the projective bundle $\mathbb{P}\Gamma:=\mathbb{P}R^{1}p_{2*}\mathcal{H}om(p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(k+r)\otimes\mathcal{I}_{\mathcal{Z}_{l}},p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(r-k))$ over $\phi(\mathcal{U})$ and the exact sequence (5.6) $0\longrightarrow(id\times\pi)^{*}p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(r-k)\otimes\mathcal{O}_{\mathbb{P}^{2}\times\mathbb{P}\Gamma}(1)\longrightarrow\mathcal{E}\longrightarrow(id\times\pi)^{*}(p_{1}^{*}\mathcal{O}_{\mathbb{P}^{2}}(k+r)\otimes\mathcal{I}_{\mathcal{Z}_{l}})\longrightarrow 0$ on $\mathbb{P}^{2}\times\mathbb{P}\Gamma$. Define the set $U:=\\{p\in\mathbb{P}\Gamma:\text{$\mathcal{E}_{|_{p}}$ is stable and $S(\mathcal{E}_{p})=2k$}\\}.$ From Theorem 3.1, the lower semicontinuity of the function $S$ and stability being an open condition we conclude that the set $U$ is non-empty and open in $\mathbb{P}\Gamma$. Restricting the sequence (5.6) to $\mathbb{P}^{2}\times U$, from the universal property of the moduli space $M_{\mathbb{P}^{2}}(2;c_{1},c_{2})$ we have a morphism $f_{t}:U\longrightarrow M_{\mathbb{P}^{2}}(2;2r,c_{2}).$ Note that, by the proof of Theorem 5.5, the set $f_{t}(U)$ is contained in the Brill-Noether locus $W^{t}(2;2r,c_{2})$ where $t=(r-k)^{2}+4(r-k)+3$. Moreover, $f_{t}(U)\subseteq\overline{f_{t}(U)}\subseteq W^{t}(2;2r,c_{2}).$ We proceed now to determine the dimension of $\overline{f_{t}(U)}$ and a lower bound for the locus $W^{t}(2;2r,c_{2})$: $\dim\,W^{t}(2;2r,c_{2})\geq\dim\,\overline{f_{t}(U)}\geq\dim\,U-\dim f_{t}^{-1}(E)$ which is equivalent to $\displaystyle\dim\,\overline{f_{t}(U)}$ $\displaystyle\geq\dim\,\phi(\overline{p_{12}(I)})+\dim\,Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k+r)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(r-k))-\dim\,\mathbb{P}H^{0}(E(k-r))-1$ $\displaystyle=\dim\,\phi(p_{12}(I))+\dim\,Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k+r)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(r-k))-\dim\,\mathbb{P}H^{0}(E(k-r))-1$ $\displaystyle\geq\dim\,\phi(\mathcal{U})+\dim\,Ext^{1}(\mathcal{O}_{\mathbb{P}^{2}}(k+r)\otimes I_{Z},\mathcal{O}_{\mathbb{P}^{2}}(r-k))-\dim\,\mathbb{P}H^{0}(E(k-r))-1.$ From the above and the proof of Theorem 4.1 we conclude that $\dim\,\overline{f_{t}(U)}\geq\begin{cases}c_{2}+k^{2}-r^{2}+4k-2,&\text{if $c_{2}>k^{2}+r^{2}+3k+1$}\\\ 2c_{2}+k-2r^{2}-3,&\text{if $c_{2}\leq k^{2}+3k+r^{2}+1$.}\end{cases}$ as claimed. ∎ ## References * [1] W.P. Barth, K. Hulek, C.A.M. Peters, A. Van de Ven. — Compact complex surfaces. Second edition. Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer-Verlag, Berlin, 2004. xii+436 pp. * [2] L. Brambila-Paz, H. Lange. — A stratification of the moduli space of vector bundles on curves. Journal für die reine und angewandte Mathematik, 494, 173-187, (1998). * [3] L. Costa, R.M. Miró-Roig. — Brill-Noether theory for moduli spaces of sheaves on algebraic varieties. Forum Math. 22 (2010), no. 3, 411-432. * [4] I. Coskun, J.
Huizenga. — Weak Brill-Noether for rational surfaces. Local and global methods in algebraic geometry, Contemp. Math., 712, Amer. Math. Soc., Providence, RI, 2018. * [5] R. Friedman. — Algebraic surfaces and holomorphic vector bundles. Universitext. Springer-Verlag, New York, 1998. x+328 pp. * [6] L. Göttsche, A. Hirschowitz. — Weak Brill-Noether for vector bundles on the projective plane. Algebraic geometry (Catania, 1993/Barcelona, 1994), Lecture Notes in Pure and Appl. Math., 200, Dekker, New York, 1998. * [7] L. Göttsche. — Change of polarization and Hodge numbers of moduli spaces of torsion free sheaves on surfaces. Mathematische Zeitschrift, 223, 247-260, (1996). * [8] H. Lange. — Zur Klassifikation von Regelmannigfaltigkeiten. Mathematische Annalen, 262 (4), 447-459, (1983). * [9] H. Lange, M.S. Narasimhan. — Maximal subbundles of rank two vector bundles on curves. Mathematische Annalen, 266, 55-72, (1983). * [10] J. Le Potier. — Lectures on vector bundles. Translated by A. Maciocia. Cambridge Studies in Advanced Mathematics, 54. Cambridge University Press, Cambridge, 1997. viii+251 pp. * [11] R. Hartshorne. — Stable reflexive sheaves. Math. Ann. 254 (1980), no. 2, 121-176. * [12] R. Hartshorne. — Algebraic geometry. Graduate Texts in Mathematics, No. 52. Springer-Verlag, New York-Heidelberg, 1977. xvi+496 pp. * [13] D. Huybrechts, M. Lehn. — The geometry of moduli spaces of sheaves. Aspects of Mathematics, E31. Friedr. Vieweg & Sohn, Braunschweig, 1997. xiv+269 pp. * [14] M. Maruyama. — Openness of a family of torsion free sheaves. J. Math. Kyoto Univ. 16 (1976), no. 3, 627–637. * [15] M. Maruyama. — Stable vector bundles on an algebraic surface. Nagoya Math. J. Vol. 58 (1975), 25-68. * [16] C. Okonek, M. Schneider, H. Spindler. — Vector bundles on complex projective spaces. Progress in Mathematics, 3. Birkhäuser, Boston, Mass., 1980. vii+389 pp. * [17] B. Russo, M. Teixidor. — On a conjecture of Lange. Journal of Algebraic Geometry, 8, 483-496, (1999).
IMSc/2020/02/02 KOBE-TH-20-02 # Wilson Action for the $O(N)$ Model S. Dutta, B. Sathiapalan Institute of Mathematical Sciences CIT Campus, Tharamani Chennai 600113, India and Homi Bhabha National Institute Training School Complex, Anushakti Nagar Mumbai 400085, India and H. Sonoda Physics Department, Kobe University Kobe 657-8501, Japan ###### Abstract In this paper the fixed-point Wilson action for the critical $O(N)$ model in $D=4-\epsilon$ dimensions is written down in the $\epsilon$ expansion to order $\epsilon^{2}$. It is obtained by solving the fixed-point Polchinski Exact Renormalization Group equation (with anomalous dimension) in powers of $\epsilon$. This is an example of a theory that has scale and conformal invariance despite having a finite UV cutoff. The energy-momentum tensor for this theory is also constructed (at zero momentum) to order $\epsilon^{2}$. This is done by solving the Ward-Takahashi identity for the fixed point action. It is verified that the trace of the energy-momentum tensor is proportional to the violation of scale invariance as given by the exact RG, i.e., the $\beta$ function. The vanishing of the trace at the fixed point ensures conformal invariance. Some examples of calculations of correlation functions are also given. ###### Contents 1. 1 Introduction 2. 2 Background 1. 2.1 Exact Renormalization Group and Fixed Point equation 1. 2.1.1 Exact Renormalization group 2. 2.1.2 Polchinski’s ERG equation 3. 2.1.3 The limit $\Lambda\to 0+$ 4. 2.1.4 IR limit of a critical theory 5. 2.1.5 Anomalous dimension in ERG 6. 2.1.6 Dimensionless framework 7. 2.1.7 Fixed-point equation 2. 2.2 Energy Momentum Tensor: Scale Invariance and Conformal Invariance 1. 2.2.1 Energy Momentum Tensor in the Classical Theory 2. 2.2.2 Trace of the Energy Momentum Tensor in the Quantum Theory: Perturbative 3. 2.2.3 Energy Momentum Tensor in Exact RG 3. 3 Wilson-Fisher Fixed Point for the $O(N)$ Model 1. 3.1 Equations for the vertices 2. 3.2 Solving the Equations 1. 3.2.1 $\mathcal{O}(1)$: Retrieving Gaussian theory 2. 3.2.2 $\mathcal{O}(\epsilon)$: Fixed Point value of $m^{2}$ 3. 3.2.3 $\mathcal{O}(\epsilon^{2})$: Expression for the six-point vertex 4. 3.2.4 Fixed Point value of $\lambda$: Solution for $U_{4}$ at $\mathcal{O}(\epsilon)$ 3. 3.3 Determining Anomalous Dimension 4. 4 Correlation functions 1. 4.1 A more general equation 2. 4.2 Calculation of correlation functions 1. 4.2.1 Two-point function 2. 4.2.2 Four-point function 3. 4.2.3 Six-point function 5. 5 Construction of the energy-momentum tensor at the fixed point 6. 6 Summary and Conclusions: 7. A Fixed Point Action 1. A.1 Evaluation of $U_{4}$ 2. A.2 Solving for $\tilde{U}_{4}$ 3. A.3 Equation for $\tilde{U}_{2}$ 4. A.4 Expression for $\eta$ 8. B Asymptotic behaviors of $F(p)$ and $G(p)$ ## 1 Introduction Conformal field theories (CFT) are interesting for a variety of reasons. One of the most important reasons is that a theory critical at a continuous phase transition is expected to acquire conformal invariance which imposes strong constraints on the correlation functions[1]. This has motivated the idea of bootstrap[2]. Particularly in two dimensions these ideas have been very fruitful [3]. Reviews of later developments and references are given in [4, 5]. The advent of the AdS/CFT correspondence [6, 7, 8, 9] or “holography” between a boundary CFT and a bulk gravity theory opened up another approach to solving CFT’s.
(It also opens up the amazing possibility of rewriting quantum gravity as a quantum field theory in flat space. There is a large amount of literature on this; see, for example, [10] for a review.) In the AdS/CFT correspondence the radial direction can be interpreted as the scale of the boundary field theory. Thus, a radial evolution can be thought of as an RG evolution and has been dubbed “holographic RG” [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. The precise connection between the boundary RG and holographic RG is, however, still an open question. Recently a connection has been proposed between the Exact Renormalization Group (ERG) equation [11, 12, 13, 14] and the Holographic Renormalization Group (Holographic RG) equation. It was shown in [29] that the RG evolution operator for a Wilson action of a D-dimensional field theory obeying the Polchinski ERG equation can be formulated as a $D+1$-dimensional functional integral. The extra dimension, corresponding to the moving scale $\Lambda$ of the ERG, makes it a “holographic” formulation. Furthermore, a change of field variables or field redefinition maps the $D+1$ dimensional action for the functional integral to the action of a free massive scalar field in $AdS_{D+1}$. It was then shown that the calculation of the two-point function reduces to the familiar calculation using the AdS/CFT correspondence. This proposal is quite general, and detailed calculations were done for the Gaussian theory [29]. The scalar field theory action has a free parameter, i.e., the mass of the scalar field, which is related to the anomalous dimension of the boundary operator in the AdS/CFT context. This parameter appears to come out of nowhere. To understand the origin of the anomalous dimension parameter, an ERG equation with anomalous dimension was analysed in [30]. The same change of variables mapped this to a scalar field theory in the AdS space-time, and this time it was easy to see that the mass parameter is naturally related to the anomalous dimension parameter in the ERG. Normally, interactions are required for a field to have anomalous dimension. Since the exact RG for interacting theories is difficult, a Gaussian theory with an anomalous dimension introduced by hand was studied in [30]. In order to improve our understanding of the connection between ERG and the AdS/CFT correspondence, it is necessary to have an interacting example — one needs a non-trivial boundary CFT and a fixed-point Wilson action for this CFT (note that the “Wilson action” always has a finite UV cutoff; this is a point of departure from the usual CFT actions written in the continuum). Then the RG evolution of small perturbations to this theory can be studied by ERG. Using the ideas of [29, 30] this can be mapped to a scalar field theory in $D+1$-dimensional AdS space. This would make contact with more detailed AdS/CFT calculations of higher point correlators. A well-studied field theory is the $\lambda\phi^{4}$ scalar field theory in $4-\epsilon$ dimensions that has the famous Wilson-Fisher fixed point. When there are $N$ scalar fields, this is often referred to as the $O(N)$ model. In this paper, as a first step, we construct a fixed-point Wilson action for this theory to order $\epsilon^{2}$. It is at this order that the anomalous dimension first shows up. The action is obtained by solving the fixed-point ERG equation perturbatively. The fixed-point equation imposes the constraint of scale invariance. In fact the theory is also conformally invariant.
This follows from the properties of the energy momentum tensor — if it is traceless the theory is conformally invariant. Indeed the tracelessness of the energy-momentum tensor defines what we mean by a CFT [31, 32, 33, 34]. It is thus important to study the energy-momentum tensor and we construct it in this paper. The energy-momentum tensor is also important in the context of AdS/CFT: one of the really interesting aspects of the AdS/CFT correspondence is that the $D+1$-dimensional bulk theory has dynamical gravity. In addition to the scalar field, there is the gravitational field that couples to the energy momentum tensor of the boundary CFT. Thus, to extend the ideas of [29, 30] to understand bulk gravity in the AdS/CFT correspondence, one has to construct the energy momentum operator from ERG. The energy-momentum tensor for $\phi^{4}$ field theory has been worked out in the dimensional regularization scheme [33]. The construction of the energy-momentum tensor from the ERG point of view has been studied in general in [35, 37]. The main idea is to solve the Ward Identity associated with coordinate transformations. This can be done in perturbation theory. We construct the leading terms that correspond to the zero momentum energy momentum tensor. One can also check that the trace of the energy momentum tensor is proportional to the number operator. We apply this prescription here and construct the zero momentum energy momentum tensor to $O(\lambda^{2})$. This paper is organized as follows: In Section 2 we give a review of ERG and the fixed-point equation. We also give some background material on the energy-momentum tensor. In Section 3 we construct the solution to the fixed-point equation and obtain the fixed-point action. In Section 4 we give a different approach to obtaining the fixed point equation and also calculate some correlation functions. In Section 5 the construction of the energy-momentum tensor is given. We conclude the paper in Section 6. ## 2 Background ### 2.1 Exact Renormalization Group and Fixed Point equation We review the necessary background in this section. It is based mostly on [44, 54]. #### 2.1.1 Exact Renormalization group Renormalization means essentially going from one scale $\Lambda_{0}$ to a lower scale $\Lambda$, where the initial scale $\Lambda_{0}$ is typically called a bare scale. One will want to see how the physics changes with scale. What do we mean by physics at $\Lambda_{0}$? It means our theory will not be sensitive to momentum $p>\Lambda_{0}$. The partition function of the full theory is given by $\displaystyle Z=\int\mathcal{D}\phi\,e^{-S[\phi]}$ where $\displaystyle S=\frac{1}{2}\int_{p}\phi(-p)\,p^{2}\,\phi(p)+S_{I}[\phi]$ To make it a partition function at scale $\Lambda_{0}$ we will try to suppress the kinetic energy term for $\Lambda_{0}<p<\infty$. To execute this we will put a smooth cutoff in the kinetic energy term to obtain the bare action $\displaystyle S_{B}[\phi]\equiv\frac{1}{2}\int_{p}\phi\frac{p^{2}}{K(p^{2}/\Lambda_{0}^{2})}\phi+S_{I,B}[\phi]$ (2.1) and the bare partition function $Z_{B}\equiv\int\mathcal{D}\phi\,e^{-S_{B}[\phi]}$ (2.2) We will choose the cutoff function to satisfy $K(0)=1$ and $K(\infty)=0$; for instance, a commonly used choice is the exponential cutoff $K(p^{2}/\Lambda^{2})=e^{-p^{2}/\Lambda^{2}}$. In general cutoff functions satisfy stronger properties, but that will not affect the fixed point values of the couplings [55]. Now we want to go to a lower scale $\Lambda$.
For that, observe the following identity $\displaystyle\int\mathcal{D}\phi\exp\left[-\frac{1}{2}\int_{p}\phi(-p)\frac{1}{A(p)+B(p)}\phi(p)-S_{I,B}[\phi]\right]$ $\displaystyle=$ $\displaystyle\int\mathcal{D}\phi_{1}\mathcal{D}\phi_{2}\exp\left[-\frac{1}{2}\int_{p}\frac{1}{A(p)}\phi_{1}(-p)\phi_{1}(p)-\frac{1}{2}\int_{p}\frac{1}{B(p)}\phi_{2}(-p)\phi_{2}(p)-S_{I,B}[\phi_{1}+\phi_{2}]\right]$ Using this we can write $\displaystyle Z_{B}=$ $\displaystyle\int\mathcal{D}\phi_{l}\mathcal{D}\phi_{h}\exp\bigg{\\{}-\frac{1}{2}\int_{p}\frac{p^{2}}{K(p^{2}/\Lambda^{2})}\phi_{l}(-p)\phi_{l}(p)$ $\displaystyle-$ $\displaystyle\frac{1}{2}\int_{p}\frac{p^{2}}{K(p^{2}/\Lambda_{0}^{2})-K(p^{2}/\Lambda^{2})}\phi_{h}(-p)\phi_{h}(p)-S_{I,B}[\phi_{l}+\phi_{h}]\bigg{\\}}$ We can effectively call $\phi_{l}$ ($\phi_{h}$) the low (high) energy field, as it is propagated by the low (high) momentum propagator $\Delta_{l}$ ($\Delta_{h}$) defined below $\displaystyle\Delta_{l}=\frac{K(p^{2}/\Lambda^{2})}{p^{2}},\quad\Delta_{h}=\frac{K(p^{2}/\Lambda_{0}^{2})-K(p^{2}/\Lambda^{2})}{p^{2}}$ (2.3) So we can write $\displaystyle Z_{B}=$ $\displaystyle\int\mathcal{D}\phi_{l}\exp\left[-\frac{1}{2}\int_{p}\phi_{l}\Delta_{l}^{-1}\phi_{l}\right]\int\mathcal{D}\phi_{h}\exp\left[-\frac{1}{2}\int_{p}\phi_{h}\Delta_{h}^{-1}\phi_{h}-S_{I,B}[\phi_{l}+\phi_{h}]\right]$ $\displaystyle=$ $\displaystyle\int\mathcal{D}\phi_{l}\exp\left[-\frac{1}{2}\int_{p}\phi_{l}\Delta_{l}^{-1}\phi_{l}\right]\exp\\{-S_{I,\Lambda}[\phi_{l}]\\}$ where $\displaystyle\exp\\{-S_{I,\Lambda}[\phi_{l}]\\}\equiv\int\mathcal{D}\phi_{h}\exp\bigg{\\{}-\frac{1}{2}\int_{p}\phi_{h}\Delta_{h}^{-1}\phi_{h}-S_{I,B}[\phi_{l}+\phi_{h}]\bigg{\\}}$ (2.4) $S_{I,\Lambda}$ is the interaction part of an effective low energy field theory with a UV cutoff $\Lambda$. Let $S_{\Lambda}[\phi]\equiv\frac{1}{2}\int_{p}\phi_{l}\Delta_{l}^{-1}\phi_{l}+S_{I,\Lambda}[\phi_{l}]$ (2.5) be the whole action so that $Z_{B}=\int\mathcal{D}\phi_{l}\,e^{-S_{\Lambda}[\phi_{l}]}$ (2.6) Using (2.4), we obtain $\displaystyle e^{-S_{\Lambda}[\phi]}$ $\displaystyle=\int\mathcal{D}\varphi\,\exp\left[-S_{B}[\varphi]+\frac{1}{2}\int_{p}\frac{p^{2}}{K(p/\Lambda_{0})}\varphi(p)\varphi(-p)-\frac{1}{2}\int_{p}\frac{p^{2}}{K(p/\Lambda)}\phi(p)\phi(-p)\right.$ $\displaystyle\left.\quad-\frac{1}{2}\int_{p}\frac{p^{2}}{K(p/\Lambda_{0})-K(p/\Lambda)}\left(\varphi(p)-\phi(p)\right)\left(\varphi(-p)-\phi(-p)\right)\right]$ (2.7) where we have written $\phi_{l}$ as $\phi$ and $\phi_{h}$ as $\varphi-\phi$. This will be useful later. It is to be noted that one can go back to the bare partition function at any time. For this reason this scheme is called “exact”, i.e. we lose no physical information by varying the scale. It is easy to see this explicitly.
Using (2.7), we can calculate the generating functional of $S_{B}$ using $S_{\Lambda}$ as

$\displaystyle\int\mathcal{D}\phi\,\exp\left(-S_{B}[\phi]-\int_{p}J(-p)\phi(p)\right)$ $\displaystyle=\exp\left[\frac{1}{2}\int_{p}J(p)J(-p)\frac{1}{p^{2}}\left\\{K(p/\Lambda_{0})\left(1-K(p/\Lambda_{0})\right)-\left(\frac{K(p/\Lambda_{0})}{K(p/\Lambda)}\right)^{2}K(p/\Lambda)\left(1-K(p/\Lambda)\right)\right\\}\right]$ $\displaystyle\qquad\times\int\mathcal{D}\phi\,\exp\left(-S_{\Lambda}[\phi]-\int_{p}J(-p)\frac{K(p/\Lambda_{0})}{K(p/\Lambda)}\,\phi(p)\right)$ (2.8)

We observe that the correlation functions of $S_{B}$ are the same as those of $S_{\Lambda}$ up to the trivial (short-distance) contribution to the two-point function and up to the momentum-dependent rescaling of the field by $\frac{K(p/\Lambda_{0})}{K(p/\Lambda)}$ [54]. If we ignore the small corrections to the two-point functions, we can write

$\prod_{i=1}^{n}\frac{1}{K(p_{i}/\Lambda)}\,\left\langle\phi(p_{1})\cdots\phi(p_{n})\right\rangle_{S_{\Lambda}}=\prod_{i=1}^{n}\frac{1}{K(p_{i}/\Lambda^{\prime})}\,\left\langle\phi(p_{1})\cdots\phi(p_{n})\right\rangle_{S_{\Lambda^{\prime}}}$ (2.9)

#### 2.1.2 Polchinski’s ERG equation

We have given an integral formula (2.4) for $S_{I,\Lambda}$ and (2.7) for $S_{\Lambda}$. It is easy to derive differential equations from these. From (2.4), we obtain Polchinski’s ERG equation

$-\Lambda\frac{\partial S_{I,\Lambda}[\phi]}{\partial\Lambda}=\int_{p}(-)\frac{dK(p/\Lambda)}{dp^{2}}\left(-\frac{\delta S_{I,\Lambda}[\phi]}{\delta\phi(p)}\frac{\delta S_{I,\Lambda}[\phi]}{\delta\phi(-p)}+\frac{\delta^{2}S_{I,\Lambda}[\phi]}{\delta\phi(p)\delta\phi(-p)}\right)$ (2.10)

for $S_{I,\Lambda}$. From (2.7) we obtain

$-\Lambda\frac{\partial S_{\Lambda}[\phi]}{\partial\Lambda}=\int_{p}\left[-2p^{2}\frac{d\ln K(p/\Lambda)}{dp^{2}}\,\phi(p)\frac{\delta S_{\Lambda}}{\delta\phi(p)}+\frac{dK(p/\Lambda)}{dp^{2}}\left(-\frac{\delta S_{\Lambda}}{\delta\phi(p)}\frac{\delta S_{\Lambda}}{\delta\phi(-p)}+\frac{\delta^{2}S_{\Lambda}}{\delta\phi(p)\delta\phi(-p)}\right)\right]$ (2.11)

for the entire Wilson action.

#### 2.1.3 The limit $\Lambda\to 0+$

In the limit $\Lambda\to 0+$ we expect that $S_{\Lambda}[\phi]$ approaches something related to the partition function. If we substitute

$\lim_{\Lambda\to 0+}K(p/\Lambda)=0$ (2.12)

into (2.7), we get

$\displaystyle\lim_{\Lambda\to 0+}e^{-S_{\Lambda}[\phi]+\frac{1}{2}\int_{p}\frac{p^{2}}{K(p/\Lambda)}\phi(p)\phi(-p)}=\lim_{\Lambda\to 0+}e^{-S_{I,\Lambda}[\phi]}$ $\displaystyle=e^{-\frac{1}{2}\int_{p}\frac{p^{2}}{K(p/\Lambda_{0})}\phi(p)\phi(-p)}\int\mathcal{D}\varphi\,\exp\left[-S_{B}[\varphi]+\int_{p}\frac{p^{2}}{K(p/\Lambda_{0})}\varphi(p)\phi(-p)\right]$ (2.13)

Hence, rewriting $\phi(p)$ by $\frac{K(p/\Lambda_{0})}{p^{2}}J(p)$, we obtain the generating functional of the bare theory as the $\Lambda\to 0+$ limit of $S_{I,\Lambda}$:

$\displaystyle Z_{B}[J]$ $\displaystyle\equiv\int\mathcal{D}\varphi\,\exp\left[-S_{B}[\varphi]-\int_{p}\varphi(p)J(-p)\right]$ $\displaystyle=e^{-\frac{1}{2}\int_{p}J(p)J(-p)\frac{K(p/\Lambda_{0})}{p^{2}}}\lim_{\Lambda\to 0+}\exp\left(-S_{I,\Lambda}\left[\frac{K(p/\Lambda_{0})}{p^{2}}J(p)\right]\right)$ (2.14)

#### 2.1.4 IR limit of a critical theory

For the bare theory at criticality, we expect the correlation functions

$\left\langle\varphi(p_{1})\cdots\varphi(p_{n})\right\rangle_{B}\equiv\int\mathcal{D}\varphi\,\varphi(p_{1})\cdots\varphi(p_{n})\,e^{-S_{B}[\varphi]}$ (2.15)

to become scale invariant in the IR limit, i.e., for small momenta.
To be more precise, we can define the limit

$\mathcal{C}(p_{1},\cdots,p_{n})\equiv\lim_{t\to\infty}e^{\frac{n}{2}\left(-(D+2)+\eta\right)t}\left\langle\varphi(p_{1}e^{-t})\cdots\varphi(p_{n}e^{-t})\right\rangle_{B}$ (2.16)

where $\frac{\eta}{2}$ is the anomalous dimension. What does this mean for $S_{\Lambda}$ in the limit $\Lambda\to 0+$? As we have seen above, the interaction part $S_{I,\Lambda}$ becomes the generating functional of the bare theory in this limit. Since only the IR limit of the correlation functions is scale invariant, only the low momentum part of $\lim_{\Lambda\to 0+}S_{I,\Lambda}$ corresponds to the scale invariant theory defined by the IR limit (2.16). To understand the IR limit better, we follow Wilson [11] and reformulate the ERG transformation in two steps:

1. 1. introduction of an anomalous dimension (section 2.1.5) — the anomalous dimension is an important ingredient of the IR limit. We need to introduce an anomalous dimension of the field within ERG.

2. 2. introduction of a dimensionless framework (section 2.1.6) — each time we lower the cutoff $\Lambda$ we have to rescale space to restore the same momentum cutoff. This is necessary to realize scale invariance within ERG.

#### 2.1.5 Anomalous dimension in ERG

The cutoff dependent Wilson action $S_{\Lambda}[\phi]$ has two parts:

$S_{\Lambda}[\phi]=\frac{1}{2}\int_{p}\frac{p^{2}}{K(p/\Lambda)}\phi(p)\phi(-p)+S_{I,\Lambda}[\phi]$ (2.17)

The first term is a kinetic term, but it is not the only one; the part of the interaction quadratic in $\phi$ also contains a kinetic term. The normalization of $\phi$ has no physical meaning, and it is natural to normalize the field so that $S_{I,\Lambda}$ contains no kinetic term. To do this, we modify the ERG differential equation (2.11) by adding a number operator [44, 55]:

$\displaystyle-\Lambda\partial_{\Lambda}S_{\Lambda}[\phi]$ $\displaystyle=\int_{p}\left(-2p^{2}\frac{d}{dp^{2}}\ln K(p/\Lambda)\,\phi(p)\frac{\delta S_{\Lambda}}{\delta\phi(p)}-\frac{d}{dp^{2}}K(p/\Lambda)\left\\{\frac{\delta^{2}S_{\Lambda}}{\delta\phi(p)\delta\phi(-p)}-\frac{\delta S_{\Lambda}}{\delta\phi(p)}\frac{\delta S_{\Lambda}}{\delta\phi(-p)}\right\\}\right)$ $\displaystyle\quad-\frac{\eta_{\Lambda}}{2}\mathcal{N}_{\Lambda}[\phi]$ (2.18)

where the number operator $\mathcal{N}_{\Lambda}[\phi]$ is defined by

$\mathcal{N}_{\Lambda}[\phi]\equiv\int_{p}\left[\phi(p)\frac{\delta S_{\Lambda}}{\delta\phi(p)}+\frac{K(p/\Lambda)\left(1-K(p/\Lambda)\right)}{p^{2}}\left\\{\frac{\delta^{2}S_{\Lambda}}{\delta\phi(p)\delta\phi(-p)}-\frac{\delta S_{\Lambda}}{\delta\phi(p)}\frac{\delta S_{\Lambda}}{\delta\phi(-p)}\right\\}\right]$ (2.19)

This counts the number of fields:

$\left\langle\mathcal{N}_{\Lambda}[\phi]\,\phi(p_{1})\cdots\phi(p_{n})\right\rangle_{S_{\Lambda}}=n\left\langle\phi(p_{1})\cdots\phi(p_{n})\right\rangle_{S_{\Lambda}}$ (2.20)

(Again we are ignoring small corrections to the two-point functions.)
Under (2.18) the correlation functions change as

$\prod_{i=1}^{n}\frac{1}{K(p_{i}/\Lambda)}\,\left\langle\phi(p_{1})\cdots\phi(p_{n})\right\rangle_{S_{\Lambda}}=\left(\frac{Z_{\Lambda}}{Z_{\Lambda^{\prime}}}\right)^{\frac{n}{2}}\prod_{i=1}^{n}\frac{1}{K(p_{i}/\Lambda^{\prime})}\,\left\langle\phi(p_{1})\cdots\phi(p_{n})\right\rangle_{S_{\Lambda^{\prime}}}$ (2.21)

where $Z_{\Lambda}$ is the solution of

$-\Lambda\frac{\partial}{\partial\Lambda}Z_{\Lambda}=\eta_{\Lambda}\,Z_{\Lambda}$ (2.22)

satisfying the initial condition

$Z_{\Lambda_{0}}=1$ (2.23)

We can choose $\eta_{\Lambda}$ so that $S_{\Lambda}$ has the same kinetic term independent of $\Lambda$. For (2.18), the integral formula (2.7) must be changed to [54]

$\displaystyle e^{-S_{\Lambda}[\phi]}$ $\displaystyle=\int\mathcal{D}\varphi\,e^{-S_{B}[\varphi]}$ $\displaystyle\quad\times\exp\left[-\frac{1}{2}\int_{p}\frac{p^{2}}{\frac{1-K(p/\Lambda)}{Z_{\Lambda}K(p/\Lambda)}-\frac{1-K(p/\Lambda_{0})}{K(p/\Lambda_{0})}}\left(\frac{\varphi(p)}{K(p/\Lambda_{0})}-\frac{\phi(p)}{\sqrt{Z_{\Lambda}}\,K(p/\Lambda)}\right)\left(\frac{\varphi(-p)}{K(p/\Lambda_{0})}-\frac{\phi(-p)}{\sqrt{Z_{\Lambda}}\,K(p/\Lambda)}\right)\right]$ (2.24)

This reduces to (2.7) for $Z_{\Lambda}=1$.

#### 2.1.6 Dimensionless framework

To reach the IR limit (2.16) we must look at smaller and smaller momenta as we lower the cutoff $\Lambda$. We can do this by measuring the momenta in units of the cutoff $\Lambda$. At the same time we render all the dimensionful quantities such as $\phi(p)$ dimensionless by using appropriate powers of $\Lambda$. We introduce a dimensionless parameter $t$ by

$\Lambda=\mu\,e^{-t}$ (2.25)

where $\mu$ is an arbitrary fixed momentum scale. We then define the dimensionless field with dimensionless momentum by

$\bar{\phi}(p)\equiv\Lambda^{\frac{D+2}{2}}\phi(p\Lambda)$ (2.26)

and define a Wilson action parametrized by $t$:

$\bar{S}_{t}[\bar{\phi}]\equiv S_{\Lambda}[\phi]$ (2.27)

We can now rewrite (2.18) for $\bar{S}_{t}$:

$\displaystyle\partial_{t}\bar{S}_{t}[\bar{\phi}]$ $\displaystyle=\int_{p}\left(-2p^{2}\frac{d}{dp^{2}}\ln K(p)+p\cdot\partial_{p}+\frac{D+2}{2}\right)\bar{\phi}(p)\cdot\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}(p)}$ $\displaystyle\quad+\int_{p}(-)\frac{d}{dp^{2}}K(p)\,\left\\{\frac{\delta^{2}\bar{S}_{t}}{\delta\bar{\phi}(p)\delta\bar{\phi}(-p)}-\frac{\delta\bar{S}_{t}}{\delta\bar{\phi}(p)}\frac{\delta\bar{S}_{t}}{\delta\bar{\phi}(-p)}\right\\}-\frac{\eta_{t}}{2}\mathcal{N}_{t}[\bar{\phi}]$ (2.28)

where we have replaced $\eta_{\Lambda}$ by $\eta_{t}$, and

$\mathcal{N}_{t}[\bar{\phi}]\equiv\int_{p}\bar{\phi}(p)\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}(p)}+\int_{p}\frac{K(p)\left(1-K(p)\right)}{p^{2}}\left(\frac{\delta^{2}\bar{S}_{t}}{\delta\bar{\phi}(p)\delta\bar{\phi}(-p)}-\frac{\delta\bar{S}_{t}}{\delta\bar{\phi}(p)}\frac{\delta\bar{S}_{t}}{\delta\bar{\phi}(-p)}\right)$ (2.29)

is the number operator for $\bar{S}_{t}$.
Rewriting (2.21) in terms of dimensionless fields, we obtain

$\displaystyle\prod_{i=1}^{n}\frac{1}{K(p_{i})}\,\left\langle\bar{\phi}(p_{1})\cdots\bar{\phi}(p_{n})\right\rangle_{\bar{S}_{t}}$ $\displaystyle=\left(\frac{Z_{t}}{Z_{t^{\prime}}}\right)^{\frac{n}{2}}e^{-\frac{n}{2}\left(D-2\right)(t-t^{\prime})}\prod_{i=1}^{n}\frac{1}{K(p_{i}e^{-(t-t^{\prime})})}\,\left\langle\bar{\phi}(p_{1}e^{-(t-t^{\prime})})\cdots\bar{\phi}(p_{n}e^{-(t-t^{\prime})})\right\rangle_{\bar{S}_{t^{\prime}}}$ (2.30)

where $Z_{t}$ satisfies

$\partial_{t}Z_{t}=\eta_{t}\,Z_{t}$ (2.31)

(The corrections to the two-point functions are ignored.) Comparing (2.30) with (2.16), the existence of the IR limit implies that

$\lim_{t\to\infty}\eta_{t}=\eta$ (2.32)

and

$\lim_{t\to\infty}\prod_{i=1}^{n}\frac{1}{K(p_{i})}\,\left\langle\bar{\phi}(p_{1})\cdots\bar{\phi}(p_{n})\right\rangle_{\bar{S}_{t}}=\mathcal{C}(p_{1},\cdots,p_{n})$ (2.33)

In other words $\bar{S}_{t}$ approaches a limit as $t\to+\infty$:

$\lim_{t\to+\infty}\bar{S}_{t}=\bar{S}_{\infty}$ (2.34)

We call $\bar{S}_{\infty}$ a fixed point because the right-hand side of (2.28) vanishes for it:

$\displaystyle 0$ $\displaystyle=\int_{p}\left(-2p^{2}\frac{d}{dp^{2}}\ln K(p)+p\cdot\partial_{p}+\frac{D+2}{2}\right)\bar{\phi}(p)\cdot\frac{\delta\bar{S}_{\infty}[\bar{\phi}]}{\delta\bar{\phi}(p)}$ $\displaystyle\quad+\int_{p}(-)\frac{d}{dp^{2}}K(p)\,\left\\{\frac{\delta^{2}\bar{S}_{\infty}}{\delta\bar{\phi}(p)\delta\bar{\phi}(-p)}-\frac{\delta\bar{S}_{\infty}}{\delta\bar{\phi}(p)}\frac{\delta\bar{S}_{\infty}}{\delta\bar{\phi}(-p)}\right\\}-\frac{\eta}{2}\mathcal{N}_{\infty}[\bar{\phi}]$ (2.35)

#### 2.1.7 Fixed-point equation

Instead of choosing $\eta$ dependent on $t$, we may choose $\eta$ as a constant so that there is a non-trivial fixed-point solution $\bar{S}_{\infty}$ for which the right-hand side of (2.28) vanishes. With a constant anomalous dimension, the dimensionless ERG equation is given by

$\displaystyle\partial_{t}\bar{S}_{t}[\bar{\phi}]$ $\displaystyle=\int_{p}\left(-2p^{2}\frac{d}{dp^{2}}\ln K(p)+\frac{D+2}{2}-\frac{\eta}{2}+p\cdot\partial_{p}\right)\bar{\phi}(p)\cdot\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}(p)}$ $\displaystyle\quad+\int_{p}\left(-2\frac{d}{dp^{2}}K(p)-\eta\frac{K(p)\left(1-K(p)\right)}{p^{2}}\right)\frac{1}{2}\left(\frac{\delta^{2}\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}(p)\delta\bar{\phi}(-p)}-\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}(p)}\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}(-p)}\right)$ (2.36)

For the O($N$) model with $N$ fields $\phi^{i}\,(i=1,\cdots,N)$, the ERG equation becomes

$\displaystyle\partial_{t}\bar{S}_{t}[\bar{\phi}]$ $\displaystyle=\int_{p}\left(-2p^{2}\frac{d}{dp^{2}}\ln K(p)+\frac{D+2}{2}-\frac{\eta}{2}+p\cdot\partial_{p}\right)\bar{\phi}^{i}(p)\cdot\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}^{i}(p)}$ $\displaystyle\quad+\int_{p}\left(-2\frac{d}{dp^{2}}K(p)-\eta\frac{K(p)\left(1-K(p)\right)}{p^{2}}\right)\frac{1}{2}\left(\frac{\delta^{2}\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}^{i}(p)\delta\bar{\phi}^{i}(-p)}-\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}^{i}(p)}\frac{\delta\bar{S}_{t}[\bar{\phi}]}{\delta\bar{\phi}^{i}(-p)}\right)$ (2.37)

where the repeated indices $i$ are summed over.
### 2.2 Energy Momentum Tensor: Scale Invariance and Conformal Invariance

#### 2.2.1 Energy Momentum Tensor in the Classical Theory

In this paper we will focus on the following Euclidean action whenever a concrete action is required for a calculation

$S_{E}=\int d^{D}x\sqrt{g}[{1\over 2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi+{1\over 2}m^{2}\phi^{2}+\frac{\lambda}{4!}\phi^{4}]$

Using

$\delta g=gg^{\mu\nu}\delta g_{\mu\nu},\quad\delta\sqrt{g}={1\over 2}\sqrt{g}g^{\mu\nu}\delta g_{\mu\nu},\quad\delta g_{\mu\nu}=-g_{\mu\rho}\delta g^{\rho\sigma}g_{\sigma\nu}$

we get

$\delta S_{E}=-\int d^{D}x~{}{1\over 2}\delta g_{\mu\nu}\sqrt{g}[\partial^{\mu}\phi\partial^{\nu}\phi-g^{\mu\nu}{\cal L}]\equiv-\int d^{D}x~{}{1\over 2}\delta g_{\mu\nu}\sqrt{g}T^{\mu\nu}$ (2.38)

where

$T^{\mu\nu}\equiv-\frac{2}{\sqrt{g}}\frac{\delta S}{\delta g_{\mu\nu}}=\partial^{\mu}\phi\partial^{\nu}\phi-g^{\mu\nu}{\cal L}$ (2.39)

One can check that

$\partial^{\nu}T_{\mu\nu}=-\partial_{\mu}\phi\left[\frac{\partial{\cal L}}{\partial\phi}-\partial^{\rho}\left(\frac{\partial{\cal L}}{\partial^{\rho}\phi}\right)\right]=-\partial_{\mu}\phi\frac{\delta S_{E}}{\delta\phi}$ (2.40)

Thus, classically the energy-momentum tensor is conserved on-shell. Now we rewrite $T_{\mu\nu}$ in a form that will be useful later. Define the traceless tensor

$t_{\mu\nu}=D\partial_{\mu}\partial_{\nu}-g_{\mu\nu}\Box$ (2.41)

and the transverse operator

$\sigma_{\mu\nu}=g_{\mu\nu}\Box-\partial_{\mu}\partial_{\nu}$ (2.42)

so that $\sigma_{\mu\nu}\phi^{2}$ is transverse. Using the identity

$\partial_{\mu}\phi\partial_{\nu}\phi=\partial_{\mu}\partial_{\nu}{1\over 2}\phi^{2}-\phi\partial_{\mu}\partial_{\nu}\phi$

one can rewrite

$\displaystyle T_{\mu\nu}$ $\displaystyle=\frac{1}{4(D-1)}t_{\mu\nu}\phi^{2}+\frac{D-2}{4(D-1)}(\partial_{\mu}\partial_{\nu}-g_{\mu\nu}\partial^{2})\phi^{2}-\frac{1}{D}\phi t_{\mu\nu}\phi$ $\displaystyle\quad-\frac{1}{D}g_{\mu\nu}\left[m^{2}\phi^{2}+(4-D)\frac{\lambda}{4!}\phi^{4}+\frac{D-2}{2}E\right]$ (2.43)

The trace, which is proportional to $g_{\mu\nu}\frac{\delta S}{\delta g_{\mu\nu}}$, can be written as $\frac{\partial S}{\partial t}$ when $g_{\mu\nu}=e^{2t}\delta_{\mu\nu}$ and is the response to scale transformations:

$T^{\mu}_{\mu}=\frac{(2-D)}{4}\Box\phi^{2}-\left[m^{2}\phi^{2}+(4-D)\frac{\lambda}{4!}\phi^{4}+\frac{D-2}{2}E\right]$ (2.44)

with $E=\phi\frac{\delta S_{E}}{\delta\phi}$ proportional to the equation of motion. The terms proportional to $m^{2}$ and $\lambda$ are genuine violations of scale invariance. But the first term can be gotten rid of by defining the improved energy-momentum tensor

$\Theta_{\mu\nu}=T_{\mu\nu}+\frac{D-2}{4(D-1)}\sigma_{\mu\nu}\phi^{2}$ (2.45)

which is still conserved. So in a genuinely classically scale invariant theory, with $m^{2}=0$ and ($\lambda=0$ or $D=4$), one expects

$\Theta^{\mu}_{\mu}=\frac{2-D}{2}E$

#### 2.2.2 Trace of the Energy Momentum Tensor in the Quantum Theory: Perturbative

When quantum corrections are included (we are working in Euclidean space, so “quantum” fluctuations are really statistical fluctuations), the condition for scale invariance is modified. The trace will be defined as before, proportional to $\frac{\partial S}{\partial t}$. Before we turn to the exact RG let us see what happens in the usual lowest order perturbation theory. Let us start at $\Lambda_{0}$ and evolve to $\Lambda$ with $\Lambda$ close to $\Lambda_{0}$. (A quick symbolic check of the classical conservation law (2.40) is sketched first.)
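As an aside — not part of the original derivation — the classical statement (2.40) is easy to verify symbolically. The following minimal sympy sketch checks that $\partial^{\nu}T_{\mu\nu}=-\partial_{\mu}\phi\,\delta S_{E}/\delta\phi$ for the flat-space $\phi^{4}$ Lagrangian:

```python
import sympy as sp

xs = sp.symbols('x0:4', real=True)              # D = 4 Euclidean coordinates
m, lam = sp.symbols('m lambda', positive=True)
phi = sp.Function('phi')(*xs)

L = (sp.Rational(1, 2)*sum(sp.diff(phi, x)**2 for x in xs)
     + sp.Rational(1, 2)*m**2*phi**2 + lam/24*phi**4)

# classical energy-momentum tensor (2.39) in flat space
T = lambda mu, nu: sp.diff(phi, xs[mu])*sp.diff(phi, xs[nu]) - (mu == nu)*L

# equation of motion: delta S_E/delta phi = -box phi + m^2 phi + (lam/6) phi^3
eom = -sum(sp.diff(phi, x, 2) for x in xs) + m**2*phi + lam/6*phi**3

for mu in range(4):
    divT = sum(sp.diff(T(mu, nu), xs[nu]) for nu in range(4))
    assert sp.simplify(divT + sp.diff(phi, xs[mu])*eom) == 0
print("conservation law (2.40) verified")
```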
The action at the bare scale is

$S_{\Lambda_{0}}=\int_{x}\left[{1\over 2}\partial_{\mu}\phi\partial^{\mu}\phi+{1\over 2}m_{0}^{2}\phi^{2}+\lambda_{0}\frac{\phi^{4}}{4!}\right]$ (2.46)

and

$S_{\Lambda}=\int_{x}\left[(1-\delta Z(t)){1\over 2}\partial_{\mu}\phi\partial^{\mu}\phi+{1\over 2}(m_{0}^{2}+\delta m_{0}(t)^{2})\phi^{2}+(\lambda_{0}+\delta\lambda_{0}(t))\frac{\phi^{4}}{4!}+O(1/\Lambda)\right]$

Here $\delta Z$ is the correction to the kinetic term coming from the two loop diagram at $\mathcal{O}(\lambda^{2})$, while $\delta m_{0}^{2}\approx O(\lambda)$ and $\delta\lambda_{0}\approx\mathcal{O}(\lambda^{2})$ are the corrections starting at one loop. We rewrite $S_{\Lambda}$ in a suggestive way by adding and subtracting some terms proportional to $\delta Z$:

$\displaystyle S_{\Lambda}$ $\displaystyle=\int_{x}\Big{[}{1\over 2}\partial_{\mu}\phi\partial^{\mu}\phi+{1\over 2}\underbrace{(m_{0}^{2}+\delta m_{0}(t)^{2}+\delta Zm_{0}^{2})}_{m^{2}(t)=m^{2}_{R}}\phi^{2}+\underbrace{(\lambda_{0}+\delta\lambda_{0}(t)+2\delta Z\lambda_{0})}_{\lambda(t)=\lambda_{R}}\frac{\phi^{4}}{4!}+\mathcal{O}(1/\Lambda)\Big{]}$ $\displaystyle\quad-\delta Z\underbrace{\left[{1\over 2}\partial_{\mu}\phi\partial^{\mu}\phi+{1\over 2}m_{0}^{2}\phi^{2}+2\lambda_{0}\frac{\phi^{4}}{4!}\right]}_{\phi\frac{\partial{\cal L}}{\partial\phi}}$ (2.47)

If we think of $S_{\Lambda_{0}}$ as the bare action $S_{B}$ and $S_{\Lambda}$ as the renormalized action $S_{R}$ so that $S_{B}=S_{R}+S_{counter-term}$, then $\lambda_{0}=\lambda_{B}$ and $\lambda(t)=\lambda_{R}$. The relation between renormalized and bare quantities is

$\lambda_{B}=\frac{\lambda_{R}+\delta\lambda_{R}}{Z^{2}}$

Here $\delta\lambda_{R}$ is the counterterm and is chosen to cancel the correction $\delta\lambda_{0}$, so $\delta\lambda_{R}=-\delta\lambda_{0}$. Let us write everything in terms of $\lambda_{B}$:

$\displaystyle\lambda_{B}$ $\displaystyle=\lambda_{R}+\delta\lambda_{R}-2\delta Z\lambda_{R}\approx\lambda_{R}+\delta\lambda_{R}-2\delta Z\lambda_{0}$ $\displaystyle\lambda_{B}+2\delta Z\lambda_{0}-\delta\lambda_{R}$ $\displaystyle=\lambda_{0}+2\delta Z\lambda_{0}+\delta\lambda_{0}=\lambda_{R}=\lambda(t)$

Thus for small $t$:

$\lambda(t)=\lambda_{0}+\beta(\lambda_{0})t\;;\quad m^{2}(t)=m^{2}(0)(1+\gamma_{m}t)\;;\quad\delta Z=-2\gamma t$

Furthermore define

$x=\bar{x}\Lambda^{-1}=\bar{x}\Lambda_{0}^{-1}e^{t}$

The trace of the energy-momentum tensor is given by the dependence on $t$:

$\displaystyle-T^{\mu}_{~{}\mu}$ $\displaystyle=\frac{\partial S_{\Lambda_{0}}}{\partial t}$ $\displaystyle=\Lambda^{-D}\left\\{\int_{\bar{x}}\left[{1\over 2}m_{0}^{2}\gamma_{m}(\lambda_{0})\phi^{2}+\beta(\lambda_{0})\frac{\phi^{4}}{4!}\right]+2\gamma\int_{x}{1\over 2}\phi\frac{\delta S_{\Lambda_{0}}}{\delta\phi(x)}\right.$ $\displaystyle\qquad\left.+D\int_{\bar{x}}[{1\over 2}m_{0}^{2}\phi^{2}+\lambda_{0}\frac{\phi^{4}}{4!}]+(D-2)\int_{\bar{x}}{1\over 2}\partial_{\bar{\mu}}\phi\partial^{\bar{\mu}}\phi+O(1/\Lambda_{0})\right\\}$ (2.48)

Define dimensionless variables as $m_{0}^{2}=\bar{m}^{2}\Lambda_{0}^{2}=\bar{m}^{2}e^{2t}\Lambda^{2}$ and $\lambda_{0}=(\Lambda_{0})^{4-D}\bar{\lambda}_{0}=\bar{\lambda}_{0}e^{(4-D)t}(\Lambda)^{4-D}$ and fields $\phi=(\Lambda)^{\frac{D-2}{2}}\bar{\phi}=e^{-\frac{D-2}{2}t}\Lambda_{0}^{\frac{D-2}{2}}\bar{\phi}$. Now add and subtract $(\frac{D-2}{2})\int_{\bar{x}}\bar{\phi}\frac{\delta S_{\Lambda_{0}}}{\delta\bar{\phi}(x)}$ to get
$-T^{\mu}_{~{}\mu}=\int_{\bar{x}}\underbrace{\Big{[}{1\over 2}\bar{m}^{2}(2+\gamma_{m}(\lambda_{0}))\bar{\phi}^{2}+(\beta(\lambda_{0})-(D-4)\lambda_{0})\frac{\bar{\phi}^{4}}{4!}\Big{]}}_{\textrm{``$\beta$-function''}}+\left(\frac{D-2}{2}+\gamma\right)\int_{\bar{x}}\bar{\phi}\frac{\delta S_{\Lambda_{0}}}{\delta\bar{\phi}(x)}+O(1/\Lambda_{0})$ (2.49)

The LHS can be identified with the trace of the energy-momentum tensor in the quantum theory and can be compared with the corresponding classical expression in (2.44). The above gives an idea of how the quantum corrections modify $T_{\mu\nu}$. A detailed calculation of the energy-momentum tensor in the renormalized theory, in terms of composite operators and using dimensional regularization, is given in [33]. A systematic and precise treatment is provided by ERG; it is given in [35, 37] and summarized below.

#### 2.2.3 Energy Momentum Tensor in Exact RG

We summarize the properties of the energy-momentum tensor in ERG, given in [35]. The Ward identity defines the energy-momentum tensor almost uniquely — up to transverse terms of the form $\partial_{\mu}\partial_{\nu}-\Box\delta_{\mu\nu}$, which do not contribute. Because of general coordinate invariance, the transformation

$\delta x^{\mu}=-\epsilon^{\mu}\;;\qquad\phi^{\prime}(x)=\phi(x)+\epsilon^{\mu}\partial_{\mu}\phi(x)$

is equivalent (assuming $g_{\mu\nu}=\eta_{\mu\nu}$) to

$\delta g_{\mu\nu}=\epsilon_{(\mu,\nu)}$

and

$\int{\cal D}\phi^{\prime}=\int{\cal D}\phi_{g+\delta g}\;;\qquad S[\phi,g+\delta g]=S[\phi^{\prime},g]$

Thus the following identity must hold

$Z[J]=\int{\cal D}\phi^{\prime}e^{-S[\phi^{\prime}(x)]+\int_{x}J(x)\phi^{\prime}(x)}=\int{\cal D}\phi_{g+\delta g}e^{-S[\phi(x),g+\delta g]+\int_{x}J(x)(\phi(x)+\epsilon^{\mu}\partial_{\mu}\phi(x))}$

Then using the definition of the energy-momentum tensor, i.e.

$Z[J=0,g+\delta g]=\int{\cal D}\phi_{g+\delta g}e^{-S[\phi,g+\delta g]}\equiv\int{\cal D}\phi_{g}e^{-S[\phi,g]+{1\over 2}\int\sqrt{g}\delta g_{\mu\nu}T^{\mu\nu}}$ (2.50)

we get the Ward identity

$-\partial_{\mu}\langle T^{\mu}_{~{}\nu}(x)\phi(x_{1})...\phi(x_{n})\rangle+\sum_{i=1}^{n}\delta(x-x_{i})\langle\phi(x_{1})....\partial_{\nu}\phi(x_{i})...\phi(x_{n})\rangle=0$ (2.51)

This is a statement of the conservation of $T_{\mu\nu}$ corresponding to the classical statement (2.40). In ERG this can be written as a Ward identity for the composite operator $[T_{\mu\nu}]$

$q^{\mu}[T_{\mu\nu}(q)]=e^{S[\phi]}\int_{p}K(p)(p+q)_{\nu}\frac{\delta}{\delta\phi(p)}([\phi(p+q)]e^{-S[\phi]})$ (2.52)

The equation corresponding to (2.49) and (2.44) is

$T^{\mu}_{\mu}(0)=-\frac{\partial S}{\partial t}-(\frac{D-2}{2}+\gamma){\cal N}$ (2.53)

where $-\frac{\partial S}{\partial t}$ gives the ERG evolution, with anomalous dimension, in terms of dimensionless variables — the “$\beta$-function”. It vanishes at the fixed point. ${\cal N}$ is the number operator. Note that this equation is obtained for zero momentum, or as an integral over space-time in position space. The classical analog of this is (2.44), which was obtained for arbitrary momentum.

Note that in equations (2.52) and (2.53), both the LHS and RHS are composite operators. So one strategy would be to evaluate $T_{\mu\nu}$ using these equations in the bare theory at some scale $\Lambda_{0}$, which will be taken to be infinity. The bare theory is very simple, so the calculations can be done exactly. Then one can evolve $T_{\mu\nu}$ down to a scale $\Lambda\ll\Lambda_{0}$ order by order using the ERG evolution operator.
If we choose $\lambda$ and $m$ to be on the critical surface, we are guaranteed that at $\Lambda$ the theory flows to the fixed point action. Thus we will have evaluated the energy-momentum tensor at the fixed point. Another approach is to work directly with the known fixed point action and solve the Ward identity order by order. In this paper we follow the second approach.

## 3 Wilson-Fisher Fixed Point for the $O(N)$ Model

We will find the fixed-point Wilson action by setting $\frac{\partial\bar{S}_{t}}{\partial t}=0$ in the O($N$) ERG equation (2.37) of Section 2.1.7. Since we will work mostly with dimensionless variables, we drop the bars on dimensionless quantities unless otherwise mentioned; the $t$ dependence of actions and fields being understood, the subscript $t$ is omitted as well. We give the fixed point action $S$ in the following form:

$S=S_{2}+S_{4}+S_{6}$

where $S_{2}$ and $S_{4}$ are given by

$\displaystyle S_{2}$ $\displaystyle=\int\frac{d^{D}p}{(2\pi)^{D}}U_{2}(p){1\over 2}\phi^{I}(p)\phi^{I}(-p)$ (3.54) $\displaystyle S_{4}$ $\displaystyle=\frac{1}{2}\prod_{i=1}^{3}\int\frac{d^{D}p_{i}}{(2\pi)^{D}}U_{4}(p_{1},p_{2};p_{3},p_{4}){1\over 2}\phi^{I}(p_{1})\phi^{I}(p_{2}){1\over 2}\phi^{J}(p_{3})\phi^{J}(p_{4})$ (3.55)

where $p_{1}+p_{2}+p_{3}+p_{4}=0$ is implied. Instead of putting an explicit delta function and integrating over $p_{4}$ we will simply impose momentum conservation at every stage. Accordingly $S_{6}$ is given by

$S_{6}=\frac{1}{3!}\prod_{i=1}^{5}\int\frac{d^{D}p_{i}}{(2\pi)^{D}}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6}){1\over 2}\phi^{I}(p_{1})\phi^{I}(p_{2}){1\over 2}\phi^{J}(p_{3})\phi^{J}(p_{4}){1\over 2}\phi^{K}(p_{5})\phi^{K}(p_{6})$ (3.56)

### 3.1 Equations for the vertices

We get the following equations for $U_{2}$, $U_{4}$, and $U_{6}$:

##### Equation for $U_{2}$

$\displaystyle 0=\int$ $\displaystyle\frac{d^{D}p}{(2\pi)^{D}}\Bigg{\\{}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{p^{2}}-K^{\prime}(p^{2})\bigg{)}\frac{1}{8}\bigg{[}4NU_{4}(p_{1},-p_{1};p,-p)+8U_{4}(p_{1},p;-p_{1},-p)\bigg{]}$ $\displaystyle-$ $\displaystyle\frac{1}{2!}2U_{2}(p)U_{2}(p)\delta^{D}(p-p_{1})\Bigg{\\}}+\bigg{(}\frac{-\eta}{2}+1-2\frac{p_{1}^{2}}{K(p_{1}^{2})}K^{\prime}(p_{1}^{2})\bigg{)}U_{2}(p_{1})-\frac{1}{2!}p_{1}\frac{dU_{2}(p_{1})}{dp_{1}}$

##### Equation for $U_{4}$

$\displaystyle 0=$ $\displaystyle\int\frac{d^{D}p}{(2\pi)^{D}}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{p^{2}}-K^{\prime}(p^{2})\bigg{)}\frac{1}{48}$ $\displaystyle\times\bigg{\\{}$ $\displaystyle 6NU_{6}(p_{1},p_{2};p_{3},p_{4};p,-p)+12U_{6}(p_{1},p;p_{2},-p;p_{3},p_{4})+12U_{6}(p_{1},p_{2};p_{3},p;p_{4},-p)\bigg{\\}}$ $\displaystyle-$ $\displaystyle\sum_{j=1}^{4}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{p_{j}^{2}}-K^{\prime}(p_{j}^{2})\bigg{)}U_{2}(p_{j})~{}\frac{2}{8}U_{4}(p_{1},p_{2};p_{3},p_{4})+\sum_{j=1}^{4}\bigg{(}\frac{-\eta}{2}-2\frac{p_{j}^{2}}{K(p_{j}^{2})}K^{\prime}(p_{j}^{2})\bigg{)}~{}\frac{1}{8}U_{4}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle+$ $\displaystyle\bigg{[}4-D-\sum_{i=1}^{4}p_{i}\frac{d}{dp_{i}}\bigg{]}\frac{1}{8}U_{4}(p_{1},p_{2};p_{3},p_{4})$ (3.58)
##### Equation for $U_{6}$

$\displaystyle 0=$ $\displaystyle-\frac{2}{48}\sum_{6~{}perm~{}of~{}(i,j,m)}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{(p_{i}+p_{j}+p_{m})^{2}}-K^{\prime}((p_{i}+p_{j}+p_{m})^{2})\bigg{)}U_{4}(p_{i},p_{j};p_{m},p)U_{4}(p_{a},p_{b};p_{n},-p)$ $\displaystyle+$ $\displaystyle\sum_{j=1}^{6}\bigg{(}K^{\prime}(p_{j}^{2})-\frac{-\eta}{2}\frac{K(1-K)}{p_{j}^{2}}\bigg{)}U_{2}(p_{j})\frac{2}{48}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ $\displaystyle+$ $\displaystyle\sum_{j=1}^{6}\bigg{(}\frac{-\eta}{2}-2\frac{p_{j}^{2}}{K(p_{j}^{2})}K^{\prime}(p_{j}^{2})\bigg{)}\frac{1}{48}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})+\bigg{[}6-2D-\sum_{i=1}^{6}p_{i}\frac{d}{dp_{i}}\bigg{]}\frac{1}{48}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ (3.59)

where $p=p_{a}+p_{b}+p_{n}=-(p_{i}+p_{j}+p_{m})$.

### 3.2 Solving the Equations

We know that $U_{4}\approx\mathcal{O}(\epsilon)$, $U_{6}\approx\mathcal{O}(\epsilon^{2})$ and $\eta\approx\mathcal{O}(\epsilon^{2})$, where $\epsilon=4-D$.

#### 3.2.1 $\mathcal{O}(1)$: Retrieving Gaussian theory

We start with the $U_{2}$ equation of Section 3.1. Neglecting $U_{4}$ and $\eta$ and collecting coefficients of $\phi^{2}$ we get

$0=K^{\prime}(p^{2})U_{2}(p)U_{2}(p)+\bigg{(}1-2\frac{p^{2}}{K(p^{2})}K^{\prime}(p^{2})\bigg{)}U_{2}(p)-p^{2}\frac{dU_{2}(p)}{dp^{2}}$ (3.60)

$U_{2}(p)=\frac{p^{2}}{K(p^{2})}$ solves this equation. This is expected since the Gaussian theory is expected to be a fixed point — and this ERG was obtained from Polchinski’s ERG by adding on the kinetic term ${1\over 2}\int\frac{d^{D}p}{(2\pi)^{D}}\phi(p)\frac{p^{2}}{K(p^{2})}\phi(-p)$. Thus our solution can be written as

$\boldsymbol{U_{2}(p)=\frac{p^{2}}{K(p^{2})}+\underbrace{U_{2}^{(1)}(p)}_{\mathcal{O}(\epsilon)}+\mathcal{O}(\epsilon^{2})}$ (3.61)

#### 3.2.2 $\mathcal{O}(\epsilon)$: Fixed Point value of $m^{2}$

We go back to the $U_{2}$ equation of Section 3.1 and keep $U_{4}$, which is $\mathcal{O}(\epsilon)$, but drop $\eta$, which is $\mathcal{O}(\epsilon^{2})$:

$\displaystyle 0=\int$ $\displaystyle\frac{d^{D}p}{(2\pi)^{D}}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{p^{2}}-K^{\prime}(p^{2})\bigg{)}\times$ $\displaystyle\bigg{\\{}$ $\displaystyle\frac{1}{8}\Big{[}4NU_{4}(p_{1},-p_{1};p,-p)+8U_{4}(p_{1},p;-p,-p_{1})\Big{]}-\frac{1}{2!}2U_{2}(p)U_{2}(p)\delta^{D}(p-p_{1})\bigg{\\}}$ $\displaystyle+$ $\displaystyle\bigg{(}\frac{-\eta}{2}+1-2\frac{p_{1}^{2}}{K(p_{1}^{2})}K^{\prime}(p_{1}^{2})\bigg{)}U_{2}(p_{1})-\frac{1}{2!}p_{1}\frac{dU_{2}(p_{1})}{dp_{1}}$

We use (3.61) in the above equation and look at the terms of order $\epsilon$. To leading order we set $U_{4}=\lambda$, which is $\mathcal{O}(\epsilon)$. The equation for $U_{2}^{(1)}$ is given by

$\displaystyle 0=-\lambda\frac{4N+8}{8}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})+2\frac{p_{1}^{2}}{K(p_{1}^{2})}U_{2}^{(1)}(p_{1})K^{\prime}(p_{1}^{2})+\bigg{(}1-2\frac{p_{1}^{2}}{K(p_{1}^{2})}K^{\prime}(p_{1}^{2})\bigg{)}U_{2}^{(1)}(p_{1})-p_{1}^{2}\frac{dU_{2}^{(1)}(p_{1})}{dp_{1}^{2}}$

To leading order this equation is solved by a constant $U_{2}^{(1)}$, i.e.

$0=-\lambda\frac{4N+8}{8}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})+U_{2}^{(1)}$ (3.63)

Thus

$\boldsymbol{U_{2}^{(1)}=\lambda\frac{N+2}{2}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})}$ (3.64)

Here

$\int\frac{d^{D}p}{(2\pi)^{D}}=\frac{1}{2^{D}\pi^{D/2}\Gamma(D/2)}\int(p^{2})^{\frac{D-2}{2}}dp^{2}$

To get leading results we can set $D=4$:

$U_{2}^{(1)}=\lambda\frac{4N+8}{8}\frac{1}{(4\pi)^{2}}\int_{0}^{\infty}dp^{2}p^{2}K^{\prime}(p^{2})=-\lambda\frac{4N+8}{8}\frac{1}{(4\pi)^{2}}\int_{0}^{\infty}dp^{2}K(p^{2})$ (3.65)

We have used $K(0)=1$, $K(\infty)=0$.
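As a quick aside — not part of the original derivation — the Gaussian solution of (3.60) can be verified symbolically. The following sympy sketch checks that $U_{2}(p)=p^{2}/K(p^{2})$ solves (3.60) for an arbitrary cutoff function:

```python
import sympy as sp

u = sp.symbols('u', positive=True)   # u = p^2
K = sp.Function('K')(u)              # arbitrary cutoff function K(p^2)
U2 = u/K                             # trial solution U_2 = p^2/K(p^2)

# left-hand side of (3.60), written in the variable u = p^2
lhs = (sp.diff(K, u)*U2**2
       + (1 - 2*u*sp.diff(K, u)/K)*U2
       - u*sp.diff(U2, u))
print(sp.simplify(lhs))              # prints 0
```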
Equation (3.65) gives the fixed point value of the dimensionless mass parameter:

$U_{2}^{(1)}=m_{\star}^{2}=-\lambda\frac{N+2}{2}\frac{1}{(4\pi)^{2}}\int_{0}^{\infty}dp^{2}K(p^{2})$ (3.66)

To evaluate the integral explicitly we need a specific form for $K$. We use $K(p^{2})=e^{-p^{2}}$. Then the integral is equal to 1.

#### 3.2.3 $\mathcal{O}(\epsilon^{2})$: Expression for the six-point vertex

Let us turn to the $U_{6}$ equation (3.59), reproduced below:

$\displaystyle 0=-$ $\displaystyle\frac{2}{48}\sum_{6~{}perm~{}of~{}(i,j,m)}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{(p_{i}+p_{j}+p_{m})^{2}}-K^{\prime}((p_{i}+p_{j}+p_{m})^{2})\bigg{)}U_{4}(p_{i},p_{j};p_{m},p)U_{4}(p_{a},p_{b};p_{n},-p)$ $\displaystyle+$ $\displaystyle\sum_{j=1}^{6}\bigg{\\{}\bigg{(}K^{\prime}(p_{j}^{2})-\frac{-\eta}{2}\frac{K(1-K)}{p_{j}^{2}}\bigg{)}2U_{2}(p_{j})+\bigg{(}\frac{-\eta}{2}-2\frac{p_{j}^{2}}{K(p_{j}^{2})}K^{\prime}(p_{j}^{2})\bigg{)}\bigg{\\}}\frac{1}{48}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ $\displaystyle+$ $\displaystyle\bigg{[}6-2D-\sum_{i=1}^{6}p_{i}\frac{d}{dp_{i}}\bigg{]}\frac{1}{48}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ (3.67)

where $p=p_{a}+p_{b}+p_{n}=-(p_{i}+p_{j}+p_{m})$. In this equation we keep terms of $\mathcal{O}(\epsilon^{2})$. Since $\eta$ is $\mathcal{O}(\epsilon^{2})$, and multiplies terms of $\mathcal{O}(\epsilon^{2})$, it contributes only at $\mathcal{O}(\epsilon^{4})$ in this equation, so it can be dropped here. Furthermore, if we use the leading order solution $U_{2}=\frac{p^{2}}{K(p^{2})}$, the second and third terms cancel each other. So we are left with

$\displaystyle 0=$ $\displaystyle\frac{2}{48}\sum_{6~{}perm~{}(i,j,m)}K^{\prime}\big{(}(p_{i}+p_{j}+p_{m})^{2}\big{)}U_{4}(p_{i},p_{j};p_{m},p)U_{4}(p_{a},p_{b};p_{n},-p)$ $\displaystyle+$ $\displaystyle\bigg{[}(6-2D-\sum_{i=1}^{6}p_{i}\frac{d}{dp_{i}})\bigg{]}\frac{1}{48}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ (3.68)

Since $U_{4}=\lambda$ to this order, we obtain

$0=\lambda^{2}\frac{2}{48}\sum_{6~{}perm~{}(i,j,m)}K^{\prime}((p_{i}+p_{j}+p_{m})^{2})+\bigg{[}6-2D-\sum_{i=1}^{6}p_{i}\frac{d}{dp_{i}}\bigg{]}\frac{1}{48}U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ (3.69)

The solution for one permutation is

$U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})=\lambda^{2}\frac{K((p_{1}+p_{2}+p_{3})^{2})-K(0)}{(p_{1}+p_{2}+p_{3})^{2}}$

The full solution is given by

$\displaystyle U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})=-\lambda^{2}$ $\displaystyle\big{\\{}h(p_{1}+p_{2}+p_{3})+h(p_{1}+p_{2}+p_{4})+h(p_{1}+p_{2}+p_{5})$ $\displaystyle+$ $\displaystyle h(p_{1}+p_{2}+p_{6})+h(p_{1}+p_{3}+p_{4})+h(p_{2}+p_{3}+p_{4})\big{\\}}$ (3.70)

where $h(x)=\frac{K(0)-K(x)}{x^{2}}$.

#### 3.2.4 Fixed Point value of $\lambda$: Solution for $U_{4}$ at $\mathcal{O}(\epsilon)$

The $U_{4}$ equation is given by (3.58). In this equation $\eta$ can be neglected, as $\eta\approx\mathcal{O}(\epsilon^{2})$. Also we use the value of $U_{2}$ up to order $\epsilon$ found above. There is a cancellation between the second and third terms on the R.H.S., and we obtain

$\displaystyle\bigg{[}\bigg{(}4-D-\sum_{i=1}^{4}p_{i}\frac{d}{dp_{i}}\bigg{)}-\sum_{j=1}^{4}2K^{\prime}(p_{j}^{2})\frac{\lambda}{16\pi^{2}}\frac{N+2}{2}\bigg{]}\frac{1}{8}U_{4}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=$ $\displaystyle\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\frac{1}{48}\bigg{\\{}6NU_{6}(p_{1},p_{2};p_{3},p_{4};p,-p)+12U_{6}(p_{1},p;p_{2},-p;p_{3},p_{4})+12U_{6}(p_{1},p_{2};p_{3},p;p_{4},-p)\bigg{\\}}$

The solution is given in the Appendix (A.1).
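Before quoting the fixed-point coupling, here is a small symbolic check — a sketch assuming $K(u)=e^{-u}$ — that a single permutation of the solution (3.70) indeed satisfies the $D=4$ dilatation equation (3.69). Since each term of $U_{6}$ depends on the momenta only through $u=(p_{1}+p_{2}+p_{3})^{2}$, the operator $\sum_{i}p_{i}\frac{d}{dp_{i}}$ acts on it as $2u\frac{d}{du}$:

```python
import sympy as sp

u, lam = sp.symbols('u lambda', positive=True)   # u = (p1+p2+p3)^2
K = sp.exp(-u)                                    # cutoff choice K(u) = e^{-u}
h = (1 - K)/u                                     # h(u) = (K(0) - K(u))/u
U6 = -lam**2*h                                    # one permutation of (3.70)

# one-permutation content of (3.69) at D = 4 (overall 1/48 dropped):
# 0 = 2 lambda^2 K'(u) + (6 - 2D) U6 - 2u dU6/du  with 6 - 2D = -2
lhs = 2*lam**2*sp.diff(K, u) - 2*U6 - 2*u*sp.diff(U6, u)
print(sp.simplify(lhs))                           # prints 0
```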
The fixed point value $\lambda^{*}$ given below solves the $U_{4}$ equation above:

$\boldsymbol{\lambda^{*}=(4-D)\frac{16\pi^{2}}{N+8}}$ (3.72)

### 3.3 Determining Anomalous Dimension

The $U_{2}$ equation at $\mathcal{O}(\epsilon^{2})$ reads

$\displaystyle 0=\int$ $\displaystyle\bigg{\\{}\frac{d^{D}p}{(2\pi)^{D}}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{p^{2}}-K^{\prime}(p^{2})\bigg{)}\left[\frac{\delta^{2}S_{4}}{\delta\phi^{I}(p)\delta\phi^{I}(-p)}-\frac{\delta S_{2}}{\delta\phi^{I}(p)}\frac{\delta S_{2}}{\delta\phi^{I}(-p)}\right]\bigg{\\}}$ $\displaystyle+\bigg{\\{}$ $\displaystyle-\frac{\eta}{2}-2\frac{p^{2}}{K(p^{2})}K^{\prime}(p^{2})\bigg{\\}}\phi(p)\cdot\frac{\delta S}{\delta\phi(p)}+\mathcal{G}_{dil}^{c}S_{2}$

where we plug in:

$\displaystyle U_{4}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=$ $\displaystyle\lambda+\underbrace{\tilde{U}_{4}(p_{1},p_{2};p_{3},p_{4})}_{O(\epsilon^{2})}$ $\displaystyle U_{2}(p)$ $\displaystyle=$ $\displaystyle\frac{p^{2}}{K}-\lambda\frac{N+2}{2}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})+\underbrace{\tilde{U}_{2}(p)}_{O(\epsilon^{2})}$ (3.73)

and keep only $\mathcal{O}(\epsilon^{2})$ terms in the above equation to get

$\displaystyle 0=\int$ $\displaystyle\frac{d^{D}p}{(2\pi)^{D}}\bigg{(}\frac{-\eta}{2}\frac{K(1-K)}{p^{2}}-K^{\prime}(p^{2})\bigg{)}\times$ $\displaystyle\bigg{\\{}$ $\displaystyle\frac{1}{8}\bigg{[}4N\tilde{U}_{4}(p_{1},-p_{1};p,-p)+8\tilde{U}_{4}(p_{1},p;-p,-p_{1})\bigg{]}-\frac{1}{2!}2U_{2}(p)U_{2}(p)\delta^{D}(p-p_{1})]\bigg{\\}}$ $\displaystyle+$ $\displaystyle\bigg{(}\frac{-\eta}{2}+1-2\frac{p_{1}^{2}}{K(p_{1}^{2})}K^{\prime}(p_{1}^{2})\bigg{)}U_{2}(p_{1})-p_{1}^{2}\frac{dU_{2}(p_{1})}{dp_{1}^{2}}$ (3.74)

On simplification it gives

$-\frac{-\eta}{2}\frac{(1-K)}{K}p_{1}^{2}-\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\frac{1}{8}\bigg{[}4N\tilde{U}_{4}(p_{1},-p_{1};p,-p)+8\tilde{U}_{4}(p_{1},p;-p,-p_{1})\bigg{]}+K^{\prime}(p_{1}^{2})U_{2}(p_{1})U_{2}(p_{1})$ $+\frac{-\eta}{2}\frac{p_{1}^{2}}{K}+\tilde{U}_{2}(p_{1})-p_{1}^{2}\frac{d\tilde{U}_{2}(p_{1})}{dp_{1}^{2}}=0$ (3.75)

On the L.H.S. the third term cancels part of the second term (shown in A.3). The raison d’être for introducing $\eta$ is to ensure that $U_{2}=p^{2}+\mathcal{O}(p^{4})$, so we let $\tilde{U}_{2}=\mathcal{O}(p^{4})$. The anomalous dimension is then given by

$\boldsymbol{\frac{\eta}{2}=-\frac{d}{dp_{1}^{2}}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\frac{1}{8}\bigg{[}4N\tilde{U}_{4}^{II}(p_{1},-p_{1};p,-p)+8\tilde{U}_{4}^{II}(p_{1},p;-p_{1},-p)\bigg{]}~\Bigg{|}_{p_{1}^{2}=0}}$ (3.76)

Here the superscript $II$ is explained in Appendix A and refers to a class of Feynman diagrams. $\tilde{U}_{4}$ is determined by solving the equation of Section 3.2.4. So using (3.76) and (A.4) one can determine $\eta$. This is done in the Appendix (A.4). The result is of course well known [11]:

$\frac{\eta}{2}=\lambda^{2}\frac{N+2}{4}\frac{1}{(16\pi^{2})^{2}}=\frac{N+2}{(N+8)^{2}}\frac{\epsilon^{2}}{4}$ (3.77)

Collecting results we have (we have put $D=4$ for $\mathcal{O}(\epsilon^{2})$ terms),

$\displaystyle\boldsymbol{U_{2}(p)=\frac{p^{2}}{K(p^{2})}-\lambda\frac{N+2}{2}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})+\tilde{U}_{2}(p)}$ (3.78)

The expression for $\tilde{U}_{2}(p)$ is given in (A.3) (a neater expression is also presented in the next section).
$\displaystyle\boldsymbol{U_{4}(p_{1},p_{2};p_{3},p_{4})=}$ $\displaystyle\boldsymbol{(4-D)\frac{16\pi^{2}}{N+8}+\frac{(N+2)}{2}\frac{\lambda^{2}}{16\pi^{2}}\sum_{j=1}^{4}h(p_{j})}$ $\displaystyle\boldsymbol{-}$ $\displaystyle\boldsymbol{\lambda^{2}\bigg{[}(N+4)F(p_{1}+p_{2})+2F(p_{1}+p_{3})+2F(p_{1}+p_{4})\bigg{]}}$ (3.79)

where

$\displaystyle F(p)=\frac{1}{2}\int\frac{d^{D}q}{(2\pi)^{D}}h(q)\bigg{[}h(p+q)-h(q)\bigg{]}$

and

$\displaystyle h(p)=\frac{K(0)-K(p^{2})}{p^{2}}$

$\displaystyle U_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})=-\lambda^{2}\bigg{\\{}$ $\displaystyle h(p_{1}+p_{2}+p_{3})+h(p_{1}+p_{2}+p_{4})+h(p_{1}+p_{2}+p_{5})$ $\displaystyle+$ $\displaystyle h(p_{1}+p_{2}+p_{6})+h(p_{1}+p_{3}+p_{4})+h(p_{2}+p_{3}+p_{4})\bigg{\\}}$ (3.80)

and the anomalous dimension is given by

$\displaystyle\boldsymbol{\frac{\eta}{2}=\lambda^{2}\frac{N+2}{4}\frac{1}{(16\pi^{2})^{2}}=\frac{N+2}{(N+8)^{2}}\frac{\epsilon^{2}}{4}}$ (3.81)

To evaluate the integrals we have put $D=4$ and used the specific form $K(p^{2})=e^{-p^{2}}$. This completes the solution of the fixed point ERG equation and the determination of the anomalous dimension $\eta$ up to $O(\epsilon^{2})$. In the next section we give a slightly different approach to obtaining the fixed point action and evaluate correlation functions.

## 4 Correlation functions

### 4.1 A more general equation

In the previous section we set $\frac{\partial S}{\partial t}=0$ and solved the fixed point equation for the action order by order. One can also solve a more general equation where the LHS is not set to zero but to $\frac{\partial S}{\partial t}=\beta_{J}\frac{\partial S}{\partial\lambda_{J}}$. The parameters can be chosen so that the beta functions are zero. This has the effect that the equations are modified at each order by terms of higher order. The advantage is that the solutions are easier to write down. We want to obtain the fixed-point Wilson action to order $\lambda^{2}$ in the following form:

$\displaystyle S[\phi^{I}]$ $\displaystyle=\int_{p}\frac{1}{2}\phi^{I}(p)\phi^{I}(-p)\,\left(\frac{p^{2}}{K(p)}+V_{2}(p)\right)$ $\displaystyle\quad+\frac{1}{2}\int_{p_{1},p_{2},p_{3},p_{4}}\frac{1}{2}\phi^{I}(p_{1})\phi^{I}(p_{2})\frac{1}{2}\phi^{J}(p_{3})\phi^{J}(p_{4})\,\delta\left(\sum_{i=1}^{4}p_{i}\right)\,\bigg{(}\lambda+V_{4}(p_{1},p_{2};p_{3},p_{4})\bigg{)}$ $\displaystyle\quad+\frac{1}{3!}\int_{p_{1},\cdots,p_{6}}\frac{1}{2}\phi^{I}(p_{1})\phi^{I}(p_{2})\frac{1}{2}\phi^{J}(p_{3})\phi^{J}(p_{4})\frac{1}{2}\phi^{K}(p_{5})\phi^{K}(p_{6})\,\delta\left(\sum_{i=1}^{6}p_{i}\right)$ (4.82) $\displaystyle\qquad\qquad\times V_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$

As we will expand all vertex functions in powers of $\lambda$, we have to use the general expression for $\frac{\partial\lambda}{\partial t}$, i.e.

$\displaystyle\frac{\partial\lambda}{\partial t}=(\epsilon\lambda+\beta_{N}^{(1)}\lambda^{2})$

where $\beta_{N}^{(1)}$, the leading term in the beta function, is given by

$\displaystyle\beta_{N}^{(1)}=2(N+8)\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p)\frac{K(0)-K(p)}{p^{2}}\equiv-(N+8)\int_{p}f(p)h(p)$

where $f(p)=-2K^{\prime}(p^{2})$.
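As a numerical aside — a sketch assuming $K(u)=e^{-u}$, with illustrative values for $N$ and $\epsilon$ — one can evaluate $\beta_{N}^{(1)}$, confirm the fixed-point data (3.72) and (3.81), and check the logarithmic growth of $F(p)$ at large momentum, with slope $-1/(4\pi)^{2}$, which is the asymptotic form used for the correlation functions below:

```python
import numpy as np
from scipy.integrate import quad, dblquad

# radial measure in D = 4: \int d^4p/(2pi)^4 g(p^2) = (1/16pi^2) \int_0^inf du u g(u)
f = lambda u: 2.0*np.exp(-u)                              # f = -2K'
h = lambda u: -np.expm1(-u)/u if u > 1e-12 else 1.0       # h = (K(0)-K(u))/u

I, _ = quad(lambda u: u*f(u)*h(u), 0.0, np.inf)           # I evaluates to 1
beta1 = lambda N: -(N + 8)*I/(16*np.pi**2)                # beta_N^(1)

N, eps = 1, 1.0                                           # e.g. D = 3 (illustrative)
lam = eps/(-beta1(N))                                     # fixed-point coupling
eta = (N + 2)*eps**2/(2*(N + 8)**2)                       # anomalous dimension
print(lam, 16*np.pi**2*eps/(N + 8), eta)                  # lam matches (3.72)

# F(p) = 1/2 \int d^4q/(2pi)^4 h(q)[h(p+q) - h(q)], reduced to a 2d integral
def F(p):
    def integrand(th, q):
        s2 = p*p + q*q + 2.0*p*q*np.cos(th)
        return q**3*np.sin(th)**2*h(q*q)*(h(s2) - h(q*q))
    val, _ = dblquad(integrand, 0.0, 200.0, 0.0, np.pi)
    return val/(8.0*np.pi**3)

# logarithmic slope of F at large p; should be close to -1/(4pi)^2
print((F(20.0) - F(10.0))/np.log(2.0), -1.0/(4*np.pi)**2)
```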
If we assume $V_{2}(p)=\lambda v_{2}^{(1)}(p)+\lambda^{2}v_{2}^{(2)}(p)=\lambda v_{2}^{(1)}(p)+\Big{(}V_{2}^{I}(p)+V_{2}^{II}(p)\Big{)}$, where $V_{2}^{I(II)}$ is the analog of $\tilde{U}_{2}^{I(II)}$ in A.3, then

$\displaystyle\frac{\partial V_{2}(p)}{\partial t}=\Big{(}\epsilon\lambda+\beta_{N}^{(1)}\lambda^{2}\Big{)}v_{2}^{(1)}(p)+2\lambda^{2}\epsilon v_{2}^{(2)}(p)+2\lambda^{3}\beta_{N}^{(1)}v_{2}^{(2)}(p)$

Similarly, if $V_{4}(p_{1},p_{2};p_{3},p_{4})=V_{4}^{I}(p_{1},p_{2};p_{3},p_{4})+V_{4}^{II}(p_{1},p_{2};p_{3},p_{4})$, where $V_{4}^{I(II)}(p_{1},p_{2};p_{3},p_{4})$ is the analog of $\tilde{U}_{4}^{I(II)}(p_{1},p_{2};p_{3},p_{4})$ in A.2,

$\displaystyle\frac{\partial}{\partial t}\Big{[}\lambda+V_{4}(p_{1},p_{2};p_{3},p_{4})\Big{]}=\Big{(}\epsilon\lambda+\beta_{N}^{(1)}\lambda^{2}\Big{)}+2V_{4}(p_{1},p_{2};p_{3},p_{4})\Big{(}\epsilon+\beta_{N}^{(1)}\lambda\Big{)}$

A. (3.64) is modified to

$\displaystyle\frac{1}{2}\epsilon~{}v_{2}^{(1)}(p)=-\frac{4N+8}{8}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})+v_{2}^{(1)}(p),$

which gives

$\displaystyle v_{2}^{(1)}(p)=$ $\displaystyle-\frac{N+2}{2-\epsilon}\frac{1}{2}\int\frac{d^{D}p}{(2\pi)^{D}}f(p)$ (4.83) $\displaystyle\equiv$ $\displaystyle-(N+2)v_{2}$

where $v_{2}=\frac{1}{2-\epsilon}\frac{1}{2}\int\frac{d^{D}p}{(2\pi)^{D}}f(p)$

B. (A.2) turns into

$\displaystyle\bigg{[}\epsilon+\sum_{j=1}^{4}p_{j}\frac{d}{dp_{j}}\bigg{]}V_{4}^{II}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=$ $\displaystyle-2\lambda^{2}\int_{p}K^{\prime}(p^{2})\bigg{[}(N+4)h(p_{1}+p_{2}+p)+2h(p+p_{1}+p_{3})+2h(p+p_{1}+p_{4})-(N+8)h(p)\bigg{]}$ (4.84)

If we write $V_{4}^{II}(p_{1},p_{2};p_{3},p_{4})=-\lambda^{2}\Big{\\{}(N+4)F(p_{1}+p_{2})+2F(p_{1}+p_{3})+2F(p_{1}+p_{4})\Big{\\}}$, the equation for $F(p)$ can be written as

$\displaystyle\big{(}p\cdot\partial_{p}+\epsilon\big{)}F(p)=\int\frac{d^{D}q}{(2\pi)^{D}}f(q)h(q+p)+\frac{1}{3}\beta^{(1)}$ (4.85)

where

$\displaystyle\frac{1}{3}\beta^{(1)}=-\int\frac{d^{D}q}{(2\pi)^{D}}f(q)h(q)$

The solution, analytic at $p=0$, is

$\displaystyle F(q)=\frac{1}{2}\int\frac{d^{D}p}{(2\pi)^{D}}h(p)\Big{(}h(q+p)-h(p)\Big{)}$ (4.86)

C. Similarly (A.139a) gets modified to

$\displaystyle\bigg{[}\epsilon+\sum_{j=1}^{4}p_{j}\frac{d}{dp_{j}}\bigg{]}\frac{1}{8}V_{4}^{I}(p_{1},p_{2};p_{3},p_{4})=\lambda^{2}(N+2)\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\bigg{\\{}-\frac{1}{8}\sum_{j=1}^{4}h(p_{j})-\frac{1}{4(2-\epsilon)}K^{\prime}(p_{j}^{2})\bigg{\\}}$ (4.87)

whose solution is

$\displaystyle V_{4}^{I}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=\lambda^{2}\frac{(N+2)}{2-\epsilon}\int\frac{d^{D}p}{(2\pi)^{D}}(-K^{\prime}(p^{2}))\sum_{j=1}^{4}h(p_{j})$ (4.88)

Also

$\displaystyle\frac{1}{8}\Big{\\{}4NV_{4}^{I}(p_{1},-p_{1};p,-p)+8V_{4}^{I}(p,p_{1};-p,-p_{1})\Big{\\}}$ $\displaystyle=\frac{(N+2)^{2}}{2-\epsilon}\lambda^{2}\int\frac{d^{D}q}{(2\pi)^{D}}(-K^{\prime}(q^{2}))\Big{[}h(p_{1})+h(p)\Big{]}$

D. (3.75) turns into

$\displaystyle(2-2\epsilon)V_{2}^{I}-2p_{1}^{2}\frac{dV_{2}^{I}(p_{1})}{dp_{1}^{2}}=~{}$ $\displaystyle-\frac{2\lambda^{2}}{2-\epsilon}(N+2)^{2}\bigg{\\{}\int\frac{d^{D}p}{(2\pi)^{D}}(-K^{\prime}(p^{2}))\bigg{\\}}^{2}h(p_{1})-2\big{(}v_{2}^{(1)}\big{)}^{2}K^{\prime}(p_{1}^{2})$ (4.89)

The solution is

$\displaystyle V_{2}^{I}(p_{1})=-(N+2)^{2}\lambda^{2}\frac{1}{(2-\epsilon)^{2}}\frac{1}{4}\bigg{\\{}\int\frac{d^{D}p}{(2\pi)^{D}}f(p)\bigg{\\}}^{2}h(p_{1})$
E. (A.3) changes to

$\displaystyle\Big{(}-2+2\epsilon\Big{)}V_{2}^{II}(p_{1})+\beta_{N}^{(1)}\lambda^{2}v_{2}^{(1)}(p_{1})+2p_{1}^{2}\frac{dV_{2}^{II}(p_{1})}{dp_{1}^{2}}$ $\displaystyle=$ $\displaystyle-3\lambda^{2}(N+2)\int_{r,p}(-K^{\prime}(p^{2}))h(r)\Big{[}h(p_{1}+p+r)-h(r)\Big{]}+\frac{2}{2-\epsilon}\Big{[}(N+2)^{2}\lambda^{2}\int(-K^{\prime}(q^{2}))\Big{]}\int_{p}(-K^{\prime}(p^{2}))h(p)-\eta\,p_{1}^{2}$

If we assume

$\displaystyle V_{2}^{II}(p)=-3\lambda^{2}(N+2)G(p)$

then $G(p)$ satisfies the following equation:

$(p\cdot\partial_{p}-2+2\epsilon)G(p)=\int_{q}f(q)F(p+q)+\frac{2v_{2}}{3}\int_{q}f(q)h(q)+\eta^{(2)}p^{2}$ (4.90)

From (3.76) we get $\eta=3(N+2)\lambda^{2}\eta^{(2)}$, where

$\displaystyle\eta^{(2)}=-\frac{d}{dp^{2}}\int_{q}f(q)F(q+p)\Big{|}_{p=0}$

The solution, analytic at $p=0$, is

$\displaystyle G(p)=\frac{1}{3}\int_{q}h(q)(F(p+q)-F(q))+\frac{1}{\epsilon}\frac{\eta^{(2)}}{2}p^{2}-\frac{1}{2-2\epsilon}\Bigg{(}\int_{q}f(q)F(q)+\frac{2v_{2}}{3}\int_{q}f(q)h(q)\Bigg{)}$ (4.91)

$V_{2}^{I}(p)+V_{2}^{II}(p)$, calculated in the limit $\epsilon\rightarrow 0$, gives the expression for $\tilde{U}_{2}(p)$ mentioned in the previous section. The solutions are given by

$\displaystyle\boldsymbol{V_{2}(p)}$ $\displaystyle\boldsymbol{=-\lambda(N+2)v_{2}-\lambda^{2}\left(3(N+2)G(p)+(N+2)^{2}\left(v_{2}\right)^{2}h(p)\right)}$ (4.92a) $\displaystyle\boldsymbol{V_{4}(p_{1},p_{2};p_{3},p_{4})}$ $\displaystyle\boldsymbol{=-\lambda^{2}\Big{(}(N+4)F(p_{1}+p_{2})+2F(p_{1}+p_{3})+2F(p_{1}+p_{4})}$ $\displaystyle\qquad\qquad\boldsymbol{-(N+2)v_{2}\sum_{i=1}^{4}h(p_{i})}\Big{)}$ (4.92b) $\displaystyle\boldsymbol{V_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})}$ $\displaystyle\boldsymbol{=-\lambda^{2}}\left(\boldsymbol{h(p_{1}+p_{2}+p_{3})+h(p_{1}+p_{2}+p_{4})+h(p_{1}+p_{2}+p_{5})}\right.$ $\displaystyle\qquad\left.\boldsymbol{+h(p_{1}+p_{2}+p_{6})+h(p_{3}+p_{4}+p_{1})+h(p_{3}+p_{4}+p_{2})}\right)$ (4.92c)

where $f(p)=-2K^{\prime}(p^{2})$; $h(p)=\frac{K(0)-K(p^{2})}{p^{2}}$ and

$\boldsymbol{v_{2}=\frac{1}{2-\epsilon}\frac{1}{2}\int\frac{d^{D}p}{(2\pi)^{D}}f(p)}$ (4.93)

If we take the limit $\epsilon\rightarrow 0$ and $K(p^{2})=e^{-p^{2}}$ we get

$\displaystyle v_{2}=\frac{1}{2}\int\frac{d^{4}p}{(2\pi)^{4}}e^{-p^{2}}=\frac{1}{2}\frac{1}{16\pi^{2}}$

$F(p)=\frac{1}{2}\int\frac{d^{D}q}{(2\pi)^{D}}h(q)\Big{[}h(p+q)-h(q)\Big{]}$

The coupling constant $\lambda$ is given, to order $\epsilon=4-D$, as

$\boldsymbol{\lambda=\frac{\epsilon}{-\beta_{N}^{(1)}}=\frac{(4\pi)^{2}}{N+8}}\,\epsilon$ (4.94)

The anomalous dimension is given, to order $\epsilon^{2}$, as

$\boldsymbol{\eta=\frac{N+2}{2(N+8)^{2}}}\,\boldsymbol{\epsilon^{2}}$ (4.95)

### 4.2 Calculation of correlation functions

In this section we will calculate two-, four-, and six-point correlation functions. Recall that our Wilson action has a fixed momentum cutoff of order $1$. If we consider momenta much larger than the cutoff, the vertices of the Wilson action give the correlation functions [36].
We first rescale the field

$J^{I}(p)\equiv\frac{1}{h(p)}\phi^{I}(p)$ (4.96)

and define

$W[J^{I}]\equiv-S[\phi^{I}]+\frac{1}{2}\int_{p}J^{I}(p)J^{I}(-p)\frac{h(p)}{K(p)}$ (4.97)

For our Wilson action, this is given by

$\displaystyle W[J^{I}]$ $\displaystyle=\int_{p}\frac{1}{2}J^{I}(p)J^{I}(-p)\,h(p)^{2}\left(\frac{1}{h(p)}-V_{2}(p)\right)$ $\displaystyle\quad+\frac{1}{2}\int_{p_{1},p_{2},p_{3},p_{4}}\frac{1}{2}J^{I}(p_{1})J^{I}(p_{2})\frac{1}{2}J^{J}(p_{3})J^{J}(p_{4})\,\delta\left(\sum_{i=1}^{4}p_{i}\right)$ $\displaystyle\qquad\qquad\times\prod_{i=1}^{4}h(p_{i})\,\cdot\left(-\lambda-V_{4}(p_{1},p_{2};p_{3},p_{4})\right)$ $\displaystyle\quad+\frac{1}{3!}\int_{p_{1},\cdots,p_{6}}\frac{1}{2}J^{I}(p_{1})J^{I}(p_{2})\frac{1}{2}J^{J}(p_{3})J^{J}(p_{4})\frac{1}{2}J^{K}(p_{5})J^{K}(p_{6})\,\delta\left(\sum_{i=1}^{6}p_{i}\right)$ $\displaystyle\qquad\qquad\times\prod_{i=1}^{6}h(p_{i})\,\cdot(-)V_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ (4.98)

In the high momentum limit we obtain the generating functional of the connected correlation functions

$\mathcal{W}[J^{I}]=\lim_{t\to+\infty}W[J_{t}^{I}]$ (4.99)

where

$J_{t}^{I}(p)\equiv\exp\left(-t\frac{D-2+\eta}{2}\right)J^{I}(pe^{-t})$ (4.100)

In our case we obtain

$\displaystyle W[J_{t}^{I}]$ $\displaystyle=\int_{p}\frac{1}{2}J^{I}(p)J^{I}(-p)\,\exp\left(t(2-\eta)\right)h(pe^{t})^{2}\left(\frac{1}{h(pe^{t})}-V_{2}(pe^{t})\right)$ $\displaystyle\quad+\frac{1}{2}\int_{p_{1},p_{2},p_{3},p_{4}}\frac{1}{2}J^{I}(p_{1})J^{I}(p_{2})\frac{1}{2}J^{J}(p_{3})J^{J}(p_{4})\,\delta\left(\sum_{i=1}^{4}p_{i}\right)$ $\displaystyle\qquad\quad\times\exp\left(t(D+4-2\eta)\right)\prod_{i=1}^{4}h(p_{i}e^{t})\,\cdot\left(-\lambda-V_{4}(p_{1}e^{t},p_{2}e^{t};p_{3}e^{t},p_{4}e^{t})\right)$ $\displaystyle\quad+\frac{1}{3!}\int_{p_{1},\cdots,p_{6}}\frac{1}{2}J^{I}(p_{1})J^{I}(p_{2})\frac{1}{2}J^{J}(p_{3})J^{J}(p_{4})\frac{1}{2}J^{K}(p_{5})J^{K}(p_{6})\,\delta\left(\sum_{i=1}^{6}p_{i}\right)$ $\displaystyle\qquad\quad\times\exp\left(t(2D+6-3\eta)\right)\prod_{i=1}^{6}h(p_{i}e^{t})\,\cdot(-)V_{6}(p_{1}e^{t},p_{2}e^{t};p_{3}e^{t},p_{4}e^{t};p_{5}e^{t},p_{6}e^{t})$ (4.101)

In the limit $t\to+\infty$ we obtain

$\displaystyle\mathcal{W}[J^{I}]$ $\displaystyle=\int_{p}\frac{1}{2}J^{I}(p)J^{I}(-p)\,C_{2}(p)$ $\displaystyle\quad+\frac{1}{2}\int_{p_{1},p_{2},p_{3},p_{4}}\frac{1}{2}J^{I}(p_{1})J^{I}(p_{2})\frac{1}{2}J^{J}(p_{3})J^{J}(p_{4})\,\delta\left(\sum_{i=1}^{4}p_{i}\right)\,C_{4}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle\quad+\frac{1}{3!}\int_{p_{1},\cdots,p_{6}}\frac{1}{2}J^{I}(p_{1})J^{I}(p_{2})\frac{1}{2}J^{J}(p_{3})J^{J}(p_{4})\frac{1}{2}J^{K}(p_{5})J^{K}(p_{6})\,\delta\left(\sum_{i=1}^{6}p_{i}\right)$ $\displaystyle\qquad\qquad\times C_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ (4.102)

#### 4.2.1 Two-point function

$\displaystyle C_{2}(p)$ $\displaystyle=\lim_{t\to+\infty}\exp\left(t(2-\eta)\right)h(pe^{t})^{2}\left(\frac{1}{h(pe^{t})}-V_{2}(pe^{t})\right)$ $\displaystyle=\lim_{t\to+\infty}\frac{1}{(p^{2})^{2}}\left[p^{2}(1-\eta\,t)+\lambda^{2}3(N+2)e^{-2t}G(pe^{t})\right]$ (4.103)

Using

$G(pe^{t})\overset{t\to\infty}{\longrightarrow}p^{2}e^{2t}\frac{1}{12(4\pi)^{4}}\ln\left(p^{2}e^{2t}\right)$ (4.104)

we obtain

$\boxed{C_{2}(p)=\frac{1}{p^{2}}\left(1+\frac{\eta}{2}\ln p^{2}\right)=\frac{1}{p^{2-\eta}}}$ (4.105)

#### 4.2.2 Four-point function

$\displaystyle C_{4}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=\lim_{t\to+\infty}\exp\left(t(D+4-2\eta)\right)\prod_{i=1}^{4}h(p_{i}e^{t})\,\cdot\left(-\lambda-V_{4}(p_{1}e^{t},p_{2}e^{t};p_{3}e^{t},p_{4}e^{t})\right)$
$\displaystyle=\prod_{i=1}^{4}\frac{1}{p_{i}^{2}}\lim_{t\to+\infty}\left(1-\epsilon\,t\right)\Big{[}-\lambda$ $\displaystyle\qquad+\lambda^{2}\left((N+4)F\left((p_{1}+p_{2})e^{t}\right)+2F\left((p_{1}+p_{3})e^{t}\right)+2F\left((p_{2}+p_{3})e^{t}\right)\right)\Big{]}$ (4.106) Using $F(pe^{t})\overset{t\to+\infty}{\longrightarrow}-\frac{1}{(4\pi)^{2}}\ln\left(pe^{t}\right)$ (4.107) we obtain $\prod_{i=1}^{4}p_{i}^{2}\cdot C_{4}(p_{1},p_{2};p_{3},p_{4})=-\lambda\left(1+\epsilon\frac{1}{N+8}\ln\left\\{(p_{1}+p_{2})^{N+4}(p_{1}+p_{3})^{2}(p_{2}+p_{3})^{2}\right\\}\right)$ (4.108) #### 4.2.3 Six-point function Since $V_{6}$ is already of order $\lambda^{2}$, we can take $D=4$ and $\eta=0$ to obtain $\displaystyle C_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},p_{6})$ $\displaystyle=\lim_{t\to+\infty}e^{t\left(2D+6-3\eta\right)}\prod_{i=1}^{6}h(p_{i}e^{t})\,(-)V_{6}(p_{1}e^{t},p_{2}e^{t};p_{3}e^{t},p_{4}e^{t};p_{5}e^{t},p_{6}e^{t})$ $\displaystyle=\lim_{t\to+\infty}e^{14t}\prod_{i=1}^{6}\frac{1}{p_{i}^{2}e^{2t}}\cdot\lambda^{2}\left(h\left((p_{1}+p_{2}+p_{3})e^{t}\right)+\cdots\right)$ $\displaystyle=\lambda^{2}\prod_{i=1}^{6}\frac{1}{p_{i}^{2}}\left(\frac{1}{(p_{1}+p_{2}+p_{3})^{2}}+\cdots+\frac{1}{(p_{3}+p_{4}+p_{2})^{2}}\right)$ (4.109) ## 5 Construction of the energy-momentum tensor at the fixed point Given a fixed-point Wilson action, we wish to construct the energy-momentum tensor $\Theta_{\mu\nu}(p)$. It is a symmetric tensor implicitly determined by the Ward identity $p_{\mu}\Theta_{\mu\nu}(p)=e^{S}\int_{q}K(q)(q+p)_{\nu}\frac{\delta}{\delta\phi^{I}(q)}\left(\left[\phi^{I}(q+p)\right]\,e^{-S}\right)$ (5.110) where $\left[\phi^{I}(p)\right]\equiv\frac{1}{K(p)}\left(\phi^{I}(p)-\frac{K(p)\left(1-K(p)\right)}{p^{2}}\frac{\delta S}{\delta\phi^{I}(-p)}\right)$ (5.111) is the composite operator corresponding to $\phi^{I}(p)$. The Ward identity leaves an additive ambiguity of the form $\left(p^{2}\delta_{\mu\nu}-p_{\mu}p_{\nu}\right)\mathcal{O}(p)$ where $\mathcal{O}(p)$ is a scalar composite operator. Since $\Theta_{\mu\nu}$ must have zero scale dimension, $\mathcal{O}$ must have scale dimension $-2$. There is no such $\mathcal{O}$, since the squared mass operator $\frac{1}{2}\phi^{2}$ acquires a positive anomalous dimension at the fixed point. Hence, the Ward identity determines $\Theta_{\mu\nu}$ unambiguously. In fact we are going to calculate $\Theta_{\mu\nu}(p)$ only at $p=0$; we need not worry about this ambiguity anyway. It is convenient to expand $\Theta_{\mu\nu}(p)$ in powers of $\left[\phi^{I}\right]$: $\displaystyle\Theta_{\mu\nu}(p)$ $\displaystyle=\sum_{n=0}^{\infty}\frac{1}{n!}\int_{p_{1},\cdots,p_{2n}}\prod_{i=1}^{n}\frac{1}{2}\left[\phi^{I_{i}}(p_{2i-1})\right]\left[\phi^{I_{i}}(p_{2i})\right]\,\delta\left(\sum_{i=1}^{2n}p_{i}-p\right)$ $\displaystyle\quad\times c_{\mu\nu,2n}(p_{1},p_{2};\cdots;p_{2n-1},p_{2n})$ (5.112) To order $\lambda^{2}$, we only have three coefficients $c_{\mu\nu,0},c_{\mu\nu,2},c_{\mu\nu,4}$. Since the field-independent term ($n=0$) is proportional to $\delta(p)$, we cannot determine $c_{\mu\nu,0}$ from the Ward identity. So, we will determine only $c_{\mu\nu,2}$ and $c_{\mu\nu,4}$. 
From (4.82), we obtain

$\displaystyle\left[\phi^{I}(p)\right]=\phi^{I}(p)-h(p)\Big{\\{}V_{2}(p)\phi^{I}(p)$ $\displaystyle\quad+\int_{p_{1},p_{2},p_{3}}\frac{1}{2}\phi^{J}(p_{1})\phi^{J}(p_{2})\phi^{I}(p_{3})\,\delta\left(\sum_{i=1}^{3}p_{i}-p\right)\,\left(\lambda+V_{4}(p_{1},p_{2};p_{3},-p)\right)$ $\displaystyle\quad+\frac{1}{2}\int_{p_{1},\cdots,p_{5}}\frac{1}{2}\phi^{J}(p_{1})\phi^{J}(p_{2})\frac{1}{2}\phi^{K}(p_{3})\phi^{K}(p_{4})\phi^{I}(p_{5})\,\delta\left(\sum_{i=1}^{5}p_{i}-p\right)$ $\displaystyle\qquad\qquad\qquad\times V_{6}(p_{1},p_{2};p_{3},p_{4};p_{5},-p)\Big{\\}}$ (5.113)

Inverting this we obtain, to order $\lambda^{2}$,

$\displaystyle\phi^{I}(p)=\left[\phi^{I}(p)\right]+h(p)\Big{\\{}V_{2}^{1PI}(p)\left[\phi^{I}(p)\right]$ $\displaystyle\quad+\int_{p_{1},p_{2},p_{3}}\frac{1}{2}\left[\phi^{J}(p_{1})\right]\left[\phi^{J}(p_{2})\right]\left[\phi^{I}(p_{3})\right]\,\delta\left(\sum_{i=1}^{3}p_{i}-p\right)\,\left(\lambda+V_{4}^{1PI}(p_{1},p_{2};p_{3},-p)\right)\Big{\\}}$ (5.114)

where we have defined the 1PI vertices as

$\displaystyle V_{2}^{1PI}(p)$ $\displaystyle=-\lambda(N+2)v_{2}-\lambda^{2}3(N+2)G(p)$ (5.115a) $\displaystyle V_{4}^{1PI}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=-\lambda^{2}\left((N+4)F(p_{1}+p_{2})+2F(p_{1}+p_{3})+2F(p_{1}+p_{4})\right)$ (5.115b)

Note that $\phi^{I}$ has no sixth order term expanded in $\left[\phi\right]$’s to order $\lambda^{2}$. The rhs of (5.110) gives

$\displaystyle e^{S}\int_{q}K(q)(q+p)_{\nu}\frac{\delta}{\delta\phi^{I}(q)}\left(\left[\phi^{I}(q+p)\right]e^{-S}\right)$ $\displaystyle=\int_{q}K(q)(q+p)_{\nu}\left(-\left[\phi^{I}(q+p)\right]\frac{\delta S}{\delta\phi^{I}(q)}+\frac{\delta}{\delta\phi^{I}(q)}\left[\phi^{I}(q+p)\right]\right)$ (5.116)

Expanding this in powers of $\left[\phi\right]$’s, we obtain from (5.110) the following equations that determine the coefficients $c_{\mu\nu,2}$ and $c_{\mu\nu,4}$.

$\displaystyle p_{\mu}c_{\mu\nu,2}(p_{1},p_{2})=-p_{1\nu}p_{2}^{2}-p_{2\nu}p_{1}^{2}$ $\displaystyle\quad+\lambda(N+2)\left(v_{2}p_{\nu}-\int_{q}(q+p)_{\nu}R(q)h(q)h(q+p)\right)$ $\displaystyle\quad+\lambda^{2}(N+2)\Big{[}3\left(p_{1\nu}G(p_{2})+p_{2\nu}G(p_{1})\right)$ $\displaystyle\qquad-(N+2)v_{2}\int_{q}(q+p)_{\nu}R(q)h(q)h(q+p)\left(h(q)+h(q+p)\right)$ $\displaystyle\qquad+\frac{1}{2}\int_{q}\left\\{(q+p)_{\nu}R(q)-q_{\nu}R(q+p)\right\\}h(q)h(q+p)$ $\displaystyle\qquad\qquad\quad\times\left\\{(N+2)F(p)+3F(q+p_{1})+3F(q+p_{2})\right\\}\Big{]}$ (5.117a)

and

$\displaystyle p_{\mu}c_{\mu\nu,4}(p_{1},p_{2};p_{3},p_{4})=-\lambda p_{\nu}$ $\displaystyle\quad+\lambda^{2}\Big{\\{}(N+4)\left(F(p_{1}+p_{2})(p_{3}+p_{4})_{\nu}+F(p_{3}+p_{4})(p_{1}+p_{2})_{\nu}\right)$ $\displaystyle\qquad+2p_{1\nu}\left(F(p_{2}+p_{3})+F(p_{2}+p_{4})\right)+2p_{2\nu}\left(F(p_{1}+p_{3})+F(p_{1}+p_{4})\right)$ $\displaystyle\qquad+2p_{3\nu}\left(F(p_{4}+p_{1})+F(p_{4}+p_{2})\right)+2p_{4\nu}\left(F(p_{3}+p_{1})+F(p_{3}+p_{2})\right)\Big{\\}}$ $\displaystyle\quad+\lambda^{2}\frac{1}{2}\int_{q}\left\\{(q+p)_{\nu}R(q)-q_{\nu}R(q+p)\right\\}h(q)h(q+p)$ $\displaystyle\,\times\left\\{(N+4)\left(h(q+p_{1}+p_{2})+h(q+p_{3}+p_{4})\right)+4\left(h(q+p_{1}+p_{3})+h(q+p_{1}+p_{4})\right)\right\\}$ (5.117b)

To determine $c_{\mu\nu,2}(p_{1},p_{2})$ at $p=0$, we substitute $p_{2}=p-p_{1}$ into the rhs of (5.117a), and expand the result to first order in $p$.
This gives $\displaystyle c_{\mu\nu,2}(p_{1},-p_{1})$ $\displaystyle=-p_{1}^{2}\delta_{\mu\nu}+2p_{1\mu}p_{1\nu}$ $\displaystyle\quad+\lambda(N+2)\delta_{\mu\nu}\left\\{v_{2}-\int_{q}R(q)\left(h(q)^{2}+\frac{1}{D}h(q)q\cdot\partial_{q}h(q)\right)\right\\}$ $\displaystyle\quad+\lambda^{2}(N+2)\Big{\\{}3\left(\delta_{\mu\nu}G(p_{1})-2p_{1\mu}p_{1\nu}G^{\prime}(p_{1})\right)$ $\displaystyle\qquad+\int_{q}\left(\delta_{\mu\nu}R(q)-2q_{\mu}q_{\nu}R^{\prime}(q)\right)h(q)^{2}\left(-(N+2)v_{2}h(q)+3F(q+p_{1})\right)\Big{\\}}$ (5.118) Similarly, substituting $p_{4}=p-(p_{1}+p_{2}+p_{3})$ into the rhs of (5.117b) and expanding the result to first order in $p$, we obtain $\displaystyle c_{\mu\nu,4}(p_{1},p_{2};p_{3},-(p_{1}+p_{2}+p_{3}))=-\lambda\delta_{\mu\nu}$ $\displaystyle\quad+\lambda^{2}\Big{\\{}(N+4)\left(\delta_{\mu\nu}F(p_{1}+p_{2})-2(p_{1}+p_{2})_{\mu}(p_{1}+p_{2})_{\nu}F^{\prime}(p_{1}+p_{2})\right)$ $\displaystyle\qquad+2\left(\delta_{\mu\nu}F(p_{1}+p_{3})-2(p_{1}+p_{3})_{\mu}(p_{1}+p_{3})_{\nu}F^{\prime}(p_{1}+p_{3})\right)$ $\displaystyle\qquad+2\left(\delta_{\mu\nu}F(p_{2}+p_{3})-2(p_{2}+p_{3})_{\mu}(p_{2}+p_{3})_{\nu}F^{\prime}(p_{2}+p_{3})\right)$ $\displaystyle\qquad+\int_{q}\left(\delta_{\mu\nu}R(q)-2q_{\mu}q_{\nu}R^{\prime}(q)\right)h(q)^{2}$ $\displaystyle\qquad\quad\times\left((N+4)h(q+p_{1}+p_{2})+2h(q+p_{1}+p_{3})+2h(q+p_{2}+p_{3})\right)\Big{\\}}$ (5.119) ### Check of the trace anomaly Using the energy-momentum tensor obtained above, we can verify the trace anomaly $\Theta(0)=-\left(\frac{D-2}{2}+\frac{1}{2}\eta\right)\mathcal{N}(0)$ (5.120) where the anomalous dimension is given by (4.95) to order $\epsilon^{2}$. The trace is easily obtained from (5.118, 5.119) as $\displaystyle\Theta(0)$ $\displaystyle=\int_{p}\frac{1}{2}\left[\phi^{I}(p)\right]\left[\phi^{I}(-p)\right]\Bigg{[}-(D-2)p^{2}$ $\displaystyle\quad+\lambda(N+2)D\left\\{v_{2}-\int_{q}R(q)\left(h(q)^{2}+\frac{1}{D}h(q)q\cdot\partial_{q}h(q)\right)\right\\}$ $\displaystyle\quad+\lambda^{2}(N+2)\Big{\\{}3(D-p\cdot\partial_{p})G(p)$ $\displaystyle\qquad+\int_{q}\left(D-q\cdot\partial_{q}\right)R(q)\cdot h(q)^{2}\left(-(N+2)v_{2}+3F(q+p)\right)\Big{\\}}\Bigg{]}$ $\displaystyle\quad+\frac{1}{2}\int_{p_{1},\cdots,p_{4}}\frac{1}{2}\left[\phi^{I}(p_{1})\right]\left[\phi^{I}(p_{2})\right]\frac{1}{2}\left[\phi^{J}(p_{3})\right]\left[\phi^{J}(p_{4})\right]\,\delta\left(\sum_{i=1}^{4}p_{i}\right)$ $\displaystyle\qquad\times\Bigg{[}-\lambda D$ $\displaystyle\qquad\quad+\lambda^{2}\Big{\\{}(N+4)\left(D-p\cdot\partial_{p}\right)F(p)\Big{|}_{p=p_{1}+p_{2}}$ $\displaystyle\qquad\qquad\quad+2\left(D-p\cdot\partial_{p}\right)F(p)\Big{|}_{p=p_{1}+p_{3}}+2\left(D-p\cdot\partial_{p}\right)F(p)\Big{|}_{p=p_{2}+p_{3}}$ $\displaystyle\qquad\qquad+\int_{q}(D-q\cdot\partial_{q})R(q)\cdot h(q)^{2}$ $\displaystyle\qquad\qquad\quad\times\left((N+4)h(q+p_{1}+p_{2})+2h(q+p_{1}+p_{3})+2h(q+p_{2}+p_{3})\right)\Big{\\}}\Bigg{]}$ (5.121) On the other hand the number operator, defined by $\mathcal{N}(0)\equiv-e^{S}\int_{q}K(q)\frac{\delta}{\delta\phi^{I}(q)}\left(\left[\phi^{I}(q)\right]e^{-S}\right)\,,$ (5.122) is calculated as $\displaystyle\mathcal{N}(0)$ $\displaystyle=\int_{p}\frac{1}{2}\left[\phi^{I}(p)\right]\left[\phi^{I}(-p)\right]\Big{[}2p^{2}+(N+2)\lambda\left(-2v_{2}+\int_{q}R(q)h(q)^{2}\right)$ $\displaystyle\quad+\lambda^{2}(N+2)\left\\{-6G(p)+2(N+2)v_{2}\int_{q}R(q)h(q)^{3}-6\int_{q}R(q)h(q)^{2}F(q+p)\right\\}\Big{]}$ 
$\displaystyle\quad+\frac{1}{2}\int_{p_{1},\cdots,p_{4}}\frac{1}{2}\left[\phi^{I}(p_{1})\right]\left[\phi^{I}(p_{2})\right]\frac{1}{2}\left[\phi^{J}(p_{3})\right]\left[\phi^{J}(p_{4})\right]\,\delta\left(\sum_{i=1}^{4}p_{i}\right)$ $\displaystyle\qquad\times\Big{[}4\lambda-4\lambda^{2}\left\\{(N+4)F(p_{1}+p_{2})+2F(p_{1}+p_{3})+2F(p_{2}+p_{3})\right\\}$ $\displaystyle\qquad\quad-2\lambda^{2}\int_{q}R(q)h(q)^{2}\big{\\{}(N+4)h(p+p_{1}+p_{2})$ $\displaystyle\qquad\qquad\qquad\qquad\qquad+2h(p+p_{1}+p_{3})+2h(p+p_{2}+p_{3})\big{\\}}\Big{]}$ (5.123) Using $f(q)=\left(q\cdot\partial_{q}+2\right)h(q)=(2-q\cdot\partial_{q})R(q)\cdot h(q)^{2}$ (5.124) and the equations satisfied by $F$ and $G$ $\displaystyle\left(p\cdot\partial_{p}+\epsilon\right)F(p)$ $\displaystyle=\int_{q}f(q)\cdot\left(h(q+p)-h(q)\right)$ (5.125a) $\displaystyle\left(p\cdot\partial_{p}-2+2\epsilon\right)G(p)$ $\displaystyle=\frac{2}{3}v_{2}\int_{q}f(q)\cdot h(q)+\eta\,p^{2}+\int_{q}f(q)\cdot F(q+p)$ (5.125b) we obtain $\displaystyle\Theta(0)+\left(\frac{D-2}{2}+\gamma_{N}^{(2)}\lambda^{2}\right)\mathcal{N}(0)$ $\displaystyle=\left(\epsilon\lambda+\beta_{N}^{(1)}\lambda^{2}\right)\Bigg{[}\int_{p}\frac{1}{2}\left[\phi^{I}(p)\right]\left[\phi^{I}(-p)\right]\,(N+2)v_{2}$ $\displaystyle\quad-\frac{1}{2}\int_{p_{1},p_{2},p_{3},p_{4}}\frac{1}{2}\left[\phi^{I}(p_{1})\right]\left[\phi^{I}(p_{2})\right]\frac{1}{2}\left[\phi^{J}(p_{3})\right]\left[\phi^{J}(p_{4})\right]\,\delta\left(\sum_{i=1}^{4}p_{i}\right)\Bigg{]}$ (5.126) where we have dropped $\epsilon\lambda^{2}G(p)$ and $\epsilon\lambda^{2}F(p)$, which are terms of order $\epsilon^{3}$. This vanishes at the fixed point, where $\epsilon\lambda+\beta_{N}^{(1)}\lambda^{2}=0,$ to order $\epsilon^{2}$. ### Correlation functions In the previous section we saw how the fixed-point Wilson action gives the correlation functions. 
Similarly, the coefficient functions $c_{\mu\nu,2}(p_{1},p_{2})$ and $c_{\mu\nu,4}(p_{1},p_{2};p_{3},p_{4})$ give the 1PI correlation functions of the energy-momentum tensor at $p=0$: $\displaystyle\left\langle\Theta_{\mu\nu}(0)\phi^{I}(p)\phi^{J}(q)\right\rangle^{1PI}$ $\displaystyle=p^{2-\eta}q^{2-\eta}\left\langle\Theta_{\mu\nu}(0)\phi^{I}(p)\phi^{J}(q)\right\rangle$ $\displaystyle=\delta(p+q)\delta^{IJ}\lim_{t\to\infty}e^{(-2+\eta)t}c_{\mu\nu,2}(pe^{t},-pe^{t})$ (5.127) and $\displaystyle\left\langle\Theta_{\mu\nu}(0)\phi^{I}(p_{1})\phi^{J}(p_{2})\phi^{K}(p_{3})\phi^{L}(p_{4})\right\rangle^{1PI}$ $\displaystyle\qquad=\prod_{i=1}^{4}p_{i}^{2-\eta}\cdot\left\langle\Theta_{\mu\nu}(0)\phi^{I}(p_{1})\phi^{J}(p_{2})\phi^{K}(p_{3})\phi^{L}(p_{4})\right\rangle$ $\displaystyle\qquad=\delta\left(\sum_{i=1}^{4}p_{i}\right)\lim_{t\to\infty}e^{(-\epsilon+4\eta)t}\left[\delta^{IJ}\delta^{KL}c_{\mu\nu,4}(p_{1}e^{t},p_{2}e^{t};p_{3}e^{t},p_{4}e^{t})\right.$ $\displaystyle\qquad\qquad\left.+\delta^{IK}\delta^{JL}c_{\mu\nu,4}(p_{1}e^{t},p_{3}e^{t};p_{2}e^{t},p_{4}e^{t})+\delta^{IL}\delta^{JK}c_{\mu\nu,4}(p_{1}e^{t},p_{4}e^{t};p_{2}e^{t},p_{3}e^{t})\right]$ (5.128)

We obtain the two-point function as $\displaystyle\lim_{t\to\infty}e^{(-2+\eta)t}c_{\mu\nu,2}(pe^{t},-pe^{t})$ $\displaystyle=\lim_{t\to\infty}\left\\{\left(1+\eta\,t\right)\left(-p^{2}\delta_{\mu\nu}+2p_{\mu}p_{\nu}\right)\right.$ $\displaystyle\left.\quad+\lambda^{2}(N+2)3e^{-2t}\left(\delta_{\mu\nu}G(pe^{t})-2p_{\mu}p_{\nu}e^{2t}G^{\prime}(pe^{t})\right)\right\\}$ $\displaystyle=p^{-\eta}\left(-p^{2}\delta_{\mu\nu}+2p_{\mu}p_{\nu}\right)$ (5.129) where we have used the asymptotic form $\delta_{\mu\nu}G(p)-2p_{\mu}p_{\nu}G^{\prime}(p)\overset{p\to\infty}{\longrightarrow}\frac{1}{12(4\pi)^{4}}\left(p^{2}\delta_{\mu\nu}-2p_{\mu}p_{\nu}\right)\ln p^{2}$ (5.130)

We obtain the four-point function as $\displaystyle\lim_{t\to\infty}e^{(-\epsilon+4\eta)t}c_{\mu\nu,4}(p_{1}e^{t},p_{2}e^{t};p_{3}e^{t},p_{4}e^{t})$ $\displaystyle=\lambda\lim_{t\to\infty}(1-\epsilon\,t)\Bigg{[}\delta_{\mu\nu}\big{\\{}-1$ $\displaystyle\qquad\qquad+\lambda\left((N+4)F\left((p_{1}+p_{2})e^{t}\right)+2F\left((p_{1}+p_{3})e^{t}\right)+2F\left((p_{2}+p_{3})e^{t}\right)\right)\big{\\}}$ $\displaystyle\,\,-\lambda\left\\{(N+4)\frac{(p_{1}+p_{2})_{\mu}(p_{1}+p_{2})_{\nu}}{(p_{1}+p_{2})^{2}}+2\frac{(p_{1}+p_{3})_{\mu}(p_{1}+p_{3})_{\nu}}{(p_{1}+p_{3})^{2}}+2\frac{(p_{2}+p_{3})_{\mu}(p_{2}+p_{3})_{\nu}}{(p_{2}+p_{3})^{2}}\right\\}\Bigg{]}$ $\displaystyle=-\lambda\delta_{\mu\nu}\left[1+\frac{\epsilon}{N+8}\ln\left\\{(p_{1}+p_{2})^{N+4}(p_{1}+p_{3})^{2}(p_{2}+p_{3})^{2}\right\\}\right]$ (5.131) where we have kept only the logarithms of momenta at order $\epsilon^{2}$.

## 6 Summary and Conclusions

In this paper we have studied some aspects of the $O(N)$ model using the Exact RG formalism. We have done two things: 1) We have constructed the Wilson action for the $O(N)$ model at the Wilson-Fisher fixed point in $4-\epsilon$ dimensions up to order $\epsilon^{2}$. This is done by solving the fixed point equation, order by order in $\epsilon$. Some correlation functions have also been calculated. 2) We have constructed the energy-momentum tensor for this theory. This is done by solving the Ward identity for diffeomorphism invariance. The tracelessness of the energy-momentum tensor implies that the Wilson action is scale and conformal invariant. It is important to note that all this is in the presence of a finite cutoff $\Lambda$.
As mentioned in the introduction, one of the motivations for this construction is to use the ideas in [29, 30] and construct the AdS action corresponding to this CFT. A related problem is to construct the AdS action for sources for composite operators such as $\phi^{i}\phi^{i}$. Even more interesting would be to study the massless spin 2 field that would be the source for the energy-momentum tensor. This would give dynamical gravity in the bulk as a consequence of Exact RG in the boundary by a direct change of variables similar to what was done for the scalar field in [29, 30].

## Appendix A Fixed Point Action

### A.1 Evaluation of $U_{4}$

We need to solve $\displaystyle\bigg{[}\bigg{(}4-D-\sum_{i=1}^{4}p_{i}\frac{d}{dp_{i}}\bigg{)}+\sum_{j=1}^{4}2K^{\prime}(p_{j}^{2})U_{2}^{(1)}(p_{j})\bigg{]}\frac{1}{8}U_{4}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=$ $\displaystyle\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\frac{1}{48}\bigg{\\{}6NU_{6}(p_{1},p_{2};p_{3},p_{4};p,-p)+12U_{6}(p_{1},p;p_{2},-p;p_{3},p_{4})+12U_{6}(p_{1},p_{2};p_{3},p;p_{4},-p)\bigg{\\}}$ $\displaystyle=$ $\displaystyle\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\bigg{\\{}-\frac{(N+2)}{8}\bigg{(}h(p_{1})+h(p_{2})+h(p_{3})+h(p_{4})\bigg{)}$ $\displaystyle-\frac{(N+4)}{4}\bigg{(}h(p+p_{1}+p_{2})+2h(p+p_{1}+p_{3})+2h(p+p_{1}+p_{4})\bigg{)}\bigg{\\}}$ (A.132) where $\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\bigg{\\{}-\frac{(N+2)}{8}\bigg{(}h(p_{1})+h(p_{2})+h(p_{3})+h(p_{4})\bigg{)}\bigg{\\}}$ (A.133) corresponds to the kind of diagrams shown in Fig. 1. Here the external loop does not involve momenta $p_{i}+p_{j}$. We will call these Type I diagrams. Considering only leading-order terms in $p_{j}^{2}$, the contribution from the Type I diagrams in (A.132) is $\displaystyle=-\frac{N+2}{8}\frac{\lambda^{2}}{16\pi^{2}}~{}4K^{\prime}(p_{j}^{2})\bigg{|}_{p_{j}=0}$ (A.134)

Now consider the second term on the L.H.S. of (A.132). In the limit of small external momenta, after putting in the value $U_{2}^{(1)}(p)=-\frac{N+2}{2}\frac{\lambda}{16\pi^{2}}$ (as we are considering terms of $\mathcal{O}(\epsilon^{2})$, we have set $D=4$ to find $U_{2}^{(1)}$), we get $\displaystyle-$ $\displaystyle\sum_{j=1}^{4}2K^{\prime}(p_{j}^{2})\bigg{|}_{p_{j}\rightarrow 0}\frac{\lambda}{16\pi^{2}}\frac{N+2}{2}\frac{1}{8}V_{4}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=$ $\displaystyle-4K^{\prime}(p_{j}^{2})\bigg{|}_{p_{j}\rightarrow 0}\frac{\lambda^{2}}{16\pi^{2}}\frac{N+2}{8}$ (A.135) This cancels exactly with (A.134).

Figure 1: Type I diagram

Similarly, in (A.132) the term $\displaystyle\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\bigg{\\{}-\frac{(N+4)}{4}\bigg{(}h(p+p_{1}+p_{2})+2h(p+p_{1}+p_{3})+2h(p+p_{1}+p_{4})\bigg{)}\bigg{\\}}$ (A.136) corresponds to the kind of diagram shown in Fig. 2. We will call it a Type II diagram.

Figure 2: Type II diagram

In the limit $p_{i}\rightarrow 0$ the above term becomes $\displaystyle\lambda^{2}~{}\frac{(N+8)}{4}\frac{1}{16\pi^{2}}\int_{0}^{\infty}dp^{2}K^{\prime}(p^{2})\Big{(}K(p^{2})-K(0)\Big{)}$ $\displaystyle=$ $\displaystyle\lambda^{2}\frac{(N+8)}{4}\frac{1}{16\pi^{2}}\int_{0}^{\infty}dp^{2}\bigg{\\{}\frac{1}{2}\frac{d(K^{2})}{dp^{2}}-K(0)K^{\prime}(p^{2})\bigg{\\}}$ Using $K(\infty)=0$ and $K(0)=1$, this integral gives $\frac{1}{2}$.
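Spelling out the boundary evaluation (both terms are total derivatives): $\int_{0}^{\infty}dp^{2}\bigg{\\{}\frac{1}{2}\frac{d(K^{2})}{dp^{2}}-K(0)K^{\prime}(p^{2})\bigg{\\}}=\frac{1}{2}\Big{[}K^{2}\Big{]}_{0}^{\infty}-K(0)\Big{[}K\Big{]}_{0}^{\infty}=\frac{1}{2}(0-1)-1\cdot(0-1)=\frac{1}{2}$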
Equating this contribution with $\epsilon\frac{\lambda}{4!}$ from the L.H.S. of (A.132), we obtain $\frac{1}{8}(4-D)\lambda=\frac{N+8}{8}\frac{\lambda^{2}}{(4\pi)^{2}}$ Thus, in addition to the trivial fixed point $\lambda=0$, we have a non-trivial fixed point: $\lambda=(4-D)\frac{16\pi^{2}}{N+8}$ (A.137)

### A.2 Solving for $\tilde{U}_{4}$

$\tilde{U}_{4}$ will have contributions from both the Type I and Type II diagrams explained above. We write $\displaystyle\tilde{U}_{4}=\tilde{U}_{4}^{I}+\tilde{U}_{4}^{II}$ according to the contributions from Type I (II) diagrams. (We shall set $D=4$ while evaluating integrations in those terms that are already of $\mathcal{O}(\epsilon^{2})$.)

##### Type I diagram

In (A.132) the first term on the LHS and the first terms on the RHS (Type I) cancel only in leading order. In general their difference is $\lambda^{2}\frac{N+2}{8}\times\frac{1}{(4\pi)^{2}}\int_{0}^{\infty}dp^{2}K^{\prime}(p^{2})\Bigg{[}\sum_{j}\frac{K(p_{j}^{2})-K(0)}{p_{j}^{2}}-K^{\prime}(p_{j}^{2})\Bigg{]}$ Taylor expanding we find $\lambda^{2}\frac{N+2}{8}\times\frac{1}{(4\pi)^{2}}\int dp^{2}K^{\prime}(p^{2})K^{\prime\prime}(0){1\over 2}\sum_{j}p_{j}^{2}\equiv c\sum_{j}p_{j}^{2}$ This is a contribution to $\tilde{U}_{4}(p_{1},p_{2};p_{3},p_{4})$ that we can call $\Delta U_{4}^{I}(p_{1},p_{2};p_{3},p_{4})$. Consider a Type I graph where the line at one end has $p_{1}$ and lines with momenta $p_{2},p_{3},p_{4}$ are at the other end. This corresponds to the term $\lambda^{2}\frac{N+2}{8}\times\frac{1}{(4\pi)^{2}}\int dp^{2}K^{\prime}(p^{2})K^{\prime\prime}(0){1\over 2}p_{1}^{2}\equiv cp_{1}^{2}$ When contracted in a loop in order to contribute to $\tilde{U}_{2}$, so that, say, $p_{3}=-p_{4}$, we have $p_{2}=-p_{1}$. It contributes to $\tilde{U}_{2}(p_{1}^{2})$ an amount $\displaystyle\int dp^{2}K^{\prime}(p^{2}){1\over 2}\Delta U_{4}^{I}(p_{1},-p_{1},p,-p)=\int dp^{2}K^{\prime}(p^{2}){1\over 2}c(p_{1}^{2})=\Big{[}c\int dp^{2}K^{\prime}(p^{2})\Big{]}p_{1}^{2}\equiv Ap_{1}^{2}$ This is just a simple wave function renormalization that does not depend on $p_{1}$. There is no contribution to the mass. The same argument applies to all the other permutations of the Type I terms. A simple wave function renormalization $\phi^{\prime 2}=(1+A)\phi^{2}$ can ensure the normalization of the kinetic term. They do not affect the physics or contribute to $\eta$. However, the Type I terms do contribute to the sub-leading-order term of $m^{2}$, i.e. of $U_{2}$. $\tilde{U}_{4}^{I}$ satisfies the following equation: $\displaystyle-\sum_{i=1}^{4}p_{i}\frac{d}{dp_{i}}\frac{1}{8}\tilde{U}_{4}^{I}(p_{1},p_{2};p_{3},p_{4})=\lambda^{2}\frac{N+2}{8}\times\frac{1}{(4\pi)^{2}}\int_{0}^{\infty}dp^{2}K^{\prime}(p^{2})\Bigg{[}\sum_{j}\frac{K(p_{j}^{2})-K(0)}{p_{j}^{2}}-K^{\prime}(p_{j}^{2})\Bigg{]}$ (A.138) The solution is $\displaystyle\tilde{U}_{4}^{I}(p_{1},p_{2};p_{3},p_{4})=$ $\displaystyle-\lambda^{2}\frac{(N+2)}{2}\frac{1}{16\pi^{2}}\sum_{j=1}^{4}\frac{K(p_{j}^{2})-K(0)}{p_{j}^{2}}$ (A.139a) $\displaystyle=$ $\displaystyle\lambda^{2}\frac{(N+2)}{2}\frac{1}{16\pi^{2}}\sum_{j=1}^{4}h(p_{j})$ (A.139b) where $K(p)=e^{-p^{2}}$ is assumed.
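As a consistency check (our addition), one can verify (A.139) directly: using $p\frac{d}{dp}=2p^{2}\frac{d}{dp^{2}}$ and $h(p)=\frac{K(0)-K(p^{2})}{p^{2}}$, one finds $-p\frac{d}{dp}h(p)=2K^{\prime}(p^{2})+2h(p)$, so the left-hand side of (A.138) equals $\frac{\lambda^{2}(N+2)}{8(4\pi)^{2}}\sum_{j}\big{(}h(p_{j})+K^{\prime}(p_{j}^{2})\big{)}$; the right-hand side reduces to the same expression after using $\frac{K(p_{j}^{2})-K(0)}{p_{j}^{2}}=-h(p_{j})$ and $\int_{0}^{\infty}dp^{2}K^{\prime}(p^{2})=K(\infty)-K(0)=-1$.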
##### Type II Diagram

In (A.132), if we keep terms up to $\mathcal{O}(\epsilon^{2})$, $\displaystyle\frac{1}{8}\bigg{[}\sum_{j=1}^{4}p_{j}\frac{d}{dp_{j}}\bigg{]}\tilde{U}_{4}^{II}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=$ $\displaystyle\frac{\lambda^{2}}{4}\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\bigg{\\{}(N+4)h(p+p_{1}+p_{2})+2h(p+p_{1}+p_{3})+2h(p+p_{1}+p_{4})-(N+8)h(p)\bigg{\\}}$ (A.140) where $h(p)=\frac{K(0)-K(p)}{p^{2}}$. It is to be noted that in the momentum-independent part $-\epsilon\frac{\lambda}{4!}$ we have written $\epsilon$ in terms of $\lambda$ using the fixed point value of $\lambda$. The solution at $\mathcal{O}(\epsilon^{2})$, analytic at zero external momenta, is given by $\displaystyle\tilde{U}_{4}^{II}(p_{1},p_{2};p_{3},p_{4})$ $\displaystyle=$ $\displaystyle-\frac{\lambda^{2}}{2}\int\frac{d^{D}p}{(2\pi)^{D}}h(p)\Big{[}(N+4)h(p_{1}+p_{2}+p)+2h(p+p_{1}+p_{3})+2h(p+p_{1}+p_{4})-(N+8)h(p)\Big{]}$ (A.141a) $\displaystyle=$ $\displaystyle-\lambda^{2}\Big{[}(N+4)F(p_{1}+p_{2})+2F(p_{1}+p_{3})+2F(p_{1}+p_{4})\Big{]}$ (A.141b) where $F(q)=\frac{1}{2}\int\frac{d^{D}p}{(2\pi)^{D}}h(p)\Big{(}h(p+q)-h(p)\Big{)}$.

### A.3 Equation for $\tilde{U}_{2}$

From (3.75) we get $\displaystyle 0=$ $\displaystyle\int\frac{d^{D}p}{(2\pi)^{D}}\Big{(}-K^{\prime}(p^{2})\Big{)}\times$ $\displaystyle\bigg{\\{}$ $\displaystyle\frac{1}{8}\Big{[}4N\tilde{U}_{4}^{I}(p_{1},-p_{1};p,-p)+4N\tilde{U}_{4}^{II}(p_{1},-p_{1};p,-p)+8\tilde{U}_{4}^{I}(p_{1},p;-p_{1},-p)+8\tilde{U}_{4}^{II}(p_{1},p;-p_{1},-p)\Big{]}$ $\displaystyle-$ $\displaystyle v_{2}^{(1)}(p)v_{2}^{(1)}(p)\delta^{D}(p-p_{1})\bigg{\\}}-\frac{\eta}{2}p_{1}^{2}+\tilde{U}_{2}(p_{1})-p_{1}^{2}\frac{d\tilde{U}_{2}(p_{1})}{dp_{1}^{2}}$ (A.142) From (A.139a) $\displaystyle\frac{1}{8}\bigg{\\{}4N\tilde{U}_{4}^{I}(p_{1},-p_{1};p,-p)+8\tilde{U}_{4}^{I}(p_{1},p;-p,-p_{1})\bigg{\\}}$ $\displaystyle=$ $\displaystyle\frac{1}{2}(N+2)^{2}\frac{\lambda^{2}}{16\pi^{2}}\bigg{\\{}h(p)+h(p_{1})\bigg{\\}}$ (A.143) and from (A.141) $\displaystyle\frac{1}{8}\bigg{\\{}4N\tilde{U}_{4}^{II}(p_{1},-p_{1};p,-p)+8\tilde{U}_{4}^{II}(p_{1},p;-p,-p_{1})\bigg{\\}}$ $\displaystyle=$ $\displaystyle-\frac{3\lambda^{2}}{2}(N+2)\int_{r}\bigg{\\{}h(r)\Big{[}h(r+p_{1}+p)-h(r)\Big{]}\bigg{\\}}$ (A.144) If we decompose $\tilde{U}_{2}$ into two parts, namely $\tilde{U}_{2}^{I}$ and $\tilde{U}_{2}^{II}$ respectively, in the following way, 1. $\displaystyle\tilde{U}_{2}^{I}(p_{1})-p_{1}^{2}\frac{d\tilde{U}_{2}^{I}(p_{1})}{dp_{1}^{2}}=\int\frac{d^{D}p}{(2\pi)^{D}}K^{\prime}(p^{2})\frac{1}{2}(N+2)^{2}\frac{\lambda^{2}}{16\pi^{2}}h(p_{1})-\big{(}U_{2}^{(1)}\big{)}^{2}K^{\prime}(p_{1}^{2})$ (A.145) which gives $\displaystyle\tilde{U}_{2}^{I}(p_{1})=-\frac{\lambda^{2}}{(16\pi^{2})^{2}}\frac{(N+2)^{2}}{4}h(p_{1})$ (A.146) 2. $\displaystyle-2\tilde{U}_{2}^{II}(p_{1})+2p_{1}^{2}\frac{d\tilde{U}_{2}^{II}(p_{1})}{dp_{1}^{2}}$ $\displaystyle=$ $\displaystyle-6\lambda^{2}(N+2)\int\frac{d^{D}p}{(2\pi)^{D}}\Big{(}-K^{\prime}(p^{2})\Big{)}F(p_{1}+p)+(N+2)^{2}\frac{\lambda^{2}}{16\pi^{2}}\int\frac{d^{D}p}{(2\pi)^{D}}\Big{(}-K^{\prime}(p^{2})\Big{)}h(p)-\eta~{}p_{1}^{2}$ (A.147) which gives $\displaystyle\tilde{U}_{2}^{II}(p_{1})=~{}$ $\displaystyle p_{1}^{2}\int_{p^{2}=0}^{p_{1}^{2}}dp^{2}\frac{\int\frac{d^{D}q}{(2\pi)^{D}}\Big{\\{}-6\lambda^{2}(N+2)(-K^{\prime}(q^{2}))F(p+q)\Big{\\}}-\eta~{}p^{2}}{2p^{4}}-\frac{(N+2)^{2}}{4}\frac{\lambda^{2}}{(16\pi^{2})^{2}}$ (A.148) The second term in the expression of $\tilde{U}_{2}^{II}$ is evaluated using $K(p)=e^{-p^{2}}$.
Hence the full expression of $\tilde{U}_{2}(p_{1})$ is given by $\displaystyle\tilde{U}_{2}(p_{1})=-$ $\displaystyle\frac{\lambda^{2}}{(16\pi^{2})^{2}}\frac{(N+2)^{2}}{4}h(p_{1})$ $\displaystyle+$ $\displaystyle p_{1}^{2}\int_{p^{2}=0}^{p_{1}^{2}}dp^{2}\frac{\int\frac{d^{D}q}{(2\pi)^{D}}\Big{\\{}-6\lambda^{2}(N+2)(-K^{\prime}(q^{2}))F(p+q)\Big{\\}}-\eta~{}p^{2}}{2p^{4}}-\frac{(N+2)^{2}}{4}\frac{\lambda^{2}}{(16\pi^{2})^{2}}$ (A.149)

### A.4 Expression for $\eta$

Only Type II diagrams contribute to $\eta$, because we need the external momentum to flow through the loop to get a momentum dependence in $U_{2}$. This can happen only in Type II terms, and only for certain contractions. (The calculation in this section requires us to go back to the bar-denoted dimensionless variables, so the $p$'s from the last section are replaced with $\bar{p}$.) From (3.76) we have $\frac{\eta}{2}=-\frac{1}{8}\frac{d}{d\bar{r}^{2}}\int_{\bar{q}}K^{\prime}(\bar{q}^{2})\bigg{\\{}4N\tilde{U}_{4}^{II}(\bar{q},-\bar{q};\bar{r},-\bar{r})+8\tilde{U}_{4}^{II}(\bar{q},\bar{r};-\bar{r},-\bar{q})\bigg{\\}}~{}\Bigg{|}_{\bar{r}^{2}=0}$ (A.150) We can convert differentiation w.r.t. $p_{j}$ into that w.r.t. $\Lambda$, i.e. $\displaystyle-\sum_{j=1}^{4}\bar{p}_{j}\frac{d}{d\bar{p}_{j}}=\Lambda\frac{d}{d\Lambda}$ So (A.140) gives the following expression for $\tilde{U}_{4}^{II}$: $\displaystyle\frac{1}{8}$ $\displaystyle\tilde{U}_{4}^{II}(\frac{p_{1}}{\Lambda},\frac{p_{2}}{\Lambda};\frac{p_{3}}{\Lambda},\frac{p_{4}}{\Lambda})$ $\displaystyle=$ $\displaystyle\frac{\lambda^{2}}{4}\int_{0}^{\ln\Lambda}d\ln\Lambda^{\prime}~{}\int_{\bar{p}}K^{\prime}(\bar{p}^{2})\bigg{[}(N+4)h(\bar{p}+\frac{p_{1}}{\Lambda^{\prime}}+\frac{p_{2}}{\Lambda^{\prime}})+2h(\bar{p}+\frac{p_{1}}{\Lambda^{\prime}}+\frac{p_{3}}{\Lambda^{\prime}})+2h(\bar{p}+\frac{p_{1}}{\Lambda^{\prime}}+\frac{p_{4}}{\Lambda^{\prime}})-(N+8)h(\bar{p})\bigg{]}$ (A.151) Hence $\displaystyle\frac{1}{8}\bigg{\\{}4N\tilde{U}_{4}^{II}(\bar{q},-\bar{q};\bar{r},-\bar{r})+8\tilde{U}_{4}^{II}(\bar{q},\bar{r};-\bar{r},-\bar{q})\bigg{\\}}$ $\displaystyle=$ $\displaystyle\frac{\lambda^{2}}{4}\int_{0}^{\ln\Lambda}d\ln\Lambda^{\prime}~{}\int_{\bar{p},\bar{r}}K^{\prime}(\bar{p}^{2})\bigg{\\{}(12N+48)h(\bar{p}+\frac{q}{\Lambda^{\prime}}+\frac{r}{\Lambda^{\prime}})+(12N+48)h(\bar{p}+\frac{q}{\Lambda^{\prime}}-\frac{r}{\Lambda^{\prime}})-24(N+2)h(\bar{p})\bigg{\\}}$ (A.152) So we need to find the coefficient of $\bar{r}^{2}$ in $\Big{[}h(\bar{p}+\frac{q}{\Lambda^{\prime}}+\frac{r^{\prime}}{\Lambda^{\prime}})+h(\bar{p}+\frac{q}{\Lambda^{\prime}}-\frac{r^{\prime}}{\Lambda^{\prime}})\Big{]}$ which is calculated as $\displaystyle{1\over 2}\frac{r^{\mu}r^{\nu}}{\Lambda^{\prime 2}}\frac{d^{2}}{dr^{\prime\mu}dr^{\prime\nu}}\Big{[}h(\bar{p}+\frac{q}{\Lambda^{\prime}}+\frac{r^{\prime}}{\Lambda^{\prime}})+h(\bar{p}+\frac{q}{\Lambda^{\prime}}-\frac{r^{\prime}}{\Lambda^{\prime}})\Big{]}\Bigg{|}_{r^{\prime}=0}$ $\displaystyle=$ $\displaystyle\frac{r^{\mu}r^{\nu}}{\Lambda^{\prime 2}}\bigg{(}\frac{d^{2}}{d\bar{r}^{\prime\mu}d\bar{r}^{\prime\nu}}h(\bar{p}+\frac{q}{\Lambda^{\prime}}+\bar{r}^{\prime})\bigg{)}\Bigg{|}_{\bar{r}^{\prime}=0}$ $\displaystyle=$ $\displaystyle\frac{\bar{r}^{2}}{4}\bigg{(}\frac{d^{2}}{d\bar{r}^{\mu}d\bar{r}_{\mu}}h(\bar{p}+\frac{q}{\Lambda^{\prime}}+\bar{r})\bigg{)}\Bigg{|}_{\bar{r}=0}$ $\displaystyle=$ $\displaystyle-\frac{\bar{r}^{2}}{4}\frac{d^{2}}{d\bar{r}^{\mu}d\bar{r}_{\mu}}\frac{K(\bar{r}^{2})-1}{\bar{r}^{2}}\Bigg{|}_{\bar{r}=\bar{p}+\frac{q}{\Lambda^{\prime}}}$ $\displaystyle=$
$\displaystyle\bar{r}^{2}K^{\prime\prime}((\bar{p}+\frac{q}{\Lambda^{\prime}})^{2})$ (A.153) where we have used the facts that in 4 dimensions $\frac{d^{2}}{dp^{\mu}dp_{\mu}}\frac{1}{p^{2}}\propto\delta^{4}(p)$ and that $K(0)=1$. From (A.150), (A.152) and (A.153) we get $\frac{\eta}{2}=3\lambda^{2}(N+2)\int_{\bar{q}}K^{\prime}(\bar{q}^{2})\int_{0}^{\ln\Lambda}d\ln\Lambda^{\prime}~{}(\frac{\Lambda}{\Lambda^{\prime}})^{2}\int_{\bar{p}}K^{\prime}(\bar{p}^{2})K^{\prime\prime}((\bar{p}+\frac{q}{\Lambda^{\prime}})^{2})$ (A.154)

Evaluation of the integral: Let us use $\bar{q}^{\prime}=\frac{q}{\Lambda^{\prime}}$ and $\Lambda^{\prime}$ as variables of integration, rather than $\bar{q}=\frac{q}{\Lambda}$ and $\Lambda^{\prime}$. So change variables: $\bar{q}=\bar{q}^{\prime}\frac{\Lambda^{\prime}}{\Lambda}~{};~{}~{}~{}\bar{q}^{2}=\bar{q}^{\prime 2}\Big{(}\frac{\Lambda^{\prime}}{\Lambda}\Big{)}^{2}~{};~{}~{}~{}\int d^{4}\bar{q}=\int d^{4}\bar{q}^{\prime}\Big{(}\frac{\Lambda^{\prime}}{\Lambda}\Big{)}^{4}$ to get $\frac{\eta}{2}=-3\lambda^{2}(N+2)\int_{0}^{\ln\Lambda}d\ln\Lambda^{\prime}~{}\int_{\bar{q}^{\prime}}\Big{(}\frac{\Lambda^{\prime}}{\Lambda}\Big{)}^{-2}K^{\prime}(\bar{q}^{\prime 2})\Big{(}\frac{\Lambda^{\prime}}{\Lambda}\Big{)}^{2}\int_{\bar{p}}K^{\prime}(\bar{p}^{2})K^{\prime\prime}((\bar{p}+\frac{q}{\Lambda^{\prime}})^{2})$ Using $K^{\prime}(\bar{q}^{\prime 2})=\frac{dK}{d\Lambda^{\prime}}\frac{d\Lambda^{\prime}}{d\bar{q}^{\prime 2}}=-\frac{\Lambda^{\prime}}{2\bar{q}^{\prime 2}}\frac{dK}{d\Lambda^{\prime}}$ we get $\frac{\eta}{2}=-3\lambda^{2}(N+2)\int_{0}^{\Lambda}d\Lambda^{\prime}~{}\frac{dK}{d\Lambda^{\prime}}\int_{\bar{q}^{\prime}}\frac{1}{2\bar{q}^{\prime 2}}\int_{\bar{p}}K^{\prime}(\bar{p}^{2})K^{\prime\prime}((\bar{p}+\bar{q}^{\prime})^{2})$ Since $\bar{q}^{\prime}$ is an independent variable we can write this as $\frac{\eta}{2}=-3\lambda^{2}(N+2)\int_{\bar{q}^{\prime}}\int_{0}^{\Lambda}d\Lambda^{\prime}~{}\frac{dK}{d\Lambda^{\prime}}\frac{1}{2\bar{q}^{\prime 2}}\int_{\bar{p}}K^{\prime}(\bar{p}^{2})K^{\prime\prime}((\bar{p}+\bar{q}^{\prime})^{2})$ The integral over $\bar{p}$ is a function of $\bar{q}^{\prime}$ and not $\Lambda^{\prime}$. So we can do the $\Lambda^{\prime}$ integral easily. Using $K(\infty)=0$ we get $\frac{\eta}{2}=-\frac{3\lambda^{2}}{2}(N+2)\underbrace{\int_{\bar{q}^{\prime}}~{}K(\bar{q}^{\prime 2})\frac{1}{\bar{q}^{\prime 2}}\int_{\bar{p}}K^{\prime}(\bar{p}^{2})K^{\prime\prime}((\bar{p}+\bar{q}^{\prime})^{2})}_{-\frac{\pi^{4}}{6(2\pi)^{8}}}=\frac{1}{4}\lambda^{2}(N+2)\frac{1}{(16\pi^{2})^{2}}$ The integral underbraced above is calculated to give $-\frac{\pi^{4}}{6(2\pi)^{8}}$ for $K(x)=e^{-x}$, but it can be shown to give an identical result for any smooth $K(x)$ [56]. Using $\lambda=\frac{16\pi^{2}}{N+8}\epsilon$ we can write the anomalous dimension as: $\frac{\eta}{2}=\frac{1}{4}\lambda^{2}(N+2)\frac{1}{(16\pi^{2})^{2}}=\frac{N+2}{(N+8)^{2}}\frac{\epsilon^{2}}{4}$ (A.155)

## Appendix B Asymptotic behaviors of $F(p)$ and $G(p)$

The function $F(p)$ is defined by $\left(p\cdot\partial_{p}+\epsilon\right)F(p)=\int_{q}f(q)\Big{(}h(q+p)-h(q)\Big{)}$ (B.156) For large $p$, we obtain an equation satisfied by the asymptotic form $F_{\mathrm{asymp}}(p)$: $\left(p\cdot\partial_{p}+\epsilon\right)F_{\mathrm{asymp}}(p)=-\int_{q}f(q)h(q)=-\frac{1}{(4\pi)^{2}}+\mathrm{O}(\epsilon)$ (B.157) This implies $F_{\mathrm{asymp}}(p)=-\frac{1}{\epsilon}\int_{q}f(q)h(q)+C_{F}(\epsilon)p^{-\epsilon}$ (B.158) where $C_{F}(\epsilon)$ is independent of $p$.
Since $F(p)$ is finite in the limit $\epsilon\to 0+$, we must find $C_{F}(\epsilon)=\frac{1}{\epsilon}\frac{1}{(4\pi)^{2}}+\cdots$ (B.159) Hence, expanding in $\epsilon$, we obtain $F_{\mathrm{asymp}}(p)=-\frac{1}{(4\pi)^{2}}\ln p+\mathrm{const}+\mathrm{O}(\epsilon)$ (B.160) We next consider $G(p)$ satisfying $\left(p\cdot\partial_{p}-2+2\epsilon\right)G(p)=\int_{q}f(q)F(q+p)+2v_{2}\int_{q}f(q)h(q)+\eta^{(2)}p^{2}$ (B.161) where $\eta^{(2)}=-\frac{d}{dp^{2}}\int_{q}f(q)F(q+p)\Big{|}_{p=0}=\frac{1}{6(4\pi)^{4}}+\mathrm{O}(\epsilon)$ (B.162) The asymptotic form $G_{\mathrm{asymp}}(p)$ satisfies $\left(p\cdot\partial_{p}-2+2\epsilon\right)G_{\mathrm{asymp}}(p)=\eta^{(2)}p^{2}$ (B.163) This gives $G_{\mathrm{asymp}}(p)=\frac{1}{2\epsilon}\eta^{(2)}p^{2}+C_{G}(\epsilon)p^{2-2\epsilon}$ (B.164) Since $G(p)$ is finite as $\epsilon\to 0+$, we obtain $C_{G}(\epsilon)=-\frac{1}{\epsilon}\frac{1}{12(4\pi)^{4}}+\cdots$ (B.165) Hence, $G_{\mathrm{asymp}}(p)=p^{2}\left(\frac{1}{6(4\pi)^{4}}\ln p+\mathrm{const}\right)+\mathrm{O}(\epsilon)$ (B.166)

## References

* [1] A. M. Polyakov, "Conformal symmetry of critical fluctuations," JETP Lett. 12, 381 (1970) [Pisma Zh. Eksp. Teor. Fiz. 12, 538 (1970)].
* [2] A. M. Polyakov, "Nonhamiltonian approach to conformal quantum field theory," Zh. Eksp. Teor. Fiz. 66, 23 (1974) [Sov. Phys. JETP 39, 9 (1974)].
* [3] A. M. Polyakov, A. A. Belavin and A. B. Zamolodchikov, "Infinite Conformal Symmetry of Critical Fluctuations in Two-Dimensions," J. Statist. Phys. 34, 763 (1984). doi:10.1007/BF01009438
* [4] P. Di Francesco, P. Mathieu and D. Senechal, "Conformal Field Theory," doi:10.1007/978-1-4612-2256-9
* [5] S. Rychkov, "EPFL Lectures on Conformal Field Theory in $D\geq 3$ Dimensions," doi:10.1007/978-3-319-43626-5 arXiv:1601.05000 [hep-th].
* [6] J. M. Maldacena, "The Large N limit of superconformal field theories and supergravity," Int. J. Theor. Phys. 38, 1113 (1999) [Adv. Theor. Math. Phys. 2, 231 (1998)] doi:10.1023/A:1026654312961 arXiv:hep-th/9711200.
* [7] S. S. Gubser, I. R. Klebanov, and A. M. Polyakov, "Gauge theory correlators from non-critical string theory," Phys. Lett. B428 (1998) 105-114, arXiv:hep-th/9802109.
* [8] E. Witten, "Anti-de Sitter space and holography," Adv. Theor. Math. Phys. 2 (1998) 253-291, arXiv:hep-th/9802150.
* [9] E. Witten, "Anti-de Sitter space, thermal phase transition, and confinement in gauge theories," Adv. Theor. Math. Phys. 2, 505 (1998) arXiv:hep-th/9803131.
* [10] J. Penedones, "TASI lectures on AdS/CFT," doi:10.1142/9789813149441-0002 arXiv:1608.04948 [hep-th].
* [11] K. G. Wilson and J. B. Kogut, "The Renormalization group and the epsilon expansion," Phys. Rept. 12, 75 (1974). doi:10.1016/0370-1573(74)90023-4
* [12] F. J. Wegner and A. Houghton, "Renormalization group equation for critical phenomena," Phys. Rev. A8 (1973) 401-412.
* [13] K. G. Wilson, "The renormalization group and critical phenomena," Rev. Mod. Phys. 55 (1983) 583-600.
* [14] J. Polchinski, "Renormalization and Effective Lagrangians," Nucl. Phys. B231, 269 (1984). doi:10.1016/0550-3213(84)90287-6
* [15] E. T. Akhmedov, "A Remark on the AdS / CFT correspondence and the renormalization group flow," Phys. Lett. B442 (1998) 152-158, arXiv:hep-th/9806217 [hep-th].
* [16] E. T. Akhmedov, "Notes on multitrace operators and holographic renormalization group." Talk given at 30 Years of Supersymmetry, Minneapolis, Minnesota, 13-27 Oct 2000, and at Workshop on Integrable Models, Strings and Quantum Gravity, Chennai, India, 15-19 Jan 2002.
arXiv: hep-th/0202055
* [17] E. T. Akhmedov, I. B. Gahramanov, E. T. Musaev, "Hints on integrability in the Wilsonian/holographic renormalization group," arXiv:1006.1970 [hep-th].
* [18] E. Alvarez and C. Gomez, "Geometric holography, the renormalization group and the c theorem," Nucl. Phys. B541 (1999) 441-460, arXiv:hep-th/9807226 [hep-th].
* [19] V. Balasubramanian and P. Kraus, "Space-time and the holographic renormalization group," Phys. Rev. Lett. 83 (1999) 3605-3608, arXiv:hep-th/9903190 [hep-th].
* [20] D. Freedman, S. Gubser, K. Pilch, and N. Warner, "Renormalization group flows from holography supersymmetry and a c theorem," Adv. Theor. Math. Phys. 3 (1999) 363-417, arXiv:hep-th/9904017 [hep-th].
* [21] J. de Boer, E. P. Verlinde, and H. L. Verlinde, "On the holographic renormalization group," JHEP 08 (2000) 003, arXiv:hep-th/9912012.
* [22] J. de Boer, "The Holographic renormalization group," Fortsch. Phys. 49 (2001) 339-358, arXiv:hep-th/0101026 [hep-th].
* [23] T. Faulkner, H. Liu, and M. Rangamani, "Integrating out geometry: Holographic Wilsonian RG and the membrane paradigm," JHEP 1108, 051 (2011) doi:10.1007/JHEP08(2011)051 arXiv:1010.4036 [hep-th].
* [24] I. R. Klebanov and E. Witten, "AdS / CFT correspondence and symmetry breaking," Nucl. Phys. B556, 89 (1999) doi:10.1016/S0550-3213(99)00387-9 arXiv:hep-th/9905104.
* [25] I. Heemskerk and J. Polchinski, "Holographic and Wilsonian Renormalization Groups," JHEP 1106, 031 (2011) doi:10.1007/JHEP06(2011)031 arXiv:1010.1264 [hep-th].
* [26] J. M. Lizana, T. R. Morris, and M. Perez-Victoria, "Holographic renormalisation group flows and renormalisation from a Wilsonian perspective," JHEP 1603, 198 (2016) doi:10.1007/JHEP03(2016)198 arXiv:1511.04432 [hep-th].
* [27] A. Bzowski, P. McFadden, and K. Skenderis, "Scalar 3-point functions in CFT: renormalisation, beta functions and anomalies," JHEP 1603, 066 (2016) doi:10.1007/JHEP03(2016)066 arXiv:1510.08442 [hep-th].
* [28] S. de Haro, S. N. Solodukhin, and K. Skenderis, "Holographic reconstruction of space-time and renormalization in the AdS / CFT correspondence," Comm. Math. Phys. 217, 595 (2001) doi:10.1007/s002200100381 arXiv:hep-th/0002230.
* [29] B. Sathiapalan and H. Sonoda, "A Holographic form for Wilson's RG," Nucl. Phys. B 924, 603 (2017) doi:10.1016/j.nuclphysb.2017.09.018 [arXiv:1706.03371 [hep-th]].
* [30] B. Sathiapalan and H. Sonoda, "Holographic Wilson's RG," arXiv:1902.02486 [hep-th].
* [31] C. G. Callan, Jr., S. R. Coleman and R. Jackiw, "A New improved energy - momentum tensor," Annals Phys. 59, 42 (1970). doi:10.1016/0003-4916(70)90394-5
* [32] S. R. Coleman and R. Jackiw, "Why dilatation generators do not generate dilatations?," Annals Phys. 67, 552 (1971). doi:10.1016/0003-4916(71)90153-9
* [33] L. S. Brown, "Dimensional Regularization of Composite Operators in Scalar Field Theory," Annals Phys. 126, 135 (1980). doi:10.1016/0003-4916(80)90377-2
* [34] J. Polchinski, "Scale and Conformal Invariance in Quantum Field Theory," Nucl. Phys. B303 (1988) 226-236. doi:10.1016/0550-3213(88)90179-4
* [35] H. Sonoda, "Construction of the Energy-Momentum Tensor for Wilson Actions," Phys. Rev. D 92, no. 6, 065016 (2015). doi:10.1103/PhysRevD.92.065016 [arXiv:1504.02831 [hep-th]].
* [36] H. Sonoda, "The Generating Functional of correlation functions as a high momentum limit of a Wilson action," Progress of Theoretical and Experimental Physics, Issue 12, 123B01 (2017). doi:10.1093/ptep/ptx152 arXiv:1706.00198v3 [hep-th].
* [37] O. J.
Rosten, "A Wilsonian Energy-Momentum Tensor," arXiv:1605.01055 [hep-th].
* [38] B. Sathiapalan, "Loop Variables, the Renormalization Group and Gauge Invariant Equations of Motion in String Field Theory," Nucl. Phys. B326, 376 (1989). doi:10.1016/0550-3213(89)90137-5
* [39] E. Witten, "Branes and the dynamics of QCD," Nucl. Phys. B507, 658 (1997) doi:10.1016/S0550-3213(97)00648-2 arXiv:hep-th/9706109.
* [40] T. R. Morris, "The Exact renormalization group and approximate solutions," Int. J. Mod. Phys. A 9, 2411 (1994) doi:10.1142/S0217751X94000972 arXiv:hep-ph/9308265.
* [41] C. Becchi, "On the construction of renormalized gauge theories using renormalization group techniques," arXiv:hep-th/9607188.
* [42] C. Bagnuls and C. Bervillier, "Exact renormalization group equations and the field theoretical approach to critical phenomena," Int. J. Mod. Phys. A 16, 1825 (2001) doi:10.1142/S0217751X01004505 hep-th/0101110.
* [43] C. Bagnuls and C. Bervillier, "Exact renormalization group equations. An Introductory review," Phys. Rept. 348, 91 (2001) doi:10.1016/S0370-1573(00)00137-X hep-th/0002034.
* [44] Y. Igarashi, K. Itoh, and H. Sonoda, "Realization of Symmetry in the ERG Approach to Quantum Field Theory," Prog. Theor. Phys. Suppl. 181, 1 (2010) doi:10.1143/PTPS.181.1 arXiv:0909.0327 [hep-th].
* [45] O. J. Rosten, "Fundamentals of the Exact Renormalization Group," Phys. Rep. 511 (2012) 177-272, arXiv:1003.1366 [hep-th].
* [46] O. J. Rosten, "On Functional Representations of the Conformal Algebra," arXiv:1411.2603 [hep-th].
* [47] O. J. Rosten, "A Conformal Fixed-Point Equation for the Effective Average Action," arXiv:1605.01729 [hep-th].
* [48] O. J. Rosten, "Wilsonian Ward Identities," arXiv:1705.05837 [hep-th].
* [49] H. Sonoda, "Conformal invariance for Wilson actions," arXiv:1705.01239 [hep-th].
* [50] C. Wetterich, "Exact evolution equation for the effective potential," Phys. Lett. B301, 90 (1993). doi:10.1016/0370-2693(93)90726-X
* [51] W. Mück and K. S. Viswanathan, "Conformal field theory correlators from classical scalar field theory on AdS(d+1)," Phys. Rev. D58, 041901 (1998) doi:10.1103/PhysRevD.58.041901 arXiv:hep-th/9804035.
* [52] Prafulla Oak and B. Sathiapalan, "Exact Renormalization Group and Sine Gordon Theory," arXiv:1703.01591 [hep-th].
* [53] G. Vidal, "Entanglement Renormalization," Phys. Rev. Lett. 99, no. 22, 220405 (2007) doi:10.1103/PhysRevLett.99.220405 [cond-mat/0512165].
* [54] H. Sonoda, "Equivalence of Wilson actions," PTEP 2015, no. 10, 103B01 (2015) doi:10.1093/ptep/ptv130 [arXiv:1503.08578 [hep-th]].
* [55] Y. Igarashi, K. Itoh and H. Sonoda, "On the wave function renormalization for Wilson actions and their one particle irreducible actions," PTEP 2016, no. 9, 093B04 (2016) doi:10.1093/ptep/ptw121 [arXiv:1607.01521 [hep-th]].
* [56] J. Hughes and J. Liu, "$\beta$-functions and the exact renormalization group," Nuclear Physics B 307 (1988) 183-197. doi:10.1016/0550-3213(88)90528-7
# State-of-the-Art Augmented NLP Transformer models for direct and single-step retrosynthesis, which outperform all published approaches

Igor V. Tetko (Institute of Structural Biology, Helmholtz Zentrum München, and BigChem GmbH, Unterschleißheim, Germany; <EMAIL_ADDRESS>), Pavel Karpov (Institute of Structural Biology, Helmholtz Zentrum München, and BigChem GmbH, Unterschleißheim, Germany; <EMAIL_ADDRESS>), Ruud Van Deursen (Firmenich International SA, Research & Development Division, Geneva, Switzerland; <EMAIL_ADDRESS>), Guillaume Godin (Firmenich International SA, Research & Development Division, Geneva, Switzerland; <EMAIL_ADDRESS>)

Helmholtz Zentrum München – Research Center for Environmental Health (GmbH), Institute of Structural Biology, Ingolstädter Landstraße 1, D-85764 Neuherberg, Germany

###### Abstract

We investigated the effect of different training scenarios on predicting the (retro)synthesis of chemical compounds using a text-like representation of chemical reactions (SMILES) and the Natural Language Processing neural network Transformer architecture. We showed that data augmentation, which is a powerful method used in image processing, eliminated the effect of data memorization by neural networks and improved their performance for the prediction of new sequences. This effect was observed when augmentation was applied to the input and the target data simultaneously. The Top-5 accuracy was 84.8% for the prediction of the largest fragment (thus identifying the principal transformation for classical retrosynthesis) on the USPTO-50k test dataset, and was achieved by a combination of SMILES augmentation and a beam search algorithm. The same approach provided significantly better results for the prediction of direct reactions from the single-step USPTO-MIT test set. Our model achieved 90.6% Top-1 and 96.1% Top-5 accuracy for its challenging mixed set and 97% Top-5 accuracy for the USPTO-MIT separated set. It also significantly improved results for single-step retrosynthesis on the USPTO-full set for both Top-1 and Top-10 accuracies. The appearance frequency of the most abundantly generated SMILES was well correlated with the prediction outcome and can be used as a measure of the quality of reaction prediction.

#### Synopsis

Training and application of neural networks with randomized sequences significantly improves direct and retrosynthetic models and can be used to estimate the quality of reaction predictions.

## 1 Introduction

To synthesize an organic compound is to solve a puzzle with many pieces and potentially several pieces missing. Here, the pieces are single reactions, and finding their sequential combination to create a final product is the retrosynthesis task. The success of the logic of organic synthesis developed by E. J. Corey [1] triggered the development of computer programs aiming to find appropriate ways to synthesize a molecule. The first retrosynthesis program, LHASA [2], utilizes a template-based [3, 4] approach. Every template (rule, synthon) in a curated database of known transformations is sequentially applied to a target molecule, and then sets of reagents are selected according to a specified strategy. Reagents, in turn, undergo the same decompositions until a set of commercially available compounds is found. Retrosynthesis always has multiple routes – a retrosynthetic tree – ending with different starting materials.
Thus, a practical algorithm for retrosynthesis has to solve not only the rule acquisition and selection problem but also needs the capability to effectively navigate this tree [5], taking into account different strategies. These tasks relate directly to artificial intelligence strategies [6, 7, 8]. Due to the difficulty of maintaining template databases, most projects depending on them, including LHASA, did not become widely used tools. The only major exception is, perhaps, the program Synthia™ (previously CHEMATICA [9]), which is a successful commercial product. In the Synthia™ program, rules are automatically extracted from atom-mapped reaction examples [10]. However, there is an ambiguity in the mapping definition and, more importantly, an automatically extracted rule does not take into account other possible, undefined reactive centers in a molecule. Applying such transformations may result in molecules that fail to react as predicted, i.e. 'out-of-scope' predictions, and special care has to be taken to filter out these cases [5]. An alternative to the extraction of explicit rules is a data-driven deep learning approach, in which an algorithm (usually in the form of a neural network) is trained on the raw data. After the training finishes, the network implicitly encodes the features (rules) of the corresponding input via its parameters. Works on the prediction of reaction outcomes [11] and retrosynthesis [12, 13] showed the feasibility of a symbolic approach, where reactions are written as SMILES [14] strings and treated as a machine translation task. The product is written in the "source language", whereas the set of reactants is written in the "target language". For the "reaction translation" task, however, both languages are SMILES strings, having the same alphabet and grammar. The first works on symbolic (retro)synthesis [12, 15] were carried out with Seq2Seq [16] models, followed by the more robust and easier-to-train Transformer approaches [17, 18] that brought state-of-the-art results [11, 19]. Meanwhile, other approaches based on similarity [20], convolutional networks [21, 22, 23], and graphs [24, 25] show promising results. The SMILES representation of molecules is ambiguous. Though a canonicalization procedure exists [26], it has been shown that models benefit from using a batch of random SMILES (augmentation) during training and inference [27, 28, 29, 30]. Recently, such augmentation was also applied to reaction modeling [11, 18, 31, 32]. The augmented (also sometimes called "random") SMILES are all valid representations of the same structure, except that the starting atom and the direction of the graph enumeration are selected randomly. In this article, we scrutinize various augmentation regimes and show that augmentation leads to better performance compared to the standard beam search inference or evaluation of the model under different temperatures. We emphasize that our study addresses single-step and not multi-step retrosynthesis, which has also been targeted using the Transformer [33, 34]. We show that by using more complicated data augmentation strategies we decrease the overfitting [35] of neural networks and increase their accuracy, achieving top performance for both direct and retrosynthesis. We observe that the harder the data are to train on, the better the model predicts new data. Moreover, we introduce a new measure, MaxFrag accuracy, for the prediction of the largest fragment (thus identifying the principal transformation for classical retrosynthesis).
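As an illustration of the SMILES augmentation described above, randomized SMILES can be generated with RDKit. This is a minimal sketch of the idea (our own example, not the exact code used in this work; the function name and parameters are illustrative):

```python
from rdkit import Chem

def randomized_smiles(smiles: str, n: int = 5) -> list:
    """Generate n randomized (non-canonical) SMILES for one molecule.

    With doRandom=True, RDKit starts the graph enumeration from a random
    atom and in a random direction, so each output is a valid SMILES of
    the same molecule written differently.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"cannot parse SMILES: {smiles}")
    return [Chem.MolToSmiles(mol, canonical=False, doRandom=True)
            for _ in range(n)]

# Example: all returned strings parse back to the same canonical structure.
print(randomized_smiles("COc1ccccc1", n=3))
```

All generated strings canonicalize back to the same molecule, which is what later allows multiple predictions for the same input to be merged and counted.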
## 2 Results and discussion

The baseline dataset contained only canonical SMILES. The other datasets also contained SMILES, augmented as described in the Methods section. Four different scenarios were used to augment training set sequences. Namely, we used augmentation of products only (xN), augmentation of products and reactants/reagents (xNF), augmentation of products and reactants/reagents followed by shuffling of the order of reactants/reagents (xNS), and finally mixed forward/reverse reactions, where each retrosynthesis reaction from xNS was followed by the inverse (forward synthesis) reaction (xNM). Only the simplest augmentation, xN, was used for test sets, because no information about reactants/reagents could be used for the retrosynthesis prediction. At least one copy of the canonical SMILES for each reaction was present in all augmentation scenarios.

### 2.1 Reaction Synthesis Data

We used a training set filtered from the USPTO database [36] containing 50k reactions classified into 10 reaction types. We used the splitting proposed by [12] and divided it into 40k, 5k and 5k reactions for the training, validation, and test sets, respectively. As in the previous study [13], after observing that early stopping using the validation set did not improve model test accuracy (the model performance for each of the sets increased monotonically with the number of iterations, see Fig. A1), we combined the training and the validation sets into a combined training set. The 5k test reactions were predicted only once the model training was finished and were not used at any stage of the model development. In a similar way, we joined the training and validation sets of the USPTO-MIT [22] dataset for direct reaction prediction. In order to provide a more straightforward comparison with the results of previous studies, we also report performances of models developed using only the respective training sets. Moreover, a model was also developed with the largest published USPTO-full set [24].

### 2.2 Analysis of canonical datasets

The development of a model with canonical SMILES (x1) as the training set provided 40.9% accuracy for prediction of the canonical test set. An attempt to use this model to predict the augmented test sets (x5, x10) resulted in much lower Top-1 accuracies of 23.3% and 18.4%, respectively. This result was to be expected, because the model trained with only canonical sequences was not able to generalize and predict augmented SMILES, which use different styles of molecular representation.

### 2.3 Augmentation of products only (xN)

The augmentation of the products (input data) with just one additional augmented SMILES (x2) increased the Top-1 accuracy to 43.7% for the test data composed of canonical sequences. Increasing the number of augmentations in the training set did not increase the Top-1 prediction accuracy. Thus, the augmentation of the training set with just one random SMILES contributed the best performance. This result is in concordance with another study where only one random SMILES was used to augment the data [18].

### 2.4 Analysis of character- and exact-sequence-based prediction accuracy

To better understand the model training, we also developed several models where approximately 10% of the dataset did not participate in training but was used to monitor the prediction performance.
Different from the test set, which tested the performance of models when predicting a new reaction, the monitoring set tested the ability of the Transformer to predict different SMILES generated for the same reaction. The Transformer was able to recognize different representations of the same reaction. For example, when training on x1, the character- and exact-sequence-based accuracies when predicting the monitoring sequences were 96.5% and 34.5%, respectively. The final performance for the test set, 40.9%, was higher because some reaction products from the Transformer were provided as non-canonical SMILES, which were correctly matched after transformation to canonical ones. When using augmented training sequences (x10), the accuracies increased to 99.97% and 98.9% for character- and exact-sequence-based accuracy, respectively (see Fig. 1). The Transformer recognized different representations of SMILES for reactants and reagents of the same training set reaction and was able to exactly restore the target products, which were memorized. Demonstrably, it was also able to memorize arbitrary random sequences. To show this, we used a random SMILES sequence (xNR set in Tables A1 and A2 and Fig. A2) instead of the canonical sequence as the target for prediction. While this task was more difficult and took more epochs to train, the Transformer was able to perfectly memorize random sequences. Since the SMILES prediction target was random, the Transformer was not able to learn canonicalization rules on how to write the target. Despite this fact, it still calculated a Top-1 prediction accuracy of 26.8% for the test set, which was, however, significantly lower compared to the 42.2% achieved using the x10 dataset with canonical sequences as the target.

Figure 1: Character- and exact-sequence-based accuracies calculated for the monitoring set. The Transformer memorized the target sequences if the target sequences were all canonical SMILES (red dots). It also reasonably predicted the sequence composition for randomized target SMILES (cyan rectangle, dashed), but its performance decreased for prediction of exact full SMILES (cyan circle). The performance normalized by the percentage of canonical sequences increased with the number of augmentations, N, since some of the random sequences were canonical ones.

### 2.5 Augmentation of reactants and reagents

A boost of the Transformer performance was observed when, in addition to the products, i.e. the input SMILES, we also augmented the target SMILES, i.e. reactants and reagents. This task was more difficult for the Transformer, which resulted in a drop in both character- and sequence-based scores for monitoring sequences during the training stage. For example, when using the training dataset with one augmented SMILES, x2F, the character-based accuracy dropped to 91.3%, which was lower than the 98.6% calculated with the x2 dataset composed of canonical product SMILES (Fig. 1). For a larger number of augmentations, the character-based accuracy converged to a plateau, e.g. 89.96% and 89.67% for the x5F and x20F training sets, respectively. The character-based accuracy was calculated as the percentage of matching characters between the target and predicted sequences, e.g. "CCCCN" and "NCCCC" have an accuracy of 80%, despite being the same SMILES written from different starting atoms. Thus, despite the fact that the Transformer faced a prediction of random SMILES, it was still able to provide a reasonable prediction of their character composition.
However, of course, the Transformer was not able to predict the exact random target SMILES. This resulted in a decrease in sequence-based accuracy with the number of augmentations for the xNF training datasets (Fig. 1, cyan circle). Still, the Transformer was able to predict some of the sequences, which corresponded to the subset of canonical sequences in the monitoring set. Interestingly, the sequence accuracy normalized to the percentage of canonical SMILES in the monitoring sets increased with the number of augmentations, since some randomly generated sequences were canonical SMILES.

### 2.6 Top-1 performance analysis

For augmentations with 1 or 2 random SMILES, the Top-1 prediction performances of the models trained with augmentation of products only, xN, and with full reaction augmentation, xNF, were similar. For a larger number of augmentations, the models trained with xNF sets had systematically better performance than those developed with xN sets (Fig. 2). The training with the x80F set provided the highest Top-1 performance of 52.3% when this model was applied to the test set generated with x20 augmented sequences. While it is possible that a further increase in the number of augmentations could still increase the Top-1 performance, we did not perform such calculations due to limitations of the available computational resources.

Figure 2: Top-1 performance of models developed with different numbers of augmentations (shown on the x axis) and different augmentation scenarios applied to both test and training sets (red colour: only products were augmented; cyan colour: full reactions were augmented). The use of a large number of augmentations for the test set (solid lines) improved the prediction accuracy for models developed with augmentation of full reactions but did not influence the performance of models where only the input data were augmented.

### 2.7 Shuffling order of reactants

In addition to augmenting the full reaction, we also randomly shuffled the order of reactants (see the xNS set description in Tables A1 and A2). The effect of this additional data perturbation improved the Top-1 performance to 53.1% for the x20S training dataset applied to the test set with the same number of augmentations (Fig. A3). Further increasing the number of augmentations resulted in a loss of Top-1 prediction accuracy.

### 2.8 Shuffling and mixing of retrosynthesis and direct reactions

Training retrosynthesis and direct reactions simultaneously could create a mixed representation of the latent space and further increase the ability of the Transformer to generalize. We tested this hypothesis by combining direct and reverse reactions in one training set by reversing the order of product/reactants+reagents and adding a dot to distinguish direct reactions (see, e.g., Table A2, x2M augmentation). Contrary to the previous analyses, which required 20 augmentations of training set sequences to achieve the highest performance, the mixed dataset achieved it with only 10 augmentations (Fig. A3). Since the mixed dataset also included direct reactions, it had the same number of 19 augmented SMILES per canonical SMILES as in the previous analyses. Thus, this number of augmentations was optimal for the training of the Transformer. A smaller number of augmentations did not allow it to fully explore the data diversity, while a larger number created too much noise and made it difficult to learn the canonicalization rules, which were injected by the single canonical sequence.
For the x10M training set, the Transformer achieved 52.8%, which was similar to the 53.1% calculated using the x20S training dataset.

### 2.9 Top-5 performance analysis

This measure provided a relaxed estimation of the performance of the model by measuring whether the correct reaction is listed in the Top-5 predicted reactions. Actually, it is questionable whether having the highest Top-1 accuracy is even desirable for retrosynthetic models. The goal of a retrosynthetic model is to obtain several precursor suggestions and not exclusively the ones stated in the literature. Moreover, multiple reactions for the same product exist. An example includes the aromatic substitution of an aryl halide (R-X) to an aryl amine (R-NH2) or aryl hydroxide (R-OH). Models with higher Top-n scores do suggest other probable reactions (indeed, all reactions among the Top-n have similar probability), which may correspond to those not reported in the literature for the analysed example. Thus, models with higher Top-n scores but similar Top-1 scores could be interesting for a chemist, since they propose the correct prediction along with alternative reactions of similar quality. For each number of augmentations, the Top-5 performance generally increased with the number of augmented sequences. The highest Top-5 value was consistently calculated across different scenarios for training sets with 4-5 augmentations only (Fig. 3). The highest accuracy, 78.9%, was calculated for the mixture dataset using the x5M training set augmentation. This number was approximately 1% higher than that calculated using the x5S training set (Fig. 3).

Figure 3: Top-5 performance of Transformer models developed with different training set augmentation protocols (see Tables A1 and A2) for prediction of the x20 test set.

### 2.10 Reference USPTO-50k model

For all studies we used a fixed number of epochs, N = 100. However, we needed to confirm that this was a sufficient number of epochs and to determine whether we could calculate better results by training for longer. We selected the model developed with the x5M training set, which provided the highest performance for Top-5 accuracy, and trained it for an additional 400 iterations (500 in total). This additional training improved the Top-1 accuracy to 53.3%, while the Top-5 performance increased to 79.4% (Table 1) when using beam = 5, i.e. the same as in the previous analyses. Further improvement was achieved by using a large number of augmentations, x100, as the test set. With this setting the model achieved an accuracy of 53.6% and 80.8% for Top-1 and Top-5 predictions, respectively.

Table 1: Analysis of the reference model performance depending on the parameters of the application protocol. The final reference model was built using 500 iterations for the x5M training set. Its reference performance was evaluated using beam size = 5 and temperature = 1; the altered parameters are shown for the other application scenarios. For beam = 1 and x1000 augmentations the model calculated 53.7%, 80% and 84.3% for Top-1, Top-5 and Top-10 predictions, respectively; this augmentation, as well as the one with beam size = 10 applied to x100, analysed the same number of predicted sequences. The best results are shown in bold. Larger beam sizes contributed better results for larger Top-n predictions.
| Apply model setting | Test set x1, Top-1 | Test set x1, Top-5 | Test set x20, Top-1 | Test set x20, Top-5 | Test set x100, Top-1 | Test set x100, Top-5 | Test set x100, Top-10 |
|---|---|---|---|---|---|---|---|
| reference accuracy | 48.5 | 72.5 | 53.3 | 79.4 | 53.6 | 80.8 | 85 |
| temperature, t = 1.3 | 49.1 | 67.7 | 52.7 | 77.7 | 53.3 | 78.4 | 83.2 |
| no beam search | 47.7 | 47.7 | 53.3 | 75.3 | 53.8 | 78.8 | 81.7 |
| beam size, beam = 10 | 48.3 | 73.4 | 53.5 | 80 | 53.5 | 81 | 85.7 |
| beam size, beam = 44 | 48.3 | 72.5 | 53.5 | 80 | 53.5 | 80.5 | 85.8 |

### 2.11 Influence of temperature

In our previous study [13], we observed that using higher temperatures during the beam search increased the model accuracy for the Top-1 prediction. It should be mentioned that no augmentation was used in that study. Under the same experimental setup with no augmentation, i.e. when predicting the test set composed of only canonical sequences (x1), the Top-1 accuracy of the model increased from 48.3% to 49.1% and 49.2% when using temperatures of 1.3 and 1.5, respectively. However, the Top-1 performance for the augmented data (x20) decreased from 53.3% to 52.7% and 52.4%, respectively. For the same test set, the Top-5 accuracies also decreased from 79.4% to 77.7% and 77.4% for the two temperatures, respectively. Thus, while higher temperatures increased the variability of predictions and thus the performance for prediction of canonical sequences, their effect was negative for the augmented data. In particular, they resulted in a lower accuracy of the Top-5 predictions.

### 2.12 Influence of beam search

In the above studies we consistently used a beam size of 5 for all analyses. The goal of the beam search was to generate multiple predictions for the same data and thus to better explore the variability of predictions. For example, when using the x20 test set and a beam size of 5, we obtained up to 100 individual predictions, which were used to select the most frequently appearing Top-1 and Top-5 sequences. Increasing the beam size to 10 further improved the Top-1 accuracy by 0.2% to 53.5% and the Top-5 accuracy by 0.6% to 80% for the test set. Decreasing the beam size to 3 provided a slightly higher Top-1 score of 53.4% but decreased the Top-5 score to 78.5% for the same test set. The use of beam size 1 resulted in a Top-1 accuracy of 53.3% and a reduced Top-5 accuracy of 75.3% (Table 1). These results were expected: the variation of the beam size slightly influenced the identification of the highest-ranked sequence, but a smaller beam reduced the exploration of the space of other top reactions for larger $n$. Both beam search and augmentation increased the number of predicted SMILES, which in turn led to better accuracy of model predictions. Thus, both of these methods could contribute to the generation of multiple predictions to be used to identify top-ranked sequences. The maximum size of the beam was restricted by the size of the target vocabulary (the number of characters in the target SMILES), which was 44 characters for our dataset. Because of the design of the beam search, and because we explicitly excluded duplicated predictions (see the section "Analysis of predicted SMILES" as well as Table A3), the same beam search did not generate duplicated character sequences. However, different positions of the beam did yield different representations of the same SMILES. The number of such non-unique sequences generated within the same beam search increased with the length of the beam.
Interestingly, the use of canonical SMILES as input data contributed the largest number of unique SMILES: 86%, 82% and 78% for beam searches of size 5, 10 and 44, respectively. The use of augmented random SMILES as input contributed smaller numbers of unique sequences, e.g. 42%, 28% and 13% for beam searches of size 5, 10 and 44, respectively. For both types of SMILES, some generated SMILES were erroneous and could not be correctly converted by RDKit. Such sequences were excluded from the analysis. For large beam sizes, canonical SMILES produced a much larger percentage of incorrect SMILES compared to the use of random SMILES (see Fig. A4). The large difference in the results generated when starting from canonical and random SMILES was also observed in the analysis of the percentage of correct predictions for each beam position. In general, the number of erroneous SMILES was low: on average it was less than 1% and 3% for beam search 10, when using augmented and canonical SMILES as input, respectively (Fig. A4). While graph-based methods predict exact chemical structures and thus have 0% syntactically invalid SMILES, a few percentage points of incorrectly predicted structures do not put the Transformer model at a large disadvantage compared to these methods. The use of canonical SMILES provided a higher accuracy for the first beam position (Fig. A4), but its accuracy was much lower for the other beam positions. This was because the Transformer generated canonical SMILES for canonical input sequences (e.g., 91% of valid SMILES produced at position 1 of the beam search for input canonical SMILES were canonical ones), and since only one valid canonical SMILES could be produced, it failed to generate new correct SMILES at the other positions. Indeed, during the training phase, the Transformer always had a pair of canonical SMILES as input and target sequences. Contrary to that, using augmented SMILES allowed more freedom and let it contribute valid but not necessarily canonical SMILES (e.g., only 33% of the SMILES generated at position one of the beam search were canonical ones if augmented SMILES were used as input). The decrease in performance of SMILES generated when using canonical SMILES was one of the main reasons to implement deduplication of the data and to retain only the first SMILES for the prediction of reactions (see the section "Analysis of predicted SMILES"). When deduplication was not performed and all SMILES generated during the beam search were used to rank predictions (compare Tables A3 and A4), the Top-1 performances of models were most significantly affected when using only a few augmentations; e.g. for the reference model the accuracy dropped from 48.3% (reference prediction, Table 1) to 47%, but did not change for, e.g., the Top-5 performance. In principle, the analysis retaining multiple predicted sequences was based on more data and thus was more stable. Therefore, it could be used when several augmentations and/or large values of Top-n are used for the analysis. As mentioned above, both data augmentation and beam search could be used to generate multiple predictions. For the same number of generated sequences, 1000 per SMILES, using a beam = 10 search for the x100 set produced a lower accuracy, 53.5%, compared to 53.7% using augmented data with the x1000 test set without any beam search. The performance of both methods was the same and equal to 53.7% when the deduplication procedure was not used.
However, the beam search contributed to a better accuracy for larger n: 81% vs 80% and 85.7% vs 84.3% compared to the use of augmentation alone for Top-5 and Top-10, respectively. Thus, using beam search allowed a better exploration of the data when suggesting several alternative reactions. In any case, augmentation remained a very important complement to the beam search, and for the best performance both of these approaches should be used simultaneously. We also do not exclude that an optimisation of the augmentation may further improve the results in the future. Moreover, data augmentation used alone, without a beam search, produced better models than a beam search used without any data augmentation.

### 2.13 Accuracy of prediction

For some reactions, irrespective of the use of augmented sequences or the position in the beam search, the majority of predicted sequences were identical, while for other reactions the Transformer generated many different possible reactant SMILES (see Table A5). While the beam generation procedure guaranteed that no two predictions within a beam had exactly the same sequence of characters, in many cases the Transformer produced multiple non-canonical instances of the same SMILES. The frequency of appearance of the most frequent SMILES (after conversion to the canonical representation) could therefore indicate the confidence of the Transformer in its prediction. Fig. 4 shows that this frequency (calculated on the 100x augmented dataset) correlated well with the accuracy of prediction and could be used as a confidence score for the chemist. Indeed, the reactions in which the most frequent SMILES dominated amid all Top-1 predictions were likely to be predicted correctly. If the most frequent SMILES had a low frequency, the prediction was likely to be incorrect. For about 20% of the predictions the accuracy of the retrosynthesis prediction was above 80%, and for 4% it was above 90%. It should be mentioned that for a practical implementation that critically depends on speed, e.g., multistep synthesis planning, there is no reason to always run all 100 predictions to obtain the confidence estimates. One can estimate the probability of the most frequent SMILES and its confidence interval based on a much smaller number of predictions, thus decreasing the number of calculations. As shown in Fig. 4, the same correlations were observed for the two other datasets, USPTO-MIT and USPTO-Full, which are analysed in the following sections. The same approach can be used for Top-n predictions, suggesting one or more plausible pathways for retrosynthesis. An example of such a correlation for the Top-5 MaxFrag accuracy is shown in Fig. A5. Moreover, the same approach also predicted the accuracy of the direct synthesis, as demonstrated in Fig. A6. It should be mentioned that data augmentation is not the only approach to estimate the accuracy of predictions; other methods, based on the likelihood of the direct reaction prediction, have also been proposed [18, 34] and were shown to correlate with the accuracy of the predictions. A comparison of such methods is beyond the scope of this study.

Figure 4: Accuracy and density (fraction of predictions) of the Transformer for the MaxFrag Top-1 retrosynthesis accuracy as a function of the frequency of appearance of the Top-1 SMILES in the output of the Transformer for the respective test sets of the models.
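As a sketch of the cheaper confidence estimate mentioned above, the snippet below computes the frequency of the most common canonical prediction together with a normal-approximation confidence interval; the function name and the 95% interval are our illustrative choices, not a prescription from the study.

```python
from collections import Counter
import math

def top1_confidence(canonical_predictions, z=1.96):
    """Return the most frequent prediction, its relative frequency and a
    normal-approximation confidence interval for that frequency."""
    counts = Counter(canonical_predictions)
    smiles, k = counts.most_common(1)[0]
    n = len(canonical_predictions)
    p = k / n
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return smiles, p, max(0.0, p - half_width), min(1.0, p + half_width)

# If 18 of only 20 augmented predictions already agree, p = 0.9 with a
# roughly +/- 0.13 interval, suggesting a confident prediction without
# having to run all 100 augmentations:
print(top1_confidence(['CCO'] * 18 + ['CCC'] * 2))
```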
### 2.14 Analysis of prediction accuracy for different scenarios

The accuracy of the reference model was about 5% to 7% (Top-1) and 10% (Top-5) higher for reactions without stereochemistry than for those with it (Table 2). 20% of the reactions in the test set contained molecules with stereochemistry. Increasing the number of augmentations of the test set increased the accuracy for both stereochemical and non-stereochemical reactions. Stereochemical reactions in the dataset may also suffer from a larger number of annotation errors and/or have lower prediction scores because such data were underrepresented in the training set. Additionally, for some reactions, despite the relative stereochemistry being conserved, the stereo labels may still convey confusing information to the model due to the reactant satellite effect. The R/S designation could also be affected by the way the SMILES was written, e.g., from A to Z or from Z to A.

Table 2: Prediction accuracy of the reference model for different subsets of the test set of USPTO-50k using a beam search of size 10. The classical retro-synthesis accuracy was estimated as the percentage of correctly predicted largest fragments, i.e., the “maximum fragment” (MaxFrag) accuracy.

Test set augmentation | Top-1 all | Top-1 stereo (20%) | Top-1 no stereo (80%) | Top-5 all | Top-5 stereo (20%) | Top-5 no stereo (80%) | Top-10 all | Top-10 stereo (20%) | Top-10 no stereo (80%)
---|---|---|---|---|---|---|---|---|---
x1 | 48.3 | 44.7 | 49.2 | 73.4 | 67.3 | 74.9 | 77.4 | 71 | 79
x20 | 53.4 | 47.3 | 55 | 80 | 73.3 | 81.9 | 84.2 | 79.2 | 85.4
x100 | 53.5 | 47.1 | 55.1 | 81 | 74.6 | 82.6 | 85.7 | 81.2 | 86.8
MaxFrag, x1 | 53.5 | 48.7 | 54.7 | 79.2 | 72.7 | 80.9 | 81.6 | 75.1 | 83.3
MaxFrag, x20 | 58.5 | 52 | 60.1 | 84.7 | 79 | 86.1 | 88.6 | 83.6 | 89.8
MaxFrag, x100 | 58.5 | 51.2 | 60.3 | 85.4 | 79.4 | 86.9 | 90 | 85.1 | 91.2

### 2.15 Classical Retro-Synthesis accuracy: recognition accuracy for the largest fragment

The prediction of SMILES for retro-synthesis includes the exact prediction of the reactants. However, the same reaction performed with different reactants can result in a similar yield, and in general the database does not contain all possible reaction conditions for making a given product. Therefore, the prediction of only the main (largest) reactant can be considered more relevant for retro-synthesis, since the first task is to identify the reaction type. Indeed, a chemist generally writes a retrosynthesis by decomposing a target molecule into pieces. This classical procedure, which focuses only on the transformations of the main compounds, provides the minimal information required to obtain an efficient retrosynthesis route and, at the same time, all the reactions needed (see Fig. 5). The selection of the reaction conditions can be considered a subsequent task.

Figure 5: Classical representation of the retrosynthesis of cimetidine focusing on the principal transformations, as is typically written by synthetic chemists (adapted from https://de.wikipedia.org/wiki/Cimetidin under the CC BY-SA 3.0 license).

The currently used Top-n accuracy measures also include the prediction of the other reactants [12, 13, 19, 20, 24, 32], which may not be necessary for classical, methodical retrosynthesis planning. That is why we decided to consider the recognition of the largest reactant as a new measure of model performance: the Classical Retro-Synthesis Accuracy, i.e., the accuracy of prediction of the “Maximum Fragment” (MaxFrag). The MaxFrag accuracy was 85.4% for the Top-5 reaction prediction (Table 2).
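A minimal sketch of how a MaxFrag hit can be scored is given below; the text does not spell out the exact size criterion for “largest”, so we assume the heavy-atom count as the measure, with the canonical SMILES string as a deterministic tie-breaker.

```python
from rdkit import Chem

def max_fragment(reactants_smiles):
    """Canonical SMILES of the largest fragment (assumed: by heavy-atom
    count) in a dot-separated set of reactants."""
    fragments = []
    for part in reactants_smiles.split('.'):
        mol = Chem.MolFromSmiles(part)
        if mol is not None:
            fragments.append((mol.GetNumHeavyAtoms(), Chem.MolToSmiles(mol)))
    return max(fragments)[1] if fragments else None

def maxfrag_hit(predicted, target):
    """MaxFrag match: the largest predicted fragment equals the largest
    target fragment, regardless of the remaining reactants."""
    return max_fragment(predicted) == max_fragment(target)

# O-alkylation of phenol: predicting ethyl iodide instead of ethyl bromide
# still counts as a hit, because the largest fragment (phenol) matches:
print(maxfrag_hit('CCI.Oc1ccccc1', 'CCBr.Oc1ccccc1'))  # True
```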
The MaxFrag score is important to estimate the ability of a system to automatically deduce the correct reaction class. This strategy is orthogonal to explicitly providing reaction class information as input to a model [24]. Adding the reaction class as prior information is equivalent to getting a hint on an exam: it is impractical and also reduces the chance of proposing alternative feasible reactions. Using MaxFrag is more accurate and logical than providing a reaction class as prior information. Besides MaxFrag and Top-n, other scores have been proposed to evaluate the success of retrosynthetic suggestions, e.g., the matching score by Satoh and Funatsu [37], the ’in-scope’ filter by Segler et al. [5], and the forward transformer score by Schwaller et al. [34]. However, MaxFrag is the easiest and most interpretable one.

### 2.16 Retrosynthesis data quality and MaxFrag accuracy

The classical retro-synthesis accuracy (MaxFrag Top-n) was systematically higher than the traditional Top-n scores. To explain this fact, we analysed our datasets and found four types of reactions: non-reagent reactions, one-reagent reactions, multiple-reagent reactions, and unclear-reagent reactions. Non-reagent reactions were reactions that did not work (i.e., A + B -> A). One-reagent reactions had only one starting material for the product (A -> P), multiple-reagent reactions had several starting materials for the same end product (A + B -> P), and, finally, unclear-reagent reactions were those where the reaction conditions, solvents, salts, and so on were included as reagents (A + B + N -> P, where N are chemicals that did not participate in forming the product). The proportions of these reaction categories varied slightly between the datasets. In the MIT dataset around 0.5% of the reactions were non-reagent reactions and around 10% were unclear-reagent reactions, while there were less than 1% of such reactions in the USPTO-50k dataset. Thus, for the MIT set it would be impossible to fully predict about 10% of the reactions in the retrosynthesis direction, since they contained chemicals “N” that did not take part in forming the product but were only conditions, solvents, etc. The more challenging problem of predicting not only the reactants but also the reagents, while still keeping diverse precursor suggestions, was addressed elsewhere [34]. For the direct synthesis this was not a severe problem, since the Transformer could correctly identify and pick up the interacting components (“A” and “B”) and predict the product. However, the use of Top-n for retrosynthesis is questionable due to the aforementioned problem. The use of the MaxFrag accuracy decreased those effects by focusing on the main reactant. That is why, in our opinion, the MaxFrag score better reflects the chemistry than the Top-1 accuracy. Still, there is an unsolved challenge with this score owing to the possibility of synthesizing the same product starting from different reactants. Both Top-n and MaxFrag Top-n scores were calculated using the exact match of the predicted and target molecules. But, for example, in the reaction R-R1 + NH3 -> R-NH2, multiple choices of R1, i.e., -OH, -Cl, -Br, -I or -F, would all be correct predictions. The only differences would be the reaction rates and yields, which are not part of the prediction algorithms. Unfortunately, the currently used scores, including MaxFrag, cannot yet correctly account for this problem.
The problem could be alleviated to some extent by using Top-n/MaxFrag Top-n instead of Top-1/MaxFrag Top-1 scores: among the multiple reactant sets generated by the model, we may also recover the one provided in the initial reaction. Thus, the retrosynthesis task is not about reaching a high Top-1 accuracy. Any classical organic synthesis book, such as Larock’s famous “Comprehensive Organic Transformations” [38], indicates multiple ways to synthesize chemical compounds, and this has to be reflected in the score. The classical retro-synthesis accuracy measured by MaxFrag is a first attempt to better handle these data ambiguities during the validation process, and we highly encourage other users to adopt it. However, in order to enable a comparison with previous studies, we also report the traditional Top-n scores.

### 2.17 Analysis of prediction accuracy for different classes

The original USPTO-50k dataset [12] provides a reaction type label for every reaction. In total, 10 reaction classes, ranging from protection/deprotection to carbon-carbon bond and heterocycle formation, represent the most common reactions in organic synthesis. The comparison of the accuracy for each class of reactions is presented in Fig. 6. Our best model showed excellent results, outperforming the state-of-the-art self-corrected Transformer (SCROP) [19]. Functional group interconversion and addition, as well as carbon-carbon bond formation, were the most difficult classes for the models to predict. This was not surprising, given the diverse possibilities for choosing reactions and corresponding reactants for C-C bond creation, compared to more straightforward oxidations or protections, where the set of groups and reactants is narrower.

Figure 6: Top-10 accuracy of prediction of different classes of reactions.

### 2.18 Prediction of direct reactions

The same strategy described in this work was applied to predicting direct reactions from the USPTO-MIT dataset [22]. We used 439k reactions (the training and validation sets were joined together) as the training set and predicted the 40k reactions of the test set by training a Transformer with the same architecture and parameters. Both the separated and the mixed sets were used. In the separated set, reactants and reagents were separated by the “>” sign, while in the mixed set all “>” signs were substituted with “.” and the order of reactants and reagents was additionally shuffled. The mixed set was more difficult for training, since the Transformer had to identify the reaction center from a larger number of molecules. However, such a set better reflects a practical application, since separating reactants from reagents would in some cases be impossible without knowledge of the target product; the separated format thus provided a hint to the Transformer about the reaction center. We removed 316 reactions, in which the largest product was shorter than 5 characters, from the training set (no reactions were removed from the test set). The Transformer was trained using the x5N augmentation protocol for the separated set, as well as the x5S and x5M protocols for the mixed set. Since it would be impractical to predict all reagents and reactants in the retrosynthesis task, which was used to additionally augment the data in the x5M protocol, only the largest reactant was retained as the target of the reverse reactions. The augmented test sets were predicted using beam size 10 (Table 3). For the mixed test set the order of reactants and reagents was shuffled.
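Below is a minimal sketch of how a “separated” reaction record can be converted into the mixed format just described; the example reaction is purely illustrative.

```python
import random

def to_mixed(reaction_smiles, rng=random):
    """Convert 'reactants>reagents>product' into the mixed format: the '>'
    separators become '.' and all precursors are shuffled, so the model
    receives no hint about which molecules form the reaction center."""
    reactants, reagents, product = reaction_smiles.split('>')
    pool = [s for s in (reactants + '.' + reagents).split('.') if s]
    rng.shuffle(pool)
    return '.'.join(pool) + '>>' + product

# Esterification with pyridine listed as a reagent (illustrative):
print(to_mixed('CC(=O)Cl.OCC>c1ccncc1>CC(=O)OCC'))
```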
Table 3: Prediction accuracy for direct reactions from the USPTO-MIT test set using beam size = 10.

Training set | x1 Top-1 | x1 Top-5 | x1 Top-10 | x20 Top-1 | x20 Top-5 | x20 Top-10 | x100 Top-1 | x100 Top-5 | x100 Top-10
---|---|---|---|---|---|---|---|---|---
x5N (separated) | 91.1 | 96.3 | 96.7 | 91.8 | 96.9 | 97.3 | 91.9 | 97 | 97.4
x5S (mixed) | 90 | 95.8 | 96.2 | 90.4 | 96.4 | 96.9 | 90.4 | 96.5 | 97
x5M (mixed) | 90 | 95.5 | 95.7 | 90.2 | 96.1 | 96.5 | 90.2 | 96.2 | 96.8

As in previous studies, the separation of reagents and reactants with “>” symbols yielded a model (x5N) with higher prediction scores than the models trained on the mixed sets (x5S and x5M). The additional augmentation of the data with retrosynthesis reactions (x5M) did not improve the model. This could be due to the fact that the data for the direct reactions were much more numerous and already contained sufficient information to develop accurate models. While using the x100 test set still gave a better prediction accuracy than using x20, the improvements were on the order of 0.1% or absent altogether. Thus, the effect of larger augmentations on model performance reached saturation for the x100 test set.

### 2.19 Comparison with published models for direct synthesis using USPTO-MIT set

The USPTO-MIT set has been used as a benchmark for direct synthesis prediction in multiple articles. The AT provided the highest gain in performance for the prediction of the more challenging mixed dataset (Table 4). Since the model was trained with randomly shuffled augmented data, it was able to generalise very well and provided excellent predictions for the new mixed data. In order to provide a more adequate comparison with previous studies, we also developed a model based on exactly the same 400k training set. Interestingly, the use of the smaller dataset slightly increased the Top-1 performance to 90.6% but decreased the Top-5 performance to 96.1%. It should be noted that the improvements for direct synthesis look small, i.e., just a few percentage points. Indeed, the model performance for direct synthesis increased from 88.6% to 90.6% (Top-1) and from 94.2% to 96.1% (Top-5) compared to the single model reported in ref. [18]. Actually, this is a significant increase in performance: if we assume that direct synthesis can in principle be predicted with 100% accuracy, the AT decreased the relative errors by about 15% and 30%, respectively. In reality we are approaching the experimental accuracy, and further decreasing the errors will be challenging.

Table 4: Comparison of recently published methods for direct synthesis prediction on the USPTO-MIT set. The results of this work are for the x100 augmented test set using beam size = 10; the “This work” model was trained on 439k reactions, combining the 400k training set and the 39k validation set from [22], whereas the last model was trained on the 400k training set only, to better match the setup of the previous models.
Model | Top-1 separated | Top-1 mixed | Top-2 separated | Top-2 mixed | Top-5 separated | Top-5 mixed | Reference
---|---|---|---|---|---|---|---
Transformer (single) | 90.4 | 88.6 | 93.7 | 92.4 | 95.3 | 94.2 | [18]
Transformer (ensemble) | 91 | | 94.3 | | 95.8 | | [18]
Seq2Seq | 80.3 | | | | 87.5 | | [11]
WLDN | 79.6 | | | | 89.2 | | [22]
GTPN | 83.2 | | | | 86.5 | | [39]
WLDN5 | 85.6 | | | | 93.4 | | [23]
This work (x100, beam 10) | 91.9 | 90.4 | 95.4 | 94.6 | 97 | 96.5 |
AT trained with the same training set as in [22] | 92 | 90.6 | 95.4 | 94.4 | 97 | 96.1 |

### 2.20 Comparison with published models for retrosynthesis tasks

#### USPTO-50k:

The proposed augmentation protocol achieved the best published results on the USPTO-50k dataset (Table 5). In previous studies with this set, the authors split the data into training, validation and test sets. Since the validation set was not used for model selection in any of our analyses and we did not observe overfitting of the model [35], we joined the training and validation sets in order to use all the data and develop better models. While we consider this a fair comparison (it is up to the developers of a model to decide how to best use the available data), we also added results for a model developed with only the 40k reactions of the USPTO-50k training set (Table 5). The accuracies of the models developed with the 40k and 45k sets were very similar on the test set. Thus, the data augmentation compensated for the smaller size of the training set.

Table 5: Comparison of recently published methods for retrosynthesis prediction on USPTO-50k.

Model | Top-1 | Top-2 | Top-5 | Top-10 | Ref | Comments
---|---|---|---|---|---|---
Seq2Seq | 37.4 | | 57.0 | 61.7 | [12] | 40/5/5 split; splitting any reactions with multiple products into multiple single-product ones and removal of trivial products
Transformer (3*6) | 42.7 | 52.5 | 69.8 | - | [13] | 45/5 split: no validation set was used
Transformer (6*8), (self-corrected) | 43.7 | | 65.2 | 68.7 | [19] | 40/5/5 split, reagents from reactants are removed
Transformer, augmentation | 44.8 | 57.1 | 57.7 | 79.4 | [32] | same as in [12]
Similarity-based | 37.3 | | 63.3 | 74.1 | [20] | same as in [12]
Graph Logic Network | 52.5 | | 75.6 | 83.7 | [24] | same as in [12, 19]
G2Gs | 48.9 | | 72.5 | 75.5 | [25] | same as in [12]
AT | 53.5 | 69.4 | 81 | 85.7 | | same as in [13]. The results of the reference model applied to the x100 augmented dataset using beam size = 10.
AT | 53.2 | 68.1 | 80.5 | 85.2 | | only 40k samples were used as the training set, to match the other results.
AT MaxFrag | 58.5 | 73 | 85.4 | 90 | | same as in [13]. The classical retro-synthesis accuracy was estimated as the accuracy of prediction of the largest fragment (MaxFrag).
AT MaxFrag | 58 | 73.4 | 84.8 | 89.1 | | only 40k samples were used as the training set, to match the other results.

#### USPTO-MIT:

We also analysed the performance of the model for retrosynthesis of the USPTO-MIT set. Compared to USPTO-50k, this database also contains multiple reagents and possible catalysts. In our previous analysis (Table 3), we used the retrosynthesis of the largest fragment as part of the “mixed” protocol (x5M), i.e., the products, prefixed with “.”, were used as input data to predict the largest reactant (as explained in the supplementary materials, in order to distinguish direct and retrosynthesis reactions, one of them started with a dot). The dot in front of the SMILES allowed the Transformer to distinguish retrosynthesis from the primarily studied direct synthesis reactions.
Of course, the model trained with such data could also be used for retrosynthesis, provided that the input data also started with “.”. We additionally developed a new retrosynthesis model for this set, making it more compatible with USPTO-50k. For this purpose, we kept only the one or two largest fragments as targets for the retrosynthesis prediction and trained a new model using the x5S protocol. Both models were used to predict the 40k test set, which was augmented 100 times. The MaxFrag performances of the x5S model, 61.9% (Top-1), 84.4% (Top-5) and 86.9% (Top-10), were very similar to those calculated for the USPTO-50k set (58.5%, 85.4% and 90%; see Table 5). The x5M model, which, as mentioned above, was a “by-product” of our direct reaction predictions, achieved MaxFrag accuracies of 61.1%, 84.4% and 88.2% for Top-1, Top-5 and Top-10, respectively. Considering that the USPTO-MIT set contains more diverse reactions than USPTO-50k, this result clearly demonstrates the excellent performance of the developed approach and its scalability. The Augmented Transformer (AT) was able to improve its Top-1 performance by extracting knowledge from a much larger dataset of reactions.

#### USPTO-full:

The final testing was done using the USPTO-full set of Dai et al. [24]. The authors created a large dataset from the entire set of reactions from USPTO 1976-2016. They split reactions with multiple products into multiple entries with one product each. The authors also removed duplicated reactions as well as those with wrong atom mappings, obtaining train/valid/test datasets of 800k/100k/100k reactions. Our analysis identified that some reactions in these sets were still invalid, e.g., they contained no products or had only single ions as reactants (e.g., US08163899B2,>>[OH2:11], US06048982,CC(=O)OCCCCC[I:22]>>[I-:22], US07425593B2,>>[K:12], US08114877B2,CC[I:13]>>[I-:13]). We eliminated such reactions, as well as those in which the reactants had fewer than five atoms in total, since these were unlikely to be correct reactions. This procedure decreased the sizes of the train/valid/test sets by about 4% to 769k/96k/96k. The AT trained with the x5M protocol on the 769k training set achieved a higher performance than the results of the previous study (Table 6). Considering that the test dataset shrank after the removal of the 4% erroneous reactions, we also included a recalculated performance for the full test set, assuming the worst-case scenario: that the AT failed for all excluded sequences. Even for this very conservative estimate, the AT provided significant improvements compared to the previously reported results. The MaxFrag accuracies for USPTO-full were lower than those for the other analysed sets due to the much higher diversity of this set.

Table 6: Top-k accuracy for retrosynthesis prediction on the USPTO-full dataset. The model was trained using the filtered training set of 769k reactions described above. Accuracies in parentheses are recalculated for the full test set by assuming that the AT failed for all 4% of the excluded reactions. Results for the Retrosim and Neuralsym approaches are as reported by Dai et al. [24].
| Retrosim [20] | Neuralsym [3] | GLN [24] | AT
---|---|---|---|---
Top-1 | 32.8 | 35.8 | 39.3 | 46.2 (44.4)
Top-2 | | | | 57.2 (54.9)
Top-10 | 56.1 | 60.8 | 63.7 | 73.3 (70.4)
MaxFrag Top-1 | | | | 54
MaxFrag Top-2 | | | | 66.3
MaxFrag Top-5 | | | | 77.3
MaxFrag Top-10 | | | | 80.6

Thus, for all analysed datasets the AT provided outstanding performance, consistently and significantly outperforming all previously published models on all statistical measures.

## 3 Conclusions and outlook

This study showed that a careful design of the training set is of paramount importance for the performance of the Transformer. Training the model to learn different representations of the same reaction, by distorting the initial canonical data, eliminated the effect of memorisation and increased the generalisation performance of the models. These ideas are used intensively, e.g., in image recognition [40], and have already been successfully applied in the context of several chemical problems [27, 28, 29, 30], including reaction predictions [18, 31], but were previously limited to the input data. For the first time, we showed that applying augmentation to the target data significantly boosts the quality of the reaction prediction. We also showed for the first time that the frequency of the predicted SMILES can be used as a confidence metric for (retro)synthesis prediction and can provide a quantitative estimation of the most probable reactions among the Top-n predicted outcomes. Estimating the quality of a reaction prediction is critical, since it can help to better prioritise multi-step retrosynthesis. The developed methodology relies on augmentation techniques that are currently unavailable to GCNs [24], which operate directly on graphs. The estimated accuracy of prediction can help to distinguish reactions that are genuinely difficult to predict from mistyped and erroneous reaction data, which will be important for cleaning up reaction data and further improving model quality. We also introduced a new measure, the MaxFrag classical retro-synthesis accuracy, which in our opinion better reflects the requirements of retrosynthesis analysis. It should be mentioned that the use of augmentation was first studied by the authors of [18], who introduced the Transformer to chemistry and applied it to chemical reactions by using SMILES instead of text sequences. The augmentation of the input data performed in that article provided only a minor improvement of their models. Because of its small impact, it was not followed up in several other Transformer-based works, including our own studies [13, 19]. In this article, we introduced an original way to augment chemical data, which provided a significant improvement of the results for all analysed datasets.

## Supporting Information

The Supporting Information contains a description of the methods, an explanation and examples of the augmentation protocols, an illustration of the procedure for ranking predicted reactions, examples of distributions of predicted SMILES, and figures explaining the performances of the Transformer models. The training and test sets (including augmented data), the model and the model predictions are available at .

## Acknowledgments

This study was partially funded by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Innovative Training Network European Industrial Doctorate grant agreement No. 676434, “Big Data in Chemistry”, the ERA-CVD "CardioOncology" project, BMBF 01KL1710, as well as an Intel grant.
The article reflects only the authors’ view, and neither the European Commission nor the Research Executive Agency (REA) is responsible for any use that may be made of the information it contains. The authors thank NVIDIA Corporation for donating Quadro P6000, Titan Xp, and Titan V graphics cards for this research work. The authors thank Michael Withnall (Apheris AI) and Alli Michelle Keys (Stanford University) for their comments and English corrections, as well as Marios Theodoropoulos (University of Geneva) for interesting discussions. We also would like to thank the anonymous reviewers for their insightful and sometimes even provocative comments, the answering of which significantly increased the value of this study.

## References

* [1] Corey, E. J. & Cheng, X.-M. _The Logic of Chemical Synthesis_ (Wiley-Interscience, 1995).
* [2] Corey, E. J., Long, A. K. & Rubenstein, S. D. Computer-assisted analysis in organic synthesis. _Science_ 228, 408–418 (1985).
* [3] Segler, M. H. S. & Waller, M. P. Neural-symbolic machine learning for retrosynthesis and reaction prediction. _Chemistry – A European Journal_ 23, 5966–5971 (2017). URL https://chemistry-europe.onlinelibrary.wiley.com/doi/abs/10.1002/chem.201605499.
* [4] Coley, C. W., Barzilay, R., Jaakkola, T. S., Green, W. H. & Jensen, K. F. Prediction of organic reaction outcomes using machine learning. _ACS Central Science_ 3, 434–443 (2017). URL https://doi.org/10.1021/acscentsci.7b00064.
* [5] Segler, M. H. S., Preuss, M. & Waller, M. P. Planning chemical syntheses with deep neural networks and symbolic AI. _Nature_ 555, 604–610 (2018). URL https://doi.org/10.1038/nature25978.
* [6] Baskin, I. I., Madzhidov, T. I., Antipin, I. S. & Varnek, A. A. Artificial intelligence in synthetic chemistry: achievements and prospects. _Russ. Chem. Rev._ 86, 1127 (2017).
* [7] Struble, T. J. _et al._ Current and future roles of artificial intelligence in medicinal chemistry synthesis. _Journal of Medicinal Chemistry_ 63, 8667–8682 (2020). URL https://doi.org/10.1021/acs.jmedchem.9b02120.
* [8] Muratov, E. N. _et al._ QSAR without borders. _Chem. Soc. Rev._ 49, 3525–3564 (2020). URL http://dx.doi.org/10.1039/D0CS00098A.
* [9] Klucznik, T. _et al._ Efficient syntheses of diverse, medicinally relevant targets planned by computer and executed in the laboratory. _Chem_ 4, 522–532 (2018). URL http://www.sciencedirect.com/science/article/pii/S2451929418300639.
* [10] Law, J. _et al._ Route designer: a retrosynthetic analysis tool utilizing automated retrosynthetic rule generation. _J. Chem. Inf. Model._ 49, 593–602 (2009).
* [11] Schwaller, P., Gaudin, T., Lányi, D., Bekas, C. & Laino, T. “Found in translation”: predicting outcomes of complex organic chemistry reactions using neural sequence-to-sequence models. _Chem. Sci._ 9, 6091–6098 (2018).
* [12] Liu, B. _et al._ Retrosynthetic reaction prediction using neural Sequence-to-Sequence models. _ACS Central Science_ 3, 1103–1113 (2017).
* [13] Karpov, P., Godin, G. & Tetko, I. V. A transformer model for retrosynthesis. In Tetko, I. V., Kůrková, V., Karpov, P. & Theis, F. (eds.) _Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions_, 817–830 (Springer International Publishing, Cham, 2019).
* [14] Weininger, D. SMILES, a chemical language and information system. 1.
introduction to methodology and encoding rules. _J. Chem. Inf. Comput. Sci._ 28, 31–36 (1988).
* [15] Nam, J. & Kim, J. Linking the Neural Machine Translation and the Prediction of Organic Chemistry Reactions. _arXiv e-prints_ arXiv:1612.09529 (2016).
* [16] Sutskever, I., Vinyals, O. & Le, Q. V. Sequence to sequence learning with neural networks. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D. & Weinberger, K. Q. (eds.) _Advances in Neural Information Processing Systems 27_, 3104–3112 (Curran Associates, Inc., 2014).
* [17] Vaswani, A. _et al._ Attention is all you need. _ArXiv_ (2017).
* [18] Schwaller, P. _et al._ Molecular transformer: A model for Uncertainty-Calibrated chemical reaction prediction. _ACS Cent Sci_ 5, 1572–1583 (2019).
* [19] Zheng, S., Rao, J., Zhang, Z., Xu, J. & Yang, Y. Predicting retrosynthetic reactions using Self-Corrected transformer neural networks. _J. Chem. Inf. Model._ 60, 47–55 (2020).
* [20] Coley, C. W., Rogers, L., Green, W. H. & Jensen, K. F. Computer-Assisted retrosynthesis based on molecular similarity. _ACS Cent Sci_ 3, 1237–1245 (2017).
* [21] Ishida, S., Terayama, K., Kojima, R., Takasu, K. & Okuno, Y. Prediction and interpretable visualization of retrosynthetic reactions using graph convolutional networks. _J. Chem. Inf. Model._ 59, 5026–5033 (2019).
* [22] Jin, W., Coley, C., Barzilay, R. & Jaakkola, T. Predicting organic reaction outcomes with Weisfeiler-Lehman network. In Guyon, I. _et al._ (eds.) _Advances in Neural Information Processing Systems 30_, 2607–2616 (Curran Associates, Inc., 2017).
* [23] Coley, C. W. _et al._ A graph-convolutional neural network model for the prediction of chemical reactivity. _Chem. Sci._ 10, 370–377 (2019).
* [24] Dai, H., Li, C., Coley, C., Dai, B. & Song, L. Retrosynthesis prediction with conditional graph logic network. In Wallach, H. _et al._ (eds.) _Advances in Neural Information Processing Systems 32_, 8872–8882 (Curran Associates, Inc., 2019).
* [25] Shi, C., Xu, M., Guo, H., Zhang, M. & Tang, J. A graph to graphs framework for retrosynthesis prediction. _arXiv e-prints_ arXiv:2003.12725 (2020).
* [26] Weininger, D., Weininger, A. & Weininger, J. L. SMILES. 2. algorithm for generation of unique SMILES notation. _J. Chem. Inf. Comput. Sci._ 29, 97–101 (1989).
* [27] Kimber, T. B., Engelke, S., Tetko, I. V., Bruno, E. & Godin, G. Synergy Effect between Convolutional Neural Networks and the Multiplicity of SMILES for Improvement of Molecular Prediction. _arXiv e-prints_ arXiv:1812.04439 (2018).
* [28] Tetko, I. V., Karpov, P., Bruno, E., Kimber, T. B. & Godin, G. Augmentation is what you need! In Tetko, I. V., Kůrková, V., Karpov, P. & Theis, F. (eds.) _Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions_, 831–835 (Springer International Publishing, Cham, 2019).
* [29] Jannik Bjerrum, E. SMILES Enumeration as Data Augmentation for Neural Network Modeling of Molecules. _arXiv e-prints_ arXiv:1703.07076 (2017).
* [30] Karpov, P., Godin, G. & Tetko, I. V. Transformer-CNN: Swiss knife for QSAR modeling and interpretation. _J. Cheminform._ 12, 17 (2020).
* [31] Fortunato, M., Coley, C. W., Barnes, B. & Jensen, K. F. Data Augmentation and Pretraining for Template-Based Retrosynthetic Prediction in Computer-Aided Synthesis Planning (2020).
* [32] Chen, B., Shen, T., Jaakkola, T. S. & Barzilay, R. Learning to Make Generalizable and Diverse Predictions for Retrosynthesis. _arXiv e-prints_ arXiv:1910.09688 (2019).
* [33] Lin, K., Xu, Y., Pei, J. & Lai, L. Automatic retrosynthetic route planning using template-free models. _Chem. Sci._ 11, 3355–3364 (2020). URL http://dx.doi.org/10.1039/C9SC03666K.
* [34] Schwaller, P. _et al._ Predicting retrosynthetic pathways using transformer-based models and a hyper-graph exploration strategy. _Chem. Sci._ 11, 3316–3325 (2020). URL http://dx.doi.org/10.1039/C9SC05704H.
* [35] Tetko, I. V., Livingstone, D. J. & Luik, A. I. Neural network studies. 1. comparison of overfitting and overtraining. _Journal of Chemical Information and Computer Sciences_ 35, 826–833 (1995). URL https://pubs.acs.org/doi/abs/10.1021/ci00027a006.
* [36] Lowe, D. M. _Extraction of chemical structures and reactions from the literature_. Ph.D. thesis (2012).
* [37] Satoh, H. & Funatsu, K. Sophia, a knowledge base-guided reaction prediction system - utilization of a knowledge base derived from a reaction database. _Journal of Chemical Information and Computer Sciences_ 35, 34–44 (1995). URL https://pubs.acs.org/doi/abs/10.1021/ci00023a005.
* [38] Larock, R. C. _Comprehensive Organic Transformations: A Guide to Functional Group Preparations_ (Wiley-VCH, 1999).
* [39] Do, K., Tran, T. & Venkatesh, S. Graph transformation policy network for chemical reaction prediction. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, 750–760 (2019).
* [40] Shorten, C. & Khoshgoftaar, T. M. A survey on image data augmentation for deep learning. _Journal of Big Data_ 6, 60 (2019).
* [41] Landrum, G. RDKit: Open-source cheminformatics. http://www.rdkit.org.

Supplementary materials

## Methods

### Model architecture

Following our previous study [13], we used the Transformer [17] architecture to train all the models. The key component of the Transformer architecture is a self-attention block equipped with internal memory and attention. During the training phase the block extracts and structures the incoming data, splitting it into memory keys and associated values. Thus, the block resembles a library, where all the books (values) are referred to by an index (keys). On a new request, the model calculates the attention to the known keys and then extracts knowledge from the associated values proportionally. The Transformer shows excellent results not only on (retro)synthesis tasks [11, 13, 19] but also in ordinary classification and regression QSAR studies [30]. The performance of the Transformer was estimated by predicting the whole training set after each epoch. The five models with the highest fraction of correctly predicted training-set SMILES were stored. As a rule, the stored models corresponded to the latest epochs of training. The weights of the five stored models were averaged to form the final model, which was used to predict the reactions of the test sets. After several trials, we decided to use a Transformer architecture with 6 layers and 8 heads (6x8), as used in the original work [11]. We found that a smaller architecture with 3 layers and 8 heads (3x8), which was used in our previous study [13], required more epochs to converge and thus a longer overall training time to achieve the same performance. We restricted the training of the model to 100 epochs in order to perform the model development in a reasonable time and to preserve the possibility of comparing different augmentation approaches. For the final optimal architectures, we further investigated the effect of training time.
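A minimal PyTorch sketch of the checkpoint-averaging step described above is given below; it assumes each file stores a plain state_dict, and the file names are hypothetical.

```python
import torch

def average_checkpoints(paths):
    """Average the weights of several saved snapshots (e.g., the five
    models with the best training-set accuracy) into one state_dict."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location='cpu')
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    for k in avg:
        avg[k] /= len(paths)   # element-wise mean over all snapshots
    return avg

# Hypothetical usage with five stored snapshots:
# model.load_state_dict(average_checkpoints(
#     ['snap_096.pt', 'snap_097.pt', 'snap_098.pt', 'snap_099.pt', 'snap_100.pt']))
```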
### Influence of the batch size

The speed of the calculations with augmented data increased linearly with the dataset size. One epoch using the USPTO-50k set (40k reactions) took 82 s on a Titan V. Training with the augmented USPTO-full set (4.3M reactions) took 9514 s, i.e., approximately 120 times longer. The use of a larger batch size (in our work we formed batches of length ca. 3000 characters, which corresponded to approximately 12-15 reactions and required about 3.5 GB of GPU memory for the given Transformer configuration) could increase the speed of the calculations. However, we noticed that large batches (e.g., we tried a batch of length 30,000 characters on a Tesla V100 with 32 GB of memory) could decrease the speed of convergence. Therefore, for this study we used batches of 3000 characters.

### Beam search

When generating new SMILES, the Transformer predicted at each step the probabilities of all characters in its vocabulary. There are two common approaches to decoding from a linguistic model such as the Transformer. The first one, greedy search, always takes the element (symbol, word) with the maximum probability at each step. The second one, beam search, tracks several possible decodings (the beam size) in parallel and sorts them according to the sums of the logarithms of the probabilities of their elements. Thus, beam search can select decodings in which the element chosen at some step does not have the maximum probability but later symbols do, so that the overall sum is greater than in the greedy search setting. Beam searches with n=5 or n=10 beams were used to predict the test sets in the majority of the analyses performed in this study. As a result of a search with beam size n, the Transformer produced up to n SMILES. Because of the generation procedure, these were always unique sequences. Some of them, however, could be erroneous or could be different representations of the same SMILES.

### Augmentation

The datasets used in this study comprised both canonical and so-called augmented SMILES. Both canonical and augmented SMILES were generated using RDKit [41]. We contributed this SMILES augmentation method to RDKit at the end of 2018 [27, 28]. The augmented SMILES all represented valid structures, except that the starting atom and the direction of the graph enumeration were selected by chance. The augmentation increased the diversity of the training set. The baseline dataset contained only canonical SMILES. The other datasets also contained augmented SMILES, as summarized in Table A1. Four different scenarios were used to augment the training set sequences, using increasingly complex datasets as shown in Tables A1 and A2. Namely, we used augmentation of the products only (xN), augmentation of products and reactants/reagents (xNF), augmentation of products and reactants/reagents followed by shuffling of the order of the reactants/reagents (xNS), and, finally, mixed forward/reverse reactions, where each retrosynthesis reaction from xNS was followed by the inverse (forward synthesis) reaction (xNM). One more analysis was performed in which the Transformer was asked to predict a fixed random SMILES string (xNR). Only xN augmentations were used for the test sets, because no information about the reactants/reagents could be used for the retrosynthesis prediction.

Table A1: Augmentations of the analyzed training datasets.

Dataset | Description
---|---
xN | For N=1 the dataset contains canonical SMILES for reactants and products.
For N>1, in addition to one canonical SMILES, the dataset contains (N-1) instances of the same reaction with augmented SMILES for the products (input data). The SMILES of the reactants are canonical.
xNR | Products are encoded as canonical SMILES, but for the reactants a single randomly chosen SMILES was used.
xNF | The first instance of each reaction contained canonical SMILES, while the other (N-1) instances were augmented for both the input (products) and output (reactants) data. The order of the SMILES in the output data was not changed.
xNS | Same as xNF, but the order of the SMILES in the reactants was randomly shuffled.
xNM | Same as xNS, but the dataset also contained the same number of inverted (forward synthesis) reactions. The forward reactions started with “.” to distinguish them from the retro-synthetic ones.

Table A2: Examples of data augmentations for two reactions. Canonical SMILES are shown in bold.

Dataset | Input (product), output (reactants) data | Example
---|---|---
x0 | canonical, canonical | CC(c1ccc(Br)nc1)N(C)C,CC(=O)c1ccc(Br)nc1.CNC O=Cc1cncc(Br)c1,O=C(O)c1cncc(Br)c1
x2 | canonical,canonical random, canonical | CC(c1ccc(Br)nc1)N(C)C,CC(=O)c1ccc(Br)nc1.CNC n1c(Br)ccc(c1)C(N(C)C)C,CC(=O)c1ccc(Br)nc1.CNC O=Cc1cncc(Br)c1,O=C(O)c1cncc(Br)c1 c1(cncc(Br)c1)C=O,O=C(O)c1cncc(Br)c1
x2R | canonical, fixed random random, fixed random | CC(c1ccc(Br)nc1)N(C)C, c1cc(Br)ncc1C(=O)C.CNC n1c(Br)ccc(c1)C(N(C)C)C, c1cc(Br)ncc1C(=O)C.CNC O=Cc1cncc(Br)c1, c1c(cncc1C(=O)O)Br c1(cncc(Br)c1)C=O, c1c(cncc1C(=O)O)Br
x2F | canonical, canonical random, random | CC(c1ccc(Br)nc1)N(C)C, CC(=O)c1ccc(Br)nc1.CNC n1c(Br)ccc(c1)C(N(C)C)C, CC(=O)c1ccc(nc1)Br.CNC O=Cc1cncc(Br)c1, O=C(O)c1cncc(Br)c1 c1(cncc(Br)c1)C=O, c1c(cncc1C(=O)O)Br
x3S | canonical, canonical random, shuffled random, shuffled | CC(c1ccc(Br)nc1)N(C)C,CC(=O)c1ccc(Br)nc1.CNC n1c(Br)ccc(c1)C(N(C)C)C,CNC.CC(=O)c1ccc(nc1)Br CN(C(c1ccc(Br)nc1)C)C,CNC.c1cc(Br)ncc1C(O)C O=Cc1cncc(Br)c1,O=C(O)c1cncc(Br)c1 c1(cncc(Br)c1)C=O,c1c(cncc1C(=O)O)Br n1cc(cc(c1)C=O)Br,OC(=O)c1cncc(c1)Br
x2M | canonical, canonical .canonical, canonical random, shuffled .shuffled. random | CC(c1ccc(Br)nc1)N(C)C,CC(=O)c1ccc(Br)nc1.CNC .CC(=O)c1ccc(Br)nc1.CNC,CC(c1ccc(Br)nc1)N(C)C n1c(Br)ccc(c1)C(N(C)C)C,CNC.CC(=O)c1ccc(nc1)Br .CNC.CC(=O)c1ccc(nc1)Br,n1c(Br)ccc(c1)C(N(C)C)C O=Cc1cncc(Br)c1,O=C(O)c1cncc(Br)c1 .O=C(O)c1cncc(Br)c1,O=Cc1cncc(Br)c1 c1(cncc(Br)c1)C=O,c1c(cncc1C(=O)O)Br .c1c(cncc1C(=O)O)Br,c1(cncc(Br)c1)C=O

## Analysis of predicted SMILES

The beam search was used to infer n=5 (or more) reactant sets from the model for each entry in the test file. The SMILES predicted within the same beam search were sorted in decreasing order of their probabilities. Predictions containing erroneous SMILES representations, which could not be processed by RDKit, were discarded. The remaining predictions were converted to canonical SMILES. In cases where the predicted reaction contained several disconnected SMILES, these were sorted so as to always obtain the same representation. If two or more identical predictions were found for the same input, only the first prediction was kept: in this way we deduplicated the reactions predicted for the same input data. For augmented test datasets, the SMILES predicted for the same reaction were accumulated, and those with the largest number of occurrences were selected as the Top-ranked. If exactly the same number of predictions was found for two or more SMILES, the weights of the SMILES were set to be inversely proportional to their relative position in the respective beam search.
Precisely, to rank the predictions we used the following formula:

$rank(SMILES)=\sum_{n\in[0,\mathrm{augmentations})}\sum_{i\in[1,\mathrm{beam}]}\frac{\delta(SMILES_{n,i},TARGET)}{1.0+0.001\cdot i}$ (1)

where the first sum runs over the canonical (n=0) and augmented SMILES for the same input reaction. When the canonicalized target SMILES was equal to the canonicalized SMILES predicted at position i of the beam search for augmentation n, $\delta=1$. Otherwise, if the predicted and target SMILES did not coincide, $\delta=0$. The term $0.001\cdot i$ weighted the predicted SMILES inversely proportionally to their position in the beam search (see also Tables A3 and A4).

### Top-n performance accuracy

For each analysed input reaction we received a set of generated canonical SMILES (contributed by both the beam search and the augmentation procedure), which were ranked as explained above. If any of these Top-n sequences coincided with the target canonical SMILES of the analysed reaction, the prediction was considered correct. The Top-n accuracy was the ratio of the number of correct predictions to the total number of sequences in the test set.

Table A3: Illustration of the procedure used to rank predicted reactions.

Step | Input SMILES | Beam1,Beam2,Beam3
---|---|---
Initial prediction | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CC(C),C(C)CC,C(N)N CNN,CCC,CC= CC.CCC,CCC.CC,C#
Canonicalisation, sorting and error detection | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CCC,CCCC,CNN CNN,CCC,error CC.CCC,CC.CCC,error
Elimination of duplicates and errors | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CCC,CCCC,CNN CNN,CCC CC.CCC
Enumeration | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CCC(0),CCCC(1),CNN(2) CNN(0),CCC(1) CC.CCC(0)
Ranks, see Eq. 1. | | CCC = [1] + [1/(1+1/1000)] + [0] = 1.999 CNN = [1/(1+2/1000)] + [1] + [0] = 1.998 CC.CCC = [0] + [0] + [1] = 1 CCCC = [1/(1+1/1000)] + [0] + [0] = 0.999

The Top-2 ranked predictions are CCC and CNN.

Table A4: Illustration of the procedure used to rank predicted reactions when retaining multiple predictions within the same beam.

Step | Input SMILES | Beam1,Beam2,Beam3
---|---|---
Initial prediction | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CC(C),C(C)CC,C(N)N CNN,CCC,CC= CC.CCC,CCC.CC,C#
Canonicalisation, sorting and error detection | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CCC,CCCC,CNN CNN,CCC,error CC.CCC,CC.CCC,error
Elimination of errors | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CCC,CCCC,CNN CNN,CCC CC.CCC,CC.CCC
Enumeration | SMILES_CAN SMILES_AUG1 SMILES_AUG2 | CCC(0),CCCC(1),CNN(2) CNN(0),CCC(1) CC.CCC(0),CC.CCC(1)
Ranks, see Eq. 1. | | CCC = [1] + [1/(1+1/1000)] + [0] = 1.999 CNN = [1/(1+2/1000)] + [1] + [0] = 1.998 CC.CCC = [0] + [0] + [1] + [1/(1+1/1000)] = 1.999 CCCC = [1/(1+1/1000)] + [0] + [0] = 0.999

The Top-2 ranked predictions are CCC and CC.CCC.

The SMILES strings with the largest weights, i.e., those that appeared most frequently and at the earliest positions within the beam predictions, were selected as the Top-ranked ones. The Top-1 and Top-5 SMILES were used to estimate the prediction performances of the models.

### Analysis of stereochemistry-free datasets

About 20% of the reactions in the training and test sets contained molecules with stereochemistry. The stereochemistry was encoded in the SMILES with the “/”, ”\”, ”@” and ”@@” characters. However, a number of practical projects have relaxed stereochemistry requirements. Therefore, we separately reported the performance of the models for datasets with and without stereochemical information.
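The ranking of Eq. (1) can be reproduced with the short sketch below, which also replays the worked example of Table A3; the variable names are ours.

```python
from collections import defaultdict

def rank_predictions(beams):
    """Score canonicalised, deduplicated beam outputs with Eq. (1):
    each occurrence at beam position i adds 1 / (1.0 + 0.001 * i)."""
    scores = defaultdict(float)
    for beam in beams:                      # one beam per augmentation (n)
        for i, smiles in enumerate(beam):   # position within the beam
            scores[smiles] += 1.0 / (1.0 + 0.001 * i)
    return sorted(scores, key=scores.get, reverse=True)

# The Table A3 example: canonical input plus two augmentations.
beams = [['CCC', 'CCCC', 'CNN'],   # SMILES_CAN
         ['CNN', 'CCC'],           # SMILES_AUG1
         ['CC.CCC']]               # SMILES_AUG2
print(rank_predictions(beams)[:2])  # ['CCC', 'CNN']
```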
### Character and exact sequence performance during training

During the model training, we calculated a character-based performance, which corresponded to the number of exactly predicted characters of the target SMILES, as well as an exact sequence accuracy, indicating the number of correctly predicted exact sequences. Both of these measures were approximations of the final accuracy, for which the predicted SMILES were first converted to canonical ones and only afterwards compared to the target values.

Table A5: Examples of distributions of predicted SMILES.

Reaction | Frequency of SMILES | Ratio of the most frequent to all SMILES
---|---|---
CCOC(=O)C1CCCN(C(=O)COc2ccc(-c3ccc(C#N)cc3)cc2)C1>>CCOC(=O)C1CCCNC1. N#Cc1ccc(-c2ccc(OCC(=O)O)cc2)cc1 | 926* 51 7 6 2 1 1 1 1 1 1 1 | 926/999 = 0.93
CCCCC(=O)O>>CCCCC(=O)OC(=O)CCCC | 203 112 107 98 57 19 16 13 12 12 12 12 11 11 11 11 11 11 11 11 11 11 8 8 8 8 6 6 6 6 6 6 6 6 5 5 5 5 5 5 5 5 5 4 4 4 4 4 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 1 1 1 1 1 1 | 203/999 = 0.2

The star indicates the correctly predicted reaction. For the first reaction the most frequent SMILES was predicted 926 times, i.e., in 93% of all predictions. For this SMILES the Transformer was very confident in the outcome of the retrosynthesis, which it predicted correctly. For the second reaction the Transformer generated 78 different SMILES, and the Top-1 SMILES appeared in only 20% of all predictions. For this reaction the Transformer failed to predict the correct SMILES at all.

Unless otherwise noted, the results presented in the supplementary figures were calculated using the USPTO-50k set.

Figure A1: Top-1 accuracies calculated for models developed with different augmentation scenarios and using 40k sequences as the training set. All models were applied to the x1 (canonical) test and validation sets as they were defined in [10]. The performance of the models is similar for the training and validation sets and increases monotonically with the number of iterations. This observation was the main motivation to join the training and validation sets into a single set, which was used for model development.

Figure A2: Monitoring set accuracy (measured as character accuracy) of the Transformer for the prediction of canonical (x10) and random (x10R) SMILES for the USPTO-50k set (see Tables A1 and A2 for an explanation of the abbreviations used).

Figure A3: Top-1 full-sequence retrosynthesis accuracies calculated for models developed with different augmentation scenarios for the USPTO-50k training set. All models were applied to the x20 test set.

Figure A4: Accuracy of prediction of the SMILES generated at the respective positions of the beam search using the largest beam size of 44. The results were calculated for the test set using the model trained with 500 iterations on the USPTO-50k set. The use of canonical SMILES as input produced the highest accuracy (48.3%) for the first beam position, which degraded for the other positions, while the use of augmented SMILES provided about 44% correct predictions, slowly decreasing with increasing beam position. The number of erroneous SMILES increased with the beam position for both types of SMILES, but it was significantly higher for predictions with canonical SMILES as input.

Figure A5: Accuracy and density (fraction of predictions) of the Transformer for the MaxFrag Top-5 retrosynthesis accuracy as a function of the frequency of appearance of the five most frequent SMILES in the output of the Transformer (see also Fig. 4 in the article).
Due to the small number of samples and the high variability of the data, each leftmost datapoint shows the accuracy averaged over all predictions with the same or smaller frequencies. For example, for the USPTO-50k set the accuracy of 58.9% at frequency 0.6 was calculated by averaging the MaxFrag accuracies of SMILES with frequencies $\leq$ 0.6. There were 7.1% of such predictions in the test dataset.

Figure A6: Accuracy and density (fraction of predictions) of the Transformer for the MaxFrag Top-1 direct synthesis accuracy as a function of the frequency of appearance of the Top-1 SMILES in the output of the Transformer for the respective test sets of the models (see also Fig. 4 in the article).

### Calculation of the decrease of the relative errors

Let us assume that we can (theoretically) reach 100% accuracy. The Top-5 error of the previous model for the mixed set was 100 - 94.2 = 5.8%. The error of our model is 100 - 96.1 = 3.9%. The relative decrease of the error is therefore (5.8 - 3.9)/5.8 = 32.7%.
# Universal motion of mirror-symmetric microparticles in confined Stokes flow

Rumen N. Georgiev and Sara O. Toscano, Process and Energy Department, Delft University of Technology, The Netherlands

William E. Uspal, Department of Mechanical Engineering, University of Hawai'i at Manoa, USA

Bram Bet, Sela Samin and René van Roij, Institute for Theoretical Physics, Utrecht University, The Netherlands

Huseyin Burak Eral, Process and Energy Department, Delft University of Technology, The Netherlands, and Van’t Hoff Laboratory for Physical and Colloid Chemistry, Utrecht University, The Netherlands

Corresponding author: <EMAIL_ADDRESS>

Separation on the microscale is a persistent industrial challenge: pharmaceutical crystal polymorphs [1, 2], specific strains of yeast cells in the food industry [3], mammalian cells [4] and microplastic pollutants [5, 6] all come in different shapes, yet comparable sizes. Advances in microfluidics have resulted in robust and high-throughput methods for micron-scale segregation. These techniques rely on external force fields [7, 8], sorting based on fluorescence [9], intricate separator geometries [10, 11, 12, 13, 14, 15, 16, 17, 18] or carriers with non-Newtonian behaviour [19]. An alternative approach towards microscale separation is to leverage the long-range hydrodynamic interactions emerging from fluid-structure coupling [20, 21]. By tuning these interactions, the particle trajectory can be controlled, thus enabling separation [22]. A model system common in microfluidic applications, exhibiting such interactions, is confined Stokes flow in a Hele-Shaw cell. In it, particles or droplets are sandwiched between a pair of confining walls of a shallow microfluidic channel and are subjected to creeping flow [23]. Owing to the shallowness of the cell, the flow is effectively two-dimensional [24]. What is more, the particle scatters the surrounding fluid, creating a dipolar flow disturbance, which decays as $1/r^{2}$, where $r$ is the distance from the particle centre. This flow disturbance strongly couples the particle to its surroundings. Experimentally, creating and driving particles in shallow channels has become widely accessible with the advent of microfluidics and soft lithography [25, 26, 27, 28, 29, 30, 31]. Their easy fabrication and versatile out-of-equilibrium behaviour make particles in confined Stokes flow an interesting toy system for the study of flow-mediated separation and self-assembly [32, 33]. Utilizing long-ranged hydrodynamic interactions (HIs), Beatus et al. demonstrated how trains of ‘pancake’ droplets flow along a Hele-Shaw cell as out-of-equilibrium 1D crystals [34, 35]. In a similar experiment, Shen and co-workers compared the dynamics of clusters comprising 2 or 3 droplets as they interact near or far away from the side walls of the cell [36]. The presence of a side wall breaks the symmetry of the system and induces transversal motion of the cluster. Cross-streamline migration is also present if the symmetry of an individual particle, rather than that of an ensemble of particles, is reduced. A particle with two planes of mirror symmetry, such as a rod [37, 38] or a symmetric disk dimer [22], also moves towards one of the side walls of a Hele-Shaw cell, provided its long axis is neither normal nor parallel to the flow. As one such particle approaches the channel boundary, it begins to interact with its hydrodynamic image [39], the flow symmetry is reduced even further, and the particle begins to rotate.
All three modes of motion, namely, rotation, streamwise and cross-streamwise translation, are also present when an asymmetric disk dimer is far away from any side walls, as demonstrated by Uspal, Eral & Doyle [22]. Evidently, screened hydrodynamic interactions give rise to non-trivial behaviour not only in particle ensembles [40, 41, 42, 43, 44], but also in single-particle systems with broken symmetry [45, 46, 47, 48, 49, 50, 51]. A first step towards the development of low-cost flow separators requires understanding the relation between the geometry of one such particle and its trajectory in confined Stokes flow. In this study, we combine theoretical and experimental approaches to investigate how particle shape can be tailored to induce self-steering under flow in quasi-2D microchannels. Controlling the motion of a particle in flow facilitates its separation. To this end, we use optical microscopy to track the in-plane motion of a variety of particles with a single mirror plane subjected to creeping flow in a shallow microfluidic channel. The mirror plane is perpendicular to the top and bottom walls of the channel and bisects the particle into two identical pieces (white dashes in Fig. 1 (a)-(d)). Through finite element calculations we link the shape-dependent dynamics of the particles to the flow disturbances they create as they lag the far-field flow. Using the linearity of the Stokes equations and the force-free nature of the particles, we collapse their re-orientation and cross-streamwise dynamics onto two master curves. We accomplish this collapse by scaling each particle's angular and transversal velocities by two characteristic times. Finally, through minimalistic scaling relations we link these timescales to a particle's geometrical parameters including, but not limited to, area, moment of inertia and length. Our scaling arguments predict the characteristic times from both experiments and finite element computations up to a factor on the order of unity. This good agreement among experiments, simulations and scaling arguments is a strong indication that the observed dynamics is universal to mirror-symmetric particles in quasi-2D Stokes flow.

Figure 1: Mirror-symmetric particles in quasi-2D Stokes flow. Stop-flow lithography [29] produces strongly confined microparticles with various shapes in a Hele-Shaw cell (a-d). We investigate particles with a single mirror plane, each consisting of two or three simple building blocks such as disks, squares or triangles, connected with rigid shafts. These particles are a useful toy system to study how the geometry of a particle determines its trajectory. We demonstrate this strong shape dependence by comparing the trajectories of three particles with $R_{1}/R_{2}=1.5$: from top to bottom, a trimer with $\phi=90^{\circ}$, a dimer and a trimer with $\phi=68^{\circ}$ (e). The small arrows denote the orientation of the particles. The trajectories are obtained via 3D finite element calculations. We assume a planar Poiseuille profile along the height of the channel and Couette flow in the thin lubrication gaps of height $h_{\textrm{g}}$ (f). Due to channel symmetry, we only present half of a Hele-Shaw cell with the particle to scale. Upon depth-averaging, we arrive at the so-called Brinkman flow with steep velocity gradients near the horizontal walls and constant velocity $u$ along most of the channel width (g). In this top view the particle is magnified 2.5 times.
The streamlines in all three flow profiles are represented by horizontal blue arrows. Scale bars are 50 µm.

Figure 2: Particle-induced flow disturbances in a Hele-Shaw cell. As the particle thickness $H_{\mathrm{p}}=H-2h_{\mathrm{g}}$ is comparable to the channel height, $H_{\mathrm{p}}/H\simeq 0.8$, the particle lags the surrounding flow, creating shape-specific velocity and pressure disturbances (cf. arrows and density plots in a and b). As the disturbances differ, so too do the hydrodynamic forces and torques acting on each particle. While the streamwise forces $F_{x}$ on a dimer and a trimer have similar magnitudes (horizontal blue arrows), the drift forces $F_{y}$ and torques $T_{z}$ acting on them differ (vertical red arrows and clockwise green arcs, respectively). This shape-dependence of the forces and torque results in distinct linear and angular velocities, which manifest themselves in the different trajectories followed by different particles (cf. c, d and e). The orientation and scaled position $x/H$ as functions of scaled time $t\times u/H$ are strongly dependent on particle shape. The disturbances to the pressure and velocity fields, as well as the forces and torques on the particles, are calculated using a 3D finite element scheme [52]. In all sub-figures the flow is from left to right, as denoted by the white arrow in a. Scale bars are 50 µm.

To produce strongly confined polymeric particles with distinct shapes in a Hele-Shaw cell we use stop-flow lithography (SFL) [29], as depicted in Fig. 1 (a)-(d). In a nutshell, SFL creates particles by projecting the image of a mask onto a photoreactive fluid. We choose dimeric and trimeric particles, composed of, respectively, two or three simple shapes connected by rigid shafts. The building blocks for dimers are either disks, triangles or squares (Fig. 1 (b), (d) and (a)), while those for trimers are always disks (Fig. 1 (c)). In both cases one of the building blocks is larger, with a size ratio $\kappa\equiv R_{1}/R_{2}$, where $1<\kappa\leq 3$ and $R_{2}$ is the radius of the circle circumscribing the smaller shape. This asymmetry in the particle ensures its rotation even far away from any side walls [53]. The trimers have an additional geometrical parameter, namely, the angle $\phi$ formed between the three disks (Fig. 1 (c)). The vertex of $\phi$ is defined as the centre of the larger disk, while the two rays starting from it point to the centres of the smaller, equally-sized disks. By changing $\phi$ we gain additional control over the dynamics of the particles (Fig. 1 (e)). The geometry of the particle profoundly influences its trajectory: particles with identical starting positions, yet slightly different geometries, follow dramatically different paths, as demonstrated numerically in Fig. 1 (e). As the particles are created in situ, we directly track their motion in the viscous fluid by moving the stage of an optical microscope. We set the system in motion by applying a small pressure drop across the channel, thus inducing creeping flow with a Reynolds number $Re\sim 10^{-5}$. This flow regime, together with the large aspect ratio of the channel, $W/H>15$, allows us to average out the parabolic profile expected along the channel height (Fig. 1 (f)). Thus, the particle is effectively subjected to an in-plane potential flow with steep velocity gradients near the side walls of the channel and a constant velocity $u$ for most of its width [54] (Fig. 1 (g)).
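To make the depth-averaging step concrete, the short Python sketch below evaluates a Brinkman-type profile across the channel width: a plug-like depth-averaged velocity with thin boundary layers of width $\sim H/\sqrt{12}$ near the side walls. The closed-form profile and the numerical values (channel dimensions taken from the Methods section) are our own illustrative assumptions, not output of the paper's finite element scheme.

```python
import numpy as np

H = 30e-6       # channel height [m] (assumed, from the Methods section)
W = 512e-6      # channel width [m]
u_bulk = 55e-6  # depth-averaged plateau velocity [m/s]

delta = H / np.sqrt(12.0)              # Brinkman screening length
y = np.linspace(-W / 2, W / 2, 1001)   # coordinate across the channel width

# Depth-averaged (Brinkman) profile: flat in the bulk, with steep gradients
# within ~delta of the side walls, qualitatively matching Fig. 1 (g).
u = u_bulk * (1.0 - np.cosh(y / delta) / np.cosh(W / (2.0 * delta)))

print(f"screening length delta = {delta * 1e6:.1f} um")  # ~8.7 um << W/2
```

For these channel dimensions the screening length is tiny compared to the half-width, which is why the velocity is essentially constant over most of the channel.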
Apart from preventing sticking, the fluid layers with thickness $h_{\mathrm{g}}$ present above and below the particle strongly affect its motion (Fig. 1 (f) and its inset). As the particle moves along the channel with a longitudinal velocity $\dot{x}$, it experiences additional drag, because it shears the lubricating fluid in the gaps. Due to the strong particle confinement, the velocity profile in the gaps is close to linear [38, 52], allowing us to assume Couette flow in the gaps (Fig. 1 (f)). The drag from the confining walls $F_{x,\mathrm{w}}$ scales with $2\dot{x}\eta/h_{\mathrm{g}}$ and slows down the particle, where $\eta$ is the dynamic viscosity of the fluid. Furthermore, it ensures the particle is confined to the plane of the flow, because any tilt or out-of-plane motion results in an additional force acting on either face of the particle. Thus, the particle exhibits three degrees of freedom: translation along the length $x$ and width $y$ of the channel and in-plane rotation $\theta$ (Fig. 1 (g)).

The particle lags the flow, perturbing the velocity field, and as a result pressure builds up on the upstream particle surface. This flow disturbance is strongly dependent on the particle shape (cf. (a) and (b) in Fig. 2). To illustrate this phenomenon, we use finite element computations [52] to calculate the forces and torque acting on two distinctly shaped particles with $\kappa=1.6$: a dimer and a trimer with $\phi=120^{\circ}$. We impose a unidirectional inlet flow with height-averaged velocity $u$ and prescribe a longitudinal velocity $\dot{x}=u/2$ to each particle. We orient the particles in such a way that their mirror axes form an angle $\theta=60^{\circ}$ with the flow. The particle heights $H_{\mathrm{p}}$ in both cases are equal and comparable to the channel height $H$, with $H_{\mathrm{p}}/H\sim 0.8$. While the longitudinal forces $F_{x}$ acting on the two shapes are nearly identical ($F^{\mathrm{D}}_{x}/F^{\mathrm{T}}_{x}=0.99$), the torques differ – the dimer experiences a smaller torque, $T^{\mathrm{D}}_{z}/T^{\mathrm{T}}_{z}=0.81$. The superscripts ‘D’ and ‘T’ refer to ‘dimer’ and ‘trimer’. The difference in the transversal forces $F_{y}$ is even more evident, as their direction also changes: $F^{\mathrm{D}}_{y}/F^{\mathrm{T}}_{y}=-0.67$. This disparity can be traced back to the pressure disturbance created by each particle – the larger the disturbance, the larger the forces.

Figure 3: Universal behaviour of mirror-symmetric particles. Regardless of their detailed shape, all studied particles follow a universal trajectory. They exhibit the same quantitative behaviour as long as we take into account two characteristic times, $\tau$ and $\tau_{y}$, scaling their modes of motion [52]: an exponentially-decaying rotation $\theta(t)$ to orient with the big disk upstream (top curves) and a bell-shaped translation in the lateral direction $y\left(t\right)-y\left(t_{\perp}\right)$ (bottom curves). The only geometrical element common to all studied particles is their single plane of mirror symmetry. In all cases, the error bars denoting experimental uncertainty are smaller than the symbols and are omitted. The particles and their motion are sketched in the middle section of the figure. In the legend, disk, square and triangle dimers are dubbed ‘dumbbell’, ‘squares’ and ‘flask’ for brevity. Disk trimers are denoted as ‘tripod’.
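As a rough order-of-magnitude check of the Couette wall drag introduced above, the sketch below evaluates $F_{x,\mathrm{w}}\approx 2\eta\dot{x}S_{\mathrm{p}}/h_{\mathrm{g}}$ for a typical particle. The viscosity and gap height are taken from the Methods section; the particle speed and effective size are assumed illustrative values.

```python
import numpy as np

# Assumed, illustrative values (viscosity and gap height from the Methods;
# particle speed and size are rough estimates):
eta = 95e-3         # dynamic viscosity of the prepolymer [Pa s]
h_g = 2.5e-6        # lubrication gap height [m]
x_dot = 27e-6       # particle speed [m/s], roughly u/2
R = 25e-6           # effective particle radius [m]
S_p = np.pi * R**2  # particle face area [m^2]

# Couette shear stress on each face ~ eta * x_dot / h_g; two faces in total:
F_wall = 2.0 * eta * x_dot / h_g * S_p
print(f"wall drag ~ {F_wall * 1e9:.1f} nN")  # a few nN for these numbers
```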
The shape-dependence of the disturbances manifests itself in the distinct dynamics of different particles, as shown in Fig. 1 (e). To demonstrate this distinction experimentally, we compare the motion of three particles with different shapes, which have one and the same initial position and orientation, $x/H$, $y/H$ and $\theta_{0}=7\pi/9$, respectively (Fig. 2 (c), (d) and (e)). While all three particles rotate to orient their larger building block upstream, only the dimers experience a significant lateral drift. Nagel et al. [38] report a similar coupling between longitudinal and transversal motion for symmetric rods, which drift at a constant velocity as they flow downstream. However, cross-streamwise motion is orientation-dependent, resulting in a non-linear cross-stream trajectory when an asymmetric dimer rotates: as our particles become perpendicular to the flow, their transversal velocities diminish. Moreover, after acquiring this perpendicular orientation, both particles change the direction of their lateral motion (cf. panel 3 in Fig. 2 (c) and panel 2 in Fig. 2 (d)). The coupling between rotation and translation explains why the disk dimer moves further away from its initial position, $\Delta y_{\mathrm{max}}(t\times u/H=60)\sim 1.5H$, compared to the square dimer, which covers half of that distance in half the time (cf. panel 3 in Fig. 2 (c) and panel 2 in Fig. 2 (d)). Due to its slower rotation, the disk dimer spends a longer time crossing streamlines before orienting perpendicular to the flow and starting to move in the opposite direction. This reasoning does not, however, answer the question of why the trimer experiences negligible drift, even though its rotational velocity is comparable to that of the disk dimer.

Evidently, the observed coupling among the modes of translation and the rotation is a hallmark of low-symmetry particles in a flow [55]. Mathematically, we represent this inter-dependence using a resistance tensor $\textsf{{R}}_{\mathrm{p}}$, a symmetric matrix with size equal to the number of degrees of freedom a particle exhibits (Supplementary Text 1A). The resistance tensor relates the hydrodynamic forces and torque a stationary fluid ($u=0$) exerts on a particle, which translates through it with velocities $\dot{x}$ and $\dot{y}$, while also rotating at a rotational velocity $\dot{\theta}$ [56, 57]:

$\begin{pmatrix}F_{x}\\ F_{y}\\ T_{z}\end{pmatrix}=-\eta\textsf{{R}}_{\mathrm{p}}\cdot\begin{pmatrix}\dot{x}\\ \dot{y}\\ \dot{\theta}\end{pmatrix},\text{with}\ \textsf{{R}}_{\mathrm{p}}\sim\begin{pmatrix}l_{xx}&l_{xy}&l_{x\theta}^{2}\\ l_{yx}&l_{yy}&l_{y\theta}^{2}\\ l_{\theta x}^{2}&l_{\theta y}^{2}&l_{\theta\theta}^{3}\end{pmatrix}.$ (1)

We present each component of $\textsf{{R}}_{\mathrm{p}}$ in terms of arbitrary length scales $l_{ij}$ to demonstrate one of its defining features – much like Stokes flow itself, the resistance tensor is time-independent and defined purely by geometry. If the particle possesses only a single mirror plane, all nine components of $\textsf{{R}}_{\mathrm{p}}$ are generally non-zero, reflecting the entwined nature of its modes of motion (Supplementary Text 1C). Conversely, for a rod the $l_{ij}^{2}$ components become zero, since its coupled translational modes are unaffected by rotation. Particles with an even higher symmetry, such as disks, have all three modes independent of each other and their resistance tensors are diagonal matrices.
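A minimal numerical illustration of Eq. (1): given a resistance tensor, one can evaluate the hydrodynamic force and torque for prescribed velocities and, conversely, solve for the force- and torque-free velocities of a particle held in a flow, as done later in the Methods. The tensor and the numbers below are placeholders, not computed values for any of the studied shapes.

```python
import numpy as np

eta = 1.0  # dynamic viscosity (set to unity, as in the finite element setup)

# Hypothetical resistance tensor for a particle with one mirror plane:
# symmetric, with all couplings non-zero (values are illustrative only).
R_p = np.array([
    [5.0, 0.3, 0.8],
    [0.3, 6.0, 1.1],
    [0.8, 1.1, 2.5],
])

# Forces/torque from Eq. (1) for prescribed velocities (x_dot, y_dot, theta_dot):
v = np.array([0.5, 0.0, 0.0])
F = -eta * R_p @ v

# Conversely, the force- and torque-free velocities of a particle on which the
# flow exerts (F0x, F0y, T0z) when the particle is held stationary:
F0 = np.array([2.0, -0.4, 0.6])
v_free = -np.linalg.solve(eta * R_p, F0)
print(F, v_free)
```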
Utilizing the concept of the resistance tensor together with Stokes linearity, we recently derived equations of motion for a force-free mirror-symmetric particle subjected to confined Stokes flow [58] (Supplementary Text 1B). Both equations, as presented in [58], seemingly depend on the initial orientation of the particle $\theta_{0}$. However, once we realize that Stokes flow is time-reversible, $\theta_{0}$ becomes an arbitrary reference angle. For convenience, we set $\theta_{0}=\pi/2$, resulting in:

$\theta\left(t\right)=2\arctan\left[\exp\left(-\frac{t-t_{\perp}}{\tau}\right)\right]$ (2)

and

$y\left(t\right)=y\left(t_{\perp}\right)+2H\frac{\tau}{\tau_{y}}\left[\mathrm{sech}\left(\frac{t-t_{\perp}}{\tau}\right)-1\right],$ (3)

where $t_{\perp}=t\left(\theta=\pi/2\right)$ denotes the time at which the particle is perpendicular to the flow. The two timescales, $\tau$ and $\tau_{y}$, are characteristic of the re-orientation and cross-stream migration of each particle. Numerically, they can be computed directly from the resistance tensor [58], and just like $\textsf{{R}}_{\mathrm{p}}$, they are purely geometrically determined. Furthermore, Eq. 3 captures the coupling between rotation and translation, because the particle path depends on both timescales. The generality of these equations of motion points to their validity for a wide range of particle shapes, provided they have at least one plane of mirror symmetry. The equations also hold for particles that do not rotate – shapes with more than one mirror plane have an infinitely large $\tau$ and translate at a constant lateral velocity (Supplementary Text 1C).

To test the validity of these equations, we produce a variety of disk dimers and track their motion as they rotate from $\theta\sim 0.85\pi$ to $\theta\sim 0.10\pi$. Upon comparing the obtained raw experimental trajectories, we see a qualitative similarity (Fig. S4). However, as some particles rotate more slowly than others, the overall paths the particles follow differ considerably in quantitative terms. We fit Eqs. 2 and 3 to the observed trajectories and extract the two characteristic times for each particle, as discussed in Supplementary Text 2. Finally, we transform experimental time to $\left(t-t_{\perp}\right)/\tau$ for each shape and compare the angle evolution for the set of dimers (top curve in Fig. 3 (a)). The re-orientation dynamics of the studied disk dimers not only agree quantitatively – they seem to be independent of the exact particle shape, as evident from the collapsed experimental data, which closely follow Eq. 2, as well as the 3D finite element computations. This apparent shape-independence implies that the characteristic time captures all geometric details of a particle. By condensing them in $\tau$ and factoring them out, we are left with the general dynamics determined by the mirror symmetry and described well by our equation for $\theta\left(t\right)$. This notion is reaffirmed once we take a look at the lateral motion of the disk dimers (bottom curve in Fig. 3 (a)). Their cross-streamwise motion also appears shape-independent once we use $\left(t-t_{\perp}\right)/\tau$ instead of experimental time and scale their lateral displacement by the channel height and the characteristic times. Even when the lateral motion of a particle deviates from the one predicted by Eq. 3, the deviation can be traced back to the re-orientation dynamics.
Some dimers stop rotating before their mirror axes align with the flow direction, leading to a decoupling of rotation and translation. Thus, they begin to behave as rods with a finite cross-stream velocity even at long timescales [38]. A possible reason for these deviations is interaction with hydrodynamic images if the particle comes too close to the wall. Additionally, artefacts of the lithography process, such as a slight asymmetry in the particle itself or dust of size comparable to $h_{\mathrm{g}}$, are other possible culprits. We test these notions by simulating the full trajectory of a dimer whose experimental behaviour deviates from the theoretical prediction. Since the 3D finite element results are well-described by the equations of motion and agree with the experimental trajectories, we conclude that the observed deviations are indeed experimental artefacts.

Encouraged by the close agreement between theory and experiments in Fig. 3 (a), we broaden our scope to mirror-symmetric particles of various shapes. Replacing the disks with pointy building blocks such as squares and triangles leads to different timescales, but does not affect the general particle dynamics (Fig. 3 (b)). Increasing the number of building blocks has the same effect – trimers with different size ratios and inter-disk angles also behave identically once we isolate the geometrical details condensed in $\tau$ and $\tau_{y}$. This universality, remarkable as it is, is not entirely unexpected – Eqs. 2 and 3 are derived with the sole assumptions of a force- and torque-free particle with a mirror plane moving in creeping flow. Moreover, our findings suggest we should expect this type of dynamics from any particle that has at least one mirror plane and is subjected to confined Stokes flow. Our reasoning also raises the question of how an asymmetric particle behaves, for instance, a trimer where all three disks have different radii (Fig. S3). One such shape rotates until it acquires a stable orientation $\theta_{\infty}\neq 0$, as discussed in Supplementary Text 1C. However, since the flow disturbance it creates is asymmetric, the particle has a non-zero lateral velocity even after it has ceased re-orienting [58, 48].

Figure 4: Relation of the characteristic timescales to particle geometry. The (a) rotation and (b) translation timescales needed to fully describe particle motion via Eqs. 2 and 3 are solely dependent on the geometry of the system. For identical flow parameters, such as the depth-averaged flow velocity $u$, gap thickness $h_{\textrm{g}}$ and channel height $H$, the detailed shape of the particle determines $\tau$ and $\tau_{y}$. The rotational timescale depends on the polar moment of inertia $I_{\mathrm{p}}$ of the particle, its area $S_{\mathrm{p}}$ (yellow particle sketch), its projected length when perpendicular to the flow $L_{\perp}$ and the distance $r_{\mathrm{arm}}$ spanning from the centroid $\textit{C}_{0}$ to the centre of perimeter $\textit{C}_{\textrm{p}}$. We obtain the translational timescale via the area of the particle and its projected lengths $L_{\perp}$ and $L_{\parallel}$ when its mirror plane is perpendicular or parallel to the flow, respectively. The vertical error bars represent the standard deviation of the timescales within an experimental series (Table S1). The horizontal error bars are calculated from the uncertainty of the confinement $\tilde{h}$. The dashed lines are a guide to the eye.
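The extraction of $\tau$ and $\tau_{y}$ described above can be sketched as a least-squares fit of Eqs. 2 and 3 to a tracked trajectory. The snippet below uses synthetic data as a stand-in for the experimental tracks; the actual fitting pipeline is detailed in Supplementary Text 2, so treat this only as a minimal illustration of the functional forms.

```python
import numpy as np
from scipy.optimize import curve_fit

H = 1.0  # the channel height sets the length scale

def theta_model(t, tau, t_perp):
    # Eq. (2): exponentially decaying re-orientation
    return 2.0 * np.arctan(np.exp(-(t - t_perp) / tau))

def y_model(t, tau, tau_y, t_perp, y_perp):
    # Eq. (3): bell-shaped lateral excursion; sech(x) = 1/cosh(x)
    return y_perp + 2.0 * H * (tau / tau_y) * (
        1.0 / np.cosh((t - t_perp) / tau) - 1.0)

# Placeholders for a tracked trajectory (synthetic data plus small noise):
t_data = np.linspace(0.0, 100.0, 200)
theta_data = theta_model(t_data, 12.0, 40.0) + 0.01 * np.random.randn(t_data.size)
y_data = y_model(t_data, 12.0, 30.0, 40.0, 0.0) + 0.005 * np.random.randn(t_data.size)

(tau_fit, t_perp_fit), _ = curve_fit(theta_model, t_data, theta_data,
                                     p0=[10.0, 50.0])
(tau2, tau_y_fit, tp2, yp2), _ = curve_fit(y_model, t_data, y_data,
                                           p0=[tau_fit, 20.0, t_perp_fit, 0.0])
print(tau_fit, tau_y_fit)
```

Rescaling the fitted trajectories by these two times is what collapses the data onto the master curves of Fig. 3.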
Though we have a rigorous description of the general trajectory of a mirror-symmetric particle, its exact motion still depends on two timescales. Up to now we have obtained $\tau$ and $\tau_{y}$ as fitting parameters in Eqs. 2 and 3. However, knowing their values a priori opens the door towards tailoring the shape of a particle to a desired trajectory. One possible way to obtain this target-specific shape is to survey a large variety of particles, compute their resistance tensors and estimate $\tau$ and $\tau_{y}$ [58]. As robust as this method is, it is not particularly insightful, as it does not yield an explicit relation between the timescales and a particle’s geometric parameters. By considering imbalanced rods, we propose scaling arguments linking the timescales $\tau$ and $\tau_{y}$ of a particle to its geometry. We do so by first noting that $\tau$ and $\tau_{y}$ are functions of the particle velocities $\dot{\theta}$ and $\dot{x}$ at specific orientations: $\tau_{\mathrm{scaling}}=-1/\dot{\theta}\left(\theta=\pi/2\right)$ and $\tau_{y\mathrm{,scaling}}=2H/(\dot{x}_{\perp}-\dot{x}_{\parallel})$, as discussed in Supplementary Text 1D. The subscripts of the streamwise velocities denote particle orientation: $\dot{x}_{\perp}=\dot{x}\left(\theta=\pi/2\right)$ and $\dot{x}_{\parallel}=\dot{x}\left(\theta=0\right)$. To compute the three velocities, we make use of the force- and torque-free nature of the particle. At any instant in time, the angular momentum it gains from the in-plane flow is dissipated as Couette torque from the confining walls above and below its faces: $T_{\mathrm{f}}+T_{\mathrm{w}}=0$. We write a similar balance for the streamwise force – the drag from the surrounding fluid and the friction from the confining walls cancel: $F_{x\mathrm{,f}}+F_{x,\mathrm{w}}=0$. In Supplementary Text 4 we propose linear scaling expressions for each torque and force. We solve the two balances for the three velocities and substitute the solutions in the expressions for the two timescales:

$\tau_{\mathrm{scaling}}\simeq\frac{1}{6\tilde{h}H_{\mathrm{p}}}\times\frac{H}{u}\times\frac{\sqrt{\pi}I_{\mathrm{p}}}{r_{\mathrm{arm}}L_{\parallel}\sqrt{S_{\mathrm{p}}}}$ (4)

and

$\tau_{y,\mathrm{scaling}}\simeq 2H\times\frac{1}{6\tilde{h}H_{\mathrm{p}}}\times\frac{H}{u}\times\frac{\sqrt{\pi S_{\mathrm{p}}}}{L_{\perp}-L_{\parallel}}\frac{L_{\perp}}{L_{\parallel}}.$ (5)

The proposed scaling relations provide estimates for $\tau$ and $\tau_{y}$ by reducing the particle geometry to a handful of parameters: the projected lengths $L_{\perp}$ and $L_{\parallel}$, the moment of inertia $I_{\mathrm{p}}$, the area $S_{\mathrm{p}}$, as well as other geometrical parameters. These parameters are illustrated in the insets of Fig. 4 and detailed in Table 1.

Table 1: Scaling expressions for the longitudinal forces and in-plane torques acting on a particle in confined Stokes flow. The particle moves at velocity $\dot{x}_{i}$ while rotating with frequency $\dot{\theta}$ in a fluid with depth-averaged flow velocity $u$. The subscript $i\equiv\perp\vee\parallel$ denotes orientation. The forces and torques depend on the particle geometry through its area $S_{\mathrm{p}}$, polar moment of inertia $I_{\mathrm{p}}$, thickness $H_{\mathrm{p}}$, confinement $\tilde{h}$ and projected length $L_{i}$. The gap height $h_{\mathrm{g}}=\tilde{h}H$ is made dimensionless with the height of the channel $H$. The two-dimensional projected lengths $L_{\perp}$ and $L_{\parallel}$ are sketched in Fig. 1 (g) and Fig. 4 (b).
We define $r_{\mathrm{arm}}$ as the distance between the centroid of a particle $C_{0}$ and its centre of perimeter $C_{\mathrm{p}}$ and sketch it in Fig. 4 (a).

| | Fluid | Wall |
|---|---|---|
| $F_{i}\sim$ | $\displaystyle 12\eta\frac{u}{H^{2}}H_{\mathrm{p}}L_{\parallel}\sqrt{\frac{S_{\mathrm{p}}}{\pi}}\times\frac{L_{i}}{L_{\perp}}$ | $\displaystyle-\frac{2\eta}{h_{\mathrm{g}}}\dot{x}_{i}S_{\mathrm{p}}$ |
| $T\sim$ | $\displaystyle 12\eta\frac{u}{H^{2}}H_{\mathrm{p}}L_{\parallel}\sqrt{\frac{S_{\mathrm{p}}}{\pi}}\times r_{\mathrm{arm}}$ | $\displaystyle-\frac{2\eta}{h_{\mathrm{g}}}\dot{\theta}I_{\mathrm{p}}$ |

We verify the scaling models by comparing our experimental timescales to the ones computed via Eqs. 4 and 5 in Fig. 4. We complement this comparison with numerical timescales, computed via 3D FEM, and present them in Figs. S18 and S19. The scaling relation for $\tau_{\mathrm{scaling}}$ overestimates $\tau_{\mathrm{exp}}$ by a factor of 1.25, while $\tau_{y\mathrm{,scaling}}$ underestimates $\tau_{y\mathrm{,exp}}$ by a factor of 1.5. This mismatch is to be expected, as the proposed minimalistic scalings strip the particles of any geometric detail. A possible remedy is the incorporation of mean particle curvatures, which, however, comes at the expense of model simplicity. Though we determine the two timescales only up to a scaling factor of order 1, Eqs. 4 and 5 accurately predict when $\tau$ or $\tau_{y}$ diverge and when $\tau_{y}$ becomes negative. In some trivial cases, particles cease to rotate and $\tau\to\infty$ when they are either too thick ($\tilde{h}\to 0$), too thin ($H_{\mathrm{p}}\to 0$) or there is no flow ($u\to 0$). The timescale also diverges when the distance between the centroid and the centre of perimeter vanishes ($r_{\mathrm{arm}}\to 0$). Particles with more than one mirror plane – rods, symmetric dimers and disks – all have $r_{\mathrm{arm}}=0$. Similarly, particles do not cross streamlines when their two projected lengths match, $L_{\perp}=L_{\parallel}$. One such particle is a trimer with $\kappa=1.5$ and $\phi\approx 68^{\circ}$, which rotates without drifting away from the centre-line of the channel, as demonstrated by finite element computations in Fig. 1 (e). We also observe this phenomenon experimentally: the trimer with $\kappa=1.84$ and $\phi=51^{\circ}$ barely moves in the lateral direction (Fig. 2 (e)). Its large $\tau_{y}$, dampening its lateral motion, is due to its comparable projected lengths. Furthermore, $\tau_{y}$ may become negative for trimers with large $\phi$, as demonstrated in Fig. 1 (e). This change in drift direction is present experimentally for a trimer with $\kappa=1.5$ and $\phi\approx 90^{\circ}$ and is the reason why we compare $\left|\tau_{y\mathrm{,exp}}\right|$ to $\left|\tau_{y\mathrm{,model}}\right|$ in Fig. 4 (b). The applicability of the proposed scaling relations to a wide range of particles with different geometries and symmetries supports the main conclusion of our work: in confined Stokes flow, particles with at least one mirror plane behave identically as long as we scale their trajectories by characteristic times directly related to their shape. The proposed scaling can be utilized to predict particle trajectories based on minimalistic scaling arguments.

## Conclusion

In summary, by combining experiments, simulations and theory, we investigate how the trajectory of a confined particle subjected to Stokes flow is determined by its geometry.
We observe that particles with a single mirror plane exhibit qualitatively similar behaviour: they rotate in-plane to align their mirror axis with the flow and their larger building block upstream, all the while crossing streamlines. However, the timescales over which this dynamics happens are strongly dependent on particle shape. We fit our experimental trajectories and finite element calculations to theoretical equations of motion we have recently derived, thus extracting characteristic rotational and translational times for each particle. By scaling experimental time by the respective rotational timescale for each experiment, we collapse the evolution of the orientation for all particles onto a single curve. Similarly, we obtain a universal bell-shaped path by scaling real time and a particle’s cross-streamline velocity. Finally, we propose minimalistic scaling relations linking the characteristic times of a particle to its geometry. We strip the particles of all geometrical details and treat them as imbalanced rods, thus reinforcing the idea that it is solely their symmetry that defines their overall dynamics. Our observations suggest the trajectories are universal for particles with at least one mirror plane. This finding deepens our understanding of fluid-structure interactions in confined Stokes flow. Moreover, it opens new opportunities in lab-on-chip and industrial applications, enabling shape-based separation of suspended particles solely through hydrodynamic interactions.

## Methods

### Experimental setup

Polymeric microparticles are produced and observed with an experimental setup similar to the one used by Uspal, Eral and Doyle [22]. Polydimethylsiloxane (PDMS, Sylgard®184, Dow Corning) microfluidic devices of width $W=512\pm 2$ µm are fabricated according to Dendukuri et al. [59]. Disk dimers are tracked in channels with height $H=30\pm 1$ µm. Trimers, triangle and square dimers are tracked in 33 µm high channels. A UV-crosslinking oligomer, poly-(ethyleneglycol) diacrylate (PEG-DA, $M_{n}=700$, $\eta=95$ mPa s, Sigma-Aldrich), is mixed with a photoinitiator, 2-hydroxy-2-methylpropiophenone (Darocur®1173, Sigma-Aldrich), in a 19:1 volume ratio and the mixture is pumped through the microfluidic channel. The device, loaded with prepolymer, is mounted on the stage of a motorized Nikon Ti Eclipse inverted optical microscope. A photolithographic mask with a well-defined shape is inserted as a field stop. Mask designs are made in Wolfram Mathematica® and post-processed in Dassault Systèmes’ DraftSight®.

### Particle production and tracking

Microparticles are produced by shining a 100 ms pulse of UV light through the mask onto the channel, thus confining photopolymerization to a discrete part of the prepolymer mixture. Oxygen, diffusing through the permeable PDMS walls of the device, inhibits polymerization in their vicinity [59]. This facilitates the formation of two thin lubrication layers, $h_{\mathrm{g}}=2.5\pm 0.5$ µm, which separate the particles from the confining walls of the channel. Particles are produced and observed with a 20X lens.
The microparticle is set in motion by applying a pressure drop $\Delta p\approx 1.5$ kPa across the channel, resulting in a depth-averaged flow velocity $u=55$ µm/s for the shallower channel and $u=70$ µm/s for the 33 µm high channel. The particle is tracked by moving the automated microscope stage in a stepwise manner. The positions and orientations of particles containing disks are extracted from the acquired time series using a custom-written MATLAB script, which employs circular Hough transforms to identify the particle shape in each frame. The script utilizes MATLAB’s Bio-Formats package [60] and the calcCircle tool. Particles comprising triangles and squares are tracked by fitting an ellipse to them, calculating the angle and detecting their straight edges.

### Finite element computations

All computational results are obtained through the finite element method as implemented in the Creeping Flow module of COMSOL Multiphysics 5.3, which we couple to MATLAB via LiveLink. Each solution is carried out on a single computational node fitted with an Intel Xeon E5-2620 v4 @ 2.10GHz CPU and 64 GB memory. Technical details regarding geometry building, meshing and solver settings are given in Supplementary Text 3 [61, 62, 63]. We use the channel height $H=1$ as a length scale. We set the inlet flow velocity $u$, the dynamic viscosity of the fluid $\eta$ and its mass density $\rho$ to unity. Although this corresponds nominally to $Re=1$, we simulate creeping flow by neglecting the inertial term in the momentum equation and solving the Stokes equation with no external forcing:

$\nabla\cdot\left(-p\textsf{{I}}+\eta\left(\nabla\bm{U}_{\mathrm{f}}+\nabla\bm{U}_{\mathrm{f}}^{\intercal}\right)\right)=0,$

$\nabla\cdot\bm{U}_{\mathrm{f}}=0,$

where we solve for $\bm{U}_{\mathrm{f}}$ and $p$, the fluid velocity and pressure fields. We integrate the total stress over the particle surface to obtain the forces and torque acting on it at a given position and orientation with respect to the flow. To compute the force- and torque-free velocities of the particle at this configuration, we numerically solve the force balance:

$\begin{pmatrix}\dot{x}\\ \dot{y}\\ \dot{\theta}\end{pmatrix}=-\frac{1}{\eta}\textsf{{R}}_{\mathrm{p}}^{-1}\cdot\bm{F}_{0},$

where $\bm{F}_{0}$ contains the forces and torque acting on a stationary particle in a flow and $\textsf{{R}}_{\mathrm{p}}$ is the resistance tensor for this configuration (Supplementary Text 1A, equation (1)). We obtain the trajectory of a particle through a first-order time integration scheme, where we apply $\left(\dot{x},\dot{y},\dot{\theta}\right)$ over a timestep $t_{\text{step}}$, which we determine every iteration (Supplementary Text 3).

## References

* Bauer et al. [2001] J. Bauer, S. Spanton, R. Henry, J. Quick, W. Dziki, W. Porter, and J. Morris, Pharm. Res. 18, 859 (2001). * Shet et al. [2004] A. R. Shet, S. Bates, F. X. Muller, and D. J. Grant, Crys. Growth Des. 4, 1091 (2004). * Piel and Tran [2009] M. Piel and P. T. Tran, Curr. Biol. 19, R823 (2009). * Ginzberg et al. [2015] M. B. Ginzberg, R. Kafri, and M. Kirschner, Science 348, 1245075 (2015). * Thompson et al. [2005] R. Thompson, C. Moore, A. Andrady, M. Gregory, H. Takada, and S. Weisberg, Science 310, 1117b (2005). * Taylor et al. [2016] M. L. Taylor, C. Gwinnett, L. F. Robinson, and L. C. Woodall, Sci. Rep. 6, 33997 (2016). * Ding et al. [2014] X. Ding, Z. Peng, S. C. S. Lin, M. Geri, S. Li, P.
Li, Y. Chen, M. Dao, S. Suresh, and T. J. Huang, Proc. Nat. Acad. Sci. U. S. A. 111, 12992 (2014). * Lenshof and Laurell [2010] A. Lenshof and T. Laurell, Chem. Soc. Rev. 39, 1203 (2010). * Mage et al. [2019] P. L. Mage, A. T. Csordas, T. Brown, D. Klinger, M. Eisenstein, S. Mitragotri, C. Hawker, and H. T. Soh, Nat. Mater. 18, 82 (2019). * Nivedita and Papautsky [2013] N. Nivedita and I. Papautsky, Biomicrofluidics 7, 054101 (2013). * Son et al. [2017] J. Son, R. Samuel, B. K. Gale, D. T. Carrell, and J. M. Hotaling, Biomicrofluidics 11, 054106 (2017). * Kim et al. [2016] J. Kim, J. Lee, C. Wu, S. Nam, D. Di Carlo, and W. Lee, Lab Chip 16, 992 (2016). * Jiang et al. [2019] D. Jiang, D. Huang, G. Zhao, W. Tang, and N. Xiang, Microfluid Nanofluid 23, 7 (2019). * Behdani et al. [2018] B. Behdani, S. Monjezi, M. J. Carey, C. G. Weldon, J. Zhang, C. Wang, and J. Park, Biomicrofluidics 12, 051503 (2018). * Li et al. [2017] M. Li, H. E. Muñoz, K. Goda, and D. Di Carlo, Sci. Rep. 7, 10802 (2017). * Russom et al. [2009] A. Russom, A. K. Gupta, S. Nagrath, D. D. Carlo, J. F. Edd, and M. Toner, New J. Phys. 11, 075025 (2009). * Mach et al. [2011] A. J. Mach, J. H. Kim, A. Arshi, S. C. Hur, and D. Di Carlo, Lab Chip 11, 2827 (2011). * Huang et al. [2004] L. R. Huang, E. C. Cox, R. H. Austin, and J. C. Sturm, Science 304, 987 (2004). * Raoufi et al. [2019] M. A. Raoufi, A. Mashhadian, H. Niazmand, M. Asadnia, A. Razmjou, and M. E. Warkiani, Biomicrofluidics 13, 034103 (2019). * Hur et al. [2011] S. C. Hur, S. E. Choi, S. Kwon, and D. D. Carlo, Appl. Phys. Lett. 99, 044101 (2011). * Masaeli et al. [2012] M. Masaeli, E. Sollier, H. Amini, W. Mao, K. Camacho, N. Doshi, S. Mitragotri, A. Alexeev, and D. Di Carlo, Phys. Rev. X 2, 31017 (2012). * Uspal et al. [2013] W. E. Uspal, H. B. Eral, and P. S. Doyle, Nat. Commun. 4, 2666 (2013). * Beatus et al. [2017] T. Beatus, I. Shani, R. H. Bar-Ziv, and T. Tlusty, Chem. Soc. Rev. 46, 5620 (2017). * Batchelor [2000] G. K. Batchelor (Cambridge University Press, 2000), p. 174–263. * Teh et al. [2008] S. Y. Teh, R. Lin, L. H. Hung, and A. P. Lee, Lab Chip 8, 198 (2008). * Zhu and Wang [2017] P. Zhu and L. Wang, Lab Chip 17, 34 (2017). * Shang et al. [2017] L. Shang, Y. Cheng, and Y. Zhao, Chem. Rev. 117, 7964 (2017). * Dendukuri et al. [2006] D. Dendukuri, D. C. Pregibon, J. Collins, T. A. Hatton, and P. S. Doyle, Nat. Mater. 5, 365 (2006). * Dendukuri et al. [2007] D. Dendukuri, S. S. Gu, D. C. Pregibon, T. A. Hatton, and P. S. Doyle, Lab Chip 7, 818 (2007). * Dendukuri and Doyle [2009] D. Dendukuri and P. S. Doyle, Adv. Mater. 21, 4071 (2009). * Chung et al. [2007] S. E. Chung, W. Park, H. Park, K. Yu, N. Park, and S. Kwon, Appl. Phys. Lett. 91, 17 (2007). * Ge et al. [2019] Z. Ge, O. Tammisola, and L. Brandt, Soft Matter 15, 3451 (2019). * Uspal and Doyle [2014] W. E. Uspal and P. S. Doyle, Soft matter 10, 5177 (2014). * Beatus et al. [2006] T. Beatus, T. Tlusty, and R. Bar-Ziv, Nat. Phys. 2, 743 (2006). * Beatus et al. [2012] T. Beatus, R. H. Bar-Ziv, and T. Tlusty, Phys. Rep. 516, 103 (2012). * Shen et al. [2014] B. Shen, M. Leman, M. Reyssat, and P. Tabeling, Exp. Fluids 55, 1728 (2014). * Berthet et al. [2013] H. Berthet, M. Fermigier, and A. Lindner, Phys. Fluids 25 (2013). * Nagel et al. [2018] M. Nagel, P. T. Brun, H. Berthet, A. Lindner, F. Gallaire, and C. Duprat, J. Fluid Mech. 835, 444 (2018). * Uspal and Doyle [2012] W. E. Uspal and P. S. Doyle, Phys. Rev. E 85, 016325 (2012). * Schneider et al. [2011] T. M. Schneider, S. Mandre, and M. P. 
Brenner, Phys. Rev. Lett. 106, 094503 (2011). * Green [2018] Y. Green, J. Fluid Mech. 853, 253 (2018). * Cui et al. [2002] B. Cui, H. Diamant, and B. Lin, Phys. Rev. Lett. 89, 188302 (2002). * Schiller et al. [2015] U. D. Schiller, J. B. Fleury, R. Seemann, and G. Gompper, Soft Matter 11, 5850 (2015). * Shani et al. [2014] I. Shani, T. Beatus, R. H. Bar-Ziv, and T. Tlusty, Nat. Phys. 10, 140 (2014). * du Roure et al. [2019] O. du Roure, A. Lindner, E. N. Nazockdast, and M. J. Shelley, Annu. Rev. Fluid Mech. 51, 539 (2019). * Fiorucci et al. [2019] G. Fiorucci, J. T. Padding, and M. Dijkstra, Soft Matter 15, 321 (2019). * Chakrabarty et al. [2013] A. Chakrabarty, A. Konya, F. Wang, J. V. Selinger, K. Sun, and Q. H. Wei, Phys. Rev. Lett. 111, 160603 (2013). * Bechert et al. [2019] M. Bechert, J. Cappello, M. Daïeff, F. Gallaire, A. Lindner, and C. Duprat, EPL 126, 44001 (2019). * Gruziel et al. [2018] M. Gruziel, K. Thyagarajan, G. Dietler, A. Stasiak, M. L. Ekiel-Jezewska, and P. Szymczak, Phys. Rev. Lett. 121, 127801 (2018). * Cappello et al. [2019] J. Cappello, M. Bechert, C. Duprat, O. Du Roure, F. Gallaire, and A. Lindner, Phys. Rev. Fluids 4, 034202 (2019). * Słowicka et al. [2013] A. M. Słowicka, E. Wajnryb, and M. L. Ekiel-Jeżewska, Eur. Phys. J. E 36, 31 (2013). * Bet et al. [2018a] B. Bet, R. Georgiev, W. Uspal, H. B. Eral, R. van Roij, and S. Samin, Microfluid Nanofluid 22, 77 (2018a). * Bretherton [1962] F. P. Bretherton, J. Fluid Mech. 14, 284 (1962). * Bruus [2011] H. Bruus, Lab Chip 11, 3742 (2011). * Russel et al. [1977] W. B. Russel, E. J. Hinch, L. G. Leal, and G. Tieffenbruck, J. Fluid Mech. 83, 273 (1977). * Brenner [1963] H. Brenner, Chem. Eng. Sci. 18, 1 (1963). * Brenner [1964] H. Brenner, Chem. Eng. Sci. 19, 599 (1964). * Bet et al. [2018b] B. Bet, S. Samin, R. Georgiev, H. B. Eral, and R. van Roij, J. Phys. Condens. Matter 30, 224002 (2018b). * Dendukuri et al. [2008] D. Dendukuri, P. Panda, R. Haghgooie, J. M. Kim, T. A. Hatton, and P. S. Doyle, Macromolecules 41, 8547 (2008). * Linkert et al. [2010] M. Linkert, C. T. Rueden, C. Allan, J.-m. Burel, W. Moore, A. Patterson, B. Loranger, J. Moore, C. Neves, D. Macdonald, et al., J. Cell Biol. 189, 777 (2010). * Amestoy et al. [2001] P. R. Amestoy, I. S. Duff, J. Koster, and J.-Y. L’Excellent, SIAM J. Matrix Anal. Appl. 23, 15 (2001). * Amestoy et al. [2006] P. R. Amestoy, A. Guermouche, J.-Y. L’Excellent, and S. Pralet, Parallel Computing 32, 136 (2006). * Holzbecher and Si [2008] E. Holzbecher and H. Si, in _Proceedings of the COMSOL Conference_ (Hanover, 2008), 1, p. 7, URL https://www.comsol.nl/paper/accuracy-tests-for-comsol-and-delaunay-meshes-5436.
2024-09-04T02:54:57.607836
2020-03-05T18:16:49
2003.02807
{ "authors": "Shan Jaffry", "full_text_license": null, "license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/", "provenance": "arxiv-papers-0000.json.gz:26064", "submitter": "Shan Jaffry", "url": "https://arxiv.org/abs/2003.02807" }
arxiv-papers
# Cellular Traffic Prediction with Recurrent Neural Network

Shan Jaffry
DGUT-CNAM Institute, Dongguan University of Technology, Dongguan, China
<EMAIL_ADDRESS>

###### Abstract

Autonomous prediction of traffic demand will be a key function in future cellular networks. In the past, researchers have used statistical methods, such as the autoregressive integrated moving average (ARIMA), to provide traffic predictions. However, ARIMA based predictions fail to give an exact and accurate forecast for dynamic input quantities such as cellular traffic. More recently, researchers have started to explore deep learning techniques, such as recurrent neural networks (RNN) and long short-term memory (LSTM), to autonomously predict future cellular traffic. In this research, we have designed an LSTM based cellular traffic prediction model. We have compared the LSTM based prediction with the baseline ARIMA model and a vanilla feed-forward neural network (FFNN). The results show that LSTM and FFNN accurately predicted the future cellular traffic. However, it was found that LSTM trains the prediction model in a much shorter time than FFNN. Hence, we conclude that LSTM models can be used effectively even with a small amount of training data, which allows timely prediction of future cellular traffic.

###### Index Terms:

Cellular traffic prediction, recurrent neural network, LSTM, feed forward neural network.

## I Introduction

Cellular communication is the most popular and ubiquitous telecommunication technology. Recently, owing to novel use cases such as multimedia video download and 4K/8K streaming, the amount of cellular data traffic has soared exponentially. It is expected that in the near future, i.e. by 2023, monthly mobile data demand will exceed 109 exabytes (exa = $10^{18}$), up from the current modest consumption of 20 exabytes per month [1]. Cellular users, nevertheless, will expect high speed and ubiquitous connectivity from the network operators. Providing unhindered, ubiquitous, and high quality service will be a serious challenge for network operators.

Network operators must update traffic planning tools so they can know in advance the state of future traffic demands. Hence, operators will rely on data-driven self-organizing networks (SON) powered by machine learning (ML) and artificial intelligence (AI). ML- and AI-enabled networks can preemptively take important decisions with limited human intervention. Prediction of cellular and data traffic patterns will be a key job that SONs perform. Cellular traffic prediction will enable network operators to promptly distribute resources as per the requirements of competing users. With an informed view of the network state, operators may also allow resource sharing between devices [2, 3]. This will also enable high spectral efficiency and will prevent outages caused by cell overload. If a network can accurately predict future traffic loads in specific cells, it may take preventive actions to avoid outages. For example, the network may permit device-to-device communication to relieve the base station [4].

Figure 1: LTE-A network architecture.

Recent advances in data analytics and the availability of powerful computing machines have enabled operators to harness the power of big data to analyze and predict network operations. Hence, advanced variations of data-driven ML and AI techniques are playing an ever increasing role in all aspects of modern human lives.
In particular, deep learning, a special class of ML and AI algorithms, can solve enormously complex problems by leveraging the power of very deep neural network layers [5]. Deep learning algorithms can extract valuable feature information from the raw data to predict outcomes. Deep learning has made great strides recently due to the advent of user-friendly libraries and programming environments such as TensorFlow, Keras, PyTorch, Pandas, and scikit-learn. Deep learning algorithms such as recurrent neural networks (RNN) and convolutional neural networks (CNN) are being extensively used in applications such as computer vision [6], health informatics [7], speech recognition [8], and natural language processing [9]. It is anticipated that in the future the majority of operations in sixth generation (6G) cellular networks will be catered for solely by AI and deep learning algorithms [10]. An AI-enabled SON will perform long- and short-term analyses on the data obtained from the end users and/or the network [11]. This self-optimization will reduce the overall capital expenditures (CAPEX) and operational expenditures (OPEX) required for network planning and maintenance. For example, a key issue driving CAPEX and OPEX for service providers is the identification and remedy of anomalies that may arise within a cell. To learn about and prevent a cell from going into an anomalous state, it is necessary for the network to predict future traffic demands.

Figure 2: Grid 01 Internet Activity for first 5 days.

In the past, researchers have proposed forecasting cellular traffic using statistical models such as the autoregressive integrated moving average (ARIMA) and its variants [12]. A known limitation of ARIMA is that it reproduces time series patterns based on an average of past values. However, ARIMA may fail to accurately predict traffic patterns in highly dynamic environments such as a cellular network. Nevertheless, ARIMA can give a decent estimate of future traffic and may serve as a baseline prediction model. Recently, deep learning based techniques for forecasting time series traffic have been gaining popularity. For cellular applications, deep learning techniques learn the past history of network traffic to train models such as a vanilla feed-forward neural network (FFNN), a recurrent neural network (RNN), or long short-term memory (LSTM). In [13], researchers have proposed to use RNN with multi-task learning to design a spatio-temporal prediction model for cellular networks. Researchers in [14] have applied neural network models to cellular traffic to analyze trade activities in urban business districts. A comparative study between LSTM and ARIMA models was conducted by researchers in [15].

Inspired by the works presented earlier, in this paper we use real-world call data records to forecast future cellular traffic using LSTM. In particular, we compare our results with the ARIMA model and a vanilla feed-forward neural network (FFNN) model. We demonstrate that LSTM models learn the traffic patterns very quickly as compared to FFNN and ARIMA models. The rest of the paper is organized as follows. The system model is presented in Section II. The cellular traffic prediction model is presented in Section III. We discuss the results in Section IV, followed by the conclusion in Section V.

## II System Model

Figure 1 shows our system model, which comprises a Long Term Evolution-Advanced (LTE-A) network. The architecture of LTE-A is broadly categorized into three layers.
These are the core network (CN), the access network, and the end user equipment (UE) [16]. Wireless communication takes place between a UE and an evolved NodeB (eNB) over the access network, which is called the evolved UMTS terrestrial radio access network (E-UTRAN) in LTE-A nomenclature. The core network, formally known as the evolved packet core (EPC), makes the essential network-level decisions. The EPC further contains several logical entities, such as the serving gateway (SGW), the packet data network gateway (PGW), and the mobility management entity (MME). A detailed explanation of these logical entities and the LTE-A architecture is out of the scope of the current paper. Readers can refer to relevant materials, for example [16]. The call data record (CDR) that we use in this research was gathered at the EPC layer. The execution of the LSTM predictive model will also take place at this layer.

### II-A Data Record Details

The call data record used in this research was published by Telecom Italia for the Big Data Challenge competition [17]. Telecom Italia collected cellular and internet activities of its subscribers within the city of Milan in Italy. In the CDR, Milan city is divided into $100\times 100$ square grids. Each grid has a length of 0.235 km and an area of 0.055 km². The data record was collected over 62 days, starting from 1st November 2013 until 1st January 2014. Data for each day is stored in a single file, which means that there are 62 files in the dataset. Readers can refer to [18] for a detailed explanation of the CDR. The spatio-temporal CDR contains the following fields.

* • Grid ID.
* • Time Stamp: Raw timestamps were recorded in millisecond units at 10 minute intervals.
* • Country code.
* • Inbound SMS Activity: Indicates the incoming SMS activity in a particular grid observed within a 10 minute interval.
* • Outbound SMS Activity: Indicates the outgoing SMS activity in a particular grid observed within a 10 minute interval.
* • Inbound Call Activity: Indicates the incoming call activity in a particular grid observed within a 10 minute interval.
* • Outbound Call Activity: Indicates the outgoing call activity in a particular grid observed within a 10 minute interval.
* • Internet Activity: Indicates the internet usage by cellular users in a particular grid observed within a 10 minute interval.

Figure 3: Single LSTM Cell.

The CDR does not specify activity in terms of particular units. However, an intuitive interpretation is that the activities are proportional to the amount of real traffic. For example, the magnitudes of the inbound or outbound SMS activities are higher when a greater number of SMS messages are received or sent, respectively. The data was provided in raw format. Hence, we discuss the data cleansing method in the next step.

### II-B Data cleansing

The CDR, in its raw format, could not be used to extract any meaningful information. Hence, we applied data cleansing and filtering to the CDR. The timestamps were changed from milliseconds to minutes. There were some missing fields, which we marked as zeros (0). There were multiple entry records for each timestamp. We summed them to make a single activity record per timestamp. Figure 2 shows the Internet Activity for Grid 01. In our prediction model, we have only used the internet traffic activity. However, our model can be used to predict activities for SMS and calls without any modification. We discuss traffic prediction in the next section.
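A minimal pandas sketch of the cleansing steps just described: millisecond-to-minute conversion, zero-filling of missing fields, and merging of duplicate records per timestamp. The column names and the file name follow the field list above but are assumptions, since the exact raw file layout is not reproduced here.

```python
import pandas as pd

# Assumed column layout for one raw daily CDR file (tab-separated);
# names are illustrative, following the fields listed above.
cols = ["grid_id", "timestamp_ms", "country_code",
        "sms_in", "sms_out", "call_in", "call_out", "internet"]
df = pd.read_csv("sms-call-internet-mi-2013-11-01.txt", sep="\t",
                 names=cols, header=None)

df = df.fillna(0)                          # missing fields -> 0
df["t_min"] = df["timestamp_ms"] // 60000  # milliseconds -> minutes

# Collapse the multiple records per (grid, timestamp) into one activity row:
activity = (df[df["grid_id"] == 1]
            .groupby("t_min")[["sms_in", "sms_out",
                               "call_in", "call_out", "internet"]]
            .sum()
            .reset_index())
print(activity.head())
```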
## III Cellular Traffic Prediction

In this section, we first briefly describe the basics of feed-forward and recurrent neural networks (NN), followed by the LSTM based learning model.

### III-A Feed Forward and Recurrent NN

In artificial neural networks, the nodes are connected to form a directed graph, which is ideal for handling temporal sequence predictions. In a vanilla feed-forward network (FFNN), information flows only in the forward direction. In an FFNN, the input layer feeds the forward-looking hidden layer for calculations and manipulations. The hidden layers forward the information to the output layer, which produces regression or classification predictions.

Figure 4: LSTM Network.

A NN maps inputs to outputs by learning from the examples provided during the training phase and can be used for prediction in classification and regression problems. During the training process, the predictions are compared against the expected output values (often known as ground truth data) to calculate the loss function. At the beginning of the training, the loss function is usually quite high, indicating incorrect predictions by the model. With backpropagation and the gradient descent method, the model adjusts the weights and biases corresponding to the input values to minimize the loss function. A fully trained NN has a minimal loss (also called error) between the predicted and the expected output values [19]. After successful training, the model is validated and a validation error is calculated. A model is fully trained for prediction when the training and validation errors are both minimized.

In a recurrent neural network (RNN), though the learning process is the same as in an FFNN, the architecture is slightly different. An RNN takes the output of one layer and feeds it as the input to the next layer. Hence, each layer has information from the past input values. An RNN considers the current input as well as the input received in the previous time steps during training and prediction. This enables an RNN to use the knowledge from all previous time instances to make a well-informed prediction for time series data. However, vanilla RNNs have an inherent vanishing and exploding gradient problem, which halts the learning process as the gradient either diminishes completely or explodes to a very large value. Hence, long short-term memory (LSTM), which is a variant of the RNN, was proposed in [20]. LSTMs were designed to avoid the long-term dependency issue, which is the cause of the vanishing-gradient problem in vanilla RNNs [19].

### III-B Learning Through LSTMs

The structure of LSTM units (often known as cells) enables a neural network to learn long-term dependencies. The learning process is strictly controlled by multiple gates that allow (or bar) the flow of incoming data from the previous cell and/or input, as shown in Figure 3. The standard LSTM unit is shown in Figure 3. There are three main gates in any LSTM unit: the forget gate ($\Gamma_{f}$), the update or input gate ($\Gamma_{i}$), and the output gate ($\Gamma_{o}$). The cell state for the current unit $C^{t}$ is updated by the information passed through the update gate ($\Gamma_{i}$). The candidate value for the current cell’s state (i.e. $C_{u}^{t}$) is computed based on the information from the previous hidden state (i.e. $a^{t-1}$) and the input $X^{t}$. The update gate decides whether to allow or bar the flow of this candidate value to the output state. Finally, the output gate $\Gamma_{o}$ allows the information to pass from the current cell.
The forget gate lets the current cell keep or forget the state value from the previous time step. The prediction $\hat{y}$ is made after passing through an activation function (often sigmoid or softmax).

Figure 5: Traffic prediction with large-sized training set.

The LSTM cells are chained to form one layer of the LSTM network, as shown in Figure 4. Each cell computes the operations for one time step and transfers the output to the next cell. The number of cells in an LSTM network indicates the number of observations considered before making any prediction. For our case, the input $X^{t}$ is the internet activity and the number of observations is the number of selected time steps T. The expressions for all the gates, the cell state, the output of the hidden layer, and the final prediction are given below:

$\Gamma_{f}^{t}=\sigma(W_{f}[a^{t-1},X^{t}]+b_{f})$ (1)

$\Gamma_{i}^{t}=\sigma(W_{i}[a^{t-1},X^{t}]+b_{i})$ (2)

$C_{u}^{t}=\varphi(W_{c}[a^{t-1},X^{t}]+b_{c})$ (3)

$\Gamma_{o}^{t}=\sigma(W_{o}[a^{t-1},X^{t}]+b_{o})$ (4)

$C^{t}=\Gamma_{f}^{t}\ast C^{t-1}+\Gamma_{i}^{t}\ast C_{u}^{t}$ (5)

$a^{t}=\Gamma_{o}^{t}\ast\varphi(C^{t})$ (6)

The final output $y^{t}$ is then calculated as:

$y^{t}=\sigma(W_{y}a^{t}+b_{y})$ (7)

In the equations above, the symbol $\sigma$ represents the sigmoid function, which is often known as the squashing function because it limits the output between 0 (gate OFF) and 1 (gate fully ON). Formally, the sigmoid function is defined as $\sigma(x)=\frac{1}{1+e^{-x}}$. The symbol $\varphi$ is another squashing function, and often $\tanh$ or rectified linear unit (relu) operations are used for $\varphi$. Readers can refer to the relevant literature for further information about these functions [19]. The symbol $\ast$ represents element-wise multiplication. Finally, $W_{(.)}$ and $b_{(.)}$ are the vectors of weights and biases corresponding to the respective gates, hidden layer, input, and output layer. The exact values of these weights and biases are learned during training with the libraries described in the next subsection.

### III-C Training and Prediction Software

We have used MATLAB for data cleansing and filtering. All the algorithms are implemented in Python using the Keras and scikit-learn libraries with TensorFlow at the backend.

Figure 6: Error for traffic prediction with medium-sized training set.

## IV Results and Discussion

In this section, we show the performance comparison of the LSTM model with the baseline ARIMA and vanilla feed-forward neural network models. We have compared the performance of each technique against the ground truth test data from the CDR. We have fixed the training epochs to 20 for each cycle. For the LSTM model, we have used two hidden layers to make an even comparison with the FFNN and ARIMA. The first hidden layer contains 50 LSTM cells, followed by a dense layer with a single unit. The FFNN contains two hidden layers, with the first layer containing 5 activation units activated by the relu operation. The second hidden layer contains one non-linear activation unit. Training and validation losses are calculated using the mean absolute error.

Figure 7: Traffic prediction with medium-sized training set.

Figure 5 shows the traffic prediction by the LSTM, FFNN, and ARIMA models. We used 7142 samples for training the LSTM and FFNN models. For validation and testing, we used 893 samples for each case. The LSTM and FFNN both learned the pattern in less than 5 epochs due to the large number of training examples.
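Before examining the predictions in detail, the following minimal Keras sketch reproduces the two architectures just described (50 LSTM cells followed by a single dense unit, and a 5-unit relu FFNN), trained with the mean absolute error loss for 20 epochs. The window length T, the Adam optimizer, and the placeholder training data are our assumptions; they are not specified in the text.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

T = 10  # number of past time steps fed to the network (assumed)

# LSTM model as described above: 50 LSTM cells followed by one dense unit,
# trained with the mean absolute error loss.
lstm_model = Sequential([
    LSTM(50, input_shape=(T, 1)),
    Dense(1),
])
lstm_model.compile(optimizer="adam", loss="mae")

# FFNN baseline: 5 ReLU units followed by a single output unit; it consumes
# the same windows, flattened to shape (T,).
ffnn_model = Sequential([
    Dense(5, activation="relu", input_shape=(T,)),
    Dense(1),
])
ffnn_model.compile(optimizer="adam", loss="mae")

# X_train: (n_samples, T, 1) windows of past activity; y_train: next value.
# Random placeholders stand in for the scaled CDR windows.
X_train = np.random.rand(7142, T, 1)
y_train = np.random.rand(7142, 1)
lstm_model.fit(X_train, y_train, epochs=20, validation_split=0.1, verbose=0)
```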
It can be observed that the LSTM and FFNN predictions match the ground truth data. The ARIMA model predicts values very close to the ground truth but does not exactly match the traffic pattern.

Figure 8: Traffic prediction with small-sized training set.

We later reduced the training samples to 3571. The training and validation errors for this case are shown in Figure 6, and the prediction results are presented in Figure 7. We can observe that the LSTM and FFNN still predict very accurately. The baseline ARIMA model, however, does not exactly match the ground truth traffic. It should be noted from Figure 6 that when we reduced the number of training samples, the training and validation errors for the LSTM converge to near zero after only 2 epochs. However, the FFNN took at least 10 epochs to fully train the model to enable accurate predictions. Nevertheless, both models’ errors converged to zero before the 20 epoch limit. When we further reduced the training samples to 892, we observed that after training the models for 20 epochs, the FFNN could not predict according to the actual ground truth data. In fact, its performance became even worse than that of the ARIMA model. The LSTM, on the other hand, predicted the traffic activity very accurately. This is because the LSTM trained the network within 20 epochs and the training and validation errors converged to zero, as shown in Figure 9. On the other hand, the error for the FFNN remained high even after 20 epochs. Interestingly, the FFNN could still estimate the patterns of future traffic, however with very low accuracy.

## V Conclusion

In this paper, we presented cellular data traffic prediction using a recurrent neural network, in particular the long short-term memory model. We demonstrated that LSTM and vanilla feed-forward neural networks predict more accurately than the statistical ARIMA model. However, the LSTM models were shown to learn more quickly than the FFNN, even with a small amount of training data. As future work, we are designing an LSTM based resource allocation method for 6G networks.

Figure 9: Error for traffic prediction with small-sized training set.

## References

* [1] P. Cerwall, A. Lundvall, P. Jonsson, R. Möller, S. Bävertoft, S. Carson, and I. Godor, “Ericsson mobility report 2018,” 2018. * [2] S. Jaffry, S. F. Hasan, and X. Gui, “Effective resource sharing in mobile-cell environments,” _arXiv preprint arXiv:1808.01700_ , 2018. * [3] ——, “Shared spectrum for mobile-cell’s backhaul and access link,” in _2018 IEEE Global Communications Conference (GLOBECOM)_. IEEE, 2018, pp. 1–6. * [4] S. Jaffry, S. F. Hasan, X. Gui, and Y. W. Kuo, “Distributed device discovery in prose environments,” in _TENCON 2017-2017 IEEE Region 10 Conference_. IEEE, 2017, pp. 614–618. * [5] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _nature_ , vol. 521, no. 7553, pp. 436–444, 2015. * [6] A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, “Deep learning for computer vision: A brief review,” _Computational intelligence and neuroscience_ , vol. 2018, 2018. * [7] D. Ravì, C. Wong, F. Deligianni, M. Berthelot, J. Andreu-Perez, B. Lo, and G.-Z. Yang, “Deep learning for health informatics,” _IEEE journal of biomedical and health informatics_ , vol. 21, no. 1, pp. 4–21, 2016. * [8] L. Deng, G. Hinton, and B. Kingsbury, “New types of deep neural network learning for speech recognition and related applications: An overview,” in _2013 IEEE International Conference on Acoustics, Speech and Signal Processing_.
IEEE, 2013, pp. 8599–8603. * [9] T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing,” _IEEE Computational Intelligence Magazine_ , vol. 13, no. 3, pp. 55–75, 2018. * [10] A. Zappone, M. Di Renzo, and M. Debbah, “Wireless networks design in the era of deep learning: Model-based, AI-based, or both?” _arXiv preprint arXiv:1902.02647_ , 2019. * [11] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Artificial neural networks-based machine learning for wireless networks: A tutorial,” _IEEE Communications Surveys & Tutorials_, vol. 21, no. 4, pp. 3039–3071, 2019. * [12] Y. Shu, M. Yu, O. Yang, J. Liu, and H. Feng, “Wireless traffic modeling and prediction using seasonal ARIMA models,” _IEICE Transactions on Communications_ , vol. 88, no. 10, pp. 3992–3999, 2005. * [13] C. Qiu, Y. Zhang, Z. Feng, P. Zhang, and S. Cui, “Spatio-temporal wireless traffic prediction with recurrent neural network,” _IEEE Wireless Communications Letters_ , vol. 7, no. 4, pp. 554–557, 2018. * [14] Y. Zhao, Z. Zhou, X. Wang, T. Liu, Y. Liu, and Z. Yang, “Celltrademap: Delineating trade areas for urban commercial districts with cellular networks,” in _IEEE INFOCOM 2019-IEEE Conference on Computer Communications_. IEEE, 2019, pp. 937–945. * [15] A. Azari, P. Papapetrou, S. Denic, and G. Peters, “Cellular traffic prediction and classification: a comparative evaluation of LSTM and ARIMA,” in _International Conference on Discovery Science_. Springer, 2019, pp. 129–144. * [16] A. ElNashar, M. A. El-Saidny, and M. Sherif, _Design, deployment and performance of 4G-LTE networks: A practical approach_. John Wiley & Sons, 2014. * [17] “Telecom Italia, Open Big Data, Milano Grid,” Online, 2014. [Online]. Available: https://dandelion.eu/ * [18] M. S. Parwez, D. B. Rawat, and M. Garuba, “Big data analytics for user-activity analysis and user-anomaly detection in mobile wireless network,” _IEEE Transactions on Industrial Informatics_ , vol. 13, no. 4, pp. 2058–2065, 2017. * [19] I. Goodfellow, Y. Bengio, and A. Courville, _Deep learning_. MIT press, 2016. * [20] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” _Neural Computation_ , vol. 9, no. 8, pp. 1735–1780, 1997.
FSQ-16-006 # Study of central exclusive $\PGpp\PGpm$ production in proton-proton collisions at $\sqrt{s}=5.02$ and 13 TeV ###### Abstract Central exclusive and semiexclusive production of $\PGpp\PGpm$ pairs is measured with the CMS detector in proton-proton collisions at the LHC at center-of-mass energies of 5.02 and 13 TeV. The theoretical description of these nonperturbative processes, which have not yet been measured in detail at the LHC, poses a significant challenge to models. The two pions are measured and identified in the CMS silicon tracker based on specific energy loss, whereas the absence of other particles is ensured by calorimeter information. The total and differential cross sections of exclusive and semiexclusive central $\PGpp\PGpm$ production are measured as functions of the invariant mass, transverse momentum, and rapidity of the $\PGpp\PGpm$ system in the fiducial region defined by transverse momentum $\pt(\PGp)>0.2\GeV$ and pseudorapidity $\abs{\eta(\PGp)}<2.4$. The production cross sections for the four resonant channels $\mathrm{f}_{0}(500)$, $\PGrzP{770}$, $\mathrm{f}_{0}(980)$, and $\mathrm{f}_{2}(1270)$ are extracted using a simple model. These results represent the first measurement of this process at the LHC collision energies of 5.02 and 13 TeV. ## 0.1 Introduction The central exclusive production (CEP) process has been studied for a long time from both theoretical [1, 2, 3, 4, 5, 6, 7] and experimental [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] perspectives. In this process, both protons remain intact in the collision and a central system is produced. The process is referred to as exclusive when no particles other than the central system are produced. If one or both protons dissociate into a forward diffractive system, the process is called semiexclusive production. Various central systems can be produced in this process, such as $\PGpp\PGpm$, $\PKp\PKm$, and $4\PGp$. In this paper, the $\PGpp\PGpm$ central system is measured. At the CERN LHC energies, the two dominant mechanisms of $\PGpp\PGpm$ production via CEP are _double pomeron exchange_ (DPE) and _vector meson photoproduction_ (VMP), which are illustrated by the diagrams shown in Fig. 1. The pomeron ($\mathbb{P}$) is a color singlet object introduced to explain the rise of the inelastic cross section at high collision energies [19, 20]. The quantum numbers of the pomeron constrain the possible central systems in DPE processes, whereas the photon exchange restricts the central system in VMP processes. By functioning as a quantum number filter, the CEP process is well suited to the study of low-mass resonances, which would be difficult to study otherwise. Furthermore, DPE processes are also suitable for searches for glueballs (bound states of gluons without valence quarks), because they provide a gluon-rich environment [21, 22]. Another process that could contribute to the same final state is two-photon fusion, $\PGg\PGg\to\PGpp\PGpm$, which is expected to have a much smaller cross section than the DPE and VMP processes and gives a negligible contribution [23]. Figure 1: Diagrams of the dominant mechanisms for $\PGpp\PGpm$ production via CEP in proton-proton collisions: (a) continuum; (b) resonant double pomeron exchange; and (c) vector meson photoproduction. The DPE process of pion pair production has two subcategories: continuum and resonant production. In the case of continuum production, the pion pair is produced directly; thus the pairs have a nonresonant invariant mass spectrum. 
Resonant production means that an intermediate meson resonance is produced centrally, which manifests itself as a peak in the invariant mass distribution of the pion pair. Since the pomeron is a Regge trajectory running over states with quantum numbers $J^{PC}=\{0^{++},1^{++},2^{++},\dots\}$ and $I^{G}=0^{+}$, the resonance is restricted to have $J^{PC}=\{0^{++},2^{++},4^{++},\dots\}$ and $I^{G}=0^{+}$, where $J$ is the total angular momentum, $I$ is the isospin, $P$ is the parity, $C$ is the charge parity, and $G=C\,(-1)^{I}$. The known particles [24] satisfying these criteria are the $\mathrm{f}_{0}$, $\mathrm{f}_{2}$, $\chi_{c0}$, $\chi_{c2}$, $\chi_{b0}$, and $\chi_{b2}$ resonances. The cross section for DPE ($\sigma_{\PGpp\PGpm}^{\text{DPE}}$) can be calculated from the amplitudes of continuum ($A_{\PGpp\PGpm}^{\text{DPE,C}}$) and resonant ($A_{\PGpp\PGpm}^{\text{DPE,R}}$) production as $\sigma_{\PGpp\PGpm}^{\text{DPE}}\propto\abs{A_{\PGpp\PGpm}^{\text{DPE,C}}+A_{\PGpp\PGpm}^{\text{DPE,R}}}^{2}.$ (1) Interference terms between the continuum and resonant production channels must be included to describe the observed spectra and to measure the cross sections of the resonances. In VMP, one of the protons emits a virtual photon, which fluctuates into a quark-antiquark bound state and scatters from the proton via pomeron exchange. The quantum numbers of the possible resonances are constrained by the quantum numbers of the pomeron and the photon ($J^{PC}=1^{--}$), leading to mesons with odd spin and quantum numbers $J^{PC}=\{1^{--},3^{--},\dots\}$. Resonances satisfying these conditions are the $\rho^{0}$, $\PGo$, $\PGf$, $\mathrm{J}/\PGy$, $\PGyP{2S}$, and $\PGU$, but only the $\rho^{0}\to\PGpp\PGpm$ decay has a significant branching fraction, since decays in this channel are strongly suppressed in the case of the $\PGf$, $\mathrm{J}/\PGy$, $\PGyP{2S}$, and $\PGU$ according to the Okubo–Zweig–Iizuka rule [25, 26, 27], and in the case of the $\PGo$ because of G-parity conservation [28]. This paper presents measurements of exclusive and semiexclusive $\PGpp\PGpm$ total and differential cross sections as functions of the invariant mass $m(\PGpp\PGpm)$, transverse momentum $\pt(\PGpp\PGpm)$, and rapidity $y(\PGpp\PGpm)$ of the pion pair, in a fiducial region defined by single pion transverse momentum $\pt(\PGp)>0.2\GeV$ and single pion pseudorapidity $\abs{\eta(\PGp)}<2.4$. Because the outgoing protons are not tagged in this measurement, there is a residual contribution from semiexclusive production with all dissociation products at $\abs{\eta}>4.9$. In the following, the exclusive and the residual semiexclusive contributions together will be referred to as central exclusive production. The data were recorded by CMS, with beam conditions ensuring a small probability of multiple $\Pp\Pp$ collisions in the same bunch crossing (pileup), in August 2015 at a center-of-mass energy of 13 TeV with an integrated luminosity of 258, and in November 2015 at 5.02 TeV with an integrated luminosity of 522. The average number of $\Pp\Pp$ collisions in a bunch crossing was around 0.3–0.5 for the 5.02 TeV and around 0.5 for the 13 TeV data sets. ## 0.2 The CMS detector The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter. 
Within the solenoid volume are a tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections, covering the $\abs{\eta}<3.0$ region. Forward calorimeters extend the $\eta$ coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. The silicon tracker measures charged particles within the range $\abs{\eta}<2.5$. It consists of 1440 silicon pixel and 15 148 silicon strip detector modules and is located in the 3.8 T solenoid field. Three pixel barrel layers (PXB) are situated at radii of 4.4, 7.3, and 10.2 cm; the pixel detector is complemented by two endcap disks (PXF) on each side. The strip tracker consists of the innermost tracker inner barrel (TIB) and the tracker inner disks (TID), which are surrounded by the tracker outer barrel (TOB). It is completed by endcaps (TEC) on both sides. The barrel part of the strip tracker has a total of 10 layers at radii from 25 to 110 cm, whereas each endcap of the strip tracker consists of 12 layers. For charged particles with $\pt<1\GeV$ and $\abs{\eta}<1.4$, the track resolutions are typically 1–2% in $\pt$, and 90–300 $\mu$m and 100–350 $\mu$m for the transverse and longitudinal impact parameters, respectively [29]. The tracker provides an opportunity to identify charged particles with $0.3<p<2\GeV$ based on their specific ionization in the silicon detector elements [30]. The ECAL consists of 75 848 lead tungstate crystals, which provide coverage in $\abs{\eta}<1.479$ in the barrel region and $1.479<\abs{\eta}<3.0$ in the two endcap regions. The barrel and endcap sections of the HCAL consist of 36 wedges each and cover the $\abs{\eta}<3.0$ region. In the region $\abs{\eta}<1.74$, the HCAL cells have widths of 0.087 in $\eta$ and 0.087 radians in azimuth ($\phi$). In the $\eta$-$\phi$ plane, and for $\abs{\eta}<1.48$, the HCAL cells map onto $5{\times}5$ ECAL crystal arrays to form calorimeter towers projecting radially outwards from close to the nominal interaction point. At larger values of $\abs{\eta}$, the towers are larger and the matching ECAL arrays contain fewer crystals. The forward hadron (HF) calorimeter uses steel as an absorber and quartz fibers as the sensitive material. The two halves of the HF are located at 11.2 m from the interaction region, one at each end. Together they provide coverage in the range $3.0<\abs{\eta}<5.2$. Each HF calorimeter consists of 432 readout towers, containing long and short quartz fibers running parallel to the beam. The long fibers run the entire depth of the HF calorimeter (165 cm, or approximately 10 interaction lengths), whereas the short fibers start at a depth of 22 cm from the front of the detector. By reading out the two sets of fibers separately, it is possible to distinguish showers generated by electrons or photons, which deposit a large fraction of their energy in the long-fiber calorimeter segment, from those generated by hadrons, which typically produce, on average, nearly equal signals in both calorimeter segments. The triggers used in this analysis are based on signals from the Beam Pick-up and Timing for eXperiments (BPTX) detectors [31]. The BPTX devices have a time resolution of less than 0.2 ns. They are located around the beam pipe at a distance of $\pm 175$ m from the nominal interaction point, and are designed to provide precise information on the bunch structure and timing of the proton beams. 
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [32]. ## 0.3 Monte Carlo simulations Two kinds of Monte Carlo (MC) event generators are used in this analysis: inclusive and exclusive generators. The inclusive generators model inclusive diffractive dissociation [33] and nondiffractive interactions, and are used to estimate the tracking efficiency, the multiple reconstruction rate, and the misreconstruction rate. The exclusive generators are used to generate CEP events and to calculate the vertex correction factors. There are no MC event generators available that produce exclusive scalar and tensor resonances via DPE, such as the production of $\mathrm{f}_{0}(500)$, $\mathrm{f}_{0}(980)$, and $\mathrm{f}_{2}(1270)$ mesons. Event samples are generated with various tunes for diffraction and the underlying event: * • pythia 8.205 [34] with the CUETP8M1 tune [35] and the MBR model [36]: pythia 8 is an inclusive generator based on the Schuler and Sjöstrand model. It is capable of modeling a wide variety of physical processes, such as single diffractive (SD), double diffractive (DD), and central diffractive (CD) dissociation, as well as nondiffractive (ND) production [33]. The SD, DD, and ND events are generated with the CUETP8M1 tune. The Minimum Bias Rockefeller (MBR) model of pythia is based on the renormalized pomeron flux model and is capable of generating SD, DD, ND, and CD events. * • epos [37] with its LHC tune [38]: this inclusive generator is based on the Regge–Gribov phenomenology [39], and it models SD, DD, CD, and ND processes. * • starlight [40]: this event generator models photon-photon and photon-pomeron interactions in $\Pp\Pp$ and heavy ion collisions. The production of $\rho^{0}$ mesons and their subsequent decay into two pions through the VMP process is simulated with starlight. For background studies, $\PGo$ mesons are also generated with starlight, with their decay to the $\PGpp\PGpm\PGpz$ final state simulated by pythia. * • dime mc 1.06 [5]: the dime mc software describes continuum $\PGpp\PGpm$ production through DPE. The generator uses a phenomenological model based on Regge theory. Events are generated with the Orear-type off-shell meson form factors with parameters $a_{\text{or}}=0.71\GeV^{-1}$ and $b_{\text{or}}=0.91\GeV^{-1}$ [5]. Furthermore, two additional MC samples are generated with an exponential form factor with $b_{\text{exp}}=0.45$ [5] and $1\GeV^{-2}$ [1] to study the systematic uncertainty in the measured resonance cross sections arising from uncertainties in the dime mc parametrization. All of the generated events are processed by a detailed simulation [41] of the CMS detector. ## 0.4 Event selection The following triggers were employed: * • Zero bias: zero-bias events are selected by using either the BPTX detectors (13 TeV data) or the LHC clock signal and the known LHC bunch structure (5.02 TeV data). Both methods provided zero-bias events. * • BPTX XOR: here XOR stands for the exclusive OR logic, where only one BPTX is fired, corresponding to an incoming proton bunch from only one direction. This trigger was used in both the 5.02 and 13 TeV data sets. * • No-BPTX: there is no signal in the BPTX detectors, which means there are no incoming proton bunches. This trigger was used in both the 5.02 and 13 TeV data sets. The present analysis uses events acquired with the zero-bias trigger. 
The BPTX XOR and No-BPTX triggers select events with no interacting bunches, which are used to estimate the electronic noise of the calorimeters and possible collisions between beam particles and residual gas molecules in the CMS beampipe (beam-gas background). The contribution from beam-gas collisions is negligible, because there is no difference between the measured calorimeter tower energy distributions for the BPTX XOR and No-BPTX triggered events. In the offline selection, it is required that the event has exactly two tracks, both of which satisfy $\chi^{2}/\text{ndf}<2$ (where the $\chi^{2}$ value is calculated from the fitted trajectory and the measured tracker hits, and ndf is the number of degrees of freedom), $\pt>0.2\GeV$, and $\abs{\eta}<2.4$, to ensure a high track reconstruction efficiency. Only events with oppositely charged (opposite-sign, OS) tracks are selected for analysis, whereas events with same-sign (SS) tracks are used in the background estimation. Events with a single collision are selected by requiring that the two tracks form a single reconstructed vertex subject to the constraint $\abs{z_{1}-z_{2}}<3\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}},$ (2) where $z_{1}$ and $z_{2}$ are the $z$ coordinates of the closest approach of the reconstructed tracks to the beamline, and $\sigma_{1}$ and $\sigma_{2}$ are their corresponding uncertainties. To select exclusive events, all calorimeter towers not matched to the trajectories of the two tracks must have energy deposits below a threshold, which is defined in Table 0.4. A tower is matched to a track if the intersection of the extrapolated trajectory with the calorimeter surface is within three standard deviations in $\eta$ and $\phi$ from the center of the tower. The threshold values are chosen such that at most 1% of signal events are rejected because of the electronic noise of the calorimeters. Nonexclusive events might also be selected because of the gap in $\eta$ coverage between the HF and the central calorimeters; these events are also taken into account in the background estimation presented later in this paper. Using all of the above event selection criteria, a total of 48 961 events were selected from the 5.02 TeV and 20 980 from the 13 TeV data set. The calorimeter thresholds for the different calorimeter constituents, used in the selection of exclusive events:

| Calorimeter | Threshold [GeV] | $\eta$ coverage |
| --- | --- | --- |
| ECAL barrel | 0.6 | $\abs{\eta}<1.5$ |
| ECAL endcap | 3.3 | $1.5<\abs{\eta}<3.0$ |
| HCAL barrel | 2.0 | $\abs{\eta}<1.3$ |
| HCAL endcap | 3.8 | $1.3<\abs{\eta}<3.0$ |
| HF | 4.0 | $3.15<\abs{\eta}<5.2$ |

## 0.5 Data analysis ### 0.5.1 Particle identification Particle identification is used to select pion pairs based on the mean energy loss ($\rd E/\rd x$) of the particles in the silicon tracking detectors. The $\rd E/\rd x$ values shown in the left panel of Fig. 2 are calculated with a second-order harmonic mean using only the strip detectors [42]: $\left\langle\frac{\rd E}{\rd x}\right\rangle=\left(\frac{1}{N}\sum_{i=1}^{N}(\Delta E/\Delta x)_{i}^{-2}\right)^{-\frac{1}{2}},$ (3) where $N$ is the number of energy loss measurements, $\Delta E/\Delta x$ is a single energy loss measurement per path length in one tracker module, and the sum runs over the strip detectors carrying energy loss measurements. The $-2$ exponent in this formula suppresses high $\Delta E/\Delta x$ values arising from the highly asymmetric $\Delta E/\Delta x$ Landau distribution, thus avoiding a bias in the estimate of the average $\rd E/\rd x$ of the track. 
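As an illustration of Eq. (3), the short Python sketch below computes this estimator for one track; the `hits` array of per-module $\Delta E/\Delta x$ values and its numerical values are assumed inputs for illustration, not taken from the CMS software.

```python
import numpy as np

def dedx_harmonic2(hits):
    """Second-order harmonic mean of Eq. (3): ((1/N) * sum(x_i**-2))**(-1/2).

    The k = -2 power strongly de-weights large energy-loss values, so the
    high tail of the Landau distribution barely shifts the estimate.
    """
    hits = np.asarray(hits, dtype=float)
    return np.mean(hits ** -2.0) ** -0.5

# One Landau-like outlier (9.5) moves the estimate only to about 3.4,
# whereas the plain arithmetic mean would be pulled up to about 4.3.
print(dedx_harmonic2([3.1, 2.9, 3.0, 3.2, 9.5]))
```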
Figure 2: Left: distribution of the logarithm of the mean energy loss versus the absolute value of the momentum of tracks from low-multiplicity ($N_{\text{track}}\leq 4$) events collected at $\sqrt{s}=13\TeV$. The $\PGp$-selection region is shown in the 0.3–2 GeV range. All tracks outside this momentum range are identified as pions. Right: fit of the energy loss distributions in a given momentum bin with the sum of three Gaussian curves. The plots are similar for the 5.02 TeV data. The track classification is achieved by fitting the mean energy loss distributions of tracks from low-multiplicity ($N_{\text{track}}\leq 4$) events with a sum of three Gaussian functions corresponding to pions, kaons, and protons. An example of such a fit is shown in the right panel of Fig. 2. In the 0.3–2 GeV momentum range, pions are selected from the $\pm 3$ standard deviation region around the corresponding Gaussian peak. This region is shown in the left panel of Fig. 2. Tracks with $p<0.3$ or $p>2\GeV$ are assumed to be pions. The contamination from kaons and protons is estimated using the data-driven approach described in Section 0.5.3. ### 0.5.2 Corrections Each event is weighted by several correction factors to compensate for detector and reconstruction effects. The multiplying factor is the product of four independent corrections: tracking, multiple reconstruction, vertex, and pileup corrections. A tracking correction is used to correct for track reconstruction inefficiencies: $C_{\text{tr}}=\frac{1}{\varepsilon_{\text{tr},1}}\,\frac{1}{\varepsilon_{\text{tr},2}},$ (4) where $\varepsilon_{\text{tr},1}$ ($\varepsilon_{\text{tr},2}$) is the tracking efficiency in the region where the first (second) particle is reconstructed. A single charged particle may lead to two reconstructed tracks, such as spiralling tracks near $\eta\approx 0$ or split tracks in the overlap region of the tracker barrel and endcap. This effect is corrected for using $\varepsilon_{\text{mrec}}$, the probability for this situation to occur. In this case the correction factor takes the form $C_{\text{mrec}}=\frac{1}{1-\varepsilon_{\text{mrec},1}}\,\frac{1}{1-\varepsilon_{\text{mrec},2}}.$ (5) The values of $\varepsilon_{\text{tr}}$ and $\varepsilon_{\text{mrec}}$ are estimated as functions of $\eta$ and $\pt$ using MC simulations. Their dependence on the track $\phi$ and the vertex position $z$-coordinate is integrated over. The simulated events are weighted such that the vertex $z$-coordinate distribution agrees with that of the collision data. The vertex correction $C_{\text{vert}}$ accounts for events with an unreconstructed vertex. It is the reciprocal of the vertex efficiency, which is calculated using samples produced by the dime mc and starlight generators. The vertex efficiency has a slight dependence on the invariant mass of the track pair, which is included when applying the vertex correction. Some real CEP events are rejected because of pileup. To account for these lost events, a correction factor $C_{\text{pu}}$ for the number of selected events is computed. The CEP events are selected from bunch crossings with a single collision, so by assuming that the number of collisions follows a Poisson distribution, one can derive $C_{\text{pu}}$: $C_{\text{pu}}=\frac{N\mu}{N\,\mu\exp{(-\mu)}}=\exp{(\mu)}.$ (6) Here, $\mu$ is the average number of visible inelastic collisions in a given bunch crossing, and $N$ is the total number of analyzed events. 
The value of $\mu$ depends on the instantaneous luminosity associated with individual bunch crossings, $\mathcal{L}_{\text{bunch}}$, according to the following expression: $\mu=\frac{\sigma_{\text{inel,vis}}\mathcal{L}_{\text{bunch}}}{f},$ (7) where $\sigma_{\text{inel,vis}}$ is the visible inelastic $\Pp\Pp$ cross section, $f$ is the revolution frequency of the protons, and $\mathcal{L}_{\text{bunch}}$ is the average instantaneous luminosity at the given bunch crossing position for time periods of 23.3 s. The ratio of $\sigma_{\text{inel,vis}}$ to $f$ is obtained by fitting the fraction of events with no observed collision as a function of $\mathcal{L}_{\text{bunch}}$ with the functional form $A\exp(-b\,\mathcal{L}_{\text{bunch}})$, where $A$ and $b$ are free parameters of the fit.
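A minimal sketch of this pileup-correction procedure is given below, under assumed inputs: the arrays `lumi` and `empty_frac` stand for the per-bunch luminosity values and the measured fractions of zero-collision events, and all numerical values are illustrative placeholders. The fitted slope $b$ plays the role of $\sigma_{\text{inel,vis}}/f$ in Eq. (7), so the per-event weight is $C_{\text{pu}}=\exp(b\,\mathcal{L}_{\text{bunch}})$.

```python
import numpy as np
from scipy.optimize import curve_fit

def empty_fraction(L, A, b):
    """Poisson probability of observing no visible collision at luminosity L."""
    return A * np.exp(-b * L)

# Illustrative stand-ins for the measured zero-collision fractions per bunch.
rng = np.random.default_rng(0)
lumi = np.linspace(0.5, 2.0, 20)  # bunch luminosity, arbitrary units
empty_frac = np.exp(-0.4 * lumi) * (1.0 + 0.01 * rng.standard_normal(lumi.size))

(A_fit, b_fit), _ = curve_fit(empty_fraction, lumi, empty_frac, p0=(1.0, 0.5))

def pileup_weight(L_bunch):
    """Per-event correction C_pu = exp(mu), with mu = b * L_bunch as in Eq. (7)."""
    return np.exp(b_fit * L_bunch)

print(pileup_weight(1.0))  # weight for an event recorded at L_bunch = 1.0
```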
The range of the correction factors is summarized in Table 0.5.2. Correction factors:

| Type | Range |
| --- | --- |
| Tracking | 1.05–1.50 |
| Multiple reconstruction | 1.005–1.040 |
| Vertex | 1.05–1.33 |
| Pileup | 1.3–2.1 |

Figure 3: The number of extra calorimeter towers above threshold in events containing an identified pion pair with opposite (left) and same (right) charge. The known contributions, denoted by the red hatched areas, are used to estimate the background in the zero bin of the opposite-sign distribution, which is denoted by the blue hatched area. The error bars correspond to statistical uncertainties, whereas the error rectangle on the background denotes the 14% systematic uncertainty in the background normalization. The plots are similar for the 5.02 TeV data. ### 0.5.3 Background estimation The main background contributions to $\PGpp\PGpm$ CEP are the multiparticle background and exclusive $\PKp\PKm$/$\Pp\PAp$ production. The multiparticle background in the selected exclusive sample consists of events with more than two particles created in the interaction, of which only two are observed because the additional particles yield energy deposits below the thresholds or outside the acceptance. The SD, DD, ND, and CD processes with more than two centrally produced particles belong to this contribution. A method based on control regions is used to estimate this multiparticle background. Control regions are selected in which events have at least two calorimeter towers above threshold, not matched to the two selected pions, with all other selection criteria satisfied. The distribution of the number of events selected in this way, as a function of the number of extra towers with energy above threshold, is shown in Fig. 3. The counts in the bins with 2, 3, 4, and 5 towers are used to estimate the background. The normalization factor is calculated using the following assumption: $\frac{N_{\text{mpart,SS}}(\text{0 extra towers})}{N_{\text{mpart,SS}}(\text{2--5 extra towers})}=\frac{N_{\text{mpart,OS}}(\text{0 extra towers})}{N_{\text{mpart,OS}}(\text{2--5 extra towers})},$ (8) where $N_{\text{mpart,OS/SS}}$ is the number of multiparticle events with two OS or SS tracks. The validity of this assumption is checked by comparing the true and predicted numbers of background events in inclusive MC samples (Table 0.5.3). The observed discrepancy reflects the differences between OS and SS events and is included as a systematic uncertainty in the estimate of the total number of multiparticle background events, as discussed in Section 0.5.4. With this formula, and the fact that all SS events are multiparticle events because of charge conservation, it is possible to calculate the value of $N_{\text{mpart,OS}}(\text{0 towers})$, which is the number of multiparticle background events. The expected distribution of the multiparticle background is obtained using OS events with 2–5 extra calorimeter towers. This method does not take into account the background contribution from $\PGo\to\PGpp\PGpm\PGpz$, because this decay cannot be observed in the SS events. This latter contribution is negligible (0.5%) according to MC simulation results. Check of the validity of Eq. (8), comparing the true and predicted numbers of background events in inclusive MC samples:

| Event generator | Difference in normalization |
| --- | --- |
| epos | $(+11\pm 4)\%$ |
| pythia 8 CUETP8M1 | $(-5.5\pm 3)\%$ |
| pythia 8 MBR | $(+10\pm 4)\%$ |

Figure 4: Background distributions as functions of the kinematic variables, estimated by data-driven methods. The proton dissociation background is not shown here, since it is included via a scaling of the final cross section values. The error bars correspond to statistical uncertainties. The results for the 5.02 TeV data set are similar. Genuine exclusive $\PKp\PKm$ and $\Pp\PAp$ events, where both particles are misidentified as pions, are included in the above multiparticle background estimate. To correct for this contribution, the $\PK/\PGp$ ratio is calculated in the exclusive events using tracks with $p<1\GeV$. Similarly, the $\Pp/\PGp$ ratio is calculated in the same sample in the range $1<p<2\GeV$. The $\PK/\PGp$ and $\Pp/\PGp$ ratios are assumed to be $0.3^{+0.1}_{-0.05}$ in the regions $p>1\GeV$ and $p>2\GeV$, respectively [43]. Using this assumption and the measured ratios, the average $\PK/\PGp$ and $\Pp/\PGp$ ratios are then calculated over the entire momentum range of the exclusive sample. These average ratios can then be used to compute the number of $\PKp\PKm$ and $\Pp\PAp$ events under two extreme scenarios. The first scenario assumes that the production of a $\PK$ or a $\Pp$ is always accompanied by the production of its antiparticle, whereas the second scenario assumes that the production of an individual $\PKp$, $\PKm$, $\Pp$, or $\PAp$ is a totally independent process. The final estimate of the exclusive $\PKp\PKm$ and $\Pp\PAp$ background normalization is calculated as the average of the estimates obtained under these two scenarios. According to these calculations, there is an 11% residual contribution of exclusive $\PKp\PKm$ and $\Pp\PAp$ events in the sample after the multiparticle background subtraction. The distributions of this background contribution are calculated using two-track OS exclusive events with at least one identified $\PKpm$ (Fig. 4). The estimated multiparticle and exclusive $\PKp\PKm$/$\Pp\PAp$ background distributions, as functions of the main kinematic variables, are shown in Fig. 4. These two background contributions are subtracted from the measured distributions. The background-subtracted spectra are divided by the integrated luminosity to obtain the differential cross sections. ### 0.5.4 Systematic uncertainties Systematic uncertainties in the measured cross sections originate from various sources: reconstruction effects, particle identification, correction factors, background estimation, and the luminosity measurement. The uncertainty assigned to the tracking efficiency for a single track is 3.9% [29], which corresponds to a 7.8% uncertainty for two tracks. 
Furthermore, the uncertainty in the multiple reconstruction rate for a single track is also 3.9%, which propagates to a maximum of 0.4% uncertainty in the cross section for two tracks; this is neglected in the analysis. Misreconstructed tracks bias the sample in two ways: either a CEP event is rejected because a third, misreconstructed track is found, or an event is identified as CEP with one misreconstructed and one genuine track. This source of systematic uncertainty is estimated to be 1% for a single track, which is the maximal misreconstruction rate calculated using inclusive MC samples in the kinematic region ($\pt(\PGp)>0.2\GeV$ and $\abs{\eta(\PGp)}<2.4$) of the analysis. Since the probability of having two or more misreconstructed tracks in these low-multiplicity events is negligible, the final uncertainty remains 1%. From the comparison of the dime mc and starlight simulations, the uncertainty of the vertex correction is estimated to be 1%. The systematic uncertainty in the pileup correction factor for a single event is calculated from only those systematic uncertainties in the luminosity measurement that do not affect its overall normalization. Indeed, the normalization-related systematic uncertainties are compensated in the exponential fit described in Section 0.5.2. The uncertainties that do not affect the normalization are estimated to be 1.6% and 1.5% for the 5.02 TeV [44] and 13 TeV [45] data, respectively. These values propagate to a 1% uncertainty in the pileup correction factor for a single event. After adding up all the selected events, the pileup uncertainty becomes smaller than 0.1% and is neglected in the following. The measured signal yield is affected by the uncertainty arising from two effects associated with calorimeter noise and the veto inefficiency caused by the adopted energy thresholds. A genuine CEP event can be erroneously discarded if the calorimeter noise appears above the energy thresholds used in the veto. Conversely, a non-CEP event can pass the final selection if the extra particles pass the veto requirements. In the HF, these uncertainties are estimated by varying the calorimeter energy thresholds by $\pm 10\%$ [46]. The resulting uncertainty is estimated to be 3% for both the 5.02 and 13 TeV data sets. Similarly, the ECAL and HCAL thresholds are varied by $\pm 5\%$ [47, 48], which results in a 1% uncertainty in the corrected yields at both energies. The systematic uncertainty of the multiparticle background is estimated by varying the control region used in the background estimation procedure: 1–2, 2–9, and 5–9 extra towers. The resulting estimate of the systematic uncertainty in the multiparticle background normalization is 10%. An additional 10% uncertainty is added in quadrature to this value, taking into account the deviations shown in Table 0.5.3; thus the final uncertainty in the multiparticle background normalization is 14%. After subtracting this contribution, this propagates to systematic uncertainties that depend on the invariant mass, transverse momentum, and rapidity of the pion pair. The multiparticle background estimation uncertainty varies between 10 and 20% below $1500\MeV$. Above $1500\MeV$ the uncertainty varies between 20 and 60%, because the signal-to-background ratio is much smaller. The average uncertainty, used as the systematic uncertainty of the total cross section, is 15%. The uncertainty of the exclusive $\PKp\PKm$ and $\Pp\PAp$ background comes from three sources: (i) the multiparticle contamination in the $\rd E/\rd x$ vs. 
momentum distribution, which modifies the $\PK/\PGp$ and $\Pp/\PGp$ ratios; (ii) the uncertainty in the $\PK/\PGp$ ratio above 1 GeV; and (iii) the uncertainty in the $\Pp/\PGp$ ratio above 2 GeV. The multiparticle contamination is estimated from the difference between two extreme cases: all particle types are produced independently, or the sample is purely exclusive. The results correspond to an uncertainty of 70% in the normalization of this background contribution at both energies. To account for the uncertainty of the $\PK/\PGp$ ratio above 1 GeV and of the $\Pp/\PGp$ ratio above 2 GeV, the exclusive background normalization is calculated assuming different values (0.25, 0.30, and 0.40 [43]) for the $\PK/\PGp$ and $\Pp/\PGp$ ratios in these regions. The uncertainties assigned to these effects are 16 and 4%, respectively. Thus the total systematic uncertainty of the exclusive $\PKp\PKm$ and $\Pp\PAp$ background normalization is 72%. After subtracting this background contribution, this propagates to systematic uncertainties that depend on the invariant mass, transverse momentum, and rapidity of the pion pair. The typical range of this systematic uncertainty contribution is 5–20%. For the total cross section, this source contributes an average uncertainty of 6%. The sources and average values of the systematic uncertainties, used as the systematic uncertainty of the total cross section:

| Source | Average value |
| --- | --- |
| Tracking efficiency | 7.8% |
| Misreconstructed tracks | 1% |
| Vertex | 1% |
| HF energy scale | 3% |
| ECAL and HCAL energy scale | 1% |
| Multiparticle background | 15% |
| Exclusive $\PKp\PKm$ and $\Pp\PAp$ background | 6% |
| Total w/o integrated luminosity | 18.3% |
| + Integrated luminosity | 2.3% |

All of the systematic uncertainties listed above are the same for the 5.02 and 13 TeV data sets. Additionally, the systematic uncertainty in the integrated luminosity is 2.3% [44, 45]. The average values of the systematic uncertainties are summarized in Table 0.5.4. The total systematic uncertainty is obtained by adding the individual contributions in quadrature. All systematic uncertainty contributions are considered fully correlated across invariant mass bins. ## 0.6 Results The differential cross sections are calculated from the selected events as functions of the invariant mass, transverse momentum, and rapidity of the pion pair. These are shown in Fig. 5 together with the generator-level predictions of the starlight and dime mc generators, normalized to their cross sections. The MC generators provide an incomplete description of the data, since they do not model the $\mathrm{f}_{0}(500)$, $\mathrm{f}_{0}(980)$, and $\mathrm{f}_{2}(1270)$ resonances, as mentioned in Section 0.3. Figure 5: Differential cross sections as functions of mass (upper row), transverse momentum (middle row), and rapidity (bottom row), compared with generator-level simulations for the 5.02 TeV (left) and 13 TeV (right) data sets. The error bars correspond to statistical uncertainties, whereas the open boxes correspond to systematic uncertainties. There is a peak at 800 MeV, which corresponds to the $\PGrzP{770}$ resonance. Since its quantum numbers $I^{G}(J^{PC})=1^{+}(1^{--})$ are forbidden in DPE processes, the $\rho^{0}$ mesons must be produced in VMP processes. The sharp drop visible around 1000 MeV is expected from previous measurements [11, 16] and can be attributed to the quantum mechanical interference of the $\mathrm{f}_{0}(980)$ with the continuum contribution. There is a prominent peak at 1200–1300 MeV, which corresponds to the $\mathrm{f}_{2}(1270)$ resonance with $I^{G}(J^{PC})=0^{+}(2^{++})$ quantum numbers. This resonance is produced via a DPE process. 
Both dime mc and starlight underestimate the measured spectrum, as these MC event generators do not model the forward dissociation of the protons. Figure 6: Fit to the measured cross section with the sum of four interfering relativistic Breit–Wigner functions convolved with a normal distribution (to account for the experimental resolution of the detector) for the 5.02 TeV (left) and 13 TeV (right) data sets. The error bars correspond to statistical uncertainties, whereas the open boxes correspond to systematic uncertainties. The total cross section of the CEP process with two pions in the final state in the kinematic region $\pt(\PGp)>0.2\GeV$ and $\abs{\eta(\PGp)}<2.4$ is obtained by integrating the observed spectra in this region: $\sigma_{\Pp\Pp\to\Pp^{\prime}\Pp^{\prime}\PGpp\PGpm}(\sqrt{s}=5.02\TeV)=32.6\pm 0.7\stat\pm 6.0\syst\pm 0.8\lum\,\mu\text{b},$ (9) $\sigma_{\Pp\Pp\to\Pp^{\prime}\Pp^{\prime}\PGpp\PGpm}(\sqrt{s}=13\TeV)=33.7\pm 1.0\stat\pm 6.2\syst\pm 0.8\lum\,\mu\text{b}.$ (10) Below, it is demonstrated that the measured invariant $\PGpp\PGpm$ mass spectrum is well described by the sum of the continuum distribution obtained from the dime mc model and four dominant resonances, modeled here by Breit–Wigner functions. The fitting procedure also includes the quantum mechanical interference effect and the detector resolution. The following fit function is used: $f(m)=\int G(m-m^{\prime};\sigma)\Big[\abs{A_{\text{RBW}}^{\rho^{0}}(m^{\prime})}^{2}+\abs{A_{\text{RBW}}^{\PfDzP{500}}(m^{\prime})\re^{\mathrm{i}\phi^{\PfDzP{500}}m^{\prime}}+A_{\text{RBW}}^{\PfDzP{980}}(m^{\prime})\re^{\mathrm{i}\phi^{\PfDzP{980}}m^{\prime}}+A_{\text{RBW}}^{\mathrm{f}_{2}}(m^{\prime})\re^{\mathrm{i}\phi^{\mathrm{f}_{2}}m^{\prime}}+b\,B^{\textsc{dime}}(m^{\prime})}^{2}\Big]\rd m^{\prime}.$ (11) Here $G(m;\sigma)$ is a Gaussian distribution with width $\sigma$ and zero mean, $B^{\textsc{dime}}(m)$ is the nonresonant background estimated from the dime mc using the Orear-type form factor, $b$ is a scale factor for the continuum contribution, and $\phi^{\PfDzP{500}}$, $\phi^{\PfDzP{980}}$, and $\phi^{\mathrm{f}_{2}}$ are phases that characterize the interference effects. Here $A_{\text{RBW}}^{i}(m)$ is the relativistic Breit–Wigner amplitude, which can be written as [49]: $A_{\text{RBW}}^{i,J}(m)=A_{i}\frac{\sqrt{mM_{i}\Gamma(m)}}{m^{2}-M_{i}^{2}+\mathrm{i}M_{i}\Gamma(m)},$ (12) $\Gamma(m)=\Gamma_{i}\frac{M_{i}}{m}\left[\frac{m^{2}-4m_{\PGp}^{2}}{M_{i}^{2}-4m_{\PGp}^{2}}\right]^{\frac{2J+1}{2}},$ (13) where $A_{i}$, $M_{i}$, and $\Gamma_{i}$ are the yield, mass, and width of the resonance, respectively, $m_{\PGp}$ is the mass of the charged pion, and $J$ is the total angular momentum of the resonance. According to Ref. [2], the magnitude of the interference between the DPE and VMP processes is around 1%; therefore, no interference term is used between the $\rho^{0}$ and the DPE resonances. The convolution with the Gaussian distribution models the mass resolution of the detector. The mass resolution ($\sigma$) is calculated by fitting the distribution of the difference between the generator-level and reconstructed masses in the starlight and dime mc simulations. Based on these calculations, the mass resolution is found to vary from 9 to 14 MeV in the mass range 500–2000 MeV. 
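To make Eqs. (11)–(13) concrete, the following numpy sketch evaluates the interfering part of the fit model before the resolution convolution. The parameter values, the treatment of the continuum as a real amplitude, and the function names are illustrative assumptions; only the functional forms follow the equations above.

```python
import numpy as np

M_PI = 0.13957  # charged pion mass [GeV]

def rbw(m, A, M, Gamma0, J):
    """Relativistic Breit-Wigner amplitude of Eqs. (12)-(13)."""
    Gamma = Gamma0 * (M / m) * (
        (m**2 - 4 * M_PI**2) / (M**2 - 4 * M_PI**2)
    ) ** ((2 * J + 1) / 2)
    return A * np.sqrt(m * M * Gamma) / (m**2 - M**2 + 1j * M * Gamma)

def model(m, p, continuum_amp):
    """Squared amplitude of Eq. (11), before the Gaussian convolution.

    The rho0 term enters incoherently; the f0(500), f0(980), f2(1270), and
    the scaled continuum amplitude are summed coherently with linear phases.
    Masses and widths below are rough illustrative values in GeV.
    """
    rho = np.abs(rbw(m, p["A_rho"], 0.775, 0.149, J=1)) ** 2
    coherent = (
        rbw(m, p["A_f0_500"], 0.45, 0.55, J=0) * np.exp(1j * p["phi_f0_500"] * m)
        + rbw(m, p["A_f0_980"], 0.99, 0.06, J=0) * np.exp(1j * p["phi_f0_980"] * m)
        + rbw(m, p["A_f2"], 1.2755, 0.1867, J=2) * np.exp(1j * p["phi_f2"] * m)
        + p["b"] * continuum_amp(m)
    )
    return rho + np.abs(coherent) ** 2

# Example evaluation with placeholder parameters and a flat continuum amplitude.
pars = {"A_rho": 1.0, "A_f0_500": 0.5, "phi_f0_500": 0.0,
        "A_f0_980": 0.3, "phi_f0_980": 1.0, "A_f2": 0.8, "phi_f2": 2.0, "b": 0.2}
masses = np.linspace(0.3, 2.0, 5)  # GeV
print(model(masses, pars, continuum_amp=lambda m: np.ones_like(m)))
```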
In the final fit, an effective mass resolution of 11 MeV is used, and the systematic uncertainty associated with this value is taken into account by repeating the fit with the mass resolution varied between 9 and 14 MeV. The resulting systematic uncertainty is 7–8% for the yield of the $\mathrm{f}_{0}(980)$ and around 1–2% for the yields of the $\mathrm{f}_{0}(500)$, $\PGrzP{770}$, and $\mathrm{f}_{2}(1270)$ resonances. The impact of the uncertainty in the multiparticle (exclusive $\PKp\PKm$ and $\Pp\PAp$) background yield is included by varying the background normalization in the fit by $\pm 14\%$ ($\pm 72\%$). Cross sections of the resonant processes in the $\pt(\PGp)>0.2\GeV$, $\abs{\eta(\PGp)}<2.4$ fiducial region, extracted from the simple model fit using the sum of the continuum distribution obtained from the dime mc model and four dominant resonances. The luminosity-related uncertainties are included in the systematic uncertainties. The starlight predictions for the $\Pp\Pp\to\Pp^{\prime}\Pp^{\prime}\rho^{0}\to\Pp^{\prime}\Pp^{\prime}\PGpp\PGpm$ process are 2.3 and 3.0 $\mu$b at 5.02 and 13 TeV, respectively, which is compatible with the fit results.

| Resonance | $\sigma_{\Pp\Pp\to\Pp^{\prime}\Pp^{\prime}X\to\Pp^{\prime}\Pp^{\prime}\PGpp\PGpm}$ [$\mu$b] at $\sqrt{s}=5.02\TeV$ | at $\sqrt{s}=13\TeV$ |
| --- | --- | --- |
| $\mathrm{f}_{0}(500)$ | $2.8\pm 1.4\stat\pm 2.2\syst$ | $2.2\pm 0.8\stat\pm 1.3\syst$ |
| $\PGrzP{770}$ | $4.7\pm 0.9\stat\pm 1.3\syst$ | $4.3\pm 1.3\stat\pm 1.5\syst$ |
| $\mathrm{f}_{0}(980)$ | $0.5\pm 0.1\stat\pm 0.1\syst$ | $1.1\pm 0.4\stat\pm 0.3\syst$ |
| $\mathrm{f}_{2}(1270)$ | $3.6\pm 0.6\stat\pm 0.7\syst$ | $4.2\pm 0.9\stat\pm 0.8\syst$ |

The masses and widths of the $\PGrzP{770}$ and $\mathrm{f}_{2}(1270)$ resonances are fixed to the values of Ref. [24]. The mass and width of the $\mathrm{f}_{0}(500)$ and $\mathrm{f}_{0}(980)$ are fixed according to the results of the most advanced calculations using dispersion relations [50]. The fits are also performed with the mass and width of the $\mathrm{f}_{0}(500)$ and $\mathrm{f}_{0}(980)$ varied according to their uncertainties [24], and the resulting variation in the cross sections of the resonances is added in quadrature to the other systematic uncertainty contributions. Furthermore, the fit is repeated with the two other dime mc settings, and the variation in the cross sections is taken as an additional systematic uncertainty and added in quadrature to the other uncertainties. The above simple model fit also provides values for the cross sections of the resonances; these are obtained by integrating the fitted squared amplitudes from the dipion threshold ($2m_{\PGp}$) to $M_{i}+5\Gamma_{i}$: $\sigma^{\text{res}}_{i}=\int_{2m_{\PGp}}^{M_{i}+5\Gamma_{i}}\abs{A_{\text{RBW},i}(m)}^{2}\rd m.$ (14) The fits are shown in Fig. 6 and the cross sections are summarized in Table 0.6. The model of interfering Breit–Wigner resonances with a continuum gives a good description of the data in the region of the resonant peaks (below 1500 MeV). The cross sections for $\PGrzP{770}$ production calculated from the fits are slightly larger than the values predicted by starlight, which are 2.3 and 3.0 $\mu$b for 5.02 and 13 TeV, respectively. The differences can be attributed to the additional semiexclusive contribution that is not modeled by starlight. The values of the scale parameter $b$ are $0.7\pm 0.2$ for 5.02 TeV and $1.1\pm 0.3$ for 13 TeV, and are therefore consistent within uncertainties between the two energies. ## 0.7 Summary The cross sections for central exclusive pion pair production have been measured in $\Pp\Pp$ collisions at 5.02 and 13 TeV center-of-mass energies. 
Exclusive events are selected by vetoing additional energy deposits in the calorimeters and by requiring two oppositely charged pions identified via their mean energy loss in the tracker detectors. These events are used together with correction factors to obtain the invariant mass, transverse momentum, and rapidity distributions of the $\PGpp\PGpm$ system. The measured total exclusive $\PGpp\PGpm$ production cross section is $32.6\pm 0.7\stat\pm 6.0\syst\pm 0.8\lum\,\mu\text{b}$ at 5.02 TeV and $33.7\pm 1.0\stat\pm 6.2\syst\pm 0.8\lum\,\mu\text{b}$ at 13 TeV. The observed mass spectrum exhibits resonant structures, which can be fitted with a simple model containing four interfering Breit–Wigner functions, corresponding to the $\mathrm{f}_{0}(500)$, $\PGrzP{770}$, $\mathrm{f}_{0}(980)$, and $\mathrm{f}_{2}(1270)$ resonances, and a continuum contribution modeled by the dime mc. The exclusive production cross sections are extracted from this fit. The obtained cross sections for $\PGrzP{770}$ production are higher than the starlight model prediction, which can be explained by the presence of semiexclusive production, not modeled by the starlight generator. ###### Acknowledgements. We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract Nos. 675440, 752730, and 765710 (European Union); the Leventis Foundation; the A.P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the “Excellence of Science – EOS” – be.h project n. 30820817; the Beijing Municipal Science & Technology Commission, No. 
Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy – EXC 2121 “Quantum Universe” – 390833306; the Lendület (“Momentum”) Program and the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program ÚNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Education, grant no. 14.W03.31.0026 (Russia); the Tomsk Polytechnic University Competitiveness Enhancement Program and “Nauka” Project FSWW-2020-0008 (Russia); the Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia María de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). ## References * [1] P. Lebiedowicz and A. Szczurek, “Exclusive ${\Pp\Pp}\to{\Pp\Pp\pi^{+}\pi^{-}}$ reaction: From the threshold to LHC”, Phys. Rev. D 81 (2010) 036003, 10.1103/PhysRevD.81.036003, arXiv:0912.0190. * [2] P. Lebiedowicz, O. Nachtmann, and A. Szczurek, “Central exclusive diffractive production of $\pi^{+}\pi^{-}$ continuum, scalar and tensor resonances in pp and $\Pp\PAp$ scattering within tensor pomeron approach”, Phys. Rev. D 93 (2016) 054015, 10.1103/PhysRevD.93.054015, arXiv:1601.04537. * [3] P. Lebiedowicz, O. Nachtmann, and A. Szczurek, “$\rho^{0}$ and Drell-Söding contributions to central exclusive production of $\pi^{+}\pi^{-}$ pairs in proton-proton collisions at high energies”, Phys. Rev. D 91 (2015) 074023, 10.1103/PhysRevD.91.074023, arXiv:1412.3677. * [4] A. Bolz et al., “Photoproduction of $\pi^{+}\pi^{-}$ pairs in a model with tensor-pomeron and vector-odderon exchange”, JHEP 01 (2015) 151, 10.1007/JHEP01(2015)151, arXiv:1409.8483. * [5] L. A. Harland-Lang, V. A. Khoze, and M. G. Ryskin, “Modelling exclusive meson pair production at hadron colliders”, Eur. Phys. J. C 74 (2014) 2848, 10.1140/epjc/s10052-014-2848-9, arXiv:1312.4553. * [6] L. A. Harland-Lang, V. A. Khoze, M. G. Ryskin, and W. J. Stirling, “Probing the perturbative dynamics of exclusive meson pair production”, Phys. Lett. B 725 (2013) 316, 10.1016/j.physletb.2013.07.022, arXiv:1304.4262. * [7] R. A. Ryutin, “Central exclusive diffractive production of two-pion continuum at hadron colliders”, Eur. Phys. J. C 79 (2019) 981, 10.1140/epjc/s10052-019-7497-6, arXiv:1910.06683. * [8] E690 Collaboration, “Partial wave analysis of the centrally produced system at 800 GeV/c”, Phys. Rev. Lett. 81 (1998) 4079, 10.1103/PhysRevLett.81.4079. 
* [9] WA102 Collaboration, “A partial wave analysis of the centrally produced ${\PKp\PKm}$ and ${\PKzS\PKzS}$ systems in pp interactions at 450 and new information on the spin of the fJ(1710)”, Phys. Lett. B 453 (1999) 305, 10.1016/S0370-2693(99)00365-2, arXiv:hep-ex/9903042. * [10] WA102 Collaboration, “A partial wave analysis of the centrally produced $\pi^{0}\pi^{0}$ system in ${\Pp\Pp}$ interactions at 450”, Phys. Lett. B 453 (1999) 325, 10.1016/S0370-2693(99)00367-6, arXiv:hep-ex/9903044. * [11] WA102 Collaboration, “A partial wave analysis of the centrally produced $\pi^{+}\pi^{-}$ system in pp interactions at 450”, Phys. Lett. B 453 (1999) 316, 10.1016/S0370-2693(99)00366-4, arXiv:hep-ex/9903043. * [12] ABCDHW Collaboration, “Production of the f0 meson in the double pomeron exchange reaction $\Pp\Pp\to\Pp\Pp\pi^{+}\pi^{-}$ at $\sqrt{s}=62$ GeV”, Z. Phys. C 31 (1986) 185, 10.1007/BF01479525. * [13] ABCDHW Collaboration, “The reaction pomeron-pomeron $\to\pi^{+}\pi^{-}$ and an unusual production mechanism for the f2(1270)”, Z. Phys. C 48 (1990) 569, 10.1007/BF01614690. * [14] ABCDHW Collaboration, “Evidence for f2(1720) production in the reaction pomeron-pomeron $\to\pi^{+}\pi^{-}\pi^{+}\pi^{-}$”, Z. Phys. C 58 (1993) 251, 10.1007/BF01560342. * [15] Axial-Field Spectrometer Collaboration, “A search for glueballs and a study of double pomeron exchange at the CERN Intersecting Storage Rings”, Nucl. Phys. B 264 (1985) 154, 10.1016/0550-3213(86)90477-3. * [16] CDF Collaboration, “Measurement of central exclusive $\pi^{+}\pi^{-}$ production in $\Pp\PAp$ collisions at $\sqrt{s}=0.9$ and 1.96 TeV at CDF”, Phys. Rev. D 91 (2015) 091101, 10.1103/PhysRevD.91.091101, arXiv:1502.01391. * [17] H1 Collaboration, “Elastic photoproduction of J/$\psi$ and $\Upsilon$ mesons at HERA”, Phys. Lett. B 483 (2000) 23, 10.1016/S0370-2693(00)00530-X, arXiv:hep-ex/0003020. * [18] ZEUS Collaboration, “Exclusive photoproduction of $\Upsilon$ mesons at HERA”, Phys. Lett. B 680 (2009) 4, 10.1016/j.physletb.2009.07.066, arXiv:0903.4205. * [19] J. R. Forshaw and D. A. Ross, “Quantum chromodynamics and the pomeron”. Cambridge University Press, 1997. 10.1017/CBO9780511524387, ISBN 9780511524387. * [20] S. Donnachie, G. Dosch, P. Landshoff, and O. Nachtmann, “Pomeron physics and QCD”. Cambridge University Press, 2002. 10.1017/CBO9780511534935, ISBN 9780511534935. * [21] W. Ochs, “The status of glueballs”, J. Phys. G 40 (2013) 043001, 10.1088/0954-3899/40/4/043001, arXiv:1301.5183. * [22] A. A. Godizov, “High-energy central exclusive production of the lightest vacuum resonance related to the soft pomeron”, Phys. Lett. B 787 (2018) 188, 10.1016/j.physletb.2018.10.061, arXiv:1810.01824. * [23] A. J. Baltz, Y. Gorbunov, S. R. Klein, and J. Nystrand, “Two-Photon Interactions with Nuclear Breakup in Relativistic Heavy Ion Collisions”, Phys. Rev. C 80 (2009) 044902, 10.1103/PhysRevC.80.044902, arXiv:0907.1214. * [24] Particle Data Group, M. Tanabashi et al., “Review of particle physics”, Phys. Rev. D 98 (2018) 030001, 10.1103/PhysRevD.98.030001. * [25] S. Okubo, “$\phi$-meson and unitary symmetry model”, Phys. Lett. 5 (1963) 165, 10.1016/S0375-9601(63)92548-9. * [26] G. Zweig, “An SU(3) model for strong interaction symmetry and its breaking. version 2”, in Developments in the quark theory of hadrons. VOL. 1. 1964 - 1978, D. Lichtenberg and S. P. Rosen, eds., p. 22. 1964\. * [27] J. Iizuka, “Systematics and phenomenology of meson family”, Prog. Theor. Phys. Suppl. 37 (1966) 21, 10.1143/PTPS.37.21. * [28] H. B. O’Connell, B. C. 
Pearce, A. W. Thomas, and A. G. Williams, “$\rho$–$\omega$ mixing, vector meson dominance and the pion form-factor”, Prog. Part. Nucl. Phys. 39 (1997) 201, 10.1016/S0146-6410(97)00044-6, arXiv:hep-ph/9501251. * [29] CMS Collaboration, “Description and performance of track and primary-vertex reconstruction with the CMS tracker”, JINST 9 (2014) P10009, 10.1088/1748-0221/9/10/P10009, arXiv:1405.6569. * [30] CMS Collaboration, “CMS Tracking Performance Results from Early LHC Operation”, Eur. Phys. J. C 70 (2010) 1165, 10.1140/epjc/s10052-010-1491-3, arXiv:1007.1988. * [31] CMS Collaboration, “The CMS trigger system”, JINST 12 (2017) P01020, 10.1088/1748-0221/12/01/P01020, arXiv:1609.02366. * [32] CMS Collaboration, “The CMS experiment at the CERN LHC”, JINST 3 (2008) S08004, 10.1088/1748-0221/3/08/S08004. * [33] CMS Collaboration, “Measurement of diffraction dissociation cross sections in pp collisions at $\sqrt{s}$ = 7 TeV”, Phys. Rev. D 92 (2015) 012003, 10.1103/PhysRevD.92.012003, arXiv:1503.08689. * [34] T. Sjöstrand et al., “An Introduction to PYTHIA 8.2”, Comput. Phys. Commun. 191 (2015) 159, 10.1016/j.cpc.2015.01.024, arXiv:1410.3012. * [35] CMS Collaboration, “Event generator tunes obtained from underlying event and multiparton scattering measurements”, Eur. Phys. J. C 76 (2016) 155, 10.1140/epjc/s10052-016-3988-x, arXiv:1512.00815. * [36] R. Ciesielski and K. Goulianos, “MBR Monte Carlo Simulation in PYTHIA8”, PoS ICHEP2012 (2013) 301, 10.22323/1.174.0301, arXiv:1205.1446. * [37] K. Werner, F.-M. Liu, and T. Pierog, “Parton ladder splitting and the rapidity dependence of transverse momentum spectra in deuteron-gold collisions at RHIC”, Phys. Rev. C 74 (2006) 044902, 10.1103/PhysRevC.74.044902, arXiv:hep-ph/0506232. * [38] T. Pierog et al., “EPOS LHC: Test of collective hadronization with data measured at the CERN Large Hadron Collider”, Phys. Rev. C 92 (2015) 034906, 10.1103/PhysRevC.92.034906, arXiv:1306.0121. * [39] H. J. Drescher et al., “Parton based Gribov-Regge theory”, Phys. Rept. 350 (2001) 93, 10.1016/S0370-1573(00)00122-8, arXiv:hep-ph/0007198. * [40] S. R. Klein and J. Nystrand, “Photoproduction of quarkonium in proton-proton and nucleus-nucleus collisions”, Phys. Rev. Lett. 92 (2004) 142003, 10.1103/PhysRevLett.92.142003, arXiv:hep-ph/0311164. * [41] GEANT4 Collaboration, “Geant4—a simulation toolkit”, Nucl. Instrum. Meth. A 506 (2003) 250, 10.1016/S0168-9002(03)01368-8. * [42] L. Quertenmont, “Particle identification with ionization energy loss in the CMS silicon strip tracker”, Nucl. Phys. Proc. Suppl. 215 (2011) 95, 10.1016/j.nuclphysbps.2011.03.145. * [43] CMS Collaboration, “Study of the inclusive production of charged pions, kaons, and protons in pp collisions at $\sqrt{s}=0.9$, 2.76, and 7 TeV”, Eur. Phys. J. C 72 (2012) 2164, 10.1140/epjc/s10052-012-2164-1, arXiv:1207.4724. * [44] CMS Collaboration, “CMS luminosity calibration for the pp reference run at $\sqrt{s}=5.02~{}\mathrm{TeV}$”, CMS Physics Analysis Summary CMS-PAS-LUM-16-001, 2016. * [45] CMS Collaboration, “CMS luminosity measurement for the 2015 data-taking period”, CMS Physics Analysis Summary CMS-PAS-LUM-15-001, 2017. * [46] CMS Collaboration, “Measurement of energy flow at large pseudorapidities in $pp$ collisions at $\sqrt{s}=0.9$ and 7 TeV”, JHEP 11 (2011) 148, 10.1007/JHEP11(2011)148, arXiv:1110.0211. [Erratum: 10.1007/JHEP02(2012)055]. 
* [47] CMS Collaboration, “Energy calibration and resolution of the CMS electromagnetic calorimeter in pp collisions at $\sqrt{s}=7$ TeV”, JINST 8 (2013) P09009, 10.1088/1748-0221/8/09/P09009, arXiv:1306.2016. * [48] CMS Collaboration, “Calibration of the CMS hadron calorimeters using proton-proton collision data at $\sqrt{s}=13$ TeV”, J. Instrum. 15 (2019) P05002, arXiv:1910.00079. * [49] J. D. Jackson, “Remarks on the phenomenological analysis of resonances”, Il Nuovo Cim. 34 (1964) 1644, 10.1007/BF02750563. * [50] R. Garcia-Martin, R. Kaminski, J. R. Pelaez, and J. Ruiz de Elvira, “Precise determination of the f${}_{0}(600)$ and f${}_{0}(980)$ pole parameters from a dispersive data analysis”, Phys. Rev. Lett. 107 (2011) 072001, 10.1103/PhysRevLett.107.072001, arXiv:1107.1635. ## .8 The CMS Collaboration Yerevan Physics Institute, Yerevan, Armenia A.M. Sirunyan${}^{\textrm{\textdagger}}$, A. Tumasyan Institut für Hochenergiephysik, Wien, Austria W. Adam, F. Ambrogi, T. Bergauer, J. Brandstetter, M. Dragicevic, J. Erö, A. Escalante Del Valle, M. Flechl, R. Frühwirth1, M. Jeitler1, N. Krammer, I. Krätschmer, D. Liko, T. Madlener, I. Mikulec, N. Rad, J. Schieck1, R. Schöfbeck, M. Spanring, D. Spitzbart, W. Waltenberger, J. Wittmann, C.-E. Wulz1, M. Zarucki Institute for Nuclear Problems, Minsk, Belarus V. Drugakov, V. Mossolov, J. Suarez Gonzalez Universiteit Antwerpen, Antwerpen, Belgium M.R. Darwish, E.A. De Wolf, D. Di Croce, X. Janssen, J. Lauwers, A. Lelek, M. Pieters, H. Rejeb Sfar, H. Van Haevermaet, P. Van Mechelen, S. Van Putte, N. Van Remortel Vrije Universiteit Brussel, Brussel, Belgium F. Blekman, E.S. Bols, S.S. Chhibra, J. D’Hondt, J. De Clercq, D. Lontkovskyi, S. Lowette, I. Marchesini, S. Moortgat, L. Moreels, Q. Python, K. Skovpen, S. Tavernier, W. Van Doninck, P. Van Mulders, I. Van Parijs Université Libre de Bruxelles, Bruxelles, Belgium D. Beghin, B. Bilin, H. Brun, B. Clerbaux, G. De Lentdecker, H. Delannoy, B. Dorney, L. Favart, A. Grebenyuk, A.K. Kalsi, J. Luetic, A. Popov, N. Postiau, E. Starling, L. Thomas, C. Vander Velde, P. Vanlaer, D. Vannerom, Q. Wang Ghent University, Ghent, Belgium T. Cornelis, D. Dobur, I. Khvastunov2, C. Roskas, D. Trocino, M. Tytgat, W. Verbeke, B. Vermassen, M. Vit, N. Zaganidis Université Catholique de Louvain, Louvain-la-Neuve, Belgium O. Bondu, G. Bruno, C. Caputo, P. David, C. Delaere, M. Delcourt, A. Giammanco, G. Krintiras, V. Lemaitre, A. Magitteri, K. Piotrzkowski, J. Prisciandaro, A. Saggio, M. Vidal Marono, P. Vischia, J. Zobec Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, Brazil F.L. Alves, G.A. Alves, G. Correia Silva, C. Hensel, A. Moraes, P. Rebello Teles Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil E. Belchior Batista Das Chagas, W. Carvalho, J. Chinellato3, E. Coelho, E.M. Da Costa, G.G. Da Silveira4, D. De Jesus Damiao, C. De Oliveira Martins, S. Fonseca De Souza, L.M. Huertas Guativa, H. Malbouisson, J. Martins5, D. Matos Figueiredo, M. Medina Jaime6, M. Melo De Almeida, C. Mora Herrera, L. Mundim, H. Nogima, W.L. Prado Da Silva, L.J. Sanchez Rosas, A. Santoro, A. Sznajder, M. Thiel, E.J. Tonelli Manganote3, F. Torres Da Silva De Araujo, A. Vilela Pereira Universidade Estadual Paulista a, Universidade Federal do ABC b, São Paulo, Brazil S. Ahujaa, C.A. Bernardesa, L. Calligarisa, T.R. Fernandez Perez Tomeia, E.M. Gregoresb, D.S. Lemos, P.G. Mercadanteb, S.F. Novaesa, SandraS. Padulaa Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Sofia, Bulgaria A. 
Aleksandrov, G. Antchev, R. Hadjiiska, P. Iaydjiev, A. Marinov, M. Misheva, M. Rodozov, M. Shopova, G. Sultanov University of Sofia, Sofia, Bulgaria A. Dimitrov, L. Litov, B. Pavlov, P. Petkov Beihang University, Beijing, China W. Fang7, X. Gao7, L. Yuan Department of Physics, Tsinghua University, Beijing, China Z. Hu, Y. Wang Institute of High Energy Physics, Beijing, China M. Ahmad, G.M. Chen, H.S. Chen, M. Chen, C.H. Jiang, D. Leggat, H. Liao, Z. Liu, S.M. Shaheen8, A. Spiezia, J. Tao, E. Yazgan, H. Zhang, S. Zhang8, J. Zhao State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China A. Agapitos, Y. Ban, G. Chen, A. Levin, J. Li, L. Li, Q. Li, Y. Mao, S.J. Qian, D. Wang Universidad de Los Andes, Bogota, Colombia C. Avila, A. Cabrera, L.F. Chaparro Sierra, C. Florez, C.F. González Hernández, M.A. Segura Delgado University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, Split, Croatia D. Giljanović, N. Godinovic, D. Lelas, I. Puljak, T. Sculac University of Split, Faculty of Science, Split, Croatia Z. Antunovic, M. Kovac Institute Rudjer Boskovic, Zagreb, Croatia V. Brigljevic, S. Ceci, D. Ferencek, K. Kadija, B. Mesic, M. Roguljic, A. Starodumov9, T. Susa University of Cyprus, Nicosia, Cyprus M.W. Ather, A. Attikis, E. Erodotou, A. Ioannou, M. Kolosova, S. Konstantinou, G. Mavromanolakis, J. Mousa, C. Nicolaou, F. Ptochos, P.A. Razis, H. Rykaczewski, D. Tsiakkouri Charles University, Prague, Czech Republic M. Finger10, M. Finger Jr.10, A. Kveton, J. Tomsa Escuela Politecnica Nacional, Quito, Ecuador E. Ayala Universidad San Francisco de Quito, Quito, Ecuador E. Carrera Jarrin Academy of Scientific Research and Technology of the Arab Republic of Egypt, Egyptian Network of High Energy Physics, Cairo, Egypt M.A. Mahmoud11,12, Y. Mohammed11 National Institute of Chemical Physics and Biophysics, Tallinn, Estonia S. Bhowmik, A. Carvalho Antunes De Oliveira, R.K. Dewanjee, K. Ehataht, M. Kadastik, M. Raidal, C. Veelken Department of Physics, University of Helsinki, Helsinki, Finland P. Eerola, L. Forthomme, H. Kirschenmann, K. Osterberg, J. Pekkanen, M. Voutilainen Helsinki Institute of Physics, Helsinki, Finland F. Garcia, J. Havukainen, J.K. Heikkilä, T. Järvinen, V. Karimäki, R. Kinnunen, T. Lampén, K. Lassila-Perini, S. Laurila, S. Lehti, T. Lindén, P. Luukka, T. Mäenpää, H. Siikonen, E. Tuominen, J. Tuominiemi Lappeenranta University of Technology, Lappeenranta, Finland T. Tuuva IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France M. Besancon, F. Couderc, M. Dejardin, D. Denegri, B. Fabbro, J.L. Faure, F. Ferri, S. Ganjour, A. Givernaud, P. Gras, G. Hamel de Monchenault, P. Jarry, C. Leloup, E. Locci, J. Malcles, J. Rander, A. Rosowsky, M.Ö. Sahin, A. Savoy- Navarro13, M. Titov Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris C. Amendola, F. Beaudette, P. Busson, C. Charlot, B. Diab, R. Granier de Cassagnac, I. Kucher, A. Lobanov, C. Martin Perez, M. Nguyen, C. Ochando, P. Paganini, J. Rembser, R. Salerno, J.B. Sauvan, Y. Sirois, A. Zabi, A. Zghiche Université de Strasbourg, CNRS, IPHC UMR 7178, Strasbourg, France J.-L. Agram14, J. Andrea, D. Bloch, G. Bourgatte, J.-M. Brom, E.C. Chabert, C. Collard, E. Conte14, J.-C. Fontaine14, D. Gelé, U. Goerlach, M. Jansová, A.-C. Le Bihan, N. Tonon, P. Van Hove Centre de Calcul de l’Institut National de Physique Nucleaire et de Physique des Particules, CNRS/IN2P3, Villeurbanne, France S. 
Gadrat Université de Lyon, Université Claude Bernard Lyon 1, CNRS-IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne, France S. Beauceron, C. Bernet, G. Boudoul, C. Camen, N. Chanon, R. Chierici, D. Contardo, P. Depasse, H. El Mamouni, J. Fay, S. Gascon, M. Gouzevitch, B. Ille, Sa. Jain, F. Lagarde, I.B. Laktineh, H. Lattaud, M. Lethuillier, L. Mirabito, S. Perries, V. Sordini, G. Touquet, M. Vander Donckt, S. Viret Georgian Technical University, Tbilisi, Georgia T. Toriashvili15 Tbilisi State University, Tbilisi, Georgia Z. Tsamalaidze10 RWTH Aachen University, I. Physikalisches Institut, Aachen, Germany C. Autermann, L. Feld, M.K. Kiesel, K. Klein, M. Lipinski, D. Meuser, A. Pauls, M. Preuten, M.P. Rauch, C. Schomakers, J. Schulz, M. Teroerde, B. Wittmer RWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany A. Albert, M. Erdmann, S. Erdweg, T. Esch, B. Fischer, R. Fischer, S. Ghosh, T. Hebbeker, K. Hoepfner, H. Keller, L. Mastrolorenzo, M. Merschmeyer, A. Meyer, P. Millet, G. Mocellin, S. Mondal, S. Mukherjee, D. Noll, A. Novak, T. Pook, A. Pozdnyakov, T. Quast, M. Radziej, Y. Rath, H. Reithler, M. Rieger, A. Schmidt, S.C. Schuler, A. Sharma, S. Thüer, S. Wiedenbeck RWTH Aachen University, III. Physikalisches Institut B, Aachen, Germany G. Flügge, W. Haj Ahmad16, O. Hlushchenko, T. Kress, T. Müller, A. Nehrkorn, A. Nowack, C. Pistone, O. Pooth, D. Roy, H. Sert, A. Stahl17 Deutsches Elektronen-Synchrotron, Hamburg, Germany M. Aldaya Martin, C. Asawatangtrakuldee, P. Asmuss, I. Babounikau, H. Bakhshiansohi, K. Beernaert, O. Behnke, U. Behrens, A. Bermúdez Martínez, D. Bertsche, A.A. Bin Anuar, K. Borras18, V. Botta, A. Campbell, A. Cardini, P. Connor, S. Consuegra Rodríguez, C. Contreras-Campana, V. Danilov, A. De Wit, M.M. Defranchis, C. Diez Pardos, D. Domínguez Damiani, G. Eckerlin, D. Eckstein, T. Eichhorn, A. Elwood, E. Eren, E. Gallo19, A. Geiser, J.M. Grados Luyando, A. Grohsjean, M. Guthoff, M. Haranko, A. Harb, N.Z. Jomhari, H. Jung, A. Kasem18, M. Kasemann, J. Keaveney, C. Kleinwort, J. Knolle, D. Krücker, W. Lange, T. Lenz, J. Leonard, J. Lidrych, K. Lipka, W. Lohmann20, R. Mankel, I.-A. Melzer-Pellmann, A.B. Meyer, M. Meyer, M. Missiroli, G. Mittag, J. Mnich, A. Mussgiller, V. Myronenko, D. Pérez Adán, S.K. Pflitsch, D. Pitzl, A. Raspereza, A. Saibel, M. Savitskyi, V. Scheurer, P. Schütze, C. Schwanenberger, R. Shevchenko, A. Singh, H. Tholen, O. Turkot, A. Vagnerini, M. Van De Klundert, G.P. Van Onsem, R. Walsh, Y. Wen, K. Wichmann, C. Wissing, O. Zenaiev, R. Zlebcik University of Hamburg, Hamburg, Germany R. Aggleton, S. Bein, L. Benato, A. Benecke, V. Blobel, T. Dreyer, A. Ebrahimi, A. Fröhlich, C. Garbers, E. Garutti, D. Gonzalez, P. Gunnellini, J. Haller, A. Hinzmann, A. Karavdina, G. Kasieczka, R. Klanner, R. Kogler, N. Kovalchuk, S. Kurz, V. Kutzner, J. Lange, T. Lange, A. Malara, D. Marconi, J. Multhaup, M. Niedziela, C.E.N. Niemeyer, D. Nowatschin, A. Perieanu, A. Reimers, O. Rieger, C. Scharf, P. Schleper, S. Schumann, J. Schwandt, J. Sonneveld, H. Stadie, G. Steinbrück, F.M. Stober, M. Stöver, B. Vormwald, I. Zoi Karlsruher Institut fuer Technologie, Karlsruhe, Germany M. Akbiyik, C. Barth, M. Baselga, S. Baur, T. Berger, E. Butz, R. Caspart, T. Chwalek, W. De Boer, A. Dierlamm, K. El Morabit, N. Faltermann, M. Giffels, P. Goldenzweig, A. Gottmann, M.A. Harrendorf, F. Hartmann17, U. Husemann, S. Kudella, S. Mitra, M.U. Mozer, Th. Müller, M. Musich, A. Nürnberg, G. Quast, K. Rabbertz, M. Schröder, I. Shvetsov, H.J. Simonis, R. Ulrich, M. 
Weber, C. Wöhrmann, R. Wolf Institute of Nuclear and Particle Physics (INPP), NCSR Demokritos, Aghia Paraskevi, Greece G. Anagnostou, P. Asenov, G. Daskalakis, T. Geralis, A. Kyriakis, D. Loukas, G. Paspalaki National and Kapodistrian University of Athens, Athens, Greece M. Diamantopoulou, G. Karathanasis, P. Kontaxakis, A. Panagiotou, I. Papavergou, N. Saoulidou, A. Stakia, K. Theofilatos, K. Vellidis National Technical University of Athens, Athens, Greece G. Bakas, K. Kousouris, I. Papakrivopoulos, G. Tsipolitis University of Ioánnina, Ioánnina, Greece I. Evangelou, C. Foudas, P. Gianneios, P. Katsoulis, P. Kokkas, S. Mallios, K. Manitara, N. Manthos, I. Papadopoulos, J. Strologas, F.A. Triantis, D. Tsitsonis MTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös Loránd University, Budapest, Hungary M. Bartók21, M. Csanad, P. Major, K. Mandal, A. Mehta, M.I. Nagy, G. Pasztor, O. Surányi, G.I. Veres Wigner Research Centre for Physics, Budapest, Hungary G. Bencze, C. Hajdu, D. Horvath22, F. Sikler, T.Á. Vámi, V. Veszpremi, G. Vesztergombi${}^{\textrm{\textdagger}}$ Institute of Nuclear Research ATOMKI, Debrecen, Hungary N. Beni, S. Czellar, J. Karancsi21, A. Makovec, J. Molnar, Z. Szillasi Institute of Physics, University of Debrecen, Debrecen, Hungary P. Raics, D. Teyssier, Z.L. Trocsanyi, B. Ujvari Eszterhazy Karoly University, Karoly Robert Campus, Gyongyos, Hungary T.F. Csorgo, W.J. Metzger, F. Nemes, T. Novak Indian Institute of Science (IISc), Bangalore, India S. Choudhury, J.R. Komaragiri, P.C. Tiwari National Institute of Science Education and Research, HBNI, Bhubaneswar, India S. Bahinipati24, C. Kar, G. Kole, P. Mal, V.K. Muraleedharan Nair Bindhu, A. Nayak25, S. Roy Chowdhury, D.K. Sahoo24, S.K. Swain Panjab University, Chandigarh, India S. Bansal, S.B. Beri, V. Bhatnagar, S. Chauhan, R. Chawla, N. Dhingra, R. Gupta, A. Kaur, M. Kaur, S. Kaur, P. Kumari, M. Lohan, M. Meena, K. Sandeep, S. Sharma, J.B. Singh, A.K. Virdi, G. Walia University of Delhi, Delhi, India A. Bhardwaj, B.C. Choudhary, R.B. Garg, M. Gola, S. Keshri, Ashok Kumar, S. Malhotra, M. Naimuddin, P. Priyanka, K. Ranjan, Aashaq Shah, R. Sharma Saha Institute of Nuclear Physics, HBNI, Kolkata, India R. Bhardwaj26, M. Bharti26, R. Bhattacharya, S. Bhattacharya, U. Bhawandeep26, D. Bhowmik, S. Dey, S. Dutta, S. Ghosh, M. Maity27, K. Mondal, S. Nandan, A. Purohit, P.K. Rout, A. Roy, G. Saha, S. Sarkar, T. Sarkar27, M. Sharan, B. Singh26, S. Thakur26 Indian Institute of Technology Madras, Madras, India P.K. Behera, P. Kalbhor, A. Muhammad, P.R. Pujahari, A. Sharma, A.K. Sikdar Bhabha Atomic Research Centre, Mumbai, India R. Chudasama, D. Dutta, V. Jha, V. Kumar, D.K. Mishra, P.K. Netrakanti, L.M. Pant, P. Shukla Tata Institute of Fundamental Research-A, Mumbai, India T. Aziz, M.A. Bhat, S. Dugad, G.B. Mohanty, N. Sur, RavindraKumar Verma Tata Institute of Fundamental Research-B, Mumbai, India S. Banerjee, S. Bhattacharya, S. Chatterjee, P. Das, M. Guchait, S. Karmakar, S. Kumar, G. Majumder, K. Mazumdar, N. Sahoo, S. Sawant Indian Institute of Science Education and Research (IISER), Pune, India S. Chauhan, S. Dube, V. Hegde, A. Kapoor, K. Kothekar, S. Pandey, A. Rane, A. Rastogi, S. Sharma Institute for Research in Fundamental Sciences (IPM), Tehran, Iran S. Chenarani28, E. Eskandari Tadavani, S.M. Etesami28, M. Khakzad, M. Mohammadi Najafabadi, M. Naseri, F. Rezaei Hosseinabadi University College Dublin, Dublin, Ireland M. Felcini, M. 
Grunewald INFN Sezione di Bari a, Università di Bari b, Politecnico di Bari c, Bari, Italy M. Abbresciaa,b, C. Calabriaa,b, A. Colaleoa, D. Creanzaa,c, L. Cristellaa,b, N. De Filippisa,c, M. De Palmaa,b, A. Di Florioa,b, L. Fiorea, A. Gelmia,b, G. Iasellia,c, M. Incea,b, S. Lezkia,b, G. Maggia,c, M. Maggia, G. Minielloa,b, S. Mya,b, S. Nuzzoa,b, A. Pompilia,b, G. Pugliesea,c, R. Radognaa, A. Ranieria, G. Selvaggia,b, L. Silvestrisa, R. Vendittia, P. Verwilligena INFN Sezione di Bologna a, Università di Bologna b, Bologna, Italy G. Abbiendia, C. Battilanaa,b, D. Bonacorsia,b, L. Borgonovia,b, S. Braibant- Giacomellia,b, R. Campaninia,b, P. Capiluppia,b, A. Castroa,b, F.R. Cavalloa, C. Cioccaa, G. Codispotia,b, M. Cuffiania,b, G.M. Dallavallea, F. Fabbria, A. Fanfania,b, E. Fontanesi, P. Giacomellia, C. Grandia, L. Guiduccia,b, F. Iemmia,b, S. Lo Meoa,29, S. Marcellinia, G. Masettia, F.L. Navarriaa,b, A. Perrottaa, F. Primaveraa,b, A.M. Rossia,b, T. Rovellia,b, G.P. Sirolia,b, N. Tosia INFN Sezione di Catania a, Università di Catania b, Catania, Italy S. Albergoa,b,30, S. Costaa,b, A. Di Mattiaa, R. Potenzaa,b, A. Tricomia,b,30, C. Tuvea,b INFN Sezione di Firenze a, Università di Firenze b, Firenze, Italy G. Barbaglia, R. Ceccarellia,b, K. Chatterjeea,b, V. Ciullia,b, C. Civininia, R. D’Alessandroa,b, E. Focardia,b, G. Latino, P. Lenzia,b, M. Meschinia, S. Paolettia, L. Russoa,31, G. Sguazzonia, D. Stroma, L. Viliania INFN Laboratori Nazionali di Frascati, Frascati, Italy L. Benussi, S. Bianco, D. Piccolo INFN Sezione di Genova a, Università di Genova b, Genova, Italy M. Bozzoa,b, F. Ferroa, R. Mulargiaa,b, E. Robuttia, S. Tosia,b INFN Sezione di Milano-Bicocca a, Università di Milano-Bicocca b, Milano, Italy A. Benagliaa, A. Beschia,b, F. Brivioa,b, V. Cirioloa,b,17, S. Di Guidaa,b,17, M.E. Dinardoa,b, P. Dinia, S. Fiorendia,b, S. Gennaia, A. Ghezzia,b, P. Govonia,b, L. Guzzia,b, M. Malbertia, S. Malvezzia, D. Menascea, F. Montia,b, L. Moronia, G. Ortonaa,b, M. Paganonia,b, D. Pedrinia, S. Ragazzia,b, T. Tabarelli de Fatisa,b, D. Zuoloa,b INFN Sezione di Napoli a, Università di Napoli ’Federico II’ b, Napoli, Italy, Università della Basilicata c, Potenza, Italy, Università G. Marconi d, Roma, Italy S. Buontempoa, N. Cavalloa,c, A. De Iorioa,b, A. Di Crescenzoa,b, F. Fabozzia,c, F. Fiengaa, G. Galatia, A.O.M. Iorioa,b, L. Listaa,b, S. Meolaa,d,17, P. Paoluccia,17, B. Rossia, C. Sciaccaa,b, E. Voevodinaa,b INFN Sezione di Padova a, Università di Padova b, Padova, Italy, Università di Trento c, Trento, Italy P. Azzia, N. Bacchettaa, D. Biselloa,b, A. Bolettia,b, A. Bragagnolo, R. Carlina,b, P. Checchiaa, P. De Castro Manzanoa, T. Dorigoa, U. Dossellia, F. Gasparinia,b, U. Gasparinia,b, A. Gozzelinoa, S.Y. Hoh, P. Lujan, M. Margonia,b, A.T. Meneguzzoa,b, J. Pazzinia,b, M. Presillab, P. Ronchesea,b, R. Rossina,b, F. Simonettoa,b, A. Tiko, M. Tosia,b, M. Zanettia,b, P. Zottoa,b, G. Zumerlea,b INFN Sezione di Pavia a, Università di Pavia b, Pavia, Italy A. Braghieria, P. Montagnaa,b, S.P. Rattia,b, V. Rea, M. Ressegottia,b, C. Riccardia,b, P. Salvinia, I. Vaia,b, P. Vituloa,b INFN Sezione di Perugia a, Università di Perugia b, Perugia, Italy M. Biasinia,b, G.M. Bileia, C. Cecchia,b, D. Ciangottinia,b, L. Fanòa,b, P. Laricciaa,b, R. Leonardia,b, E. Manonia, G. Mantovania,b, V. Mariania,b, M. Menichellia, A. Rossia,b, A. Santocchiaa,b, D. Spigaa INFN Sezione di Pisa a, Università di Pisa b, Scuola Normale Superiore di Pisa c, Pisa, Italy K. Androsova, P. Azzurria, G. Bagliesia, V. Bertacchia,c, L. 
Bianchinia, T. Boccalia, R. Castaldia, M.A. Cioccia,b, R. Dell’Orsoa, G. Fedia, F. Fioria,c, L. Gianninia,c, A. Giassia, M.T. Grippoa, F. Ligabuea,c, E. Mancaa,c, G. Mandorlia,c, A. Messineoa,b, F. Pallaa, A. Rizzia,b, G. Rolandi32, A. Scribanoa, P. Spagnoloa, R. Tenchinia, G. Tonellia,b, N. Turinia, A. Venturia, P.G. Verdinia INFN Sezione di Roma a, Sapienza Università di Roma b, Rome, Italy F. Cavallaria, M. Cipriania,b, D. Del Rea,b, E. Di Marcoa,b, M. Diemoza, E. Longoa,b, B. Marzocchia,b, P. Meridiania, G. Organtinia,b, F. Pandolfia, R. Paramattia,b, C. Quarantaa,b, S. Rahatloua,b, C. Rovellia, F. Santanastasioa,b, L. Soffia,b INFN Sezione di Torino a, Università di Torino b, Torino, Italy, Università del Piemonte Orientale c, Novara, Italy N. Amapanea,b, R. Arcidiaconoa,c, S. Argiroa,b, M. Arneodoa,c, N. Bartosika, R. Bellana,b, C. Biinoa, A. Cappatia,b, N. Cartigliaa, S. Comettia, M. Costaa,b, R. Covarellia,b, N. Demariaa, B. Kiania,b, C. Mariottia, S. Masellia, E. Migliorea,b, V. Monacoa,b, E. Monteila,b, M. Montenoa, M.M. Obertinoa,b, L. Pachera,b, N. Pastronea, M. Pelliccionia, G.L. Pinna Angionia,b, A. Romeroa,b, M. Ruspaa,c, R. Sacchia,b, R. Salvaticoa,b, K. Shchelinaa,b, V. Solaa, A. Solanoa,b, D. Soldia,b, A. Staianoa INFN Sezione di Trieste a, Università di Trieste b, Trieste, Italy S. Belfortea, V. Candelisea,b, M. Casarsaa, F. Cossuttia, A. Da Rolda,b, G. Della Riccaa,b, F. Vazzolera,b, A. Zanettia Kyungpook National University, Daegu, Korea B. Kim, D.H. Kim, G.N. Kim, M.S. Kim, J. Lee, S.W. Lee, C.S. Moon, Y.D. Oh, S.I. Pak, S. Sekmen, D.C. Son, Y.C. Yang Chonnam National University, Institute for Universe and Elementary Particles, Kwangju, Korea H. Kim, D.H. Moon, G. Oh Hanyang University, Seoul, Korea B. Francois, T.J. Kim, J. Park Korea University, Seoul, Korea S. Cho, S. Choi, Y. Go, D. Gyun, S. Ha, B. Hong, K. Lee, K.S. Lee, J. Lim, J. Park, S.K. Park, Y. Roh Kyung Hee University, Department of Physics J. Goh Sejong University, Seoul, Korea H.S. Kim Seoul National University, Seoul, Korea J. Almond, J.H. Bhyun, J. Choi, S. Jeon, J. Kim, J.S. Kim, H. Lee, K. Lee, S. Lee, K. Nam, S.B. Oh, B.C. Radburn-Smith, S.h. Seo, U.K. Yang, H.D. Yoo, I. Yoon, G.B. Yu University of Seoul, Seoul, Korea D. Jeon, H. Kim, J.H. Kim, J.S.H. Lee, I.C. Park, I. Watson Sungkyunkwan University, Suwon, Korea Y. Choi, C. Hwang, Y. Jeong, J. Lee, Y. Lee, I. Yu Riga Technical University, Riga, Latvia V. Veckalns33 Vilnius University, Vilnius, Lithuania V. Dudenas, A. Juodagalvis, J. Vaitkus National Centre for Particle Physics, Universiti Malaya, Kuala Lumpur, Malaysia Z.A. Ibrahim, F. Mohamad Idris34, W.A.T. Wan Abdullah, M.N. Yusli, Z. Zolkapli Universidad de Sonora (UNISON), Hermosillo, Mexico J.F. Benitez, A. Castaneda Hernandez, J.A. Murillo Quijada, L. Valencia Palomo Centro de Investigacion y de Estudios Avanzados del IPN, Mexico City, Mexico H. Castilla-Valdez, E. De La Cruz-Burelo, I. Heredia-De La Cruz35, R. Lopez- Fernandez, A. Sanchez-Hernandez Universidad Iberoamericana, Mexico City, Mexico S. Carrillo Moreno, C. Oropeza Barrera, M. Ramirez-Garcia, F. Vazquez Valencia Benemerita Universidad Autonoma de Puebla, Puebla, Mexico J. Eysermans, I. Pedraza, H.A. Salazar Ibarguen, C. Uribe Estrada Universidad Autónoma de San Luis Potosí, San Luis Potosí, Mexico A. Morelos Pineda University of Montenegro, Podgorica, Montenegro N. Raicevic University of Auckland, Auckland, New Zealand D. Krofcheck University of Canterbury, Christchurch, New Zealand S. Bheesette, P.H. 
Butler National Centre for Physics, Quaid-I-Azam University, Islamabad, Pakistan A. Ahmad, M. Ahmad, Q. Hassan, H.R. Hoorani, W.A. Khan, M.A. Shah, M. Shoaib, M. Waqas AGH University of Science and Technology Faculty of Computer Science, Electronics and Telecommunications, Krakow, Poland V. Avati, L. Grzanka, M. Malawski National Centre for Nuclear Research, Swierk, Poland H. Bialkowska, M. Bluj, B. Boimska, M. Górski, M. Kazana, M. Szleper, P. Zalewski Institute of Experimental Physics, Faculty of Physics, University of Warsaw, Warsaw, Poland K. Bunkowski, A. Byszuk36, K. Doroba, A. Kalinowski, M. Konecki, J. Krolikowski, M. Misiura, M. Olszewski, A. Pyskir, M. Walczak Laboratório de Instrumentação e Física Experimental de Partículas, Lisboa, Portugal M. Araujo, P. Bargassa, D. Bastos, A. Di Francesco, P. Faccioli, B. Galinhas, M. Gallinaro, J. Hollar, N. Leonardo, J. Seixas, G. Strong, O. Toldaiev, J. Varela Joint Institute for Nuclear Research, Dubna, Russia S. Afanasiev, P. Bunin, M. Gavrilenko, I. Golutvin, I. Gorbunov, A. Kamenev, V. Karjavine, A. Lanev, A. Malakhov, V. Matveev37,38, P. Moisenz, V. Palichik, V. Perelygin, M. Savina, S. Shmatov, S. Shulha, N. Skatchkov, V. Smirnov, N. Voytishin, A. Zarubin Petersburg Nuclear Physics Institute, Gatchina (St. Petersburg), Russia L. Chtchipounov, V. Golovtsov, Y. Ivanov, V. Kim39, E. Kuznetsova40, P. Levchenko, V. Murzin, V. Oreshkin, I. Smirnov, D. Sosnov, V. Sulimov, L. Uvarov, A. Vorobyev Institute for Nuclear Research, Moscow, Russia Yu. Andreev, A. Dermenev, S. Gninenko, N. Golubev, A. Karneyeu, M. Kirsanov, N. Krasnikov, A. Pashenkov, D. Tlisov, A. Toropin Institute for Theoretical and Experimental Physics named by A.I. Alikhanov of NRC ‘Kurchatov Institute’, Moscow, Russia V. Epshteyn, V. Gavrilov, N. Lychkovskaya, A. Nikitenko41, V. Popov, I. Pozdnyakov, G. Safronov, A. Spiridonov, A. Stepennov, M. Toms, E. Vlasov, A. Zhokin Moscow Institute of Physics and Technology, Moscow, Russia T. Aushev National Research Nuclear University ’Moscow Engineering Physics Institute’ (MEPhI), Moscow, Russia M. Chadeeva42, D. Philippov, E. Popova, V. Rusinov P.N. Lebedev Physical Institute, Moscow, Russia V. Andreev, M. Azarkin, I. Dremin38, M. Kirakosyan, A. Terkulov Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow, Russia A. Belyaev, E. Boos, A. Ershov, A. Gribushin, L. Khein, V. Klyukhin, O. Kodolova, I. Lokhtin, O. Lukina, S. Obraztsov, S. Petrushanko, V. Savrin, A. Snigirev Novosibirsk State University (NSU), Novosibirsk, Russia A. Barnyakov43, V. Blinov43, T. Dimova43, L. Kardapoltsev43, Y. Skovpen43 Institute for High Energy Physics of National Research Centre ‘Kurchatov Institute’, Protvino, Russia I. Azhgirey, I. Bayshev, S. Bitioukov, V. Kachanov, D. Konstantinov, P. Mandrik, V. Petrov, R. Ryutin, S. Slabospitskii, A. Sobol, S. Troshin, N. Tyurin, A. Uzunian, A. Volkov National Research Tomsk Polytechnic University, Tomsk, Russia A. Babaev, A. Iuzhakov, V. Okhotnikov Tomsk State University, Tomsk, Russia V. Borchsh, V. Ivanchenko, E. Tcherniaev University of Belgrade: Faculty of Physics and VINCA Institute of Nuclear Sciences P. Adzic44, P. Cirkovic, D. Devetak, M. Dordevic, P. Milenovic, J. Milosevic, M. Stojanovic Centro de Investigaciones Energéticas Medioambientales y Tecnológicas (CIEMAT), Madrid, Spain M. Aguilar-Benitez, J. Alcaraz Maestre, A. Álvarez Fernández, I. Bachiller, M. Barrio Luna, J.A. Brochero Cifuentes, C.A. Carrillo Montoya, M. Cepeda, M. Cerrada, N. Colino, B. De La Cruz, A. 
Delgado Peris, C. Fernandez Bedoya, J.P. Fernández Ramos, J. Flix, M.C. Fouz, O. Gonzalez Lopez, S. Goy Lopez, J.M. Hernandez, M.I. Josa, D. Moran, Á. Navarro Tobar, A. Pérez-Calero Yzquierdo, J. Puerta Pelayo, I. Redondo, L. Romero, S. Sánchez Navas, M.S. Soares, A. Triossi, C. Willmott Universidad Autónoma de Madrid, Madrid, Spain C. Albajar, J.F. de Trocóniz Universidad de Oviedo, Instituto Universitario de Ciencias y Tecnologías Espaciales de Asturias (ICTEA), Oviedo, Spain B. Alvarez Gonzalez, J. Cuevas, C. Erice, J. Fernandez Menendez, S. Folgueras, I. Gonzalez Caballero, J.R. González Fernández, E. Palencia Cortezon, V. Rodríguez Bouza, S. Sanchez Cruz Instituto de Física de Cantabria (IFCA), CSIC-Universidad de Cantabria, Santander, Spain I.J. Cabrillo, A. Calderon, B. Chazin Quero, J. Duarte Campderros, M. Fernandez, P.J. Fernández Manteca, A. García Alonso, G. Gomez, C. Martinez Rivero, P. Martinez Ruiz del Arbol, F. Matorras, J. Piedra Gomez, C. Prieels, T. Rodrigo, A. Ruiz-Jimeno, L. Scodellaro, N. Trevisani, I. Vila, J.M. Vizan Garcia University of Colombo, Colombo, Sri Lanka D.U.J. Sonnadara University of Ruhuna, Department of Physics, Matara, Sri Lanka W.G.D. Dharmaratna, N. Wickramage CERN, European Organization for Nuclear Research, Geneva, Switzerland D. Abbaneo, B. Akgun, E. Auffray, G. Auzinger, J. Baechler, P. Baillon, A.H. Ball, D. Barney, J. Bendavid, M. Bianco, A. Bocci, E. Bossini, C. Botta, E. Brondolin, T. Camporesi, A. Caratelli, G. Cerminara, E. Chapon, G. Cucciati, D. d’Enterria, A. Dabrowski, N. Daci, V. Daponte, A. David, A. De Roeck, N. Deelen, M. Deile, M. Dobson, M. Dünser, N. Dupont, A. Elliott-Peisert, F. Fallavollita45, D. Fasanella, G. Franzoni, J. Fulcher, W. Funk, S. Giani, D. Gigi, A. Gilbert, K. Gill, F. Glege, M. Gruchala, M. Guilbaud, D. Gulhan, J. Hegeman, C. Heidegger, Y. Iiyama, V. Innocente, A. Jafari, P. Janot, O. Karacheban20, J. Kaspar, J. Kieseler, M. Krammer1, C. Lange, P. Lecoq, C. Lourenço, L. Malgeri, M. Mannelli, A. Massironi, F. Meijers, J.A. Merlin, S. Mersi, E. Meschi, F. Moortgat, M. Mulders, J. Ngadiuba, S. Nourbakhsh, S. Orfanelli, L. Orsini, F. Pantaleo17, L. Pape, E. Perez, M. Peruzzi, A. Petrilli, G. Petrucciani, A. Pfeiffer, M. Pierini, F.M. Pitters, M. Quinto, D. Rabady, A. Racz, M. Rovere, H. Sakulin, C. Schäfer, C. Schwick, M. Selvaggi, A. Sharma, P. Silva, W. Snoeys, P. Sphicas46, J. Steggemann, V.R. Tavolaro, D. Treille, A. Tsirou, A. Vartak, M. Verzetti, W.D. Zeuner Paul Scherrer Institut, Villigen, Switzerland L. Caminada47, K. Deiters, W. Erdmann, R. Horisberger, Q. Ingram, H.C. Kaestli, D. Kotlinski, U. Langenegger, T. Rohe, S.A. Wiederkehr ETH Zurich - Institute for Particle Physics and Astrophysics (IPA), Zurich, Switzerland M. Backhaus, P. Berger, N. Chernyavskaya, G. Dissertori, M. Dittmar, M. Donegà, C. Dorfer, T.A. Gómez Espinosa, C. Grab, D. Hits, T. Klijnsma, W. Lustermann, R.A. Manzoni, M. Marionneau, M.T. Meinhard, F. Micheli, P. Musella, F. Nessi-Tedaldi, F. Pauss, G. Perrin, L. Perrozzi, S. Pigazzini, M. Reichmann, C. Reissel, T. Reitenspiess, D. Ruini, D.A. Sanz Becerra, M. Schönenberger, L. Shchutska, M.L. Vesterbacka Olsson, R. Wallny, D.H. Zhu Universität Zürich, Zurich, Switzerland T.K. Aarrestad, C. Amsler48, D. Brzhechko, M.F. Canelli, A. De Cosa, R. Del Burgo, S. Donato, C. Galloni, B. Kilminster, S. Leontsinis, V.M. Mikuni, I. Neutelings, G. Rauco, P. Robmann, D. Salerno, K. Schweiger, C. Seitz, Y. Takahashi, S. Wertz, A. Zucchetta National Central University, Chung-Li, Taiwan T.H. Doan, C.M. 
Kuo, W. Lin, S.S. Yu National Taiwan University (NTU), Taipei, Taiwan P. Chang, Y. Chao, K.F. Chen, P.H. Chen, W.-S. Hou, Y.y. Li, R.-S. Lu, E. Paganis, A. Psallidas, A. Steen Chulalongkorn University, Faculty of Science, Department of Physics, Bangkok, Thailand B. Asavapibhop, N. Srimanobhas, N. Suwonjandee Çukurova University, Physics Department, Science and Art Faculty, Adana, Turkey M.N. Bakirci49, A. Bat, F. Boran, S. Damarseckin50, Z.S. Demiroglu, F. Dolek, C. Dozen, I. Dumanoglu, E. Eskut, G. Gokbulut, EmineGurpinar Guler51, Y. Guler, I. Hos52, C. Isik, E.E. Kangal53, O. Kara, A. Kayis Topaksu, U. Kiminsu, M. Oglakci, G. Onengut, K. Ozdemir54, A.E. Simsek, D. Sunar Cerci55, U.G. Tok, S. Turkcapar, I.S. Zorbakir, C. Zorbilmez Middle East Technical University, Physics Department, Ankara, Turkey B. Isildak56, G. Karapinar57, M. Yalvac Bogazici University, Istanbul, Turkey I.O. Atakisi, E. Gülmez, M. Kaya58, O. Kaya59, B. Kaynak, Ö. Özçelik, S. Ozkorucuklu60, S. Tekten, E.A. Yetkin61 Istanbul Technical University, Istanbul, Turkey A. Cakir, K. Cankocak, Y. Komurcu, S. Sen62 Institute for Scintillation Materials of National Academy of Science of Ukraine, Kharkov, Ukraine B. Grynyov National Scientific Center, Kharkov Institute of Physics and Technology, Kharkov, Ukraine L. Levchuk University of Bristol, Bristol, United Kingdom F. Ball, E. Bhal, S. Bologna, J.J. Brooke, D. Burns, E. Clement, D. Cussans, O. Davignon, H. Flacher, J. Goldstein, G.P. Heath, H.F. Heath, L. Kreczko, S. Paramesvaran, B. Penning, T. Sakuma, S. Seif El Nasr-Storey, D. Smith, V.J. Smith, J. Taylor, A. Titterton Rutherford Appleton Laboratory, Didcot, United Kingdom K.W. Bell, A. Belyaev63, C. Brew, R.M. Brown, D. Cieri, D.J.A. Cockerill, J.A. Coughlan, K. Harder, S. Harper, J. Linacre, K. Manolopoulos, D.M. Newbold, E. Olaiya, D. Petyt, T. Reis, T. Schuh, C.H. Shepherd-Themistocleous, A. Thea, I.R. Tomalin, T. Williams, W.J. Womersley Imperial College, London, United Kingdom R. Bainbridge, P. Bloch, J. Borg, S. Breeze, O. Buchmuller, A. Bundock, GurpreetSingh CHAHAL64, D. Colling, P. Dauncey, G. Davies, M. Della Negra, R. Di Maria, P. Everaerts, G. Hall, G. Iles, T. James, M. Komm, C. Laner, L. Lyons, A.-M. Magnan, S. Malik, A. Martelli, V. Milosevic, J. Nash65, V. Palladino, M. Pesaresi, D.M. Raymond, A. Richards, A. Rose, E. Scott, C. Seez, A. Shtipliyski, M. Stoye, T. Strebler, S. Summers, A. Tapper, K. Uchida, T. Virdee17, N. Wardle, D. Winterbottom, J. Wright, A.G. Zecchinelli, S.C. Zenz Brunel University, Uxbridge, United Kingdom J.E. Cole, P.R. Hobson, A. Khan, P. Kyberd, C.K. Mackay, A. Morton, I.D. Reid, L. Teodorescu, S. Zahid Baylor University, Waco, USA K. Call, J. Dittmann, K. Hatakeyama, C. Madrid, B. McMaster, N. Pastika, C. Smith Catholic University of America, Washington, DC, USA R. Bartek, A. Dominguez, R. Uniyal The University of Alabama, Tuscaloosa, USA A. Buccilli, S.I. Cooper, C. Henderson, P. Rumerio, C. West Boston University, Boston, USA D. Arcaro, T. Bose, Z. Demiragli, D. Gastler, S. Girgis, D. Pinna, C. Richardson, J. Rohlf, D. Sperka, I. Suarez, L. Sulak, D. Zou Brown University, Providence, USA G. Benelli, B. Burkle, X. Coubez, D. Cutts, M. Hadley, J. Hakala, U. Heintz, J.M. Hogan66, K.H.M. Kwok, E. Laird, G. Landsberg, J. Lee, Z. Mao, M. Narain, S. Sagir67, R. Syarif, E. Usai, D. Yu University of California, Davis, Davis, USA R. Band, C. Brainerd, R. Breedon, M. Calderon De La Barca Sanchez, M. Chertok, J. Conway, R. Conway, P.T. Cox, R. Erbacher, C. Flores, G. Funk, F. Jensen, W. 
Ko, O. Kukral, R. Lander, M. Mulhearn, D. Pellett, J. Pilot, M. Shi, D. Stolp, D. Taylor, K. Tos, M. Tripathi, Z. Wang, F. Zhang University of California, Los Angeles, USA M. Bachtis, C. Bravo, R. Cousins, A. Dasgupta, A. Florent, J. Hauser, M. Ignatenko, N. Mccoll, S. Regnard, D. Saltzberg, C. Schnaible, V. Valuev University of California, Riverside, Riverside, USA K. Burt, R. Clare, J.W. Gary, S.M.A. Ghiasi Shirazi, G. Hanson, G. Karapostoli, E. Kennedy, O.R. Long, M. Olmedo Negrete, M.I. Paneva, W. Si, L. Wang, H. Wei, S. Wimpenny, B.R. Yates, Y. Zhang University of California, San Diego, La Jolla, USA J.G. Branson, P. Chang, S. Cittolin, M. Derdzinski, R. Gerosa, D. Gilbert, B. Hashemi, D. Klein, V. Krutelyov, J. Letts, M. Masciovecchio, S. May, S. Padhi, M. Pieri, V. Sharma, M. Tadel, F. Würthwein, A. Yagil, G. Zevi Della Porta University of California, Santa Barbara - Department of Physics, Santa Barbara, USA N. Amin, R. Bhandari, C. Campagnari, M. Citron, V. Dutta, M. Franco Sevilla, L. Gouskos, J. Incandela, B. Marsh, H. Mei, A. Ovcharova, H. Qu, J. Richman, U. Sarica, D. Stuart, S. Wang, J. Yoo California Institute of Technology, Pasadena, USA D. Anderson, A. Bornheim, J.M. Lawhorn, N. Lu, H.B. Newman, T.Q. Nguyen, J. Pata, M. Spiropulu, J.R. Vlimant, S. Xie, Z. Zhang, R.Y. Zhu Carnegie Mellon University, Pittsburgh, USA M.B. Andrews, T. Ferguson, T. Mudholkar, M. Paulini, M. Sun, I. Vorobiev, M. Weinberg University of Colorado Boulder, Boulder, USA J.P. Cumalat, W.T. Ford, A. Johnson, E. MacDonald, T. Mulholland, R. Patel, A. Perloff, K. Stenson, K.A. Ulmer, S.R. Wagner Cornell University, Ithaca, USA J. Alexander, J. Chaves, Y. Cheng, J. Chu, A. Datta, A. Frankenthal, K. Mcdermott, N. Mirman, J.R. Patterson, D. Quach, A. Rinkevicius68, A. Ryd, S.M. Tan, Z. Tao, J. Thom, P. Wittich, M. Zientek Fermi National Accelerator Laboratory, Batavia, USA S. Abdullin, M. Albrow, M. Alyari, G. Apollinari, A. Apresyan, A. Apyan, S. Banerjee, L.A.T. Bauerdick, A. Beretvas, J. Berryhill, P.C. Bhat, K. Burkett, J.N. Butler, A. Canepa, G.B. Cerati, H.W.K. Cheung, F. Chlebana, M. Cremonesi, J. Duarte, V.D. Elvira, J. Freeman, Z. Gecse, E. Gottschalk, L. Gray, D. Green, S. Grünendahl, O. Gutsche, AllisonReinsvold Hall, J. Hanlon, R.M. Harris, S. Hasegawa, R. Heller, J. Hirschauer, B. Jayatilaka, S. Jindariani, M. Johnson, U. Joshi, B. Klima, M.J. Kortelainen, B. Kreis, S. Lammel, J. Lewis, D. Lincoln, R. Lipton, M. Liu, T. Liu, J. Lykken, K. Maeshima, J.M. Marraffino, D. Mason, P. McBride, P. Merkel, S. Mrenna, S. Nahn, V. O’Dell, V. Papadimitriou, K. Pedro, C. Pena, G. Rakness, F. Ravera, L. Ristori, B. Schneider, E. Sexton-Kennedy, N. Smith, A. Soha, W.J. Spalding, L. Spiegel, S. Stoynev, J. Strait, N. Strobbe, L. Taylor, S. Tkaczyk, N.V. Tran, L. Uplegger, E.W. Vaandering, C. Vernieri, M. Verzocchi, R. Vidal, M. Wang, H.A. Weber University of Florida, Gainesville, USA D. Acosta, P. Avery, P. Bortignon, D. Bourilkov, A. Brinkerhoff, L. Cadamuro, A. Carnes, V. Cherepanov, D. Curry, F. Errico, R.D. Field, S.V. Gleyzer, B.M. Joshi, M. Kim, J. Konigsberg, A. Korytov, K.H. Lo, P. Ma, K. Matchev, N. Menendez, G. Mitselmakher, D. Rosenzweig, K. Shi, J. Wang, S. Wang, X. Zuo Florida International University, Miami, USA Y.R. Joshi Florida State University, Tallahassee, USA T. Adams, A. Askew, S. Hagopian, V. Hagopian, K.F. Johnson, R. Khurana, T. Kolberg, G. Martinez, T. Perry, H. Prosper, C. Schiber, R. Yohay, J. Zhang Florida Institute of Technology, Melbourne, USA M.M. Baarmand, V. Bhopatkar, S. 
Butalla, M. Hohlmann, D. Noonan, M. Rahmani, M. Saunders, F. Yumiceva University of Illinois at Chicago (UIC), Chicago, USA M.R. Adams, L. Apanasevich, D. Berry, R.R. Betts, R. Cavanaugh, X. Chen, S. Dittmer, O. Evdokimov, C.E. Gerber, D.A. Hangal, D.J. Hofman, K. Jung, C. Mills, T. Roy, M.B. Tonjes, N. Varelas, H. Wang, X. Wang, Z. Wu The University of Iowa, Iowa City, USA M. Alhusseini, B. Bilki51, W. Clarida, K. Dilsiz69, S. Durgut, R.P. Gandrajula, M. Haytmyradov, V. Khristenko, O.K. Köseyan, J.-P. Merlo, A. Mestvirishvili70, A. Moeller, J. Nachtman, H. Ogul71, Y. Onel, F. Ozok72, A. Penzo, C. Snyder, E. Tiras, J. Wetzel Johns Hopkins University, Baltimore, USA B. Blumenfeld, A. Cocoros, N. Eminizer, D. Fehling, L. Feng, A.V. Gritsan, W.T. Hung, P. Maksimovic, J. Roskes, M. Swartz, M. Xiao The University of Kansas, Lawrence, USA C. Baldenegro Barrera, P. Baringer, A. Bean, S. Boren, J. Bowen, A. Bylinkin, T. Isidori, S. Khalil, J. King, A. Kropivnitskaya, C. Lindsey, D. Majumder, W. Mcbrayer, N. Minafra, M. Murray, C. Rogan, C. Royon, S. Sanders, E. Schmitz, J.D. Tapia Takaki, Q. Wang, J. Williams Kansas State University, Manhattan, USA S. Duric, A. Ivanov, K. Kaadze, D. Kim, Y. Maravin, D.R. Mendis, T. Mitchell, A. Modak, A. Mohammadi Lawrence Livermore National Laboratory, Livermore, USA F. Rebassoo, D. Wright University of Maryland, College Park, USA A. Baden, O. Baron, A. Belloni, S.C. Eno, Y. Feng, N.J. Hadley, S. Jabeen, G.Y. Jeng, R.G. Kellogg, J. Kunkle, A.C. Mignerey, S. Nabili, F. Ricci-Tam, M. Seidel, Y.H. Shin, A. Skuja, S.C. Tonwar, K. Wong Massachusetts Institute of Technology, Cambridge, USA D. Abercrombie, B. Allen, A. Baty, R. Bi, S. Brandt, W. Busza, I.A. Cali, M. D’Alfonso, G. Gomez Ceballos, M. Goncharov, P. Harris, D. Hsu, M. Hu, M. Klute, D. Kovalskyi, Y.-J. Lee, P.D. Luckey, B. Maier, A.C. Marini, C. Mcginn, C. Mironov, S. Narayanan, X. Niu, C. Paus, D. Rankin, C. Roland, G. Roland, Z. Shi, G.S.F. Stephans, K. Sumorok, K. Tatar, D. Velicanu, J. Wang, T.W. Wang, B. Wyslouch University of Minnesota, Minneapolis, USA A.C. Benvenuti${}^{\textrm{\textdagger}}$, R.M. Chatterjee, A. Evans, S. Guts, P. Hansen, J. Hiltbrand, Sh. Jain, S. Kalafut, Y. Kubota, Z. Lesko, J. Mans, R. Rusack, M.A. Wadud University of Mississippi, Oxford, USA J.G. Acosta, S. Oliveros University of Nebraska-Lincoln, Lincoln, USA K. Bloom, D.R. Claes, C. Fangmeier, L. Finco, F. Golf, R. Gonzalez Suarez, R. Kamalieddin, I. Kravchenko, J.E. Siado, G.R. Snow, B. Stieger State University of New York at Buffalo, Buffalo, USA C. Harrington, I. Iashvili, A. Kharchilava, C. Mclean, D. Nguyen, A. Parker, S. Rappoccio, B. Roozbahani Northeastern University, Boston, USA G. Alverson, E. Barberis, C. Freer, Y. Haddad, A. Hortiangtham, G. Madigan, D.M. Morse, T. Orimoto, L. Skinnari, A. Tishelman-Charny, T. Wamorkar, B. Wang, A. Wisecarver, D. Wood Northwestern University, Evanston, USA S. Bhattacharya, J. Bueghly, T. Gunter, K.A. Hahn, N. Odell, M.H. Schmitt, K. Sung, M. Trovato, M. Velasco University of Notre Dame, Notre Dame, USA R. Bucci, N. Dev, R. Goldouzian, M. Hildreth, K. Hurtado Anampa, C. Jessop, D.J. Karmgard, K. Lannon, W. Li, N. Loukas, N. Marinelli, I. Mcalister, F. Meng, C. Mueller, Y. Musienko37, M. Planer, R. Ruchti, P. Siddireddy, G. Smith, S. Taroni, M. Wayne, A. Wightman, M. Wolf, A. Woodard The Ohio State University, Columbus, USA J. Alimena, B. Bylsma, L.S. Durkin, S. Flowers, B. Francis, C. Hill, W. Ji, A. Lefeld, T.Y. Ling, B.L. Winer Princeton University, Princeton, USA S. 
Cooperstein, G. Dezoort, P. Elmer, J. Hardenbrook, N. Haubrich, S. Higginbotham, A. Kalogeropoulos, S. Kwan, D. Lange, M.T. Lucchini, J. Luo, D. Marlow, K. Mei, I. Ojalvo, J. Olsen, C. Palmer, P. Piroué, J. Salfeld-Nebgen, D. Stickland, C. Tully, Z. Wang University of Puerto Rico, Mayaguez, USA S. Malik, S. Norberg Purdue University, West Lafayette, USA A. Barker, V.E. Barnes, S. Das, L. Gutay, M. Jones, A.W. Jung, A. Khatiwada, B. Mahakud, D.H. Miller, G. Negro, N. Neumeister, C.C. Peng, S. Piperov, H. Qiu, J.F. Schulte, J. Sun, F. Wang, R. Xiao, W. Xie Purdue University Northwest, Hammond, USA T. Cheng, J. Dolen, N. Parashar Rice University, Houston, USA K.M. Ecklund, S. Freed, F.J.M. Geurts, M. Kilpatrick, Arun Kumar, W. Li, B.P. Padley, R. Redjimi, J. Roberts, J. Rorie, W. Shi, A.G. Stahl Leiton, Z. Tu, A. Zhang University of Rochester, Rochester, USA A. Bodek, P. de Barbaro, R. Demina, Y.t. Duh, J.L. Dulemba, C. Fallon, M. Galanti, A. Garcia-Bellido, J. Han, O. Hindrichs, A. Khukhunaishvili, E. Ranken, P. Tan, R. Taus The Rockefeller University, New York, USA R. Ciesielski Rutgers, The State University of New Jersey, Piscataway, USA B. Chiarito, J.P. Chou, A. Gandrakota, Y. Gershtein, E. Halkiadakis, A. Hart, M. Heindl, E. Hughes, S. Kaplan, S. Kyriacou, I. Laflotte, A. Lath, R. Montalvo, K. Nash, M. Osherson, H. Saka, S. Salur, S. Schnetzer, D. Sheffield, S. Somalwar, R. Stone, S. Thomas, P. Thomassen University of Tennessee, Knoxville, USA H. Acharya, A.G. Delannoy, J. Heideman, G. Riley, S. Spanier Texas A&M University, College Station, USA O. Bouhali73, A. Celik, M. Dalchenko, M. De Mattia, A. Delgado, S. Dildick, R. Eusebi, J. Gilmore, T. Huang, T. Kamon74, S. Luo, D. Marley, R. Mueller, D. Overton, L. Perniè, D. Rathjens, A. Safonov Texas Tech University, Lubbock, USA N. Akchurin, J. Damgov, F. De Guio, S. Kunori, K. Lamichhane, S.W. Lee, T. Mengke, S. Muthumuni, T. Peltola, S. Undleeb, I. Volobouev, Z. Wang, A. Whitbeck Vanderbilt University, Nashville, USA S. Greene, A. Gurrola, R. Janjam, W. Johns, C. Maguire, A. Melo, H. Ni, K. Padeken, F. Romeo, P. Sheldon, S. Tuo, J. Velkovska, M. Verweij University of Virginia, Charlottesville, USA M.W. Arenton, P. Barria, B. Cox, G. Cummings, R. Hirosky, M. Joyce, A. Ledovskoy, C. Neu, B. Tannenwald, Y. Wang, E. Wolfe, F. Xia Wayne State University, Detroit, USA R. Harr, P.E. Karchin, N. Poudyal, J. Sturdy, P. Thapa, S. Zaleski University of Wisconsin - Madison, Madison, WI, USA J. Buchanan, C. Caillol, D. Carlsmith, S. Dasu, I. De Bruyn, L. Dodd, B. Gomber75, M. Herndon, A. Hervé, U. Hussain, P. Klabbers, A. Lanaro, A. Loeliger, K. Long, R. Loveless, J. Madhusudanan Sreekala, T. Ruggles, A. Savin, V. Sharma, W.H. Smith, D. Teague, S. Trembath-reichert, N. Woods †: Deceased 1: Also at Vienna University of Technology, Vienna, Austria 2: Also at IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France 3: Also at Universidade Estadual de Campinas, Campinas, Brazil 4: Also at Federal University of Rio Grande do Sul, Porto Alegre, Brazil 5: Also at UFMS, Nova Andradina, Brazil 6: Also at Universidade Federal de Pelotas, Pelotas, Brazil 7: Also at Université Libre de Bruxelles, Bruxelles, Belgium 8: Also at University of Chinese Academy of Sciences, Beijing, China 9: Also at Institute for Theoretical and Experimental Physics named by A.I. 
Alikhanov of NRC ‘Kurchatov Institute’, Moscow, Russia 10: Also at Joint Institute for Nuclear Research, Dubna, Russia 11: Also at Fayoum University, El-Fayoum, Egypt 12: Now at British University in Egypt, Cairo, Egypt 13: Also at Purdue University, West Lafayette, USA 14: Also at Université de Haute Alsace, Mulhouse, France 15: Also at Tbilisi State University, Tbilisi, Georgia 16: Also at Erzincan Binali Yildirim University, Erzincan, Turkey 17: Also at CERN, European Organization for Nuclear Research, Geneva, Switzerland 18: Also at RWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany 19: Also at University of Hamburg, Hamburg, Germany 20: Also at Brandenburg University of Technology, Cottbus, Germany 21: Also at Institute of Physics, University of Debrecen, Debrecen, Hungary 22: Also at Institute of Nuclear Research ATOMKI, Debrecen, Hungary 23: Also at MTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös Loránd University, Budapest, Hungary 24: Also at IIT Bhubaneswar, Bhubaneswar, India 25: Also at Institute of Physics, Bhubaneswar, India 26: Also at Shoolini University, Solan, India 27: Also at University of Visva-Bharati, Santiniketan, India 28: Also at Isfahan University of Technology, Isfahan, Iran 29: Also at Italian National Agency for New Technologies, Energy and Sustainable Economic Development, Bologna, Italy 30: Also at Centro Siciliano di Fisica Nucleare e di Struttura Della Materia, Catania, Italy 31: Also at Università degli Studi di Siena, Siena, Italy 32: Also at Scuola Normale e Sezione dell’INFN, Pisa, Italy 33: Also at Riga Technical University, Riga, Latvia 34: Also at Malaysian Nuclear Agency, MOSTI, Kajang, Malaysia 35: Also at Consejo Nacional de Ciencia y Tecnología, Mexico City, Mexico 36: Also at Warsaw University of Technology, Institute of Electronic Systems, Warsaw, Poland 37: Also at Institute for Nuclear Research, Moscow, Russia 38: Now at National Research Nuclear University ’Moscow Engineering Physics Institute’ (MEPhI), Moscow, Russia 39: Also at St. Petersburg State Polytechnical University, St. Petersburg, Russia 40: Also at University of Florida, Gainesville, USA 41: Also at Imperial College, London, United Kingdom 42: Also at P.N. Lebedev Physical Institute, Moscow, Russia 43: Also at Budker Institute of Nuclear Physics, Novosibirsk, Russia 44: Also at Faculty of Physics, University of Belgrade, Belgrade, Serbia 45: Also at INFN Sezione di Pavia a, Università di Pavia b, Pavia, Italy 46: Also at National and Kapodistrian University of Athens, Athens, Greece 47: Also at Universität Zürich, Zurich, Switzerland 48: Also at Stefan Meyer Institute for Subatomic Physics, Vienna, Austria 49: Also at Gaziosmanpasa University, Tokat, Turkey 50: Also at Şırnak University, Sirnak, Turkey 51: Also at Beykent University, Istanbul, Turkey 52: Also at Istanbul Aydin University, Application and Research Center for Advanced Studies (App. & Res. Cent.
for Advanced Studies), Istanbul, Turkey 53: Also at Mersin University, Mersin, Turkey 54: Also at Piri Reis University, Istanbul, Turkey 55: Also at Adiyaman University, Adiyaman, Turkey 56: Also at Ozyegin University, Istanbul, Turkey 57: Also at Izmir Institute of Technology, Izmir, Turkey 58: Also at Marmara University, Istanbul, Turkey 59: Also at Kafkas University, Kars, Turkey 60: Also at Istanbul University, Istanbul, Turkey 61: Also at Istanbul Bilgi University, Istanbul, Turkey 62: Also at Hacettepe University, Ankara, Turkey 63: Also at School of Physics and Astronomy, University of Southampton, Southampton, United Kingdom 64: Also at IPPP Durham University, Durham, United Kingdom 65: Also at Monash University, Faculty of Science, Clayton, Australia 66: Also at Bethel University, St. Paul, USA 67: Also at Karamanoğlu Mehmetbey University, Karaman, Turkey 68: Also at Vilnius University, Vilnius, Lithuania 69: Also at Bingol University, Bingol, Turkey 70: Also at Georgian Technical University, Tbilisi, Georgia 71: Also at Sinop University, Sinop, Turkey 72: Also at Mimar Sinan University, Istanbul, Turkey 73: Also at Texas A&M University at Qatar, Doha, Qatar 74: Also at Kyungpook National University, Daegu, Korea 75: Also at University of Hyderabad, Hyderabad, India
2024-09-04T02:54:57.644390
2020-03-05T19:30:54
2003.02881
{ "authors": "Dirk Keppens", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26066", "submitter": "Dirk Keppens", "url": "https://arxiv.org/abs/2003.02881" }
arxiv-papers
# On the history of ring geometry (with a thematic overview of literature)

Dirk Keppens Faculty of Engineering Technology, KU Leuven Gebr. Desmetstraat 1 B-9000 Ghent BELGIUM <EMAIL_ADDRESS>

###### Abstract

In this survey paper we give a historical and at the same time thematic overview of the development of “ring geometry” from its origin to the current state of the art. A comprehensive, up-to-date list of the literature is added, with articles that treat ring geometry within the scope of incidence geometry.

Keywords: Ring geometry, projective ring plane, Hjelmslev geometry, Klingenberg geometry, Barbilian plane, neighbor relation, bibliography AMS Classification: Primary 51C05; Secondary 00A15, 01A60, 01A61, 51-03, 51A05, 51B05, 51E26, 51H10, 51F15

## 1 Introduction

The current version of the Mathematics Subject Classification (MSC), a classification scheme used by the two major mathematical reviewing databases, Mathematical Reviews (MathSciNet) and Zentralblatt MATH (zbMATH), provides code 51C05 for all research papers dealing with Ring Geometry. This rather small category contains articles about geometries that are not only provided with an incidence relation but also with a neighbor relation (or its negation, a non-neighbor or remoteness relation). Included are all geometries obtained from rings that are not division rings. Ring geometry, i.e. the theory of geometries equipped with a neighbor relation in general and of geometries over rings in particular, is a rather young discipline. Its origin lies in the beginning of the 20th century and its importance has grown steadily. For that reason it has also been given a full-fledged place as a chapter in the “Handbook of Incidence Geometry” [A24]. In the past decades several mathematicians have contributed to ring geometry. Newly discovered connections with coding theory, with the theory of Tits buildings, and with quantum information theory have opened new horizons. In this survey paper we present a historical and at the same time thematic overview with attention to many aspects. An out-of-date list of articles on the subject, up to 1989, was composed by Törner and Veldkamp in [A23] as an addition to and completion of two even older lists, [A02] and [A12], written by Artmann, Drake et al. and by Jungnickel, respectively. Up to now no such list of literature has been available for the period after 1990, except for a survey paper by the author [A13], dealing exclusively with plane projective geometries over finite rings.
In the present work we fill that gap and add a new, updated list of the existing literature, ordered thematically (and containing the relevant material from the preceding lists). Articles on algebraic geometry and on differential geometry over rings are not included. We also do not aim for completeness where metric aspects are concerned, nor for geometries on modular lattices or geometric algebra over rings. We think that the bibliography might be useful for researchers who want to attack the future challenges of ring geometry.

## 2 The first traces of ring geometry: dual numbers and Johannes Hjelmslev

The first traces of ring geometry date back to the beginning of the twentieth century. The Danish geometer Johannes Trolle Hjelmslev (1873–1950), who was born Johannes Petersen but changed his name in 1904, graduated in mathematics from the University of Copenhagen and received his PhD degree in 1897. Hjelmslev can be viewed as one of the early founders of ring geometry. In a series of four lectures, held at the University of Hamburg in July 1922 and published one year later in the Abhandlungen aus dem Mathematischen Seminar [A10], he presented an axiomatic framework for geometry that better reflected the properties observed in the real world. The basic observation made by Hjelmslev was that, if one draws lines “close” to each other (meaning that the acute angle they define is very small), then it is hard to identify the intersection point, and it looks as if the lines have a little segment in common. Dually, if two points are close to each other, they belong to a line segment that can be part of several joining lines. Hjelmslev called this Die natürliche Geometrie, the “natural geometry”. In fact, Hjelmslev had already put forward this idea some years earlier, in 1916, in an article [A09] on what he called Die Geometrie der Wirklichkeit, the “geometry of reality”. In order to obtain a model for his geometry, Hjelmslev made use of the ring of dual numbers of the form $a+b\varepsilon$ with $a$ and $b$ both real and $\varepsilon^{2}=0$. (In the plane over this ring the two distinct lines $y=0$ and $y=\varepsilon x$, for instance, have the whole segment of points $(b\varepsilon,0)$, $b\in\mathbb{R}$, in common, which is exactly the behavior described above.) Dual numbers were already well known before Hjelmslev. William Clifford defined them for the first time in 1873 [A06] and they were used as a convenient tool in mechanics by Aleksander Kotelnikov [A18] and by Eduard Study in his famous work “Geometrie der Dynamen” [A22]. According to Benz [A05] the first traces of ring geometry can already be observed in Study’s work and in that of some of his contemporaries such as Josef Grünwald, Pilo Predella and Corrado Segre. In their papers dual numbers are treated from a geometrical viewpoint, by connecting the geometry of oriented lines (spears) in real Euclidean space with a spherical geometry over the ring of dual numbers [A08, A19, A21].

## 3 The pioneers of ring geometry: Barbilian and Klingenberg

Twenty years after Hjelmslev, Dan Barbilian (1895–1961), a mathematician at the University of Bucharest and also one of the greatest Romanian poets (under the pseudonym Ion Barbu), took up the thread again. In two papers [A03, A04] (extended versions of a lecture held in Baden-Baden in 1940) he gave the first axiomatic foundation for plane projective ring geometry. He started by investigating the conditions which must be imposed upon an arbitrary associative ring in order that the corresponding geometry may have “useful” properties. It turns out that the rings must have a unit element and that all left singular elements must be two-sided singular or, equivalently, that $ab=1$ implies $ba=1$.
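To see that this condition is a genuine restriction, it may help to recall a standard counterexample, which we add here for illustration (it is not taken from Barbilian’s papers). In the endomorphism ring of a real vector space $V$ with countable basis $e_{1},e_{2},\dots$, writing $ab$ for the composition $a\circ b$, the two shift operators

$$a(e_{1})=0,\quad a(e_{i})=e_{i-1}\ (i\geq 2),\qquad b(e_{i})=e_{i+1}$$

satisfy

$$ab=\mathrm{id}_{V}=1\qquad\text{but}\qquad ba\neq 1,\quad\text{since }(ba)(e_{1})=b(0)=0.$$

In a finite ring this phenomenon cannot occur: if $ab=1$, then left multiplication by $a$ is surjective, hence bijective, and $a(ba-1)=0$ forces $ba=1$. This is consistent with the inclusion of all finite rings noted just below.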
Barbilian called these rings “Zweiseitig singuläre Ringe” or $Z$-rings. Today such rings are also known as Dedekind-finite rings. They include, of course, all commutative rings, but also all finite rings (even non-commutative ones) and many other classes of rings (e.g. matrix rings over a field). Starting with $C$-rings (“Kategorische Ringe”), $Z$-rings satisfying an additional property, Barbilian defined a kind of projective plane. Conversely, he formulated a set of axioms for a plane geometry with incidence and non-neighborship (“klare Lage”) and proved that this leads to a $C$-ring. Barbilian’s work had some shortcomings, but it was nevertheless of great importance for the development of geometry over arbitrary rings, as we will discuss further on (see also section 11). Another towering figure among the ring geometers was the German mathematician Wilhelm Klingenberg (1924–2010). He is best known for his work on differential geometry, but in a series of papers [A14, A15, A16, A17], published in the mid-fifties, he also laid the foundation for affine, metric (Euclidean) and projective geometries with a neighbor relation (“mit Nachbarelementen”). His central idea is the existence of a natural map from the geometry under consideration onto an “underlying” ordinary geometry, a consequence of the assumption that the neighbor relation is transitive. Therefore Klingenberg called such geometries “Geometrien mit Homomorphismus”. He not only defined them axiomatically, but also constructed explicit examples using rings. The assumption of a transitive neighbor relation is intertwined with the fact that the rings must be local. A not necessarily commutative local ring $R$ is a ring with a unique maximal left (or, equivalently, a unique maximal right) ideal $R_{0}$. The quotient ring $R/R_{0}$ is a (skew)field, coordinatizing an ordinary geometry which is the natural epimorphic image of the ring geometry over $R$. The work of Klingenberg was the source of two main streams in ring geometry in the decades that followed: Klingenberg geometry and Hjelmslev geometry. Geometries with a transitive neighbor relation, or equivalently with a canonical map onto an ordinary geometry, are now called Klingenberg geometries. Their epimorphic image is obtained by considering the equivalence classes of neighboring elements as “thick” elements. Non-neighboring elements behave like distinct elements in the epimorphic image (e.g. two non-neighboring points are incident with exactly one line in a projective Klingenberg plane, since two distinct points in its epimorphic projective plane are incident with a unique line). No further assumptions are made about the neighboring elements. Klingenberg geometries comprise the smaller, but more intensively studied, class of Hjelmslev geometries, for which additional axioms must hold when it comes to neighboring elements (e.g. two neighboring points can be connected by at least two distinct lines in a projective Hjelmslev plane). This idea goes back to the natural geometry of Hjelmslev (see section 2). In the case of a Klingenberg plane over a local ring $R$, the Hjelmslev condition means that the ring must be a two-sided chain ring (a ring is a right chain ring if for any $a$ and $b$ in $R$ either $a\in bR$ or $b\in aR$, and similarly for a left chain ring) with the additional property that its maximal ideal contains only zero divisors. Such rings are called PH-rings (projective Hjelmslev rings or $H$-rings). Finite chain rings (equivalently, finite local principal ideal rings) are always $H$-rings.
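The smallest nontrivial example of this situation is the projective Hjelmslev plane over the finite chain ring $\mathbb{Z}/4\mathbb{Z}$, whose maximal ideal $\{0,2\}$ indeed contains only zero divisors. The short brute-force enumeration below is our own sketch (all identifiers in it are ours): points and lines are represented by unimodular triples up to unit scalars, and the script confirms that the plane has $28$ points falling into $7$ neighbor classes of $4$ points each, and that two distinct neighboring points are joined by more than one line.

```python
from itertools import product

MOD = 4                 # the finite chain ring Z/4Z; maximal ideal {0, 2}
UNITS = (1, 3)          # the odd residues are exactly the units

def is_unimodular(v):
    """A triple is unimodular iff some coordinate is a unit (odd)."""
    return any(x % 2 == 1 for x in v)

def normalize(v):
    """Canonical representative of the class of v under unit scaling."""
    return min(tuple(u * x % MOD for x in v) for u in UNITS)

# Points of the projective Hjelmslev plane over Z/4Z; by duality the
# lines are represented by the same triples.
points = sorted({normalize(v) for v in product(range(MOD), repeat=3)
                 if is_unimodular(v)})
lines = points

def incident(point, line):
    """A point lies on a line iff the standard bilinear form vanishes."""
    return sum(x * y for x, y in zip(point, line)) % MOD == 0

def neighbor(p, q):
    """Neighboring points reduce to the same point of PG(2,2) modulo 2."""
    return [x % 2 for x in p] == [x % 2 for x in q]

print(len(points))      # 28 points: 7 neighbor classes of 4 points each
p, q = (1, 0, 0), (1, 2, 0)
joins = [l for l in lines if incident(p, l) and incident(q, l)]
print(neighbor(p, q), len(joins))   # True 2: two lines join these neighbors
```

Replacing $\mathbb{Z}/4\mathbb{Z}$ by $\mathbb{Z}/p^{n}\mathbb{Z}$ only changes the modulus and the unit test; in an ordinary projective plane, by contrast, the last count would always be $1$.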
Klingenberg himself considered both Klingenberg planes (over local rings) and Hjelmslev planes (over Hjelmslev rings) in [A16]. To be complete, we must also mention two isolated and rather unnoticed contributions to early geometry over rings. J.W. Archbold [A01] wrote on projective geometry over group algebras $K[G]$ and worked out a small finite example taking $K=$ GF(2) and $G$ a group of order 2, while Rimhak Ree in [A20] considered projective spaces as the modular lattices $L(R,M)$ of all submodules of a module $M$ over a ring $R$, in particular over a full matrix ring.

## 4 The Belgian contribution to early ring geometry: projective geometry over rings of matrices

The ideas of Barbilian concerning projective geometries over rings instead of fields found some followers in the sixties and early seventies. Among them we find many Belgian contributors who studied projective geometries over rings of matrices. Since such rings are not local, the geometries are neither Klingenberg nor Hjelmslev geometries. Julien Depunt studied in [B01] the projective line and in [B03, B04] projective planes over the ring of ternions. This ring contains the elements $a_{1}\varepsilon_{1}+a_{2}\varepsilon_{2}+a_{3}\varepsilon_{3}$ with $a_{i}\in\mathbb{C}$ and $\varepsilon_{i}^{2}=\varepsilon_{i}$ ($i=1,2$), $\varepsilon_{1}\varepsilon_{3}=\varepsilon_{3}=\varepsilon_{3}\varepsilon_{2}$ and all other products equal to $0$. The ring of ternions is isomorphic to the ring of upper triangular matrices with elements in the complex field. The main result of Depunt concerns the embedding of the plane over the ternions into the 5–dimensional complex projective space. Inspired by this work, Cléry Vanhelleputte was able to generalize the results to planes over the full matrix ring of $2\times 2$–matrices with elements in an arbitrary commutative field. His doctoral thesis was published in [B14] and fitted into the research program under Julien Bilo (1914–2006), who was head of the Geometry Department at the University of Ghent at that time. A few years later, in 1969, Joseph Thas wrote his PhD thesis under the supervision of Bilo about the projective line over the ring of $3\times 3$–matrices with elements in an algebraically closed field [B11], see also [B10]. Soon afterwards Thas published [B12, B13] on projective ring geometry over matrix rings, in particular over finite rings $M_{n}(GF(q))$ of $n\times n$–matrices with entries in a Galois field of order $q$. Especially [B12] was important, as it contains some concepts (ovals, arcs, caps, etc.) and combinatorial theorems which extend known results from classical Galois geometry over finite fields and which appear for the first time for ring geometries. Later Thas grew into an authority in finite geometry, but his interest shifted from finite ring geometry to Galois geometry and other topics, in particular finite generalized quadrangles. Thas was a colleague of Willem Mielants for many years. It was he who guided Hendrik Van Maldeghem (see section 14) and the author towards ring geometry as PhD supervisor for both of us. Paul De Winne was the last student of Bilo studying ring geometry, see [B05]. His doctoral thesis was published in [B06]. Around the same time, Xavier Hubaut, another Belgian mathematician, from the Université Libre de Bruxelles, introduced the projective line over a general associative algebra and investigated the structure of its group of projectivities in [B08, B09].
His colleague Franz Bingen was able to generalize the results to $n$–dimensional projective spaces over semi–primary rings in [B02]. Note that the geometries over matrix rings studied by all these authors must not be confused with the “geometries of matrices” initiated by the Chinese mathematician Loo–Keng Hua in the mid–forties [B07]. Hua’s geometries of matrices are not ring geometries in the narrow sense, despite the suggestive name (although a connection with projective lines over rings is possible, see section 13).

## 5 The foundations of plane affine ring geometry: from Benz to Leissner and beyond

Barbilian, as well as all members of the Belgian School, dealt exclusively with projective ring geometry. The first traces of affine geometries over rings can be found in a paper from 1942 by Cornelius Everett, concerning affine planes over rings without zero divisors [C16]. Walter Benz (1931–2017), a German geometer renowned especially for his work on circle geometries, considered in [C04, C05] affine planes over a special kind of commutative ring and in [C06] plane affine (and metric–affine) geometries over arbitrary rings with unit. A large part of this interesting paper discusses the relation between algebraic properties of the ring and geometric properties of the plane. Peter Ashley Lawrence obtained a PhD under the supervision of Benz. His thesis “Affine mappings in the geometries of algebras” considers ring geometries associated with modules and algebras over a commutative ring and was published in [C21]. Two other publications [C02, C03], by Hans–Joachim Arnold, can be classified within this section. They contain a simultaneous generalization of affine geometry over rings and generalized affine spaces in the sense of Emanuel Sperner [C41]. Arnold starts with the axiomatic definition of an affine–line geometry which can be coordinatized by a vectorial groupoid, and he adds some extra axioms to turn it into a module over a unitary ring. Both [C18] and [C34] also fit into this approach. Affine planes over local rings, today also known as desarguesian affine Klingenberg planes, appear as particular examples in the papers of Benz, but this was not for the first time. Klingenberg had already considered them alongside projective ring planes and in the more general context of “Affine Ebenen mit Nachbarelementen” in the articles [A14, A15, A16, A17]. A considerable part of the research concerning affine ring geometry focusses on affine Hjelmslev planes (AH–planes). An important paper [C28] in that respect was written by Heinz Lüneburg (1935–2009) in 1962. It contains the main results of his doctoral thesis, entitled “Affine Hjelmslev–Ebenen mit transitiver Translationsgruppe”, dealing with translation AH–planes as generalizations of ordinary translation planes. Werner Seier investigated desarguesian and translation AH–planes in more detail in [C36, C37, C38, C39, C40]. The central theme in his work is a characterization of desarguesian AH–planes by an affine variant of Desargues’ theorem and the transitivity of the set of translations in the plane. Some other papers on affine Hjelmslev planes are postponed to sections 7 and 8, where they will be discussed in connection with projective Hjelmslev planes. Without minimizing the importance of the papers cited above, we can state that the real breakthrough of affine ring geometry in full generality had not yet been achieved at this point. For that, one has to wait for Werner Leissner, a student of Benz.
Leissner wrote two papers in the mid–seventies on what he called “affine Barbilian planes” (today the name “Leissner planes” would be more appropriate). In [C22] such planes are defined axiomatically as affine structures in which two parallelodromy axioms hold (suitable substitutes for the affine Desargues theorem), and they are coordinatized by an arbitrary $Z$–ring $R$ together with a special subset $B$ of $R\times R$ (a Barbilian domain). In [C23] the converse is proved: any affine ring plane over a $Z$–ring is an affine Barbilian plane. Leissner considered even more general affine Barbilian structures in [C24]. The papers of Leissner were a source of inspiration for further generalizations by several people. Victoria Groze and Angela Vasiu considered affine structures over arbitrary rings in [C17], Francisc Radó defined affine Barbilian structures in [C33] by weakening Leissner’s axioms, Armentrout et al. investigated in [C01] generalized affine planes which can be coordinatized by near–rings, and Pickert studied tactical configurations over quasirings in [C32]. Other contributors to the theory of plane affine geometries with neighbor relation and parallelism were Gernot Dorn [C13], Kvetoslav Burian [C08, C09, C10], Frantisek Machala [C29, C30, C31], Stefan Schmidt and Ralph Steinitz [C35], Franco Eugeni [C14, C15] and Angela Vasiu [C42, C43, C44]. The Barbilian domains defined by Leissner were also studied in their own right. Several authors have contributed to this, see [C07, C20, C25, C26, C27]. For a recent and fairly complete overview of affine planes over finite rings we refer to the survey paper [C19] by the author. We also mention here two papers [C11, C12] by Basri Çelik on finite hyperbolic Klingenberg planes, as they are closely related to finite affine Klingenberg planes.

## 6 Metric geometry over rings and the school of Bachmann

The goal of this section is just to give a glimpse of metric ring geometry. We do not aim for completeness here, because metric aspects are only indirectly related to incidence geometry. The literature about this subject is extensive and we have selected only a few representative papers, in which many more references can be found. The basis of “modern” metric geometry is provided by the work of several German mathematicians, including Bachmann, Lingenberg and Schröder. The pivotal figure is Friedrich Bachmann (1909–1982). In the second edition of his famous book “Aufbau der Geometrie aus dem Spiegelungsbegriff” [D01] he considers plane metric geometries in which points, lines, incidence and orthogonality are defined by means of a group of reflections. A group $G$ with a subset $S$ of involutory elements which are invariant under inner automorphisms of $G$ and which generate $G$, determines a group plane $E=E(G,S)$ as follows: the elements of $S$ are the lines of $E$ and the involutory elements of $S^{2}$ are the points. Two lines that commute are perpendicular. A point and a line that commute are incident. If the group plane $E$ satisfies the conditions (1) a point and a line determine a unique perpendicular, and (2) the product of three lines that are either concurrent or have a common perpendicular is again a line, then the pair $(G,S)$ is called a Hjelmslev group and the associated group plane $E$ is called a metric (non–elliptic) Hjelmslev plane. If the uniqueness of the line incident with two distinct points is not required, then metric Hjelmslev planes, stripped of their metric structure, reduce to incidence Hjelmslev planes.
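The prototype of this construction (a classical example, recalled here for concreteness) is the real euclidean plane: take for $G$ the group of motions generated by the set $S$ of all line reflections $\sigma_{\ell}$. Then
$$S=\{\sigma_{\ell}\mid \ell\ \text{a line}\},\qquad \text{points of }E=\{\sigma_{a}\sigma_{b}\mid a\perp b\}=\{\text{half–turns }\sigma_{P}\},$$
$$\sigma_{a}\sigma_{b}=\sigma_{b}\sigma_{a}\ (a\neq b)\iff a\perp b,\qquad \sigma_{P}\sigma_{\ell}=\sigma_{\ell}\sigma_{P}\iff P\in\ell.$$
The line reflections are involutory and closed under conjugation, the involutory elements of $S^{2}$ are exactly the half–turns about points, and conditions (1) and (2) hold by the classical dropped–perpendicular and three–reflections theorems, so the associated group plane is the euclidean plane itself.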
In section 2 we have already discussed the role of Hjelmslev as founder of (incidence) ring geometry. In some later work, the “Allgemeine Kongruenzlehre” [D06], Hjelmslev used special transformations (reflections) to define orthogonality. Hence, the rudiments of metric ring geometry also go back to Hjelmslev and were worked out in more detail by Bachmann. There is also a minor contribution by Klingenberg, who treats metric aspects in ring geometries in [A15]. Rolf Lingenberg (1929–1978), a student of Bachmann, continued this work in [D12]. The main result is an algebraic characterization of classical metric group planes $E(G,S)$ with an additional axiom, as planes associated with a metric vector space $(V,q)$ with $q$ a quadratic form. Eberhard Schröder in [D18, D19] considers metric planes of various kinds, starting from a pappian affine plane over a field and using two–dimensional algebras for the introduction of the metric notions of angle, distance and orthogonality. His work is strongly related to the circle geometries studied intensively by Benz (see section 13). A good overview of classical metric geometry (but not including ring geometries) can be found in Chapter 17, written by Schröder, of the Handbook of Incidence Geometry [D20]. There are numerous contributions by Bachmann and his school to the theory of Hjelmslev groups, connecting geometric and algebraic properties of groups and planes. From Bachmann himself we mention the important additions [D02, D03, D04] to his book [D01]. In the seventies and eighties his successors, most of them from the University of Kiel (Germany), extended the theory. Finite Hjelmslev groups were characterized by Rolf Stölting in his PhD thesis “Endliche Hjelmslev-Gruppen und Erweiterungen von Hjelmslev-Gruppen”, also published in [D21]. In [D22] Hjelmslev groups are constructed from a module $M$ over a commutative ring $R$ endowed with a bilinear form. Edzard Salow introduced singular Hjelmslev groups (in which the product of three points is always a point) in his doctoral thesis “Beiträge zur Theorie der Hjelmslev-Gruppen: Homomorphismen und Singuläre Hjelmslev-Gruppen”, published in [D15]. The main result is the construction of a coordinate ring $R$ for the group plane of a singular Hjelmslev group, proving that these metric planes are indeed ring geometries. The process of algebraization of metric Hjelmslev planes is also investigated in [D16]. It is proved there that a Hjelmslev group with some additional axioms can be embedded in the orthogonal group of a metric module $(R^{3},f)$, with $R$ a commutative ring with unit in which $2$ and every non zero–divisor are invertible, and with $f$ a symmetric bilinear form on the free module $R^{3}$. In [D17] Salow studies another class of metric ring planes using a commutative algebra over a ring and the concept of an angle. In an early paper of Benz [C04] the metric notion of angle also played an important role, and in [C06] he devotes a paragraph to metric geometry, using an elliptic form over an arbitrary commutative ring. Similar work can be found in [D13, D14], in which Wolfgang Nolte proves that a class of metric planes $E(G,S)$ with additional axioms can be embedded into a projective Hjelmslev plane over a local ring and that $G$ is isomorphic to a subgroup of an orthogonal group. Other results in this direction were obtained by Frieder Knüppel in [D07, D08, D09, D10, D27], Gerald Fischbach [D05], Michael Kunze [D11] and Rolf and Horst Struve [D23, D24, D25, D26].
The influence of Bachmann is also clear from the large number of doctoral theses produced in Germany on Hjelmslev groups and metric geometry, e.g. R. Schnabel (1974), M. Kunze (1975), H. Struve (1979), R. Struve (1979), M. Gegenwart (1987), W. Vonck (1988) and A. Bach (1998).

## 7 The florescence of Hjelmslev geometry in the era of Drake, Artmann and Törner

The first period of florescence of ring geometry (especially Hjelmslev geometry) regarded as incidence geometry started in the late sixties and reached its culmination point in the seventies. This is reflected in a large number of publications. Two mathematicians who were very productive in this area and left their mark were David Allyn Drake (1937–) from the University of Florida, Gainesville (USA) and Benno Artmann (1933–2010) from the University of Giessen (Germany). Drake obtained his PhD in 1967 with a thesis entitled “Neighborhood collineations and affine projective extensions of Hjelmslev planes” under the supervision of Erwin Kleinfeld. Kleinfeld himself, an authority in algebra with many publications on non–associative alternative rings, published only one, though interesting, paper [E25] about ring geometry. It was the first publication concerning finite Hjelmslev planes. Among other things he introduced a two–parameter set $(s,t)$ of non–zero integers such that for each flag $(P,\ell)$ in a finite projective Hjelmslev plane, there are exactly $t$ points on $\ell$ neighboring $P$ and exactly $s$ points on $\ell$ not neighboring $P$. It was proved that $s\leq t^{2}$ or $t=1$. If $s=t^{2}$, the plane is called uniform. In that case all point neighborhoods have the structure of ordinary affine planes. Robert Craig proved in [E11] that any finite projective plane can be extended to a uniform projective Hjelmslev plane. The notion of uniformity (and its generalization to $n$–uniformity) has played a crucial role in the work of Drake. It was also related to another issue: the extension of an affine Hjelmslev plane to a projective Hjelmslev plane. An ordinary affine plane can always be extended to a projective plane, but for Hjelmslev planes the situation is much more complicated. Drake wrote several papers about this problem in the period from 1968 to 1975. In [E14] he proves that any uniform affine Hjelmslev plane has at least one (uniform) projective extension and he gives an example of a desarguesian uniform affine Hjelmslev plane with a non–desarguesian projective extension. In [E15] uniformity is generalized to $n$–uniformity inductively (a PH–plane is $n$–uniform if the point neighborhoods are $(n-1)$–uniform AH–planes). Strongly $n$–uniform planes are characterized by a local property, which leads to the theorem: an $n$–uniform PH–plane is strongly $n$–uniform if and only if its dual is $n$–uniform. Drake also proved that every finite desarguesian PH–plane is strongly $n$–uniform. A further study of $n$–uniform Hjelmslev planes (projective and affine) was made in [E16, E17, E18, E19, E20, E21, E22], where even more general affine geometries with neighbor relation appear. Drake was also able to prove that there exist affine Hjelmslev planes which cannot be extended to projective Hjelmslev planes. Artmann was a contemporary of Drake and wrote his doctoral thesis “Automorphismen und Koordinaten bei ebenen Verbänden” in 1965 under the supervision of Günter Pickert. In his early work on Hjelmslev geometry we can observe a strong relation with the theory of modular lattices.
In [E01] he gives a sufficient condition for a modular lattice to define a projective Hjelmslev plane and in [E03] he proves that any uniform PH–plane can be derived from a modular lattice. Like Drake, Artmann also studies refinements of the neighbor relation (“verfeinerten Nachbarschaftsrelationen”) and the affine–projective extension question. In [E04] he proves that a uniform affine Hjelmslev plane can be extended to at least two non–isomorphic projective Hjelmslev planes. A new concept introduced by him in [E02] is that of a projective Hjelmslev plane of level $n$ (“$n$–ter Stufe”), based on the refinement of the neighbor relation. This definition was extended by Drake to the affine case in [E20]. Artmann proves that desarguesian PH–planes over a Hjelmslev ring $R$ are of level $n$ if and only if the maximal ideal of $R$ is nilpotent of index $n$, see [E05, E06]. Another theorem proved by Artmann [E07] states that for any projective plane $\mathcal{P}$ and any integer $n>0$ there exists a PH–plane of level $n$ with $\mathcal{P}$ as epimorphic image. Moreover, given a sequence of PH–planes $\ldots\rightarrow H_{i}\rightarrow H_{i-1}\rightarrow\ldots\rightarrow H_{1}=\mathcal{P}$ with $H_{i}$ of level $i$, the inverse limit is a projective plane. Arno Cronheim constructed in [E12], in a purely algebraic way using formal power series over a cartesian group, a chain of Hjelmslev planes whose inverse limit is a projective plane. Cronheim also obtained a complete classification of all finite uniform desarguesian projective Hjelmslev planes in [E13]. They are either planes over a ring of twisted dual numbers over GF($q$) (a non–commutative generalization of the classical dual numbers) or over a truncated Witt ring $W_{2}(q)$ of length 2. Both Drake and Artmann had a great influence on the mathematical research in the domain of Hjelmslev geometry (even though Artmann’s interest shifted to other subjects after a few years). One of Drake’s students was Phyrne Bacon. She wrote both her Master’s thesis “On Hjelmslev planes with small invariants” and her PhD thesis “Coordinatized Hjelmslev planes” on Hjelmslev geometry, resulting in two papers [E08] and [E09]. She proved that a finite Hjelmslev plane is strongly $n$–uniform if and only if it is of level $n$, which unifies the two notions introduced by Drake and Artmann respectively. Later, Bacon’s attention shifted to the more general Klingenberg geometries (see section 9). Artmann was the supervisor of Manfred Dugas and Günter Törner, who both made important contributions to Hjelmslev geometry. The PhD thesis of Dugas, “Charakterisierungen endlicher desarguescher uniformer Hjelmslev-Ebenen”, contains many new ideas, including a coordinatization method (see section 9). In [E23] Dugas proves that a finite translation AH–plane can be extended to a PH–plane which is either a desarguesian PH–plane or an ordinary projective translation plane, while in [E24] he gives a necessary and sufficient condition for a projective Hjelmslev plane to be derivable from a lattice. In particular the PH–planes of level $n$ are always lattice–derivable. Törner wrote his Master’s thesis on “Hjelmslev–Ringe und die Geometrie der Nachbarschaftsbereiche in der zugehörigen Ebenen” and obtained his PhD with “Eine Klassifizierung von Hjelmslev-Ringen und Hjelmslev-Ebenen” in the same year as Dugas, under the supervision of Artmann and Pickert.
His research in the domain of ring geometry focusses on two main themes: the structure of (finite) Hjelmslev planes and the ideal structure of chain rings. Among his publications we mention here [E36], which contains the main results from his thesis: a classification of PH–planes based on congruence relations. He proves that the set of all congruence relations of a finite PH–plane is linearly ordered under inclusion, and consequently the canonical epimorphism onto the associated projective plane admits an essentially unique factorization into indecomposable epimorphisms. The plane $\mathcal{H}$ is of “type $n$” or “height $n$” if the canonical epimorphism $\varphi$ from $\mathcal{H}$ onto the projective plane $\overline{\mathcal{H}}$ has a maximal factorization $\mathcal{H}=\mathcal{H}_{n}\rightarrow\mathcal{H}_{n-1}\rightarrow\ldots\rightarrow\mathcal{H}_{1}=\overline{\mathcal{H}}$. In [E40, E41] Törner investigates the equivalence of finite $n$–uniform planes and planes of level $n$, as defined by Drake and Artmann, with planes of type $n$. In [E39] he proves that $n$–uniform projective Hjelmslev planes are strongly $n$–uniform. Some of the results were later extended to the infinite case in [E43]. In [E40] much attention is also paid to affine Hjelmslev planes, in particular translation AH–planes over near–rings. In Törner’s work homomorphisms play an important role, as can also be seen from [E37, E38]. In the desarguesian case (Hjelmslev planes over a chain ring) the structure of the plane is intrinsically connected with the ideal structure of the ring. The structure of chain rings and valuation rings was investigated by Törner, partly in collaboration with Hans–Heinrich Brungs. One of their papers [E10] concerns the embedding of right chain rings into chain rings (related to the problem of embedding desarguesian affine Hjelmslev planes into projective ones), see also [E32, E33, E35, E42]. With a postdoctoral scholarship Artmann stayed for a short time at McMaster University in Ontario (Canada). There he inspired J.W. (Mike) Lorimer, who would later become one of the leading figures in topological Hjelmslev geometry (see section 10). In [E31] Lorimer and Lane study desarguesian Hjelmslev planes. They prove that an affine Hjelmslev plane is desarguesian if and only if it can be coordinatized by an AH–ring and that not every desarguesian AH–plane can be extended to a desarguesian PH–plane. Morphisms between affine Hjelmslev planes are the main subject in [E26, E27, E30], while [E28, E29] deal with the structure of Hjelmslev rings.

## 8 The continuation of the Hjelmslev epoch under Drake, Jungnickel and Sane

In his publications on $n$–uniform planes we can already observe that Drake had a particular interest in finite Hjelmslev planes. This continued as his attention turned more and more to the problem of the existence and non–existence of finite Hjelmslev planes with given parameters. In a series of papers [F05, F06, F07, F08, F13, F16, F20], some of them with co–authors, he attacked this problem, and he linked finite PH–planes to nets in [F10, F19]. Meanwhile, finite Klingenberg planes also came to attention [F09]. Drake and Lenz considered a parameter set for finite PK–planes in [F15], together with new examples of finite PH–planes. Structure theorems for finite chain rings (needed for finite desarguesian Klingenberg planes) were proved by Edwin Clark et al. in [F03, F04] and independently by Arnold Neumaier in [F33] and Al–Khamees [F01].
The classification of all chain rings is still an open problem, but partial results are known. Galois rings GR($q^{n},p^{n}$) with $q^{n}$ elements and characteristic $p^{n}$, where $q=p^{r}$, play a crucial role. Besides Drake another player came to the forefront, Dieter Jungnickel, who was active at the Universities of Giessen and Augsburg (Germany). He was a student of Hanfried Lenz and became an expert in the theory of designs. In 1976 he wrote his Diplomarbeit at the University of Berlin (Germany) on “Klingenberg and Hjelmslev planes” and with the dissertation “Konstruktion transitiver Inzidenzstrukturen mit Differenzenverfahren” he obtained his doctoral degree. His most important contribution to the theory of Hjelmslev planes (and the more general class of Klingenberg planes) concerns “regularity” [F14, F23, F24, F26, F29]. A PK– or PH–plane is regular if it has an abelian automorphism group $G=Z\oplus N$, where $G$ acts regularly (sharply transitively) on the point set and on the line set and $N$ acts regularly on each neighborhood. It is proved in [F21] that any finite PK–plane over a commutative local ring is regular. Regularity is also interpreted in terms of difference sets and auxiliary matrices, leading to new families of finite Hjelmslev and Klingenberg planes. An interesting result, connecting PH–planes with designs, is the following: the projective uniform Hjelmslev planes of order $q$ (with $q>2$) are precisely the symmetric divisible partial designs on two classes with parameters $v=b=q^{2}(q^{2}+q+1)$, $k=r=q(q+1)$, $s=q^{2}$, $t=q^{2}+q+1$, $\lambda_{1}=q$, $\lambda_{2}=1$. For $q=2$ counterexamples exist, see [F27]. The concepts of regularity and uniformity were also considered in $K$–structures, a further generalization of Klingenberg planes (see [F09, F11, F12, F22, F25, F28, F30, F31]). Nino Civolani [F02] considers free extensions of partial Klingenberg planes. Jungnickel’s work was continued by Sharad Sane. Sane studied at the Indian Institute of Technology Bombay, Mumbai (India) and obtained his PhD with the dissertation “Studies in Partial Designs and Projective Hjelmslev Planes” under the supervision of Balmohan Vishnu Limaye. In [F32] Sane and Limaye demonstrate that $n$–uniform PH–planes are a kind of divisible partial design and, by taking advantage of this property, they can give an alternative proof of the fact that $n$–uniform planes are strongly $n$–uniform, as was proved before in another way by Törner [E39]. In some other papers [F17, F18, F34, F35, F36, F37, F38] Sane contributes to the theory of finite Hjelmslev and Klingenberg planes.

## 9 The coordinatization of Hjelmslev and Klingenberg planes: a versatile story

The coordinatization of affine and projective planes is one of the most powerful tools in the study of such geometries. It makes it possible to reformulate geometric properties as algebraic ones (and vice versa), leading to better insight, including the construction of many non–desarguesian examples. This coordinatization goes back to Marshall Hall Jr. (1910–1990). His important paper [G37], published in 1943, is still one of the most cited. The basic concept is a Hall ternary ring, also called PTR (planar ternary ring), an algebraic structure $(R,T)$ with $R$ a non–empty set containing two distinct elements $0$ and $1$ and with $T$ a ternary operation on $R$ such that $y=T(x,m,k)$ means that the point with coordinates $(x,y)$ lies on the line with coordinates $[m,k]$.
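To fix ideas (a standard special case, recalled here for orientation): over any (skew)field $K$ the desarguesian plane is coordinatized by the ternary operation
$$T(x,m,k)=x\cdot m+k,$$
so that $y=T(x,m,k)$ expresses precisely that the point $(x,y)$ lies on the line with equation $y=xm+k$; a general PTR replaces this linear recipe by an arbitrary ternary operation satisfying suitable planarity axioms.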
With $(R,T)$ one can associate two loops $(R,+)$ and $(R,\circ)$ by defining $a+b:=T(a,1,b)$ and $a\circ b:=T(a,b,0)$. The properties of the plane (formulated in terms of the validity of Desargues’ configuration or in terms of transitivity of the automorphism group) are reflected in the richness of the coordinatizing algebraic structure. If the theorem of Desargues is always valid, or equivalently if the plane is $(P,\ell)$–transitive for any choice of the point $P$ and the line $\ell$, then it turns out that $(R,+)$ and $(R\setminus\{0\},\circ)$ are both groups, hence $(R,+,\circ)$ is a division ring or skewfield. Conversely, any skewfield gives rise to a desarguesian projective plane. Slight variations on Hall’s coordinatization method were made by Daniel Hughes [G38, G39] and by Günter Pickert [G55]. Independently, the Russian mathematician Lev Anatolevich Skornyakov described a similar coordinatization method in 1949 in [G58]. His work [G59] was important for the dissemination of knowledge on projective planes in the Russian–speaking mathematical community. In the seventies and the eighties several attempts were made to coordinatize affine and projective Hjelmslev and Klingenberg planes in a similar way. In 1967 the Russian geometer V.K. Cyganova worked out a first successful coordinatization for the more restrictive class of affine Hjelmslev planes [G21]. She used the concept of an $H$–ternar, an algebraic structure with two ternary operations, generalizing a Hall ternary ring (one of the main differences being the existence of zero divisors). Since her paper was written in Russian, it unfortunately remained inaccessible to many people. Independently of Cyganova, J.W. Lorimer introduced in 1971, in his PhD thesis “Hjelmslev Planes and Topological Hjelmslev Planes”, generalized ternary rings (very similar to $H$–ternars) as the coordinatizing structures of affine Hjelmslev planes. Three years later, Phyrne Bacon streamlined the work of Cyganova and Lorimer in her thesis “Coordinatized Hjelmslev planes”, and she introduced the name biternary ring (in appendix A of her thesis she gives a comprehensive list of annotations, pointing out some mistakes and imperfections in the work of her predecessors). The interaction between the geometric properties of an affine Hjelmslev plane and the algebraic properties of its coordinatizing biternary ring was examined in more detail by Lorimer in [G46], by Cyganova in [G20, G22, G23, G24, G25, G26, G27], by Emelchenkov in [G33, G35, G36] and by Shatokhin in [G56, G57]. To be complete, we also have to mention a paper by Drake [G28] in which he obtains a kind of coordinatization for a special class of affine Hjelmslev planes (radial planes) by means of a module. After the coordinatization of affine Hjelmslev planes, a similar theory for the more general class of affine Klingenberg planes was worked out by several authors. Bacon generalized her biternary rings. Drake, her supervisor, encouraged her to publish in mathematical journals, but she was stubborn and, apart from one single publication [G05], she refused. Her voluminous work, totaling about 1000 pages, is contained in four books [G04], which she published herself. For that reason it was often overlooked and seldom recognized as an acceptable reference.
In [G05] the “triangle theorem” is proved: a PK–plane $\mathcal{P}$ possessing a nondegenerate triangle with sides $\ell_{1},\ell_{2}$ and $\ell_{3}$, such that each derived AK–plane $\mathcal{A}_{i}=\mathcal{P}\setminus\ell_{i}$ is desarguesian, is itself a desarguesian PK–plane. The Czech mathematician Frantisek Machala from the University of Olomouc introduced in [G49] affine local ternary rings as an alternative for the coordinatization of affine Klingenberg planes. Much later he was able to prove the equivalence between his coordinatization method and the one of Bacon. He also proved that any “incomplete” biternary ring (with one ternary and one partial ternary operator) can be extended to a biternary ring with two (full) ternary operators [G52]. In [G11] it is shown that this biternary ring extension is unique. In the case of ordinary planes each (desarguesian) affine plane can be extended to a (desarguesian) projective plane. This no longer holds for Hjelmslev planes (see section 7). This observation also has a serious impact on the coordinatization of projective Hjelmslev planes: it does not follow immediately from the affine coordinatization. The projective case was first attacked by the Russian mathematician E.P. Emelchenkov in 1972 in his PhD thesis “Ternars and automorphisms of Hjelmslev planes” (in Russian) and in [G34]. Due to the language barrier, his work was not accessible to many researchers and for that reason, like Cyganova’s work, it was somewhat overlooked. Coordinatization methods for the more general case of projective Klingenberg planes were worked out by both Machala and Bacon. Their approaches are totally different. Machala’s method is based on the concept of an extended local ternary ring, an algebraic structure $(R,R^{\prime},T)$ with two disjoint sets of coordinates $R$ and $R^{\prime}$ and one ternary operation $T$ on $R\cup R^{\prime}$ (see [G47, G48, G50]). This coordinatization was not very successful, because it is not easy to see any interaction between properties of the extended local ternary ring and geometric properties of the coordinatized plane. The approach of Bacon is based on the fact that a PK–plane can be covered by three AK–planes corresponding to three biternary rings. This yields a coordinatizing structure for a projective Klingenberg plane as a triplet of biternary rings, called a sexternary ring [G04]. In Bacon’s voluminous work, the interaction between geometric properties and the algebraic structure is examined in depth. Unfortunately, a large part of these results remained hidden for the reason mentioned above. In [G31] Manfred Dugas used a similar coordinatizing structure, with six ternary operations. Independently of the people mentioned above, the author introduced in 1987, in his PhD thesis “Klingenberg incidence structures, a contribution to ring geometry” (in Dutch) (see also [G41, G42, G43]), planar sexternary rings (PSRs) with one full and five partial ternary operators. His coordinatization method for PK–planes was inspired by the Hughes variant of the Hall ternary ring. A small deficiency in his method was detected later (as pointed out by Baker and Lorimer in [G11]). As a consequence of this shortcoming, the coordinatization of a PK–plane by a PSR was not fully compatible with the coordinatization of a derived AK–plane by the biternary ring obtained from the PSR. To overcome this anomaly, Baker and Lorimer (op. cit.)
developed a new coordinate ring, called an incomplete sexternary ring (in the spirit of Dugas), as a substitute for the planar sexternary ring. They even proved that such a structure can be extended (in a unique way) to a sexternary ring with six full ternary operators. The coordinatization of projective planes is a handy instrument for the construction of non–desarguesian examples in an algebraic manner. Many new planes were found using quasifields, nearfields or alternative division rings. Because of the greater complexity of sexternary rings, it seems that far fewer examples of non–desarguesian PK–planes have been obtained in this manner. Nevertheless, examples of non–desarguesian AK– and PK–planes obtained from algebraic structures are known. The oldest examples are the Moulton affine Hjelmslev planes, given by Baker in [G06]. A projective version is constructed in [G43] by the author. Klingenberg planes over local nearrings and Hjelmslev planes over Hjelmslev–nearrings were studied by Emanuel Kolb in [G44, G45]. Much attention has gone to Moufang planes, which can be coordinatized by local alternative rings. Moufang–Hjelmslev planes first appear in a paper of Dugas [G29]. He proves that all finite uniform Moufang PH–planes are desarguesian. A stronger version of that theorem was later proved in [G30] (the uniformity condition could be dropped if the order of the plane is greater than 2). Similar results were found by Baker, Lane and Lorimer for Moufang Klingenberg planes, see [G07, G09, G10]. They prove that the class of Moufang PK–planes coincides with the class of planes over local alternative rings and that a finite Moufang PK–plane in which any two points have at least one joining line is a desarguesian projective Hjelmslev plane. Also a stronger version of Bacon’s triangle theorem was proved in [G08]: a PK–plane with a non–degenerate triangle for which the three derived AK–planes are translation AK–planes (and with epimorphic image distinct from PG(2,2)) is Moufang. More recently, a group of mathematicians around Basri Çelik and Süleyman Çiftçi, from the University of Uludag, Bursa (Turkey), published several papers concerning a particular class of Moufang–Klingenberg planes [G01, G02, G14, G03, G15, G16, G17, G18, G19]. Their results, all variations on the same theme, overlap with work of Andrea Blunck [G12, G13]. The role of Pappus’ theorem (its validity in a desarguesian plane implies the commutativity of the coordinatizing ring) was investigated by Nolte and Maurer in [G53, G54].

## 10 Order and topology in Hjelmslev geometry: Machala and Lorimer

The theory of ordered incidence structures can be traced back mainly to Pasch’s “Vorlesungen über neuere Geometrie” from 1882. An excellent survey paper on the axiomatics of ordered incidence geometry is [H27]. Ordered affine and projective Hjelmslev planes were studied by a group of Canadian mathematicians, starting in the seventies. At least three dissertations were written in that period at McMaster University in Hamilton, Ontario (Canada) under the supervision of Norman Lane, who was also a world–class canoeist, competing in two Olympic Games (bronze medal in 1948 in London), before he started his academic career. In 1975, Lynda Ann Thomas wrote her Master’s thesis on “Ordered desarguesian affine Hjelmslev planes”, in which she proved that any ordered AH–ring gives rise to an ordered desarguesian affine Hjelmslev plane and vice versa. This result was published a few years later in [H30].
James Laxton, another student of Lane, wrote his Master’s thesis on “Ordered non–desarguesian affine Hjelmslev planes”. Catherine Baker, also a student of Lane, wrote her Master’s thesis on “Affine Hjelmslev and generalized affine Hjelmslev planes” and her doctoral thesis in 1978 on “Ordered Hjelmslev planes”. In that thesis she investigates in detail the relationship between ordered AH–planes and the coordinatizing ordered biternary rings, extending results of Laxton and Thomas. Baker published several papers about ordered (affine and projective) Hjelmslev planes (some with co–authors): [H01, H02, H03, H04, H05, H06]. It would be unfair to attribute all the results on ordered ring geometries to the “Canadian School” alone. Independently, a theory of orderings for Klingenberg planes was worked out by Machala. He published many papers on ordered Klingenberg planes [H18, H19, H20, H21, H22, H23, H24, H25] and one overview work [H26]. The work of Baker et al. resembles Machala’s in many respects, but there are some differences. For a comparison between both approaches, one may consult [H06]. The study of topological Klingenberg and Hjelmslev planes remained an exclusively Canadian affair. The most prominent student of Lane was undoubtedly J.W. (Mike) Lorimer. In his doctoral thesis on “Hjelmslev planes and topological Hjelmslev planes”, he not only introduced a coordinatization (see the previous section) but also laid the foundation for topological Hjelmslev geometry. His work generalizes that of Salzmann [H28] and Skornyakov [H29] on topological projective planes. In a series of publications [H07, H08, H09, H10, H11, H12, H13, H14, H15, H16, H17] he further developed the theory in close connection with the coordinatization problem. Among the most important theorems proved by Lorimer, we mention the following characterization theorem: the only locally compact connected pappian projective Hjelmslev planes are the ones over the rings $\mathbb{K}[x]/\langle x^{n}\rangle$ with $\mathbb{K}$ the field of real or complex numbers.

## 11 The revival of ring geometry in the eighties and nineties under Veldkamp and Faulkner

In the seventies ring geometry was restricted almost exclusively to Hjelmslev and Klingenberg geometry (in the desarguesian case to geometries over local rings and Hjelmslev rings). The Dutch mathematician Ferdinand Douwe Veldkamp (1931–1999), who is well–known for his work on geometries associated with exceptional Lie groups and in particular polar spaces, returned to the pioneering work of Barbilian, where geometries over the broader class of $Z$–rings were considered. It was Veldkamp’s aim to give an axiom system for projective planes (and higher–dimensional spaces) over arbitrary rings with unit (without the imperfection in Barbilian’s attempt [A03, A04]). From conversations with his colleague van der Kallen at the University of Utrecht (an expert in $K$–theory), it became clear that the best setting for this project is provided by rings of stable rank two. A ring $R$ has stable rank two if the following property holds: if $a,b\in R$ and $Ra+Rb=R$, then there exists an $r$ in $R$ such that $a+rb$ is invertible in $R$. The class of stable rank two rings comprises all local rings. Hence, the projective ring planes introduced by Veldkamp include the desarguesian Klingenberg and Hjelmslev planes. A ring of stable rank two is always a $Z$–ring.
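It may be instructive to verify the first of these claims (a routine check, added here for the reader’s convenience). Let $R$ be a local ring with maximal ideal $\mathfrak{m}$ and suppose $Ra+Rb=R$. If $a\notin\mathfrak{m}$, then $a$ is invertible and $r=0$ works; if $a\in\mathfrak{m}$, then $b\notin\mathfrak{m}$ (otherwise $Ra+Rb\subseteq\mathfrak{m}$), so $b$ is invertible and
$$r=(1-a)b^{-1}\quad\text{gives}\quad a+rb=a+(1-a)=1.$$
In contrast, $\mathbb{Z}$ does not have stable rank two in this sense: $\mathbb{Z}\cdot 2+\mathbb{Z}\cdot 5=\mathbb{Z}$, yet $2+5r\in\{1,-1\}$ has no integer solution, in accordance with the remark in section 12 that $\mathbb{Z}$ has stable rank 3.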
Veldkamp first worked out the theory for planes in [I23], with some special cases in [I24], and later for spaces of higher dimension (see section 12). The ring planes defined by Veldkamp are today also known as desarguesian Veldkamp planes. John Robert Faulkner, an authority in the domain of non–associative algebra and geometry, further extended the theory of Veldkamp planes in the non–desarguesian direction by introducing alternative (non–associative) rings of stable rank two in [I01]. He then proved in [I02] that a Veldkamp plane has the Moufang property (i.e. $(P,\ell)$–transitivity holds for all $P,\ell$ with $P$ incident with $\ell$) if and only if it is a plane $\mathcal{P}(\mathbb{A})$ over an alternative ring $\mathbb{A}$ of stable rank two. Inspired by the work of his predecessors, Faulkner introduced in [I03] Faulkner planes as a very general class of plane incidence structures with neighbor (or remoteness) relation. They comprise the Veldkamp planes and the planes introduced by Barbilian. A (connected) Faulkner plane for which the group of $(P,\ell)$–transvections (automorphisms fixing all objects incident with $P$ or $\ell$) is transitive on the set of points not neighboring $\ell$ is called a transvection plane. The coordinatization of transvection Faulkner planes by a not necessarily associative alternative ring with the property that $ab=1$ implies $ba=1$ involves a rather technical procedure based on group theory and many new concepts such as covering planes and tangent bundle planes. A transvection Faulkner plane for which the tangent bundle plane is also a transvection plane is called a Lie transvection Faulkner plane. To every such plane an alternative two–sided units ring can be attached and conversely, with an alternative two–sided units ring $R$ a corresponding Lie transvection Faulkner plane can be constructed. However, this plane is not determined unambiguously when the ring does not have stable rank 2. This is the high price that has to be paid for the generalization from Veldkamp planes to Faulkner planes. In [I04] Faulkner gives a geometric construction of Barbilian planes coordinatized by composition algebras (including the Moufang plane) using Jordan algebras. His book [I07] is completely devoted to the role of such algebras in projective geometry. Faulkner was surrounded by students at the University of Virginia, Charlottesville (USA), who were all involved in the study of ring geometries. Terese Deltz Magnus obtained her PhD in 1991 on “Geometries over non–division rings” and was able to generalize Faulkner’s axioms and results to geometries of higher dimension (see section 12). Eve Torrence also graduated under Faulkner’s supervision with “The coordinatization of a hexagonal–Barbilian plane by a quadratic Jordan algebra”, a generalization of the classical notion of generalized hexagon. Another student was Catherine Moore d’Ortona, who studied homomorphisms between projective ring planes in her PhD thesis “Homomorphisms of remotely projective planes”, published in [I16]. Finally Karen Klintworth wrote her PhD thesis on “Affine remoteness planes”. Faulkner himself considered a slightly more general axiomatization of Faulkner planes in [I06]. There he chooses the remoteness relation to be the negation of the neighbor relation. Many of the results obtained in [I03] are extended, but the coordinate rings that appear are no longer always alternative.
It is proved that $\mathcal{P}$ is a transvection plane if and only if $\mathcal{P}$ is isomorphic to $P(G,N)$, the plane associated with a group $G$ of Steinberg type parametrized by the ring $R$ and with $N$ a certain subgroup of $G$. Necessary and sufficient conditions are given for $R$ to be alternative, associative or commutative. In [I06] projective remoteness planes with reflections (hence metric planes) are also considered. The content of this paper is closely related to some work of Knüppel and Salow in [D10] (see section 6). It also contains a part on affine ring planes and elementary basis sets, which are closely related to Barbilian domains as introduced by Leissner (see section 5). In the slipstream of Veldkamp’s paper on projective ring planes, several slightly modified axiom systems have been described, leading to other classes of projective ring geometries. We have already mentioned Frieder Knüppel in the section on metric ring geometry, but some of his papers fit rather in the spirit of this section. In [I14] Knüppel considers ring geometries over associative rings based on four axioms adapted from Veldkamp (based on remoteness rather than neighborship). A coordinatization theorem is stated without proof. In [I15] he studies homomorphisms between such planes. Renata Spanicciati defines near–Barbilian planes and strong near–Barbilian planes in [I22] by adapting some of the axioms of Veldkamp. The neighbor relation between points turns out to be the identity, and the neighbor relation between lines becomes an equivalence relation. In [I11] Guy Hanssens and Hendrik Van Maldeghem prove that any near–Barbilian plane is a strong near–Barbilian plane. Kálman Péntek gives a necessary and sufficient condition for a Veldkamp plane to be a direct product of a finite number of Veldkamp planes in [I17, I18, I19]. The study of homomorphisms between ring geometries was also a central theme in several papers of Veldkamp (some of them in joint work with Joseph Ferrar) [I08, I09, I10, I25, I26]. The geometric homomorphisms of various kinds (incidence–preserving, neighbor–preserving, distant–preserving) are characterized in terms of algebraic morphisms between the underlying rings. Veldkamp’s results were generalized by the author to the non–desarguesian case, for homomorphisms between projective Klingenberg planes, using the coordinatizing planar sexternary rings (see section 9) in [I12, I13]. Thorsten Pfeiffer was able to generalize a well–known theorem for planes over fields to planes over rings: a desarguesian ring plane $\mathcal{P}(R)$ is pappian (Pappus’ theorem is valid) if and only if $R$ is commutative.

## 12 Projective and affine Hjelmslev spaces and spaces over arbitrary rings

Hitherto we have only discussed plane ring geometries. The theory of higher dimensional projective spaces over rings has been developed by several people. Projective spaces over local rings appear for the first time in the work of Klingenberg [A17]. Today we call them Klingenberg spaces (PK–spaces). The first study of PK–spaces after Klingenberg is due to Hans–Heinrich Lück, a student of Lüneburg. His PhD thesis, published as an article [J34] in 1970 under the somewhat misleading title “Projektive Hjelmslevräume”, contains an axiomatic characterization of a class of incidence structures which permit a coordinatization by local rings. Hence, the paper deals with projective Klingenberg spaces rather than with Hjelmslev spaces.
Lück proves that in an axiomatically defined PK–space of dimension at least three the theorem of Desargues holds, and that such a space must be isomorphic to a space derived from a module over a local ring. Hence all projective Klingenberg spaces of dimension $\geq 3$ are desarguesian, a situation analogous to the case of classical projective spaces. Independently of Lück, Machala defined and studied projective Klingenberg spaces (Projektive Räume mit Homomorphismus) of finite or infinite dimension in [J35, J36, J37, J38, J39]. He proved that the planes in a PK–space are PK–planes and that PK–spaces of dimension at least three come from modules over a local ring (cf. Lück). Machala also investigated homomorphisms between PK–spaces and the fundamental theorem (isomorphisms between spaces can be represented by semilinear mappings between the underlying modules). A paper by Jukl [J13] is in line with this. An axiomatic approach for the more restricted class of projective Hjelmslev spaces (PH–spaces) was initiated by John Lamb Jr. from the University of Texas at Austin (USA) in his PhD thesis, entitled “The Structure of Hjelmslev space, a generalization of projective space” (1969), but its content (related to lattice theory) was not published. Much more widespread is the work of Karl Mathiak, who was very productive in this field. He defined a class of special projective Hjelmslev spaces starting from a vector space over a (skew)field endowed with a valuation (“Bewertete Vektorräume”). The structure of the ideals in the corresponding valuation ring plays a central role in his approach. The theory is thoroughly worked out in a series of papers, published over a period of twenty years between 1967 and 1987, see [J41, J42, J43, J44, J45, J46, J47, J48, J49, J50]. His compatriot Alexander Kreuzer introduced an axiom system for arbitrary PH–spaces in his doctoral thesis “Projektive Hjelmslevräume”, which was published afterwards as an article in [J18], with some preliminary work in [J17]. This study is continued in [J19, J20]. For the affine case we have to go to Canada again. In four papers, Tibor Bisztriczky, together with J.W. Lorimer, worked out two axiom systems for affine Klingenberg spaces [J02, J03, J04, J05]. Neither of their axiom systems assumes the existence of an overlying projective Klingenberg space or the existence of an underlying ordinary affine space. Machala in [J38] also defined affine Klingenberg spaces, but not separately from PK–spaces (similar to ordinary affine spaces obtained from projective spaces by deleting a hyperplane). The most general study of projective ring spaces (over not necessarily local rings) was undertaken by Ferdinand Veldkamp. We have already indicated his interest in section 11. In [J62, J63], Veldkamp gives a self–dual axiom system for projective “Barbilian spaces” of finite dimension using the basic concepts of points, hyperplanes, incidence and neighbor relation. We now call them Veldkamp spaces. The main result is that Veldkamp spaces of dimension $\geq 3$ are spaces over rings of stable rank 2. Magnus defines Faulkner spaces as even more general geometries by extending the theory of Faulkner planes [J40]. She proves that any Faulkner space of dimension $n\geq 3$ is coordinatized by a unique associative two–sided units ring $R$ and that the group generated by all transvections is a group of Steinberg type over $R$.
A Faulkner space over the ring $\mathbb{Z}$ of integers is constructed, providing an example of a Faulkner geometry which is not a Veldkamp space, since $\mathbb{Z}$ has stable rank 3. Among the Veldkamp spaces we also find the projective spaces over matrix rings over GF($q$), studied by Thas [B12] in the early days of ring geometry (see section 4). Other contributions to finite ring spaces are due to Kapralova, who considered projective spaces over the ring of dual numbers over a Galois field in [J14], and to Ivan Landjev and Peter Vandendriesche [J24, J25]. A well–developed theory of affine spaces over rings is also due to Veldkamp. In [J64] he defines Barbilian domains in free modules of rank $n$ and introduces $n$–dimensional affine ring geometries. A geometrical interpretation of Barbilian domains is given by Sprenger in [J61]. Other attempts to set up a theory of higher dimensional affine ring geometries (incidence structures with parallelism) are scattered in the literature. The definitions and the methods used are very diverse. Contributions in this field are due to Permutti and Pizzarello [J54, J55], Miron [J51], Leissner, Severin and Wolf [J31, J32], Ostrowski [J53], Schmidt and Weller [J58], Kreis [J15, J16], Seier [J59, J60], Bach [J01] and others. One of the problems for higher dimensional geometries over rings that has received much attention is that of morphisms and the fundamental theorem. For classical projective geometries $P(V)$ induced by a vector space over a field this theorem states that any bijective incidence–preserving map (projectivity) between projective spaces $P(V)$ and $P(W)$ can be algebraically characterized by a semilinear map from $V$ to $W$. The first generalization of the fundamental theorem to ring geometries was obtained by Ojanguren and Sridharan, who proved it in the case of module–induced geometries $P(M)$ with $M$ a free module of finite rank $\geq 3$ over a commutative ring [J52]. Generalizations to other classes of rings were proved later by Sarath and Varadarajan in [J57], by James in [J12] and by Faure in [J06]. For the sake of completeness we also refer to module–induced geometries as defined by Marcus Greferath and Stefan Schmidt [J08, J09, J10, J11]. Instead of the usual definition of the point set as the set of all submodules generated by a unimodular element in a free module (cf. Veldkamp), they take all submodules of rank one, leading to the bizarre situation of points properly contained in bigger points. In [J07] an extension of the fundamental theorem is proved for such module–induced geometries. Closely related to this, there is an abundance of articles by the school of the Georgian mathematician Alexander Lashkhi. They all contain variations on the same theme: an extension of the fundamental theorem for affine and projective geometries related to modules over rings, from the lattice–theoretic point of view. We have not included the whole collection of papers by Lashkhi and his students. Some of them have been published multiple times in different journals (in Russian and in English). The literature list only mentions a few representative ones: [J21, J22, J23, J26, J27, J28, J29, J30, J56].

## 13 Projective lines and circle geometries over rings: Blunck, Havlicek and Keppens

The “smallest” projective geometries that can be considered are the projective lines. From the viewpoint of incidence geometry not much can be said about these rather poor structures.
But combining the study of projective lines with that of their automorphisms (the general linear group) puts them in a new light. The theory has common ground with what is usually called geometric algebra. Indeed, the projective line $P(R)$ over any ring $R$ can be defined in terms of the free left $R$–module $R^{2}$ as follows: it is the orbit of a starter point $R(1,0)$ under the action of the general linear group $GL_{2}(R)$ on $R^{2}$. Since geometric algebra over rings is only tangentially related to incidence geometry, we did not make an effort to be complete in the literature list for this item. Nevertheless we mention a number of relevant references in which more information can be found, e.g. [K46]. Central themes that keep returning are the notions of cross–ratio and harmonic quadruples and the fundamental theorem, also known as Von Staudt’s theorem. In the case of classical projective lines over a field or a skewfield, this theorem characterizes mappings of the projective line which preserve harmonicity as projectivities. Among the first publications on projective ring lines (and we do not consider here papers only dealing with linear groups over rings) are some articles by the Indian mathematicians Nirmala and Balmohan Limaye. They prove a generalization of Von Staudt’s theorem for some special classes of commutative and non–commutative rings in [K38, K39, K40, K41, K42]. A little bit earlier, in 1968, Melchior, a student of Benz, wrote his PhD thesis, entitled “Die projektive Gerade über einem lokalen Ring: ihre lineare Gruppe und ihre Geometrie”. Other contributions to this topic appeared in [K02, K04, K05, K10, K16, K22, K23, K24, K25, K30, K31, K34, K35, K37, K45, K47, K49]. Some papers by Bilo and Depunt [B01], Hubaut [B08, B09] and Thas [B10, B11] also deal with projective lines over rings (see section 4). They were followed by Havlicek et al. in [K26, K27, K28]. Projective lines over rings are also intrinsically related to circle geometries. This relation was established for the first time by Benz in his famous book “Vorlesungen über Geometrie der Algebren” [K06] from 1973. He presented a unified treatment of plane circle geometries, now called Benz planes, using the projective line over a commutative ring which is a two–dimensional $\mathbb{K}$–algebra over a field $\mathbb{K}$. His definition was extended by Andrea Blunck, a student of Benz, and Armin Herzer, who introduced more general chain geometries $\Sigma(\mathbb{K},R)$ with $R$ a (not necessarily two–dimensional) $\mathbb{K}$–algebra. A chain geometry is an incidence structure whose point set is the set of points of the projective line over $R$ and whose set of chains is the $GL_{2}(R)$–orbit of $P(\mathbb{K})$. The plane circle geometries of Möbius, Laguerre and Minkowski type are particular chain geometries for $R=\mathbb{L}$ (a quadratic field extension of $\mathbb{K}$), $R=\mathbb{K}+\mathbb{K}\varepsilon$ with $\varepsilon^{2}=0$ (dual numbers) or $R=\mathbb{K}+\mathbb{K}t$ with $t^{2}=t$ (double numbers) respectively. For an overview of chain geometry we refer to [K19]. Chain geometries were also treated by Schaeffer, a student of Benz, in his doctoral thesis entitled “Zum Automorphismenproblem in affinen Geometrien und Kettengeometrien über Ringen”. The study of chain geometries was continued by Blunck, who introduced generalized chain geometries by considering projective lines over non–associative, alternative rings.
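A small worked example may be helpful at this point (the computation is routine and added purely for illustration). Take $R=\mathbb{Z}/4\mathbb{Z}$, a local ring with maximal ideal $\{0,2\}$. The points of $P(R)$ are the classes $R(a,b)$ with $(a,b)$ unimodular (i.e. $a$ or $b$ a unit), and one finds exactly six of them:
$$R(1,0),\;R(1,1),\;R(1,2),\;R(1,3),\;R(0,1),\;R(2,1).$$
Two points $R(a,b)$ and $R(c,d)$ are represented by a basis of $R^{2}$ exactly when $ad-bc$ is a unit; for instance $R(1,0)$ and $R(0,1)$ form such a pair, while $R(1,0)$ and $R(1,2)$ (with $ad-bc=2$) do not and are neighbors. Reducing modulo $2$ maps $P(R)$ onto the three points of the projective line over GF(2), with fibers of size two, which are precisely the neighbor classes.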
In her PhD thesis “Doppelverhältnisse und lokale Alternativringe” (1990) and in [K07] she extended the notion of cross–ratio, and she investigated chain geometries in relation to projective lines over non–associative rings in [K08, K11, K12]. Around the same time the author in [K32, K33] defined Klingenberg–Benz planes axiomatically, i.e. plane circle geometries with a neighbor relation which admit a natural epimorphism onto a classical Benz plane. Using the projective line over three kinds of quadratic ring extensions of a local ring (instead of a field), he was able to construct algebraic models of such geometries. Konrad Lang in [K36] independently studied a class of Hjelmslev–Möbius planes. Blunck and Stroppel extended our definition of Klingenberg–Benz planes to Klingenberg chain spaces in [K21], and Blunck also proved that a Klingenberg chain space can be embedded into a projective Klingenberg space, such that the points are identified with points of a quadric and the chains with plane sections [K09]. In [K50] Seier analogously constructed a class of chain geometries $\Sigma(H,L)$ with $H$ a Hjelmslev ring and $L$ a ring extension of $H$, and in [K51] he defined a Möbius plane with a neighbor relation of a different kind from the one defined by us. A basic notion concerning the projective line over a ring $R$ is its distant relation: two points are called distant if they can be represented by the elements of a two-element basis of $R^{2}$. The distant graph has as vertices the points of the projective line and as edges the pairs of distant points. The distant graph is connected precisely when $GL_{2}(R)$ is generated by the elementary linear group $E_{2}(R)$. This aspect of projective ring lines was studied in more detail by Blunck, Havlicek, Matraś and some others in [K01, K13, K14, K15, K29, K43, K44]. In [K17, K18] the interaction between ring geometry and the geometry of matrices in the sense of Hua (see [B07]) is investigated in more detail.

## 14 Ring geometries and buildings: Van Maldeghem and co.

The theory of buildings was invented by the Belgian-born French mathematician Jacques Tits. Roughly speaking, buildings are incidence geometries on which groups act. Tits received many awards for his fundamental and path–breaking mathematical ideas, including the Abel Prize in 2008. One of his achievements is the classification of affine buildings of rank at least 4. They are known to be “classical”, i.e. they arise from algebraic groups over a local field. In the rank three case (where affine buildings are of three possible types $\tilde{A}_{2}$, $\tilde{C}_{2}$ or $\tilde{G}_{2}$) many non–classical examples are known. In his PhD thesis “Niet–klassieke driehoeksgebouwen” on triangle buildings (affine buildings of type $\tilde{A}_{2}$) Hendrik Van Maldeghem observed that a special kind of ring geometry is present as the suitably defined geometry at distance $n$ from any given vertex of the building (the so-called $n$–th floor). This was described among other things in [L18, L19]. A little later Hanssens and Van Maldeghem proved that those ring geometries are in fact projective Hjelmslev planes of level $n$, see [L07]. In [L08] they give a universal construction for level $n$ Hjelmslev planes (see also [L06] for the $2$–uniform case), and as a corollary any level $n$ projective Hjelmslev plane is isomorphic to the $n$–th floor of a triangle building. This result links the theory of PH–planes to that of triangle buildings, a rather unexpected but fascinating fact.
In the same spirit Van Maldeghem investigated another class of rank three affine buildings, of type $\tilde{C}_{2}$, and proved that the $n$–th floor is another type of ring geometry, which can be seen as a generalization of an ordinary generalized quadrangle. He called it a “Hjelmslev–quadrangle” of level $n$ (see [L20, L21]). In joint work with Hanssens a complete characterization of $\tilde{C}_{2}$–buildings by Hjelmslev quadrangles was obtained [L09, L10]. The author defined “Klingenberg–quadrangles” as another generalization of ordinary generalized quadrangles in [L12]. The connection between Klingenberg–quadrangles and Hjelmslev–quadrangles is explained in [L20]. The relation of polar spaces of higher rank to generalized quadrangles is comparable to the relation of projective spaces to projective planes. Generalized quadrangles are polar spaces of rank two. Projective ring spaces of dimension at least three have received as much attention in the literature as projective ring planes. This is not yet the case for general “Klingenberg–polar spaces” or “polar spaces over rings” of rank bigger than two. Only one paper, by James [L11], is known to us. Certainly this topic offers perspectives for future research. The discovery by Van Maldeghem of the connection between buildings and ring geometry has a precedent. Twenty years earlier, in 1968, Veldkamp in joint work with Tonny Springer considered a geometry over the split octonions (over the complex number field) in [L16]. This geometry is a kind of analogue of the non–desarguesian projective plane over the alternative division ring of (non–split) octonions $\mathbb{O}$, but one in which two distinct lines may have more than one point in common, and dually. It was called a Moufang–Hjelmslev plane, but this name is misleading since it is not a projective Hjelmslev plane (the neighbor relation is not transitive) and hence it is completely distinct from the Moufang PH–plane (over an alternative local ring) studied elsewhere (see section 9). In two subsequent papers [L25, L26] more results on Hjelmslev–Moufang planes are obtained, concerning projective groups. The geometry of Veldkamp and Springer is the same as the one constructed by Tits [L17] starting from the split algebraic group of type $E_{6}$. In this geometry each line has the structure of a polar space and two lines can meet in more than one point (namely, in a maximal singular subspace of the corresponding polar spaces). John Faulkner considered Hjelmslev–Moufang planes over an arbitrary ground field instead of $\mathbb{C}$ in [L04, L05], and Robert Bix studied generalized Moufang planes in [L02, L03]. The relationship between Hjelmslev planes and buildings was further exploited by Van Maldeghem and Van Steen to give a characterization of some rank three buildings by automorphism groups [L22, L23, L24]. In the margins of the study of buildings some other questions have emerged. One of them concerns embeddings. Embeddings of point–line geometries into projective spaces are well–known in the literature. The embedding question for ring geometries, in particular for projective Hjelmslev planes, was first attacked by Artmann [L01], who showed that the PH–plane over the ring of plural numbers $\mathbb{F}[t]/t^{n}$ ($\mathbb{F}$ a field) can be embedded in the $(3n-1)$–dimensional projective space over $\mathbb{F}$.
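A minimal worked instance of Artmann's bound (the value $n=2$ is our choice, purely for illustration):

$$n=2:\qquad 3n-1=5,$$

so the Hjelmslev plane over the dual numbers $\mathbb{F}[t]/t^{2}$ embeds in PG(5,$\mathbb{F}$), precisely the ambient space occurring in the characterization theorem discussed next.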
In [L13] the author and Van Maldeghem prove a nice characterization theorem for embeddable Klingenberg planes: if $\mathcal{P}$ is a projective Klingenberg plane that is fully embedded in the projective space PG(5,$\mathbb{K}$) for some skewfield $\mathbb{K}$, then $\mathcal{P}$ is either a desarguesian Klingenberg plane over a ring of twisted dual numbers or a subgeometry of an ordinary projective plane. The embedding of the projective plane over a matrix ring with entries in GF($q$) into a projective space over GF($q$) was also observed by Thas in [B12, B13]. Veronesean sets are closely connected with embeddings. In [L14] Schillewaert and Van Maldeghem define, by means of an additional axiom, geometries in which the Hjelmslev–Moufang plane (in the sense of Springer–Veldkamp) and its relatives fit into the modern framework of parapolar spaces. In [L15] they provide a common characterization of projective planes over two-dimensional quadratic algebras (over an arbitrary field) in terms of associated Veronesean sets. Anneleen De Schepper and Van Maldeghem [L27] have considered Veronese representations of Hjelmslev planes over quadratic alternative algebras as part of a more general study of Veronese varieties and Mazzocca–Melone sets.

## 15 Ring geometries in coding theory: Honold, Kiermaier and Landjev

One of the fastest growing disciplines in mathematics is coding theory. Since its introduction by Claude Shannon in 1948, coding theory has seen an explosion of publications, in particular due to its importance in cryptography, data transmission and data storage. Initially mostly linear codes over finite fields were studied, but after the publication in 1994 of the paper [M07] by Hammons _et al._, a new era began. In that paper it is proved that all (non–linear) binary Kerdock, Preparata, Goethals and Delsarte–Goethals codes are images of $\mathbb{Z}_{4}$–linear codes under the Gray map. This discovery was quite remarkable, and in 1995 the paper received the Information Theory Paper Award from the IEEE Information Theory Society. It was the start of a search for new codes via linear codes over the ring $\mathbb{Z}_{4}$ and over more general finite rings (see e.g. [M01]). Some papers by Aleksandr Nechaev [M47, M48] also steered the research in that direction. We will not give a survey of all results obtained up to now for codes over finite rings, because even this niche has become too wide. We restrict ourselves here (and in the literature list) to the publications in which the direct relation between codes over rings and ring geometries is exhibited. Indeed, linear codes over finite chain rings can be associated with finite projective Hjelmslev geometries, the hyperplanes corresponding to the codewords. This correspondence offers opportunities for investigating the structure and the construction of ring–linear codes by purely geometrical methods. The technique was first applied in [M14] by Thomas Honold, now working at Zhejiang University in Hangzhou (China), and Ivan Landjev from the New Bulgarian University of Sofia (Bulgaria). They prove that certain MacDonald codes can be represented by linear codes over the ring of twisted dual numbers over a finite field, using multisets of points in Hjelmslev spaces. In [M15] they prove that all Reed–Muller codes are linearly representable over the ring of dual numbers over $\mathbb{Z}_{2}$.
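The Gray map just mentioned admits a one-line description: it sends $0\mapsto 00$, $1\mapsto 01$, $2\mapsto 11$, $3\mapsto 10$ and is extended componentwise, giving a weight-preserving bijection from $\mathbb{Z}_{4}^{n}$ with the Lee weight onto $\mathbb{Z}_{2}^{2n}$ with the Hamming weight. A minimal sketch in Python (the test word is an arbitrary choice of ours):

```python
# the Gray map Z_4 -> Z_2^2: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray(word):
    """Componentwise extension of the Gray map, Z_4^n -> Z_2^(2n)."""
    return tuple(bit for x in word for bit in GRAY[x])

def lee_weight(word):
    """Lee weight on Z_4: w(0) = 0, w(1) = w(3) = 1, w(2) = 2."""
    return sum(min(x, 4 - x) for x in word)

w = (1, 2, 3, 0)                      # an arbitrary word in Z_4^4
assert gray(w) == (0, 1, 1, 1, 1, 0, 0, 0)
assert lee_weight(w) == sum(gray(w))  # Lee weight becomes Hamming weight
```

The binary images can be non-linear because the Gray map is not additive: $1+1=2$ in $\mathbb{Z}_{4}$, while $01\oplus 01=00\neq 11$ in $\mathbb{Z}_{2}^{2}$.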
In [M16] a general theory of linear codes over finite chain rings is developed as a natural generalization of the theory of linear codes over finite fields, and the correspondence with Hjelmslev spaces is investigated. In [M17] and [M18] an update of that paper is given. Geometric arguments are also used explicitly in [M19] for the construction of particular linear codes over chain rings of order four, generalizing a result obtained by Michael Kiermaier and Johannes Zwanzger in [M34, M35]. Keisuke Shiromoto and Leo Storme defined in [M49] a Griesmer type bound for linear codes over finite quasi-Frobenius rings, and they give a geometrical characterization of linear codes meeting the bound, viz. a one-to-one correspondence between these codes and minihypers in projective Hjelmslev spaces. Kiermaier was a student of Honold; he wrote his Master’s thesis on “Arcs und Codes über endlichen Kettenringen” in 2006 and obtained his PhD in 2012 at the University of Bayreuth (Germany) with the thesis “Geometrische Konstruktionen linearer Codes über Galois-Ringen der Charakteristik 4 von hoher homogener Minimaldistanz”. The study of codes over chain rings from the viewpoint of Hjelmslev geometry has also led to the generalization of several special point sets (arcs, ovals, blocking sets, caps), already well–known in classical geometry over fields. The most studied sets in Hjelmslev planes thus far are arcs. A $(k,n)$-arc is a set of $k$ points which meets every line in at most $n$ points. This definition was given by Honold and Kiermaier in [M43]. General upper bounds on the cardinality of such arcs were found, as well as the maximum possible size. For a chain ring $R$ of length 2 with Jacobson radical $R_{0}$, such that $|R/R_{0}|=q$, the maximum size of a $(k,2)$-arc in the projective Hjelmslev plane over $R$ is $q^{2}$ if $q$ is odd and $q^{2}+q+1$ if $q$ is even (see [M20]). In [M21] the existence of maximal $(q^{2}+q+1,2)$-arcs (i.e. hyperovals) is proved for $q$ even and in [M12] the existence of maximal $(q^{2},2)$-arcs for $q$ odd is proved; for instance, for $q=2$ a hyperoval in the projective Hjelmslev plane over $\mathbb{Z}_{4}$ or over GF(2)[$t$]/$t^{2}$ has $2^{2}+2+1=7$ points. The results on maximal arcs are also used to construct interesting codes with a linear representation over a chain ring. Examples, non–existence results and upper bounds for the length of arcs are also present in [M02, M08, M10, M11, M13, M23, M32, M33, M36, M50]. Caps in finite projective Hjelmslev spaces over chain rings of nilpotency index 2 are defined by Honold and Landjev in [M22]. A geometric construction for caps in the three-dimensional space is given, using ovoids in the epimorphic space PG(3,$q$), as well as an algebraic construction using Teichmüller sets. Blocking sets in Hjelmslev planes and their relation with codes are the subject of [M03, M40, M42], while [M39] and [M54] deal with spreads. Another aspect that appears in the literature is the correspondence between two–weight codes and strongly regular graphs. In [M04] regular projective two-weight codes over finite Frobenius rings are introduced and it is shown that such codes give rise to a strongly regular graph. In [M04, M41] two-weight codes are constructed using ring geometries, and they yield infinite families of strongly regular graphs with non-trivial parameters. Ovals in an ordinary projective plane of order $q$ are just $(q+1)$–arcs, and every conic is an oval. By a celebrated theorem of Segre, every $(q+1)$–arc in PG(2,$q$) with $q$ odd is a conic.
The study of ovals, conics and unitals in finite projective Klingenberg and Hjelmslev planes was initiated by the author in his PhD thesis, but only a part of it was published, e.g. in [M31]. The author also defined polarities in projective Klingenberg planes and spaces and investigated their sets of absolute points, which in some cases give rise to ovals or ovoids. A comprehensive study of polarities in $n$–uniform Hjelmslev planes and spaces over the ring GF($q$)[$t$]/$t^{n}$ appeared in [M28, M29, M30]. The papers [M37, M38] also shed light on some aspects of ovals and conics in ring planes. A combinatorial study of conics in finite desarguesian Hjelmslev planes was made by Rastislav Jurga and Viliam Chvál. Their papers contain formulas for the number of interior and exterior points, tangents and secants of a conic [M05, M25, M26, M27]. A related problem is the projective equivalence of quadrics in projective Klingenberg spaces. This was first formulated in [M06] and analyzed in detail by O.A. Starikova et al. in [M24, M44, M51, M52, M53]. Recently, a special class of codes (toric codes) has been found to be related to affine ring geometries (Leissner planes) by Little in [M45, M46], revealing yet another correspondence between ring geometry and coding theory.

## 16 Ring geometry in quantum information theory: Saniga and Planat

Besides the fact that ring geometries play a substantial role in coding theory, there is another domain in which they arise unexpectedly, namely quantum information theory. This important branch of quantum physics studies how information can be stored in and retrieved from a quantum mechanical system. In 2006 Metod Saniga from the Astronomical Institute at the Slovak Academy of Sciences and Michel Planat from the Université de Franche–Comté (France) discovered a connection between finite ring geometry and quantum information theory [N09]. It is not yet clear whether the correspondence goes further than mere equality of object counts, but it is in any case remarkable that ring geometries seem to play a role in quantum physics. The notion of mutually unbiased bases (MUBs) has turned into a cornerstone of the modern theory. Saniga and Planat observed that the basic combinatorial properties of a complete set of MUBs of a $q$–dimensional Hilbert space $\mathcal{H}_{q}$ with $q=p^{r}$, $p$ being a prime and $r$ a positive integer, are qualitatively mimicked by the configuration of points lying on a proper conic in a projective Hjelmslev plane defined over a Galois ring of characteristic $p^{2}$ and rank $r$. The $q$ vectors of a basis of $\mathcal{H}_{q}$ correspond to the $q$ points in a neighbour class, and the $q+1$ MUBs correspond to the total number of pairwise disjoint neighbour classes on the conic. In a series of subsequently published papers, other combinatorial correspondences between concepts from quantum theory and geometries over finite rings (in particular projective ring lines) are observed. One of these similarities concerns the structure of the generalized Pauli group associated with a single $d$-dimensional qudit (qudits are generalizations of qubits, the basic two-level units of quantum information, to $d$–level quantum systems). See [N01, N02, N03, N04, N06, N08, N11, N12, N13, N14, N15, N16, N17, N18] and the survey papers [N05, N10] for an overview of this topic.
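The counting in the Saniga–Planat correspondence can be made explicit in the smallest case (the choice $p=2$, $r=1$, i.e. a single qubit, is ours and serves only as an illustration). A qubit ($q=2$) admits $q+1=3$ mutually unbiased bases with $q=2$ vectors each, and the matching conic in the projective Hjelmslev plane over the Galois ring $\mathrm{GR}(4,1)=\mathbb{Z}_{4}$ carries

$$q(q+1)=2\cdot 3=6$$

points, split into $q+1=3$ pairwise disjoint neighbour classes of $q=2$ points each: the classes mirror the bases, and the points within a class mirror the basis vectors.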
The study of the relation with finite geometry (not restricted to ring geometry, but also in connection with small generalized polygons and polar spaces) is still ongoing; see e.g. [N07].

## 17 Literature on ring geometry and geometry over rings

In the separate bibliographic list we mention only publications in scientific journals (or books). Other references, e.g. Master’s and PhD theses that are quoted explicitly in the foregoing text, are not repeated here, except when they were also published. Articles in conference proceedings are also omitted if copies with almost identical content have been published in other journals. We have grouped the references by theme, corresponding to the sections of the paper. Some papers could be classified under multiple sections; in that case they are listed in the section in which they are referred to for the first time. We do not claim that the bibliography is complete, but we have tried to compose an accurate list which complements the existing but outdated lists, especially for the period after 1990. The author will be grateful for additions, completions and corrections to this list.

## A First traces and pioneers of ring geometry

* [A01] Archbold J.W., Projective geometry over an algebra, Mathematika 2, 105–115 (1955)
* [A02] Artmann B., Dorn D., Drake D. and Törner G., Hjelmslev’sche Inzidenzgeometrie und verwandte Gebiete. Literaturverzeichnis, J. Geom. 7, 175–191 (1976)
* [A03] Barbilian D., Zur Axiomatik der projektiven ebenen Ringgeometrien I, Jahresber. Deutsch. Math.-Verein. 50, 179–229 (1940)
* [A04] Barbilian D., Zur Axiomatik der projektiven ebenen Ringgeometrien II, Jahresber. Deutsch. Math.-Verein. 51, 34–76 (1941)
* [A05] Benz W., On Study’s Übertragungsprinzip, J. Geom. 64, 1–7 (1999)
* [A06] Clifford W. K., Preliminary Sketch of Bi-Quaternions, Proc. Lond. Math. Soc. 4, 381–395 (1873)
* [A07] Dembowski P., Finite Geometries, reprint of the 1968 edition, Springer-Verlag, Berlin Heidelberg, 379 pp. (1997)
* [A08] Grünwald J., Über duale Zahlen und ihre Anwendung in der Geometrie, Monatsh. Math. 17, 81–136 (1906)
* [A09] Hjelmslev J., Die Geometrie der Wirklichkeit, Acta Math. 40, 35–66 (1916)
* [A10] Hjelmslev J., Die natürliche Geometrie, Abh. Math. Sem. Univ. Hamburg 2, 1–36 (1923)
* [A11] Iordănescu R., The geometrical Barbilian’s work from a modern point of view, Balkan J. Geom. Appl. 1, 31–36 (1996)
* [A12] Jungnickel D., Hjelmslev’sche Inzidenzgeometrie und verwandte Gebiete. Literaturverzeichnis II, J. Geom. 16, 138–147 (1981)
* [A13] Keppens D., 50 years of finite geometry, the “geometries over finite rings” part, Innov. Incidence Geom. 15, 123–143 (2017)
* [A14] Klingenberg W., Projektive und affine Ebenen mit Nachbarelementen, Math. Z. 60, 384–406 (1954)
* [A15] Klingenberg W., Euklidische Ebenen mit Nachbarelementen, Math. Z. 61, 1–25 (1954)
* [A16] Klingenberg W., Desarguessche Ebenen mit Nachbarelementen, Abh. Math. Sem. Univ. Hamburg 20, 97–111 (1955)
* [A17] Klingenberg W., Projektive Geometrien mit Homomorphismus, Math. Ann. 132, 180–200 (1956)
* [A18] Kotelnikov A. P., Screw Calculus and Some Applications to Geometry and Mechanics (in Russian), Annals of the Imperial University of Kazan, Russia (1895)
* [A19] Predella P., Saggio di Geometria non-Archimedea (Nota II), Batt. G. 3, 161–171 (1912)
* [A20] Ree R., On projective geometry over full matrix rings, Trans. Amer. Math. Soc. 6, 144–150 (1955)
* [A21] Segre C., Le geometrie proiettive nei campi di numeri duali, Nota I e II, Atti R. Acc.
Scienze Torino 47, 114–133 and 164–185 (1911–1912) (in: Corrado Segre, Opere, a cura della Unione Matematica Italiana, Volume II, Edizione Cremonese, Roma, 396–431 (1958))
* [A22] Study E., Geometrie der Dynamen, Leipzig, Germany, 603 pp. (1903)
* [A23] Törner G. and Veldkamp F.D., Literature on geometry over rings, J. Geom. 42, 180–200 (1991)
* [A24] Veldkamp F.D., Geometry over rings, chapter 19 in Handbook of Incidence Geometry: Buildings and Foundations. Edited by F. Buekenhout, Elsevier, Amsterdam, 1420 pp. (1995)

## B The Belgian contribution to early ring geometry

* [B01] Bilo J. and Depunt J., Over de lineaire transformaties van de ternionenrechte (in Dutch), Med. Koninkl. Vlaamse Acad. Wet., Lett. schone Kunsten België, Kl. Wet. 24 (8), 1–36 (1962)
* [B02] Bingen F., Géométrie projective sur un anneau semi–primaire, Acad. Roy. Belg. Bull. Cl. Sci. 52, 13–24 (1966)
* [B03] Depunt J., Sur la géométrie ternionienne dans le plan, Bull. Soc. Math. Belg. 11, 123–133 (1959)
* [B04] Depunt J., Grondslagen van de analytische projectieve ternionenmeetkunde van het platte vlak (in Dutch), Verh. Kon. Vl. Ac. voor Wet., Lett. en Sch. K. van België, Kl. der Wet. 63, 1–99 (1960)
* [B05] De Winne P., An extension of the notion “cross-ratio of an ordered 4-tuple of points of the projective line” to an ordered ($n+3$)-tuple of points (resp. hyperplanes) of the $n$-dimensional space over an arbitrary ring with identity, part I: the $n$-dimensional projective space $S_{n}$ over an arbitrary ring with identity, Simon Stevin, 47, 139–159 (1974)
* [B06] De Winne P., Een studie betreffende het projectieve vlak over de totale matrixalgebra $M_{2}(K)$ der $2\times 2$–matrices met elementen in een algebraïsch afgesloten veld (in Dutch), Verh. Kon. Vl. Ac. voor Wet., Lett. en Sch. K. van België, Kl. der Wet., 144, 1–80 (1978)
* [B07] Hua L.-K., Geometries of Matrices I. Generalizations of Von Staudt’s theorem, Trans. Amer. Math. Soc. 57, 441–481 (1945)
* [B08] Hubaut X., Construction d’une droite projective sur une algèbre associative, Acad. Roy. Belg. Bull. Cl. Sci. 50, 618–623 (1964)
* [B09] Hubaut X., Algèbres projectives, Bull. Soc. Math. Belg. 17, 495–502 (1965)
* [B10] Thas J.A., Dubbelverhouding van een geordend puntenviertal op de projectieve rechte over een associatieve algebra met éénelement (in Dutch), Simon Stevin, 42 (3), 97–111 (1969)
* [B11] Thas J.A., Een studie betreffende de projectieve rechte over de totale matrix algebra $M_{3}(K)$ der $3\times 3$–matrices met elementen in een algebraïsch afgesloten veld $K$ (in Dutch), Verh. Kon. Vl. Ac. voor Wet., Lett. en Sch. K. van België, Kl. der Wet., 112, 1–151 (1969)
* [B12] Thas J.A., The $m$-dimensional projective space $S_{m}(M_{n}(GF(q)))$ over the total matrix algebra $M_{n}(GF(q))$ of the $n\times n$-matrices with elements in the Galois field $GF(q)$, Rend. Mat. 4, 459–532 (1971)
* [B13] Thas J.A., Deduction of properties, valid in the projective space $S_{3n-1}(K)$, using the projective plane over the total matrix algebra $M_{n}(K)$, Simon Stevin, 46, 3–16 (1972)
* [B14] Vanhelleputte C., Een studie betreffende de projectieve meetkunde over de ring der $(2\times 2)$-matrices met elementen in een commutatief lichaam (in Dutch), Verhdl. Vlaamse Acad. Wet., Lett. schone Kunsten België, Kl. Wet. 92, 1–93 (1966)

## C The foundations of plane affine ring geometry

* [C01] Armentrout N., Hardy F. and Maxson C., On generalized affine planes, J. Geom.
4, 143–159 (1974) * [C02] Arnold H-J., Die Geometrie der Ringe im Rahmen allgemeiner affiner Strukturen, Hamburger Mathematische Einzelschriften, Göttingen 4, 86 pp. (1971) * [C03] Arnold H-J., A way to the geometry of rings, J. Geom. 1, 155–167 (1971) * [C04] Benz W., Süssche Gruppen in affinen Ebenen mit Nachbarelementen und allgemeineren Strukturen, Abh. Math. Sem. Univ. Hamburg 26, 83–101 (1963) * [C05] Benz W., $\Omega$–Geometrie und Geometrie von Hjelmslev, Math. Ann. 164, 118–123 (1966) * [C06] Benz W., Ebene Geometrie über einem Ring, Math. Nachr. 59, 163–193 (1974) * [C07] Benz W., On Barbilian domains over commutative rings, J. Geom. 12, 146–151 (1979) * [C08] Burian K., Affine H-structures, Comment. Math. Univ. Carolin. 13, 629–635 (1972) * [C09] Burian K., Affine parallel $H$–structures (in Czech), Sb. Prací Ped. Fak. v Ostrav$\breve{\rm e}$ Ser. A 9, 3–5 (1974) * [C10] Burian K., Translation $H$–structures, Sb. Prací Ped. Fak. v Ostrav$\breve{\rm e}$ Ser. A Mat. Fyz. 21, 15–25 (1986) * [C11] Çelik B., On the hyperbolic Klingenberg plane classes constructed by deleting subplanes, J. Inequal. Appl. 357, 1–6 (2013) * [C12] Çelik B., A hyperbolic characterization of projective Klingenberg planes, Int. J. Math. Sci. 2, 10–14 (2008) * [C13] Dorn G., Affine Geometrie über Matrizenringen, Mitt. Math. Sem. Giessen 109, 120 pp. (1974) * [C14] Eugeni F. and Galiè E., Sui piani costruiti su anelli, Dipartimento M.E.T., Università di Teramo, Italy, 143–162 (1991) * [C15] Eugeni F. and Maturo A., Generalized affine planes, J. Inform. Optim. Sci. 12, 431–439 (1991) * [C16] Everett C.J., Affine geometries of vector spaces over rings, Duke Math. J. 9, 873–878 (1942). * [C17] Groze V. and Vasiu A., Affine structures over an arbitrary ring, Studia Univ. Babes-Bolyai Math. 25, 28–31 (1980) * [C18] Kaerlein G., Zur Isomorphie vektorieller Gruppen und affiner Liniengeometrien, J. Geom. 16, 1–4 (1981) * [C19] Keppens D., Affine planes over finite rings, a summary, Aequationes Math. 91, 979–993 (2017) * [C20] Lantz D., Uniqueness of Barbilian domains, J. Geom. 15, 21–27 (1981) * [C21] Lawrence P.A., Affine mappings in the geometries of algebras, J. Geom. 2, 115–143 (1972) * [C22] Leissner W., Affine Barbilian-Ebenen. I, J. Geom. 6, 31–57 (1975) * [C23] Leissner W., Affine Barbilian-Ebenen. II, J. Geom. 6, 105–129 (1975) * [C24] Leissner W., Parallelodromie–Ebenen, J. Geom. 8, 117–135 (1976) * [C25] Leissner W., Barbilianbereiche, in Beiträge zur Geometrischen Algebra, Birkhäuser, Basel–Stuttgart, 219–224 (1977) * [C26] Leissner W., Rings of stable rank 2 are Barbilian rings, Result. Math. 20, 530–537 (1991) * [C27] Lorimer J. W., What is a collineation of the integer plane?, Amer. Math. Mon. 103, 687–691 (1996) * [C28] Lüneburg H., Affine Hjelmslev–Ebenen mit transitiver Translationsgruppe, Math. Z. 79, 260–288 (1962) * [C29] Machala F., Affine Klingenbergsche strukturen, J. Geom. 11 , 16–34 (1978) * [C30] Machala F., Desarguessche affine Ebenen mit Homomorphismus, Geom. Dedicata 3, 493–509 (1975) * [C31] Machala F., Affine planes with homomorphism (in Czech), Kni$\breve{z}$nice, Odb. V$\breve{e}$d. Spisu Vys. U$\breve{c}$. Tech. Brn$\breve{e}$ 56, 79–84 (1975) * [C32] Pickert G., Taktische Konfigurationen über Quasiringen, Aequationes Math. 58, 31–40 (1999) * [C33] Radó F., Affine Barbilian structures, J. Geom. 14, 75–102 (1980) * [C34] Schleicher R., Die Eindeutigkeit der Koordinatisierung von affinen Liniengeometrien durch freie Moduln, J. Geom. 22, 143–148 (1984) * [C35] Schmidt S. 
and Steinitz R., The coordinatization of affine planes by rings, Geom. Dedicata 62, 299–317 (1996)
* [C36] Seier W., Der kleine Satz von Desargues in affinen Hjelmslev-Ebenen, Geom. Dedicata 3, 215–219 (1974)
* [C37] Seier W., Über Translationen in affinen Hjelmslev–Ebenen, Abh. Math. Sem. Univ. Hamburg 43, 224–228 (1975)
* [C38] Seier W., Die Quasitranslationen desarguesscher affiner Hjelmslev-Ebenen, Math. Z. 177, 181–186 (1981)
* [C39] Seier W., Streckungstransitive affine Hjelmslev–Ebenen, Geom. Dedicata 11, 329–336 (1981)
* [C40] Seier W., Eine Bemerkung zum grossen Satz von Desargues in affinen Hjelmslev–Ebenen, J. Geom. 20, 181–191 (1983)
* [C41] Sperner E., Affine Räume mit schwacher Inzidenz und zugehörige algebraische Strukturen, J. Reine und angewandte Math. 204, 205–215 (1960)
* [C42] Vasiu A., Hjelmslev–Barbilian structures, Math., Rev. Anal. Numér. Théor. Approximation, Math. 27, 73–77 (1985)
* [C43] Vasiu A., The coordinatisation of a class of $\mathcal{B}$–structures (in Romanian), Stud. Univ. Babes-Bolyai, Math. 31, 35–40 (1986)
* [C44] Vasiu A., On a class of planes with neighbouring elements, Prepr., Babes-Bolyai Univ., Fac. Math., Res. Semin. 10, 59–70 (1986)

## D Metric geometry over rings

* [D01] Bachmann F., Aufbau der Geometrie aus dem Spiegelungsbegriff, 2nd ed., Die Grundlehren der mathematischen Wissenschaften, Band 96. Springer-Verlag, Berlin-New York, 374 pp. (1973)
* [D02] Bachmann F., Hjelmslev–Gruppen, Neudruck. Mit einem Kapitel und einem Anhang von R. Stölting. Arbeitsgemeinschaft über geometrische Fragen, Universität Kiel, Kiel, 175 pp. (1974)
* [D03] Bachmann F., Eine neuere Entwicklung in der ebenen metrischen Geometrie, Ber. Math.-Statist. Sekt. Forsch. Graz 92–95, 22 pp. (1978)
* [D04] Bachmann F., Ebene Spiegelungsgeometrie. Eine Vorlesung über Hjelmslev-Gruppen, Bibliographisches Institut, Mannheim, 340 pp. (1989)
* [D05] Fischbach G., Ein Darstellungssatz für Translations-Hjelmslev-Ebenen mit Spiegelungen, J. Geom. 46, 45–54 (1993)
* [D06] Hjelmslev J., Einleitung in die allgemeine Kongruenzlehre, 1. Mitt. Danske Vid. Selsk. mat.-fys. Medd. 8 (1929); 2. Mitt. 10 (1929); 3. Mitt. 19 (1942); 4. und 5. Mitt. 22 (1945); 6. Mitt. 25 (1949)
* [D07] Knüppel F., Äquiforme Ebenen über kommutativen Ringen und singuläre Prä-Hjelmslev-Gruppen, Abh. Math. Semin. Univ. Hamburg 53, 229–257 (1983)
* [D08] Knüppel F. and Kunze M., Neighbor relation and neighbor homomorphism of Hjelmslev groups, Canad. J. Math. 31, 680–699 (1979)
* [D09] Knüppel F. and Kunze M., Reguläre Hjelmslev–Homomorphismen, Geom. Dedicata 11, 195–225 (1981)
* [D10] Knüppel F. and Salow E., Plane elliptic geometry over rings, Pacific J. Math. 123, 337–384 (1986)
* [D11] Kunze M., Angeordnete Hjelmslevsche Geometrie, ein Ergebnisbericht, Geom. Dedicata 10, 91–111 (1981)
* [D12] Lingenberg R., Metric planes and metric vector spaces. Pure and Applied Mathematics, John Wiley & Sons, New York-Chichester-Brisbane, 209 pp. (1979)
* [D13] Nolte W., Minkowskische Hjelmslevgruppen über lokalen Ringen, Result. Math. 12, 376–385 (1987)
* [D14] Nolte W., Hjelmslevgruppen mit Nachbar-Homomorphismus, J. Geom. 38, 78–94 (1990)
* [D15] Salow E., Singuläre Hjelmslev-Gruppen, Geom. Dedicata 1, 447–467 (1973)
* [D16] Salow E., Einbettung von Hjelmslev-Gruppen in orthogonale Gruppen über kommutativen Ringen, Math. Z. 134, 143–170 (1973)
* [D17] Salow E., Verallgemeinerte Halbdrehungsebenen, Geom.
Dedicata 13, 67–85 (1982) * [D18] Schröder E., Gemeinsame Eigenschaften euklidischer, galileischer und minkowskischer Ebenen, Mitt. Math. Ges. Hamburg 10, 185–217 (1974) * [D19] Schröder E., Modelle ebener metrischer Ringgeometrien, Abh. Math. Sem. Univ. Hamburg 48, 139–170 (1979) * [D20] Schröder E., Metric geometry, chapter 17 in Handbook of Incidence Geometry: Buildings and Foundations. Edited by F. Buekenhout, Elsevier, Amsterdam, 1420 pp. (1995) * [D21] Stölting R., Über endliche Hjelmslev-Gruppen, Math. Z. 135, 249–255 (1973/74) * [D22] Stölting R., Ebene metrische Geometrien über projektiven Moduln, Abh. Math. Sem. Univ. Hamburg 50, 166–177 (1980) * [D23] Struve H. and Struve R., Hjelmslevgruppen, in denen sich die Punkte gegen Geraden austauschen lassen, Geom. Dedicata 13, 399–417 (1983) * [D24] Struve H. and Struve R., Ein spiegelungsgeometrischer Aufbau der cominkowskischen Geometrie, Abh. Math. Semin. Univ. Hamburg 54, 111–118 (1984) * [D25] Struve H. and Struve R., Coeuklidische Hjelmslevgruppen, J. Geom. 34, 181–186 (1989) * [D26] Struve R., Algebraisierung singulärer Hjelmslevgruppen, Geom. Dedicata 13, 309–323 (1982) * [D27] von Benda H. and Knüppel F., Hjelmslev-Gruppen über lokalen Ringen, Geom. Dedicata 5, 195–206 (1976) ## E The florescence of Hjelmslev geometry * [E01] Artmann B., Hjelmslev planes derived from modular lattices, Canad. J. Math. 21, 76–83 (1969) * [E02] Artmann B., Hjelmslev–Ebenen mit verfeinerten Nachbarschaftsrelationen, Math. Z. 112, 163–180 (1969) * [E03] Artmann B., Uniforme Hjelmslev–Ebenen und modulare Verbände, Math. Z. 111, 15–45 (1969) * [E04] Artmann B., Über die Einbettung uniformer affiner Hjelmslev–Ebenen in projektive Hjelmslev–Ebenen, Abh. Math. Semin. Univ. Hamburg 34, 127–134 (1970) * [E05] Artmann B., Existenz und projektive Limiten von Hjelmslev–Ebenen $n$-ter Stufe, Atti Convegno Geom. combinat. Appl. Perugia, 27–41 (1971) * [E06] Artmann B., Desarguessche Hjelmslev-Ebenen $n$–ter Stufe, Mitt. Math. Sem. Giessen 91, 1–19 (1971) * [E07] Artmann B., Geometric aspects of primary lattices, Pacific J. Math. 43, 15–25 (1972) * [E08] Bacon P., Strongly $n$–uniform and level $n$ Hjelmslev planes, Math. Z. 127, 1–9 (1972) * [E09] Bacon P., On the extension of projectively uniform affine Hjelmslev planes, Abh. Math. Semin. Univ. Hamburg 41, 185–189 (1974) * [E10] Brungs H. and Törner G., Embedding right chain rings in chain rings, Canad. J. Math. 30, 1079–1086 (1978) * [E11] Craig R., Extensions of finite projective planes. I. Uniform Hjelmslev planes, Canad. J. Math. 16, 261–266 (1964) * [E12] Cronheim A., Cartesian groups, formal power series and Hjelmslev planes, Arch. Math. 27, 209–220 (1976) * [E13] Cronheim A., Dual numbers, Witt vectors, and Hjelmslev planes, Geom. Dedicata 7, 287–302 (1978) * [E14] Drake D.A., Projective extensions of uniform affine Hjelmslev planes, Math. Z. 105, 196–207 (1968) * [E15] Drake D.A., On $n$–uniform Hjelmslev planes, J. Comb. Theory Ser. A 9, 267–288 (1970) * [E16] Drake D.A., The translation groups of $n$–uniform translation Hjelmlev planes, Pacific J. Math. 38, 365–375 (1971) * [E17] Drake D.A., The structure of $n$–uniform translation Hjelmslev planes, Trans. Amer. Math. Soc. 175, 249–282 (1973) * [E18] Drake D.A., Near affine Hjelmslev planes, J. Comb. Theory Ser. A 16, 34–50 (1974) * [E19] Drake D.A., Existence of parallelisms and projective extensions for strongly $n$–uniform near affine Hjelmslev planes, Geom. 
Dedicata 3, 191–214 (1974) * [E20] Drake D.A., Affine Hjelmslev-Ebenen mit verfeinerten Nachbarschaftsrelationen, Math. Z. 143, 15–25 (1975) * [E21] Drake D.A., All $n$–uniform quasitranslation Hjelmslev planes are strongly $n$–uniform, Proc. Amer. Math. Soc. 51, 494–498 (1975) * [E22] Drake D.A., Squeezing the accordion in a strongly $n$–uniform Hjelmslev plane, Math. Z. 185, 151–166 (1984) * [E23] Dugas M., Eine Kennzeichnung der endlichen desarguesschen Hjelmslev-Ebenen, Geom. Dedicata 3, 295–324 (1974) * [E24] Dugas M., Der Zusammenhang zwischen Hjelmslev-Ebenen und H-Verbänden, Geom. Dedicata 3, 295–324 (1974) * [E25] Kleinfeld E., Finite Hjelmslev planes, Illinois J. Math. 3, 403–407 (1959) * [E26] Lorimer J.W., The fundamental theorem of desarguesian affine Hjelmslev planes, Mitt. Math. Sem. Giessen 119, 6–14 (1975) * [E27] Lorimer J.W., Morphisms and the fundamental theorem of affine Hjelmslev planes, Mathematical Report 64, McMaster University, Hamilton, Ontario, Canada (1973) * [E28] Lorimer J. W., Structure theorems for commutative Hjelmslev rings with nilpotent radicals, C. R. Math. Rep. Acad. Sci. Canada 6, 123–127 (1984) * [E29] Lorimer J. W., Affine Hjelmslev rings and planes, Annals of Discr. Math. 37, 265–276 (1988) * [E30] Lorimer J.W. and Lane N.D., Morphisms of affine Hjelmslev planes, Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 56, 880–885 (1974) * [E31] Lorimer J.W. and Lane N.D., Desarguesian affine Hjelmslev planes, J. für die reine und angew. Math. 1, 336–352 (1975) * [E32] Machala F., Über projektive Erweiterung affiner Klingenbergscher Ebenen, Czech. Math. J. 29, 116–129 (1979) * [E33] Machala F., Über eine Klasse affiner Klingenbergscher Ebenen, die projektiv erweiterbar sind, Acta Univ. Palacki. Olomuc., Fac. Rerum Nat. 19, 65–74 (1980) * [E34] Seier W., Zentrale und axiale Kollineationen in projektiven Hjelmslev–Ebenen, J. Geom. 17, 35–45 (1981) * [E35] Skornjakov, L. A., Rings chain–like from the left (in Russian) Izv. Vyssh. Uchebn. Zaved. Matematika 4, 114–117 (1966) * [E36] Törner G., Eine klassifizierung von Hjelmslev–ringen und Hjelmslev–Ebenen, Mitt. Math. Sem. Giessen, 107, 1–77 (1974) * [E37] Törner G., Über Homomorphismen projektiver Hjelmslev–Ebenen, J. Geom. 5, 1–13 (1974) * [E38] Törner G., Homomorphismen von affinen Hjelmslev–Ebenen, Math. Z. 141, 159–167 (1975) * [E39] Törner G., $n$-uniforme projektive Hjelmslev-Ebenen sind stark $n$-uniform, Geom. Dedicata 6, 291–295 (1977) * [E40] Törner G., Über den Stufenaufbau von Hjelmslev–Ebenen, Mitt. Math. Sem. Giessen, 126, 1–43 (1977) * [E41] Törner G., $(r^{n-1},r)$ Hjelmslev–Ebenen des Typs $n$, Math. Z. 154, 189–201 (1977) * [E42] Törner G., Über ein Problem von Klingenberg, Arch. Math. 28, 253–254 (1977) * [E43] Törner G., Faktorisierungen von Epimorphismen projektiver Ebenen, Geom. Dedicata 18, 281–291 (1985) ## F The continuation of the Hjelmslev epoch * [F01] Al-Khamees Y., The enumeration of finite principal completely primary rings, Abh. Math. Sem. Univ. Hamburg 51, 226–231 (1981) * [F02] Civolani N., Hyperfree extensions of partial Klingenberg planes, Geom. Dedicata 9, 467–475 (1980) * [F03] Clark W.E. and Liang J., Enumeration of finite commutative chain rings, J. Algebra 27, 445–453 (1973) * [F04] Clark W.E. and Drake D.A., Finite chain rings, Abh. Math. Sem. Univ. Hamburg 39, 147–153 (1973) * [F05] Drake D.A., Nonexistence results for finite Hjelmslev planes, Abh. Math. Sem. Univ. 
Hamburg 40, 100–110 (1974) * [F06] Drake D.A., More new integer pairs for finite Hjelmslev planes, Illinois J. Math. 19, 618–627 (1975) * [F07] Drake D.A., Charakterisierungen der Hjelmslev-Ebenen mit Invarianten (4,2), Arch. Math. 27, 436–440 (1976) * [F08] Drake D.A., Constructions of Hjelmslev planes, J. Geom. 10, 179–193 (1977) * [F09] Drake D.A., The use of auxiliary sets of matrices in the construction of Hjelmslev and Klingenberg structures, Lecture Notes in Pure and Appl. Math. 82, 129–153 (1983) * [F10] Drake D.A. and Hale M., Group constructible $(t,k)$–nets and Hjelmslev planes, J. Algebra 48, 301–331 (1977) * [F11] Drake D.A. and Jungnickel D., Klingenberg structures and partial designs I: Congruence relations and solutions, J. Stat. Plann. Inference 1, 265–287 (1977) * [F12] Drake D.A. and Jungnickel D., Klingenberg structures and partial designs. II: Regularity and uniformity, Pacific J. Math. 76, 389–415 (1978) * [F13] Drake D.A. and Jungnickel D., Das Existenzproblem für projektive (8,5)-Hjelmslevebenen, Abh. Math. Sem. Univ. Hamburg 50, 118–126 (1980) * [F14] Drake D.A. and Jungnickel D., Finite Hjelmslev planes and Klingenberg epimorphisms, in: Rings and geometry, Proc. NATO Adv. Study Inst., Istanbul/Turkey 1984, NATO ASI Ser., Ser. C 160, 153–231 (1985) * [F15] Drake D.A. and Lenz H., Finite Klingenberg planes, Abh. Math. Sem. Univ. Hamburg 44, 70–83 (1975) * [F16] Drake D.A. and Lenz H., Finite Hjelmslev planes with new integer invariants, Bull. Amer. Math. Soc. 82, 265–267 (1976) * [F17] Drake D.A. and Sane S., Maximal intersecting families of finite sets and $n$–uniform Hjelmslev planes, Proc. Amer. Math. Soc. 86, 358–362 (1982) * [F18] Drake D.A. and Sane S., Auxiliary sets of matrices with new step parameter sequences, Linear Algebra Appl. 46, 131–153 (1982) * [F19] Drake D.A. and Shult E., Construction of Hjelmslev planes from $(t,k)$–nets, Geom. Dedicata 5, 377–392 (1976) * [F20] Drake D.A. and Törner G., Die Invarianten einer Klasse projektiver Hjelmslev–Ebenen, J. Geom. 7, 157–174 (1976) * [F21] Hale M. and Jungnickel D., A generalization of Singer’s theorem, Proc. Amer. Math. Soc. 71, 280–284 (1978) * [F22] Jungnickel D., Verallgemeinerte Klingenberg–Ebenen, Mitt. Math. Sem. Giessen 120, 1–10 (1976) * [F23] Jungnickel D., Hjelmslevebenen mit regulärer abelscher Kollineationsgruppe, Beiträge zur geometrischen Algebra (Proc. Sympos. Duisburg 1977), 157–165 (1977) * [F24] Jungnickel D., Regular Hjelmslev planes. II, Trans. Amer. Math. Soc. 241, 321–330 (1978) * [F25] Jungnickel D., On the congruence relations of regular Klingenberg structures, J. Combin. Inform. System Sci. 3, 49–57 (1978) * [F26] Jungnickel D., Regular Hjelmslev planes, J. Comb. Theory Ser. A 26, 20–37 (1979) * [F27] Jungnickel D., On an assertion of Dembowski, J. Geom. 12, 168–174 (1979) * [F28] Jungnickel D., Construction of regular proper CK–planes, J. Combin. Inform. System Sci. 4, 14–18 (1979) * [F29] Jungnickel D., On balanced regular Hjelmslev planes, Geom. Dedicata 8, 445–462 (1979) * [F30] Jungnickel D., Some new combinatorial results on finite Klingenberg structures, Utilitas Math. 16, 249–269 (1979) * [F31] Jungnickel D., A class of uniform Klingenberg matrices, Ars Combin. 10, 91–94 (1980) * [F32] Limaye B.V. and Sane S., On partial designs and $n$–uniform projective Hjelmslev planes, J. Combin. Inform. System Sci. 3, 223–227 (1978) * [F33] Neumaier A., Nichtkommutative Hjelmslev-Ringe, Festband für H. 
Lenz, Freie Universität Berlin, 200–213 (1976) * [F34] Sane S., Some new invariant pairs $(t,3)$ for projective Hjelmslev planes, J. Geom. 15, 64–73 (1981) * [F35] Sane S., New integer pairs for Hjelmslev planes, Geom. Dedicata 10, 35–48 (1981) * [F36] Sane S., On class-regular projective Hjelmslev planes, in: Finite geometries and designs, London Math. Soc. Lecture Note Ser. 49, 332–336 (1981) * [F37] Sane S., On the theorems of Drake and Lenz, Aequationes Math. 23, 223–232 (1981) * [F38] Sane S. and Singhi N., On the structure of a finite projective Klingenberg plane, Congr. Numer. 33, 285–292 (1981) * [F39] Sedlar V., Incidence matrices of finite projective uniform Hjelmslev planes (in Czech) Sb. Praci Ped. Fak. v Ostrave Ser. A 17, 25–38 (1982) ## G The coordinatization of Klingenberg and Hjelmslev planes * [G01] Akpinar A., Çelik B. and Çiftçi S., Cross-ratios and 6-figures in some Moufang-Klingenberg planes, Bull. Belg. Math. Soc. - Simon Stevin 15, 49–64 (2008) * [G02] Akpinar A., Çelik B. and Çiftçi S., Cross-ratios of points and lines in some Moufang-Klingenberg planes, Hacet. J. Math. Stat. 40, 1–13 (2011) * [G03] Akpinar A., Dayio$\breve{\rm g}$lu A., Do$\breve{\rm g}$an I., Boztemür B., Aslan D. and Gürel Z.S., A note on projective Klingenberg planes over rings of plural numbers, Int. J. of New Technology and Research 4, 103–105 (2018) * [G04] Bacon P., An introduction to Klingenberg planes. Vols. 1–4 (1976, 1977, 1979, 1983), published by the author, 3101 NW 2nd Av, Gainesville, Florida 32607, U.S.A.. * [G05] Bacon P., Desarguesian Klingenberg planes, Trans. Amer. Math. Soc. 241, 343–355 (1978) * [G06] Baker, C., Moulton affine Hjelmslev planes Canad. Math. Bull. 21, 135–142 (1978) * [G07] Baker, C., Lane N. and Lorimer J.W., Local alternative rings and finite alternative right chain rings, C. R. Math. Rep. Acad. Sci. Canada 12, 53–58 (1990) * [G08] Baker, C., Lane N. and Lorimer J.W., An affine characterization of Moufang projective Klingenberg planes, Results Math. 17, 27–36 (1990) * [G09] Baker, C., Lane N. and Lorimer J.W., The Artin-Zorn theorem for finite punctually cohesive projective Klingenberg planes, Ars Comb. Ser. B 29, 143–149 (1990) * [G10] Baker, C., Lane N. and Lorimer J.W., A coordinatization for Moufang Klingenberg planes, Simon Stevin 65, 3–22 (1991) * [G11] Baker C.A. and Lorimer J.W., Coordinate rings of topological Klingenberg planes. II: The algebraic foundation for a projective theory, J. Geom. 73, 49–92 (2002) * [G12] Blunck A., Projectivities in Moufang-Klingenberg planes, Geom. Dedicata 40, 341–359 (1991) * [G13] Blunck A., Cross–ratios in Moufang-Klingenberg planes. Geom. Dedicata 43, 93–107 (1992) * [G14] Çelik B., Akpinar A. and Çiftçi S., 4-transitivity and 6-figures in some Moufang-Klingenberg planes, Monatsh. Math. 152, 283–294 (2007) * [G15] Çelik B. and Çiftçi S., Cross-ratios over the geometric structures which are coordinatized with alternative or local alternative rings, Commun. Fac. Sci. Univ. Ankara, Ser. A1 43, 105–117 (1994) * [G16] Çelik B. and Erdoğan F., On addition and multiplication of points in a certain class of projective Klingenberg planes, J. Inequal. Appl. 230, 1–9 (2013) * [G17] Çelik B. and Dayioglu A., The collineations which act as addition and multiplication on points in a certain class of projective Klingenberg planes, J. Inequal. Appl. 193, 1–9 (2013) * [G18] Çelik N., Çiftçi S. and Akpinar A., Some properties of 6-figures and cross-ratios in some Moufang-Klingenberg planes, J. Algebra Appl. 
9, 173–184 (2010) * [G19] Çelik B., Akpinar A. and Çiftçi S., On harmonicity in some Moufang-Klingenberg planes, Turk. J. Math. 34, 249–260 (2010) * [G20] Cyganova V. K., The impossibility of introducing universally comprehensible configurational propositions into a projective plane with neighboring elements (in Russian), Smolensk. Gos. Ped. Inst. Uchen. Zap. 18, 35–43 (1967) * [G21] Cyganova V. K., Ternary rings of affine Hjelmslev planes (in Russian), Smolensk. Gos. Ped. Inst. Uchen. Zap. 18, 44–69 (1967) * [G22] Cyganova V. K., H-Ternaries and configuration theorems with their algebraic equivalents in affine Hjelmslev planes (in Russian), Izv. Akad. Nauk BSSR, Ser. Fiz.-Mat. Nauk. 3, 125–126 (1973) * [G23] Cyganova V. K., Dependence of some configurational theorems in affine Hjelmslev planes (in Russian), Izv. Akad. Nauk BSSR, Ser. Fiz.-Mat. Nauk 3, 127 (1973) * [G24] Cyganova V. K., Affine specialization of the configuration proposition $D_{i}(8,11,14)$ in an AH-plane, and its algebraic equivalent (in Russian), Vestsi Akad. Navuk BSSR Ser. Fiz.-Mat. Navuk 5, 104–105 (1975) * [G25] Cyganova V. K., The configuration postulate $D_{H}(9,13,17)$, and its algebraic equivalent (in Russian), Vestsi Akad. Navuk BSSR Ser. Fiz.-Mat. Navuk 1, 120–121 (1978) * [G26] Cyganova V. K., A geometric interpretation of the distributivity of an H-ternary (in Russian), Vestsi Akad. Navuk BSSR Ser. Fiz.-Mat. Navuk 2, 31–35 (1980) * [G27] Cyganova V. K., The second minor Pappos theorem and its algebraic equivalent in Hjelmslev affine planes (in Russian), Vestsi Akad. Navuk BSSR Ser. Fiz.-Mat. Navuk 1, 107–109 (1984) * [G28] Drake D.A., Coordinatization of $H$-planes by $H$-modules, Math. Z. 115, 79–103 (1970) * [G29] Dugas M., Charakterisierungen endlicher Desarguesscher uniformer Hjelmslev-Ebenen, Geom. Dedicata 3, 295–324 (1974) * [G30] Dugas M., Moufang-Hjelmslev-Ebenen, Geom. Dedicata 3, 295–324 (1974) * [G31] Dugas M., Verallgemeinerte André–Ebenen mit Epimorphismen auf Hjelmslev–Ebenen, Geom. Dedicata 8, 105–123 (1979) * [G32] Eliseev, E.M., Desarguesian theorems and collineations of projective Hjelmslev planes, (in Russian) Geometry of incidence structures and differential equations, Collect. sci. Artic., Smolensk, 30–40 (1981) * [G33] Emel’chenkov E.P., Translation AH–planes and H–ternaries (in Russian), Smolenk. Gos. Ped. Inst. Uchen. Zap. 4, 74–83 (1973) * [G34] Emel’chenkov E.P., The PH–ternary of a projective Hjelmslev plane (in Russian), Smolenk. Gos. Ped. Inst. Uchen. Zap. 4, 93–101 (1973) * [G35] Emel’chenkov E.P., Homotheties of AH–planes and H–ternaries (in Russian), Gos. Ped. Inst., Leningrad, Geom. Topol. 2, 89–93 (1974) * [G36] Emel’chenkov E.P., On $(\Pi,l)$–collineations of AH–planes (in Russian), in Modern Geometry, Gos. Ped. Inst., Leningrad, 58–60 (1978) * [G37] Hall M., Projective planes, Trans. Amer. Math. Soc. 54, 229–277 (1943) * [G38] Hughes D.R., Planar division neo-rings, Trans. Amer. Math. Soc. 80, 502–527 (1955) * [G39] Hughes D.R. and Piper F.C, Projective planes, Springer–Verlag, Berlin, 291 pp. (1973) * [G40] Jukl M., Desargues theorem for Klingenberg projective plane over certain local ring, Acta Univ. Palacki. Olomuc., Fac. Rerum Nat., Math. 36, 33–39 (1997) * [G41] Keppens D., Coordinatization of projective Klingenberg planes. I: Introduction of coordinates and planar sexternary rings, Simon Stevin 62, 63–90 (1988) * [G42] Keppens D., Coordinatization of projective Klingenberg planes. 
II: Connections between geometric properties of a PK-plane and algebraic properties of a coordinatizing PSR, Simon Stevin 62, 163–188 (1988) * [G43] Keppens D., Coordinatization of projective Klingenberg planes. III: Construction of planar sexternary rings and examples, Simon Stevin 63, 117–140 (1989) * [G44] Kolb E., Projective Klingenberg planes over nearrings, J. Geom. 46, 82–91 (1993) * [G45] Kolb E., Hjelmslev planes over nearrings, Discrete Math. 155, 147–155 (1996) * [G46] Lorimer J.W., Coordinate theorems for affine Hjelmslev planes, Ann. Mat. Pura Appl. 105, 171–190 (1975) * [G47] Machala F., Erweiterte lokale Ternärringe, Czech. Math. J. 27 , 560–572 (1977) * [G48] Machala F., Koordinatisation projektiver Ebenen mit Homomorphismus, Czech. Math. J. 27 , 573–590 (1977) * [G49] Machala F., Koordinatisation affiner Ebenen mit Homomorphismus, Math. Slovaca 27, 181–193 (1977) * [G50] Machala F., Projektive Ebenen mit Homomorphismus und erweiterte lokale Ternärringe, Math. Slovaca 29, 227–237 (1979) * [G51] Machala F., Epimorphismen von lokalen Ternärringen, Czech. Math. J. 33, 70–75 (1983) * [G52] Machala F., Biternärringe und affine lokale Ternärringe, Acta. Univ. Palack. Olomuc. Fac. Rerum. Natur. Math. 27 , 25–37 (1988) * [G53] Mäurer H. and Nolte W., A characterization of Pappian affine Hjelmslev planes, Combinatorics ’86 (Trento, 1986), Ann. Discrete Math. 37, 281–291 (1988) * [G54] Nolte W., Pappussche affine Klingenbergebenen, J. Geom. 52, 152–158 (1995) * [G55] Pickert G., Projektive Ebenen, Springer–Verlag, Berlin, second edition, 388 pp. (1975) * [G56] Shatokhin N. L., Homomorphisms of H-planes and PH-ternars (in Russian), Geometry of incidence structures and differential equations, Collect. sci. Artic., Smolensk, 81–91 (1981) * [G57] Shatokhin N. L., Frame isomorphisms of affine Hjelmslev planes and $\omega$–isotopies of AH ternaries (in Russian), Vladikavkaz. Mat. Zh. 9, 48–54 (2007) * [G58] Skornyakov L.A., Natural domains of Veblen-Wedderburn projective planes (in Russian), Izvestiya Akad. Nauk SSSR. Ser. Mat. 13, 447–472 (1949); english translation in Amer. Math. Soc. Translation 58, 37 pp. (1951) * [G59] Skornyakov L.A., Projective planes (in Russian), Uspehi Matem. Nauk (N.S.) 6, 112–154 (1951); english translation in Amer. Math. Soc. Translation 99, 58 pp. (1953) ## H Order and topology in ring geometry * [H01] Baker C., Ordered affine Hjelmslev planes, J. Geom. 23, 1–13 (1984) * [H02] Baker C., Preordered uniform Hjelmslev planes, J. Geom. 24, 14–17 (1985) * [H03] Baker C. and Lorimer J.W., Coordinate rings of topological Klingenberg planes. I: The affine perspective, Geom. Dedicata 58, 101–116 (1995) * [H04] Baker C., Lane N.D. and Lorimer J.W., Order and topology in projective Hjelmslev planes, J. Geom. 19, 8–42 (1982) * [H05] Baker C., Lane N.D. and Lorimer J.W., A construction for topological non–desarguesian affine Hjelmslev planes, Arch. Math. 50, 83–92 (1988) * [H06] Baker C., Lane N.D., Laxton J.A. and Lorimer J.W., Preordered affine Hjelmslev planes, J. Geom. 23, 14–44 (1984) * [H07] Lorimer J.W., Topological Hjelmslev planes, Geom. Dedicata 7, 185–207 (1978) * [H08] Lorimer J.W., Connectedness in topological Hjelmslev planes, Ann. Mat. Pura Appl. 118, 199–216 (1978) * [H09] Lorimer J.W., Locally compact Hjelmslev planes, C. R. Math. Rep. Acad. Sci. Canada 1, 309–314 (1978/79) * [H10] Lorimer J.W., Locally compact Desarguesian Hjelmslev planes of level $n$, C. R. Math. Rep. Acad. Sci. 
Canada 2, 141–145 (1980) * [H11] Lorimer J.W., Locally compact Hjelmslev planes and rings, Canad. J. Math. 33, 988–1021 (1981) * [H12] Lorimer J.W., Dual numbers and topological Hjelmslev planes, Canad. Math. Bull. 26, 297–302 (1983) * [H13] Lorimer J.W., Compactness in topological Hjelmslev planes, Canad. Math. Bull. 27, 423–429 (1984) * [H14] Lorimer J.W., A topological characterization of Hjelmslev’s classical geometries. in: Rings and geometry (Istanbul, 1984), 81–151, NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., 160, Reidel, Dordrecht, (1985) * [H15] Lorimer J.W., The classification of compact punctally cohesive Desarguesian projective Klingenberg planes, Geom. Dedicata 36, 347–358 (1990) * [H16] Lorimer J.W., Topological characterizations of finite desarguesian projective Hjelmslev planes, Ars Comb. Ser. A 29, 247–254 (1990) * [H17] Lorimer J.W., The classification of compact right chain rings, Forum Math. 4, 335–347 (1992) * [H18] Machala F., Angeordnete affine Klingenbergsche Ebenen, Czech. Math. J. 30, 341—356 (1980) * [H19] Machala F., Angeordnete affine lokale Ternärringe und angeordnete affine Klingenbergsche Ebenen, Czech. Math. J. 30, 556–568 (1980) * [H20] Machala F., Fastgeordnete und geordnete affine Klingenbergsche Ebenen, Čas. Pěst. Mat. 106, 138–155 (1981) * [H21] Machala F., Fastgeordnete und geordnete lokale Ringe und ihre geometrische Anwendung, Čas. Pěst. Mat. 106, 269—278 (1981) * [H22] Machala F., Über angeordnete affine Klingenbergsche Ebenen, die sich in projektive Klingenbergsche Ebenen einbetten lassen, Acta Univ. Palacki. Olomuc., Fac. Rerum Nat., Math. 21, 9–31 (1982) * [H23] Machala F., Über die Fortsetzung einer Anordnung der affinen Klingenbergschen Ebene in einer Anordnung der projektiven Klingenbergschen Ebene, Acta Univ. Palacki. Olomuc., Fac. Rerum Nat., Math. 22, 19–36 (1983) * [H24] Machala F., Über die Fortsetzung einer Anordnung der affinen Klingenbergschen Ebene in einer Anordnung der projektiven Klingenbergschen Ebene, II. Acta Univ. Palacki. Olomuc., Fac. Rerum Nat., Math. 23, 107–136 (1984) * [H25] Machala F., Fastgeordnete affine lokale Ternärringe und pregeordnete Biternärringe, Acta Univ. Palacki. Olomuc., Fac. Rerum Nat., Math. 28, 11–25 (1989) * [H26] Machala F., Angeordnete Klingenbergsche Ebenen, Olomouc: Univerzita Palackého, 100 pp. (1990) * [H27] Pambuccian V., The axiomatics of ordered geometry, I. Ordered incidence spaces, Expositiones Mathematicae 29, 24–66 (2011) * [H28] Salzmann H., Topological planes, Adv. in Math. 2, 1–60 (1967) * [H29] Skornyakov L.A., Topological projective planes (in Russian), Trudy Moskov. Mat. Obšč. 3, 347–373 (1954) * [H30] Thomas L., Ordered desarguesian affine Hjelmslev planes, Canad. Math. Bull. 21, 229–235 (1978) ## I The revival of ring geometry (Veldkamp, Faulkner) * [I01] Faulkner J.R., Stable range and linear groups for alternative rings, Geom. Dedicata 14, 177–188 (1983) * [I02] Faulkner J.R., Coordinatization of Moufang–Veldkamp planes, Geom. Dedicata 14, 189–201 (1983) * [I03] Faulkner J.R., Barbilian planes, Geom. Dedicata 30, 125–181 (1989) * [I04] Faulkner J.R., A geometric construction of Moufang planes, Geom. Dedicata 29, 133–140 (1989) * [I05] Faulkner J.R., Current results on Barbilian planes, Sci. Bull., Politeh. Univ. Buchar., Ser. A 55, 146–152 (1993) * [I06] Faulkner J.R., Projective remoteness planes, Geom. Dedicata 60, 237–275 (1996) * [I07] Faulkner J.R., The role of nonassociative algebra in projective geometry, Graduate Studies in Mathematics, 159. 
American Mathematical Society, Providence, RI, 229 pp. (2014)
* [I08] Ferrar J., Homomorphisms of Moufang-Veldkamp planes, Geom. Dedicata 46, 299–311 (1993)
* [I09] Ferrar J. and Veldkamp F.D., Neighbor-preserving homomorphisms between projective ring planes, Geom. Dedicata 18, 11–33 (1985)
* [I10] Ferrar J. and Veldkamp F.D., Admissible subrings revisited, Geom. Dedicata 23, 229–236 (1987)
* [I11] Hanssens G. and Van Maldeghem H., A note on near–Barbilian planes, Geom. Dedicata 29, 233–235 (1989)
* [I12] Keppens D., Neighbor-preserving epimorphisms between projective Klingenberg planes, Geom. Dedicata 29, 209–219 (1989)
* [I13] Keppens D., Distant-preserving epimorphisms between projective Klingenberg planes, Geom. Dedicata 29, 237–247 (1989)
* [I14] Knüppel F., Projective planes over rings, Results Math. 12, 348–356 (1987)
* [I15] Knüppel F., Regular homomorphisms of generalized projective planes, J. Geom. 29, 170–181 (1987)
* [I16] Moore D’Ortona C., Homomorphisms of projective remoteness planes, Geom. Dedicata 72, 111–122 (1998)
* [I17] Péntek K., The minimal projective ring planes (in Hungarian), Berzsenyi Dániel Tanárk. Föisk. Tud. Közl., Termtud. 8, 33–70 (1992)
* [I18] Péntek K., A generalization of the projective Klingenberg plane, Berzsenyi Dániel Tanárk. Föisk. Tud. Közl., Termtud. 9, 19–42 (1994)
* [I19] Péntek K., The $\Delta^{2}$–configuration and the direct decomposition of projective Veldkamp planes (in Hungarian), Berzsenyi Dániel Tanárk. Föisk. Tud. Közl., Termtud. 10, 27–37 (1996)
* [I20] Péntek K., On the direct decomposition of pappian projective Veldkamp planes, Publ. Math. 53, 347–365 (1998)
* [I21] Pfeiffer T., Pappus’ theorem for ring-geometries, Beitr. Algebra Geom. 39, 461–466 (1998)
* [I22] Spanicciati R., Near–Barbilian planes, Geom. Dedicata 24, 311–318 (1987)
* [I23] Veldkamp F.D., Projective planes over rings of stable rank 2, Geom. Dedicata 11, 285–308 (1981)
* [I24] Veldkamp F.D., Projective ring planes: some special cases, in Atti Conv. Geometria combinatoria e di incidenza, La Mendola, 1982, Rend. Sem. Mat. Brescia 7, 609–615 (1984)
* [I25] Veldkamp F.D., Distant–preserving homomorphisms between projective ring planes, Nederl. Akad. Wetensch. Indag. Math. 47, 443–453 (1985)
* [I26] Veldkamp F.D., Incidence–preserving mappings between ring planes, Nederl. Akad. Wetensch. Indag. Math. 47, 455–459 (1985)

## J Projective and affine Hjelmslev spaces and spaces over rings

* [J01] Bach A., Teilverhältnisse in affinen Räumen über Moduln, Beitr. Algebra Geom. 38, 385–398 (1997)
* [J02] Bisztriczky T. and Lorimer J. W., Axiom systems for affine Klingenberg spaces, Research and Lecture Notes in Mathematics Combinatorics ’88, vol. I, 185–200 (1991)
* [J03] Bisztriczky T. and Lorimer J. W., On hyperplanes and free subspaces of affine Klingenberg spaces, Aequationes Math. 48, 121–136 (1994)
* [J04] Bisztriczky T. and Lorimer J. W., Subspace operations in affine Klingenberg spaces, Bull. Belg. Math. Soc.–Simon Stevin 2, 99–108 (1995)
* [J05] Bisztriczky T. and Lorimer J. W., Translations in affine Klingenberg spaces, J. Geom. 99, 15–42 (2010)
* [J06] Faure C.-A., Morphisms of projective spaces over rings, Adv. Geom. 4, 19–31 (2004)
* [J07] Greferath M., Global-affine morphisms of projective lattice geometries, Results Math. 24, 76–83 (1993)
* [J08] Greferath M. and Schmidt S.E., A unified approach to projective lattice geometries, Geom. Dedicata 43, 243–264 (1992)
* [J09] Greferath M. and Schmidt S.E., On Barbilian spaces in projective lattice geometries, Geom.
Dedicata 43, 337—349 (1992) * [J10] Greferath M. and Schmidt S.E., On point–irreducible projective lattice geometries, J. Geom. 50, 73–83 (1994) * [J11] Greferath M. and Schmidt S.E., On stable geometries, Geom. Dedicata 51, 181–199 (1994) * [J12] James D.G., Projective geometry over rings with stable range condition, Linear and Multilinear Algebra 23, 299–304 (1988) * [J13] Jukl M., On homologies of Klingenberg projective spaces over special commutative local rings, Publ. Math. 55, 113–121 (1999) * [J14] Kapralova, S. B., Dual Galois spaces (in Russian), Trudy Geom. Sem. Kazan. Univ. 12, 38–44 (1980) * [J15] Kreis E., Koordinatisierung von verallgemeinerten affinen Räumen, Result. Math. 32, 304–317 (1997) * [J16] Kreis E. and Schmidt S.E., Darstellung von Hyperebenen in verallgemeinerten affinen Räumen durch Moduln, Results Math. 26, 39–50 (1994) * [J17] Kreuzer A., Hjelmslev-Räume, Result. Math. 12, 148–156 (1987) * [J18] Kreuzer A., A system of axioms for projective Hjelmslev spaces, J. Geom. 40, 125–147 (1991) * [J19] Kreuzer A., Fundamental theorem of projective Hjelmslev spaces, Mitt. Math. Ges. Hamb. 12, 809–817 (1991) * [J20] Kreuzer A., Free modules over Hjelmslev rings in which not every maximal linearly independent subset is a basis, J. Geom. 45, 105–113 (1992) * [J21] Kvirikashvili T.G., The fundamental theorem for affine geometries over rings (in russian), Soobshch. Akad. Nauk Gruz. 148, 196–197 (1993) * [J22] Kvirikashvili T.G., Projective geometries over rings and modular lattices. Algebra and geometry, J. Math. Sci. 153, 495–505 (2008) * [J23] Kvirikashvili T.G. and Lashkhi A., Geometrical maps in ring affine geometries, J. Math. Sci. 186, 759–765 (2012) [translated from Sovrem. Mat. Prilozh. 74 (2011)] * [J24] Landjev I. and Vandendriesche P., On the point-by-subspace incidence matrices of projective Hjelmslev spaces, C. R. Acad. Bulg. Sci. 67, 1485–1490 (2014) * [J25] Landjev I. and Vandendriesche P., On the rank of incidence matrices in projective Hjelmslev spaces, Des. Codes Cryptography 73, 615–623 (2014) * [J26] Lashkhi A., The fundamental theorem of projective geometry for modules and Lie algebras, J. Soviet Math. 42, 1991–2007 (1988) [translated from VINITI Itogi Nauki i tekniki. Sovrem. Mat. i Prilozh., Geometria 18, 167–187 (1986)] * [J27] Lashkhi A., General geometric lattices and projective geometry of modules. J. Math. Sci. 74, 1044–1077 (1995) * [J28] Lashkhi A., Ring geometries and their related lattices, J. Math. Sci. 144, 3960–3967 (2007) [translated from Fundam. Prikl. Mat. 11, 127–137 (2005)] * [J29] Lashkhi A. and Chkhatarashvili D., On the fundamental theorem of affine geometry over ring, Bull. Georgian Acad. Sci. 159, 17–19 (1999) * [J30] Lashkhi A. and Kvirikashvili T. G., Affine geometry of modules over a ring with an invariant basis number, Math. Notes 82, 756–765 (2007) [translated from Mat. Zametki 82, 838–849 (2007)]. * [J31] Leissner W., On classifying affine Barbilian spaces, Result. Math. 12, 157–171 (1987) * [J32] Leissner W., Severin R. and Wolf K., Affine geometry over free unitary modules, J. Geom. 25, 101–120 (1985) * [J33] Limaye N. B., A generalization of Fano’s postulate, Math. Stud. 49, 125–127 (1981) * [J34] Lück H.-H., Projektive Hjelmslevräume, J. Reine Angew. Math. 243, 121–158 (1970) * [J35] Machala F., Projektive Abbildungen projektiver Räume mit Homomorphismus, Czech. Math. J. 25(100), 214–226 (1975) * [J36] Machala F., Homomorphismen projektiver Räume mit Homomorphismus, Czech. Math. J. 
25(100), 454–474 (1975) * [J37] Machala F., Homomorphismen von projektiven Räumen und verallgemeinerte semilineare Abbildungen, $\breve{C}$as. P$\breve{e}$st. Mat. 100, 142–154 (1975) * [J38] Machala F., Eine Ebene im projektiven Raum mit Homomorphismus, Acta Univ. Palack. Olomuc., Fac. Rer. Natur. Math. 15, 5–21 (1976) * [J39] Machala F., Fundamentalsätze der projektiven Geometrie mit Homomorphismus, Rozpr. Cesk. Akad. Ved, Rada Mat. Prir. Ved 90, 81 pp. (1980) * [J40] Magnus T. D., Faulkner geometry, Geom. Dedicata 59, 1–28 (1996) * [J41] Mathiak K., Eine geometrische Kennzeichnung von Homomorphismen desarguesscher projektiver Ebenen, Math. Z. 98, 259–267 (1967) * [J42] Mathiak K., Homomorphismen projektiver Räume und Hjelmslevsche Geometrie, J. Reine Angew. Math. 254, 41–73 (1972) * [J43] Mathiak K., Ein Beweis der Dimensionsformel in projektiven Hjelmslevschen Räumen, J. Reine Angew. Math. 256, 215–220 (1972) * [J44] Mathiak K., Bewertete Vektorräume, J. Reine Angew. Math. 257, 80–90 (1972) * [J45] Mathiak K., Kennzeichnende Eigenschaften bewerteter Vektorräume, J. Reine Angew. Math. 260, 127–132 (1973) * [J46] Mathiak K., $I$-Hülle und $I$-Kern von Moduln über Bewertungsringen, J. Reine Angew. Math. 283/284, 1–8 (1976) * [J47] Mathiak K., Schnitt und Verbindung in projektiven Hjelmslevschen Räumen, J. Reine Angew. Math. 283/284, 9–20 (1976) * [J48] Mathiak K., Projective Hjelmslevsche Räume im nicht invarianten Fall, J. Reine Angew. Math. 291, 182–188 (1977) * [J49] Mathiak K., Valuations of skew fields and projective Hjelmslev spaces, Lecture Notes in Mathematics 1175, Springer-Verlag, Berlin, 116 pp. (1986) * [J50] Mathiak K., Dualität in projektiven Hjelmslevschen Räumen, Result. Math. 12, 166–171 (1987) * [J51] Miron R., The minimality of Weyl’s system of axioms for the affine geometry over an unitary ring, An. Stiint. Univ. “Al. I. Cuza” Iasi Sect. I a Mat. (N.S.) 24, 15–19 (1978) * [J52] Ojanguren M. and Sridharan R., A note on the fundamental theorem of projective geometry, Comment. Math. Helv. 44, 310–315 (1969) * [J53] Ostrowski T. and Dunajewski K., An affine space over a module, Int. Math. Forum 4, 1457–1463 (2009) * [J54] Permutti R., Geometria affine su di un anello, Atti Accad. Naz. Lincei, Mem., Cl. Sci. Fis. Mat. Nat., Sez. I, VIII. 8, 259–287 (1967) * [J55] Pizzarello G., Sugli spazi affini sopra un anello, Rend. Ist. Mat. Univ. Trieste 1, 98–111 (1969) * [J56] Rostomashvili Z., Remark to the projective geometry over rings and corresponding lattices, Bull. Georgian Acad. Sci. 160, 211–212 (1999) * [J57] Sarath B. and Varadarajan K., Fundamental theory of projective geometry, Comm. Algebra 12, 937-952 (1984) * [J58] Schmidt S.E. and Weller S., Fundamentalsatz für affine Räume über Moduln, Results Math. 30, 151–159 (1996) * [J59] Seier W., Über Hjelmslev-Strukturen. I, Abh. Math. Sem. Univ. Hamburg 42, 107–133 (1974) * [J60] Seier W., Über Hjelmslev-Strukturen. II., Abh. Math. Sem. Univ. Hamburg 42, 236–254 (1974) * [J61] Sprenger N., Ein geometrischer Zugang zu Barbilianbereichen Berlin: Logos Verlag. Mainz: Univ. Mainz, Fachbereich Mathematik, 146 pp. (1999) * [J62] Veldkamp F.D., Projective Barbilian spaces. I., Results Math. 12, 222–240 (1987) * [J63] Veldkamp F.D., Projective Barbilian spaces. II., Results Math. 12, 434–449 (1987) * [J64] Veldkamp F.D., $n$–Barbilian domains, Results Math. 23, 177–200 (1993) ## K Projective lines and circle geometries over rings * [K01] Bartnicka E. 
and Matraś A., The distant graph of the projective line over a finite ring with unity, Results Math. 72, 1943–1958 (2017) * [K02] Bartnicka E. and Matraś A., Free cyclic submodules in the context of the projective line, Results Math. 70, 567–580 (2016) * [K03] Bartolone C., Jordan homomorphisms, chain geometries and the fundamental theorem, Abh. Math. Sem. Univ. Hamburg 59, 93–99 (1989) * [K04] Bartolone C. and Bartolozzi F., Topics in geometric algebra over rings, in: Rings and geometry, Proc. NATO Adv. Study Inst., Istanbul/Turkey 1984, NATO ASI Ser., Ser. C 160, 353–389 (1985) * [K05] Bartolone C. and Di Franco F., A remark on the projectivities of the projective line over a commutative ring, Math. Z. 169, 23–29 (1979) * [K06] Benz W., Vorlesungen über Geometrie der Algebren, Springer, Berlin, 368 pp. (1973) * [K07] Blunck A., Cross–ratios over local alternative rings, Result. Math. 19, 246–256 (1991) * [K08] Blunck A., Chain geometries over local alternative algebras, J. Geom. 44, 33–44 (1992) * [K09] Blunck A., A quadric model for Klingenberg chain spaces, Geom. Dedicata 55, 237–246 (1995) * [K10] Blunck A. and Havlicek H., Projective representations. I: Projective lines over rings, Abh. Math. Sem. Univ. Hamburg 70, 287–299 (2000) * [K11] Blunck A. and Havlicek H., Projective representations. II: Generalized chain geometries, Abh. Math. Sem. Univ. Hamburg 70, 301–313 (2000) * [K12] Blunck A. and Havlicek H., Extending the concept of chain geometry, Geom. Dedicata 83, 119–130 (2000) * [K13] Blunck A. and Havlicek H., The connected components on the projective line over a ring, Adv. Geom. 1, 107–117 (2001) * [K14] Blunck A. and Havlicek H., Radical parallelism on projective lines and non-linear models of affine spaces, Math. Pannonica 14, 113–127 (2003) * [K15] Blunck A. and Havlicek H., On distant-isomorphisms of projective lines, Aequationes Math. 69, 146–163 (2005) * [K16] Blunck A. and Havlicek H., Jordan homomorphisms and harmonic mappings, Monatsh. Math. 139, 111–127 (2003) * [K17] Blunck A. and Havlicek H., Projective lines over Jordan systems and geometry of Hermitian matrices, Linear Algebra Appl. 433, 672–680 (2010) * [K18] Blunck A. and Havlicek H., Geometries on $\sigma$–Hermitian matrices, J. Math. Sci. 186, 715–719 (2012) [translated from Sovrem. Mat. Prilozh. 74 (2011)] * [K19] Blunck A. and Herzer A., Kettengeometrien. Eine Einführung, Berichte aus der Mathematik, Shaker Verlag, Aachen, 337 pp. (2005) * [K20] Blunck A. and Pianta S., Lines in 3–space, Mitt. Math. Ges. Hamb. 27, 189–202 (2008) * [K21] Blunck A. and Stroppel M., Klingenberg chain spaces, Abh. Math. Sem. Univ. Hamburg 65, 225–238 (1995) * [K22] Chkhatarashvili D., K. von Staudt’s theorem over Ore domains, Bull. Georgian Acad. Sci. 158, 18–20 (1998) * [K23] Cirlincione L. and Enea M.R., Una generalizzazione del birapporto sopra un anello, Rend. Circ. Mat. Palermo (II) 39, 271–280 (1990) * [K24] Havlicek H., Projective ring lines and their generalizations, Electronic Notes in Discrete Mathematics 40, 151–155 (2013) * [K25] Havlicek H., Von Staudt’s theorem revisited, Aequationes Math. 89, 459–472 (2015) * [K26] Havlicek H., Kosiorek J. and Odehnal B., A point model for the free cyclic submodules over ternions, Result. Math. 63, 1071–1078 (2013) * [K27] Havlicek H., Matraś A. and Pankov M., Geometry of free cyclic submodules over ternions, Abh. Math. Sem. Univ. Hamburg . 81, 237–249 (2011) * [K28] Havlicek H. and Saniga M., Vectors, cyclic submodules, and projective spaces linked with ternions, J. Geom. 
92, 79–90 (2009) * [K29] Havlicek H. and Zanella C., Linear sets in the projective line over the endomorphism ring of a finite field, J. Algebr. Comb. 46, 297–312 (2017) * [K30] Herzer A. and Ramroth H., Die projektive Gerade über einem Ring, der direktes Produkt kommutativer Körper ist, J. Algebra 176, 1–11 (1995) * [K31] Jurga R., The cross-ratio in Hjelmslev planes, Math. Bohem. 122, 243–247 (1997) * [K32] Keppens D., Möbius planes with neighbor relation, Simon Stevin 61, 157–170 (1987) * [K33] Keppens D., Laguerre and Minkowski planes with neighbor relation, J. Geom. 30, 12–27 (1987) * [K34] Kulkarni M., Fundamental theorem of projective geometry over a commutative ring, Indian J. Pure Appl. Math. 11 (1980), 1561–1565 (1980) * [K35] Kvirikashvili T. and Lashkhi A., Harmonic maps and Von Staudt’s theorem over rings, J. Math. Sci. 195, 496–504 (2013) * [K36] Lang K., Spiegelungen in Hjelmslev’schen Kreisgeometrien, Geom. Dedicata 21, 107–121 (1986) * [K37] Lashkhi A., Harmonic maps over rings, Georgian Math. J. 4, 41–64 (1997) * [K38] Limaye N., Projectivities over local rings, Math. Z. 121, 175–180 (1971) * [K39] Limaye N., Cross ratios and projectivities of the line, Math. Z. 129, 49–53 (1972) * [K40] Limaye B and Limaye N., Fundamental theorem for the projective line over non-commutative local rings, Arch. Math. 28, 102–109 (1977) * [K41] Limaye B and Limaye N., The fundamental theorem for the projective line over commutative rings, Aequationes Math. 16, 275–281 (1977) * [K42] Limaye B. and Limaye N., Correction to “Fundamental theorem for the projective line over non-commutative local rings”, Arch. Math. 29, 672 (1977) * [K43] Matraś A. and Siemaszko A., The shortest path problem for the distant graph of the projective line over the ring of integers, Bull. Malays. Math. Sci. Soc. 41, 231–248 (2018) * [K44] Matraś A. and Siemaszko A., The Cayley property of some distant graphs and relationship with the Stern–Brocot tree, Result. Math. 73, No. 141, 14 pp. (2018) * [K45] McDonald, B. R., Projectivities over rings with many units, Comm. Algebra 9, 195–204 (1981) * [K46] McDonald, B. R., Geometric algebra over local rings,. Pure and Applied Mathematics, No. 36. Marcel Dekker, Inc., New York-Basel, 421 pp. (1976) * [K47] Saniga M., Planat M., Kibler, M. and Pracna P., A classification of the projective lines over small rings, Chaos Solitons Fractals 33, 1095–1102 (2007) * [K47] Saniga M., Planat M. and Pracna P., A Classification of the projective lines over small rings II. Non-Commutative Case , arXiv:math/0606500 (2006) * [K49] Schaeffer H., Das von Staudtsche Theorem in der Geometrie der Algebren, J. Reine Angew. Math. 267, 133–142 (1974) * [K50] Seier W., Kettengeometrie über Hjelmslevringen, Beitr. Geom. Algebra, Proc. Symp. Duisburg 1976, 299–303 (1977) * [K51] Seier W., $n$–affine Ebenen mit Nachbarelementen, Math. Sem. Univ. Hamburg 50, 20–31 (1980) ## L Ring geometries and buildings * [L01] Artmann B., Hjelmslev–Ebenen in projektiven Räumen, Arch. Math. 21, 304–307 (1970) * [L02] Bix R., Octonion planes over local rings, Trans. Amer. Math. Soc. 261, 417–438 (1980) * [L03] Bix R., Isomorphism theorems for octonion planes over local rings, Trans. Amer. Math. Soc. 266, 423–439 (1981) * [L04] Faulkner J.R., Octonion planes defined by quadratic Jordan algebras, Mem. Amer. Math. Soc. 104, 71 pp. (1970) * [L05] Faulkner J.R. and Ferrar J., Generalizing the Moufang plane, Rings and geometry (Istanbul, 1984), NATO Adv. Sci. Inst. Ser. C Math. Phys. 
Sci., Reidel, Dordrecht 160, 235–288 (1985) * [L06] Hall J. and Rao A., An algorithm for constructing Hjelmslev planes, in: Algebraic design theory and Hadamard matrices, Springer Proc. Math. Stat. 133, 137–147 (2015) * [L07] Hanssens G. and Van Maldeghem H., On projective Hjelmslev Planes of level $n$, Glasgow Math. J. 31, 257–261 (1989) * [L08] Hanssens G. and Van Maldeghem H., A universal construction for projective Hjelmslev planes of level $n$, Comp. Math. 71, 285–294 (1989) * [L09] Hanssens G. and Van Maldeghem H., Hjelmslev-Quadrangles of level $n$, J. Combin. Theory Ser. A 55, 256–291 (1990) * [L10] Hanssens G. and Van Maldeghem H., A Characterization of $\tilde{C}_{2}$–buildings by floors, Simon Stevin 65, 217–265 (1991) * [L11] James D.G., Polar spaces over rings with absolute stable range condition, Linear Multilinear Algebra 38, 373–378 (1995) * [L12] Keppens D., Classical Klingenberg generalized quadrangles, Arch. Math. 55, 619–624 (1990) * [L13] Keppens D. and Van Maldeghem H., Embeddings of projective Klingenberg planes in the projective space PG(5,$\mathbb{K}$), Beiträge Algebra Geom. 50, 483–493 (2009) * [L14] Schillewaert J. and Van Maldeghem H., Imbrex geometries, J. Comb. Theory, Ser. A 127, 286–302 (2014) * [L15] Schillewaert J. and Van Maldeghem H., Projective planes over 2–dimensional quadratic algebras, Adv. Math. 262, 784–822 (2014) * [L16] Springer T. and Veldkamp F.D., On Hjelmslev–Moufang planes, Math. Z. 107, 249–263 (1968) * [L17] Tits J., Sur la géometrie des $R$–espaces, J. Math. Pure Appl. 36, 17–38 (1957) * [L18] Van Maldeghem H., Non–classical triangle buildings, Geom. Dedicata 24, 123–206 (1987) * [L19] Van Maldeghem H., Valuations on PTR’s induced by triangle buildings, Geom. Dedicata 26, 29–84 (1988) * [L20] Van Maldeghem H., Quadratic Quaternary Rings with Valuation and affine buildings of type $\tilde{C}_{2}$, Mitt. Mathem. Sem. Giessen 189, 1–159 (1989) * [L21] Van Maldeghem H., An algebraic characterization of affine buildings of type $\tilde{C}_{2}$, Mitt. Mathem. Sem. Giessen 198, 1–42 (1990) * [L22] Van Maldeghem H. and Van Steen K., Characterizations by automorphism groups of some rank 3 buildings I: Some properties of half strongly-transitive triangle buildings, Geom. Dedicata 73, 119–142 (1998) * [L23] Van Maldeghem H. and Van Steen K., Characterizations by automorphism groups of some rank 3 buildings II: A half strongly-transitive locally finite triangle building is a Bruhat–Tits building, Geom. Dedicata 74, 113–133 (1999) * [L24] Van Steen K., Characterizations by automorphism groups of some rank 3 buildings III: Moufang-like conditions, Geom. Dedicata 74, 225–240 (1999) * [L25] Veldkamp F.D., Collineation groups in Hjelmslev–Moufang planes, Math. Z. 108, 37–52 (1968) * [L26] Veldkamp F.D., Unitary groups in Hjelmslev–Moufang planes, Math. Z. 108, 288–312 (1969) * [L27] De Schepper A. and Van Maldeghem H., Veronese representation of projective Hjelmslev planes over some quadratic alternative algebras, Results Math. 75, 1–51 (2020) ## M Ring geometries and coding theory * [M01] Bini G. and Flamini F., Finite commutative rings and their applications, Kluwer, Boston–Dordrecht–London, 176 pp. (2002) * [M02] Boev S., Honold T. and Landjev I., Optimal arcs in Hjelmslev spaces of higher dimension, C. R. Acad. Bulg. Sci. 64, 625–632 (2011) * [M03] Boev S. and Landjev I., On blocking sets in affine Hjelmslev planes, Serdica J. Comput. 6, 175–184 (2012) * [M04] Byrne E., Greferath M. 
and Honold T., Ring geometries, two-weight codes, and strongly regular graphs, Des. Codes Cryptography 48, 1–16 (2008) * [M05] Chvál V. and Jurga R., Tangents of conics in Hjelmslev planes over a local ring of even characteristic, Math. Slovaca 48, 69–78 (1998) * [M06] Egorychev G. and Zima E., Simple formulae for the number of quadrics and symmetric forms of modules over local rings, Commun. Algebra 36, 1426–1436 (2008) * [M07] Hammons A. R., Kumar, P.V., Calderbank A.R., Sloane N.J.A. and Solé P., The $\mathbb{Z}_{4}$–linearity of Kerdock, Preparata, Goethals, and related codes, IEEE Transactions on Information Theory 40, 301–319 (1994) * [M08] Hemme L., Honold T. and Landjev I., Arcs in projective Hjelmslev spaces obtained from Teichmüller sets, in Proceedings of the Seventh International Workshop on Algebraic and Combinatorial Coding Theory (ACCT 2000), Bansko, Bulgaria, 177–182 (2000) * [M09] Høholdt T., Modern Hjelmslev geometry (in Danish), Normat 33, 166–167 (1985) * [M10] Honold T. and Kiermaier M., The maximal size of 6- and 7-arcs in projective Hjelmslev planes over chain rings of order 9, Sci. China, Math. 55, 73–92 (2012) * [M11] Honold T. and Kiermaier M., Classification of maximal arcs in small projective Hjelmslev geometries, in Proceedings of the Tenth International Workshop on Algebraic and Combinatorial Coding Theory 2006, [arXiv:1503.02937], 112–117 (2006) * [M12] Honold T. and Kiermaier M., The existence of maximal ($q^{2},2$)-arcs in projective Hjelmslev planes over chain rings of length 2 and odd prime characteristic, Des. Codes Cryptography 68, 105–126 (2013) * [M13] Honold T., Kiermaier M. and Landjev I., New arcs of maximal size in projective Hjelmslev planes of order nine, C. R. Acad. Bulg. Sci. 63, 171–180 (2010) * [M14] Honold T. and Landjev I., Linearly representable codes over chain rings, Abh. Math. Sem. Univ. Hamburg 69, 187–203 (1999) * [M15] Honold T. and Landjev I., All Reed–Muller codes are linearly representable over the ring of dual numbers over $\mathbb{Z}_{2}$, IEEE Trans. Inf. Theory 45, 700–701 (1999) * [M16] Honold T. and Landjev I., Linear codes over finite chain rings, Electron. J. Comb. 7, Research paper R 11, 22 p. (2000); printed version J. Comb. 7 (2000) * [M17] Honold T. and Landjev I., Linear codes over finite chain rings and projective Hjelmslev geometries, in Codes over rings, Ser. Coding Theory Cryptol., 6, World Sci. Publ., Hackensack, NJ, 60–123 (2009) * [M18] Honold T. and Landjev I., Codes over rings and ring geometries, in Storme, Leo (ed.) et al.,Current research topics in Galois geometry. New York, NY: Nova Science Publishers/Novinka, Mathematics Research Developments, 161–186 (2014) * [M19] Honold T. and Landjev I., Non–free extensions of the simplex codes over a chain ring with four elements, Des. Codes Cryptography 66, 27–38 (2013) * [M20] Honold T. and Landjev I., On arcs in projective Hjelmslev planes, Discrete Math. 231, 265–278 (2001) * [M21] Honold T. and Landjev I., On maximal arcs in projective Hjelmslev planes over chain rings of even characteristic, Finite Fields Appl. 11, 292–304 (2005) * [M22] Honold T. and Landjev I., Caps in projective Hjelmslev spaces over finite chain rings of nilpotency index 2, Innov. Incidence Geom. 4, 13–25 (2006) * [M23] Honold T. and Landjev I., The dual construction for arcs in projective Hjelmslev spaces, Adv. Math. Commun. 5, 11–21 (2011) * [M24] Jukl M. and Snásel V., Projective equivalence of quadrics in Klingenberg projective spaces over a special local ring, Int. Electron. J. Geom. 
2, 34–38 (2009) * [M25] Jurga R., Some combinatorial properties of conics in the Hjelmslev plane, Math. Slovaca 45, 219–226 (1995) * [M26] Jurga R., Some problems of classification of points in the Desarguesian Hjelmslev plane, Math. Slovaca 47, 563–574 (1997) * [M27] Jurga R., Some combinatorial results on the classification of lines in Desarguesian Hjelmslev planes, Math. Slovaca 48, 79–85 (1998) * [M28] Keppens D., Polarities in finite 2-uniform projective Hjelmslev planes, Geom. Dedicata 24, 51–76 (1987) * [M29] Keppens D., On polarities in the $k$-uniform $n$-dimensional projective Hjelmslev space ${\rm PH}(n,GF(q)[t]/t^{k})$, $q$ odd, Results Math. 12, 297–324 (1987) * [M30] Keppens D., Polarities in $n$-uniform projective Hjelmslev planes, Geom. Dedicata 26, 185–214 (1988) * [M31] Keppens D. and Mielants W., On the number of points on a plane algebraic curve over GF($q$)[$t$]/$t^{n}$, Ars Combin. 40, 121–128 (1995) * [M32] Kiermaier M., Koch M. and Kurz S., 2–arcs of maximal size in the affine and the projective Hjelmslev plane over $\mathbb{Z}_{25}$, Adv. Math. Commun. 5, 287–301 (2011) * [M33] Kiermaier M. and Kohnert A., New arcs in projective Hjelmslev planes over Galois rings, in Proceedings of the Fifth International Workshop on Optimal Codes and Related Topics 2007, [arXiv:1503.02932], 112–119 (2007) * [M34] Kiermaier M. and Zwanzger J., A $\mathbb{Z}_{4}$–linear code of high minimum Lee distance derived from a hyperoval, Adv. Math. Commun. 5, 275–286 (2011) * [M35] Kiermaier M. and Zwanzger J., New ring–linear codes from dualization in projective Hjelmslev geometries, Des. Codes Cryptography 66, 39–55 (2013) * [M36] Kohnert A., Sets of type $(d_{1},d_{2})$ in projective Hjelmslev planes over Galois rings, in Klin, Mikhail (ed.) et al., Algorithmic algebraic combinatorics and Gröbner bases. Dordrecht: Springer, 269–278 (2009) * [M37] Kossel M., Symmetrische Ovale in Klingenberg–Ebenen, Aachen: Verlag Shaker. Darmstadt: TH Darmstadt, FB Math., 93 pp. (1996) * [M38] Kulkarni M., A generalisation of Pascal’s theorem to commutative rings, Arch. Math. 33, 426–429 (1980) * [M39] Landjev I., Spreads in projective Hjelmslev spaces over finite chain rings, Sci. Res. 5, 1–8 (2007) * [M40] Landjev I., On blocking sets in projective Hjelmslev planes, Adv. Math. Commun. 1, 65–81 (2007) * [M41] Landjev I. and Boev S., A family of two-weight ring codes and strongly regular graphs, C. R. Acad. Bulg. Sci. 62, 297–302 (2009) * [M42] Landjev I. and Boev S., Blocking sets of Rédei type in projective Hjelmslev planes, Discrete Math. 310, 2061–2068 (2010) * [M43] Landjev I. and Honold T., Arcs in projective Hjelmslev planes, Discrete Math. Appl. 11, 53–70 (2001) [translated from Diskretnaya Mat. 13(1), 90–109 (2001) (in Russian)] * [M44] Levchuk, V. M. and Starikova, O. A., Quadratic forms of projective spaces over rings, Sb. Math. 197, 887–899 (2006) [translated from Mat. Sb. 197, 97-110 (2006) (in Russian)]. * [M45] Little J., Toric codes and finite geometries, Finite Fields Appl. 45, 203–216 (2017) * [M46] Little J., Corrigendum to: “Toric codes and finite geometries”, Finite Fields Appl. 48, 447–448 (2017) * [M47] Nechaev, A., Kerdock’s code in cyclic form, Discrete Math. Appl. 1, 365–384 (1991) [translated from Diskret. Mat. 1, 123–139 (1989) (in Russian)] * [M48] Nechaev A., Kuzmin A. and Markov, V., Linear codes over finite rings and modules (in Russian), Fundam. Prikl. Mat. 3, 195–254 (1997) * [M49] Shiromoto K. 
and Storme L., A Griesmer bound for codes over finite quasi–Frobenius rings, International Workshop on Coding and Cryptography (Paris, 2001), 9 pp., Electron. Notes Discrete Math., 6, Elsevier Sci. B. V., Amsterdam (2001) * [M50] Stepień Z. and Szymaszkiewicz L., Arcs in $\mathbb{Z}^{2}_{2p}$, J. Comb. Optim. 35, 341–349 (2018) * [M51] Starikova, O. A. and Svistunova, A. V., Enumeration of quadrics of projective spaces over local rings, Russ. Math. 55, 48–51 (2011) [translated from Izv. Vyssh. Uchebn. Zaved., Mat. 2011, No. 12, 59–63 (2011) (in Russian)]. * [M52] Starikova, O. A., Quadratic forms and quadrics of space over local rings, J. Math. Sci. 187, 177–186 (2012) [translated from Fundam. Prikl. Mat. 17, 97–110 (2012) (in Russian)]. * [M53] Starikova, O. A., Classes of projectively equivalent quadrics over local rings, Discrete Math. Appl. 23 (2013), 385–398. [translated from Diskretn. Mat. 25, No. 2, 91–103 (2013) (in Russian)]. * [M54] Landjev, I. and Georgieva, N., Conditions for the existence of spreads in projective Hjelmslev spaces, Designs, Codes and Cryptography 87 (2019), 785–794. ## N Ring geometries in quantum information theory * [N01] Havlicek H. and Saniga M., Projective ring line of a specific qudit, J. Phys. A 40, 943–952 (2007) * [N02] Havlicek H. and Saniga M., Projective ring line of an arbitrary single qudit, J. Phys. A 41, 12 pp. (2008) * [N03] Planat M. and Baboin A.-C., Qudits of composite dimension, mutually unbiased bases and projective ring geometry, J. Phys. A, Math. Theor. 40, 1005-1012 (2007) * [N04] Planat M., Baboin A.-C. and Saniga M., Multi-line geometry of qubit-qutrit and higher-order Pauli operators, Int. J. Theor. Phys. 47, 1127–1135 (2008) * [N05] Planat M., Rosu H. and Perrine S., A survey of finite algebraic geometrical structures underlying mutually unbiased quantum measurements, Found. Phys. 36, 1662–1680 (2006) * [N06] Planat M., Saniga M. and Kibler M., Quantum entanglement and projective ring geometry, SIGMA Symmetry Integrability Geom. Methods Appl. 2, 14 pp. (2006) * [N07] Saniga M. and Bartnicka E., Doily as subgeometry of a set of nonunimodular free cyclic submodules, preprint [arXiv:1812.01916, 5 december 2018], 1–5 (2018) * [N08] Saniga M., Havlicek H., Planat M. and Pracna P., Twin ”Fano-snowflakes” over the smallest ring of ternions, SIGMA Symmetry Integrability Geom. Methods Appl. 4, Paper 050, 7 pp. (2008) * [N09] Saniga M. and Planat M., Hjelmslev geometry of mutually unbiased bases, J. Phys. A 39, 435–440 (2006) * [N10] Saniga M. and Planat M., Finite geometries in quantum theory: from Galois (fields) to Hjelmslev (rings), Internat. J. Modern Phys. B 20, 1885–1892 (2006) * [N11] Saniga M. and Planat M., A projective line over the finite quotient ring GF(2)$[x]/\langle x^{3}-x\rangle$ and quantum entanglement: theoretical bases, Theoret. and Math. Phys. 151, 474–481 (2007) [translated from Teoret. Mat. Fiz. 151, 44–53 (2007) (in Russian)] * [N12] Saniga M. and Planat M., Projective planes over “Galois” double numbers and a geometrical principle of complementarity, Chaos Solitons Fractals 36, 374–381 (2008) * [N13] Saniga M. and Planat M., On the fine structure of the projective line over GF(2)$\otimes$GF(2)$\otimes$GF(2), Chaos Solitons Fractals 37, 337–345 (2008) * [N14] Saniga M., Planat M. and Minarovech M., A projective line over the finite quotient ring GF(2)$[x]/\langle x^{3}-x\rangle$ and quantum entanglement: the Mermin ”magic” square and pentagram, Theoret. and Math. Phys. 151, 625–631 (2007) [translated from Teoret. 
Mat. Fiz. 151, 219–227 (2007) (in Russian)] * [N15] Saniga M., Planat M. and Pracna P., Projective curves over a ring that includes two-qubits Theoret. and Math. Phys. 155, 905–913 (2008) [translated from Teoret. Mat. Fiz. 155, 463–473 (2008) (in Russian)]. * [N16] Saniga M., Planat, M. and Pracna, P., Projective ring line encompassing two-qubits, Theor. Math. Phys. 155, 905–913 (2008) [translated from Teor. Mat. Fiz. 155, No. 3, 436–473 (2008) (in Russian)] * [N17] Saniga M. and Pracna P., A Jacobson radical decomposition of the Fano-Snowflake configuration, SIGMA Symmetry Integrability Geom. Methods Appl. 4, Paper 072, 7 pp. (2008) * [N18] Saniga M. and Pracna P., Space versus time: unimodular versus non-unimodular projective ring geometries?, Journal of Cosmology 4, 719–735 (2010)
2024-09-04T02:54:57.666922
2020-03-05T19:57:48
2003.02895
{ "authors": "Monica Alexander, Kivan Polimis, Emilio Zagheni", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26067", "submitter": "Monica Alexander", "url": "https://arxiv.org/abs/2003.02895" }
arxiv-papers
# Combining social media and survey data to nowcast migrant stocks in the United States

Monica Alexander (University of Toronto), Kivan Polimis (University of Washington), Emilio Zagheni (Max Planck Institute for Demographic Research)

###### Abstract

Measuring and forecasting migration patterns, and how they change over time, has important implications for understanding broader population trends, for designing policy effectively and for allocating resources. However, data on migration and mobility are often lacking, and those that do exist are not available in a timely manner. Social media data offer new opportunities to provide more up-to-date demographic estimates and to complement more traditional data sources. Facebook, for example, can be thought of as a large digital census that is regularly updated. However, its users are not representative of the underlying population. This paper proposes a statistical framework to combine social media data with traditional survey data to produce timely 'nowcasts' of migrant stocks by state in the United States. The model incorporates bias adjustment of the Facebook data, and a pooled principal component time series approach, to account for correlations across age, time and space. We illustrate the results for migrants from Mexico, India and Germany, and show that the model outperforms alternatives that rely solely on either social media or survey data.

## 1 Introduction

Accurate, reliable and timely estimates of migration indicators, such as flows and stocks, are crucial for understanding population dynamics and demographic change, for designing effective economic, social and health policies, and for supporting migrants and their families. However, data on migration from traditional sources, such as censuses, surveys or administrative registers, are often insufficient. Even when these sources exist, the data available may lack the granularity of information required to understand migration trends, or are not released in a manner that is timely enough to monitor changes in trends. As migration flows can change substantially over a short period of time, for example in response to a natural disaster or armed conflict, relying on outdated data is often not sufficient.

Timely and reliable information about migration stocks is important not only for understanding migration patterns. It is also key to monitoring fertility, population health and mortality. Even when accurate data on births and deaths exist, demographers often face large uncertainty in population counts, which, in various disaggregations, form the denominators for standard demographic rates. Much of this uncertainty is driven by a lack of appropriate information on how migration stocks change over time and space.

As a consequence of these data availability issues, we need to consider how non-traditional data can be leveraged to complement existing sources in order to improve estimates and predictions of migration indicators over time. Previous work has explored the use of data such as call detail records (Blumenstock, 2012; Pestre et al., 2020), air traffic data (Gabrielli et al., 2019), tax file records (Engels and Healy, 1981) and other sources like billing addresses or school enrollment (Foulkes and Newbold, 2008) to estimate migration. Additionally, an increasingly large body of work has investigated the use of social media data, from websites such as Twitter (Zagheni et al., 2014), Facebook (Zagheni et al., 2017) and LinkedIn (State et al., 2014).
Provided that the data can be obtained in a reliable, timely, and ethical way, information about the users of social media websites is potentially a very rich demographic data source. Data on these populations are essentially collected in real time, and while individual-level information is usually restricted, many of the social media websites provide a certain amount of aggregate-level information through their advertising platforms (Cesare et al., 2018). In particular, Facebook's Advertising platform allows information to be extracted on the relative size of groups by key demographics such as age, sex, location of residence and country of origin, and can therefore serve as a measure of the relative size of migrant groups in a particular country.

While these data have clear potential for demographic research, given their timeliness and the size of the populations covered, there are notable issues to overcome. In particular, for any given population subgroup of interest, the corresponding users of Facebook or any other social media platform are unlikely to be a representative sample. An additional challenge is to combine the new 'signal' about migration trends contained in these data with existing knowledge of probable migration trends from historical data sources.

This paper proposes a statistical framework to combine social media data from Facebook with traditional survey data from the American Community Survey (ACS), in order to produce timely 'nowcasts' of migrant stocks by state in the United States. The framework consists of a Bayesian hierarchical model which incorporates bias adjustment of the Facebook data, a demographic time series approach to account for historical trends, and a geographic pooling component which allows information about the age structure of migrants to be shared across space. The model also accounts for the different types of uncertainty that are likely to be present in Facebook and traditional survey data. The resulting model produces estimates and short-term projections of migrant stocks by US state of destination and country of origin, and is shown to outperform reasonable modeling alternatives.

The remainder of the paper is structured as follows. First, we briefly discuss previous demographic research that incorporates social media data. Then we outline the data sources used, and in particular how the Facebook data were collected. Section 4 discusses the model set-up, assumptions and computation. We then present results for Mexican, Indian and German migrants by US state, and validate model performance against reasonable alternatives. Finally, the strengths and limitations of the model are discussed, together with avenues for future research.

## 2 Background

The lack of good-quality data on migration is a global problem, with data sparsity issues prevalent in both developed and developing countries (Landau and Achiume, 2017). This has prompted scholars to investigate the use of other types of data to monitor migration trends. In particular, with the rise of social media use around the world, new data that have potential for demographic research have emerged.

Scholars began using social media and web data to estimate and track demographic indicators over time in the early 2010s. The earliest papers illustrated how geo-located data from email services and web-based applications such as Twitter, Google Latitude, Foursquare or Yahoo! could be used (Ferrari et al., 2011; Noulas et al., 2011; Zagheni and Weber, 2012).
Initial research focused on evaluating the spatial mobility of populations at a city or regional level. For example, Ferrari et al. (2011) used Twitter data to study patterns of urban movement in New York. In the first effort to tackle global trends, Zagheni and Weber (2012) linked the geographic locations of IP addresses of Yahoo! emails to users' self-reported demographic data to estimate age- and sex-specific migration flows in a large number of countries around the world.

Subsequent efforts have focused on using data from social media and networking websites such as Twitter, LinkedIn and, more recently, Facebook and Instagram. These websites provide public access to an Application Programming Interface (API), which makes it possible to send requests and receive responses for data such as tweet hashtag counts, the number of jobs in a certain industry, or the number of cell phone users in a particular area. Researchers have utilized these APIs to extract publicly available demographic and location data for use in social research, in particular to study outcomes such as migration (Yildiz et al., 2017; Zagheni et al., 2017), fertility (Rampazzo et al., 2018), gender equality (Fatehkia et al., 2018; Garcia et al., 2018), and health (Araujo et al., 2017a). For instance, Garcia et al. (2018) used Facebook data to create an index of the internet gender divide in 217 countries, showing that this indicator encapsulated gender equality indices in education, health and economic opportunity. Yildiz et al. (2017) used a combination of geo-located tweets and image recognition software to obtain estimates of internal migration in England. In work relevant to this paper, Zagheni et al. (2017) presented a proof of concept for estimating migration stocks in the United States by age, sex and state, using Facebook's Advertising Platform. More recently, Alexander et al. (2019) used the same type of data to track changes in migrants over time, in the context of estimating out-migration from Puerto Rico following Hurricane Maria in September 2017.

The main gap in the literature is the lack of a suitable statistical model for combining 'traditional' data sources on migrants (censuses, nationally representative surveys, or other vital statistics) with migration information from social media data. The goal of this paper is thus to develop a probabilistic framework in which representative historical time series can be combined, in a statistically sound way, with non-representative but timely sources from social media.

## 3 Data

### 3.1 Facebook Advertising data

Facebook for Business has developed a targeted advertising platform, called Ads Manager, that provides a graphical user interface allowing advertisers to micro-target specific audiences. Demographic characteristics that can be targeted include information directly reported by Facebook users, such as age or sex, and information indirectly inferred from use of Facebook's platform or affiliated websites, such as location and behavioral interests. Before launching an advertisement, an advertiser can select a variety of characteristics (e.g., female Australians aged 30-35 living in California) and get an estimate of the 'potential reach' (monthly active users) of an advertisement to this subgroup. These estimates can be obtained, in a programmatic way, for a variety of different expatriate ('expat') groups by age, sex, and education.
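As a concrete illustration, the sketch below assembles the kind of targeting specification that sits behind each potential-reach query. This is a minimal sketch under stated assumptions: the field names mirror the Marketing API's targeting options only loosely, and the region key and expat behavior ID are placeholders that would need to be looked up through the API.

```python
# Minimal sketch of a targeting specification for one potential-reach
# query. Field names follow the Marketing API's targeting options only
# loosely; the region key and behavior ID below are placeholders.

AGE_GROUPS = [(15, 19), (20, 24), (25, 29), (30, 34), (35, 39),
              (40, 44), (45, 49), (50, 54), (55, 59), (60, 64)]

def targeting_spec(region_key, age_min, age_max, gender, expat_behavior_id):
    """Assemble the targeting parameters for one subgroup."""
    return {
        "geo_locations": {"regions": [{"key": region_key}]},  # one US state
        "age_min": age_min,
        "age_max": age_max,
        "genders": [gender],                       # e.g. 1 = male, 2 = female
        "behaviors": [{"id": expat_behavior_id}],  # e.g. 'Expats (Mexico)'
    }

# One wave of collection is the cross-product of states, age groups,
# genders and expat groups; here, one state and one expat group:
specs = [targeting_spec("US_STATE_KEY", lo, hi, 1, "EXPAT_GROUP_ID")
         for (lo, hi) in AGE_GROUPS]
```

In practice, the pySocialWatcher module described below manages the construction and submission of such requests.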
We use the estimates of potential reach by expat group, age and sex to track the size of migrant stocks over time. These estimates can be obtained before the launch of an advertisement, and as such are free of charge. We use Facebook's Marketing API, the programmatic backend of the Ads Manager application, to extract estimates of potential reach over time with the Python module pySocialWatcher (Araujo et al., 2017b). With pySocialWatcher, we collected data across 11 age groups (10 UN age groups from 15-19 to 60-65; an 11th group for the entire available Facebook population of 13-65 was also used), three gender groups (female, male, and total population) and multiple education categories. Data were collected using Amazon Web Services (AWS) EC2 servers.

As part of a broader project on using social media in demographic research, we started data collection in January 2017. For each wave of data collection we obtained state-level estimates of all Facebook users (by age, gender, and education) as well as state-level estimates of 90 expat groups. We have been collecting a new wave of data every 2-3 months. (The waves used in this paper are: Wave 1: January 2017; Wave 2: April 2017; Wave 3: June 2017; Wave 4: October 2017; Wave 5: January 2018; Wave 6: March 2018.)

### 3.2 American Community Survey

The American Community Survey (ACS) is an annual survey of the U.S. Census Bureau, designed to supplement the decennial census. Based on the long-form version of the census, the ACS collects information on topics including population, housing, employment and education from a nationally representative sample.

Data on migrant stocks can be readily obtained from the ACS. In particular, in every year of the ACS, the survey has contained a question asking for the person's birthplace; if it is inside the United States, the state is recorded, and if it is outside the United States, the country is recorded. This birthplace variable is recorded as a three-digit code indicating the US state or country of birth. In addition to the birthplace variable, the ACS has information on current state of residence. Thus, we can tabulate the number of migrants from a particular country living in a particular state by looking at the combination of these two variables.

From a modeling perspective, we are interested in the proportion of the total population made up of migrants from a particular origin, by five-year age group ($15-19,20-24,\dots,50-54$) in each state. We calculated the migrant stock proportions using the 1-year ACS for each year between 2001 and 2017, using microdata available through the Integrated Public Use Microdata Series (IPUMS) USA project (Ruggles et al., 2000). Standard errors around the calculated proportions, reflecting sampling variation, were calculated based on ACS accuracy guidelines (US Census Bureau, 2020) and using the Delta method.
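A minimal sketch of this tabulation is given below. It assumes an IPUMS-style extract with columns named YEAR, AGE, STATEFIP, BPL (birthplace code) and PERWT (person weight); the column names and the origin code are assumptions to be checked against the IPUMS codebook, and the standard error calculation is omitted.

```python
import numpy as np
import pandas as pd

def migrant_proportions(df: pd.DataFrame, origin_code: int, year: int):
    """Weighted proportion of migrants from one origin, by state and age group."""
    d = df[(df["YEAR"] == year) & df["AGE"].between(15, 54)].copy()
    d["age_group"] = (d["AGE"] // 5) * 5                  # 15-19, 20-24, ...
    d["is_migrant"] = (d["BPL"] == origin_code).astype(float)
    out = (d.groupby(["STATEFIP", "age_group"])
             .apply(lambda g: np.average(g["is_migrant"], weights=g["PERWT"])))
    return out.rename("p_acs").reset_index()
```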
## 4 Model

We are considering two data sources on migration trends in the US: data from Facebook's Advertising Platform, and the ACS. The overall goal of the modeling strategy is to combine information from both these sources to produce estimates of current and future migrant stocks. To do this, the model should have three main characteristics. Firstly, we want to adjust for biases in the Facebook data so as to effectively use up-to-date information on migration patterns from this source. Secondly, we want to be able to incorporate longer time series of information from the ACS. Finally, the data should be combined in a probabilistic way, in order to objectively weigh information from both sources. We propose a Bayesian hierarchical model which achieves these goals. In this section we describe the model in detail.

For a particular migrant group, define $\mu_{xts}$ to be the ('true') proportion of migrants in the total population in age group $x$ at time $t$ and in state $s$. This quantity $\mu_{xts}$ is the main parameter of interest to be estimated. We have observations of this proportion, which will be denoted $p_{xts}$. The observed proportions are either from Facebook ($p_{xts}^{FB}$) or from the ACS ($p^{ACS}_{xts}$). The $p_{xts}$ are observed, and are assumed to be related to the underlying true proportions, $\mu_{xts}$, with some associated error. We use the term 'true' in a statistical sense, referring to a latent variable of interest.

### 4.1 Facebook bias adjustment

The first goal is to adjust the Facebook data to account for the non-representativeness of the Facebook user population. Previous research by Zagheni et al. (2017) showed that, while the bias in the Facebook migrant data is substantial, it is also relatively systematic by age and migrant group, and can be modelled. Following their approach, we introduce a regression model which relates the proportions of migrants in Facebook, $p_{xts}^{FB}$, to the proportions in the ACS in a similar time period, $p_{xts}^{ACS}$, plus a series of age and state variables. In particular, for a given migrant group, we express $p_{xts}^{ACS}$, on a log scale, as

$\log p_{xts}^{ACS}=\alpha_{0}+\alpha_{1}\log p_{xts}^{FB}+\mathbf{\beta X}+\varepsilon_{FB}$ (1)

where $\mathbf{X}$ is a covariate matrix containing an indicator variable for each age group ($15-19,20-24,\dots,50-54$) and each of the 50 states plus Washington D.C. This means that we estimate a fixed effect for each age group and state. In addition, we assume that the error is i.i.d. and that $\varepsilon_{FB}\sim N(0,\sigma^{2}_{FB}).$

Estimates of the coefficients $\alpha_{0}$, $\alpha_{1}$ and the vector of $\beta$'s are obtained using the first wave of the Facebook data and the 2016 ACS data. Once obtained, these coefficient estimates are then used to adjust subsequent waves of Facebook data, i.e. we calculate

$\log p_{xts}^{*}=\hat{\alpha}_{0}+\hat{\alpha}_{1}\log p_{xts}^{FB}+\mathbf{\hat{\beta}X}$

where $\log p_{xts}^{*}$ is a 'bias-adjusted' version of the Facebook data. This is taken to be our 'best guess' of the migrant stocks in group $xts$ based on the Facebook data alone. Note that an estimate of $\sigma^{2}_{FB}$, the variance of the error terms, is also obtained; this becomes important in the final model (see Section 4.3 below).
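A minimal sketch of this adjustment step is given below, assuming a data frame with one row per age group and state, containing the 2016 ACS proportion (p_acs), the matched first-wave Facebook proportion (p_fb), and age-group and state labels; the column names are illustrative.

```python
import numpy as np
import statsmodels.formula.api as smf

def fit_adjustment(df):
    """Fit Eqn. 1 on the log scale with age-group and state fixed effects."""
    df = df.assign(log_acs=np.log(df["p_acs"]), log_fb=np.log(df["p_fb"]))
    # C(.) enters the age-group and state indicators as categorical dummies.
    return smf.ols("log_acs ~ log_fb + C(age_group) + C(state)", data=df).fit()

def adjust_wave(model, new_wave):
    """Apply the fitted coefficients to a later Facebook wave."""
    new_wave = new_wave.assign(log_fb=np.log(new_wave["p_fb"]))
    return np.exp(model.predict(new_wave))   # bias-adjusted proportions p*

# model.mse_resid provides an estimate of sigma^2_FB for the final model.
```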
### 4.2 Time series modeling of ACS using principal components

In addition to using data from Facebook, we also want to incorporate the relatively long historical time series of information on migrant stocks obtained from the ACS. A reasonable short-term forecast based on the ACS should model historical trends and project them forward. There are many different time series models that could be used in this context. Perhaps the simplest approach would be to project forward a moving average of the time series for each age group and state combination. Alternatively, we could use a classical Box-Jenkins approach and model the time series of migrant stocks in each age group and state separately using an appropriately specified ARIMA model. However, these methods would place no constraints on the age structure of migration. In this demographic context, we expect the age distribution of migration to display strong patterns and to change in a relatively regular way over time. This is because of regularities in the age at migration, as well as historical trends which include different waves of migrants, who also age over time. As such, we chose to incorporate this prior knowledge into our model through a principal components approach.

Principal component-based models have a long history in demographic modeling, with the most well-known example being the Lee-Carter mortality model (Lee and Carter, 1992). The idea is that a set of age-specific demographic rates observed over time can be expressed as a combination of a series of characteristic patterns (or principal components). The Lee-Carter approach uses the mean age-specific mortality schedule and the first principal component, which is interpreted as age-specific contributions to mortality change over time. This model can easily be extended to include higher-order principal components, which various researchers have done. Apart from the Lee-Carter model and its variants (e.g. Li et al. (2004), Lee (2000), Renshaw and Haberman (2006)), principal component models have recently been used to estimate and forecast mortality (e.g. Alexander et al. (2017)), fertility (e.g. Schmertmann et al. (2014)) and overall population (Wiśniowski et al. (2015)). Here, we extend this idea to parsimoniously estimate and project migration stocks by age and state.

#### 4.2.1 Model overview

Age-specific migration schedules are decomposed into independent age and time components. The time component is then projected forward as a time series, taking auto-correlated error into account. We propose a log-linear model for $p_{xts}$:

$\log p_{xts}^{ACS}=\beta_{ts,1}Z_{x,1}+\beta_{ts,2}Z_{x,2}+\varepsilon_{xts}$ (2)

where $Z_{x,1}$ and $Z_{x,2}$ are the first and second 'principal components', $\beta_{ts,1}$ and $\beta_{ts,2}$ are state- and time-specific coefficients to be estimated, and $\varepsilon_{xts}$ is an error term. The principal components are obtained via Singular Value Decomposition (SVD), as outlined in the next section. To obtain estimates of $\beta_{ts,1}$ and $\beta_{ts,2}$, we impose some smoothing over time and pooling of information across space, as outlined in Section 4.2.3. Finally, as discussed in Section 4.2.4, we place a time series model on the error term, $\varepsilon_{xts}$, to account for auto-correlation.

#### 4.2.2 Obtaining the principal components

The principal component terms $Z_{x,1}$ and $Z_{x,2}$ aim to capture the main sources of systematic variation in migration patterns across age. They are obtained by first creating a matrix of (logged) historical age-specific migration schedules based on ACS data from 2001 to 2016. Singular Value Decomposition (SVD) is then performed on this matrix to obtain principal components of the age-specific migration. In particular, let $\bf{X}$ be an $N\times G$ matrix of log-migration stock rates, where $N$ is the number of state-years and $G$ is the number of age groups. In this case, we had $N=51\times 16=816$ state-year observations (50 states plus Washington D.C., over 16 years) of $G=9$ age groups ($15-19,20-24,\dots,50-54$). The SVD of $\bf X$ is

$\bf X=\bf{UDV^{\prime}},$ (3)

where $\bf U$ is an $N\times N$ matrix, $\bf D$ is an $N\times G$ matrix and $\bf V$ is a $G\times G$ matrix. The first two columns of $\bf V$ (the first two right-singular vectors of $\bf X$) are $Z_{x,1}$ and $Z_{x,2}$. For example, Fig. 1 shows the resulting $Z_{.,1}$ and $Z_{.,2}$ for the Mexican migrant group in the US. These were obtained via the following steps:

1. Calculate $p_{xts}^{ACS}$, i.e. the proportion of migrants in age group $x$, year $t$ and state $s$, for each age group, year and state in the ACS 2001-2016.
2. Create $\bf{X}$, where each element is $\log p_{xts}^{ACS}$, every row is a state-year and every column is an age group.
3. Perform SVD on $\bf X$ and extract the first two columns of $\bf V$ (we used the 'svd' function in R; a Python sketch of these steps is given below).
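The following minimal sketch reproduces steps 1-3 with numpy, assuming log_props is the $816\times 9$ matrix of logged ACS proportions (rows are state-years, columns are age groups); the synthetic input is only to make the example self-contained.

```python
import numpy as np

def principal_components(log_props: np.ndarray):
    """Return the first two right-singular vectors of the log-proportion matrix."""
    # X = U D V'; the rows of Vt are the right-singular vectors (columns of V).
    U, d, Vt = np.linalg.svd(log_props, full_matrices=False)
    Z1, Z2 = Vt[0, :], Vt[1, :]
    return Z1, Z2

# Self-contained example with synthetic data of the right shape:
rng = np.random.default_rng(1)
log_props = np.log(rng.uniform(0.001, 0.1, size=(816, 9)))
Z1, Z2 = principal_components(log_props)   # each has length 9 (age groups)
```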
Figure 1: Principal Components for Mexico

The principal components shown in Fig. 1 can be interpreted as a baseline migration age schedule ($Z_{.,1}$) and age-specific contributions to change over time ($Z_{.,2}$). In the model, the coefficient on $Z_{.,1}$ ($\beta_{ts,1}$) moves the overall level of Mexican migrants up or down, depending on the year and state. The coefficient on $Z_{.,2}$ allows the age distribution to shift to older or younger ages. For $Z_{.,2}$, the sign changes from negative to positive at age 35. This means that the larger and more positive the value of $\beta_{ts,2}$, the older the migrant age distribution. (Note that the interpretation of $Z_{.,1}$ and $Z_{.,2}$ is similar to that of the $a_{x}$ and $b_{x}$ terms in the usual Lee-Carter model.)

#### 4.2.3 Sharing information across time and space

The model specified in Eqn. 2 requires the estimation of two coefficients, $\beta_{ts,1}$ and $\beta_{ts,2}$, for each time $t$ and state $s$. One option would be to estimate each of these coefficients separately for every year and state. However, we would like to incorporate the knowledge that trends in migration over time are likely to exhibit relatively regular patterns. In addition, for the coefficient on the second principal component, which allows the age distribution of migrants to shift to the left or right, we would like to share information about migration patterns across geographic space.

The coefficient on the first principal component, $\beta_{ts,1}$, is modeled as a random walk, i.e.

$\beta_{ts,1}\sim N(\beta_{t-1s,1},\sigma_{\beta_{1}}^{2})$

This allows information about the level of migration within each state to be smoothed over time. The random walk structure allows the estimate in the current time period, $\beta_{ts,1}$, to be partially informed by the previous period.

For the coefficient on the second principal component, we place the following hierarchical structure on the $\beta$'s:

$\beta_{ts,2}\sim N(\Phi_{t},\sigma_{\beta}^{2})$

$\Phi_{t}\sim N(\Phi_{t-1},\sigma_{\Phi}^{2})$

The $\Phi_{t}$ term essentially represents a national mean; as such, the $\beta_{ts,2}$'s are draws from a national distribution with some mean and variance. In this way, information about how the age distribution is ageing over time is shared across states. The more information about migration that is available for a particular state (i.e., the larger the migrant population), the less the estimate of $\beta_{ts,2}$ is influenced by the overall mean. Conversely, states with smaller migrant populations, where the trends over time are less clear from the data, are partially informed by patterns in larger states. Note that the geographical hierarchical structure is not present on the first coefficient, as this represents an overall level of migration. Pooling information across space about the level of migration would artificially increase migrant proportions in smaller states.
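To make the coefficient structure concrete, the simulation sketch below draws one trajectory of the state-level random walks for $\beta_{ts,1}$ and of the nationally pooled $\beta_{ts,2}$; the variance values are illustrative placeholders, not estimates from the model.

```python
import numpy as np

rng = np.random.default_rng(2)
S, T = 51, 16                           # states (incl. DC), years 2001-2016
sigma_b1, sigma_b, sigma_phi = 0.05, 0.10, 0.05   # illustrative values

beta1 = np.zeros((T, S))                # state-level random walks
phi = np.zeros(T)                       # national mean random walk
beta2 = np.zeros((T, S))                # pooled around phi
beta2[0] = rng.normal(phi[0], sigma_b, size=S)
for t in range(1, T):
    beta1[t] = rng.normal(beta1[t - 1], sigma_b1)   # smoothing over time
    phi[t] = rng.normal(phi[t - 1], sigma_phi)
    beta2[t] = rng.normal(phi[t], sigma_b)          # sharing across states
```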
#### 4.2.4 Auto-correlated error

The final piece of the time series model is the error term $\varepsilon_{xts}$. This term is included in the model to allow for extra variation in migration age schedules that is not otherwise picked up by the principal components. We expect the extra variation to be autocorrelated, and as such we model the error term as an AR(1) process:

$\varepsilon_{xts}\sim N(\rho_{xs}\varepsilon_{xt-1s},\sigma_{\varepsilon}^{2})$

where $\rho_{xs}\in[0,1]$.

#### 4.2.5 Projection

The model described above is fit to ACS data from 2001-2016. However, estimates in more recent years can easily be obtained by projecting the time series aspects of this model forward. In particular, for time $t+1$:

* Obtain an estimate for $\beta_{t+1s,1}$ from $\beta_{t+1s,1}\sim N(\beta_{ts,1},\sigma^{2}_{\beta_{1}})$.
* Obtain an estimate for $\beta_{t+1s,2}$ from $\beta_{t+1s,2}\sim N(\Phi_{t+1},\sigma^{2}_{\beta})$ and $\Phi_{t+1}\sim N(\Phi_{t},\sigma_{\Phi}^{2})$.
* Obtain an estimate for $\varepsilon_{xt+1s}$ from $\varepsilon_{xt+1s}\sim N(\rho_{xs}\varepsilon_{xts},\sigma_{\varepsilon}^{2}).$
* Calculate $\log p_{xt+1s}^{ACS}$ based on Eqn. 2.

A minimal simulation of these steps is sketched below.
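This sketch performs one projection step for a single state, assuming the last-period quantities and the variance and autocorrelation parameters are posterior draws from the fitted model (here filled in with placeholders).

```python
import numpy as np

rng = np.random.default_rng(3)

def project_one_step(beta1_t, phi_t, eps_t, rho, Z1, Z2,
                     sigma_b1, sigma_b, sigma_phi, sigma_eps):
    """One-step-ahead draw of age-specific proportions for a single state."""
    beta1_next = rng.normal(beta1_t, sigma_b1)          # random walk step
    phi_next = rng.normal(phi_t, sigma_phi)             # national mean step
    beta2_next = rng.normal(phi_next, sigma_b)          # pooled coefficient
    eps_next = rng.normal(rho * eps_t, sigma_eps)       # AR(1), by age group
    log_p_next = beta1_next * Z1 + beta2_next * Z2 + eps_next   # Eqn. 2
    return np.exp(log_p_next)

# Placeholder inputs: 9 age groups; rho and eps_t vary by age.
Z1, Z2 = np.full(9, -0.3), np.linspace(-0.1, 0.1, 9)
p_next = project_one_step(beta1_t=10.0, phi_t=0.5, eps_t=np.zeros(9),
                          rho=np.full(9, 0.8), Z1=Z1, Z2=Z2,
                          sigma_b1=0.05, sigma_b=0.10,
                          sigma_phi=0.05, sigma_eps=0.05)
```

Repeating these draws over posterior samples yields a predictive distribution, from which point estimates and intervals can be read off.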
### 4.3 Bringing it all together

Sections 4.1 and 4.2 described two ways to obtain current 'nowcasts' of migrant stocks. One option would be to take the most recent data obtained from Facebook, adjust using the bias-adjustment model, and take the resulting estimate as our nowcast. Another option would be to project forward the ACS model to the time period of interest. Ideally, we would like to incorporate both sources into our final estimate. One simple option would be to take an average of the two resulting estimates. However, we would like to weigh the estimates from both sources more objectively, taking different sorts of uncertainty into consideration. Our solution is to combine both models into one framework, and use the results from both methods as data points for our 'best estimate' nowcast. This is illustrated in Fig. 2. Facebook inputs are calibrated with the ACS via the adjustment model. ACS data are used to obtain principal components based on past migration data. The modeling structure allows for information exchange over time and across geographic space. The key piece of the combined model, which has yet to be explained, is the data model (or likelihood), which allows data from the different sources to have different associated error.

Figure 2: Modeling framework

#### 4.3.1 Data model

As outlined above, we observe migrant proportions $p_{xts}$ from either Facebook or the ACS. The data model assumes

$\log p_{xts}\sim N(\log\mu_{xts},\sigma_{p}^{2})$

i.e. the log of the observed proportion is assumed to have mean $\log\mu_{xts}$ and variance $\sigma_{p}^{2}$, where $\sigma_{p}^{2}$ depends on the data source:

$\sigma_{p}^{2}=\begin{cases}\sigma_{s}^{2},&\text{if ACS}\\ \sigma_{s}^{2}+\sigma_{FB}^{2}+\sigma_{ns}^{2},&\text{if Facebook}\end{cases}$

Here, $\sigma^{2}_{s}$ refers to sampling error, and is assumed to be present in both ACS and Facebook data. For the ACS data, sampling errors are calculated based on guidelines from the US Census Bureau (2020). For Facebook data, the sampling error is calculated assuming the binomial approximation to the Normal distribution:

$\sigma^{2}_{s}=\frac{p_{xts}\cdot(1-p_{xts})}{N_{xts}^{FB}}$

where $N_{xts}^{FB}$ is the total size of the Facebook population in subgroup $x,t,s$.

For the Facebook data there are two additional error terms. $\sigma_{FB}^{2}$ refers to the error associated with our bias-adjustment model (Eqn. 1) and is estimated within this model. This captures the fact that our adjustment model is imperfect and that extra variation remains. Additionally, we allow for a non-sampling error with $\sigma_{ns}^{2}$, which aims at capturing additional uncertainty, such as variation in the way potential reach is estimated across waves.

For a given population size, the sampling error is going to be of similar size for ACS and Facebook data. As such, the error term associated with the Facebook data, which is the sum of three terms, will always be bigger than that for the ACS. In practice, this means that estimates from the model will follow (i.e. give more weight to) the ACS data.

#### 4.3.2 Summary of full model

The full model is summarized below. Equation 4 is the data model. Equations 5-9 relate to the ACS time series model. Equation 12 is related to the Facebook regression model. Equations 10 and 11 allow the observation of the proportion of interest to come from a different source (Facebook or ACS), which has a different associated variance. Note that $\mu_{xts}$ is estimated on a yearly basis, but it is assumed that $j$ waves of Facebook data are collected within any one year.

$\log p_{xts}\sim N(\log\mu_{xts},\sigma^{2})$ (4)

$\log\mu_{xts}=\beta_{ts,1}Z_{x,1}+\beta_{ts,2}Z_{x,2}+\varepsilon_{xts}$ (5)

$\beta_{ts,1}\sim N(\beta_{t-1s,1},\sigma_{\beta_{1}}^{2})$ (6)

$\beta_{ts,2}\sim N(\Phi_{t},\sigma_{\beta}^{2})$ (7)

$\Phi_{t}\sim N(\Phi_{t-1},\sigma_{\Phi}^{2})$ (8)

$\varepsilon_{xts}\sim N(\rho_{xs}\varepsilon_{xt-1s},\sigma_{\varepsilon}^{2})$ (9)

$p_{xts}=\begin{cases}p_{xts}^{ACS},&\text{if }2001\leq t\leq 2016\\ p_{xtsj}^{*},&\text{if }t\geq 2017\end{cases}$ (10)

$\sigma^{2}=\begin{cases}\sigma_{s}^{2},&\text{if ACS}\\ \sigma_{s}^{2}+\sigma_{FB}^{2}+\sigma_{ns}^{2},&\text{if Facebook}\end{cases}$ (11)

$p_{xtsj}^{*}\sim N(\alpha_{0}+\alpha_{1}\cdot p_{xtsj}^{\text{Facebook}}+X\Gamma,\sigma^{2}_{FB})$ (12)

#### 4.3.3 Priors

Weakly-informative priors were placed on the coefficients in the Facebook bias-adjustment model, as well as on the principal component coefficients in the initial periods:

$\alpha_{0}\sim N(0,100)$

$\alpha_{1}\sim N(0,100)$

$\Gamma_{0}\sim N(0,100)$

$\beta_{1,s,1}\sim N(0,100)$

$\Phi_{1}\sim N(0,100)$

In addition, we put weakly-informative half-Normal priors on the two standard deviation terms to be estimated:

$\sigma_{FB}\sim N_{+}(0,1)$

$\sigma_{ns}\sim N_{+}(0,1)$

#### 4.3.4 Computation

The model was fitted in a Bayesian framework using the statistical software R. Samples were taken from the posterior distributions of the parameters via a Markov Chain Monte Carlo (MCMC) algorithm. This was performed using the JAGS software (Plummer et al., 2003). Standard diagnostic checks using trace plots and the Gelman and Rubin diagnostic were used to check convergence (Gelman et al., 2013). Best estimates of all parameters of interest were taken to be the median of the relevant posterior samples. The 95% Bayesian credible intervals were calculated by finding the 2.5% and 97.5% quantiles of the posterior samples. All code and data are available on GitHub: https://github.com/MJAlexander/fb-migration-bayes
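To make the fitting step concrete, the fragment below sketches how a stripped-down version of the data model could be passed to JAGS from R via the rjags package. This is a hedged illustration only: the model string contains just a simplified likelihood with two fixed source-specific variances, and all object names (`logp_obs`, `Z1`, `Z2`, `source_id`) are ours, not taken from the authors' repository.

```r
library(rjags)

# Simplified likelihood: one beta per principal component, no time index,
# and fixed illustrative variances for the two data sources.
model_string <- "
model {
  for (i in 1:nobs) {
    logmu[i] <- beta1 * Z1[i] + beta2 * Z2[i]
    # dnorm in JAGS takes a precision (1/variance) as its second argument
    logp[i] ~ dnorm(logmu[i], 1 / sigma2[source[i]])
  }
  # precision 0.01 corresponds to the N(0, 100) priors used in the paper
  beta1 ~ dnorm(0, 0.01)
  beta2 ~ dnorm(0, 0.01)
}"

jags_data <- list(nobs = length(logp_obs), logp = logp_obs, Z1 = Z1, Z2 = Z2,
                  source = source_id,       # 1 = ACS, 2 = Facebook
                  sigma2 = c(0.01, 0.05))   # illustrative variances only
jm <- jags.model(textConnection(model_string), data = jags_data, n.chains = 4)
samples <- coda.samples(jm, variable.names = c("beta1", "beta2"), n.iter = 10000)
```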
## 5 Results

We illustrate the model on male migrants from three different countries: Mexico, India, and Germany. These three migrant groups represent three different scenarios of levels and trends over time, as illustrated by the trends in the ACS data shown in Figures 3(a) and 3(b). Firstly, Mexican migrants make up a relatively large share of the overall population, but the proportion has been generally declining since around 2007. The age distribution at the national level peaks in the 40-44 year old age group. Secondly, Indian migrants make up a moderate proportion of the total population, but this share is increasing over time. The age distribution peaks at younger ages (30-34), compared to Mexicans. Finally, German migrants make up a low and declining share of the population. In contrast to the other migrant groups, the age distribution of German migrants at the national level is relatively flat, increasing slightly across age.

Figure 3: German, Indian and Mexican migrants ((a) by proportion of the total population; (b) by proportion of each age group, 2016)

### 5.1 Bias adjustment of Facebook data

We firstly illustrate the results of the bias-adjustment step of the Facebook data. Figure 4 shows, for each US state and five-year age group where data are available, the proportion of migrants in each age group for the ACS data in 2016 (black dots), the un-adjusted Facebook data (blue dots), and the estimated bias-adjusted Facebook data (red line and associated shaded area) for Mexican migrants. Similar plots for migrants from India and Germany are shown in Appendix A. The interpretation is that if the bias-adjustment step is working reasonably well, the red line should lie close to the black dots. In general, this appears to be the case. For all three migrant groups, the raw Facebook data are generally lower than the ACS data, but the bias-adjustment model adjusts these values upwards. In general, across the three migrant groups and across states, the shapes of the age distributions in the Facebook and ACS data are similar, with more substantial under-representation in Facebook in the older age groups. These systematic differences mean that the model works well to adjust the raw Facebook data based on age and state effects.

Figure 4: Bias adjustment of Facebook data for Mexican migrants

### 5.2 Nowcasts by age group and state

Now we move on to short term projections by age and state. Figure 5 shows the estimated age distribution in 2008 (red) and projected distribution in 2018 (blue) for Mexican migrants. Similar plots for India and Germany can be found in Appendix B.
For Mexico (Figure 5), the relatively high proportions in the border states and on the West coast are apparent, with the highest proportions in California, Texas, Nevada and Arizona. Additionally, the age distribution of Mexican migrants is generally aging over time (shifting to the right), which is consistent with relatively constant stocks.

Figure 5: Estimated and projected age distributions of Mexican migrants by state, 2008 and 2018

#### 5.2.1 Projected time series

Figures 6 to 8 zoom in on two states for each migrant group and show how the Facebook data are used to project forward the time series to the most recent two years (2017 and 2018). The full estimated and projected time series from 2001 to 2018 is shown. In the figures, each facet is a five-year age group. The red dots and associated shaded area represent the ACS data and sampling standard errors; these data are broadly available from 2001 to 2016, although some observations are missing (if sample sizes in the ACS were too small to capture information about migrants in that particular state and age group). The blue dots represent the (adjusted) Facebook observations, which are available in years 2017 and 2018. The black line and associated shaded area is the model estimate with 95% uncertainty intervals.

Mexican males in California (Figure 6(a)) represent by far the highest proportions of any of the migrant origin/state combinations considered. The proportion is as high as 0.25 in some age groups, for example 25-29 year olds in 2001 and 40-44 year olds in 2018. As a consequence, the sampling error around the ACS data for this migrant group is relatively small and the model estimates closely follow these data. For the most recent two years, where only Facebook data are available, note that the model estimates do not follow the data as closely and the uncertainty around the model estimates increases. This reflects the fact that there are more sources of error associated with the Facebook data.

In Georgia (Figure 6(b)) the levels of Mexican migrants are around half as high as in California. Due to smaller sample sizes in Georgia, the standard errors around the ACS data are much larger, and as such the model estimates do not follow the data as closely. However, the trends for Mexican migrants in California and Georgia are broadly the same: decreases in the younger age groups, and increases in the older age groups, representing an aging stock of migrants.

For Indian males in California and Georgia (Figure 7), the proportions are much lower than for the Mexican migrant population, peaking at around 3-4% of the population in the 30-39 year old age groups. The proportions are increasing over time, however, particularly in the 25-44 age bracket. Finally, for German male migrants in California and Georgia (Figure 8), we see low and constant migrant proportions. The uncertainty around the ACS data is already relatively high, and so there is not so much of an increase in uncertainty in the final two years.

Figure 6: Mexican male migrants by age group, California and Georgia, 2001-2018 ((a) California; (b) Georgia)

Figure 7: Indian male migrants by age group, California and Georgia, 2001-2018 ((a) California; (b) Georgia)

Figure 8: German male migrants by age group, California and Georgia, 2001-2018 ((a) California; (b) Georgia)

## 6 Validation

We evaluated the performance of the Bayesian model compared to other reasonable forecasting alternatives. To do this, we ran the model on data from 2001 to 2016, and forecast migration stocks in 2017.
We then compared these forecasts to the actual ACS data in 2017. We compared the accuracy of the Bayesian model forecast to forecasts produced by three other models:

1. Three-year moving average of the ACS data. This is one of the simplest options available and does not require the Facebook data or any statistical modeling.
2. Facebook data only: estimates are based just on the available Facebook data in 2017, after it has been adjusted for biases.
3. ACS time series model: here, we ran the Bayesian hierarchical time series model described in Section 4 above, but just using data from the ACS (no Facebook).

In order to assess model performance, we compare the root mean squared error (RMSE):

$RMSE=\sqrt{\frac{\sum_{g}\left(\hat{p}_{g,2017}-p^{ACS}_{g,2017}\right)^{2}}{N}}$ (13)

where $\hat{p}_{g,2017}$ is the estimated proportion of migrants in a particular group $g$, $p^{ACS}_{g,2017}$ is the equivalent proportion from the ACS, and $N$ is the number of groups over which the sum is taken. Here, $g$ can refer to any combination of age group, state and migrant origin. (A small R sketch of this computation is given at the end of this section.)

Table 1 shows the overall RMSE for the four models for Mexican, Indian and German migrants. The main result is that in each of the three migrant groups, the Bayesian model presented (which combines the ACS and bias-adjusted Facebook data and thus is referred to as the 'combined model') produces the lowest RMSE and thus the most accurate forecasts. The overall results also illustrate that the Bayesian hierarchical time series model produces substantially more accurate forecasts compared to a simple moving average or the bias-adjusted Facebook data alone, producing RMSEs that are up to an order of magnitude smaller. This gain in accuracy is much larger than the gain in moving from ACS-only to the combined model, although there is still a gain in each case.

Model | Mexico | India | Germany
---|---|---|---
Moving average | 0.01280 | 0.0480 | 0.0129
Facebook | 0.01210 | 0.00584 | 0.0142
ACS | 0.00995 | 0.00453 | 0.00263
Combined | 0.00970 | 0.00356 | 0.00261

Table 1: Overall RMSE by model and migrant origin

Figure 9 illustrates the RMSE by age group and model type for each of the three migrant groups. Similar plots by state can be found in Appendix C. For Mexico, there is generally an incremental decline in the RMSE moving from the Facebook-only model, to the moving average, to ACS, to the combined model. The RMSE is highest in the 30-34 year old age group, which is also where the proportion of migrants is highest (see Figure 6). For India (Figure 9(b)), the RMSE is particularly high for the Facebook-only model in the 25-34 age bracket. As a consequence, the RMSEs in those age groups are higher for the combined model than for the ACS model. For Germany, the gain in accuracy in moving to the hierarchical time series set-up is much larger. This is most likely related to the fact that the proportions of German migrants are in general a lot lower than for Mexico or India, and so there are noticeable gains in pooling information across state, age and time.

Figure 9: RMSE by age group and model ((a) Mexico; (b) India; (c) Germany)

To summarize, the validation exercise comparing the one-year-out predictions from a range of models to the migrant proportions reported in the ACS illustrates both (i) the strength of the proposed Bayesian hierarchical time series model as a general framework, and (ii) the additional information obtained from including up-to-date Facebook data compared to just historical ACS data alone.
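For completeness, the RMSE of Eq. (13) is a one-liner in R. The sketch below assumes `est` and `acs` are numeric vectors of 2017 proportions over the same set of groups $g$; the names are ours.

```r
# RMSE of Eq. (13): est and acs are aligned vectors over the N groups g.
rmse <- function(est, acs) sqrt(mean((est - acs)^2))
# e.g. comparing a moving-average forecast with the 2017 ACS values:
# rmse(ma_forecast_2017, acs_2017)
```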
## 7 Discussion

As the size and frequency of migration movements continues to increase worldwide, new sources of data are being considered in order to better understand both historical and future migration trends. There is a growing body of work considering the feasibility of using social media data to achieve such goals, from platforms such as Facebook, Twitter and LinkedIn. While the granularity of social media use varies widely (from individual geo-tagged tweets, to aggregated advertising demographic data, as in this paper), the common challenges of using such data remain: firstly, to adequately adjust for known biases in the social media data, primarily a consequence of the non-representativeness of the population of social media users; and secondly, to meaningfully combine information from social media data with information from more traditional data sources, such as surveys or censuses.

In this paper we presented a statistical framework to achieve these goals in the context of producing short-term projections of migrant stocks in the United States. The model includes a bias-adjustment process for the Facebook data, and a 'principal components time series' model, which allows for the projection of trends in stocks into the future, considering both Facebook and ACS data. The model allows for different types of uncertainty around the different data sources, shares information on migration trends over time, and pools across geographic space. Illustrative results were presented for three separate migrant groups: Mexicans, Indians and Germans.

The results of the validation exercise, comparing projections with 2017 ACS data, suggest that the proposed model improves prediction of short-term trends when compared to viable alternatives. The validation exercise illustrated the substantial gain in accuracy achieved when moving to the Bayesian hierarchical time series model, regardless of whether or not the Facebook data were included. While the benefits of including the Facebook data in this particular case were relatively marginal, more generally the Facebook data have the advantage of being up-to-date and essentially available in real time. Thus, in the situation of a 'shock', such as a natural disaster or other event, the collection of Facebook data allows for a more immediate estimate of the effects of that shock on migration. The combination of these data with past trends allows for the identification of surprising increases or decreases that are out of the expected bounds based on historical patterns.

There are several limitations of the proposed model, which naturally lead into avenues for future work. Firstly, the bias-adjustment model assumes that the systematic bias in the Facebook data (by age and state) is constant over time. In reality, it is reasonable to believe that the biases in the Facebook data are changing over time, as the composition of the underlying Facebook population changes. The relationship between the age/location composition of the Facebook population and that of the actual population (as measured by the ACS) could be investigated in future work. Secondly, the bias-adjustment model also assumes that the non-sampling error is constant over Facebook's 'waves' of data collection; that is, sources of error that include changes in how the population of reach is calculated, or other computational reasons, are assumed to be constant. In practice, and in other work using these data (Alexander et al. 2019), we have observed that this is probably not the case, and this needs to be further investigated to better understand non-migration-related fluctuations over time.
While we only consider two data sources in this paper, the general statistical framework could easily be extended to include information from other sources. Future work could also include taking advantage of the rich demographic and socioeconomic data available through the Facebook Advertising Platform, including information on education and occupation.

While this work focused on a model for the estimation of migrant stocks, the philosophy of combining social media data with more traditional data sources in one statistical framework, allowing for different sources of uncertainty, can be readily extended to model other demographic indicators. Indeed, the underlying time series model was itself an extension of principal component techniques that have previously been used in demography to study mortality and fertility. We show the strength of combining more traditional demographic modeling techniques and data with newer sources of data to gain insights into underlying population processes.

## Appendix A Bias adjustment plots for migrants from India and Germany

Figure 10: Bias adjustment of Facebook data for Indian migrants

Figure 11: Bias adjustment of Facebook data for German migrants

## Appendix B Age distributions in 2008 and 2018 for Indian and German migrants by state

For India (Figure 12) we see that the proportions across age groups have generally increased over the decade, with relatively high proportions concentrated in the northeast region of the country. Unlike Mexico, the age distribution is fairly constant, with the highest proportions generally being in the 30-34 age group. The German male migrant populations (Figure 13) by state show relatively flat, low and unchanging levels over the decade 2008–2018.

Figure 12: Estimated and projected age distributions of Indian migrants by state, 2008 and 2018

Figure 13: Estimated and projected age distributions of German migrants by state, 2008 and 2018

## Appendix C Validation results by state

The figures below show the RMSE for each model, by state of destination in the US, for Mexican, Indian and German migrants. The figures show that, in general, the combined model out-performs the other three alternative models. For Mexico (Figure 14), this is particularly the case for the states with relatively high proportions of Mexican migrants. Results are more variable for Indian migrants (Figure 15), with the combined model performing relatively poorly in the West coast states but well in the mid-West states. For German migrants (Figure 16), the geographic pooling appears to vastly improve the projection accuracy.

Figure 14: RMSE by state and model type for Mexican migrants in 2017

Figure 15: RMSE by state and model type for Indian migrants in 2017

Figure 16: RMSE by state and model type for German migrants in 2017

## References

* Alexander et al. (2017) Alexander, M., E. Zagheni, and M. Barbieri (2017). A flexible Bayesian model for estimating subnational mortality. Demography 54(6), 2025–2041.
* Alexander et al. (2019) Alexander, M., E. Zagheni, and K. Polimis (2019). The impact of Hurricane Maria on out-migration from Puerto Rico: Evidence from Facebook data. Population and Development Review 45(3), 617–630.
* Araujo et al. (2017a) Araujo, M., Y. Mejova, I. Weber, and F. Benevenuto (2017a). Using Facebook ads audiences for global lifestyle disease surveillance: Promises and limitations. In Proceedings of the 2017 ACM on Web Science Conference, pp. 253–257.
* Araujo et al. (2017b) Araujo, M., Y. Mejova, I. Weber, and F. Benevenuto (2017b). Using Facebook ads audiences for global lifestyle disease surveillance: Promises and limitations. In WebSci '17, New York, NY, USA. ACM.
* Blumenstock (2012) Blumenstock, J. E. (2012). Inferring patterns of internal migration from mobile phone call records: Evidence from Rwanda. Information Technology for Development 18(2), 107–125.
* Cesare et al. (2018) Cesare, N., H. Lee, T. McCormick, E. Spiro, and E. Zagheni (2018). Promises and pitfalls of using digital traces for demographic research. Demography 55(5), 1979–1999.
* Engels and Healy (1981) Engels, R. A. and M. K. Healy (1981). Measuring interstate migration flows: An origin–destination network based on Internal Revenue Service records. Environment and Planning A 13(11), 1345–1360.
* Fatehkia et al. (2018) Fatehkia, M., R. Kashyap, and I. Weber (2018). Using Facebook ad data to track the global digital gender gap. World Development 107, 189–209.
* Ferrari et al. (2011) Ferrari, L., A. Rosi, M. Mamei, and F. Zambonelli (2011). Extracting urban patterns from location-based social networks. In Proceedings of the 3rd ACM SIGSPATIAL International Workshop on Location-Based Social Networks, pp. 9–16.
* Foulkes and Newbold (2008) Foulkes, M. and K. B. Newbold (2008). Using alternative data sources to study rural migration: Examples from Illinois. Population, Space and Place 14(3), 177–188.
* Gabrielli et al. (2019) Gabrielli, L., E. Deutschmann, F. Natale, E. Recchi, and M. Vespe (2019). Dissecting global air traffic data to discern different types and trends of transnational human mobility. EPJ Data Science 8(1), 1–24.
* Garcia et al. (2018) Garcia, D., Y. M. Kassa, A. Cuevas, M. Cebrian, E. Moro, I. Rahwan, and R. Cuevas (2018). Analyzing gender inequality through large-scale Facebook advertising data. Proceedings of the National Academy of Sciences 115(27), 6958–6963.
* Gelman et al. (2013) Gelman, A., J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin (2013). Bayesian Data Analysis. Chapman and Hall/CRC.
* Landau and Achiume (2017) Landau, L. B. and E. T. Achiume (2017). International migration report 2015: Highlights. Development and Change 48(5), 1182–1195.
* Lee (2000) Lee, R. (2000). The Lee-Carter method for forecasting mortality, with various extensions and applications. North American Actuarial Journal 4(1), 80–91.
* Lee and Carter (1992) Lee, R. D. and L. R. Carter (1992). Modeling and forecasting US mortality. Journal of the American Statistical Association 87(419), 659–671.
* Li et al. (2004) Li, N., R. Lee, and S. Tuljapurkar (2004). Using the Lee–Carter method to forecast mortality for populations with limited data. International Statistical Review 72(1), 19–36.
* Noulas et al. (2011) Noulas, A., S. Scellato, C. Mascolo, and M. Pontil (2011). An empirical study of geographic user activity patterns in Foursquare. In Fifth International AAAI Conference on Weblogs and Social Media.
* Pestre et al. (2020) Pestre, G., E. Letouzé, and E. Zagheni (2020). The ABCDE of big data: Assessing biases in call-detail records for development estimates. The World Bank Economic Review 34(Supplement_1), S89–S97.
* Plummer et al. (2003) Plummer, M. et al. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing, Volume 124, pp. 1–10. Vienna, Austria.
* Rampazzo et al. (2018) Rampazzo, F., E. Zagheni, I. Weber, M. R. Testa, and F. Billari (2018). Mater certa est, pater numquam: What can Facebook advertising data tell us about male fertility rates? In Twelfth International AAAI Conference on Web and Social Media.
* Renshaw and Haberman (2006) Renshaw, A. E. and S. Haberman (2006). A cohort-based extension to the Lee–Carter model for mortality reduction factors. Insurance: Mathematics and Economics 38(3), 556–570.
* Ruggles et al. (2000) Ruggles, S., C. A. Fitch, P. K. Hall, and M. Sobek (2000). IPUMS-USA: Integrated Public Use Microdata Series for the United States. Handbook of International Historical Microdata for Population Research. Minneapolis: Minnesota Population Center, 259–284.
* Schmertmann et al. (2014) Schmertmann, C., E. Zagheni, J. R. Goldstein, and M. Myrskylä (2014). Bayesian forecasting of cohort fertility. Journal of the American Statistical Association 109(506), 500–513.
* State et al. (2014) State, B., M. Rodriguez, D. Helbing, and E. Zagheni (2014). Migration of professionals to the US. In International Conference on Social Informatics, pp. 531–543. Springer.
* US Census Bureau (2020) US Census Bureau (2020). PUMS accuracy. https://www2.census.gov/programs-surveys/acs/tech_docs/pums/accuracy/.
* Wiśniowski et al. (2015) Wiśniowski, A., P. W. Smith, J. Bijak, J. Raymer, and J. J. Forster (2015). Bayesian population forecasting: Extending the Lee-Carter method. Demography 52(3), 1035–1059.
* Yildiz et al. (2017) Yildiz, D., J. Munson, A. Vitali, R. Tinati, and J. A. Holland (2017). Using Twitter data for demographic research. Demographic Research 37, 1477–1514.
* Zagheni et al. (2014) Zagheni, E., V. R. K. Garimella, I. Weber, and B. State (2014). Inferring international and internal migration patterns from Twitter data. In Proceedings of the 23rd International Conference on World Wide Web, pp. 439–444.
* Zagheni and Weber (2012) Zagheni, E. and I. Weber (2012). You are where you e-mail: Using e-mail data to estimate international migration rates. In Proceedings of the 4th Annual ACM Web Science Conference, pp. 348–351.
* Zagheni et al. (2017) Zagheni, E., I. Weber, and K. Gummadi (2017). Leveraging Facebook's advertising platform to monitor stocks of migrants. Population and Development Review 43(4), 721–734.
# 50 years of Finite Geometry, the "geometries over finite rings" part

Dirk Keppens
Faculty of Engineering Technology, KU Leuven
Gebr. Desmetstraat 1, B-9000 Ghent, BELGIUM

###### Abstract

Whereas for a substantial part, "Finite Geometry" during the past 50 years has focussed on geometries over finite fields, geometries over finite rings that are not division rings have received less attention. Nevertheless, several important classes of finite rings give rise to interesting geometries. In this paper we bring together some results, scattered over the literature, concerning finite rings and plane projective geometry over such rings. It does not contain new material, but by collecting these results in one place, we hope to stimulate further research in this area for at least another 50 years of Finite Geometry.

Keywords: Ring geometry, finite geometry, finite ring, projective plane

AMS Classification: 51C05, 51E26, 13M05, 16P10, 16Y30, 16Y60

## 1 Introduction

Geometries over rings that are not division rings have been studied for a long time. The first systematic study was done by Dan Barbilian [11], who, besides being a mathematician, was also one of the greatest Romanian poets (under the pseudonym Ion Barbu). He introduced plane projective geometries over a class of associative rings with unit, called Z–rings (an abbreviation of Zweiseitig singuläre Ringe), which today are also known as Dedekind-finite rings. These are rings with the property that $ab=1$ implies $ba=1$, and they include of course all commutative rings but also all finite rings (even non–commutative ones).

Wilhelm Klingenberg introduced in [52] projective planes and 3–spaces over local rings. A ring $R$ is local if it possesses a unique maximal right ideal (which turns out to be the Jacobson radical $J(R)$). For a local ring $R$ the quotient ring $R/J(R)$ is a division ring (= skewfield or field), and the natural homomorphism of $R$ onto $\mathbb{K}=R/J(R)$ induces an epimorphism of the plane $P_{2}(R)$ over $R$ onto the ordinary desarguesian projective plane PG(2,$\mathbb{K}$). Nowadays planes over local rings are called (desarguesian) Klingenberg planes (see also [7]). In the finite case such planes have the finite projective plane PG(2,$q$) over the Galois field GF($q$) as epimorphic image.

In three other papers [49], [50] and [51], Klingenberg studied projective planes over local rings with some additional properties, called $H$–rings (short for Hjelmslev rings). In these rings the left and right ideals form a chain and the maximal ideal consists of zero divisors. If one drops that last condition, one gets chain rings. In the finite case any chain ring is an $H$–ring. Planes over $H$–rings are now called (desarguesian) Hjelmslev planes, after the Danish mathematician Johannes Hjelmslev (born as Johannes Petersen), who was the first to consider plane geometries in which two distinct lines may have more than one point in common [41]. Among the finite $H$–rings are the Galois rings GR($p^{nr},p^{n}$) of cardinality $p^{nr}$ and characteristic $p^{n}$, which are natural generalizations of Galois fields.

In the early seventies another class of rings attracted attention: full matrix rings over fields. Strongly inspired by the work of the Italian "father of Galois geometry" Beniamino Segre on geometries over finite fields (e.g. [77]), J.A.
Thas defined projective planes (and higher dimensional spaces) over full matrix rings with elements in a field and investigated combinatorial properties of the finite planes over the matrix rings $M_{n}(GF(q))$ of $n\times n$–matrices over Galois fields [80]. We will refer to these planes further as Thas planes.

In the eighties F.D. Veldkamp was very productive in the area of projective ring planes and their generalizations. He gave in [84] and [86] an axiomatic description of projective planes and higher dimensional geometries over the large class of rings of stable rank 2, a notion coming from algebraic $K$–theory. A ring $R$ has stable rank 2 if for any $a,b\in R$ with $Ra+Rb=R$ there exists $r\in R$ such that $a+rb$ is a unit. The class of rings of stable rank 2 includes the class of semilocal rings (hence also all finite rings, local rings, chain rings, $H$–rings and matrix rings over a division ring), and a ring of stable rank 2 is always Dedekind–finite (hence a Z–ring in the sense of Barbilian). Projective planes over rings of stable rank 2 are called (desarguesian) Veldkamp planes. Among these are Klingenberg planes, Hjelmslev planes, Thas planes and also the projective planes over semiprimary rings (i.e. rings with nilpotent Jacobson radical and with $R/J(R)$ semisimple) treated by Bingen in [13].

In almost all papers on projective geometry over rings no special attention is paid to the finite case. Mostly, theorems deal with rings in general (with no specification as finite or infinite). In this paper we restrict ourselves to the finite case. First we bring together some results on finite rings, with special attention for local rings. Then we take a closer look at projective plane geometries over finite rings. In the last section we deal with some generalizations of rings (semirings, nearrings and alternative rings) and projective plane geometries over such algebraic structures.

## 2 Finite rings

In this section the word "ring" always refers to an associative ring with unit $1\not=0$, but with multiplication not necessarily commutative. Finite fields or Galois fields are well-known algebraic structures. Finite fields of order $q$ only exist if $q$ is a prime power ($q=p^{r}$), and for each such $q$ there is a unique (up to isomorphism) field of that order, which is denoted by $\mathbb{F}_{q}$ or by GF($q$). The prime number $p$ is the characteristic of the field.

It is natural to look at generalizations of finite fields to finite rings, but the situation is much more complicated. First, there exist finite non–commutative rings, unlike the situation for finite division rings, where the famous theorem of Wedderburn forces any finite skewfield to be a field. Also the order of a finite ring doesn't uniquely determine that ring (for example there are four non–isomorphic rings of order four, including one field). A complete classification of finite rings seems to be a "mission impossible" (even if one restricts to the commutative case). The paper of Raghavendran [72] on rings of prime power order was the starting point for the study of the structure of finite rings. Also the work of Wilson [88] and [89] was of great importance. A recent survey of results obtained so far, with an extensive bibliography, can be found in Nechaev [68].

Local rings, first defined by Krull in [56], play a central role in the structure theory of (finite) rings. Recall that a ring $R$ is called local if it possesses a unique maximal right ideal (or equivalently a unique maximal left ideal).
This is stronger than asking that $R$ has a unique maximal two–sided ideal (e.g. the ring $M_{n}(\mathbb{Z}_{p^{n}})$ of $n\times n$–matrices over $\mathbb{Z}/p^{n}\mathbb{Z}$ has a unique maximal two–sided ideal but is not local). The unique maximal right or left ideal in a local ring turns out to be the Jacobson radical $J(R)$. Other characterizations of local rings are possible. E.g. $R$ is local iff the set of non–units forms a proper right (or left) ideal in $R$. Also, $R$ is local if and only if for all $r\in R$ either $r$ or $1-r$ is invertible. Finally, $R$ is local iff $R/J(R)$ is a division ring. Other characterizations in terms of zero divisors are given in [76]. In the finite case one can say even more: $R$ is local iff $R\setminus J(R)$ is the set of units of $R$, or equivalently iff $J(R)$ is the set of nilpotent elements of $R$. The following theorem gives parameters for finite local rings.

###### Theorem 2.1. (Raghavendran [72]) Let $R$ be a finite local ring. Then there exist unique numbers $p$, $n$, $r$ and $k$ such that $|R|=p^{nr}$, $|J(R)|=p^{(n-1)r}$ and the characteristic of $R$ is $p^{k}$ with $1\leq k\leq n$. The number $p^{r}$ is the order of the Galois field $R/J(R)$ and the number $n$ is the index of nilpotency of $J(R)$. If $k=n$ then $R$ is commutative.

There is also a more recent result which, conversely, characterizes local rings among finite rings by just a couple of parameters.

###### Theorem 2.2. (Behboodi and Beyranvand [12], González [36]) Let $R$ be a finite ring and let $Z(R)$ be the set of zero–divisors of $R$. Then $R$ is local if and only if $|R|=p^{m}$ and $|Z(R)|=p^{n}$ for some prime number $p$ and integers $1\leq n<m$. Moreover, when $R$ is local with these parameters, the order of $R/J(R)=R/Z(R)$ is $p^{r}$ with $r=m-n$.

The structure of commutative finite local rings was first studied by Ganske and McDonald [33]. Classification theorems are proved for fixed orders or fixed characteristic in [16], [17], [24] and [72]. In the non–commutative case Wirt [90] has contributed to the theory. By the following important structure theorem the classification problem of commutative finite rings can be reduced to that of finite local rings.

###### Theorem 2.3. (McDonald [67]) Let $R$ be a finite commutative ring. Then $R$ decomposes (up to order of summands) uniquely as a direct sum of finite local rings.

Another decomposition theorem, also valid in the non–commutative case, shows once more the importance of finite local rings.

###### Theorem 2.4. (McDonald [67] and Wirt [90]) Let $R$ be a finite ring. Then $R$ decomposes as $S+N$ with $S$ a direct sum of full matrix rings over finite local rings and $N$ a subring of the Jacobson radical $J(R)$.

Next we look at principal ideal rings. A ring is called a right principal ideal ring if any right ideal $I$ is right principal, i.e. generated by one element ($I=aR$). A similar definition holds for left principal ideal rings. If a ring is both a left and a right principal ideal ring, it is called a principal ideal ring (PIR). A right principal ideal ring is always a right noetherian ring, since any right ideal is finitely generated. It is also a right Bézout ring, since any finitely generated right ideal is principal. In fact the right PIR's are just the rings which are both right noetherian and right Bézout. Similar results hold for left principal ideal rings and PIR's. The structure of finite principal ideal rings was first studied by Fisher in [32]. For finite rings the notions of left PIR, right PIR and PIR are equivalent (see [68]).
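As a small computational illustration of Theorems 2.1 and 2.2, the hedged R sketch below counts the zero divisors of $\mathbb{Z}_{n}$ (which are exactly the elements not coprime to $n$) and compares the counts with the theorem's parameters; the function names are ours.

```r
# Zero divisors of Z_n: 0 and every a with gcd(a, n) > 1.
gcd2 <- function(a, b) if (b == 0) a else gcd2(b, a %% b)
zero_divisors <- function(n) Filter(function(a) gcd2(a, n) > 1, 0:(n - 1))
length(zero_divisors(8))    # 4 = 2^2, while |Z_8| = 2^3: Z_8 is local (p = 2, m = 3, n = 2)
length(zero_divisors(12))   # 8, but |Z_12| = 12 is not a prime power: Z_12 is not local
```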
Another important class of rings is that of the chain rings. A ring is called a right chain ring if for any $a$ and $b$ in $R$ either $a\in bR$ or $b\in aR$. For a right chain ring the lattice of right ideals is totally ordered by inclusion, and it follows that $R$ is a local ring. Analogous definitions and results hold for left chain rings. A ring which is both a left and a right chain ring is called a chain ring. In the infinite case there are examples of right chain rings which are not chain rings (see [63], [79] and [15]), but in the finite case there is a left–right equivalence. Every ideal of a chain ring is a power of the unique maximal ideal.

A (left or right) chain ring with the additional property that any non–unit is a two–sided zero divisor is called a (left or right) $H$–ring or affine Hjelmslev ring (two–sided chain rings are also known as projective Hjelmslev rings). Finite chain rings are always left and right $H$–rings. For a comprehensive study of $H$–rings linked to the behaviour of ideals, we refer to [81]. The following theorem shows that finite chain rings are nothing but finite local principal ideal rings!

###### Theorem 2.5. (Clarke and Drake [18] and Lorimer [62]) Let $R$ be a finite ring. Then the following conditions are equivalent: (a) $R$ is a local PIR; (b) $R$ is a local ring with principal maximal ideal; (c) $R$ is a left chain ring; (d) $R$ is a right chain ring; (e) $R$ is a chain ring.

A valuation ring in a division ring $\mathbb{D}$ is a proper subring $R$ with the property that $x$ or $x^{-1}$ $\in R$ for each nonzero $x\in\mathbb{D}$. A ring is a valuation ring if and only if it is a left and right chain domain, i.e. a chain ring without zero divisors (for a proof, see [60]). Since any finite domain is a finite field, there do not exist finite valuation rings.

A ring $R$ is called an $E$–ring if and only if it possesses an ideal $I$ such that all ideals of $R$ are of the form $I^{n}$. In the infinite case $E$–rings can be characterized as $H$–rings with nilpotent radical and also as proper homomorphic images of discrete valuation rings (see [4], [5], [60] and [61]). In the finite case the notions of $H$–ring and $E$–ring coincide.

The simplest and most investigated finite chain rings are the Galois rings, first defined by Krull [56] as "Grundringe" and later rediscovered by Janusz [46] and Raghavendran [72]. A Galois ring is a commutative local PIR such that $J(R)=(p)$ with $p=1+1+\ldots+1$ ($p$ terms) for some prime $p$. These rings are very close to Galois fields. In the past ten years, finite chain rings and in particular Galois rings have received a lot of attention in connection with coding theory (see e.g. [42] and [43]). As for Galois fields one has:

###### Theorem 2.6. (Raghavendran [72] and McDonald [67]) For any prime $p\in\mathbb{N}$ and for any $n,r\in\mathbb{N}$ there exists a unique (up to isomorphism) Galois ring $R$ consisting of $q^{n}$ (with $q=p^{r}$) elements and with characteristic $p^{n}$.

The unique Galois ring in the preceding theorem is denoted by GR($q^{n},p^{n}$) (or sometimes also by GR($p^{n},r$)). For $p=q$ we have GR($p^{n},p^{n}$) = $\mathbb{Z}_{p^{n}}$, the ring of integers modulo $p^{n}$, and for $n=1$ we obtain the Galois field GF($q$)=GR($q,p$). All Galois rings can be constructed in the form $R=\mathbb{Z}_{p^{n}}[x]/(f(x))$, where $f(x)$ is a monic polynomial of degree $r$ which is irreducible modulo $p$; hence GR($q^{n},p^{n}$), $q=p^{r}$, can be seen as a Galois extension of degree $r$ of its subring $\mathbb{Z}_{p^{n}}$.
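This construction is easy to experiment with. The hedged R sketch below implements multiplication in GR($16,4$) $=\mathbb{Z}_{4}[x]/(x^{2}+x+1)$, where $x^{2}+x+1$ is irreducible modulo 2, and verifies by brute force that exactly the $16-|J|=12$ elements outside $J=(2)$ are units; the function names are ours.

```r
# Arithmetic in GR(16, 4) = Z_4[x]/(x^2 + x + 1); elements are pairs (a, b)
# standing for a + b*x.  Since x^2 = -x - 1 = 3x + 3 (mod 4, mod x^2 + x + 1):
# (a + bx)(c + dx) = (ac + 3bd) + (ad + bc + 3bd)x.
gr_mult <- function(u, v) {
  a <- u[1]; b <- u[2]; c <- v[1]; d <- v[2]
  c((a * c + 3 * b * d) %% 4, (a * d + b * c + 3 * b * d) %% 4)
}
elems <- expand.grid(a = 0:3, b = 0:3)
# Brute-force unit test: u is a unit iff some v satisfies u*v = 1 = (1, 0).
is_unit <- apply(elems, 1, function(u) {
  any(apply(elems, 1, function(v) all(gr_mult(u, v) == c(1, 0))))
})
sum(is_unit)   # 12 = 16 - |J|, with |J| = p^{(n-1)r} = 2^2 = 4 (Theorem 2.1)
```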
The properties of Galois rings are well known, e.g. the structure of the group of units, the automorphism group, the possible subrings, etc. Many results can be found in [14]. The classification of all chain rings is still an open problem, but partial results are known. Galois rings occur in the construction of finite chain rings, as can be seen from the next theorem.

###### Theorem 2.7. (Clark and Liang [19], Wirt [90], Neumaier [69]) Let $R$ be a finite chain ring with parameters $p,n,r$ and $k$ as in theorem 2.1. Then there exist integers $t$ and $s$ such that $R$ is isomorphic to $S[x,\sigma]/(g(x),p^{k-1}x^{t})$ with $S$ = GR($q^{k},p^{k}$) and $S[x,\sigma]$ the Ore skew polynomial ring over $S$, i.e. with the usual addition and the multiplication $xa=\sigma(a)x$ for $\sigma\in Aut\;S$, and with $g(x)\in$ $S[x,\sigma]$ an Eisenstein polynomial of degree $s$, $g(x)=x^{s}-p(a_{0}+a_{1}x+\ldots+a_{s-1}x^{s-1})$, with $a_{0}$ a unit in $S$ and $n=(k-1)s+t$ ($1\leq t\leq s\leq n$).

The integer $s$ in the theorem above is called the ramification index of $R$. It is the smallest integer such that the ideal $(p)$ is equal to $J(R)^{s}$. For a given set of parameters $p,n,r,k,s$ and $t$ one could ask for the number of non–isomorphic finite chain rings. In general this problem is still open, but partial results are known (see e.g. [1], [3], [74], [68]). In some cases the parameters determine the ring completely:

###### Theorem 2.8. (Clark and Liang [19] and Arkhipov [3]) Let $R$ be a finite chain ring with parameters $p,n,r,k,t$ and $s$ as in theorem 2.7. (a) If $k=1$ (hence $R$ has minimal characteristic $p$), then $R$ is uniquely determined (up to isomorphism) and $R\cong$ GF($q$)$[x,\sigma]/(x^{n})$ (a truncated skew polynomial ring). (b) If $k=n$ (hence $R$ has maximal characteristic $p^{n}$), then $R$ is uniquely determined (up to isomorphism) and $R\cong$ GR($q^{n},p^{n}$) (always commutative).

Some more results are known for finite chain rings with characteristic $p^{k}$, $1<k<n$ (see [44]). In [91] still another description of finite (commutative) chain rings is given, as certain homomorphic images of the polynomial ring $\mathbb{Z}_{p^{r}}[x,y]$.

An important special subclass of chain rings is the one for which the Jacobson radical $J$ of $R$ has index of nilpotency $2$, so $J^{2}=0$. In this case $n=2$ and the two cases in theorem 2.8 are the only possible ones. Finite chain rings with $J\not=0$ and $J^{2}=0$ are called uniform. The classification of finite uniform chain rings follows from theorem 2.8 but was also proved directly by Cronheim.

###### Theorem 2.9. (Cronheim [25]) Every finite uniform chain ring with $R/J\cong$ GF($q$) is either a ring of (twisted) dual numbers, or a truncated Witt ring of length 2, over the field GF($q$).

Rings of (twisted) dual numbers are the rings $\mathbb{D}(q,\sigma)$ = GF($q$)$[x,\sigma]/(x^{2})$ (twisted for $\sigma\not=1$), corresponding to case ($a$) with $n=2$ in theorem 2.8. $\mathbb{D}(q,\sigma)$ can also be represented as the subring of matrices $\left(\begin{array}{ll}a&b\\ 0&a^{\sigma}\end{array}\right)$ in the full matrix ring $M_{2}(q)$ of $2\times 2$ matrices with elements in GF($q$).
$W_{2}(q)$, the truncated Witt ring of length 2 over GF($q$), is defined on the set GF($q$) $\times$ GF($q$), $q=p^{k}$, as follows:

addition: $(x_{0},x_{1})+(y_{0},y_{1})=(x_{0}+y_{0},x_{1}+y_{1}+\dfrac{x_{0}^{p}+y_{0}^{p}-(x_{0}+y_{0})^{p}}{p})$

multiplication: $(x_{0},x_{1})\cdot(y_{0},y_{1})=(x_{0}y_{0},x_{0}^{p}y_{1}+x_{1}y_{0}^{p})$

It can be proved that $W_{2}(q)$ is isomorphic to the Galois ring GR($q^{2},p^{2}$) (this is case ($b$) with $n=2$ in theorem 2.8).

## 3 Finite ring planes

In this section we deal with geometries over finite rings, and we restrict ourselves to the case of plane projective geometries. The projective line, higher dimensional projective geometries, affine and metric geometries, planar circle geometries (Benz–planes), chain geometries and polar geometries over finite rings will not be considered here.

A big part of finite geometry, called Galois geometry, is related to finite fields; see e.g. the work of Hirschfeld [40]. Since the pioneering work of B. Segre, the finite desarguesian (pappian) projective plane PG($2,q$) and its interesting point sets (arcs, ovals, blocking sets, unitals, $\ldots$) have been studied extensively. For planes over finite rings a lot of work still has to be done.

As already mentioned in the introductory section, plane geometries over some important classes of rings have been defined in a suitable way, starting around 1940 with Barbilian (some isolated cases over particular rings were known even longer ago). Before we look at planes over finite rings, we first recall the definition of a projective plane over an arbitrary ring (not necessarily finite).

Let $R$ be an arbitrary ring (associative and with unit element). Denote the set of two-sided invertible elements of $R$ by $R^{\star}$. Following [11], [26] or [55], we can construct a plane projective geometry PG($2,R$) over $R$ as follows:

* points are the left unimodular triples $(x,y,z)\in R\times R\times R$ up to a right scalar in $R^{\star}$ (where $(x,y,z)$ left unimodular means that there exist $a,b,c\in R$ such that $ax+by+cz=1$ or equivalently $Rx+Ry+Rz=R$);
* lines are the right unimodular triples $[u,v,w]\in R\times R\times R$ up to a left scalar in $R^{\star}$ (where $[u,v,w]$ right unimodular means that there exist $a,b,c\in R$ such that $ua+vb+wc=1$ or equivalently $uR+vR+wR=R$);
* incidence I (between points and lines) is defined as follows: $(x,y,z)$ I $[u,v,w]$ if and only if $ux+vy+wz=0$;
* neighborship $\sim$ (between points and lines) is defined by: $(x,y,z)\sim[u,v,w]$ if and only if $ux+vy+wz\in R\setminus R^{\star}$.

It is clear that incidence always implies neighborship, so $p$ I $L$ implies $p\sim L$ for any point $p$ and any line $L$. The incidence structure (with neighbor relation) obtained in this way is called the right projective plane over $R$. In the same way one can define the left projective plane over $R$, which is clearly isomorphic to the right projective plane over the opposite ring $R^{\circ}$. Therefore we will drop from now on the specification "right" or "left". Although the denomination "projective plane" is used here, the projective plane over a ring (which is not a division ring) is not a projective plane in the usual sense, as two distinct points may be incident with none or with more than one line, and dually.
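The definition is concrete enough to enumerate small examples by computer. The hedged R sketch below, entirely ours, lists the points of PG($2,\mathbb{Z}_{4}$): since $\mathbb{Z}_{4}$ is local with $J=\{0,2\}$, a triple is left unimodular exactly when some coordinate is a unit, and identifying triples up to the right scalars $\{1,3\}$ leaves $28$ points, in agreement with the parameter count $s^{2}+st+t^{2}$ ($s=4$, $t=2$) discussed at the end of this section.

```r
# Points of PG(2, Z_4), enumerated by brute force.
Zn      <- 0:3
units   <- c(1, 3)                                     # units of Z_4
triples <- expand.grid(x = Zn, y = Zn, z = Zn)
# In the local ring Z_4, (x,y,z) is unimodular iff some coordinate is a unit.
unimod  <- triples[apply(triples, 1, function(v) any(v %in% units)), ]
# Identify (x,y,z) with (x,y,z)*u for right unit scalars u.
canon <- apply(unimod, 1, function(v) {
  min(sapply(units, function(u) paste((v * u) %% 4, collapse = ",")))
})
length(unique(canon))   # 28 = s^2 + s*t + t^2 with s = 4, t = 2
```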
In addition to the neighbor relation for point–line pairs, also a neighbor relation between points (between lines respectively) can be considered in PG($2,R$): points $(x,y,z)$ and $(x^{\prime},y^{\prime},z^{\prime})$ are neighboring iff $\left[\begin{array}{lll}x&y&z\\ x^{\prime}&y^{\prime}&z^{\prime}\end{array}\right]$ cannot be extended to an invertible $3\times 3$–matrix over $R$ (and similarly for lines).

Dealing with projective planes over rings, it is natural to assume that non–neighboring elements behave the same as distinct elements in an ordinary projective plane over a division ring. To get that situation, it is necessary to restrict to the class of rings for which every one–sided unit is a two–sided unit, which was first observed by Barbilian [11]. Indeed, assume that $r$ is right–invertible (with right inverse $a$), but not left invertible. Consider the lines $[1,0,0]$ and $[r,0,0]$ (remark that $[r,0,0]$ is right unimodular since $r\cdot a+0\cdot b+0\cdot c=1$). These lines are distinct, as otherwise there would exist a left scalar $l\in R^{\star}$ for which $[1,0,0]=l\cdot[r,0,0]$, so $1=l\cdot r$, in contradiction with the assumption that $r$ has no left inverse. Now these two distinct lines are incident with the non–neighboring points $(0,1,0)$ and $(0,0,1)$.

The restriction to rings in which any left (or right) invertible element is a two–sided unit (or equivalently: $a\cdot b=1$ implies $b\cdot a=1$), the so–called Dedekind–finite rings, only comes up when one deals with infinite rings. In the finite case (but also for many important classes of infinite rings) invertible elements are always two–sided invertible. In this context it is also interesting to mention that all reversible rings (i.e. $a\cdot b=0$ implies $b\cdot a=0$) are Dedekind–finite.

Next we are interested in the connection between properties of the ring $R$ and the projective plane PG($2,R$). Most of the following results are reformulations for the finite case of theorems that can be found in Veldkamp [84] and [85].

In the projective plane over a Dedekind–finite ring, there is a unique line incident with two given non–neighboring points (and dually). One might ask whether the neighbor relation is completely determined by the incidence relation, in the sense that two points are non–neighboring if and only if there is a unique line incident with them (and dually for lines). This is not always the case, but it is if every non–invertible element in $R$ is a right and left zero–divisor. A (left or right) artinian ring is a ring in which any non–empty set of (left or right) ideals, partially ordered by inclusion, has a minimal element. In a (left or right) artinian ring any non–invertible element is a (left or right) zero divisor. As finite rings are always left and right artinian, we get that the neighbor relation is completely determined by the incidence relation in planes PG($2,R$) over finite rings. In [23] a proof is given of the property that in a finite ring any left zero divisor is also a right zero divisor.

###### Theorem 3.1. (Veldkamp [84]) Let $R$ be a finite ring and PG($2,R$) the projective plane over $R$. Then two distinct points are neighboring if and only if they are incident with either no or at least two lines. Two distinct lines are neighboring if and only if they are incident with either no or at least two points.

A projective ring plane is called linearly connected (Veldkamp [84]), neighbor cohesive (Drake and Jungnickel [27]) or punctally cohesive (Baker et al.
[9]) if any two distinct points are incident with at least one line. For planes over rings of stable rank 2 it is proved in Veldkamp [84] that two points are incident with at least one line if and only if $R$ has the following property: for any two $r_{1},r_{2}\in R$ there exists $a\in R$ such that $Rr_{1}+Rr_{2}=R(r_{2}+ar_{1})$. This is fulfilled for $R$ a left Bézout ring, i.e. a ring for which any finitely generated left ideal is a principal ideal. Dually, two lines are incident with at least one point iff $R$ is a right Bézout ring. For finite rings the Bézout conditions amount to the condition that $R$ is a principal ideal ring (recall that for a finite ring the notions left principal, right principal and principal coincide). So we can reformulate the theorem for finite rings as follows:

###### Theorem 3.2. (Veldkamp [84]) Let $R$ be a finite ring and PG($2,R$) the projective plane over $R$. Then any two points are incident with at least one line (the plane is linearly connected), and dually, if and only if $R$ is a principal ideal ring.

The possibility of more than one line incident with two neighboring points (and dually) corresponds to the presence of zero divisors in the ring. So any two distinct points are incident with exactly one line (and dually) if and only if $R$ is a Bézout domain. In the finite case this means that $R$ is a principal ideal domain, hence a finite field. Hence:

###### Theorem 3.3. (Veldkamp [84]) Let $R$ be a finite ring and PG($2,R$) the projective plane over $R$. Then any two distinct points are incident with exactly one line, and dually, if and only if $R$ is a finite field (i.e. PG($2,R$) is a pappian projective plane).

Next we look at the special case of (finite) local rings. For such rings the definition of the projective plane PG($2,R$) and its neighbor relations can be adapted slightly (in an equivalent way). E.g. two points $(x,y,z)$ and $(x^{\prime},y^{\prime},z^{\prime})$ are neighbors if and only if $(x^{\prime},y^{\prime},z^{\prime})$ $-$ $(x,y,z)\lambda\in J\times J\times J$ for some $\lambda\in R\setminus J$, with $J$ the maximal ideal of $R$, and similarly for lines.

###### Theorem 3.4. (Veldkamp [84]) Let $R$ be a (finite) ring and PG($2,R$) the projective plane over $R$. Then the neighbor relation $\approx$ between points (between lines resp.) is transitive if and only if $R$ is a (finite) local ring.

For local rings $R$ there is a canonical epimorphism $\varphi$ from $R$ onto the division ring $\mathbb{K}=R/J$. This epimorphism induces an epimorphism $\pi$ of the projective plane PG($2,R$) onto the (ordinary) projective plane PG($2,\mathbb{K}$) by putting $\pi(x,y,z)=(\varphi(x),\varphi(y),\varphi(z))$ and $\pi[u,v,w]=[\varphi(u),\varphi(v),\varphi(w)]$, and the neighbor relations can be expressed by means of $\pi$: $p\sim L$ if and only if $\pi(p)$ I $\pi(L)$, and similarly $p\sim q$ iff $\pi(p)=\pi(q)$ and $L\sim M$ iff $\pi(L)=\pi(M)$. The projective plane over a local ring, also known as a (desarguesian) projective Klingenberg plane, is therefore strongly connected with an ordinary desarguesian projective plane. One could say that the points (and lines) of an ordinary projective plane are blown up to clusters of neighboring points (lines) to produce a projective Klingenberg plane. In the finite case the epimorphic image of PG($2,R$) is the plane PG($2,q$) over the Galois field GF($q$). Combining theorems 3.1, 3.2 and 3.4 yields the following:

###### Theorem 3.5. (Veldkamp [84]) Let $R$ be a finite ring and PG($2,R$) the projective plane over $R$.
Then the neighbor relation $\approx$ between points (between lines resp.) is transitive, and two neighboring points are incident with at least two lines and dually, if and only if $R$ is a finite local principal ideal ring.

From section 2 we know that finite local principal ideal rings are synonymous with finite chain rings or finite $H$–rings. Recall that projective planes over $H$–rings are called (desarguesian) projective Hjelmslev planes. We now summarize the possibilities for projective planes over a finite ring.

###### Corollary 3.6. Let $R$ be a finite ring and PG($2,R$) the projective plane over $R$. Then only four cases are possible: (a) $R$ has no zero divisors and hence is a field, and PG($2,R$) is an ordinary pappian projective plane (two distinct points are incident with exactly one line and dually); (b) $R$ is a local principal ideal ring (hence a chain ring = $H$–ring) and PG($2,R$) is a desarguesian projective Hjelmslev plane; (c) $R$ is local but not a principal ideal ring and PG($2,R$) is a desarguesian projective Klingenberg plane (but not a Hjelmslev plane); (d) $R$ is semilocal (but not local) and PG($2,R$) has non–transitive neighbor relations.

The fourth class (d) is the wildest, as it contains all finite rings which are not local (but which are necessarily semilocal due to finiteness, i.e. with a finite number of maximal ideals). Important examples of rings belonging to this class are the full matrix rings $M_{n}(q)$ over GF($q$). Projective planes over full matrix rings were first mentioned by Ree [73] and further studied by J.A. Thas [80], who also gave an interpretation of PG($2,M_{n}(q)$) in terms of the projective space PG($3n-1,q$). Other examples are the rings $\mathbb{Z}_{m}$ with $m\not=p^{r}$ (see [29]). Of special interest are also the rings of double numbers $\mathbb{B}(q)=$GF($q$) + GF($q$)$\,t$ with $t^{2}=t$. They possess exactly two maximal ideals. In [75] projective planes over $\mathbb{B}(q)$ are studied.

Examples of finite local rings that are not chain rings (class (c)) are provided by the rings GF($q$)[$x,y$]/$\langle x^{n},xy,y^{n}\rangle$ ($n>1$). The corresponding planes are finite desarguesian Klingenberg planes that are not Hjelmslev planes.

Class (b) contains many interesting examples, including the Galois rings GR($q^{n},p^{n}$) ($q=p^{r}$) and the rings $\mathbb{A}(p^{r},n)=$ GF($p^{r}$)$[x]/(x^{n})$ (called quasi–Galois rings in [14]). The rings $\mathbb{A}(p^{r},n)$ can also be interpreted as matrix rings, consisting of all matrices $(a_{ij})$ with elements belonging to GF($q$) and $a_{i,j+i-1}=a_{1j}$ and $a_{ij}=0$ for $i>j$. For $n=2$ the ring of dual numbers $\mathbb{D}(q)$ over GF($q$) is included. Projective planes over dual numbers were considered over a century ago by Corrado Segre [78]. Class (a), finally, consists of all the Galois fields GF($q$) with the associated projective planes PG($2,q$).

For finite (not necessarily desarguesian) projective Klingenberg and Hjelmslev planes a unique set of parameters (the order) can be given (see [48] and [28]): for any flag $(p,L)$ there are exactly $t$ points on $L$ neighboring $p$ and exactly $s$ points on $L$ not neighboring $p$. Moreover: the number of points = the number of lines = $s^{2}+st+t^{2}$; any line is incident with $s+t$ points; any point is incident with $s+t$ lines; any point has $t^{2}$ neighbors; any line has $t^{2}$ neighbors; $t|s$ and $r=\frac{s}{t}$ is the order of the projective plane that is the canonical epimorphic image of the Klingenberg plane; and $s\leq t^{2}$ or $t=1$.
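A quick numerical check of these parameters may be helpful; the example is ours and uses the values that Theorem 3.7 below assigns to chain rings. For $R=\mathbb{Z}_{9}$ (so $q=3$, $n=2$) one has $s=|R|=9$ and $t=|J(R)|=|(3)|=3$, giving $s^{2}+st+t^{2}=81+27+9=117$ points and as many lines, $s+t=12$ points on every line, $t^{2}=9$ neighbors for every point, $r=s/t=3$ (the order of the image plane PG($2,3$)), and indeed $s=9\leq t^{2}=9$.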
For a finite desarguesian projective Klingenberg plane this yields

###### Theorem 3.7.

(Drake and Jungnickel [27]) Let $R$ be a finite local ring. Then the projective Klingenberg plane (Hjelmslev plane in some cases) PG($2,R$) has parameters $s=|R|=q^{n}$ and $t=|J|=q^{n-1}$ with $q=p^{r}$ a prime power.

To conclude this section we consider rings of order 4. This is the smallest order for which there exist rings that are not division rings. There are four non–isomorphic rings (with unit) of order 4. The first is the Galois field GF($4$), which gives rise to the projective plane PG($2,4$). The second is the chain ring $\mathbb{Z}_{4}\cong GR(4,4)\cong W_{2}(2)$ of characteristic 4, the coordinate ring of a projective Hjelmslev plane. The third is the chain ring $\mathbb{D}(2)\cong\mathbb{A}(2,2)$ of characteristic 2 and with GF($2$) as a subfield (dual numbers over GF($2$)), which gives rise to another projective Hjelmslev plane (it is proved in [47] that the plane over $\mathbb{D}(2)$ is embeddable in PG($5,2$) while the plane over $\mathbb{Z}_{4}$ is not). Finally, the fourth is the non–local ring $\mathbb{B}(2)\cong GF(2)[t]/(t^{2}-t)$ of characteristic 2 (double numbers over GF($2$)), associated with a Veldkamp plane with non–transitive neighbor relation.

## 4 Finite ring–like structures and ring–like planes

Besides finite rings, even more general finite ring–like algebraic structures deserve a closer look in relation to geometry. Until now very little research has been done in this area (except for finite field–like algebraic structures). In the literature several generalizations of rings can be found. Among the most important are: non–associative rings, nearrings and semirings. For all these structures there are finite examples and a generalization of the concept “local ring” exists, which opens perspectives for Klingenberg–like geometries associated with these generalized rings.

### 4.1 Semirings

A semiring is a structure $(S,+,\cdot)$ with $(S,+)$ a commutative semigroup with identity element $0$, $(S,\cdot)$ a (not necessarily commutative) semigroup with identity element $1$ ($\not=0$), in which the left and right distributivity of multiplication over addition hold and in which $0$ is absorbing for multiplication: $a\cdot 0=0\cdot a=0$. Hence semirings differ from rings by the fact that elements do not always have an additive inverse (the additive group of a ring is replaced by a semigroup). Semirings were first introduced by Vandiver [83] in 1934 and in the past years there has been an enormous number of publications on the subject (see e.g. the work of Glazec [34] for a survey), mainly in relation to computer science and automata theory, but they also are interesting algebraic objects on their own, see [38] and [35]. Examples of finite semirings are $B(n,i)$ on the set $\\{0,1,\ldots,n-1\\}$ ($n\geq 2$ and $0\leq i\leq n-1$) with addition $\oplus$ defined by $a\oplus b=a+b$ if $0\leq a+b<n$ and $a\oplus b=c$ if $a+b\geq n$, with $c$ the unique number such that $c\equiv a+b$ (mod $n-i$) and $i\leq c\leq n-1$. Multiplication $\odot$ is defined in a similar way. In particular $B(n,0)$ is the ring $\mathbb{Z}_{n}$ of integers modulo $n$ and $B(2,1)$ is known as the boolean semiring $\mathbb{B}$. For other values of $n$ and $i$ one obtains semirings that are not rings. In semirings zero divisors and zero sums are of interest ($a$ is a zero sum of $S$ if there exists an element $b\not=0$ in $S$ such that $a+b=0$).
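To make the $B(n,i)$ arithmetic concrete, here is a minimal Python sketch (the helper names are ours, not from the literature) that implements $\oplus$ and $\odot$ and checks that $B(3,1)$ has neither zero sums nor zero divisors:

```python
def b_semiring(n, i):
    """Return the addition and multiplication of the semiring B(n, i)."""
    def wrap(x):
        # values < n are kept; larger values fall into the cycle {i, ..., n-1},
        # picking the unique c with i <= c <= n-1 and c = x (mod n-i)
        return x if x < n else i + (x - i) % (n - i)
    return (lambda a, b: wrap(a + b)), (lambda a, b: wrap(a * b))

add, mul = b_semiring(3, 1)
elems = range(3)
# zero sums: a such that a (+) b = 0 for some b != 0
zero_sums = [a for a in elems if any(add(a, b) == 0 for b in elems if b != 0)]
# zero divisors: a != 0 with a (.) b = 0 for some b != 0
zero_divs = [a for a in elems
             if a != 0 and any(mul(a, b) == 0 for b in elems if b != 0)]
print(zero_sums, zero_divs)  # [] [] -- B(3,1) is zero sum free and has no zero divisors
```

This observation is consistent with the corollary of [39] quoted next.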
In [39] it is proved that if $S$ is a finite commutative semiring then either every zero sum is a zero divisor or $S$ is a ring. As a corollary one has that a finite commutative semiring without zero divisors (a semidomain) either is zero sum free ($a+b=0$ always implies $a=b=0$) or is a finite domain (and hence a field). A semiring is called a semifield if $(S^{\ast},\cdot)$, with $S^{\ast}=S\setminus\\{0\\}$, is a group (in a semifield the multiplication need not be commutative, so the term semi division ring would be better). We must warn against confusion between the semifields considered here and the semifields defined in the context of non–desarguesian projective planes by e.g. Albert, Dickson, Knuth and others, which are also known under the name division algebras or distributive quasifields. Those are generalizations of division rings obtained by dropping the need for associativity of the multiplication, so $(S,+,\cdot)$ is in that context a semifield if $(S,+)$ is a group (which turns out to be commutative), $(S^{\ast},\cdot)$ is a loop, and the two distributivity laws of multiplication over addition hold (for a survey on those semifields, see e.g. [57]).

To my knowledge no research has been done yet on geometries over (finite) semirings that are not rings. So there may be opportunities. In particular a generalization to Klingenberg–like planes (over local semirings) seems possible, though not trivial as the concept of ideals in semirings is much more complicated. As a starting point for ideals in semirings and the concept of local semiring see e.g. [6] and [37].

### 4.2 Nearrings

A left nearring is a structure $(N,+,\cdot)$ with $(N,+)$ a (not necessarily commutative) group with identity element $0$, $(N,\cdot)$ a (not necessarily commutative) semigroup with identity element $1$ ($\not=0$), in which the left distributivity of multiplication over addition holds: $a\cdot(b+c)=a\cdot b+a\cdot c$. Right nearrings are defined similarly. Nearrings differ from rings by the fact that addition is not necessarily commutative and there is only distributivity on one side. Nearrings which are distributive on both sides are rings (the commutativity of the addition then follows automatically). Most of the material on nearrings can be found in the work of Pilz [70] and [71]. A (left or right) nearring is called a (left or right) nearfield if $(N^{\ast},\cdot)$ is a group, with $N^{\ast}=N\setminus\\{0\\}$. So nearfields resemble division rings, except that distributivity holds on one side only. Nearfields were first discovered by Dickson in 1905 and are useful in constructing examples of non–desarguesian projective planes. All finite nearfields were classified by Zassenhaus. They are either Dickson–nearfields or they belong to one of seven exceptional classes.

Little research has been done on geometry over nearrings that are not nearfields (except for the class of planar nearrings, which give rise to balanced incomplete block designs, but these nearrings do not possess a multiplicative identity element and therefore are less usable in the context of projective geometry). So another suggestion for research could be a treatment of plane projective geometries over (finite) nearrings that are not rings. The special case of Klingenberg planes (over local nearrings) and Hjelmslev planes (over $H$–nearrings) was already initiated in two general papers by E. Kolb, see [53] and [54], and by Törner in [82]. Local nearrings were introduced by Maxson in [65] and partially classified in [66].
Among the results are the fact that the additive group of a finite local nearring is always a $p$–group, and the existence of a natural epimorphism from a local nearring onto a local nearfield. Other results on finite nearrings can be found in [2], [20], [21], [22], [45], [58], [59], [64] and [87].

### 4.3 Alternative rings

Finally, we mention some results on non–associative rings. A non–associative ring is a structure $(A,+,\cdot)$ which satisfies all axioms for an (associative) ring with multiplicative identity element, except for the associativity of the multiplication. An alternative ring is a non–associative ring such that $a\cdot(a\cdot b)=a^{2}\cdot b$ and $(a\cdot b)\cdot b=a\cdot b^{2}$ for all $a,b\in A$. Alternativity is a weaker condition than associativity. If every nonzero element of an alternative ring $A$ is a unit, then $A$ is called an alternative division ring. Alternative division rings are used to construct a class of non–desarguesian projective planes, called Moufang planes. By the theorem of Artin–Zorn every finite alternative division ring is a field, hence finite Moufang projective planes are desarguesian (and pappian).

Generalizations to alternative rings that are not division rings are due to Baker, Lorimer and Lane. In [10] Moufang projective Klingenberg planes are defined as projective Klingenberg planes that are $(p,L)$–transitive for all flags, and it is proved that they can be coordinatized by a local alternative ring. In [8] several characterizations of local alternative rings are given and an analogue for non–associative chain rings and $H$–rings is defined properly. In the finite case it is proved that the concepts of alternative $H$–ring, left (or right) alternative chain ring and local alternative principal ideal ring are equivalent. Moreover, the theorem of Artin–Zorn is extended: any finite alternative chain ring (or $H$–ring) is associative [9]. Leaving out the condition of being local leads to more general alternative rings. It is hard to define projective ring planes over such rings in a suitable way. Faulkner has done it for alternative stable rank 2 rings in [30] (generalizing Veldkamp’s results for associative stable rank 2 rings) and for alternative rings in which any one–sided unit is two–sided in [31] (generalizing the planes of Barbilian).

## References

* [1] Y. Al–Khamees, The enumeration of finite principal completely primary rings, Abh. Math. Sem. Univ. Hamburg 51 (1981), 226–231.
* [2] B. Amberg, P. Hubert and Y. Sysak, Local nearrings with dihedral multiplicative group, Journal of Algebra 273 (2004), 700–717.
* [3] L. M. Arkhipov, Finite principal ideal rings, Math. Notes, 12 (1972), 656–659.
* [4] B. Artmann, Desarguessche Hjelmslev-Ebenen $n$-ter Stufe, Mitt. Math. Sem. Giessen, 91 (1971), 1–19.
* [5] K. Asano, Über Hauptidealringe mit kettensatz, Osaka Math. J. 1 (1949), 52–61.
* [6] R.E. Atani and S.E. Atani, Ideal theory in commutative semirings, Bull. Acad. Rep. Moldova 2 (57) (2008), 14–23.
* [7] P. Y. Bacon, Desarguesian Klingenberg planes, Trans. Am. Math. Soc., 241 (1978), 343–355.
* [8] C.A. Baker, N.D. Lane and J.W. Lorimer, Local alternative rings and finite alternative right chain rings, C. R. Math. Rep. Acad. Sci. Canada, 12 (1990), 53–58.
* [9] C.A. Baker, N.D. Lane and J.W. Lorimer, The Artin–Zorn theorem for finite punctally cohesive projective Klingenberg planes, Ars Combin., 29 (1990), 143–149.
* [10] C.A. Baker, N.D. Lane and J.W. Lorimer, A coordinatization for Moufang Klingenberg planes, Bull. Belg. Math. Soc. – Simon Stevin, 65 (1991), 3–22.
* [11] D.
Barbilian, Zur Axiomatik der projektiven ebenen Ringgeometrien I and II, Jahresbericht Deutsch. Math. Verein 50 (1940), 179–229 and 51 (1941), 34–76.
* [12] M. Behboodi and R. Beyranvand, On the structure of commutative rings with $p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{n}^{k_{n}}$ $(1\leq k\leq 7)$ zero divisors, European Journal of Pure and Applied Math. 3 (2) (2010), 303–316.
* [13] F. Bingen, Géométrie projective sur un anneau semi–primaire, Acad. Roy. Belg. Bull. Cl. Sci. 52 (1966), 13–24.
* [14] G. Bini and F. Flamini, Finite commutative rings and their applications, Kluwer, Boston–Dordrecht–London (2002).
* [15] H.H. Brungs and G. Törner, Embedding right chain rings in chain rings, Can. J. Math., 30 (5) (1978), 1079–1086.
* [16] C. J. Chikunji, On a class of finite rings, Comm. Alg., 27 (10) (1999), 5049–5081.
* [17] C. J. Chikunji, Enumeration of finite rings with Jacobson radical of cube zero, arXiv:math/9905030 (1999), 1–20.
* [18] W.E. Clarke and D.A. Drake, Finite chain rings, Abh. Math. Sem. Univ. Hamburg 39 (1973), 147–153.
* [19] W.E. Clark and J.J. Liang, Enumeration of finite commutative chain rings, J. Algebra 27 (3) (1973), 445–453.
* [20] J. Clay, The near–rings of groups of low order, Math. Z. 104 (1968), 364–371.
* [21] J. Clay and J. Malone, The near–rings with identities on certain finite groups, Math. Scand. 19 (1966), 146–150.
* [22] J. Clay and D. Doi, The near–rings with identity on alternating groups, Math. Scand. 23 (1968), 54–56.
* [23] B. Corbas, Rings with a few zero divisors, Math. Ann. 181 (1969), 1–7.
* [24] B. Corbas and P.D. Williams, Rings of order $p^{5}$, Part I and II, J. Algebra 231 (2000), 677–690 and 691–704.
* [25] A. Cronheim, Dual numbers, Witt vectors, and Hjelmslev planes, Geom. Ded. 7 (1978), 287–302.
* [26] P. De Winne, An extension of the notion “cross-ratio of an ordered 4-tuple of points of the projective line” to an ordered (n+3)-tuple of points (resp. hyperplanes) of the n-dimensional space over an arbitrary ring with identity, part I: the n-dimensional projective space $S_{n}$ over an arbitrary ring with identity, Simon Stevin, 47 (3-4) (1974), 139–159.
* [27] D.A. Drake and D. Jungnickel, Finite Hjelmslev planes and Klingenberg epimorphisms, in Rings and Geometry, NATO Adv. Study Inst., Istanbul, eds R. Kaya et al., Reidel, Dordrecht, (1985), 153–231.
* [28] D.A. Drake and H. Lenz, Finite Klingenberg planes, Abh. Math. Sem. Univ. Hamburg, 44 (1975), 70–83.
* [29] F. Eugeni and E. Galiè, Sui piani costruiti su anelli, Dipartimento M.E.T., Università di Teramo, Italy, (1991), 143–162.
* [30] J.R. Faulkner, Coordinatization of Moufang–Veldkamp planes, Geom. Ded. 14 (1983), 189–201.
* [31] J.R. Faulkner, Barbilian planes, Geom. Ded. 30 (1989), 125–181.
* [32] J.L. Fisher, Finite principal ideal rings, Canad. Math. Bull., 19 (3) (1976), 277–283.
* [33] G. Ganske and B. R. McDonald, Finite local rings, Rocky Mountain J. Math, 3 (4) (1973), 512–540.
* [34] K. Glazec, A short guide through the literature on semirings, Math. Inst. Univ. Wroclaw, Poland (1985).
* [35] J. Golan, Semirings and their applications, Kluwer, Dordrecht (1999).
* [36] M. González, On distinguishing local finite rings from finite rings only by counting elements and zero divisors, European Journal of Pure and Applied Math. 7 (1) (2014), 109–113.
* [37] V. Gupta and R. Chaudhari, Right local semirings, Asian-European Journal of Mathematics 6 (1) (2013), 1–5.
* [38] U. Hebisch and H. J. Weinert, Semirings and semifields, Handbook of Algebra part I, Elsevier, Amsterdam (1996), 425–462.
* [39] A. Hetzel and R. Lewis Lufy, On the relationship between zero–sums and zero–divisors of semirings, Kyungpook J. Math. 49 (2009), 221–233.
* [40] J.W.P. Hirschfeld, Projective geometries over finite fields, Oxford University Press, New–York (1998).
* [41] J. Hjelmslev, Einleitung in die allgemeine Kongruenzlehre. III, Danske Vid. Selsk. Mat.-Fys. Medd. 19 (1942), nr. 12.
* [42] T. Honold and I. Landjev, Linear codes over finite chain rings, Electr. J. Comb. 7 (2000), # R11.
* [43] T. Honold and I. Landjev, Codes over rings and ring geometries, in Current research topics in Galois geometry, eds. L. Storme and J. de Beule, Nova Science Publishers, New–York 7 (2011), 159–184.
* [44] X.-D. Hou, Finite commutative chain rings, Finite Fields and Their Applications 7 (2001), 382–396.
* [45] P. Hubert, Nearrings and a construction of triply factorized groups, Ph.D. Thesis, Universität Mainz (2005).
* [46] G. Janusz, Separable algebras over commutative rings, Trans. Am. Math. Soc., 122 (1966), 461–479.
* [47] D. Keppens and H. Van Maldeghem, Embeddings of projective Klingenberg planes in the projective space PG($5,\mathbb{K}$), Contrib. Alg. Geom., 50 (2) (2009), 483–493.
* [48] E. Kleinfeld, Finite Hjelmslev planes, Illinois J. Math., 3 (1959), 403–407.
* [49] W. Klingenberg, Projektive und affine Ebenen mit Nachbarelementen, Math. Z. 60 (1954), 384–406.
* [50] W. Klingenberg, Euklidische Ebenen mit Nachbarelementen, Math. Z. 61 (1954), 1–25.
* [51] W. Klingenberg, Desarguessche Ebenen mit Nachbarelementen, Abh. Math. Sem. Univ. Hamburg 20 (1955), 97–111.
* [52] W. Klingenberg, Projektive Geometrien mit Homomorphismus, Math. Ann 132 (1956), 180–200.
* [53] E. Kolb, Projective Klingenberg planes over nearrings, Journal of Geometry 46 (1993), 82–91.
* [54] E. Kolb, Hjelmslev planes over nearrings, Discrete Mathematics 155 (1996), 147–155.
* [55] F. Knüppel, Projective planes over rings, Resultate Math., 12 (1987), 348–356.
* [56] W. Krull, Algebraische theorie der Ringe I, II and III, Math. Annalen 8 (1923), 80–122, 91 (1924), 1–46 and 92 (1924), 183–213.
* [57] M. Lavrauw and O. Polverino, Finite semifields and Galois geometry, in Current research topics in Galois Geometry, ed. De Beule and Storme, NOVA Academic Publishers (2011).
* [58] S. Ligh, Near–rings with descending chain condition, Compositio Mathematica 21 (2) (1969), 162–166.
* [59] S. Ligh and J. J. Malone, Zero divisors and finite nearrings, 6 (1) (2013), 1–5.
* [60] J.W. Lorimer, Structure theorems for commutative Hjelmslev rings with nilpotent radicals, C. R. Math. Rep. Acad. Sci. Canada, 6 (3) (1984), 123–127.
* [61] J.W. Lorimer, Affine Hjelmslev rings and planes, Annals of Discr. Math., 37 (1988), 265–276.
* [62] J.W. Lorimer, The classification of compact right chain rings, Forum Math., 4 (1992), 335–347.
* [63] J.W. Lorimer and N.D. Lane, Desarguesian affine Hjelmslev planes, J. für die reine und angew. Math., 1 (1975), 336–352.
* [64] C. Maxson, On finite nearrings with identity, Amer. Math. Monthly 74 (1967), 1228–1230.
* [65] C. Maxson, On local near–rings, Math. Z. 106 (1968), 197–205.
* [66] C. Maxson, Local near–rings of cardinality $p^{2}$, Canad. Math. Bull. 11 (4) (1968), 555–561.
* [67] B. R. McDonald, Finite rings with identity, Marcel Dekker, New York (1974).
* [68] A.A. Nechaev, Finite rings with applications, in Handbook of Algebra, vol. 5, Elsevier (2008), 213–320.
* [69] A. Neumaier, Nichtkommutative Hjelmslev-Ringe, Festband für H. Lenz, Freie Universität Berlin, (1976), 200–213.
* [70] G. Pilz, Near–rings, North–Holland, Amsterdam, 2nd Edition (1983).
* [71] G. Pilz, Nearrings and nearfields, Handbook of Algebra part I, Elsevier, Amsterdam (1996), 463–498.
* [72] R. Raghavendran, Finite associative rings, Compositio Mathematica 21 (1969), 195–229.
* [73] R. Ree, On projective geometry over full matrix rings, Trans. Am. Math. Soc. 6 (1) (1955), 144–150.
* [74] A.S. Rybkin, Finite local rings of principal ideals, Math. Notes 28 (1981), 465–472.
* [75] M. Saniga and M. Planat, Projective Planes Over Galois double numbers and a geometrical principle of complementarity, arXiv:math/0601261v3, (2006), 1–9.
* [76] M. Satyanarayana, Characterization of local rings, Tôhoku Math. Journal, 19 (1967), 411–416.
* [77] B. Segre, Le geometrie di Galois, Ann. Math. Pura e Appl. 48 (1959), 1–96.
* [78] C. Segre, Le geometrie proiettive nei campi di numeri duali, Atti Accad. Sci. Torino, 47 (1911), 114–133 and 164–185.
* [79] L.A. Skornyakov, Rings chain–like from the left (Russian), Izv. Vyssh. Uchebn. Zaved. Mat., 4 (1966), 114–117.
* [80] J. A. Thas, The $m$–dimensional projective space $S_{m}(M_{n}(GF(q)))$ over the total matrix algebra $M_{n}(GF(q))$ of the $n\times n$–matrices with elements in the Galois field $GF(q)$, Rend. Mat. 4 (1971), 459–532.
* [81] G. Törner, Eine klassifizierung von Hjelmslev–ringen und Hjelmslev–Ebenen, Mitt. Math. Sem. Giessen, 107 (1974), 1–77.
* [82] G. Törner, Über den Stufenaufbau von Hjelmslev–Ebenen, Mitt. Math. Sem. Giessen, 126 (1977), 1–43.
* [83] H. S. Vandiver, Note on a simple type of algebra in which cancellation law of addition does not hold, Bull. Amer. Math. Soc. 40 (1934), 914–920.
* [84] F.D. Veldkamp, Projective planes over rings of stable rank 2, Geom. Dedicata 11 (1981), 285–308.
* [85] F.D. Veldkamp, Projective ring planes: some special cases, in Atti Conv. Geometria combinatoria e di incidenza, La Mendola, 1982, Rend. Sem. Mat. Brescia 7 (1984), 609–615.
* [86] F.D. Veldkamp, Geometry over rings, Handbook of Incidence Geometry, Elsevier, Amsterdam (1995), 1033–1084.
* [87] G. Wendt, On zero divisors in near–rings, Int. Journal of Algebra 3 (2009), 21–32.
* [88] R.S. Wilson, On the structure of finite rings I, Comp. Math 26 (1973), 79–93.
* [89] R.S. Wilson, On the structure of finite rings II, Pacific J. Math. 51 (1974), 317–325.
* [90] B. R. Wirt, Finite non–commutative local rings, Ph.D. Thesis, University of Oklahoma (1972).
* [91] T. Wu, H. Yu and D. Lu, The structure of finite local principal ideal rings, arXiv:1105.5179v3, (2011).
# Automatic Compilation of Resources for Academic Writing and Evaluating with Informal Word Identification and Paraphrasing System

Seid Muhie Yimam1, Gopalakrishnan Venkatesh2, John Sie Yuen Lee3 and Chris Biemann1

Universität Hamburg, Germany1, International Institute of Information Technology, Bangalore, India2, City University of Hong Kong, Hong Kong SAR3

<EMAIL_ADDRESS>

###### Abstract

We present the first approach to automatically building resources for academic writing. The aim is to build a writing aid system that automatically edits a text so that it better adheres to the academic style of writing. On top of existing academic resources, such as the Corpus of Contemporary American English (COCA) Academic Word List, the New Academic Word List, and the Academic Collocation List, we also explore how to dynamically build such resources that would be used to automatically identify informal or non-academic words or phrases. The resources are compiled using different generic approaches that can be extended for different domains and languages. We describe the evaluation of the resources with a system implementation. The system consists of informal word identification (IWI), academic candidate paraphrase generation, and paraphrase ranking components. To generate candidates and rank them in context, we have used the PPDB and WordNet paraphrase resources. We use the _Concepts in Context_ (CoInCo) “All-Words” lexical substitution dataset both for the informal word identification and paraphrase generation experiments. Our informal word identification component achieves an F-1 score of 82%, significantly outperforming a stratified classifier baseline. The main contribution of this work is a _domain-independent_ methodology to build targeted resources for writing aids.

Keywords: academic writing, academic word, academic phrase, informal word identification, academic text paraphrasing

## 1 Introduction

We present the first approach to automatically building resources for an academic writing aid system. Academic writing aid systems help in automatically editing a text so that it better adheres to the academic style of writing, particularly by choosing a better academic word in a given domain. In the context of academic paraphrasing tasks, the resources are mainly words or phrases that are more appropriate to use in an academic writing style. Moreover, the academic resources might vary from domain to domain, as some words or phrases are used much more extensively in one domain than in another. The first step in building an academic writing aid tool is to collect resources that determine whether a given phrase follows the style of writing in academia. This involves analyzing a given sentence and determining whether the lexemes of the sentence are well-selected academic words and phrases or not. To evaluate the resources compiled, we have to build a system, analogous to the lexical substitution and text simplification tasks, for example, [Szarvas et al., 2013, Štajner and Saggion, 2018], that consists of informal word identification, academic candidate generation, and candidate paraphrase ranking components (see Figure 1).
While it is possible to follow the same approaches as in lexical substitution and text simplification for the academic text rewriting task, the main challenge for the academic paraphrasing task is the collection of resources for academic texts. The following are the main objectives of building academic resources:

1. Identify suitable academic and non-academic datasets that are to be used to build academic resources.
2. Design a generic, _domain-independent_ approach to extract academic resources.
3. Evaluate the quality of the collected resources and use these resources for informal word identification (IWI) and academic paraphrasing systems.

The informal word identification (IWI) component automatically identifies informal words (see Section 4.2) that are going to be replaced with academic paraphrases. The candidate generation and ranking component determines the best academic candidate paraphrase to replace the informal words. The ultimate goal of this research work is to integrate the informal word identification, candidate generation, and paraphrase ranking components into writing aid tools, for example word processors or text-composing software such as LaTeX packages, to automatically assist users in academic text composing.

In this work, we have targeted the following research questions: 1) How to build academic resources (words or phrases), which are used to replace informal or less academic expressions in academic texts? 2) How to build a system that can be used to evaluate the collected resources?

In Section 2, a brief review of related work is presented. In Section 3, we discuss how to build academic resources using reference corpora and evaluate the quality of the resource. In Section 4, we present the approaches that are used to build an informal word identification and paraphrasing system for academic rewriting. Setups of the academic paraphrasing systems and the experimental results are discussed in Section 5. Analysis of the system results and the conclusion of the research are presented in Section 6 and Section 7 respectively.

## 2 Previous Work

In this section, we review previous work in lexical substitution, a closely related task, and discuss how the academic text rewriting system potentially differs. In essence, our system is similar to lexical substitution and text simplification tasks, in that both focus on the rewriting of an original text towards a given goal. Lexical substitution systems mainly focus on rewriting texts by replacing some of the words or phrases without altering the original meaning [Szarvas et al., 2013, Štajner and Saggion, 2018]. The work by Guo et al. (2018) targeted text simplification based on a sequence-to-sequence deep neural network model, where its entailment and paraphrasing capabilities are improved via multi-task learning. While the complex word identification (CWI) task focuses on identifying lexical units that pose difficulties to understand the sentence [Yimam et al., 2017b, Yimam et al., 2017a, Yimam et al., 2018, Paetzold and Specia, 2016], our informal word identification (IWI) component focuses on identifying words that do not fit or adhere to the academic style of writing.

The work by Riedl et al. (2014) focuses on the lexical substitution task, particularly for medical documents. They have relied on a Distributional Thesaurus (DT), computed on medical texts, to generate synonyms for target words.
Existing resources for academic writing are limited to precompiled lists of words such as the Corpus of Contemporary American English (COCA) [Gardner and Davies, 2013] and the New Academic Word List 1.0 (NAWL) [Browne et al., 2013] vocabulary lists. Regarding phrases (multi-word expressions) for academic writing, the only available resource is the list of academic bi-grams compiled by Pearson (academic collocation list: https://pearsonpte.com/organizations/resea). However, these resources are 1) limited to a certain domain and target writers (mostly L2 learners and students), 2) fixed in their vocabulary, thus requiring manual work for any extension, and 3) limited to uni-gram and bi-gram lists. In this work, we build academic resources that are more generic and can be built from existing reference corpora. In addition to uni-gram and bi-gram resources, we also design a system that can produce resources up to a length of four words (quad-grams). As far as we know, the only system available for academic writing is the work of Lee et al. (2018), which addresses a different aspect, namely sentence restructuring based on nominalizing verbal expressions.

## 3 Building Academic Resources

In this section, we first discuss the existing academic resources, how they are built, and their limitations. Then, we present our approach for building academic resources from different reference corpora. Finally, we discuss the quality of the collected resources against two evaluation measures, namely comparison with the existing resources and manual evaluation of the academic fitness of the resources.

### 3.1 Existing Resources for Academic Writing

In this subsection, we present the existing academic word and phrase lists, which will be used to evaluate the quality of the dataset we build from reference corpora.

#### 3.1.1 Academic Vocabulary

There are some efforts in building lists of vocabularies or words for academic writing. Some of them are created by analyzing text from academic writing corpora such as journals, theses, and essays. One such resource is the Corpus of Contemporary American English (COCA) [Gardner and Davies, 2013] vocabulary list, which contains about 3,000 lemmas derived from a 120-million-word academic sub-corpus of the 560-million-word COCA corpus. Similarly, the New Academic Word List 1.0 (NAWL) [Browne et al., 2013], a reference resource for second language learners of English, was built in the same way as the COCA list and is selected from an academic corpus of 288 million words.

#### 3.1.2 Academic Phrases

Academic phrases are lists of collocated words (multi-word expressions) that are mostly used in academic writing. The list from Ackermann and Chen (2013) comprises 2,468 bi-gram collocations. The list is compiled from the written curricular component of the Pearson International Corpus of Academic English (PICAE), comprising over 25 million words. However, the academic phrases, like the academic word lists, are mostly used as a guideline (study material) to practice academic writing.

### 3.2 Academic and Non-Academic Reference Corpora

The existing resources presented in Section 3.1 are prepared mostly as references or study guidelines for academic writers. However, to build automatic writing support, it is required to have more comprehensive and larger resources that can also be updated dynamically. In addition to single-word and bi-gram lists, it would also be beneficial if the resource included longer sequences of words.
Hence, we have further extended the academic phrase list to include up to four-gram phrases. The resource helps the academic paraphrasing or rewriting system in 1) identifying words or phrases in a text that are less academic and 2) providing alternative academic words or phrases that are more relevant to the contexts presented.

Figure 1: Frequencies of the highest occurring tri-grams collected from the reference corpora based on our approach.

To this end, we have compiled a list of academic phrases that are extracted from the ACL Anthology Reference Corpus (ACLAC) [Bird et al., 2008]. This corpus contains 22,878 scholarly publications (articles) about Computational Linguistics. To understand the syntactic difference of an academic corpus from a non-academic corpus, we have used the Amazon Review Full Score Dataset [Zhang et al., 2015] as our non-academic reference. The non-academic dataset is constructed by randomly taking 600,000 training samples and 130,000 testing samples for each review score from 1 to 5 [Zhang et al., 2015]. In this paper, a review refers to the review text from the training sample.

Resource | Size | Coverage (%)
---|---|---
COCA | 3,015 | 95.39
NAWL | 963 | 99.90
Academic phrases | 2,468 | 79.34

Table 1: Coverage of the existing resources for academic writing in our reference ACLAC corpus.

The above two corpora can be considered a good fit, as they show a high match with the existing academic vocabulary and phrase lists, as shown in Table 1. From Table 1, we can see that 95% of the academic words from COCA and 99.90% of the academic words from NAWL are represented in the ACLAC corpus. Similarly, around 80% of the bi-grams from the academic phrases (PICAE) are contained in the ACLAC corpus.

### 3.3 Approach to Build the New Academic Resource

On analyzing the corpora, we noticed that the non-academic corpus is much larger (in terms of the number of words) than the academic corpus. Therefore, we downsampled the non-academic text (to have comparable resources in terms of size) and ensured that the total numbers of words in both corpora are comparable. As a part of the pre-processing step, we clean the corpus (removing special characters) and lowercase each word. We have considered a total of 991,798 reviews, which results in 75,184,498 tokens. Using NLTK’s (https://www.nltk.org/) bi-, tri- and quad-gram multi-word expression finder, we have extracted phrases from the two corpora (ACLAC and the Amazon Review Full Score Dataset) and also computed the frequency distributions of these phrases across both corpora, as can be seen in Figure 1.

The phrases extracted from both corpora can be used to naively assess the distribution across the two domains. However, we have followed two different widely adopted approaches to extract representative phrases from a corpus, known as keyphrases. The first approach uses Term Frequency-Inverse Document Frequency (TF-IDF), one of the most important statistics showing the relative importance of a term in a document in comparison to the corpus. The importance increases proportionally to the number of times a word appears in the document, while its weight is lowered when the term occurs in many documents. We used the scikit-learn (https://scikit-learn.org/) implementation of TF-IDF to compute the scores of the different n-grams and thereby select the phrases that have maximum TF-IDF scores as keyphrases.
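A minimal sketch of this selection step, assuming scikit-learn’s TfidfVectorizer and illustrative parameter values (the exact configuration of our experiments may differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_keyphrases(documents, ngram_range=(1, 4), top_k=1000):
    """Score n-grams with TF-IDF and return the top-scoring ones as keyphrases."""
    vectorizer = TfidfVectorizer(ngram_range=ngram_range, lowercase=True)
    tfidf = vectorizer.fit_transform(documents)  # rows: documents, columns: n-grams
    # use the maximum TF-IDF score an n-gram reaches in any single document
    scores = tfidf.max(axis=0).toarray().ravel()
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(terms, scores), key=lambda pair: pair[1], reverse=True)
    return [term for term, _ in ranked[:top_k]]

# documents: a list of raw texts, one per document (see the granularity note below)
```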
In the ACLAC corpus, we have considered an article as one document, while for the Amazon Review dataset, a review is considered as a single document. In the second approach, we explore keyphrase extraction techniques based on part-of-speech sequences. We have employed _EmbedRank_, an unsupervised keyphrase extraction tool based on sentence embeddings [Bennani-Smires et al., 2018]. We consider only those phrases that consist of zero or more adjectives followed by one or multiple nouns [Wan and Xiao, 2008]. While using the official implementation (https://github.com/swisscom/ai-research-keyphrase-extraction), we also explored the possibility of using the _spaCy_ (https://spacy.io/) POS tagger for keyphrase extraction in our corpora, which has a permissive license, allowing us to redistribute our resource generation system as an open-source project.

As per the heuristic approach followed in the COCA word list compilation, we only retain those phrases that occur at least 50% more frequently in the academic portion of the corpora than would otherwise be expected. In other words, the ratio of the academic frequency of a term (in the ACLAC dataset) to its non-academic frequency (in the Amazon Review Full Score Dataset) should be 1.50 or higher [Gardner and Davies, 2013]. Using a similar approach, we have also created the non-academic resources, which are also used to evaluate the quality of the academic resources in the human evaluation experiment (cf. Section 3.5).

### 3.4 Newly Collected Academic Resources

Based on the two keyphrase extraction approaches discussed in Section 3.3 (TF-IDF and EmbedRank based keyphrase extraction), we have compiled a total of 6,836 academic phrases (5,275 from EmbedRank and 1,900 from the TF-IDF approach). From Table 2, we can see that most of the academic keyphrases are extracted using the EmbedRank approach.

Approach | Uni-gram | Bi-gram | Tri-gram | Quad-gram
---|---|---|---|---
EmbedRank (newly collected) | 1,267 | 3,848 | 156 | 4
TF-IDF (newly collected) | 1,090 | 690 | 109 | 11
COCA (existing) | 3,016 | 0 | 0 | 0
NAWL (existing) | 960 | 0 | 0 | 0
PICAE (existing) | 0 | 2,468 | 0 | 0

Table 2: Academic word and phrase lists from the newly collected as well as from the existing resources.

### 3.5 Manual Evaluation of Resources

From the automatically compiled list of resources (words and phrases), we have randomly sampled 520 words and phrases, comprising 155 uni-grams, 100 bi-grams and 5 tri-grams from each of the compiled academic and non-academic phrase lists. We then distributed the word and phrase lists to a total of 9 annotators (Ph.D. and postdoctoral researchers) and requested the participants to label each entry as academic or non-academic. The sampled words and phrases were each evaluated by two sets of annotators, who labeled the entries with an inter-annotator agreement of 68.22%.

### 3.6 Results and Discussions on the Collected Resources

While analyzing the COCA list, we noticed that it contains a few stop words such as _both_ and _above_. Hence, while relying on TF-IDF, we have considered extracting academic resources in different scenarios. In the first, we remove stop words as a part of the preprocessing step, and in the second we use the whole corpus as it is. The system proposed by us relies on the relative frequencies in the reference corpora, which can be computed independently of the language used. Thus the compilation of such an academic resource (through keyphrase extraction) can be considered language agnostic.
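The following sketch makes this language-independent, relative-frequency criterion from Section 3.3 concrete (the dictionary inputs are simplified assumptions; in practice the raw counts must first be normalized to comparable corpus sizes):

```python
def academic_phrases(acad_freq, non_acad_freq, threshold=1.5):
    """Keep phrases whose academic frequency is at least 1.5x the non-academic one."""
    selected = []
    for phrase, f_acad in acad_freq.items():
        f_non = non_acad_freq.get(phrase, 0)
        # phrases unseen in the non-academic corpus trivially pass the ratio test
        if f_non == 0 or f_acad / f_non >= threshold:
            selected.append(phrase)
    return selected

# example: "in this paper" occurs 300 times in ACLAC and 40 times in the reviews
print(academic_phrases({"in this paper": 300}, {"in this paper": 40}))
# ['in this paper'], since 300 / 40 = 7.5 >= 1.5
```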
While performing the human evaluation, the annotators were asked to classify whether the given phrase is academic or not. The evaluation would have been more rigorous if they had had to classify the phrases given the context in which the term occurred. The annotators have at times labeled an entry as both academic and non-academic. Consider the word _attention_: it was used both in an academic (ACLAC) and a non-academic (Amazon Review Full Score Dataset) context, for example as “LSTM with attention” and “the kid’s attention to the game” respectively.

## 4 Evaluating the Resources for the Academic Rewriting System

### 4.1 Academic Words

We define a word as academic or formal if it is in one of the following lists of academic words and phrases: 1) the keyphrases (up to four-grams) compiled by our system (cf. Section 3.3; comprising 6,836 phrases), 2) the COCA list [Davies, 2012], and 3) the New Academic Word List [Browne et al., 2013] (http://www.newgeneralservicelist.org/nawl-new-academic-word-list). Some example academic words are shown in Table 3. The academic word lists are also extended to phrases or multi-word expressions. Pearson has published a set of academic bi-grams (academic collocation list: https://pearsonpte.com/organizations/resea). Words like _best_, _almost_, and _way_ are not by themselves _academic_, but they can be combined with other words to form academic expressions such as _best described_, _almost identical_, and _appropriate way_.

### 4.2 Informal Words

The naive approach is to attempt to rewrite every non-academic word, using our definition above. That is a misplaced goal, however, since even the average document in the BAWE corpus [Alsop and Nesi, 2009] contains a considerable number of words outside the list, including function words and other words commonly used in all English documents. We define a word as informal if it is a non-academic term that can be paraphrased by an academic term. If the term is academic, or it is non-academic but does not have an academic paraphrase, it is termed formal.

### 4.3 Architecture

Figure 2: Architecture of the system.

As shown in Figure 2, our proposed system consists of four components, analogous to lexical simplification systems [Paetzold and Specia, 2017]. The components of our system are informal word identification (IWI), paraphrase generation, candidate selection, and paraphrase ranking.

#### 4.3.1 Informal Word Identification

The informal word identification (IWI) component labels each word as _informal_ or not. The system attempts to paraphrase only the informal words in the rest of the pipeline. Similar to CWI [Yimam et al., 2017b, Yimam et al., 2017a, Yimam et al., 2018, Paetzold and Specia, 2016], IWI is more accurate when the word is considered in context. The word _big_, for example, may need to be paraphrased to _major_ in the context of “_This article makes two big contributions._” It should not be paraphrased, however, when it is part of the expression _big data_.

#### 4.3.2 Paraphrase Generation, Selection, and Ranking

Given an informal word, this step generates a list of substitution candidates. While there are different approaches to generate candidates for target words, such as using existing paraphrase resources like WordNet and a Distributional Thesaurus (see, e.g., Yimam et al. (2016)), we depend solely on the CoInCo [Kremer et al., 2014], WordNet [Miller, 1995], and paraphrase database (PPDB) [Pavlick et al., 2015] resources to generate candidates.
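A simplified sketch of the generation and selection steps, shown here with NLTK’s WordNet interface only (CoInCo and PPDB lookups would follow the same filter-by-academic-list pattern; the helper name is illustrative and the WordNet corpus is assumed to be downloaded via nltk.download):

```python
from nltk.corpus import wordnet as wn  # assumes nltk.download('wordnet') has been run

def generate_candidates(word, academic_words):
    """Collect WordNet synonyms of `word` and keep only the academic ones."""
    candidates = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            candidate = lemma.name().replace("_", " ")
            if candidate != word:
                candidates.add(candidate)
    # candidate selection: retain only candidates found in the academic resource
    return sorted(c for c in candidates if c in academic_words)

# e.g. for the informal target 'say', with a toy academic word list
print(generate_candidates("say", {"report", "state", "claim"}))
```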
Once the candidates are generated, only those candidates that are academic words are retained for the paraphrase ranking component. Given a list of academic substitution candidates, the paraphrase ranking component finds the one that fits best in the context. The detailed approach is presented in Section 4.4.

Academic words | report, state, claim…
---|---
Non-academic words | say, declare, mention, allege…

Table 3: Example of academic and non-academic words based on our academic resources.

### 4.4 Datasets for IWI and the Paraphrasing Components

For this evaluation, we derive our dataset from a lexical substitution dataset called Concepts in Context (CoInCo) [Kremer et al., 2014]. The CoInCo dataset is an _All-Words_ lexical substitution dataset, where all words that could be substituted are manually annotated. The corpus is sampled from the newswire and fiction genres of the Manually Annotated Sub-Corpus (MASC) (http://www.anc.org/data/masc/). While the targets (words that are going to be substituted) are used to build the informal word identification dataset, the candidates are further processed to perform the academic paraphrase ranking task. A total of 1,608 training and 866 test sentences are compiled out of 2,474 sentences from the CoInCo dataset. Statistics on the IWI dataset are shown in Table 5.

#### 4.4.1 Building the IWI Dataset

We automatically generated an IWI dataset from CoInCo as follows. For each non-academic target word, we determine if its substitution candidates include at least one academic word. If so, it is labeled as informal; otherwise, it is labeled as formal. All academic target words and all words without substitution candidates are labeled as formal. An example is given in Example 4.1 and Table 4.

CoInCo annotation | Pacific First Financial Corp said[paraphrases: report, state, detect] shareholders
---|---
IWI dataset | Pacific[F] First[F] Financial[F] Corp[F] said[I] shareholders[F]

Table 4: Transformation of the CoInCo dataset into the IWI dataset ([I]–informal, [F]–formal), with respect to the academic word list in Table 3.

###### Example 4.1.

Sentence: Pacific First Financial Corp said shareholders …

CoInCo annotation: Target word: said. Paraphrases: report, state, claim, allege, announce, mention, declare

IWI dataset ([I]–informal, [F]–formal): Pacific[F] First[F] Financial[F] Corp[F] said[I] shareholders[F]

Dataset | #Tokens (I) | #Tokens (F) | #Types (I) | #Types (F)
---|---|---|---|---
IWI training | 6,783 | 3,358 | 2,266 | 1,509
IWI test | 3,666 | 1,822 | 1,577 | 994

Table 5: Statistics on the IWI dataset. _#Tokens_ shows the total number of informal (_I_) and formal (_F_) tokens, while _#Types_ shows the unique occurrences of tokens in the IWI training and test sets.

#### 4.4.2 Paraphrase Candidates

To generate non-academic to academic word pairs for paraphrasing, we used the paraphrases (word pairs) in CoInCo, WordNet, and PPDB as the starting point. For the CoInCo dataset, we have only included those word pairs where: 1) the target word is non-academic, 2) the substitution candidate is academic, and 3) the target word has a higher word frequency than the substitute candidate in our academic resources. Since the academic resource is not exhaustive, some proper academic terms may be mistakenly considered as non-academic. The third requirement aims to prevent these words from being substituted. For example, from the sentence in Example 4.1, we obtained the word pairs say:report, say:state, and say:claim.
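The labeling rules of Section 4.4.1 and the pair-extraction conditions of Section 4.4.2 can be summarized in a small sketch (the data structures are simplified stand-ins for the actual CoInCo records, and the function name is ours):

```python
def label_and_pair(target, substitutes, academic, freq):
    """Label a CoInCo target word and extract non-academic:academic word pairs."""
    if target in academic:
        return "F", []                     # academic targets are formal
    pairs = [(target, s) for s in substitutes
             if s in academic and freq.get(target, 0) > freq.get(s, 0)]
    academic_subs = [s for s in substitutes if s in academic]
    label = "I" if academic_subs else "F"  # informal iff an academic paraphrase exists
    return label, pairs

academic = {"report", "state", "claim"}
freq = {"say": 1000, "report": 300, "state": 400, "claim": 200}
print(label_and_pair("say", ["report", "state", "claim", "mention"], academic, freq))
# ('I', [('say', 'report'), ('say', 'state'), ('say', 'claim')])
```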
We have collected a total of 23,476 word pairs from the CoInCo Training Set. The dataset is prepared with 4 candidates for each informal target, where 2 candidates are academic and 2 candidates are non-academic. When we do not have enough appropriate candidates, we extract further candidates from WordNet [Miller, 1995] and PPDB [Pavlick et al., 2015]. Table 6 shows the statistics of target words extracted from the CoInCo dataset, where 59% of the informal words have possible candidate paraphrases.

### 4.5 Academic Paraphrase Corpus

In general, any existing paraphrase or lexical substitution corpus can be converted into an academic paraphrase corpus with the following steps: 1) Discard all academic target words since they do not need to be paraphrased. 2) Remove all non-academic substitution candidates for the remaining (non-academic) target words. If no candidate is left after step (2), also remove that target word.

# target words (original) | # target words (our corpus) | Paraphrase coverage (%)
---|---|---
5,480 | 3,250 | 59.30

Table 6: Statistics on our evaluation dataset. The last column shows the percentage of non-academic words in the corpus for which paraphrases can be obtained.

### 4.6 Informal Word Identification Models

We trained three Support Vector Machine (SVM) classifiers with a Radial Basis Function kernel, using scikit-learn (https://scikit-learn.org/), with different feature sets. We use the following features:

Word frequency: We use word frequencies 1) in the Beautiful Data (https://norvig.com/ngrams/), which are derived from the Google Web Trillion Word Corpus, 2) in the general COCA list, and 3) in the ACL anthology corpus [Bird et al., 2008].

Word embedding: We have used GloVe [Pennington et al., 2014] word embeddings to compute the cosine similarity between the word and the sentence (the embedding of the sentence is calculated by averaging the embeddings of the words in the sentence). We also explore the option of using the Euclidean distance between the word and the sentence as a feature while training the classifier.

Part-of-speech tag (POS): The POS tag of the word obtained from the TreeTagger (https://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/).

Word-level features: We use the word length and the number of vowels as features for training the classifier.

### 4.7 Paraphrase Ranking Models

In order to rank the best candidates for academic rewriting, we have followed the learning-to-rank machine learning approach, where candidates are ranked based on their relevance score. The number of annotators who selected the given candidate is used as the relevance score. The TF-Ranking deep learning model provided by the _TensorFlow Ranking_ library (https://github.com/tensorflow/ranking) [Pasumarthi et al., 2019] is used to build the paraphrase ranking model.

## 5 Experiments

### 5.1 Informal Word Identification

We trained the IWI classifier on the CoInCo Train Set using SVM. Similar to most CWI evaluations, we evaluate the performance of the system with the following metrics:

Precision: The number of correct informal targets, out of all targets proposed by the system.

Recall: The number of correct informal targets, out of all informal words that should be paraphrased.

F-measure: The harmonic mean of precision and recall.

Table 7 shows IWI precision and recall on the CoInCo Test Set. We use a simple stratified randomization algorithm from scikit-learn as a baseline system. The proposed algorithm (SVM classifier) achieves a better overall performance, with an F-score of 0.8204.
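A sketch of the feature extraction assumed above (glove is a preloaded word-to-vector mapping and freqs holds the three frequency lists; both names are illustrative, and the POS feature is omitted for brevity):

```python
import numpy as np

def sentence_vector(words, glove):
    """Average the GloVe vectors of the in-vocabulary words of the sentence."""
    vecs = [glove[w] for w in words if w in glove]
    return np.mean(vecs, axis=0)

def iwi_features(word, sentence_words, glove, freqs):
    """Build the feature vector of one target word for the SVM classifier."""
    sent = sentence_vector(sentence_words, glove)
    vec = glove.get(word, np.zeros_like(sent))
    cos = float(np.dot(vec, sent) /
                (np.linalg.norm(vec) * np.linalg.norm(sent) + 1e-8))
    euc = float(np.linalg.norm(vec - sent))
    return [
        freqs["web"].get(word, 0),         # Beautiful Data / web frequency
        freqs["coca"].get(word, 0),        # general COCA frequency
        freqs["acl"].get(word, 0),         # ACL anthology frequency
        cos,                               # cosine similarity word vs. sentence
        euc,                               # Euclidean distance word vs. sentence
        len(word),                         # word length
        sum(ch in "aeiou" for ch in word), # number of vowels
    ]
```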
As can be seen in Table 7, the following features work best for the IWI task: frequencies, cosine similarity, and Euclidean distance.

Method | Precision | Recall | F-score
---|---|---|---
Baseline | 0.6679 | 0.6787 | 0.6733
SVM Fe1 | 0.7584 | 0.8933 | 0.8204
SVM Fe2 | 0.7650 | 0.8748 | 0.8162
SVM Fe3 | 0.7552 | 0.8912 | 0.8176

Table 7: Precision and recall on the informal word identification task. The baseline system has been set up using the stratified classifier from scikit-learn, which generates predictions by respecting the training set’s class distribution. Fe1 = (frequencies, cosine similarity), Fe2 = Fe1 + (Euclidean distance), Fe3 = all features.

### 5.2 Academic Paraphrasing

We evaluate the system performance on automatically generating academic paraphrases and ranking them. Following standard evaluation metrics in lexical simplification, we report the Mean Reciprocal Rank (MRR, https://en.wikipedia.org/wiki/Mean_reciprocal_rank) metric. The model from the TF-Ranking library [Pasumarthi et al., 2019] has been trained to re-rank the candidates on the CoInCo test set. The model was trained using the _Adagrad_ optimizer with a learning rate of 0.05. Experiments were performed with various loss functions (_pairwise_logistic_loss_ and _softmax_loss_) and different numbers of _steps_ (50, 100 and 200), where steps are the number of training iterations executed. Table 8 shows the experimental results.

Loss | Steps | MRR
---|---|---
Logistic | 50 | 0.8861
Logistic | 100 | 0.8926
Logistic | 200 | 0.8895
Softmax | 50 | 0.8893
Softmax | 100 | 0.8895
Softmax | 200 | 0.8914

Table 8: Academic paraphrasing performance on the CoInCo Test Set using the MRR ranking metric.

## 6 Analysis of Results

For the informal word identification task, our models have a slightly lower precision as our dataset is not balanced (we have more informal words than formal words, as shown in Table 5). From an error analysis, we find that even if a term is academic in general, its usage in the test dataset may lean towards the informal. For example, in the sentence “It was last February, after the winter break, that we moved in together.”, _break_ is labeled as academic but should be labeled as informal. This issue could be solved by further enhancing the dataset, employing human annotators during the resource compilation process. Similarly, some of the errors in the system’s predictions are to be attributed to the annotation process of the test set. For example, in the sentence “They included support for marine reserves and money for fisheries management reform.”, _reserves_ is annotated as informal while the system identified it as formal. In general, while bootstrapping the academic resource compilation and the informal word identification tasks, a minimal intervention of human annotators would enhance the overall system. Furthermore, the integration of BERT or another contextualized embedding model [Devlin et al., 2019] could also help to improve the performance of the system. Contextualized word embeddings provide word vector representations based on the context. As the vector representation of a word varies with the context, they implicitly provide a model for word sense disambiguation (WSD).

## 7 Conclusion and Future Direction

In the realm of academic text writing, we explored how to compile academic resources, automatically identify informal words (words that are less formal for academic writing), and provide better substitutes.
We have used a generic approach to compile the academic resources, which can be easily transferred to other domains or languages as it only requires a text corpus. The academic text rewriting system, analogous to lexical substitution systems, consists of informal word identification, candidate generation, candidate selection, and ranking components. As far as we know, this is the first experiment towards the development of academic writing support for academia; while there are commercial tools (for example Grammarly, https://www.grammarly.com/), we do not know how those systems operate. We envision this system to be embedded into open-source academic writing aid tools, where the academic resources are used to detect informal terms and propose academic substitutes. For the resource compilation process, it would be desirable to extend the EmbedRank approach to extract keyphrases beyond the adjective and noun POS tag patterns, especially to cover verbs used in academic contexts. Source code and resources of this paper are released publicly on the GitHub repository (https://github.com/uhh-lt/par4Acad) under permissive licenses (ASL 2.0, CC-BY).

## Acknowledgments

This work was partially funded by a HKSAR UGC Teaching & Learning Grant (Meeting the Challenge of Teaching and Learning Language in the University: Enhancing Linguistic Competence and Performance in English and Chinese) in the 2016-19 Triennium.

## 8 Bibliographical References

* Ackermann and Chen, 2013 Ackermann, K. and Chen, Y.-H. (2013). Developing the Academic Collocation List (ACL) - A corpus-driven and expert-judged approach. Journal of English for Academic Purposes, 12:235–247.
* Alsop and Nesi, 2009 Alsop, S. and Nesi, H. (2009). Issues in the development of the British Academic Written English (BAWE) corpus. Corpora, 4(1):71–83.
* Axelsson, 2000 Axelsson, M. W. (2000). USE - The Uppsala Student English Corpus: An instrument for needs analysis. International Computer Archive of Modern and Medieval English, (24):155–157.
* Bennani-Smires et al., 2018 Bennani-Smires, K., Musat, C., Hossmann, A., Baeriswyl, M., and Jaggi, M. (2018). Simple Unsupervised Keyphrase Extraction using Sentence Embeddings. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 221–229, Brussels, Belgium.
* Bird et al., 2008 Bird, S., Dale, R., Dorr, B., Gibson, B., Joseph, M., Kan, M.-Y., Lee, D., Powley, B., Radev, D., and Tan, Y. F. (2008). The ACL Anthology Reference Corpus: A Reference Dataset for Bibliographic Research in Computational Linguistics. In International conference on Language Resources and Evaluation (LREC 2008), pages 1755–1759, Marrakech, Morocco.
* Browne et al., 2013 Browne, C., Culligan, B., and Phillips, J. (2013). New academic word list 1.0. Accessed December 2019: http://www.newgeneralservicelist.org/nawl-new-academic-word-list.
* Cohn et al., 2008 Cohn, T., Callison-Burch, C., and Lapata, M. (2008). Constructing Corpora for the Development and Evaluation of Paraphrase Systems. Computational Linguistics, 34(4):597–614.
* Cortes, 2004 Cortes, V. (2004). Lexical bundles in published and student disciplinary writing: Examples from history and biology. English for Specific Purposes, 23(4):397–423.
* Coxhead, 2019 Coxhead, A. (2019). An introduction to the academic word list. Accessed December 2019: http://ksngo.org/images/download/LDOCE_AWL.pdf.
* Davies and Gardner, 2013 Davies, M. and Gardner, D. (2013). A New Academic Vocabulary List. Applied Linguistics, 35(3):305–327.
* Davies, 2012 Davies, M. (2012). Corpus of Contemporary American English (1990-2012). Accessed December 2019: http://corpus.byu.edu/coca/.
* Devlin et al., 2019 Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
* Dong et al., 2017 Dong, L., Mallinson, J., Reddy, S., and Lapata, M. (2017). Learning to Paraphrase for Question Answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875–886, Copenhagen, Denmark.
* García Salido et al., 2018 García Salido, M., Garcia, M., Villayandre-Llamazares, M., and Alonso-Ramos, M. (2018). A Lexical Tool for Academic Writing in Spanish based on Expert and Novice Corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan.
* Gardner and Davies, 2013 Gardner, D. and Davies, M. (2013). A New Academic Vocabulary List. Applied Linguistics, 35(3):305–327.
* Guo et al., 2018 Guo, H., Pasunuru, R., and Bansal, M. (2018). Dynamic Multi-Level Multi-Task Learning for Sentence Simplification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 462–476, Santa Fe, NM, USA.
* Kasewa et al., 2018 Kasewa, S., Stenetorp, P., and Riedel, S. (2018). Wronging a right: Generating better errors to improve grammatical error detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4977–4983, Brussels, Belgium.
* Kremer et al., 2014 Kremer, G., Erk, K., Padó, S., and Thater, S. (2014). What Substitutes Tell Us - Analysis of an “All-Words” Lexical Substitution Corpus. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 540–549, Gothenburg, Sweden.
* Lee et al., 2018 Lee, J., Saberi, D., Lam, M., and Webster, J. (2018). Assisted nominalization for academic English writing. In Proceedings of the Workshop on Intelligent Interactive Systems and Language Generation (2IS&NLG), pages 26–30, Tilburg, the Netherlands.
* McCarthy and Navigli, 2007 McCarthy, D. and Navigli, R. (2007). Semeval-2007 task 10: English lexical substitution task. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 48–53, Prague, Czech Republic.
* Michael and O’Dell, 2008 Michael, M. and O’Dell, F. (2008). Academic Vocabulary in Use: 50 Units of Academic Vocabulary Reference and Practice; Self-study and Classroom Use. Cambridge University Press.
* Miller, 1995 Miller, G. A. (1995). WordNet: A Lexical Database for English. Commun. ACM, 38(11):39–41.
* Morley, 2014 Morley, J. (2014). Academic phrasebank. Technical Report 2014b edition, The University of Manchester. Accessed December 2019: http://www.kfs.edu.eg/com/pdf/2082015294739.pdf.
* Oshima and Hogue, 2007 Oshima, A. and Hogue, A. (2007). Introduction to Academic Writing. Pearson Education, Third Edition (The Longman Academic Writing Series, Level 3) (3e) edition.
* Paetzold and Specia, 2016 Paetzold, G. and Specia, L. (2016). SemEval 2016 task 11: Complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 560–569, San Diego, CA, USA.
* Paetzold and Specia, 2017 Paetzold, G. H. and Specia, L.
(2017). A survey on lexical simplification. Journal of Artificial Intelligence Research, 60(1):549–593. * Pasumarthi et al., 2019 Pasumarthi, R. K., Bruch, S., Wang, X., Li, C., Bendersky, M., Najork, M., Pfeifer, J., Golbandi, N., Anil, R., and Wolf, S. (2019). TF-Ranking: Scalable TensorFlow Library for Learning-to-Rank. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 2970–2978, Anchorage, AK, USA. * Pavlick and Callison-Burch, 2016 Pavlick, E. and Callison-Burch, C. (2016). Simple PPDB: A Paraphrase Database for Simplification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 143–148, Berlin, Germany. * Pavlick et al., 2015 Pavlick, E., Rastogi, P., Ganitkevitch, J., Van Durme, B., and Callison-Burch, C. (2015). PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 425–430, Beijing, China. * Pennington et al., 2014 Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. * Riedl et al., 2014 Riedl, M., Glass, M., and Gliozzo, A. (2014). Lexical substitution for the medical domain. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 610–614, Doha, Qatar. * Ruppert et al., 2015 Ruppert, E., Kaufmann, M., Riedl, M., and Biemann, C. (2015). JOBIMVIZ: A Web-based Visualization for Graph-based Distributional Semantic Models. In The Annual Meeting of the Association for Computational Linguistics (ACL) System Demonstrations, pages 103–108, Beijing, China. * Sekizawa et al., 2017 Sekizawa, Y., Kajiwara, T., and Komachi, M. (2017). Improving japanese-to-english neural machine translation by paraphrasing the target language. In Proceedings of the 4th Workshop on Asian Translation (WAT2017), pages 64–69, Taipei, Taiwan. * Štajner and Saggion, 2018 Štajner, S. and Saggion, H. (2018). Data-driven text simplification. In Proceedings of COLING 2018, the 28th International Conference on Computational Linguistics: Tutorial Abstracts, pages 19–23, Santa Fe, NM, USA. * Szarvas et al., 2013 Szarvas, G., Biemann, C., and Gurevych, I. (2013). Supervised All-Words Lexical Substitution using Delexicalized Features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1131–1141, Atlanta, Georgia. * Toutanova et al., 2003 Toutanova, K., Klein, D., Manning, C. D., and Singer, Y. (2003). Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 173–180, Edmonton, Canada. * Wan and Xiao, 2008 Wan, X. and Xiao, J. (2008). Single Document Keyphrase Extraction Using Neighborhood Knowledge. In Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2, AAAI’08, pages 855–860, Chicago, IL, USA. * Yimam and Biemann, 2018 Yimam, S. M. and Biemann, C. (2018). Par4sim – adaptive paraphrasing for text simplification. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 331–342, Santa Fe, NM, USA. * Yimam et al., 2016 Yimam, S. M., Martínez Alonso, H., Riedl, M., and Biemann, C. (2016). Learning Paraphrasing for Multiword Expressions. In Proceedings of the 12th Workshop on Multiword Expressions, pages 1–10, Berlin, Germany. * Yimam et al., 2017a Yimam, S. M., Štajner, S., Riedl, M., and Biemann, C. (2017a). CWIG3G2 - Complex Word Identification Task across Three Text Genres and Two User Groups. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 401–407, Taipei, Taiwan. * Yimam et al., 2017b Yimam, S. M., Štajner, S., Riedl, M., and Biemann, C. (2017b). Multilingual and Cross-Lingual Complex Word Identification. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 813–822, Varna, Bulgaria. * Yimam et al., 2018 Yimam, S. M., Biemann, C., Malmasi, S., Paetzold, G., Specia, L., Štajner, S., Tack, A., and Zampieri, M. (2018). A Report on the Complex Word Identification Shared Task 2018. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 66–78, New Orleans, LA, USA. * Zhang et al., 2015 Zhang, X., Zhao, J., and LeCun, Y. (2015). Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 649–657, Cambridge, MA, USA.
2024-09-04T02:54:57.704651
2020-03-06T00:39:37
2003.02979
{ "authors": "Hengyuan Hu, Adam Lerer, Alex Peysakhovich, Jakob Foerster", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26070", "submitter": "Hengyuan Hu", "url": "https://arxiv.org/abs/2003.02979" }
arxiv-papers
# “Other-Play” for Zero-Shot Coordination

Hengyuan Hu Adam Lerer Alex Peysakhovich Jakob Foerster

###### Abstract

We consider the problem of zero-shot coordination - constructing AI agents that can coordinate with novel partners they have not seen before (e.g. humans). Standard Multi-Agent Reinforcement Learning (MARL) methods typically focus on the self-play (SP) setting, where agents construct strategies by playing the game with themselves repeatedly. Unfortunately, applying SP naively to the zero-shot coordination problem can produce agents that establish highly specialized conventions that do not carry over to novel partners they have not been trained with. We introduce a novel learning algorithm called _other-play_ (OP) that enhances self-play by looking for more robust strategies, exploiting the presence of known symmetries in the underlying problem. We characterize OP theoretically as well as experimentally. We study the cooperative card game Hanabi and show that OP agents achieve higher scores when paired with independently trained agents. In preliminary results we also show that our OP agents obtain higher average scores when paired with human players, compared to state-of-the-art SP agents.

## 1 Introduction

A central challenge for AI is constructing agents that can coordinate and cooperate with partners they have not seen before (Kleiman-Weiner et al., 2016; Lerer & Peysakhovich, 2017; Carroll et al., 2019; Shum et al., 2019). This is particularly important in applications such as cooperative game playing, communication, or autonomous driving (Foerster et al., 2016; Lazaridou et al., 2016; Sukhbaatar et al., 2016; Resnick et al., 2018). In this paper we consider the question of zero-shot coordination, where agents are placed into a cooperative situation with a novel partner and must quickly coordinate if they wish to earn high payoffs. Our setting is a partially observed cooperative Markov game (MG) which is commonly known among both agents. The agents are able to construct strategies separately in the training phase but cannot coordinate on the strategies that they construct. They must then play these strategies when paired together a single time. We refer to this as zero-shot coordination.

A popular way of constructing strategies for MGs with unknown opponents is self-play (or "self-training") (Tesauro, 1994). Here the agent controls both players during training and iteratively improves both players' strategies. The agent then uses this strategy at test time. If it converges, self-play finds a Nash equilibrium of the game and yields superhuman AI agents in two-player zero-sum games such as Chess, Go, and Poker (Campbell et al., 2002; Silver et al., 2017; Brown & Sandholm, 2018). However, in complex environments self-play agents typically construct 'inhuman' strategies (Carroll et al., 2019). This may be a benefit for zero-sum games, but it is less useful when it is important to coordinate with, not trick, one's partner.

Our main contribution is "other-play" (OP), an algorithm for constructing good strategies for the zero-shot coordination setting. We assume that with every MG we are provided a set of symmetries, i.e. arbitrary relabelings of the state/action space that leave trajectories unchanged up to relabeling. One source of miscoordination in zero-shot settings is that agents have no good way to break the symmetries (e.g. should we drive on the left or the right?).
In most MDPs, there are classes of strategies that require more or less coordinated symmetry breaking. OP's goal is to find a strategy that is maximally robust to partners breaking symmetries in different ways while still playing in the same class. OP works as follows: it uses RL to maximize reward when matched with agents playing the same policy under a random relabeling of states and actions drawn from the known symmetries.

Figure 1: The _lever coordination game_ illustrates the counterintuitive outcome of zero-shot coordination.

To show the intuition behind OP, consider the following game: you need to coordinate with a stranger by independently choosing one from a set of 10 different levers (Figure 1a). If both of you pick the same lever, a reward of 1 point is paid out; otherwise you leave the game empty-handed. Clearly, without any prior coordination the only option is to pick one of the levers at random, leading to an expected reward of $1/10=0.1$. Next we consider a game that instead only pays $0.9$ for one of the levers, keeping all other levers unchanged (Figure 1b). How does this change the coordination problem? From the point of view of the MG, the $1$-payoff levers have no labels and so are symmetric. Since agents cannot coordinate on how to break symmetries, picking one of the nine $1.0$ levers uniformly at random leads to an expected return of $1/9\approx 0.11$. By contrast, OP suggests the choice of the $0.9$ lever. We note that this example illustrates another facet of OP: it is an equilibrium in meta-strategies. That is, neither agent wishes to deviate from OP as a reasoning strategy if the other agent is using it. Note that OP does not use any action labels. Instead OP uses only features of the problem description to coordinate. Furthermore, note that the OP policy in this setting is the only policy that would _never_ be chosen by the types of algorithms that try to use self-play to optimize team performance, e.g. VDN (Sunehag et al., 2018) or SAD (Hu & Foerster, 2019).

The main contributions of this work are: 1) we introduce OP as a way of solving the zero-shot coordination problem, 2) we show that OP is the highest-payoff meta-equilibrium for the zero-shot coordination problem, 3) we show how to implement OP using deep reinforcement learning (deep RL) based methods, and 4) we evaluate OP in the cooperative card game Hanabi (Bard et al., 2020).

## 2 Related Work

### 2.1 Self-Play in Cooperative Settings

There is a large body of research on constructing agents that do well in positive-sum games. Self-play, if it converges, converges to an equilibrium of the game, and so in purely cooperative games SP agents will be able to coordinate. Here the main problem is that SP may reach inefficient equilibria, and so there is a large literature on pushing self-play toward higher-payoff equilibria using various algorithmic innovations (Babes et al., 2008; Devlin et al., 2011; Devlin & Kudenko, 2016; Peysakhovich & Lerer, 2018). However, the setting where agents play with the same agents they have been trained with (a.k.a. centralized training with decentralized execution) is quite different from the zero-shot coordination setting we study.

### 2.2 Cooperation and Coordination

A closely related problem to zero-shot coordination is ad-hoc teamwork (Stone et al., 2010; Barrett et al., 2011). For example: a robot agent joining an existing group of agents to play soccer (Barrett et al., 2011).
Ad-hoc teamwork differs from the zero-shot coordination problem in that it is typically framed as the problem of learning the policies and capabilities of other agents during interaction, whereas in the pure zero-shot coordination scenario there is no time to update a fixed policy that is constructed during training. These problems are closely linked, and incorporating ideas from this literature into algorithms like OP is an interesting question for future research. However, another difference is that zero-shot agents only need to coordinate well with teams of agents that are optimized for the zero-shot setting, rather than arbitrary teams of self-play agents.

There is recent work looking at the situation where an RL agent, trained separately, must join a group of new AI agents or humans (Lerer & Peysakhovich, 2018; Tucker et al., 2020; Carroll et al., 2019). These papers focus on using small amounts of observed behavior from partnered test-time agents either to guide self-play toward the equilibrium (or "social convention") of the existing agents (Lerer & Peysakhovich, 2018; Tucker et al., 2020) or to build a human model which can be used to learn an approximate best response using RL (Carroll et al., 2019). This setting is related, but zero-shot coordination gives no behavioral data to either agent to guide self-play or to allow building a model of the other agent. Instead, the zero-shot setting makes the assumption that test-time agents are themselves optimized for the zero-shot setting (rather than the SP setting).

### 2.3 Game Theory and Tacit Coordination

Within behavioral game theory, a large body of work considers coordination based on "focal points" or other shared grounding, such as the famous "you lost your friend in New York City, where are you going to meet?" coordination problem (Schelling, 1980; Mehta et al., 1994). However, such focal points typically arise because these coordination problems are not just abstract but are grounded in _exogenous_ features (action labels) that are meaningful due to a prior shared context. The zero-shot coordination setting is thus a special form of the tacit coordination problem in which there are no shared exogenous features between the different agents, and OP can be thought of as a way to coordinate in this setting. There is also a large theoretical literature on learning and evolving coordination (Nowak, 2006). However, as with the self-play literature, it focuses on long-run outcomes within a single group of agents learning or evolving together and does not typically focus on the question of engineering agents as we do.

### 2.4 Predicting Human Decision Making

Clearly, if we were able to accurately predict how our human counterparts are going to act in any given situation, zero-shot coordination with humans would reduce to learning a best response to those predicted actions. There is a large body of work using formal models to predict and understand human decision making (Camerer, 2011) and recent work that incorporates machine learning into this question (Wright & Leyton-Brown, 2010; Hartford et al., 2016; Peysakhovich & Naecker, 2017; Kleinberg et al., 2017; Fudenberg & Liang, 2019).
However, the majority of this research focuses on extremely simple settings such as small normal-form games (Wright & Leyton-Brown, 2010; Hartford et al., 2016; Fudenberg & Liang, 2019) or single decision problems (Peysakhovich & Naecker, 2017; Kleinberg et al., 2017) rather than complex cooperative settings with partial observability.

### 2.5 Domain Randomization

Our work is also related to the idea of domain randomization (Tobin et al., 2017). In RL and supervised learning, domain randomization tries to make the realized model invariant to some feature of the environment. For example, an object detector should be invariant to the exact camera angle from which a view of an object is captured. OP applies a similar idea: a policy should be invariant to how an agent's partner breaks symmetries in the underlying game.

### 2.6 Exploiting Symmetries in Other Contexts

In the single-agent context, planning in an MDP becomes harder as the number of states grows. The idea of abstraction is to use underlying symmetries to 'compress' a large MDP into a simpler one, solve for the optimal strategy in the abstraction, and then lift the strategy to the original MDP. One set of such methods is MDP homomorphisms (van der Pol et al., 2020; Ravindran & Barto, 2004). These, like OP, use underlying symmetries, but their goal is different: they want to find payoff-maximizing policies for single-agent decision problems, while OP seeks to find robust policies for zero-shot coordination. Note that, as the lever game illustrates, robust policies are not necessarily the payoff-maximizing ones. In addition, these methods do not solve the problem of equilibrium selection among 'symmetric' policies in games, because the symmetry in the MDP just becomes a symmetry in the homomorphism. A similar technique (compress, solve, then lift) is also used for finding Nash equilibria in large games like poker (Gilpin & Sandholm, 2007). In this case the abstraction treats 'isomorphic' states equally and thus reduces the effective number of states in the game. Again, the goal is different - poker abstractions are trying to find Nash equilibrium strategies in the original game, while OP uses symmetries to select among a set of possible equilibria.

## 3 Zero-Shot Coordination

In this paper we study fully cooperative Markov games. To construct this environment we start out with a Dec-POMDP (Nair et al., 2003) with states $s_{t}\in\mathcal{S}$. There are $i=1,\cdots,N$ agents who each choose an action $a^{i}_{t}\in\mathcal{A}$ at each time step. The game is partially observable, with $o^{i}_{t}\sim O(o|i,s_{t})$ being each agent's stochastic observation function. At time $t$ each agent has an action-observation history $\tau^{i}_{t}=\{o^{i}_{0},a^{i}_{0},r_{0},\cdots,o^{i}_{t}\}$ and selects action $a^{i}_{t}$ using a stochastic policy of the form $\pi^{i}_{\theta}(a^{i}|\tau^{i}_{t})$. The transition function, $P(s^{\prime}|s,\mathbf{a})$, conditions on the joint action, $\mathbf{a}$. The game is fully cooperative: agents share the reward $r_{t}$, which is conditioned on the joint action and the state. Thus, the goal is to maximize the expected return $J=\mathbb{E}_{\tau}R(\tau)$, where $R(\tau)=\sum_{t}\gamma^{t}r_{t}$ is calculated using the discount factor $\gamma$. Most work on cooperative MARL focuses on a setting where agents are trained together, although they must execute their policies independently at least at test time, _e.g._ (Lowe et al., 2017; Foerster et al., 2018a, b). The goal is to construct learning rules, i.e.
functions that map Markov games to (joint) policies, selecting a policy for each agent such that together they maximize expected discounted return. Because agents are trained together, these policies may be arbitrarily complex. We are instead interested in achieving high returns with partners that were not trained together with our agent. We frame the problem as follows: suppose that multiple independent AI designers will construct agents that have to interact in various, ex-ante unknown Dec-POMDPs without being able to coordinate beforehand. What learning rule should these designers agree on? To make this even more concrete, consider the case of independent autonomous vehicles made by multiple firms, which have to interact in novel traffic situations on a daily basis.

Figure 2: A gridworld with a robot, illustrating the symmetries of a Dec-POMDP.

The first key concept we introduce is the class of equivalence mappings, $\Phi$, for a given Dec-POMDP. Each element of $\Phi$ is a bijection of each of $\mathcal{S}$, $O$, and $\mathcal{A}$ onto itself, such that it leaves the Dec-POMDP unchanged:

$\displaystyle\phi\in\Phi\iff P(\phi(s^{\prime})|\phi(s),\phi(a))=P(s^{\prime}|s,a)\;\land\;R(\phi(s^{\prime}),\phi(a),\phi(s))=R(s^{\prime},a,s)\;\land\;O(\phi(o)|\phi(s),\phi(a),i)=O(o|s,a,i),$

where the equalities apply for all $s^{\prime},s,a,o,i$. In other words, $\Phi$ describes the symmetries in the underlying Dec-POMDP. We note that our notation is heavily overloaded since each $\phi$ can act on actions, states, and the observation function, so $\phi$ is shorthand for $\phi=\{\phi_{\mathcal{S}},\phi_{\mathcal{A}},\phi_{O}\}$. Next, we extend $\phi$ to also act on trajectories:

$\displaystyle\phi(\tau^{i}_{t})=\{\phi(o^{i}_{0}),\phi(a^{i}_{0}),\phi(r_{0}),\cdots,\phi(o^{i}_{t})\}.$

At this point an example might be helpful: consider a gridworld with a robot, shown in Figure 2, that can move in the 4 cardinal directions. In our example the goal is in the middle of the room, which leaves two axes of symmetry: we can invert the x-axis, the y-axis, or both, as long as we make the corresponding changes to the action space, for example mapping "up" to "down" and vice versa when inverting the y-axis. In a similar way, we can extend $\phi$ to act on policies $\pi$ as follows:

$\displaystyle\pi^{\prime}=\phi(\pi)\iff\pi^{\prime}(\phi(a)|\phi(\tau))=\pi(a|\tau),\;\forall\tau,a.$

These symmetries are the "payoff irrelevant" parts of the Dec-POMDP. They arise because the actions and states in the Dec-POMDP do not come with labels, so taking a policy and permuting it with respect to these symmetries does not change the outcomes of interest: the trajectory and the reward. It is precisely these symmetries that can cause problems for self-play trained agents. Since agents are trained together, they can coordinate on how to break symmetries. However, there is no guarantee that multiple SP agents trained separately will break symmetries in the same way. In that case, when they are paired together their policies may fail spectacularly. The goal of OP, then, will be to build policies which are maximally robust to this failure mode.

## 4 Other Play

We consider the $2$-agent case for ease of notation, with $\pi^{1},\pi^{2}$ denoting each agent's component of the policy and $\mathbf{\pi}$ denoting the joint policy. First, consider the self-play (SP) learning rule.
This is the learning rule that tries to optimize the following objective:

$\displaystyle\mathbf{\pi}^{*}=\arg\max_{\mathbf{\pi}}J(\pi^{1},\pi^{2})$ (1)

When the Dec-POMDP is tabular we can solve this via various methods. When it is not, deep reinforcement learning (deep RL) can be used to apply function approximation and gradient-based optimization of this objective function. Though there is a large literature focusing on various issues in multi-agent optimization (Busoniu et al., 2006; Hernandez-Leal et al., 2019), our paper is agnostic to the precise method used. These policies can be arbitrary, and in complicated Dec-POMDPs multiple maxima of Equation 1 will often exist. These multiple policies can (and, as we will see in our experiments, often will) use coordinated symmetry breaking to receive high payoffs. Therefore, two matched, separately trained SP agents will not necessarily receive the same payoff with each other as they receive with themselves. To alleviate this issue, we need to make the optimization problem more robust to symmetry breaking.

Consider the point of view of constructing a strategy for agent $1$, where agent $2$ will be the unknown novel partner. The _other-play_ (OP) objective function for agent $1$ maximizes expected return when randomly matched with a symmetry-equivalent policy of agent $2$, rather than with a particular one. In other words, we perform a version of self-play where agents are not assumed to be able to coordinate on exactly how to break symmetries:

$\displaystyle\mathbf{\pi}^{*}=\arg\max_{\mathbf{\pi}}\mathbb{E}_{\phi\sim\Phi}\,J(\pi^{1},\phi(\pi^{2}))$ (2)

Here the expectation is taken with respect to a uniform distribution on $\Phi$. We call this expected return $J_{OP}$. We will now consider which policies maximize $J_{OP}$.

###### Lemma 1.

$\displaystyle J(\pi_{A}^{1},\pi_{B}^{2})=J(\phi(\pi_{A}^{1}),\phi(\pi_{B}^{2})),\;\forall\phi\in\Phi,\;\pi_{A},\pi_{B}$

This Lemma follows directly from the fact that the MDP is invariant to any $\phi\in\Phi$.

###### Lemma 2.

$\displaystyle\{\phi\cdot\phi^{\prime}:\phi^{\prime}\in\Phi\}=\Phi,\;\forall\phi\in\Phi$

This Lemma follows from the fact that each $\phi$ is a bijection, so composition with a fixed $\phi$ merely permutes $\Phi$.

###### Proposition 1.

The expected OP return of $\pi$ is equal to the expected return of each player independently playing a policy $\pi_{\Phi}^{i}$, which is the uniform mixture of $\phi(\pi^{i})$ over all $\phi\in\Phi$.

###### Proof.

$\displaystyle J_{OP}(\pi)=\mathbb{E}_{\phi\sim\Phi}\,J(\pi^{1},\phi(\pi^{2}))$ (3)
$\displaystyle=\mathbb{E}_{\phi_{1}\sim\Phi,\phi_{2}\sim\Phi}\,J(\phi_{1}(\pi^{1}),\phi_{1}(\phi_{2}(\pi^{2})))$ (4)
$\displaystyle=\mathbb{E}_{\phi_{1}\sim\Phi,\phi_{2}\sim\Phi}\,J(\phi_{1}(\pi^{1}),\phi_{2}(\pi^{2}))$ (5)
$\displaystyle=J(\pi_{\Phi})$ (6)

(4) follows from Lemma 1, and (5) follows from Lemma 2. ∎

###### Corollary 1.

The distribution $\pi^{*}_{OP}$ produced by OP will be the uniform mixture $\pi_{\Phi}$ with the highest return $J(\pi_{\Phi})$.

Let $\mathcal{L}_{i}$ be the set of learning rules which take a Dec-POMDP as input and output a policy for agent $i$. A meta-equilibrium is a learning rule for each agent such that neither agent can improve their expected payoff by unilaterally deviating to a different learning rule.

###### Proposition 2.

If agent $i$ uses OP as their learning rule, then OP is a payoff-maximizing learning rule for the agent's partner.
Furthermore, both agents using OP is the best possible meta-equilibrium.

###### Proof.

Since the Dec-POMDP has no labels for actions and states, $\mathcal{L}_{i}$ must choose all $\phi(\pi)$ with equal probability. Among these possible outputs, $\pi^{*}_{OP}$ maximizes the return by Corollary 1. ∎

## 5 Implementing Other Play via Deep RL

We now turn to optimizing the OP objective function. In many applications of interest the Dec-POMDP is not tabular. Thus, deep RL algorithms use function approximation for the state space and attempt to find local maxima of Equation 1 using self-play reinforcement learning. We show how to adapt this method to optimize the other-play objective (Equation 2). This amounts to applying a very specific kind of asymmetric domain randomization (Tobin et al., 2017) during training. During each episode of MARL training, for each agent $i$ a random permutation $\phi_{i}\in\Phi$ is chosen uniformly i.i.d. from $\Phi$, and agent $i$ observes and acts on $\phi_{i}(\mathcal{S},O,\mathcal{A})$. Importantly, the agents act in different permutations of the same environment. This environment randomization is equivalent to other-play, because the MDP remains constant under $\phi_{i}$ while the effect of agent $i$'s policy on the environment is $\phi_{i}(\pi_{i})$. The fixed points of independent optimization of $\pi$ under this learning rule will be joint policies where each $\pi_{i}$ is a best response (BR) to the uniform mixture of permutations of partner policies, i.e. precisely the permutation-invariant equilibria that are the solutions of other-play. We note that OP is fundamentally compatible with any type of optimization strategy and can be applied whenever there are symmetries in the underlying MDP.

## 6 Experiments

We evaluate OP in two different settings. In each setting we compare standard SP against OP, contrasting agents trained together with agents trained separately that are placed into a zero-shot coordination test game.

### 6.1 Lever Game

We begin with the "lever game" mentioned in the introduction. This environment is tabular; there are only $10$ possible actions per player. During training, we use simple joint-action learning: we compute the true gradient with respect to the current policy and update. We show training-time (i.e. expected reward with itself) and test-time (zero-shot) coordination performance for both SP (optimizing Equation 1) and OP (optimizing Equation 2). The code is available as a notebook that can be executed online without downloading: https://bit.ly/2vYkfI7. Figure 3 shows the results. As expected, OP agents coordinate on the unique option of $0.9$ points both during the training phase and at test time. As a consequence, OP agents can carry out successful zero-shot coordination when paired with other OP agents. In contrast, SP agents achieve a higher reward of $1.0$ points during the training phase but entirely fail to coordinate with other, independently trained, SP agents.

Figure 3: Train and test performance of self-play and other-play algorithms on the lever coordination game. Shown is the mean; shading is the standard error of the mean (s.e.m.), across 30 different seeds.

### 6.2 Hanabi with AI Agents

We now turn to a much more complex environment. We construct agents for the cooperative card game Hanabi, which has recently been established as a benchmark environment for multi-agent decision making in partially observable settings (Bard et al., 2020).
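Before describing the Hanabi setup, the following minimal sketch (our own illustration, not the linked notebook) makes the lever-game comparison of Section 6.1 concrete by evaluating the SP and OP objectives exactly for deterministic policies:

```python
# A minimal sketch (ours, not the authors' linked notebook) evaluating the
# SP and OP objectives exactly for deterministic lever-game policies.
import numpy as np

payoff = np.array([0.9] + [1.0] * 9)   # lever 0 pays 0.9; levers 1-9 pay 1.0
symmetric = range(1, 10)               # the unlabeled, mutually symmetric levers

def op_value(a):
    """Exact E_phi J(a, phi(a)), with phi uniform over relabelings of levers 1-9."""
    if a == 0:
        return payoff[0]               # lever 0 is distinguishable: phi(0) = 0
    # phi maps a symmetric lever to each of the 9 positions with equal
    # probability, so the two players' choices coincide only 1/9 of the time.
    return payoff[a] / 9

sp_values = payoff                                      # SP: J(a, a) = payoff[a]
op_values = np.array([op_value(a) for a in range(10)])

print("SP argmax:", int(sp_values.argmax()), "value:", sp_values.max())  # a 1.0 lever
print("OP argmax:", int(op_values.argmax()), "value:", op_values.max())  # lever 0, 0.9

# Zero-shot cross-play of two independent SP runs, each of which broke the
# symmetry by picking an arbitrary 1.0 lever:
sp_cross = np.mean([payoff[i] * (i == j) for i in symmetric for j in symmetric])
print("expected SP cross-play:", round(float(sp_cross), 3))              # 1/9 ~ 0.111
```

The sketch recovers the numbers discussed in the introduction: SP prefers any $1.0$ lever but earns only $1/9\approx 0.11$ in zero-shot cross-play, while OP selects the $0.9$ lever.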
Hanabi is a cooperative card game with the interesting twist that players cannot see their own cards and hence have to rely on receiving information from the other player (who can see their hand). In Hanabi, there are two main ways of exchanging information. First, players can take costly “hint” actions that point out subsets of cards based on rank or color; for example, hinting “blue” reveals the color of all blue cards. Second, observing the actions themselves can be informative, in particular when players have pre-established conventions. The goal in Hanabi is to play cards in a legal order, completing stacks of cards, one for each color. There are 5 colors and 5 ranks, so the maximum score is 25 points. Players lose a life token if they play a card out of order. Once they exhaust the deck or lose all 3 lives ("bomb out"), the game terminates. As noted, the vast majority of research on Hanabi has been in the self-play setting, in which a group of agents is trained to jointly obtain the highest possible score. To apply OP in Hanabi we note that, assuming no side information, a permutation of the colors of the cards leaves the game unchanged. We use this as our class of symmetries.

### 6.3 MARL Training Details

OP can be applied on top of any SP algorithm. In Hanabi, the Simplified Action Decoder (SAD) (Hu & Foerster, 2019) method achieves state-of-the-art performance for RL agents. We use SAD as a base algorithm onto which we add OP. We use the open-sourced implementation of SAD as well as most of its hyper-parameters, but with two major modifications. First, we use 2 GPUs for simulation instead of 1 as in the original paper. This doubles the data generation speed and has a profound effect on reducing the wall-clock time required to achieve competitive performance. Second, we introduce extra hyper-parameters that control the network architecture, adding diversity to the model capacity in order to better demonstrate the effectiveness of OP. Specifically, the network can have either 1 or 2 fully connected layers before 2 LSTM layers and can have an optional residual connection to bypass the LSTM layers. For SP, we re-train the base SAD and the SAD + AUX variant proposed in Hu & Foerster (2019). SAD + AUX is specifically engineered for Hanabi by adding an auxiliary task to predict whether a card is playable, discardable, or unknown. We train agents with the aforementioned 4 different network architectures. We run each hyper-parameter configuration with 3 different seeds, and thus 12 models are produced for each category of {SAD, SAD + AUX, SAD + OP, SAD + AUX + OP}.

### 6.4 Evaluation

We evaluate the models within the same category by pairing different models together to play the game, a process we refer to as _cross-play_. Clearly, if independent training runs ("seeds") of the same training method fail to coordinate with each other at test time, it is unlikely they will coordinate with agents optimized through a different process, let alone humans. As such, cross-play is a cheap proxy for evaluating whether a training method has potential for zero-shot coordination with human players. Figure 4 shows the scores obtained between all pairs of agents. Table 1 shows the average within-pair and cross-play scores. We see that SAD coordinates with itself but fails to coordinate with any other SAD agent. SAD with OP, however, significantly improves cross-play. The effect is especially profound when the model has limited representation power.
The top left corner of the grid, which corresponds to the simplest models that have only 1 fully connected layer, 2 LSTM layers, and no residual connection, shows almost perfect cooperation scores. As the network grows more complicated, different strategies start to emerge and the cross-play performance drops. The auxiliary task implicitly improves cross-play scores by encouraging all agents to act based on grounded information and confident predictions. Nonetheless, adding OP to SAD + AUX further improves performance and achieves the highest cross-play payoffs.

Figure 4: Cross-Play Matrix. Visualization of paired evaluation of different agents trained under the same method. The y-axis represents the agent index of player 1 (first mover) and the x-axis represents that of player 2. Agents 0-2: 1-layer FC, without residual connection; Agents 3-5: 1-layer FC, with residual connection; Agents 6-8: 2-layer FC, without residual connection; Agents 9-11: 2-layer FC, with residual connection. All agents have 2-layer LSTMs after the FC layers. Each block in the grid is obtained by evaluating the pair on 10K games with different seeds. Please refer to Table 1 for numeric results.

Method | Cross-Play | Cross-Play(*) | Self-Play
---|---|---|---
SAD | 2.52 $\pm$ 0.34 | 3.02 $\pm$ 0.39 | 23.97 $\pm$ 0.04
SAD + OP | 15.32 $\pm$ 0.65 | 18.28 $\pm$ 0.36 | 23.93 $\pm$ 0.02
SAD + AUX | 17.65 $\pm$ 0.69 | 21.09 $\pm$ 0.18 | 24.09 $\pm$ 0.03
SAD + AUX + OP | 22.07 $\pm$ 0.11 | 22.49 $\pm$ 0.18 | 24.06 $\pm$ 0.02

Table 1: Cross-Play Performance. The average performance of pairs of agents that are trained with the same method but different network architectures and/or seeds. Please refer to Figure 4 for a visualization of the performance of each individual pair. The Cross-Play score is the non-diagonal mean of each grid. Cross-Play(*) is the cross-play score after removing the worst model from the grid. The Self-Play score is the score attained when agents play with the partner they were trained with.

We can further study the policies resulting from these learning algorithms. Figure 5 picks the agent with the highest cross-play performance in each category (top row) as well as their worst possible partner (bottom row) and presents $P(a^{i}_{t}\mid a^{j}_{t-1})$ over a subset of actions, averaged over time-steps in 1000 episodes generated through self-play. In other words, we ask: do the agents respond very differently to possible actions of their partner? A large difference indicates that what an agent would do in a situation is very different from what their partner would do: a recipe for miscoordination! We see that two paired SAD agents have very different policies and thus miscoordinate a lot. They also learn "inhuman" conventions that are hard for humans to understand. For example, the agent hints Color5 to indicate discarding the 1st card while its partner interprets that as playing the 2nd card. OP eliminates these types of conventions. From the plot and our experience of playing with the SAD + OP agent, we find that it tends to use color hints either to indicate that the partner should save the card or to disambiguate with a subsequent rank hint. This is not a typical strategy played by seasoned human players, but it is easy to understand and thus makes the agent easier to cooperate with. However, due to the way we implement OP in Hanabi, it is still possible to form secretive conventions such as using all color hints to indicate a specific move. For example, the worst partner of SAD + OP uses all color hints to indicate playing the 5th card.
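A conditional response matrix of this kind is straightforward to estimate from logged episodes. Here is a minimal sketch (our own, with illustrative names; it assumes each episode is stored as the alternating sequence of joint action indices, so the previous entry is the partner's move):

```python
# A minimal sketch (ours) of estimating P(a_t | partner's a_{t-1}) from logged
# self-play episodes, in the spirit of Figure 5.
import numpy as np

def response_matrix(episodes, num_actions):
    """episodes: list of action-index sequences [a_0, a_1, ...] with players
    alternating, so a_{t-1} is the partner's action preceding a_t."""
    counts = np.zeros((num_actions, num_actions))
    for actions in episodes:
        for prev, nxt in zip(actions[:-1], actions[1:]):
            counts[prev, nxt] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize each row to a conditional distribution; empty rows stay zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)
```

Two agents whose rows differ sharply for the same hint action respond differently to that hint, which is exactly the miscoordination signature discussed above.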
Figure 5: $P(a^{i}_{t}\mid a^{j}_{t-1})$ Matrices. Each subplot can be used as a proxy to roughly decipher the conventions of an agent. The y-axis shows the action taken at time-step $t$ and the x-axis shows the percentage of each action taken in response at time-step $t+1$. Here we only show the quarter of the matrix that corresponds to the interaction between color/rank hints and play/discard positions. C1-C5 and R1-R5 mean hinting the 5 different colors and ranks, respectively, while D1-D5 and P1-P5 mean discarding and playing the 1st-5th cards of the hand. For each plot, we take an agent and run 1000 episodes of self-play to compute statistics. The agents that achieved the highest cross-play scores in Figure 4 are used to generate the top row, and their worst partners are chosen to render the bottom row.

### 6.5 Hanabi with Humans

So far we have focused on AI agents that play other AI agents and have shown that OP is a meta-equilibrium with respect to learning rules in the zero-shot setting. We now ask: do human strategies in Hanabi also have an OP-like quality? In other words, do OP agents perform well with humans? To begin to answer this question we recruited 20 individuals from a board game club. These individuals were familiar with Hanabi but were not expert players. We asked each individual to play a game of Hanabi with two bots, in random order, using the user interface open-sourced by Lerer et al. (2019). We note that we did not provide the participants with any information about the bots, either regarding their strategy or the method through which they were trained. For testing we selected our best SAD + AUX + OP agent based on cross-play performance (henceforth the OP bot). We also had individuals play with the state-of-the-art self-play agent from Hu & Foerster (2019) (henceforth the SP bot). We downloaded models from their GitHub repo and picked the model based on cross-play scores. For reference, the SP model used here gets 23.99 in self-play and 20.99 in cross-play with the other released models, where the only difference among them is the seed. Since in Hanabi the exact deck being used can make a huge difference (for example, some hands are unwinnable), to reduce the variance of our results we had each deck (seed) played by two different players, one with our OP agent and one with the control. Importantly, to prevent any adaptation advantages, we alternated the order in which the bots came first across different participants.

Humans achieved an average score of 15.75 (s.e.m. 1.23) with the OP bot and "bombed out" in 45% of games. Thus the OP bot, which has high cross-play scores with other OP bots, is also able to play with humans. Note that in our counting convention players keep the current score when they bomb out, which we believe is more appropriate for the zero-shot setting. By comparison, humans paired with the SP bot achieved an average score of 9.15 (s.e.m. 1.18) and an 85% bomb-out rate. To the best of our knowledge, the only other research involving human-AI collaboration in Hanabi is (Eger et al., 2017). There, a hand-coded AI agent designed to play well with humans achieves an average of around 15.0 points when paired with humans. Beyond the average scores and bomb-out rates, we thus also have access to pairwise comparisons for the two bots when playing two different people on the same deck.

Figure 6: Shown are all scores obtained in human testing. Each blue dot is one seed, with humans matched with the SP bot (x-axis) and the OP bot (y-axis).
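The per-seed comparison visualized in Figure 6 is amenable to an exact binomial test, whose result is reported in the next paragraph. A minimal sketch of the computation (our own; dropping the tied seeds as uninformative is our assumption) follows:

```python
# A minimal sketch (ours) reproducing the exact binomial test on the per-seed
# comparisons: 15 OP wins, 3 losses, 2 ties (ties excluded by assumption).
from scipy.stats import binomtest

wins, losses = 15, 3
result = binomtest(wins, n=wins + losses, p=0.5, alternative="greater")
print(round(result.pvalue, 3))   # ~0.004 under the null P(OP higher) <= 0.5
```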
These preliminary numbers confirm that our OP bot significantly outperformed the state-of-the-art self-play bot from Hu & Foerster (2019) when paired with humans. In particular, OP won 15 out of the 20 per-seed comparisons and tied in 2 cases, losing to the control group on 3 seeds (p = 0.004; exact binomial test of the null hypothesis that P(OP higher score) $\leq$ P(control higher score)). Of course, these results do not suggest that OP will work in every zero-shot coordination setting where AI agents need to cooperate with humans. However, they are encouraging and suggest that OP is a fruitful research direction for the important problem of human-AI coordination.

## 7 Other Attempts

While attempting to make progress on the zero-shot coordination problem in Hanabi we tried a variety of approaches. Here we discuss some other approaches that seemed promising but did not yield agents that were able to coordinate with agents they were not trained with. While this does not necessarily mean these approaches are doomed to failure, we report these results as information for other researchers interested in this problem. In particular, we tried multi-agent RL adaptations of cognitive hierarchies (Stahl, 1993) and k-level reasoning (Costa-Gomes et al., 2001), as well as training a population of agents. Our original inspiration was that both cognitive hierarchies and k-level reasoning should reduce the tendency towards arbitrary symmetry breaking; they have also been shown to produce human-like decision making in other settings (Wright & Leyton-Brown, 2010). Similarly, population-based approaches are gaining popularity for regularizing communication protocols in the field of emergent communication; see e.g. (Fitzgerald, 2019; Tieleman et al., 2018; Lowe et al., 2019). We found that none of these approaches produced high cross-play performance in Hanabi, which we now consider a necessary condition for high zero-shot performance with humans. In hindsight, considering that all of these approaches would necessarily fail in the matrix game example, this is not at all surprising. Still, to help future researchers learn from our endeavours, we add all results to the supplementary material and will open-source the corresponding agents.

## 8 Conclusion

We have shown that a simple expansion of self-play, which we call _other-play_, can construct agents that are better able to zero-shot coordinate with partners they have not seen before. We have proven theoretical properties of the OP strategy, shown how to implement it with deep RL, and shown in experiments with the cooperative card game Hanabi that OP can construct robust agents that can play with other AIs as well as with humans. We do not claim that OP is a silver bullet for all zero-shot coordination problems. However, because OP is a modification of the SP algorithm, it can be combined with many of the algorithmic innovations that have been developed to improve SP in various games (Lanctot et al., 2017; Foerster et al., 2018a; Lowe et al., 2017; Foerster et al., 2017). Thus, we believe that this represents an exciting research direction for those interested in moving deep RL beyond two-player, zero-sum environments to ones involving coordination and cooperation. Currently we assume that the symmetries $\Phi$ are given to the algorithm. However, in principle, discovering the symmetries of an MDP is another optimization problem, which opens interesting avenues for future work.
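As a pointer for such future work, the defining property of $\Phi$ from Section 3 is directly checkable in small settings. Here is a minimal sketch (our own; written for a fully observed tabular MDP, whereas the Dec-POMDP definition additionally requires invariance of the observation function) that tests whether a candidate state/action relabeling is a symmetry:

```python
# A minimal sketch (ours) checking whether candidate permutations (phi_s,
# phi_a) form a symmetry of a tabular, fully observed MDP.
import numpy as np

def is_symmetry(P, R, phi_s, phi_a, tol=1e-8):
    """P[s, a, s']: transition probabilities; R[s, a, s']: rewards.
    phi_s, phi_a: integer permutation arrays over states / actions."""
    P_perm = P[np.ix_(phi_s, phi_a, phi_s)]   # P(phi(s') | phi(s), phi(a))
    R_perm = R[np.ix_(phi_s, phi_a, phi_s)]
    return np.allclose(P_perm, P, atol=tol) and np.allclose(R_perm, R, atol=tol)

# Brute-force discovery would enumerate candidate permutations and keep those
# passing this test; that scales poorly, which is part of the open problem.
```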
## Acknowledgements We would like to thank Noam Brown for encouraging discussions and Pratik Ringshia for help with the UI for human testing. We would also like to thank our human participants for offering to help. ## References * Babes et al. (2008) Babes, M., Munoz de Cote, E., and Littman, M. L. Social reward shaping in the prisoner’s dilemma. _International Conference on Autonomous Agents and Multiagent Systems_ , pp. 1389–1392, 2008. * Bard et al. (2020) Bard, N., Foerster, J. N., Chandar, S., Burch, N., Lanctot, M., Song, H. F., Parisotto, E., Dumoulin, V., Moitra, S., Hughes, E., et al. The hanabi challenge: A new frontier for ai research. _Artificial Intelligence_ , 280:103216, 2020. * Barrett et al. (2011) Barrett, S., Stone, P., and Kraus, S. Empirical evaluation of ad hoc teamwork in the pursuit domain. In _AAMAS_ , pp. 567–574, 2011. * Brown & Sandholm (2018) Brown, N. and Sandholm, T. Superhuman ai for heads-up no-limit poker: Libratus beats top professionals. _Science_ , 359(6374):418–424, 2018. * Busoniu et al. (2006) Busoniu, L., Babuska, R., and De Schutter, B. Multi-agent reinforcement learning: A survey. In _2006 9th International Conference on Control, Automation, Robotics and Vision_ , pp. 1–6. IEEE, 2006. * Camerer (2011) Camerer, C. F. _Behavioral game theory: Experiments in strategic interaction_. Princeton University Press, 2011. * Campbell et al. (2002) Campbell, M., Hoane Jr, A. J., and Hsu, F.-h. Deep blue. _Artificial intelligence_ , 134(1-2):57–83, 2002\. * Carroll et al. (2019) Carroll, M., Shah, R., Ho, M. K., Griffiths, T., Seshia, S., Abbeel, P., and Dragan, A. On the utility of learning about humans for human-ai coordination. In _Advances in Neural Information Processing Systems_ , pp. 5175–5186, 2019. * Costa-Gomes et al. (2001) Costa-Gomes, M., Crawford, V. P., and Broseta, B. Cognition and behavior in normal-form games: An experimental study. _Econometrica_ , 69(5):1193–1235, 2001. * Devlin & Kudenko (2016) Devlin, S. and Kudenko, D. Plan-based reward shaping for multi-agent reinforcement learning. _The Knowledge Engineering Review_ , 31(1):44–58, 2016. * Devlin et al. (2011) Devlin, S., Kudenko, D., and Grześ, M. An empirical study of potential-based reward shaping and advice in complex, multi-agent systems. _Advances in Complex Systems_ , 14(02):251–278, 2011. * Eger et al. (2017) Eger, M., Martens, C., and Córdoba, M. A. An intentional ai for hanabi. In _2017 IEEE Conference on Computational Intelligence and Games (CIG)_ , pp. 68–75. IEEE, 2017. * Fitzgerald (2019) Fitzgerald, N. To populate is to regulate. _arXiv preprint arXiv:1911.04362_ , 2019. * Foerster et al. (2016) Foerster, J., Assael, I. A., De Freitas, N., and Whiteson, S. Learning to communicate with deep multi-agent reinforcement learning. In _Advances in neural information processing systems_ , pp. 2137–2145, 2016. * Foerster et al. (2017) Foerster, J., Nardelli, N., Farquhar, G., Afouras, T., Torr, P. H., Kohli, P., and Whiteson, S. Stabilising experience replay for deep multi-agent reinforcement learning. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , pp. 1146–1155. JMLR. org, 2017. * Foerster et al. (2018a) Foerster, J. N., Farquhar, G., Afouras, T., Nardelli, N., and Whiteson, S. Counterfactual multi-agent policy gradients. In _Thirty-second AAAI conference on artificial intelligence_ , 2018a. * Foerster et al. (2018b) Foerster, J. N., Song, F., Hughes, E., Burch, N., Dunning, I., Whiteson, S., Botvinick, M., and Bowling, M. 
Bayesian action decoder for deep multi-agent reinforcement learning. _arXiv preprint arXiv:1811.01458_ , 2018b. * Fudenberg & Liang (2019) Fudenberg, D. and Liang, A. Predicting and understanding initial play. _American Economic Review_ , 109(12):4112–41, 2019. * Gilpin & Sandholm (2007) Gilpin, A. and Sandholm, T. Lossless abstraction of imperfect information games. _Journal of the ACM (JACM)_ , 54(5):25–es, 2007\. * Hartford et al. (2016) Hartford, J. S., Wright, J. R., and Leyton-Brown, K. Deep learning for predicting human strategic behavior. In _Advances in Neural Information Processing Systems_ , pp. 2424–2432, 2016. * Hernandez-Leal et al. (2019) Hernandez-Leal, P., Kartal, B., and Taylor, M. E. A survey and critique of multiagent deep reinforcement learning. _Autonomous Agents and Multi-Agent Systems_ , 33(6):750–797, 2019. * Hu & Foerster (2019) Hu, H. and Foerster, J. N. Simplified action decoder for deep multi-agent reinforcement learning. _arXiv preprint arXiv:1912.02288_ , 2019. * Kleiman-Weiner et al. (2016) Kleiman-Weiner, M., Ho, M. K., Austerweil, J. L., Littman, M. L., and Tenenbaum, J. B. Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction. In _CogSci_ , 2016. * Kleinberg et al. (2017) Kleinberg, J., Liang, A., and Mullainathan, S. The theory is predictive, but is it complete? an application to human perception of randomness. In _Proceedings of the 2017 ACM Conference on Economics and Computation_ , pp. 125–126, 2017. * Lanctot et al. (2017) Lanctot, M., Zambaldi, V., Gruslys, A., Lazaridou, A., Tuyls, K., Pérolat, J., Silver, D., and Graepel, T. A unified game-theoretic approach to multiagent reinforcement learning. In _Advances in Neural Information Processing Systems_ , pp. 4190–4203, 2017. * Lazaridou et al. (2016) Lazaridou, A., Peysakhovich, A., and Baroni, M. Multi-agent cooperation and the emergence of (natural) language. _arXiv preprint arXiv:1612.07182_ , 2016. * Lerer & Peysakhovich (2017) Lerer, A. and Peysakhovich, A. Maintaining cooperation in complex social dilemmas using deep reinforcement learning. _arXiv preprint arXiv:1707.01068_ , 2017. * Lerer & Peysakhovich (2018) Lerer, A. and Peysakhovich, A. Learning social conventions in markov games. _arXiv preprint arXiv:1806.10071_ , 2018. * Lerer et al. (2019) Lerer, A., Hu, H., Foerster, J., and Brown, N. Improving policies via search in cooperative partially observable games. _arXiv preprint arXiv:1912.02318_ , 2019. * Lowe et al. (2017) Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, O. P., and Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments. In _Advances in neural information processing systems_ , pp. 6379–6390, 2017. * Lowe et al. (2019) Lowe, R., Gupta, A., Foerster, J., Kiela, D., and Pineau, J. Learning to learn to communicate, 2019. * Mehta et al. (1994) Mehta, J., Starmer, C., and Sugden, R. The nature of salience: An experimental investigation of pure coordination games. _The American Economic Review_ , 84(3):658–673, 1994. * Nair et al. (2003) Nair, R., Tambe, M., Yokoo, M., Pynadath, D., and Marsella, S. Taming decentralized pomdps: Towards efficient policy computation for multiagent settings. In _IJCAI_ , volume 3, pp. 705–711, 2003. * Nowak (2006) Nowak, M. A. _Evolutionary dynamics: exploring the equations of life_. Harvard University Press, 2006. * Peysakhovich & Lerer (2018) Peysakhovich, A. and Lerer, A. Prosocial learning agents solve generalized stag hunts better than selfish ones. 
In _Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems_ , pp. 2043–2044. International Foundation for Autonomous Agents and Multiagent Systems, 2018. * Peysakhovich & Naecker (2017) Peysakhovich, A. and Naecker, J. Using methods from machine learning to evaluate behavioral models of choice under risk and ambiguity. _Journal of Economic Behavior & Organization_, 133:373–384, 2017. * Ravindran & Barto (2004) Ravindran, B. and Barto, A. G. Approximate homomorphisms: A framework for non-exact minimization in markov decision processes. 2004\. * Resnick et al. (2018) Resnick, C., Kulikov, I., Cho, K., and Weston, J. Vehicle community strategies. _arXiv preprint arXiv:1804.07178_ , 2018. * Schelling (1980) Schelling, T. C. _The strategy of conflict_. Harvard university press, 1980. * Shum et al. (2019) Shum, M., Kleiman-Weiner, M., Littman, M. L., and Tenenbaum, J. B. Theory of minds: Understanding behavior in groups through inverse planning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 6163–6170, 2019. * Silver et al. (2017) Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of go without human knowledge. _Nature_ , 550(7676):354–359, 2017. * Stahl (1993) Stahl, D. O. Evolution of smartn players. _Games and Economic Behavior_ , 5(4):604–617, 1993. * Stone et al. (2010) Stone, P., Kaminka, G. A., Kraus, S., and Rosenschein, J. S. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In _Twenty-Fourth AAAI Conference on Artificial Intelligence_ , 2010\. * Sukhbaatar et al. (2016) Sukhbaatar, S., Fergus, R., et al. Learning multiagent communication with backpropagation. In _Advances in neural information processing systems_ , pp. 2244–2252, 2016. * Sunehag et al. (2018) Sunehag, P., Lever, G., Gruslys, A., Czarnecki, W. M., Zambaldi, V., Jaderberg, M., Lanctot, M., Sonnerat, N., Leibo, J. Z., Tuyls, K., et al. Value-decomposition networks for cooperative multi-agent learning based on team reward. In _Proceedings of the 17th international conference on autonomous agents and multiagent systems_ , pp. 2085–2087. International Foundation for Autonomous Agents and Multiagent Systems, 2018. * Tesauro (1994) Tesauro, G. Td-gammon, a self-teaching backgammon program, achieves master-level play. _Neural computation_ , 6(2):215–219, 1994. * Tieleman et al. (2018) Tieleman, O., Lazaridou, A., Mourad, S., Blundell, C., and Precup, D. Shaping representations through communication. _OpenReview_ , 2018. * Tobin et al. (2017) Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. Domain randomization for transferring deep neural networks from simulation to the real world. In _2017 IEEE/RSJ international conference on intelligent robots and systems (IROS)_ , pp. 23–30. IEEE, 2017. * Tucker et al. (2020) Tucker, M., Zhou, Y., and Shah, J. Adversarially guided self-play for adopting social conventions. _arXiv preprint arXiv:2001.05994_ , 2020. * van der Pol et al. (2020) van der Pol, E., Kipf, T., Oliehoek, F. A., and Welling, M. Plannable approximations to mdp homomorphisms: Equivariance under actions. _arXiv preprint arXiv:2002.11963_ , 2020. * Wright & Leyton-Brown (2010) Wright, J. R. and Leyton-Brown, K. Beyond equilibrium: Predicting human behavior in normal-form games. In _Twenty-Fourth AAAI Conference on Artificial Intelligence_ , 2010\. 
## Appendix A Details on Other Attempts

k | Cross-Play | Self-Play
---|---|---
1 | 1.06 $\pm$ 0.04 | 1.04 $\pm$ 0.06
2 | 0.95 $\pm$ 0.18 | 0.99 $\pm$ 0.33
3 | 1.49 $\pm$ 0.11 | 2.63 $\pm$ 0.34
4 | 2.48 $\pm$ 0.22 | 5.89 $\pm$ 0.75
5 | 2.04 $\pm$ 0.35 | 7.22 $\pm$ 0.56

Table 2: Cognitive Hierarchies Performance. We train CH for 5 levels with 3 seeds. The cross-play and self-play results are computed by averaging scores of intra-level pairings of agents trained with different seeds. The cross-play score is averaged over 6 pairs and the self-play score over 3 pairs for each cell.

In this section we provide more details on the other attempts mentioned previously. The core idea behind cognitive hierarchies (CH) (Stahl, 1993) and k-level reasoning (Costa-Gomes et al., 2001) is to train a sequence of $K$ agents of different capabilities. The hope is that, through such an explicit route of evolution, the final agents learn strategies that cross-play well. In our implementation of CH, the first agent $a^{(0)}$ in the sequence is a random agent that picks actions uniformly regardless of the state. The $k$th agent $a^{(k)}$ is trained to be the "best response" to the pool of agents $\{a^{(0)},...,a^{(k-1)}\}$. Intuitively, this means that the first trained agent will learn to play Hanabi based only on the hinted facts, i.e. grounded information, because $a^{(0)}$'s actions contain no intentions or conventions. $a^{(2)}$ can then learn to give more useful hints, and the subsequent agents may learn more complicated behaviors. In k-level reasoning, the $k$th agent $a^{(k)}$ only learns to best respond to $a^{(k-1)}$, and other aspects remain the same. Because the agents are trained with a random agent $a^{(0)}$ and will inevitably "bomb out", we alter the reward scheme so that the agents receive reward 0, instead of the negative of the current score, when they lose all life tokens.

The performance of CH is shown in Table 2. The most prominent phenomenon is that CH converges quite slowly, due to the fact that it needs to cooperate with a pool of different yet primitive policies. For each level, we train the models until convergence, and 5 levels normally take several days to complete. For reference, it takes less than roughly 20 hours for SAD and our other-play agents to reach 23 points in self-play under the same settings and hardware. The prohibitive cost in time and computation makes CH unsuitable for complicated tasks like MARL in Hanabi. Moreover, even though the self-play score is low, we can already see a clear performance gap between self-play and cross-play, making it safe to assume that CH will not work well in zero-shot coordination.

k | Cross-Play | Self-Play
---|---|---
1 | 0.73 $\pm$ 0.18 | 0.95 $\pm$ 0.36
2 | 0.50 $\pm$ 0.02 | 0.54 $\pm$ 0.13
3 | 2.99 $\pm$ 0.11 | 3.24 $\pm$ 0.24
4 | 1.71 $\pm$ 0.12 | 2.57 $\pm$ 0.11
5 | 6.14 $\pm$ 0.58 | 7.27 $\pm$ 1.48
6 | 2.08 $\pm$ 0.22 | 4.52 $\pm$ 1.05
7 | 6.28 $\pm$ 0.92 | 8.82 $\pm$ 2.17
8 | 1.82 $\pm$ 0.25 | 4.89 $\pm$ 1.95
9 | 6.87 $\pm$ 0.91 | 10.26 $\pm$ 2.52
10 | 2.05 $\pm$ 0.28 | 6.54 $\pm$ 2.65

Table 3: K-Level Performance. We train K-Level with $K=10$ and 3 seeds. The cross-play and self-play scores are computed by intra-level pairings of agents trained with different seeds.

In Table 3, we show the results of the K-Level method trained for 10 levels. Despite the gap between cross-play and self-play being smaller, this method suffers from non-monotonic improvements between levels and the same high cost in time and sample complexity.
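The difference between the two schedules described above reduces to the choice of training partners at each level. A minimal sketch follows (our own; `RandomAgent` and `train_best_response` are hypothetical stand-ins for the actual agents and the SAD-based RL loop):

```python
# A minimal sketch (ours) of the CH vs. k-level schedules; RandomAgent and
# train_best_response are hypothetical stand-ins for the real training code.
def train_hierarchy(K, scheme="ch"):
    pool = [RandomAgent()]                     # a^(0): picks actions uniformly
    for k in range(1, K + 1):
        # CH: the new agent best-responds to the whole pool {a^(0),...,a^(k-1)};
        # k-level: it best-responds to the previous level a^(k-1) only.
        partners = pool if scheme == "ch" else [pool[-1]]
        pool.append(train_best_response(partners))  # partner sampled per episode
    return pool[-1]
```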
| Population1 | Population2
---|---|---
Population1 | 23.46 $\pm$ 0.01 | 19.97 $\pm$ 0.06
Population2 | 20.16 $\pm$ 0.06 | 23.44 $\pm$ 0.01

Table 4: Performance of Population Based Method. Each cell is computed by pairing all agents from one population with those from the other population and then averaging the scores. The diagonal can be seen as the self-play score and the off-diagonal as the cross-play score under a population setting.

Figure 7: $P(a^{i}_{t}\mid a^{j}_{t-1})$ matrices of one model from each population. The semantics of the visualization are identical to those of Figure 5.

Different from CH and K-Level, where agents are trained sequentially, population based approaches (Fitzgerald, 2019; Tieleman et al., 2018; Lowe et al., 2019) train agents simultaneously by pairing distinct agents together to generate samples. We briefly experimented with a simple population setting where we initialize $N$ different agents with their own private replay buffers. They are uniformly paired with each other to generate samples and write their observation-action sequences into their own buffers. Each agent is optimized independently at each training step. We train 2 populations with different seeds. Each of them contains 4 agents initialized differently. The numerical results are shown in Table 4. This method can achieve decent cross-play scores. It is worth noting that the diversity between the hyper-parameters of the two populations is much smaller than that of the experiments shown in Figure 4, so the two are not directly comparable. However, a closer look at their respective policies through the $P(a^{i}_{t}\mid a^{j}_{t-1})$ matrices reveals the problem. The way they use color hints not only differs greatly between populations but also breaks the color symmetry of the game, which is the exact problem other-play tries to solve. Qualitatively, they are hard for humans to play with. They manage to achieve good cross-play scores because the agents seldom use color hints.
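A minimal sketch of one training step of this population scheme is given below; `play_episode` and `update` are placeholders standing in for the environment rollout and the independent learner update, not parts of any specific library.

```python
import random

def population_training_step(agents, buffers, play_episode, update):
    """One step of the simple population scheme described above.
    `play_episode(a, b)` is assumed to play one game and return each
    player's observation-action trajectory; `update(agent, buffer)`
    is an independent learning update for one agent."""
    i, j = random.sample(range(len(agents)), 2)  # uniform pairing of distinct agents
    traj_i, traj_j = play_episode(agents[i], agents[j])
    buffers[i].append(traj_i)                    # each agent keeps a private replay buffer
    buffers[j].append(traj_j)
    for agent, buffer in zip(agents, buffers):   # every agent is optimized independently
        update(agent, buffer)
```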
2024-09-04T02:54:57.717177
2020-03-06T02:30:15
2003.03006
{ "authors": "Lijiang Geng, Guanyu Hu", "full_text_license": null, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "arxiv-papers-0000.json.gz:26071", "submitter": "Guanyu Hu", "url": "https://arxiv.org/abs/2003.03006" }
arxiv-papers
# Bayesian Spatial Homogeneity Pursuit for Survival Data with an Application to the SEER Respiratory Cancer Data

Lijiang Geng Department of Statistics, University of Connecticut, CT 06269, USA Guanyu Hu Department of Statistics, University of Missouri - Columbia, MO 65211, USA

###### Abstract

In this work, we propose a new Bayesian spatial homogeneity pursuit method for survival data under the proportional hazards model to detect spatially clustered patterns in the baseline hazard and regression coefficients. Specifically, the regression coefficients and baseline hazards are assumed to exhibit a spatially homogeneous pattern. To capture such homogeneity, we develop a geographically weighted Chinese restaurant process prior, which allows us to simultaneously estimate the coefficients and baseline hazards together with their uncertainty measures. An efficient Markov chain Monte Carlo (MCMC) algorithm is designed for our proposed methods. Performance is evaluated using simulated data, and the method is further applied to a real data analysis of respiratory cancer in the state of Louisiana.

Keywords: Geographically Weighted Chinese Restaurant Process, MCMC, Piecewise Constant Baseline Hazard, Spatial Clustering

## 1 Introduction

Clinical data on individuals are often collected from different geographical regions and then aggregated and analyzed in public health studies. The most popular dataset is the Surveillance, Epidemiology, and End Results (SEER) program (SEER, 2016) data, which routinely collects population-based cancer patient data from 20 registries across the United States. These data provide prognostic and demographic factors of cancer patients. In this paper, we focus our study on the Louisiana respiratory cancer data, which were analyzed in Mu et al. (2020). Analyses of such data conducted at an aggregate level often assume that covariate effects are constant over the entire spatial domain. This is a rather strong assumption, as all intrinsic heterogeneities in the data are ignored. For example, if one were to study the hazard for patients with lung cancer, it is expected that the true hazard is not the same in areas with little air pollution and areas with severe air pollution, even for patients with similar characteristics. By Tobler’s first law of geography (Tobler, 1970), it is reasonable to consider similarities between nearby locations in survival data due to environmental circumstances in geographically close regions. In this paper, we recover the spatial homogeneity pattern of respiratory cancer survival rates among different counties in the state of Louisiana.

Existing approaches that account for such patterns in survival data fall into two major categories. The first is to incorporate spatial random effects in survival models such as the accelerated failure time (AFT) model and the proportional hazards model (Banerjee et al., 2003; Banerjee and Dey, 2005; Zhou et al., 2008; Zhang and Lawson, 2011; Henderson et al., 2012), such that spatial variations are accounted for by different intercepts for different regions, while parameters for covariates are held constant. Another important approach, instead of assuming all covariate effects are constant, allows parameters to be spatially varying in parametric, nonparametric, and semiparametric models (Hu et al., 2020; Hu and Huffer, 2020; Xue et al., 2020). Despite their flexibility, the aforementioned spatially varying coefficient models can be unnecessarily large.
Imposing certain constraints on nearby regions so that they have the same parameter values provides an efficient way of reducing the model size without sacrificing too much of its flexibility. While similar endeavors have been made to cluster spatial survival responses (Huang et al., 2007; Bhatt and Tiwari, 2014), the clustering of covariate effects and baseline hazards has yet to be studied for survival data. Two challenges must be tackled when clustering coefficients and baseline hazards in spatial survival models. First, the spatial structure needs to be appropriately incorporated into the clustering process. Contiguousness constraints should be added so that truly similar neighbors are driven to the same cluster. The constraints, however, should not be overly emphasized, as two distant regions may still share similar geographical and demographical characteristics and thus parameters. Existing methods, such as those in Lee et al. (2017, 2019) and Li and Sang (2019), do not allow for globally discontiguous clusters, which is a serious limitation. Second, the true number of clusters is unknown and needs to be estimated. Within the probabilistic Bayesian framework, simultaneous estimation of the number of clusters and the clustering configuration for each region is achieved by complicated search algorithms (e.g., reversible jump MCMC, Green, 1995) in variable-dimensional parameter spaces. Such algorithms assign a prior to the number of clusters that needs to be updated in every MCMC iteration, which makes them difficult to implement or automate; they also suffer from mixing issues and a lack of scalability. Nonparametric Bayesian approaches, such as the Chinese restaurant process (CRP; Pitman, 1995), provide another way to allow for uncertainty in the number of clusters. Its extension, the distance dependent CRP (ddCRP; Blei and Frazier, 2011), considers spatial information and yields a flexible class of distributions over partitions that allows for dependencies between their elements. The CRP framework, however, has been shown to be inconsistent in its estimation of the number of clusters (Miller and Harrison, 2013). Lu et al. (2018) proposed the powered CRP, which suppresses small tail clusters. Similar to the traditional CRP, however, it does not consider distance information, and therefore is not well suited when spatial homogeneity is to be detected. To address these challenges, in this work we consider a spatial proportional hazards model and propose a geographically weighted Chinese restaurant process (gwCRP) to capture the spatial homogeneity of both the regression coefficients and baseline hazards over subareas under the piecewise constant hazards model framework (Friedman et al., 1982). Our main contributions in this paper are threefold. First, we develop a new nonparametric Bayesian method for spatial clustering which combines the ideas of geographical weights and Dirichlet mixture models to leverage geographical information. Compared with existing methods, our proposed approach is able to capture both locally spatially contiguous clusters and globally discontiguous clusters. Second, an efficient Markov chain Monte Carlo (MCMC) algorithm, free of reversible jumps, is developed to simultaneously estimate the number of clusters and the clustering configuration.
In addition, we apply our method to the analysis of the Surveillance, Epidemiology, and End Results (SEER) Program data in the state of Louisiana among different counties, which provides important information for studying spatial survival rates.

The remainder of the paper is organized as follows. In Section 2, we develop a homogeneity pursuit method for survival data in the piecewise constant proportional hazards framework with a gwCRP prior. In Section 3, a collapsed Gibbs sampler algorithm and post-MCMC inference are discussed. Extensive simulation studies are carried out in Section 4. For illustration, our proposed methodology is applied to respiratory cancer survival data in Section 5. Finally, we conclude this paper with a brief discussion in Section 6.

## 2 Methodology

### 2.1 Spatial Piecewise Constant Hazards Models

Let $T_{\ell i}$ denote the survival time for patient $\ell$ at location $s_{i}$, with $\delta_{\ell i}=1$ representing the event and $\delta_{\ell i}=0$ indicating censoring, and let $X_{\ell}(s_{i})$ denote the vector of covariates corresponding to $T_{\ell i}$ for $i=1,2,...,n$ and $\ell=1,2,...,n_{i}$, where $n_{i}$ denotes the number of patients at location $s_{i}$. In this paper, $s_{1},s_{2},\ldots,s_{n}$ are areal units as defined in Banerjee et al. (2014). Let $\bm{D}=\\{(T_{\ell i},\delta_{\ell i},X_{\ell}(s_{i})),i=1,2,...,n,\ell=1,2,...,n_{i}\\}$ denote the observed data. We consider a proportional hazards model (Cox, 1972) with piecewise constant baseline hazard. We partition $[0,\infty)$ into $J$ intervals ($0=a_{0}<a_{1}<\dots<a_{J}=\infty$); then the hazard function is given by

$\lambda(t|X_{\ell}(s_{i}))=\lambda_{0}(t)\exp(X_{\ell}(s_{i})^{\top}{\mbox{\boldmath$\beta$}}),$ (1)

with piecewise constant baseline hazard function $\lambda_{0}(t)=\lambda_{j}$ for $a_{j-1}\leq t<a_{j},~{}j=1,\ldots,J$. In the piecewise constant hazard function (1), the baseline hazards $\lambda_{1},\dots,\lambda_{J}$ and regression coefficients $\beta$ are constant across regions. Due to observed environmental factors, spatially varying patterns in the baseline hazards and regression coefficients of the hazard function need to be considered. The piecewise constant hazard function with a spatially varying pattern is therefore given by

$\lambda(t|X_{\ell}(s_{i}))=\lambda_{0(s_{i})}(t)\exp(X_{\ell}(s_{i})^{\top}{\mbox{\boldmath$\beta$}}(s_{i})),$ (2)

where $\lambda_{0(s_{i})}(t)=\lambda_{j}(s_{i})$ for $a_{j-1}\leq t<a_{j},~{}j=1,\ldots,J$. Under this model, ${\mbox{\boldmath$\lambda$}}(s_{i})=(\lambda_{1}(s_{i}),\dots,\lambda_{J}(s_{i}))^{\top}$ and ${\mbox{\boldmath$\beta$}}(s_{i})$ represent the location-specific baseline hazards and regression coefficients.
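To make model (2) concrete, the following is a minimal sketch (ours, not the authors' implementation) of simulating a survival time under a piecewise constant proportional hazards model by inverting the cumulative hazard, using the standard fact that the cumulative hazard evaluated at $T$ is Exp(1) distributed; the cutpoints and parameter values shown are the ones used in the simulation study of Section 4.

```python
import numpy as np

def sample_survival_time(x, beta, lambdas, cuts, rng):
    """Draw T with hazard lambda_0(t) * exp(x' beta), where lambda_0 is
    piecewise constant: lambda_0(t) = lambdas[j] on [cuts[j], cuts[j+1]).
    Inverts the baseline cumulative hazard H_0, since H(T) ~ Exp(1)."""
    target = rng.exponential(1.0) / np.exp(x @ beta)  # solve H_0(T) = target
    H0 = 0.0
    for j, lam in enumerate(lambdas):
        width = cuts[j + 1] - cuts[j]
        if np.isinf(width) or H0 + lam * width > target:
            return cuts[j] + (target - H0) / lam      # T falls in piece j
        H0 += lam * width

# Illustrative values taken from the simulation designs in Section 4.
rng = np.random.default_rng(0)
cuts = [0.0, 1.5, 6.0, np.inf]                        # J = 3 pieces
t = sample_survival_time(np.array([0.1, -0.2, 0.3]),  # covariates
                         np.array([1.0, 0.5, 1.0]),   # beta_1
                         [0.045, 0.036, 0.045],       # lambda_1
                         cuts, rng)
observed = min(t, 150.0, rng.exponential(100.0))      # censoring as in Section 4
```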
After some algebra, the logarithm of the likelihood function for the observed survival data $\bm{D}$ is obtained as

$\begin{split}&\log\mathcal{L}({\mbox{\boldmath$\beta$}}(s_{i}),{\mbox{\boldmath$\lambda$}}(s_{i}),i=1,\ldots,n\mid\bm{D})\\\ &=\sum_{i=1}^{n}\left\\{\sum_{j=1}^{J}d_{ji}\log\lambda_{j}(s_{i})+\sum_{\ell=1}^{n_{i}}\delta_{\ell i}X_{\ell}(s_{i})^{\top}{\mbox{\boldmath$\beta$}}(s_{i})-\sum_{j=1}^{J}\lambda_{j}(s_{i})\left[\sum_{\ell=1}^{n_{i}}\Delta_{j}(T_{\ell i})\exp(X_{\ell}(s_{i})^{\top}{\mbox{\boldmath$\beta$}}(s_{i}))\right]\right\\},\end{split}$ (3)

where $d_{ji}=\sum_{\ell=1}^{n_{i}}\delta_{\ell i}{\bf{{1}}}_{[a_{j-1},a_{j})}(T_{\ell i})$, which represents the number of people at location $s_{i}$ who experience the event during the time period from $a_{j-1}$ to $a_{j}$, and $\Delta_{j}(t)=t-a_{j-1}\text{ for }a_{j-1}\leq t<a_{j}.$ For a particular location $s_{i}$, let $\bm{\eta}(s_{i})=\log{\mbox{\boldmath$\lambda$}}(s_{i})$ and define ${\mbox{\boldmath$\theta$}}(s_{i})=({\mbox{\boldmath$\beta$}}(s_{i})^{\top},\bm{\eta}(s_{i})^{\top})^{\top}$ as the collection of parameters. The maximum likelihood estimate (MLE) $\widehat{{\mbox{\boldmath$\theta$}}}(s_{i})$ can then be obtained by solving the score equations, found by differentiating the log-likelihood in (3), and the estimated variance-covariance matrix of the MLE is $\widehat{\Sigma}_{i}=(-H)^{-1}$, where $(-H)$ denotes the negative Hessian matrix. Based on the MLEs and estimated variance-covariance matrices, we have the following approximation of the likelihood.

###### Proposition 1.

We assume the regularity conditions A-D in Friedman et al. (1982). As $n_{i}\rightarrow\infty,\,i=1,\ldots,n$, the data likelihood $\mathcal{L}({\mbox{\boldmath$\beta$}}(s_{i}),{\mbox{\boldmath$\lambda$}}(s_{i}),i=1,\ldots,n\mid\bm{D})$ is approximated as

$\mathcal{L}({\mbox{\boldmath$\beta$}}(s_{i}),{\mbox{\boldmath$\lambda$}}(s_{i}),i=1,\ldots,n\mid\bm{D})\approx\prod_{i=1}^{n}\text{MVN}(\widehat{{\mbox{\boldmath$\theta$}}}(s_{i})|{\mbox{\boldmath$\theta$}}(s_{i}),\widehat{\Sigma}_{i}),$ (4)

where MVN stands for the multivariate normal distribution.

The derivations of $\widehat{\Sigma}_{i}$ and the proof of Proposition 1 are given in Section A and Section B of the Supporting Information. Instead of using the log-likelihood in (3), the model below is based on the normal approximation in Proposition 1 for computational convenience. Based on the normal approximation given in Proposition 1, a natural way to model a spatially varying pattern of baseline hazards and regression coefficients, following Gelfand et al. (2003), is to place a Gaussian process prior on ${{\mbox{\boldmath$\theta$}}}(s_{i}),\,i=1,\ldots,n$. The Gaussian process for ${{\mbox{\boldmath$\theta$}}}(s_{i}),\,i=1,\ldots,n$ is defined as

${\mbox{\boldmath$\theta$}}\sim\text{MVN}(\bm{1}_{n\times 1}\otimes\bm{\mu},\bm{H}(\phi)\otimes\Sigma),$ (5)

where ${\mbox{\boldmath$\theta$}}=({\mbox{\boldmath$\theta$}}(s_{1})^{\top},\ldots,{\mbox{\boldmath$\theta$}}(s_{n})^{\top})^{\top}$, $\bm{\mu}$ is a $p+J$ dimensional vector, $\bm{H}(\phi)$ is a $n\times n$ spatial correlation matrix depending on the distance matrix with parameter $\phi$, $\Sigma$ is a $(p+J)\times(p+J)$ covariance matrix, and $\otimes$ denotes the Kronecker product. The $(i,j)$-th entry of $\bm{H}(\phi)$ is $\exp(-\phi|s_{i}-s_{j}|)$, where $|s_{i}-s_{j}|$ is the distance between $s_{i}$ and $s_{j}$, and $\phi>0$ is the range parameter for spatial correlation.
Under the Gaussian process prior, the parameters of closer locations are more strongly correlated. In many spatial survival datasets, some regions share the same covariate effects or baseline hazards with nearby regions. In addition, some regions share similar parameters regardless of their geographical distances, due to the similarities of regions’ demographical information such as income distribution (Ma et al., 2020; Hu et al., 2020), food environment index, air pollution (Zhao et al., 2020), and so on. A fully spatially varying pattern for ${{\mbox{\boldmath$\theta$}}}(s_{i}),\,i=1,\ldots,n$, is therefore not always appropriate. Based on this homogeneity pattern, we focus on the clustering of spatially varying parameters. In our setting, we assume that the $n$ parameter vectors can be clustered into $k$ groups, i.e., ${\mbox{\boldmath$\theta$}}(s_{i})={\mbox{\boldmath$\theta$}}_{z_{i}}$ where $z_{i}\in\\{1,2,\ldots,k\\}$.

### 2.2 Geographically Weighted Chinese Restaurant Process

A latent clustering structure can be introduced to accommodate the spatial heterogeneity in the parameters of sub-areas. Under the frequentist framework, the clustering problem could be solved in a two-stage approach: first obtain the estimated number of clusters, $\widehat{k}$, and then detect the optimal clustering assignment among all possible clusterings of $n$ elements into $\widehat{k}$ clusters. In this approach, however, the performance of the cluster assignment estimation relies heavily on the estimated number of clusters; it ignores the uncertainty from the first stage and may produce redundant cluster assignments. Bayesian nonparametric methods are a natural remedy for simultaneously estimating the number of clusters and the cluster assignments. The Chinese restaurant process (CRP; Pitman, 1995; Neal, 2000) offers a way to allow for uncertainty in the number of clusters by assigning a prior distribution on $(z_{1},z_{2},\ldots,z_{n})$. In the CRP, $z_{i},~{}i=2,\ldots,n$ are defined through the following conditional distribution (also called a Pólya urn scheme, Blackwell et al., 1973):

$\displaystyle P(z_{i}=c\mid z_{1},\ldots,z_{i-1})\propto\begin{cases}\absolutevalue{c},&\text{at an existing cluster labeled}\,c,\\\ \alpha,&\text{at a new cluster}.\end{cases}$ (6)

Here $\absolutevalue{c}$ refers to the size of the cluster labeled $c$, and $\alpha$ is the concentration parameter of the underlying Dirichlet process. Under the Pólya urn scheme in (6), a new customer has no preference among existing customers beyond cluster sizes. For spatial survival data, nearby regions share similar environmental effects such as PM 2.5 levels, water quality, etc., and these similar effects lead nearby sub-regions to share similar parameters. In order to account for such effects of geographical distance, we modify the traditional CRP to a geographically weighted CRP (gwCRP) so that a new customer has a higher probability of sitting with customers who are geographically nearby. We have the conditional distribution of $\bm{\theta}(s_{i})$ given ${\mbox{\boldmath$\theta$}}(s_{1}),\ldots,{\mbox{\boldmath$\theta$}}(s_{i-1})$ based on the following definition.

###### Definition 1.
If $G_{0}$ is a continuous distribution and $i>1$, the distribution of $\bm{\theta}(s_{i})$ given $\bm{\theta}(s_{1}),\ldots,\bm{\theta}(s_{i-1})$ is proportional to

$\displaystyle f(\bm{\theta}(s_{i})\mid{\mbox{\boldmath$\theta$}}(s_{1}),\ldots,{\mbox{\boldmath$\theta$}}(s_{i-1}))\propto\sum_{r=1}^{K^{*}}\sum_{j=1}^{i-1}w_{ij}\bm{1}(\bm{\theta}(s_{j})=\bm{\theta}^{*}_{r})\delta_{\bm{\theta}^{*}_{r}}(\bm{\theta}(s_{i}))+\alpha G_{0}(\bm{\theta}(s_{i})),$ (7)

where $f(\cdot)$ is the density function, $K^{*}$ denotes the number of clusters excluding the $i$-th observation, $\bm{\theta}^{*}_{1},\ldots,\bm{\theta}^{*}_{K^{*}}$ are the $K^{*}$ distinct values of $\bm{\theta}_{1},\ldots,\bm{\theta}_{i-1}$, $w_{ij}$ is a geographical weight calculated from the distance between $s_{i}$ and $s_{j}$, and $\delta(\cdot)$ is the Dirac measure.

Based on Definition 1, the conditional distribution in (7) admits a Pólya urn scheme, called the gwCRP, analogous to that of the CRP.

###### Proposition 2.

A Pólya urn scheme of the gwCRP is defined as

$\displaystyle P(z_{i}=c\mid z_{1},\ldots,z_{i-1})\propto\begin{cases}\absolutevalue{c^{*}},&\text{at an existing cluster labeled}\,c,\\\ \alpha,&\text{at a new cluster},\end{cases}$ (8)

where $\absolutevalue{c^{*}}=\sum_{j=1}^{i-1}w_{ij}{\bf{{1}}}(z_{j}=c)$ and $w_{ij}$ is the geographical weight.

Compared with the existing geographically weighted regression literature, our weights are obtained from the graph distance between areas. Following Xue et al. (2020), we denote a graph as $G$, with set of vertices $V(G)=\\{v_{1},\ldots,v_{n}\\}$ and set of edges $E(G)=\\{e_{1},\ldots,e_{m}\\}$. The graph distance between two vertices $v_{i}$ and $v_{j}$ is defined as:

$d_{v_{i}v_{j}}=\begin{cases}|V(e)|,&\text{if }e\text{ is the shortest path connecting }v_{i}\text{ and }v_{j},\\\ \infty,&\text{if }v_{i}\text{ and }\,v_{j}\text{ are not connected},\end{cases}$ (9)

where $|V(e)|$ denotes the number of edges in the shortest path $e$. For county-level data, we construct the graph $G$ based on the adjacency matrix of the counties. We treat the $n$ counties as the $n$ vertices of this graph, and $v_{i}$ and $v_{j}$ are connected when the corresponding counties share a boundary. Based on the graph distance calculated by (9), we calculate the geographical weights by:

$w_{ij}=\begin{cases}1,&\text{ if }d_{v_{i}v_{j}}\leq 1,\\\ \exp(-d_{v_{i}v_{j}}\times h),&\text{ if }1<d_{v_{i}v_{j}},\end{cases}$ (10)

where $d_{v_{i}v_{j}}$ is the graph distance between areas $i$ and $j$. In the weighting function (10), we give the largest weight ($w_{ij}\equiv 1$) to areas sharing a boundary, following the first law of geography (Tobler, 1970). For simplicity, we refer to the gwCRP introduced above as $\text{gwCRP}(\alpha,h)$, where $\alpha$ is the concentration parameter for the Dirichlet distribution and $h$ is the spatial smoothness parameter.

###### Remark 1.

Based on the Pólya urn scheme defined in (8) and the geographical weighting scheme defined in (10), we find that (i) when $h=0$, the gwCRP reduces to the traditional CRP, which leads to an over-clustering problem in estimating the number of clusters; (ii) when $h\rightarrow\infty$, a new customer can only choose a table representing spatially contiguous regions, which also leads to the same over-clustering problem as the CRP.
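To make the construction of the weights in (9)–(10) concrete, here is a minimal sketch that computes all-pairs graph distances by breadth-first search on a county adjacency matrix and then applies the weighting function (10); it assumes $h>0$ and is an illustration rather than the authors' code.

```python
import numpy as np
from collections import deque

def graph_distances(adj):
    """All-pairs graph distances (number of edges on a shortest path),
    computed by breadth-first search on the adjacency matrix, as in (9);
    unreachable pairs get distance infinity."""
    n = len(adj)
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0.0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and np.isinf(D[s, v]):
                    D[s, v] = D[s, u] + 1.0
                    queue.append(v)
    return D

def gw_weights(adj, h):
    """Geographical weights w_ij of (10), assuming h > 0: weight 1 for a
    region and its adjacent neighbors, exp(-d * h) otherwise."""
    D = graph_distances(adj)
    return np.where(D <= 1, 1.0, np.exp(-D * h))

# Toy example: four regions on a path 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
print(gw_weights(adj, h=1.0))
```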
### 2.3 gwCRP for Piecewise Constant Hazards Models

Adapting the gwCRP to the piecewise constant hazards model, our model and prior can be expressed hierarchically as:

$\displaystyle\begin{split}&\log\mathcal{L}({\mbox{\boldmath$\beta$}}_{z_{i}},{\mbox{\boldmath$\lambda$}}_{z_{i}},i=1,\ldots,n\mid\bm{D})\\\ &=\sum_{i=1}^{n}\left\\{\sum_{j=1}^{J}d_{ji}\log\lambda_{jz_{i}}+\sum_{\ell=1}^{n_{i}}\delta_{\ell i}X_{\ell}(s_{i})^{\top}{\mbox{\boldmath$\beta$}}_{z_{i}}-\sum_{j=1}^{J}\lambda_{jz_{i}}\left[\sum_{\ell=1}^{n_{i}}\Delta_{j}(T_{\ell i})\exp(X_{\ell}(s_{i})^{\top}{\mbox{\boldmath$\beta$}}_{z_{i}})\right]\right\\},\\\ &z_{i}\mid\bm{\pi},k\sim\text{Multinomial}(\pi_{1},\cdots,\pi_{k}),\\\ &\bm{\pi}\sim\text{gwCRP}(\alpha,h),\\\ &\bm{\theta}_{r}\sim\mbox{MVN}(0,\Sigma_{0}),\quad r=1,\ldots,k,\end{split}$ (11)

where ${\mbox{\boldmath$\theta$}}_{r}=(\beta_{1r},\ldots,\beta_{pr},\log\lambda_{1r},\ldots,\log\lambda_{Jr})^{\top}$ is a $p+J$ dimensional vector. Here we let $k\rightarrow\infty$, and $\Sigma_{0}$ is the hyperparameter for the base distribution of ${\mbox{\boldmath$\theta$}}_{1},\ldots,{\mbox{\boldmath$\theta$}}_{r}$. We choose $\Sigma_{0}=100\bm{I}$ in all the simulations and the real data analysis, providing noninformative priors. The concentration parameter $\alpha$ controls the probability of introducing a new cluster, as in the CRP. Different values of $h$ lead to different weighting scales across sub-regions. In the following simulations and real data analysis, we fix $\alpha=1$ and tune $h$ over different values.

## 3 Bayesian Inference

In this section, we introduce the MCMC sampling algorithm, the post-MCMC inference method, and a Bayesian model selection criterion. Our goal is to sample from the posterior distribution of the unknown parameters $k$, $\bm{z}=(z_{1},...,z_{n})\in\\{1,...,k\\}^{n}$, $\bm{\beta}=(\bm{\beta}_{1},\ldots,\bm{\beta}_{k})$, and $\bm{\lambda}=(\bm{\lambda}_{1},\ldots,\bm{\lambda}_{k})$. Based on Proposition 1 and Proposition 2, we can efficiently cycle through the full conditional distributions of $z_{i}|z_{-i}$ for $i=1,2,\ldots,n$ and $\bm{\beta}^{\top},\log\bm{\lambda}^{\top}$, where $z_{-i}=\bm{z}\setminus{\\{z_{i}}\\}$. The marginalization over $k$ avoids complicated reversible jump MCMC algorithms or even allocation samplers. The full conditionals of $z_{1},\ldots,z_{n}$ are given in Proposition 3. The details of the sampling algorithm are given in Section C of the Supporting Information.

###### Proposition 3.
The full conditional distributions $P(z_{i}=c\mid z_{-i},\widehat{\bm{\theta}},{\mbox{\boldmath$\theta$}})$ of $z_{1},\ldots,z_{n}$ are given as

$\propto\left\\{\begin{array}[]{ll}&\left(\sum_{j\neq i}w_{ij}{\bf{{1}}}(z_{j}=c)\right)(2\pi)^{-\frac{p}{2}}|\widehat{\Sigma}_{i}|^{-\frac{1}{2}}\exp\left\\{-\frac{1}{2}\left((\widehat{\bm{\theta}}(s_{i})-{\mbox{\boldmath$\theta$}}_{c})^{\top}\widehat{\Sigma}_{i}^{-1}(\widehat{\bm{\theta}}(s_{i})-{\mbox{\boldmath$\theta$}}_{c})\right)\right\\}\text{ at existing $c$}\\\ &\alpha(2\pi)^{-\frac{p}{2}}|\widehat{\Sigma}_{i}|^{-\frac{1}{2}}|\Sigma_{0}|^{-\frac{1}{2}}|\widehat{\Sigma}_{i}^{-1}+\Sigma_{0}^{-1}|^{-\frac{1}{2}}\exp\left\\{-\frac{1}{2}\left(\widehat{\bm{\theta}}(s_{i})^{\top}(\widehat{\Sigma}_{i}+\Sigma_{0})^{-1}\widehat{\bm{\theta}}(s_{i})\right)\right\\}\text{ if $c$ is a new cluster}\\\ \end{array}\right.$

where $\widehat{\Sigma}_{i}$ is the estimated variance-covariance matrix of the MLE $\widehat{\bm{\theta}}(s_{i})$ for $i=1,\ldots,n$, and $\Sigma_{0}$ is the variance hyperparameter for the base distribution of ${\mbox{\boldmath$\theta$}}_{1},\ldots,{\mbox{\boldmath$\theta$}}_{r}$.

We carry out posterior inference on the group memberships $z_{1},\ldots,z_{n}$ by using Dahl’s method (Dahl, 2006), which proceeds as follows:

1. Define membership matrices $B^{(l)}=(B(i,j))_{i,j\in\left\\{1,\ldots,n\right\\}}=(\bm{1}(z_{i}^{(l)}=z_{j}^{(l)}))_{n\times n}$, where $l=1,\ldots,B$ indexes the retained MCMC draws after burn-in, and $\bm{1}(\cdot)$ is the indicator function.
2. Calculate the average membership matrix $\overline{B}=\frac{1}{B}\sum_{l=1}^{B}B^{(l)}$, where the summation is element-wise.
3. Identify the most _representative_ posterior sample as the one that is closest to $\overline{B}$ with respect to the element-wise Euclidean distance $\sum_{i=1}^{n}\sum_{j=1}^{n}(B^{(l)}(i,j)-\overline{B}(i,j))^{2}$ among the retained $l=1,\ldots,B$ posterior samples.

Therefore, the posterior estimates of the cluster memberships $z_{1},\ldots,z_{n}$ and model parameters $\bm{\theta}$ can be obtained based on the draw identified by Dahl’s method.

We recast the choice of the decay parameter $h$ as a model selection problem. We use the Logarithm of the Pseudo-Marginal Likelihood (LPML; Ibrahim et al., 2001) based on the conditional predictive ordinate (CPO; Gelfand et al., 1992; Geisser, 1993; Gelfand and Dey, 1994) to select $h$. The LPML is defined as

$\text{LPML}=\sum_{i=1}^{N}\text{log}(\text{CPO}_{i}),$ (12)

where $\text{CPO}_{i}$ is the $i$-th conditional predictive ordinate. Following Chen et al. (2000), a Monte Carlo estimate of the CPO, within the Bayesian framework, can be obtained as

$\widehat{\text{CPO}}_{i}^{-1}=\frac{1}{B}\sum_{b=1}^{B}\frac{1}{f(D_{i}|{\mbox{\boldmath$\theta$}}_{z_{i}}^{b})},$ (13)

where $B$ is the total number of Monte Carlo iterations, ${\mbox{\boldmath$\theta$}}_{z_{i}}^{b}$ is the $b$-th posterior sample, and $f(\cdot)$ is the likelihood function defined in (3). An estimate of the LPML can subsequently be calculated as:

$\widehat{\text{LPML}}=\sum_{i=1}^{N}\text{log}(\widehat{\text{CPO}}_{i}).$ (14)

A model with a larger LPML value is preferred.
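Both post-MCMC computations above are straightforward to express in code. The following is a minimal sketch (an illustration, not the authors' implementation); `z_draws` is assumed to hold the retained cluster labels, and `loglik_draws[b, i]` the log-likelihood of the data at location $i$ under the $b$-th posterior draw.

```python
import numpy as np
from scipy.special import logsumexp

def dahl_estimate(z_draws):
    """Dahl's (2006) method: return the retained draw whose membership
    matrix B^(l) is closest, in element-wise squared distance, to the
    average membership matrix; `z_draws` has shape (B, n)."""
    z = np.asarray(z_draws)
    mats = (z[:, :, None] == z[:, None, :]).astype(float)  # B^(l) matrices
    avg = mats.mean(axis=0)                                 # average membership
    losses = ((mats - avg) ** 2).sum(axis=(1, 2))
    return z[np.argmin(losses)]                             # representative draw

def lpml_estimate(loglik_draws):
    """Monte Carlo LPML of (12)-(14): CPO_i^{-1} is the posterior mean of
    1/f(D_i | theta^(b)); logsumexp is used for numerical stability."""
    ll = np.asarray(loglik_draws)                           # shape (B, n)
    log_inv_cpo = logsumexp(-ll, axis=0) - np.log(ll.shape[0])
    return float(-log_inv_cpo.sum())
```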
## 4 Simulation

### 4.1 Simulation Setting and Evaluation Metrics

In this section, we present simulation studies under four different designs to illustrate the performance of our proposed gwCRP method and compare it with the traditional CRP, in terms of both the clustering configuration and the estimation of regression coefficients and piecewise constant baseline hazards under the proportional hazards model. Survival datasets that resemble the SEER respiratory cancer data for Louisiana are generated. The censoring rate is around 30%. We design four different geographical clustering patterns in the state of Louisiana, which are shown in Figure 1. Designs I and III have three true clusters, and Designs II and IV have two true clusters. In addition, Designs II and III both have one cluster consisting of two disjoint areas since, in practice, it is still possible for two distant counties to belong to the same cluster. Design IV has two clusters both consisting of disjoint areas.

Figure 1: Geographical clustering patterns in the state of Louisiana for the simulation designs (This figure appears in color in the electronic version of this article, and any mention of color refers to that version.)

For each design, 100 replicate datasets are generated under the proportional hazards model with piecewise constant baseline hazard. In each replicate, we generate survival data for 60 subjects in each county, including three i.i.d. $N(0,1)$ regression covariates, survival times, and censoring. We set three pieces for the baseline hazards, with cutpoints 1.5 and 6, for all designs. Across the four designs there are at most three true clusters, and the true regression coefficients and baseline hazards are chosen from ${\mbox{\boldmath$\beta$}}_{1}=(1,0.5,1),{\mbox{\boldmath$\lambda$}}_{1}=(0.045,0.036,0.045)$, ${\mbox{\boldmath$\beta$}}_{2}=(1.5,1,1),{\mbox{\boldmath$\lambda$}}_{2}=(0.045,0.036,0.036)$, and ${\mbox{\boldmath$\beta$}}_{3}=(2,0.5,1.5),{\mbox{\boldmath$\lambda$}}_{3}=(0.036,0.045,0.0495)$. Censoring times are generated independently by taking the minimum of 150 and random values from Exp(0.01) with expectation 100. For each replicate, we set $\alpha=1$ and run different values of $h$, from 0 to 2 with grid 0.2 and from 3 to 10 with grid 1, and select the optimal $h$ via LPML. A total of 2000 MCMC iterations are run for each replicate, with the first 500 iterations as burn-in.

To compare the clustering performance of the gwCRP under different values of $h$, both the estimated number of clusters and the recovery of the clustering configurations are reported. In our simulation, we use the mean Rand Index (Rand, 1971), computed with the R package fossil (Vavrek, 2011), to measure clustering performance. In addition to clustering performance, we further evaluate the estimation performance for the covariate coefficients and baseline hazards, which is assessed by the average bias (AB) and the average mean squared error (AMSE), defined as follows. Let $\bm{z}=(z_{1},\ldots,z_{n})$ be the true clustering label vector, ${\mbox{\boldmath$\theta$}}_{r}(s_{i})$ be the true parameter value of cluster $r$, $\kappa_{r}=\sum_{i=1}^{64}{\bf{{1}}}(z_{i}=r)$ be the number of counties in cluster $r$, $r=1,\ldots,k$, with $\sum_{r=1}^{k}\kappa_{r}=n$, and for the $t$-th simulated dataset, let $\widehat{{\mbox{\boldmath$\theta$}}}_{(t)}(s_{i})$ be the Dahl's method estimate at location $s_{i}$.
Then AB is calculated as

$\text{AB}=\frac{1}{k}\sum_{r=1}^{k}\frac{1}{\kappa_{r}}\sum_{i|z_{i}=r}\frac{1}{100}\sum_{t=1}^{100}(\widehat{{\mbox{\boldmath$\theta$}}}_{(t)}(s_{i})-{\mbox{\boldmath$\theta$}}_{r}(s_{i})),$

and AMSE is calculated as

$\text{AMSE}=\frac{1}{k}\sum_{r=1}^{k}\frac{1}{\kappa_{r}}\sum_{i|z_{i}=r}\frac{1}{100}\sum_{t=1}^{100}(\widehat{{\mbox{\boldmath$\theta$}}}_{(t)}(s_{i})-{\mbox{\boldmath$\theta$}}_{r}(s_{i}))^{2},$

which computes the mean squared error for each cluster first and then averages across clusters.

### 4.2 Simulation Results

Figure 2 shows the histogram of the $k$ estimates and boxplots of the Rand Index under different $h$ and the optimal $h$ selected by LPML for the four simulation designs. We see that when $h=0$, the proposed gwCRP method is identical to the traditional CRP method, and in this case the CRP always tends to over-cluster and often yields a smaller Rand Index than the results under $h>0$. Another important trend is that, as $h$ increases, the estimated number of clusters decreases first and then increases, and the Rand Index increases first and then decreases as $h$ becomes too large. As we discussed in Remark 1, this is because when $h$ increases from 0, the spatial patterns in the data are captured by the proposed gwCRP method. However, as $h\rightarrow\infty$, the geographical weights $w_{ij}$ for spatially discontiguous counties become 0, which means only adjacent counties can be classified into the same cluster, therefore leading to the over-clustering phenomenon again. It is also discovered that the clustering performance under the optimal $h$ selected by LPML is very good, with the probability of selecting the true number of clusters always greater than 0.75, and a Rand Index larger than or similar to the highest results attained by any fixed value of $h$.

Figure 2: Histogram of estimates of $k$ and boxplot of Rand Index under different $h$ and LPML selection for the simulation designs. The average $h$ selected by LPML is 1.296 in Design 1, 1.412 in Design 2, 1.366 in Design 3, and 1.602 in Design 4.

Table 1 summarizes the AB and AMSE results for parameter estimation by the gwCRP under different $h$ for the different designs. For simplicity, the reported AMSE of $\beta$ is the average of the AMSEs of $\beta_{1},\beta_{2},\beta_{3}$, since they have similar scales, and the AMSE of $\log{\mbox{\boldmath$\lambda$}}$ is likewise the average of the AMSEs of $\log\lambda_{1},\log\lambda_{2},\log\lambda_{3}$. For Designs II, III and IV, the absolute value of AB decreases as $h$ increases from 0 to moderate values, and increases again as $h$ grows relatively large. For all four designs, the absolute values of AB for the $\lambda$'s under the optimal $h$ selected by LPML are always the smallest, and the absolute values for both the $\beta$'s and $\lambda$'s under the optimal $h$ selected by LPML are always smaller than those of the traditional CRP. The patterns in AMSE are clearer when comparing different methods: the traditional CRP has the largest AMSE, and the AMSE decreases as $h$ increases from 0 to moderate values, then increases again as $h$ grows relatively large. The results under the optimal $h$ selected by LPML also show the best estimation performance.
Table 1: AB and AMSE for parameter estimation under different $h$ and LPML selection for different simulation designs

Method | Parameter | Design I | Design II | Design III | Design IV
---|---|---|---|---|---
 | | AB | AMSE | AB | AMSE | AB | AMSE | AB | AMSE
gwCRP $h=0.6$ | $\beta$ | -0.0063 | 0.0069 | 0.0038 | 0.0067 | -0.0065 | 0.0083 | -0.0027 | 0.0060
 | $\lambda$ | 0.0815 | 0.0193 | 0.0732 | 0.0186 | 0.0797 | 0.0219 | 0.0827 | 0.0190
gwCRP $h=1.2$ | $\beta$ | -0.0078 | 0.0065 | -0.0002 | 0.0058 | -0.0061 | 0.0087 | -0.0017 | 0.0049
 | $\lambda$ | 0.0818 | 0.0194 | 0.0775 | 0.0171 | 0.0794 | 0.0217 | 0.0815 | 0.0172
gwCRP $h=2$ | $\beta$ | -0.0096 | 0.0068 | -0.0006 | 0.0055 | -0.0056 | 0.0085 | -0.0051 | 0.0049
 | $\lambda$ | 0.0849 | 0.0190 | 0.0814 | 0.0158 | 0.0798 | 0.0216 | 0.0873 | 0.0191
gwCRP $h=6$ | $\beta$ | -0.0061 | 0.0129 | 0.0030 | 0.0072 | -0.0299 | 0.0204 | -0.0064 | 0.0074
 | $\lambda$ | 0.0805 | 0.0281 | 0.0770 | 0.0217 | 0.1005 | 0.0296 | 0.0852 | 0.0195
gwCRP Optimal | $\beta$ | -0.0039 | 0.0059 | 0.0042 | 0.0055 | 0.0074 | 0.0067 | -0.0005 | 0.0035
 | $\lambda$ | 0.0661 | 0.0177 | 0.0711 | 0.0145 | 0.0732 | 0.0203 | 0.0777 | 0.0177
CRP | $\beta$ | -0.0046 | 0.0086 | 0.0018 | 0.0092 | -0.0056 | 0.0089 | 0.0003 | 0.0082
 | $\lambda$ | 0.0760 | 0.0228 | 0.0717 | 0.0233 | 0.0787 | 0.0239 | 0.0742 | 0.0223

A sensitivity analysis regarding $\alpha$ and the weighting function is conducted. The values $\alpha=0.5,1,2,5$ and the weighting function $w_{ij}=\exp(-d_{v_{i}v_{j}}^{2}\times h^{2}){\bf{{1}}}\\{d_{v_{i}v_{j}}>1\\}+{\bf{{1}}}\\{d_{v_{i}v_{j}}\leq 1\\}$, which has a faster decay to 0, are run, and all results are presented in Section D of the Supporting Information. The results show that the optimal $h$ selected by LPML is insensitive to the choice of $\alpha$ and the weighting function.

In brief, based on our simulation studies, gwCRP models outperform the CRP in both clustering and parameter estimation, and our proposed model selection criterion, LPML, nearly always selects the best performing value of $h$ for both tasks.

## 5 SEER Respiratory Cancer Data

### 5.1 Data Description

In this section, we apply our proposed model to analyze respiratory cancer data in the state of Louisiana, which is downloaded from the Surveillance, Epidemiology, and End Results (SEER) Program. We analyze the survival times of respiratory cancer patients using the SEER public-use data (SEER 1973-2016 Public-Use). We refer to Mu et al. (2020) for a detailed description of the data cleaning. After cleaning, there are 16,213 observations left, and the censoring rate is 30.44%. We select Age, Gender, Cancer grade and Historical stage of cancer for our analysis, and summarize the survival times and covariates in Table 2. The median survival times for patients in each county are provided in Section E of the Supporting Information.

Table 2: Demographics for the studied dataset. For continuous variables, the mean and standard deviation (SD) are reported. For binary variables, the frequency and percentage of each class are reported.
 | Mean(SD) / Frequency (Percentage)
---|---
Survival Time | 22.43 (31.90)
Event | 12.63 (18.32)
Censor | 44.85 (43.06)
Diagnostic Age | 66.55 (11.66)
Sex | Female | 6548 ($40.39\%$)
 | Male | 9665 ($59.61\%$)
Cancer Grade | the class of lower grades | 5307 ($32.73\%$)
 | the class of III or IV | 10906 ($67.27\%$)
Historical Stage* | not distant | 9005 ($55.54\%$)
 | distant | 7208 ($44.46\%$)

*Distant stage means that a tumor has spread to areas of the body distant or remote from the primary tumor.

We first fit a Cox model for the patients of each county using the selected covariates. The regression coefficients are visualized in Section E of the Supporting Information. From the results shown in the Supporting Information, it is seen that some counties have similar characteristics, not limited to adjacent counties, indicating the possibility of globally discontiguous clusters.

### 5.2 Data Analysis

To select the optimal number of pieces for the baseline hazard and the optimal $h$, we run $J=2$ with the cutpoints $(0,9.01)$, $J=3$ with cutpoints $(0,3.01,9.01)$, $J=4$ with cutpoints $(0,1.01,4.01,9.01)$, and $J=5$ with cutpoints $(0,1.01,3.01,5.01,9.01)$. The cutpoints are set by dividing the range from the start point $0$ to the median survival time $9.01$ evenly by quantiles, to ensure there are events in each piece for each county. For each $J$, we run $h$ from 0 to 10 with grid 0.1, and for each combination of $J$ and $h$, 5000 MCMC iterations are run, with the first 2000 dropped as burn-in. The optimal values selected by LPML are $J=4$ and $h=9.0$, under which the corresponding estimated number of clusters is two, while the traditional CRP classifies the counties into five clusters. The trace plots of different chains of posterior samples of the estimates for selected counties are presented in Section E of the Supporting Information to show the convergence of the MCMC. The plots of the clustering patterns of CRP and gwCRP Optimal are shown in Figure 3, from which it is seen that the gwCRP captures the globally discontiguous clusters very well. The estimates and $95\%$ Credible Intervals of the regression coefficients and baseline hazards obtained by gwCRP Optimal are given in Table 3, from which we see that, though the baseline hazards are similar, the regression coefficients are quite different across clusters. We see that our proposed method successfully detects both the spatially contiguous cluster and the discontiguous cluster simultaneously. The parameter estimates for Age are positive in all counties, indicating that older patients on average are more likely to experience the event than younger patients. For the counties in Cluster 1, diagnostic age has a higher hazard effect than in other counties, whereas for the counties in Cluster 2, being male and having a later cancer grade carry higher hazard effects than in other counties. The historical distant stage effects are very similar in the two clusters, indicating that subjects whose tumors have spread have higher hazards.

Figure 3: Clustering patterns of counties in the state of Louisiana under the CRP (when J=4) and gwCRP Optimal (when J=4, h=9.0) methods (This figure appears in color in the electronic version of this article, and any mention of color refers to that version.)

Table 3: Estimation results for regression coefficients and baseline hazards obtained by gwCRP Optimal ($J=4,~{}h=9.0$).
The $95\%$ Credible Interval for the estimates of Cluster One is calculated by the $95\%$ HPD Interval of County 13, and the $95\%$ Credible Interval for the estimates of Cluster Two is calculated by the $95\%$ HPD Interval of County 33, where the counties were selected by the minimum Euclidean distance from the posterior mean to the average estimate.

Parameter | Cluster 1 | Cluster 2
---|---|---
 | Estimate | $95\%$ Credible Interval | Estimate | $95\%$ Credible Interval
$\beta_{\text{Age}}$ | 0.1847 | (0.1693, 0.2158) | 0.0728 | (0.0138, 0.2056)
$\beta_{\text{Sex}}$ | 0.1239 | (0.0912, 0.1732) | 0.3411 | (0.1118, 0.5189)
$\beta_{\text{Grade}}$ | 0.5075 | (0.4366, 0.5291) | 0.8290 | (0.4583, 0.9578)
$\beta_{\text{Hist-Stage}}$ | 1.3271 | (1.2926, 1.3824) | 1.4434 | (1.3150, 1.7101)
$\lambda_{1}$ | 1.0690 | (0.9938, 1.0912) | 1.0359 | (0.9060, 1.2330)
$\lambda_{2}$ | 1.0716 | (0.9909, 1.1172) | 1.0877 | (0.8432, 1.3035)
$\lambda_{3}$ | 0.9843 | (0.9787, 1.1040) | 1.0912 | (0.8535, 1.2899)
$\lambda_{4}$ | 1.0040 | (0.9583, 1.0513) | 1.0209 | (0.8724, 1.2091)

## 6 Discussion

In this paper, we proposed a geographically weighted Chinese restaurant process to capture the spatial homogeneity of regression coefficients and baseline hazards based on the piecewise constant hazards model. An efficient MCMC algorithm is proposed for our methods without a complicated reversible jump algorithm. Extensive simulation studies show that our proposed method has better clustering performance than the traditional CRP in spatial homogeneity pursuit for survival data. The simulation studies also show that our proposed methods yield promising results in coefficient and baseline hazard estimation. An application to the analysis of SEER data provides an interesting illustration of our proposed methods.

Furthermore, four topics beyond the scope of this paper are worth further investigation. First, our algorithm is based on a two-step estimation under the piecewise constant proportional hazards model assumption; proposing an efficient sampling algorithm without the Laplace approximation is an important direction for future work. Second, we fixed the number of pieces of the baseline hazards in both the simulation studies and the real data analysis; allowing an adaptive number of pieces in the baseline hazards is left for future research. Third, variable selection approaches based on the hierarchical CRP (Griffiths et al., 2004) are also worth investigating. Finally, allowing different covariates and the baseline hazard to follow different clustering processes is another important direction for future work.

## Acknowledgement

The authors would like to thank the editor, the associate editors and two reviewers for their valuable comments which help improve the presentation of this paper.

## Data Availability Statement

The data that support the findings of this paper are available from the corresponding author upon reasonable request.

## References

* Banerjee et al. (2014) Banerjee, S., Carlin, B. P., and Gelfand, A. E. (2014). Hierarchical modeling and analysis for spatial data. CRC Press. * Banerjee and Dey (2005) Banerjee, S. and Dey, D. K. (2005). Semiparametric proportional odds models for spatially correlated survival data. Lifetime Data Analysis 11, 175–191. * Banerjee et al. (2003) Banerjee, S., Wall, M. M., and Carlin, B. P. (2003). Frailty modeling for spatially correlated survival data, with application to infant mortality in Minnesota. Biostatistics 4, 123–142. * Bhatt and Tiwari (2014) Bhatt, V. and Tiwari, N. (2014).
A spatial scan statistic for survival data based on the Weibull distribution. Statistics in Medicine 33, 1867–1876. * Blackwell et al. (1973) Blackwell, D., MacQueen, J. B., et al. (1973). Ferguson distributions via Pólya urn schemes. The Annals of Statistics 1, 353–355. * Blei and Frazier (2011) Blei, D. M. and Frazier, P. I. (2011). Distance dependent Chinese restaurant processes. Journal of Machine Learning Research 12, 2461–2488. * Chen et al. (2000) Chen, M.-H., Shao, Q.-M., and Ibrahim, J. G. (2000). Monte Carlo Methods in Bayesian Computation. New York: Springer-Verlag. * Cox (1972) Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society B 34, 187–220. * Dahl (2006) Dahl, D. B. (2006). Model-based clustering for expression data via a Dirichlet process mixture model. Bayesian inference for gene expression and proteomics 4, 201–218. * Friedman et al. (1982) Friedman, M. et al. (1982). Piecewise exponential models for survival data with covariates. The Annals of Statistics 10, 101–113. * Geisser (1993) Geisser, S. (1993). Predictive Inference: An Introduction. London: Chapman & Hall. * Gelfand and Dey (1994) Gelfand, A. E. and Dey, D. K. (1994). Bayesian model choice: asymptotics and exact calculations. Journal of the Royal Statistical Society: Series B (Methodological) 56, 501–514. * Gelfand et al. (1992) Gelfand, A. E., Dey, D. K., and Chang, H. (1992). Model determination using predictive distributions with implementation via sampling-based methods (with discussion). In Bayesian Statistics 4. Oxford University Press. * Gelfand et al. (2003) Gelfand, A. E., Kim, H.-J., Sirmans, C., and Banerjee, S. (2003). Spatial modeling with spatially varying coefficient processes. Journal of the American Statistical Association 98, 387–396. * Green (1995) Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82, 711–732. * Griffiths et al. (2004) Griffiths, T. L., Jordan, M. I., Tenenbaum, J. B., and Blei, D. M. (2004). Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems, pages 17–24. * Henderson et al. (2012) Henderson, R., Shimakura, S., and Gorst, D. (2012). Modeling spatial variation in leukemia survival data. Journal of the American Statistical Association. * Hu et al. (2020) Hu, G., Geng, J., Xue, Y., and Sang, H. (2020). Bayesian spatial homogeneity pursuit of functional data: an application to the U.S. income distribution. * Hu and Huffer (2020) Hu, G. and Huffer, F. (2020). Modified Kaplan–Meier estimator and Nelson–Aalen estimator with geographical weighting for survival data. Geographical Analysis 52, 28–48. * Hu et al. (2020) Hu, G., Xue, Y., and Huffer, F. (2020). A comparison of Bayesian accelerated failure time models with spatially varying coefficients. Sankhya B pages 1–17. * Huang et al. (2007) Huang, L., Pickle, L. W., Stinchcomb, D., and Feuer, E. J. (2007). Detection of spatial clusters: application to cancer survival as a continuous outcome. Epidemiology 18, 73–87. * Ibrahim et al. (2001) Ibrahim, J. G., Chen, M.-H., and Sinha, D. (2001). Bayesian Survival Analysis. New York: Springer-Verlag. * Lee et al. (2017) Lee, J., Gangnon, R. E., and Zhu, J. (2017). Cluster detection of spatial regression coefficients. Statistics in Medicine 36, 1118–1133. * Lee et al. (2019) Lee, J., Sun, Y., and Chang, H. H. (2019). Spatial cluster detection of regression coefficients in a mixed-effects model. Environmetrics page e2578.
* Li and Sang (2019) Li, F. and Sang, H. (2019). Spatial homogeneity pursuit of regression coefficients for large datasets. Journal of the American Statistical Association pages 1–21. * Lu et al. (2018) Lu, J., Li, M., and Dunson, D. (2018). Reducing over-clustering via the powered Chinese restaurant process. arXiv preprint arXiv:1802.05392. * Ma et al. (2020) Ma, Z., Xue, Y., and Hu, G. (2020). Heterogeneous regression models for clusters of spatial dependent data. Spatial Economic Analysis 15, 459–475. * Miller and Harrison (2013) Miller, J. W. and Harrison, M. T. (2013). A simple example of Dirichlet process mixture inconsistency for the number of components. In Advances in Neural Information Processing Systems, pages 199–206. * Mu et al. (2020) Mu, J., Liu, Q., Kuo, L., and Hu, G. (2020). Bayesian variable selection for Cox regression model with spatially varying coefficients with applications to Louisiana respiratory cancer data. arXiv preprint arXiv:2008.00615. * Neal (2000) Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics 9, 249–265. * Pitman (1995) Pitman, J. (1995). Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields 102, 145–158. * Rand (1971) Rand, W. M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association 66, 846–850. * SEER (2016) SEER, P. (2016). Public-use data (1973-2015). National Cancer Institute, DCCPS, Surveillance Research Program, Cancer Statistics Branch, released April 2016, based on the November 2015 submission. * Tobler (1970) Tobler, W. R. (1970). A computer movie simulating urban growth in the Detroit region. Economic Geography 46, 234–240. * Vavrek (2011) Vavrek, M. J. (2011). Fossil: palaeoecological and palaeogeographical analysis tools. Palaeontologia Electronica 14, 16. * Xue et al. (2020) Xue, Y., Schifano, E. D., and Hu, G. (2020). Geographically weighted Cox regression for prostate cancer survival data in Louisiana. Geographical Analysis 52, 570–587. * Zhang and Lawson (2011) Zhang, J. and Lawson, A. B. (2011). Bayesian parametric accelerated failure time spatial model and its application to prostate cancer. Journal of Applied Statistics 38, 591–603. * Zhao et al. (2020) Zhao, P., Yang, H.-C., Dey, D. K., and Hu, G. (2020). Bayesian spatial homogeneity pursuit regression for count value data. arXiv preprint arXiv:2002.06678. * Zhou et al. (2008) Zhou, H., Lawson, A. B., Hebert, J. R., Slate, E. H., and Hill, E. G. (2008). Joint spatial survival modeling for the age at diagnosis and the vital outcome of prostate cancer. Statistics in Medicine 27, 3612–3628.

## Supporting Information

Web Appendices, Tables, and Figures referenced in Sections 2-5 and R scripts for simulations and real data examples are available with this paper at the Biometrics website on Wiley Online Library. The R code for the computations of this paper is available at https://github.com/lj-geng/GWCRP.
2024-09-04T02:54:57.730130
2020-03-06T04:51:37
2003.03029
{ "authors": "Vitaly Bergelson and Andreu Ferr\\'e Moragues", "full_text_license": null, "license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/", "provenance": "arxiv-papers-0000.json.gz:26072", "submitter": "Andreu Ferr\\'e Moragues", "url": "https://arxiv.org/abs/2003.03029" }
arxiv-papers
# An ergodic correspondence principle, invariant means and applications

Vitaly Bergelson Department of Mathematics, Ohio State University, Columbus, OH 43210, USA <EMAIL_ADDRESS>and Andreu Ferré Moragues Department of Mathematics, Ohio State University, Columbus, OH 43210, USA <EMAIL_ADDRESS>

###### Abstract.

A theorem due to Hindman states that if $E$ is a subset of $\mathbb{N}$ with $d^{*}(E)>0$, where $d^{*}$ denotes the upper Banach density, then for any $\varepsilon>0$ there exists $N\in\mathbb{N}$ such that $d^{*}\left(\bigcup_{i=1}^{N}(E-i)\right)>1-\varepsilon$. Curiously, this result does not hold if one replaces the upper Banach density $d^{*}$ with the upper density $\bar{d}$. Originally proved combinatorially, Hindman’s theorem allows for a quick and easy proof using an _ergodic_ version of Furstenberg’s correspondence principle. In this paper, we establish a variant of the ergodic Furstenberg’s correspondence principle for general amenable (semi)-groups and obtain some new applications, which include a refinement and a generalization of Hindman’s theorem and a characterization of countable amenable minimally almost periodic groups.

## 1 Introduction

Many results in additive combinatorics are of the form: If $E\subseteq\mathbb{N}=\\{1,2,\dots\\}$ is a "large" set, then $E$ is "highly organized". For example, the celebrated Szemerédi theorem [Sz] states that if $E$ has positive upper density, $\bar{d}(E):=\limsup_{N\to\infty}\frac{|E\cap\\{1,\dots,N\\}|}{N}>0$, then $E$ is "AP-rich", meaning that $E$ contains arbitrarily long arithmetic progressions. An equivalent form of Szemerédi’s theorem is the following: if $E\subseteq\mathbb{N}$ has positive upper Banach density, i.e. $d^{*}(E):=\limsup_{N-M\to\infty}\frac{|E\cap\\{M,\dots,N-1\\}|}{N-M}>0$, then $E$ is AP-rich. (Indeed, one can show that both the $d^{*}$ and $\bar{d}$ versions of Szemerédi’s theorem are equivalent to the original "finitistic" version in [Sz]; see also Theorem 1.5 in [B1].)

In [F3], Furstenberg obtained a proof of Szemerédi’s theorem via the ergodic Szemerédi theorem (EST), which states that for any probability measure preserving system $(X,\mathcal{B},\mu,T)$, any $A\in\mathcal{B}$ with $\mu(A)>0$, and any $k\in\mathbb{N}$, there exists $n\in\mathbb{N}$ such that

$\mu(A\cap T^{-n}A\cap\dots\cap T^{-kn}A)>0.$

Furstenberg’s approach (see [F3]) to deriving Szemerédi’s theorem from his EST can be described as follows. Suppose that $\bar{d}(E)>0$. Viewing the $0$-$1$ valued sequence $\xi(m)=\mathbb{1}_{E}(m)$, $m\in\mathbb{Z}$, as an element of the symbolic space $\\{0,1\\}^{\mathbb{Z}}$, and denoting by $T$ the shift transformation $Tx(n)=x(n+1)$ for all $n\in\mathbb{Z}$, Furstenberg establishes the existence of a $T$-invariant measure $\mu$ on $\\{0,1\\}^{\mathbb{Z}}$ as follows. First, by a diagonalization argument, we can find a common subsequence $(N_{k})$ so that the following limit exists for a countable dense subset of $C(\\{0,1\\}^{\mathbb{Z}})$:

(1.1) $L(f):=\lim_{k\to\infty}\frac{1}{N_{k}}\sum_{n=1}^{N_{k}}f(T^{n}\xi).$

Applying a standard approximation argument, we see that formula (1.1) holds for all $f\in C(\\{0,1\\}^{\mathbb{Z}})$. Now, $L$ is a positive, normalized functional, and so by Riesz’s representation theorem, there is a Borel probability measure $\mu$ on $\\{0,1\\}^{\mathbb{Z}}$ such that $L(f)=\int_{\\{0,1\\}^{\mathbb{Z}}}f\ d\mu$. Let $X:=\overline{\\{T^{n}\xi:n\in\mathbb{Z}\\}}$ be the orbit closure of $\xi$. Observe that $\mu$ is supported on $X$.
Letting $A:=X\cap\\{x\in\\{0,1\\}^{\mathbb{Z}}:x(0)=1\\}$, we get $\mu(A)=\int_{\\{0,1\\}^{\mathbb{Z}}}\varphi\ d\mu=L(\varphi)=\bar{d}(E)>0$, for $\varphi(x):=x(0)$. (Note that the fact $L(\varphi)=\bar{d}(E)$ follows from (1.1)). By the EST there exists some $n\in\mathbb{N}$ such that $\mu(A\cap T^{-n}A\cap\dots\cap T^{-kn}A)>0$. Then, for any $x\in A\cap T^{-n}A\cap\dots\cap T^{-kn}A$ we have $x(0)=1,x(n)=1,\dots,x(kn)=1$. Since $x\in X$, we can choose $l\in\mathbb{N}$ such that $T^{l}\xi$ and $x$ are as close as we wish. This implies that for some $l\in\mathbb{N}$, $\mathbb{1}_{E}(l)=\mathbb{1}_{E}(l+n)=\dots=\mathbb{1}_{E}(l+kn)=1$ and hence $E$ contains the arithmetic progression $\\{l,l+n,\dots,l+kn\\}$ of length $k+1$.

One can check that the functional $L$ satisfies the identity

(1.2) $\lim_{k\to\infty}\frac{|E\cap(E-n)\cap\dots\cap(E-kn)\cap[1,N_{k}]|}{N_{k}}=L(\varphi\cdot T^{n}\varphi\cdot\dotso\cdot T^{kn}\varphi)=\int_{\\{0,1\\}^{\mathbb{Z}}}\mathbb{1}_{A}(x)\cdot\mathbb{1}_{A}(T^{n}x)\cdot\dotso\cdot\mathbb{1}_{A}(T^{kn}x)\ d\mu$

(see [F3], p. 210). Now, from (1.2) we can derive the inequality

(1.3) $\bar{d}(E\cap(E-n)\cap\dots\cap(E-kn))\geq\mu(A\cap T^{-n}A\cap\dots\cap T^{-kn}A).$

The foregoing discussion leads to the more general principle:

###### Theorem 1.1 (Furstenberg’s correspondence principle for $\bar{d}$. (cf. [B1], Theorem 1.1)).

Let $E\subseteq\mathbb{N}$ with $\bar{d}(E)>0$. Then, there is an invertible measure preserving system $(X,\mathcal{B},\mu,T)$ and a set $A\in\mathcal{B}$ with $\mu(A)=\bar{d}(E)>0$ satisfying

(1.4) $\bar{d}(E\cap(E-h_{1})\cap\dotso\cap(E-h_{r}))\geq\mu(A\cap T^{-h_{1}}A\cap\dotso\cap T^{-h_{r}}A)$

for all $r\in\mathbb{N}$ and $h_{1},\dots,h_{r}\in\mathbb{N}$.

###### Remark 1.2.

All proofs of the above result known to the authors also give a version of (1.4) for unions:

(1.5) $\bar{d}(E\cup(E-h_{1})\cup\dots\cup(E-h_{r}))\geq\mu(A\cup T^{-h_{1}}A\cup\dots\cup T^{-h_{r}}A)$

for all $r\in\mathbb{N}$ and $h_{1},\dots,h_{r}\in\mathbb{N}$. See the discussion and ramifications of this fact below.

A priori, one could expect that, given $E$ with $\bar{d}(E)>0$, it is possible to judiciously choose the system $(X,\mathcal{B},\mu,T)$ to be ergodic in the above construction. It turns out that this is not always the case. To see this, we will invoke the following interesting result of Hindman:

###### Theorem 1.3 (Hindman’s covering theorem [H2]).

Let $E\subseteq\mathbb{N}$ be a set with $d^{*}(E)>0$. Then, for every $\varepsilon>0$ there is some $N\in\mathbb{N}$ such that

(1.6) $d^{*}\left(\bigcup_{i=1}^{N}(E-i)\right)>1-\varepsilon.$

Curiously enough, Theorem 1.3 fails to be true if one replaces $d^{*}$ with $\bar{d}$. Consider, for example, the following set $E\subseteq\mathbb{N}$ provided by Hindman in [H2]:

(1.7) $E:=\bigcup_{n\in\mathbb{N}}[2^{2n},2^{2n+1}).$

Then $\bar{d}(E)=\frac{2}{3}$, and one can check that, moreover, $\bar{d}(\bigcup_{i=0}^{N}(E-i))=\frac{2}{3}$ for all $N\in\mathbb{N}$. It follows that for this set $E$ any measure preserving system $(X,\mathcal{B},\mu,T)$ satisfying (1.5) cannot be ergodic. The reason is that for an ergodic system $(X,\mathcal{B},\mu,T)$, if $\mu(A)>0$, then $\lim_{N\to\infty}\mu\left(\bigcup_{i=1}^{N}T^{-i}A\right)=1$.
Assuming the inequality (1.5) is valid for the system in question, we would have (1.8) $\bar{d}\left(\bigcup_{i=1}^{N}(E-i)\right)\geq\mu\left(\bigcup_{i=1}^{N}T^{-i}A\right),$ and this cannot hold in our example since the left hand side is bounded away from $1$. However, by appropriately amplifying the proof of Theorem 1.1 discussed above, one can obtain the following ergodic variant thereof. This amplification will be carried out in the appropriate generality in Sections 2 and 6. ###### Theorem 1.4 (Ergodic Furstenberg’s correspondence principle. (cf. [BHK], Proposition 3.1)). Let $E\subseteq\mathbb{N}$ be such that $d^{*}(E)>0$. Then, there is an ergodic measure preserving system $(X,\mathcal{B},\mu,T)$ and a set $A\in\mathcal{B}$ with $\mu(A)=d^{*}(E)>0$ satisfying (1.9) $d^{*}(E\cap(E-h_{1})\cap\dotso\cap(E-h_{r}))\geq\mu(A\cap T^{-h_{1}}A\cap\dotso\cap T^{-h_{r}}A)$ for all $r\in\mathbb{N}$ and $h_{1},\dots,h_{r}\in\mathbb{N}$. Similarly to the situation with Theorem 1.1, one can show that a version of (1.9) holds for unions, i.e. (1.10) $d^{*}(E\cup(E-h_{1})\cup\dots\cup(E-h_{r}))\geq\mu(A\cup T^{-h_{1}}A\cup\dots\cup T^{-h_{r}}A)$ for all $r\in\mathbb{N}$ and $h_{1},\dots,h_{r}\in\mathbb{N}$. This observation was made and utilized in [BBF] (cf. [BBF] Prop. 2.3). The functions $\bar{d}$ and $d^{*}$ have very similar properties. For example, $\bar{d}$ and $d^{*}$ satisfy $d^{*}(\mathbb{N})=1$ and $\bar{d}(\mathbb{N})=1$ and are shift-invariant (i.e. $\bar{d}(E-n)=\bar{d}(E)$ for all $n\in\mathbb{N}$ and $d^{*}(E-n)=d^{*}(E)$ for all $n\in\mathbb{N}$), which allows one to view $\mathbb{N}$ as a generalized probability space with either $\bar{d}$ or $d^{*}$ serving as an (admittedly vague) substitute for the probability measure. Moreover, either of Theorems 1.1 and 1.4 can be used to derive Szemerédi’s theorem from EST. While in certain investigations (see [HF1], [HF2], [Fra], [GLR]), it is of interest to consider the non-ergodic measure preserving systems corresponding to sets satisfying $\bar{d}(E)>0$ (or more generally, sets with the property $\bar{d}_{(I_{N})}(E):=\limsup_{N\to\infty}\frac{|E\cap I_{N}|}{|I_{N}|}>0$, where $(I_{N})$ is a sequence of intervals with increasing length), in some other situations, $d^{*}$ allows for stronger/sharper results. One such application was obtained in [BHK]. Also, observe that Theorem 1.4 immediately implies (via (1.10)) Theorem 1.3. The ergodic approach to Theorem 1.3 has two additional advantages. First, it will allow us to characterize sequences $(n_{k})_{k\in\mathbb{N}}$ with the "Hindman property", i.e., sequences $(n_{k})_{k\in\mathbb{N}}$ such that for any $E\subseteq\mathbb{Z}$ with $d^{*}(E)>0$ one has that for all $\varepsilon>0$ there is some $N\in\mathbb{N}$ such that (1.11) $d^{*}\left(\bigcup_{k=1}^{N}(E-n_{k})\right)>1-\varepsilon,$ (see (1.6)). Second, the robustness of the ergodic approach will enable us to formulate and prove with ease a Hindman-like result for any amenable group. (A different proof of this result was obtained in [BBF] through combinatorial methods). While the fact that Furstenberg’s correspondence principle works equally well for unions was not observed/utilized in the early papers in ergodic Ramsey theory, in hindsight this observation is quite natural if one takes into account the algebraic nature of this principle. The versatility of Furstenberg’s correspondence principle can be perhaps best of all perceived via Gelfand’s representation theorem. 
The possibility of using Gelfand’s representation theorem was mentioned in [F4] and was explicitly implemented in [B1], [BF], [BCRZ], and [B2]. See also the appendix to Section I in [F2]. We would like to point out that the correspondence principle was not born with the ergodic theoretical proof of Szemerédi’s theorem; indeed, a form of it appears already in Furstenberg’s thesis [F1], where it was used as a tool to reconstruct a stationary process from its past. More concretely, the seminal idea which was put to action in [F1] was to replace the approximate measure space $\mathbb{Z}$ together with the density preserving transformation $x\mapsto x+1$ by a genuine measure preserving system, namely: the orbit closure of the sequence $(\mathbb{1}_{E}(n))_{n\in\mathbb{Z}}$ in $\\{0,1\\}^{\mathbb{Z}}$, where $\mathbb{1}_{E}$ corresponds to the given time series. It follows from Gelfand’s representation theorem that any commutative, unital, countably generated $C^{*}$-algebra $\mathcal{A}$ is topologically and algebraically isomorphic to the algebra of continuous functions $C(X)$, where $X$ is a compact metric space. In our situation, such a $C^{*}$-algebra can be naturally generated by the family $(\mathbb{1}_{E-n})_{n\in\mathbb{Z}}$, where $E\subseteq\mathbb{Z}$ satisfies $d_{(I_{N})}(E):=\lim_{N\to\infty}\frac{|E\cap I_{N}|}{|I_{N}|}>0$ for some sequence of intervals $(I_{N})$ with $|I_{N}|\to\infty$. Let us denote this $C^{*}$-algebra by $\mathcal{A}_{E}$. One can then refine $(I_{N})$ to obtain a subsequence $(I_{N_{k}})$ so that for any set $F$ in the Boolean algebra $\mathcal{B}_{E}$ generated by the family $(E-n)_{n\in\mathbb{Z}}$, $d_{(I_{N_{k}})}(F)$ is well defined. In other words, $d_{(I_{N_{k}})}(\cdot)$ is a _shift-invariant density_ on $\mathcal{B}_{E}$. In turn, $d_{(I_{N_{k}})}(\cdot)$ induces a shift-invariant mean on $\mathcal{A}_{E}$, i.e., a positive functional $L:\mathcal{A}_{E}\rightarrow\mathbb{C}$ such that $L(1)=1$ and for any $F\in\mathcal{B}_{E}$, $L(\mathbb{1}_{F})=d_{(I_{N_{k}})}(F)$. By Gelfand’s representation theorem, there exists a compact metric space $X$ such that $\mathcal{A}_{E}$ is algebraically and topologically isomorphic to $C(X)$. Let $\tilde{L}:C(X)\rightarrow\mathbb{C}$ be the linear functional on $C(X)$ induced by $L$. By Riesz’s representation theorem, the functional $\tilde{L}$ is given by a Borel probability measure on $X$. Let $\Gamma:\mathcal{A}_{E}\rightarrow C(X)$ denote the Gelfand isomorphism. Then, for all $\varphi\in\mathcal{A}_{E}$ we have $L(\varphi)=\tilde{L}(\Gamma(\varphi))=\int_{X}\Gamma(\varphi)\ d\mu.$ Since $\mathbb{1}_{E}$ is an idempotent, and since $\Gamma$ is, in particular, an algebraic isomorphism, the image, $\Gamma(\mathbb{1}_{E})$, is again an idempotent, and hence of the form $\mathbb{1}_{A}$, where $A\in\textrm{Borel}(X)$ and satisfies $\mu(A)=\tilde{L}(\mathbb{1}_{A})=\tilde{L}(\Gamma(\mathbb{1}_{E}))=d_{(I_{N})}(E)$. The $L$-invariant shift operator given by $\varphi(n)\mapsto\varphi(n+1)$, $\varphi\in\mathcal{A}_{E}$ induces a $\mathbb{Z}$-action on $C(X)$, which is, by a theorem of Banach, induced by a measure preserving homeomorphism $T:X\rightarrow X$. Let $\varphi=\mathbb{1}_{E-h_{1}\cap\dots\cap E-h_{r}}=\prod_{i=1}^{r}\mathbb{1}_{E-h_{i}}$, $h_{1},\dots,h_{r}\in\mathbb{Z}$. Since $\mathcal{A}_{E}$ is an algebra, it is clear that $\varphi\in\mathcal{A}_{E}$. 
This leads us to the equality

(1.12) $d_{(I_{N_{k}})}(E\cap E-h_{1}\cap\dots\cap E-h_{r})=L(\mathbb{1}_{E\cap E-h_{1}\cap\dots\cap E-h_{r}})=\mu(A\cap T^{-h_{1}}A\cap\dots\cap T^{-h_{r}}A).$

Linearity and the inclusion-exclusion principle imply that functions of the form $\varphi=\mathbb{1}_{E\cup E-h_{1}\cup\dots\cup E-h_{r}}$ are also in $\mathcal{A}_{E}$. Since $\Gamma$ is an algebraic isomorphism, we see that a formula similar to (1.12) holds for unions as well:

(1.13) $d_{(I_{N_{k}})}(E\cup E-h_{1}\cup\dots\cup E-h_{r})=L(\mathbb{1}_{E\cup E-h_{1}\cup\dots\cup E-h_{r}})=\mu(A\cup T^{-h_{1}}A\cup\dots\cup T^{-h_{r}}A).$

As we already mentioned above, all known proofs of Furstenberg’s correspondence principle are such that they allow for (1.5) and (1.10). We explained above how one can get (1.5) with the help of Gelfand’s transform, but, as it will be seen in Sections 2 and 6 (where we will prove and juxtapose general versions of Theorems 1.1 and 1.4 in the setup of amenable groups), the other methods also have an algebraic aspect which is adequate for our purposes. In any case, whatever method is used, it still has to be properly modified and amplified to allow for ergodicity. The approach that we choose in Section 2 uses symbolic dynamics and goes along the lines of the proof of the correspondence principle in [F3], which is sketched above. As we will see, it allows one to conveniently localize the place in the proof where the amplification leading to ergodicity has to be made.

At this point, we would like to make a comment which explains why utilizing $d^{*}$ allows one to establish the ergodic version of Furstenberg’s correspondence principle. The key advantage of dealing with $d^{*}$ is that, in the course of proving the correspondence principle, one can keep changing the sequence of intervals along which the averaging scheme is applied, which is essential for the applicability of the ergodic decomposition. On the other hand, the proof of the $\bar{d}$ version of Furstenberg’s correspondence principle is based on refining the averaging scheme given by the sequence of intervals along which $\bar{d}_{(I_{N})}(E)>0$. Theorems 1.1 and 1.4 will be generalized, juxtaposed and exploited in Section 2.

The structure of the paper is as follows. In Section 2 we give a proof of a general version of Theorem 1.4, which works for any combination of unions, intersections and complements of shifts of $E$ for countable amenable groups. Moreover, we also give a proof of a general version of Theorem 1.1, comparing the two methods of proof, and pinpointing what exactly in the proof allows us to have ergodicity. In Section 3 we give a quick proof of a general form of Hindman’s Theorem 1.3 for countable cancellative amenable semigroups. We also generalize the example (1.7) to countable abelian groups and to finitely generated virtually nilpotent groups, and prove a few more combinatorial results which make use of the "ergodic" nature of $d^{*}$. Section 4 is devoted to the characterization of sequences $(n_{k})$ for which (1.11) holds and is followed by Section 5, where we give a characterization of countable amenable WM groups via the "Hindman property" with the help of a general version of Theorem 1.4 proved in Section 2. A group $G$ is called _weakly mixing_ or WM if any measure preserving ergodic action of $G$ on a probability space is automatically weakly mixing (such groups also appear in the literature under the name _minimally almost periodic groups_).
For example, $A(\mathbb{N})$, the group of finite even permutations of $\mathbb{N}$, is a countable amenable WM group. It is worth noting that in [BCRZ] a different characterization of WM groups is obtained via another result due to Hindman, namely his "finite sums" theorem [H1]. Lastly, in Section 6 we establish a general form of an ergodic Furstenberg correspondence principle for discrete amenable semigroups and derive as a corollary a general form of Hindman’s theorem.

###### Remark 1.5.

Throughout this paper, all the measures used are normalized so that $\mu(X)=1$.

## 2\. Furstenberg’s correspondence principle for amenable groups: $\bar{d}$ and $d^{*}$ versions

The goal of this section is two-fold. First, we will formulate and prove "amenable" versions of Theorems 1.1 and 1.4 (Theorems 2.3 and 2.8) that (i) encompass not only intersections but unions of sets and their complements, and (ii) are valid for general discrete countable amenable groups. Second, we will pinpoint the distinction between $\bar{d}$ and $d^{*}$ which allows for the stronger, ergodic version of Furstenberg’s correspondence principle (see Theorem 2.8 below).

A definition of amenability which is convenient for our purposes uses the notion of Følner sequence. A sequence $(F_{N})$ of finite non-empty subsets of a countable group $G$ is a _(left) Følner sequence_ if $\lim_{N\to\infty}\frac{|F_{N}\Delta gF_{N}|}{|F_{N}|}=0$ for all $g\in G$. A countable group $G$ is _amenable_ if it admits a (left) Følner sequence. (One can show that every amenable group $G$ also admits right- and indeed two-sided analogues of left Følner sequences; see Corollary 5.3 in [N]. Throughout this paper we deal only with left Følner sequences and will routinely omit the adjective "left".)

To facilitate the discussion, we will first present the versions of Theorems 1.1 and 1.4 (see Theorems 2.3 and 2.8) for general countable amenable groups. The proofs are in essence the same as those of Theorems 1.1 and 1.4, but we will need this generality for the applications in the forthcoming sections. We will then juxtapose the proofs of Theorems 2.3 and 2.8, which will allow us to explain what exactly in the proof of Theorem 2.8 leads to the ergodicity of the system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$. To formulate Theorems 2.3 and 2.8 we need a few definitions:

###### Definition 2.1.

Let $E\subseteq G$ and let $(F_{N})$ be a Følner sequence. We denote by $\bar{d}_{(F_{N})}(E)$ the upper density of the set $E$ along $(F_{N})$. This notion of largeness is given by the formula $\bar{d}_{(F_{N})}(E):=\limsup_{N\to\infty}\frac{|F_{N}\cap E|}{|F_{N}|}.$

We are now in a position to define upper Banach density in this more general context:

###### Definition 2.2.

Let $E\subseteq G$. We denote by $d^{*}(E)$ the upper Banach density of the set $E$, which is given by $d^{*}(E):=\sup\\{\bar{d}_{(F_{N})}(E):(F_{N})\textrm{ is a F\o lner sequence}\\}.$

We begin with a short proof (based on the idea of the proof of Theorem 1.1 in [F3] and [F4]; see also [FKO]) of a generalization of Theorem 1.1 for countable amenable groups. (We could also use for our goals the proofs of the amenable version of Theorem 1.1 given in [BMc] (Theorem 2.1) and [B2] (Theorem 4.11).) Note that Theorem 1.1 corresponds to the special case $G:=\mathbb{Z}$ and $F_{N}:=[a_{N},b_{N}]$ with $b_{N}-a_{N}\to\infty$. In what follows we will use the notation $A^{1}=A$ and $A^{0}=A^{c}$.
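To connect Definitions 2.1 and 2.2 with the notions used in the Introduction, consider $G=(\mathbb{Z},+)$ and $F_{N}:=\\{1,\dots,N\\}$. Since $|F_{N}\Delta(g+F_{N})|=2|g|$ for all $|g|\leq N$, $(F_{N})$ is a Følner sequence, and $\bar{d}_{(F_{N})}$ is the classical upper density $\bar{d}$; taking the supremum in Definition 2.2 over all sequences of shifted intervals $([a_{N},b_{N}])$ with $b_{N}-a_{N}\to\infty$ recovers the upper Banach density $d^{*}$ from the Introduction. As for the notation just introduced, an expression such as $g_{1}^{-1}E^{1}\cap g_{2}^{-1}E^{0}$ in the theorem below stands for $g_{1}^{-1}E\cap g_{2}^{-1}E^{c}$.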
###### Theorem 2.3 (Furstenberg Correspondence Principle for countable amenable groups, $\bar{d}_{(F_{N})}$ version).

Let $(F_{N})$ be a Følner sequence in a countable amenable group $G$ and let $E$ be a subset of $G$ with $\bar{d}_{(F_{N})}(E)>0$. Then there exist a probability measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ and a set $A\in\mathcal{B}$ with $\mu(A)=\bar{d}_{(F_{N})}(E)$ such that

(2.1) $\mu((T_{g_{1}})^{-1}A^{w_{1}}\star\dots\star(T_{g_{k}})^{-1}A^{w_{k}})\leq\bar{d}_{(F_{N})}(g_{1}^{-1}E^{w_{1}}\star\dots\star g_{k}^{-1}E^{w_{k}}),$

for all $k\in\mathbb{N}$, all $\\{g_{1},\dots,g_{k}\\}\subset G$ and all $(w_{1},\dots,w_{k})\in\\{0,1\\}^{k}$, and where each of the stars denotes either union or intersection with the understanding that

* (i) for all $1\leq i\leq k-1$, the operation represented by $\star$ which stands between $E^{w_{i}}$ and $E^{w_{i+1}}$ is the same as the operation appearing between $A^{w_{i}}$ and $A^{w_{i+1}}$.

* (ii) the choices of parentheses which are needed to make the expressions on both sides of formula (2.1) well defined also match.

For example, we have $\mu(((T_{g_{1}})^{-1}A\cup(T_{g_{2}})^{-1}A^{c})\cap(T_{g_{3}})^{-1}A)\leq\bar{d}_{(F_{N})}((g_{1}^{-1}E\cup g_{2}^{-1}E^{c})\cap g_{3}^{-1}E)$ and $\mu((T_{g_{1}})^{-1}A\cup((T_{g_{2}})^{-1}A^{c}\cap(T_{g_{3}})^{-1}A))\leq\bar{d}_{(F_{N})}(g_{1}^{-1}E\cup(g_{2}^{-1}E^{c}\cap g_{3}^{-1}E))$.

###### Proof.

Let $X=\\{0,1\\}^{G}$ (viewed as a compact metric space with the usual product topology). Let $(T_{g})_{g\in G}$ be the action of $G$ on $X$ by homeomorphisms defined by the formula $(T_{g}x)_{g_{0}}=x_{gg_{0}}$ for all $g,g_{0}\in G$. Define $\omega\in X$ by setting $\omega(g)=1$ if $g\in E$ and $\omega(g)=0$ otherwise. Put $A=\\{x\in X:x(e)=1\\}$ (here and elsewhere, $e$ denotes the neutral element of the group $G$). Note that $A$ is a clopen set in $X$ (and hence $\mathbb{1}_{A}$ is a continuous function). Moreover, we have that $T_{g}\omega\in A$ if and only if $g\in E$. Let $(F_{N_{k}})$ be a subsequence such that $\bar{d}_{(F_{N})}(E)=\lim_{k\to\infty}\frac{|E\cap F_{N_{k}}|}{|F_{N_{k}}|}.$ Let $\mu$ be any weak* limit point of the sequence of measures $\frac{1}{|F_{N_{k}}|}\sum_{g\in F_{N_{k}}}\delta_{T_{g}\omega}.$ Moreover, since $C(X)$ is separable, there exists a further subsequence of $(F_{N_{k}})$, which we will, in order not to overload the notation, still denote by $(F_{N_{k}})$, such that, in the weak* topology, $\mu=\lim_{k\to\infty}\frac{1}{|F_{N_{k}}|}\sum_{g\in F_{N_{k}}}\delta_{T_{g}\omega}$. Clearly, $\mu$ is a $G$-invariant probability measure on $X$. We claim that $\mu(A)=\bar{d}_{(F_{N})}(E)$. Indeed, since $\mathbb{1}_{A}$ is a continuous function, we can write

(2.2) $\mu(A)=\int_{X}\mathbb{1}_{A}\ d\mu=\lim_{k\to\infty}\frac{1}{|F_{N_{k}}|}\sum_{g\in F_{N_{k}}}\mathbb{1}_{A}(T_{g}\omega)=d_{(F_{N_{k}})}(E)=\bar{d}_{(F_{N})}(E).$

Now let $g_{1},\dots,g_{k}\in G$ and consider the set $(T_{g_{1}})^{-1}A^{w_{1}}\star\dotso\star(T_{g_{k}})^{-1}A^{w_{k}}$, where the stars denote an arbitrary fixed choice of either union or intersection. This is a clopen set in $X$, so its indicator function is, again, continuous.
We have

$\mu\left((T_{g_{1}})^{-1}A^{w_{1}}\star\dotso\star(T_{g_{k}})^{-1}A^{w_{k}}\right)=\lim_{k\to\infty}\frac{1}{|F_{N_{k}}|}\sum_{g\in F_{N_{k}}}\mathbb{1}_{(T_{g_{1}})^{-1}A^{w_{1}}\star\dotso\star(T_{g_{k}})^{-1}A^{w_{k}}}(T_{g}\omega)=\lim_{k\to\infty}\frac{1}{|F_{N_{k}}|}|(g_{1}^{-1}E^{w_{1}}\star\dotso\star g_{k}^{-1}E^{w_{k}})\cap F_{N_{k}}|\leq\bar{d}_{(F_{N})}(g_{1}^{-1}E^{w_{1}}\star\dotso\star g_{k}^{-1}E^{w_{k}}).$

We are done. ∎

###### Remark 2.4.

Observe that when $G:=\mathbb{Z}$ and $F_{N}:=[a_{N},b_{N}]$ (where $b_{N}-a_{N}\to\infty$) inequality (2.1) implies the inequalities (1.4) and (1.5). As we saw in the Introduction with the help of Theorem 1.3, for some sets $E\subseteq\mathbb{Z}$ no system $(X,\mathcal{B},\mu,T)$ satisfying inequality (1.5) can be ergodic. The general version of Theorem 1.1 which also involves complements of sets allows us to arrive at the same conclusion without invoking Theorem 1.3. Indeed, we see that (for $G:=\mathbb{Z}$ and $F_{N}:=[1,N]$) inequality (2.1) implies

(2.3) $\bar{d}(E^{c}\cap(E-h))\geq\mu(A^{c}\cap T^{-h}A)$

for all $h\in\mathbb{Z}$. Take $E$ as in the Introduction (see (1.7)). Then $\mu(A)=\bar{d}(E)=\frac{2}{3}$, so $\mu(A^{c})>0$. However, one can easily check (see also Section 3) that $\bar{d}(E^{c}\cap(E-h))=0$ for all $h\in\mathbb{Z}$. This contradicts inequality (2.3).

###### Remark 2.5.

Let $G$ be a countable amenable group. Call a set $E$ a _Hindman set_ if there exists some Følner sequence $(F_{N})$ such that $0<\bar{d}_{(F_{N})}(E)<1\textrm{ and }\bar{d}_{(F_{N})}\left(\bigcup_{g\in F}g^{-1}E\right)<\frac{3}{4}$ for all finite sets $F\subseteq G$. One can show (see Proposition 3.3) that any countably infinite abelian group has a Hindman set. It is interesting to observe that if our countable amenable group $G$ admits a Hindman set, then any measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ satisfying inequalities (2.1) cannot be ergodic. This can be seen in two ways. First, using the special case of inequality (2.1) for unions, i.e.

(2.4) $\bar{d}_{(F_{N})}\left(\bigcup_{g\in F}g^{-1}E\right)\geq\mu\left(\bigcup_{g\in F}(T_{g})^{-1}A\right)\textrm{ for all finite sets }F\subseteq G,$

and arguing, as in the Introduction, that inequality (2.4) together with the fact that $\bar{d}_{(F_{N})}\left(\bigcup_{g\in F}g^{-1}E\right)<\frac{3}{4}$ for all finite sets $F\subseteq G$ contradicts ergodicity. Alternatively, we can use another special case of inequality (2.1), namely:

$\bar{d}_{(F_{N})}(E^{c}\cap g^{-1}E)\geq\mu(A^{c}\cap(T_{g})^{-1}A)\textrm{ for all }g\in G,$

together with the fact that, for the Hindman sets constructed in Section 3, $\bar{d}_{(F_{N})}(E^{c}\cap g^{-1}E)=0$ for all $g\in G$. (This discussion will be completed in Section 3.)

We now move to an ergodic version of Theorem 1.4 for general countable amenable groups. The ergodic amplification of Theorem 2.3 hinges on two additional tools: the ergodic decomposition for amenable group actions, and a result about quasi-generic points (see Proposition 2.7 below).

###### Definition 2.6.

Let $X$ be a compact metric space on which $G$ acts by homeomorphisms. Let $\mu$ be a $G$-invariant measure. We say that $x_{0}\in X$ is _quasi-generic_ for $\mu$ if there exists a Følner sequence $(F_{N})$ such that $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}f(T_{g}x_{0})=\int_{X}f\ d\mu$ for every $f\in C(X)$.

The following Proposition is an amenable version of Proposition 3.9 in [F5]. We include the proof for the reader’s convenience.

###### Proposition 2.7.
Let $(T_{g})_{g\in G}$ be an action of $G$ by homeomorphisms on a compact metric space $X$. Let $x_{0}\in X$, and let $Y:=\overline{\\{T_{g}x_{0}:g\in G\\}}$. Suppose that $\mu\in\mathcal{M}(Y)$ is an ergodic $G$-invariant measure. Then $x_{0}$ is quasi-generic for $\mu$.

###### Proof.

Since $\mu$ is an ergodic measure, it follows by the mean ergodic theorem (which can be proved in the same way as the classical mean ergodic theorem for isometries of Hilbert spaces, see, for example, Theorem 4.15 in [B2]), that for any Følner sequence $(F_{N})$ and any $f\in L^{2}(\mu)$

$\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}f(T_{g}x)=\int_{Y}f\ d\mu,$

where the convergence is in the $L^{2}(\mu)$-norm. Thus, there exists some subsequence $(F_{N_{k}})$ along which we have pointwise almost everywhere convergence for all $f$ in a countable dense subset of $C(Y)$, which in turn, by a simple triangle inequality argument, implies that for any $f\in C(Y)$ we have

$\lim_{k\to\infty}\frac{1}{|F_{N_{k}}|}\sum_{g\in F_{N_{k}}}f(T_{g}x)=\int_{Y}f\ d\mu,$

for a.e. $x\in Y$ (and so a.e. $x\in Y$ is quasi-generic for $\mu$). Let $x_{1}\in Y$ be quasi-generic for $\mu$ along the Følner sequence $(F_{N_{k}})$. Take a countable dense set $\\{f_{k}:k\in\mathbb{N}\\}$ in $C(Y)$ and, passing to a further subsequence of $(F_{N_{k}})$ if necessary, assume that

(2.5) $\left|\frac{1}{|F_{N_{k}}|}\sum_{g\in F_{N_{k}}}f_{j}(T_{g}x_{1})-\int_{Y}f_{j}\ d\mu\right|<\frac{1}{k}$

for all $k\in\mathbb{N}$ and all $j=1,\dots,k$. Since the functions $f_{j}$ are continuous, we can pick $(g_{k})\subseteq G$ such that the inequality (2.5) holds if we replace $x_{1}$ by $T_{g_{k}}x_{0}$, which after a change of variables in the sum yields

$\left|\frac{1}{|F_{N_{k}}|}\sum_{g\in F_{N_{k}}g_{k}}f_{j}(T_{g}x_{0})-\int_{Y}f_{j}\ d\mu\right|<\frac{1}{k},$

which implies that $\lim_{k\to\infty}\frac{1}{|F_{N_{k}}g_{k}|}\sum_{g\in F_{N_{k}}g_{k}}f(T_{g}x_{0})=\int_{Y}f\ d\mu$ for all $f\in C(Y)$. In other words, $x_{0}$ is a quasi-generic point for $\mu$ with respect to the Følner sequence $(F_{N_{k}}g_{k})$. Indeed, observe that for all $g\in G$ we have that $|gF_{N_{k}}g_{k}\Delta F_{N_{k}}g_{k}|=|gF_{N_{k}}\Delta F_{N_{k}}|$, which implies that $(F_{N_{k}}g_{k})$ is still a Følner sequence. ∎

We are now ready to formulate and prove the amenable ergodic Furstenberg Correspondence Principle, adapting arguments from both [BHK] and [BBF]. We note that Theorem 1.4 corresponds to the special case $G:=\mathbb{Z}$ in Theorem 2.8.

###### Theorem 2.8 (Ergodic Furstenberg Correspondence Principle for countable amenable groups; $d^{*}$ version).

Let $E$ be a subset of a countable amenable group $G$ with positive upper Banach density. Then there exists an ergodic probability measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ and a set $A\in\mathcal{B}$ with $\mu(A)=d^{*}(E)$ such that

(2.6) $\mu((T_{g_{1}})^{-1}A^{w_{1}}\star\dots\star(T_{g_{k}})^{-1}A^{w_{k}})\leq d^{*}(g_{1}^{-1}E^{w_{1}}\star\dots\star g_{k}^{-1}E^{w_{k}})$

for all $k\in\mathbb{N}$, all $\\{g_{1},\dots,g_{k}\\}\subset G$ and all $(w_{1},\dots,w_{k})\in\\{0,1\\}^{k}$, and where each of the stars denotes either union or intersection with the understanding that

* (i) for all $1\leq i\leq k-1$, the operation represented by $\star$ which stands between $E^{w_{i}}$ and $E^{w_{i+1}}$ is the same as the operation appearing between $A^{w_{i}}$ and $A^{w_{i+1}}$.

* (ii) the choices of parentheses which are needed to make the expressions on both sides of formula (2.6) well defined also match.

###### Proof.

We start as in the proof of Theorem 2.3.
Let $X=\\{0,1\\}^{G}$ and $(T_{g})_{g\in G}$ be the action of $G$ on $X$ by homeomorphisms defined by the formula $(T_{g}x)_{g_{0}}=x_{gg_{0}}$ for all $g,g_{0}\in G$. Define, as before, $\omega\in X$ by setting $\omega(g)=1$ if $g\in E$ and $\omega(g)=0$ otherwise. Let $Y=\overline{\\{T_{g}\omega:g\in G\\}}$. Put $A=\\{x\in Y:x(e)=1\\}$. Then, $A$ is a clopen set in $Y$. Moreover, we have that $T_{g}\omega\in A$ if and only if $g\in E$. Let $(F_{N})$ be a Følner sequence such that $d^{*}(E)=\bar{d}_{(F_{N})}(E)$. Let $\nu$ be any weak* limit point of the sequence of measures $\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\delta_{T_{g}\omega}.$ Then, as in Theorem 2.3, $\nu$ is a $G$-invariant probability measure on $Y$ such that $\nu(A)=d^{*}(E)$. By the ergodic decomposition theorem (see Theorem 4.2 in [V]), there is a probability measure $\lambda$ on the set of ergodic normalized measures $\mathcal{M}_{G}(Y)$ such that

(2.7) $\nu(C)=\int_{\mathcal{M}_{G}(Y)}\mu_{z}(C)\ d\lambda(z)$

for all $C\in\mathcal{B}$. It follows from the equality (2.7) that there exists some $z$ such that $\mu_{z}(A)\geq\nu(A)=d^{*}(E)$. We show that the measure $\mu_{z}$, which we will now denote by $\mu$, works. Let $g_{1},\dots,g_{k}\in G$ and observe that the set $(T_{g_{1}})^{-1}A^{w_{1}}\star\dots\star(T_{g_{k}})^{-1}A^{w_{k}}$ is a clopen set in $Y$, so its indicator function is continuous. By Proposition 2.7, there exists a Følner sequence $(G_{N})$, with respect to which the point $\omega$ is quasi-generic for the measure $\mu$. This implies that

(2.8) $\mu((T_{g_{1}})^{-1}A^{w_{1}}\star\dots\star(T_{g_{k}})^{-1}A^{w_{k}})=\lim_{N\to\infty}\frac{1}{|G_{N}|}\sum_{g\in G_{N}}\mathbb{1}_{T_{g_{1}}^{-1}A^{w_{1}}\star\dots\star T_{g_{k}}^{-1}A^{w_{k}}}(T_{g}\omega)=\lim_{N\to\infty}\frac{1}{|G_{N}|}\left|(g_{1}^{-1}E^{w_{1}}\star\dots\star g_{k}^{-1}E^{w_{k}})\cap G_{N}\right|\leq d^{*}(g_{1}^{-1}E^{w_{1}}\star\dots\star g_{k}^{-1}E^{w_{k}}).$

In particular, letting $k=1,w_{1}=1$ and $g_{1}=e$ in inequality (2.8) we obtain $\mu(A)\leq d^{*}(E)$. Recalling the previous inequality $\mu(A)\geq\nu(A)=d^{*}(E)$ we see that $\mu(A)=d^{*}(E)$, so we are done. ∎

We conclude this section with some comments on why the utilization of $d^{*}$ allows us to achieve in Theorem 2.8 the goal of ergodicity (whereas, as we saw above, there are sets $E$ for which Theorem 2.3 cannot guarantee it). In both proofs, we start with a Følner sequence $(F_{N})$ which satisfies $\bar{d}_{(F_{N})}(E)>0$ (in Theorem 2.3) or $d^{*}(E)=\bar{d}_{(F_{N})}(E)$ (in Theorem 2.8). Then we consider weak* limits of the sequences of measures $\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\delta_{T_{g}\omega}$ ($\omega=(\mathbb{1}_{E}(g))_{g\in G}$) along a subsequence of $(F_{N})$. The flexibility of $d^{*}$ comes in handy when, after invoking the ergodic decomposition in the proof of Theorem 2.8, we start using shifts $(F_{N_{k}}g_{k})$ of a relevant subsequence $(F_{N_{k}})$ in order to use quasi-genericity of $\omega$, as guaranteed by Proposition 2.7. Unlike the proof of Theorem 2.3, where we are restricted to a given Følner sequence $(F_{N})$ and its subsequences, in the proof of Theorem 2.8 we are conveniently passing to a different Følner sequence without affecting the value of $d^{*}$ for expressions of the form $g_{1}^{-1}E^{w_{1}}\star\dotso\star g_{k}^{-1}E^{w_{k}}$. (This property of $d^{*}$, when applied to unions, is also behind Hindman’s proof of Theorem 1.3.)
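To see in a simple example how the passage from $\bar{d}$ to $d^{*}$ dissolves the obstruction to ergodicity discussed in the Introduction, consider again Hindman’s set $E=\bigcup_{n\in\mathbb{N}}[2^{2n},2^{2n+1})$ from (1.7). Since $E$ contains intervals of unbounded length, taking the Følner sequence $F_{N}:=[2^{2N},2^{2N}+N)$ gives $\bar{d}_{(F_{N})}(E)=1$, so $d^{*}(E)=1$. Consequently, the ergodic system produced by Theorem 2.8 for this set satisfies $\mu(A)=1$, and the inequalities (2.6) (as well as the conclusion of Theorem 1.3) hold for trivial reasons: the conflict between (1.5) and ergodicity observed for $\bar{d}$ never arises, precisely because $d^{*}$ is computed along Følner sequences adapted to $E$ rather than along the fixed sequence of intervals $[1,N]$.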
## 3\. Hindman’s Theorem via ergodic theory and some other consequences of the ergodic version of Furstenberg’s correspondence principle

In this section we will give a short proof of a natural generalization of Hindman’s theorem to the context of countable amenable groups. Strictly speaking, Hindman’s theorem, as formulated in the introduction (Theorem 1.3), deals with $(\mathbb{N},+)$, which is a semigroup rather than a group, but the results of this section are easily adjusted so that they hold for countable cancellative amenable semigroups. See the explanatory remark at the end of the section (see also Section 6 for a general semigroup version). We also deduce other ergodic-flavored corollaries which provide additional evidence to our claim that $d^{*}$ is better suited for applications.

We begin by showing that a general version of Hindman’s theorem (Theorem 1.3 in the Introduction) is an immediate corollary of Theorem 2.8:

###### Theorem 3.1.

Let $G$ be a countable amenable group, and let $E$ be a subset of $G$ with $d^{*}(E)>0$. Then, for every $\varepsilon>0$, there exist $k\in\mathbb{N}$ and $g_{1},\dots,g_{k}\in G$ such that $d^{*}(g_{1}^{-1}E\cup\dots\cup g_{k}^{-1}E)>1-\varepsilon$.

###### Proof.

By Theorem 2.8 there exists an ergodic system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ and a set $A\in\mathcal{B}$ such that $\mu(A)=d^{*}(E)$ and for which $d^{*}(h_{1}^{-1}E\cup\dots\cup h_{k}^{-1}E)\geq\mu((T_{h_{1}})^{-1}A\cup\dots\cup(T_{h_{k}})^{-1}A)$ for all $k\in\mathbb{N}$ and $h_{1},\dots,h_{k}\in G$. By ergodicity of the action $(T_{g})_{g\in G}$ we have $\mu\left(\bigcup_{g\in G}(T_{g})^{-1}A\right)=1,$ and so, since $G$ is countable, the result follows. ∎

Another corollary that can be obtained from Theorem 2.8 is an ergodicity-like statement for the group $G$. Namely:

###### Corollary 3.2.

Let $G$ be a countable amenable group, and let $E\subseteq G$ be such that $d^{*}(E)\in(0,1)$. Then there exists some $g\in G$ such that $d^{*}(E^{c}\cap g^{-1}E)>0$.

###### Proof.

Let $E\subseteq G$ be such that $d^{*}(E)\in(0,1)$. By Theorem 2.8 we can find an ergodic measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ and a set $A\in\mathcal{B}$ such that $\mu(A)=d^{*}(E)$ and, for all $h\in G$, $d^{*}(E^{c}\cap h^{-1}E)\geq\mu(A^{c}\cap(T_{h})^{-1}A).$ Since $\mu(A)>0$ and $\mu(A^{c})>0$ and the action $(T_{g})_{g\in G}$ is ergodic, there is some $g\in G$ such that $\mu(A^{c}\cap(T_{g})^{-1}A)>0$, so we are done. ∎

We proceed to show that an example similar to (1.7) exists in the context of countable abelian groups.

###### Proposition 3.3.

Let $G$ be a countably infinite abelian group. Let $(F_{N})\subseteq G$ be a Følner sequence. Then, there exists a set $E\subseteq G$ such that

(3.1) $\bar{d}_{(F_{N})}(E)>0,\textrm{ and }\bar{d}_{(F_{N})}\left(\bigcup_{g\in F}g^{-1}E\right)<\frac{3}{4},$

for all finite sets $F\subseteq G$.

###### Proof.

We will show first that the assertion of the proposition holds for a particular Følner sequence, and then upgrade it to an arbitrary Følner sequence. Assume first that $G$ is finitely generated and let $\\{g_{1},\dots,g_{k}\\}$ generate $G$. Since an infinite finitely generated abelian group cannot be torsion, one of the generators, say $g_{1}$, has infinite order. Consider the Følner sequence $G_{N}:=\\{g_{1}^{a_{1}}\dotso g_{k}^{a_{k}}:0\leq a_{i}\leq 2^{2N}\textrm{ for }1\leq i\leq k\\}$. Define $A_{N}=\\{g_{1}^{a_{1}}\dotso g_{k}^{a_{k}}:2^{2N-1}\leq a_{1}\leq 2^{2N}\textrm{ and }0\leq a_{i}\leq 2^{2N}\textrm{ otherwise}\\}$ and put $E=\bigcup_{N\in\mathbb{N}}A_{N}$.
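(To get a feeling for why this works, consider the model case $G=\mathbb{Z}$, $k=1$, $g_{1}=1$, where the construction recovers a variant of Hindman’s example: $G_{N}=\\{0,1,\dots,2^{2N}\\}$ and $A_{N}=\\{a:2^{2N-1}\leq a\leq 2^{2N}\\}$, so that $\frac{|A_{N}\cap G_{N}|}{|G_{N}|}=\frac{2^{2N-1}+1}{2^{2N}+1}\to\frac{1}{2}$, while translating $E$ by the elements of a fixed finite set $F$ moves each block $A_{N}$ by a bounded amount and hence enlarges $\left|\left(\bigcup_{g\in F}g^{-1}E\right)\cap G_{N}\right|$ only negligibly.)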
It is not hard to check that the set $E$ satisfies (3.1). Assume now that $G$ is infinitely generated, and let $\\{g_{n}:n\in\mathbb{N}\\}$ be a set of generators for the group $G$. We distinguish two cases. In the first case, one of the generators, say $g_{1}$, has infinite order. Consider the Følner sequence

$G_{N}:=\\{g_{1}^{a_{1}}g_{2}^{a_{2}}\cdot\dotso\cdot g_{N}^{a_{N}}:0\leq a_{i}\leq 2^{2N},\textrm{ for }1\leq i\leq N\\},$

and set

(3.2) $A_{N}:=\\{g_{1}^{a_{1}}g_{2}^{a_{2}}\cdot\dotso\cdot g_{N}^{a_{N}}:2^{2N-1}\leq a_{1}\leq 2^{2N},\textrm{ and }0\leq a_{i}\leq 2^{2N}\textrm{ for }2\leq i\leq N\\}.$

Letting $E:=\bigcup_{N\in\mathbb{N}}A_{N}$, we get $0<\bar{d}_{(G_{N})}(E)=\bar{d}_{(G_{N})}\left(\bigcup_{g\in F}g^{-1}E\right)<\frac{3}{4}$ for all finite sets $F\subseteq G$. Now assume that all elements of $G$ have finite order and that the enumeration of $(g_{n})$ is such that $\textrm{ord}(g_{n+1})\geq\textrm{ord}(g_{n})$ for all $n\in\mathbb{N}$. Consider the Følner sequence $G_{N}:=\\{g_{1}^{a_{1}}\cdot\dotso\cdot g_{2^{2N}}^{a_{2^{2N}}}:0\leq a_{i}<\textrm{ord}(g_{i})\textrm{ for all }1\leq i\leq 2^{2N}\\}$ and the sets

$A_{N}:=\\{g_{1}^{a_{1}}\cdot\dotso\cdot g_{2^{2N}}^{a_{2^{2N}}}:0\leq a_{i}\leq b_{i}\textrm{ for all even }i\leq 2^{2N},\textrm{ and }a_{i}=0\textrm{ for all odd }i\leq 2^{2N}\\},$

where $b_{i}$ is chosen so that $\frac{\left|(\bigcup_{j=1}^{i}A_{j})\cap G_{i}\right|}{|G_{i}|}\in(\frac{1}{4},\frac{1}{2})$. Letting $E:=\bigcup_{N\in\mathbb{N}}A_{N}$, we get that $\bar{d}_{(G_{N})}(E)\geq\frac{1}{4},\quad\textrm{ and }\quad\bar{d}_{(G_{N})}\left(\bigcup_{g\in F}g^{-1}E\right)<\frac{3}{4},$ for all finite subsets $F\subseteq G$.

Now let $(F_{N})$ be an arbitrary Følner sequence. We indicate next how to construct the set $E$ that satisfies (3.1) for $(F_{N})$. To do this, we need the following general fact, which follows from Lemma 4.1 in [DHZ] (we thank Tomasz Downarowicz for providing this information).

###### Fact 3.4.

Let $\varepsilon>0$. We say that a family of finite sets $\\{A_{n}:n\in\mathbb{N}\\}$ is $\varepsilon$-disjoint if each set $A_{n}$ admits a subset $A_{n}^{\prime}$ such that $|A_{n}^{\prime}|\geq(1-\varepsilon)|A_{n}|$, and such that the new family $\\{A_{n}^{\prime}:n\in\mathbb{N}\\}$ is disjoint. Fix a Følner sequence $(F_{N})$ and let $(\varepsilon_{N})$ be a sequence of positive numbers with $\varepsilon_{N}\to 0$ as $N\to\infty$. Given another Følner sequence $(G_{N})$, we can find a Følner sequence $(G_{N}^{\prime})$ that is equivalent to $(G_{N})$ (i.e., satisfying $\frac{|G_{N}\Delta G_{N}^{\prime}|}{|G_{N}|}\to 0$ as $N\to\infty$) such that for each $N\in\mathbb{N}$, $G_{N}^{\prime}$ is a union of an $\varepsilon_{N}$-disjoint family of sets of the form $F_{N}g$, $g\in G$.

It is not hard to see that Fact 3.4 implies that the set $E$ in question can be constructed with the help of the same argument utilized above for the special Følner sequence $(G_{N})$. ∎

###### Remark 3.5.

For given $0<a\leq b<1$, one can construct sets $E\subseteq\mathbb{Z}$ with $\bar{d}(E)=a$ such that $\bar{d}(\bigcup_{i=1}^{N}(E-i))=b$, for all large enough $N$. This can be done as follows. Start with a sequence of disjoint intervals $[a_{n},b_{n}]$ such that the set $A:=\bigcup_{n\in\mathbb{N}}[a_{n},b_{n}]$ satisfies $\bar{d}(A)=b$ and $\bar{d}(\bigcup_{i=1}^{N}(A-i))=b$ for all $N\in\mathbb{N}$ (this can be done by imitating Hindman’s construction for $b=\frac{2}{3}$). Now let $\beta=\frac{b}{a}$ and let $E:=A\cap\\{\lfloor n\beta\rfloor:n\in\mathbb{N}\\}$.
One easily checks that $E$ satisfies the required properties.

###### Remark 3.6.

It is worth noting that the set $E$ constructed in Proposition 3.3 also satisfies (as in Hindman’s original example)

(3.3) $\bar{d}_{(F_{N})}(E^{c}\cap g^{-1}E)=0$

for all $g\in G$.

It is of interest to know whether the phenomenon exhibited in Proposition 3.3 takes place in non-commutative amenable groups. We cannot show this in complete generality, but for virtually nilpotent groups, we have the following theorem:

###### Theorem 3.7.

Let $G$ be a finitely generated, countably infinite virtually nilpotent group. Let $(F_{N})$ be a Følner sequence. Then, there exists a set $E\subseteq G$ with

(3.4) $\bar{d}_{(F_{N})}(E)>0\textrm{ and }\bar{d}_{(F_{N})}\left(\bigcup_{g\in B}g^{-1}E\right)<\frac{3}{4},$

for all finite subsets $B\subseteq G$.

###### Sketch of the proof.

First, by Fact 3.4, it suffices to show the result for a particular Følner sequence $(F_{N})$ of our choosing. Let $F:=\\{x_{1},\dots,x_{k}\\}$ be a set of generators for $G$. Then, the sequence $\left(\bigcup_{j=1}^{n}F^{j}\right)$ (i.e. words of length at most $n$ generated by $F$) is a Følner sequence of polynomial growth. Let $F_{N}:=\bigcup_{j=1}^{2^{2N}}F^{j}$, and let $A_{N}$ be the set of words in $F$ of length between $2^{2N-1}$ and $2^{2N}$. Then the set $E:=\bigcup_{N\in\mathbb{N}}A_{N}$ satisfies (3.4). ∎

We will now describe another interesting application of Theorem 2.8.

###### Definition 3.8.

Let $E\subseteq G$. We denote by $\mathcal{A}_{E}$ the algebra of sets generated by the shifts $g^{-1}E$, $g\in G$, of the set $E$ with the help of the operations of union, intersection and complement.

###### Theorem 3.9.

Let $E\subseteq G$ be such that $d^{*}(E)>0$. Then, there exists a Følner sequence $(G_{K})_{K\in\mathbb{N}}$ in $G$ such that for all Følner sequences $(F_{N})_{N\in\mathbb{N}}$ and for all $E_{1},E_{2}\in\mathcal{A}_{E}$, we have

(3.5) $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}d_{(G_{K})}(E_{1}\cap g^{-1}E_{2})=d_{(G_{K})}(E_{1})d_{(G_{K})}(E_{2}).$

Moreover, if we let $E_{1}=E_{2}=E$ in (3.5), we get

(3.6) $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}d_{(G_{K})}(E\cap g^{-1}E)=d_{(G_{K})}(E)^{2}=d^{*}(E)^{2}.$

###### Proof.

Let $E\subseteq G$ with $d^{*}(E)>0$ and $E_{1},E_{2}\in\mathcal{A}_{E}$. By Theorem 2.8 there exists an ergodic measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ and a set $A\in\mathcal{B}$ with $\mu(A)=d^{*}(E)$ satisfying (2.6). Let $A_{1},A_{2}\in\mathcal{B}$ be the sets corresponding to the sets $E_{1},E_{2}$. By the proof of Theorem 2.8, the functions $\mathbb{1}_{A_{1}}$ and $\mathbb{1}_{A_{2}}$ are continuous. Let $(G_{K})$ be a Følner sequence with respect to which $\omega=(\mathbb{1}_{E}(g))_{g\in G}$ is quasi-generic for $\mu$, i.e. $\mu=\textrm{w*-}\lim_{K\to\infty}\frac{1}{|G_{K}|}\sum_{g\in G_{K}}\delta_{T_{g}\omega}$ (such a sequence exists by Proposition 2.7). Thus, for every $h\in G$ we can write

(3.7) $\mu(A_{1}\cap(T_{h})^{-1}A_{2})=\lim_{K\to\infty}\frac{1}{|G_{K}|}\sum_{g\in G_{K}}\mathbb{1}_{A_{1}\cap(T_{h})^{-1}A_{2}}(T_{g}\omega)=d_{(G_{K})}(E_{1}\cap h^{-1}E_{2}).$

Taking an average over $h\in G$ in (3.7) along any left Følner sequence $(F_{N})$ in $G$ immediately yields (3.5), given that the action $(T_{g})_{g\in G}$ is ergodic. By construction of $\mu$, we have that $\mu(A)=d_{(G_{K})}(E)=d^{*}(E)$, whence (3.6) also follows. ∎

We remark that (3.5) and (3.6) do not hold for arbitrary Følner sequences $(G_{K})$: let $E$ be as in (1.7), take $E_{1}=E_{2}=E$ and put $G_{K}=[1,2^{2K}]$. Nonetheless, we have the following Proposition:

###### Proposition 3.10.
Let $E\subseteq G$. Let $(G_{N})$ be a Følner sequence in $G$. Then, there exists a Følner subsequence $(G_{N_{k}})$ such that for all Følner sequences $(F_{N})$ and all sets $F\in\mathcal{A}_{E}$ we have

(3.8) $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}d_{(G_{N_{k}})}(F\cap g^{-1}F)\geq d_{(G_{N_{k}})}(F)^{2}.$

Moreover, for $F=E$ we also have the following variant of (3.8), valid along the original sequence $(G_{N})$:

(3.9) $\liminf_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\bar{d}_{(G_{N})}(E\cap g^{-1}E)\geq\bar{d}_{(G_{N})}(E)^{2}.$

###### Proof.

Given that $G$ is countable, the family of sets $\mathcal{A}_{E}$ is countable. Thus, via a diagonal procedure, we can take a subsequence $(G_{N_{k}})$ of our given Følner sequence $(G_{N})$ so that

(3.10) $\mu:=\textrm{w*-}\lim_{k\to\infty}\frac{1}{|G_{N_{k}}|}\sum_{g\in G_{N_{k}}}\delta_{T_{g}\omega}$

exists, where $\omega=(\mathbb{1}_{E}(g))_{g\in G}$, and so that $d_{(G_{N_{k}})}(F)$ exists for every $F\in\mathcal{A}_{E}$. Letting $A:=\\{x:x(e)=1\\}$, we note that $\mathbb{1}_{A}\in C(X)$, and hence the indicator function $F_{1}$ of the clopen set representing $F$ is also in $C(X)$. Thus, we can write

(3.11) $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}d_{(G_{N_{k}})}(F\cap g^{-1}F)=\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\int F_{1}\cdot T_{g}F_{1}\ d\mu.$

Using the ergodic decomposition for $\mu$, we see that the last term in Equation (3.11) can be rewritten as

(3.12) $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\int\left(\int_{X}F_{1}\cdot T_{g}F_{1}\ d\mu_{t}\right)d\lambda(t),$

where each $\mu_{t}$ is an ergodic measure. Thus, by von Neumann’s Mean Ergodic Theorem, the expression in (3.12) equals $\int\left(\int_{X}F_{1}\ d\mu_{t}\right)^{2}d\lambda(t)$, and by Jensen’s inequality we have

(3.13) $\int\left(\int_{X}F_{1}\ d\mu_{t}\right)^{2}d\lambda(t)\geq\left(\int_{X}F_{1}\ d\mu\right)^{2}.$

By construction, the right hand side in (3.13) is equal to $d_{(G_{N_{k}})}(F)^{2}$. To prove (3.9), we use Theorem 2.3 to obtain a measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ and a set $A\in\mathcal{B}$ with $\mu(A)=\bar{d}_{(G_{N})}(E)$ satisfying inequality (2.1). In particular, this means that for all $g\in G$ we have

(3.14) $\bar{d}_{(G_{N})}(E\cap g^{-1}E)\geq\mu(A\cap(T_{g})^{-1}A).$

Using the same argument that leads to inequality (3.13), we get that for any Følner sequence $(F_{N})$

(3.15) $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\mu(A\cap(T_{g})^{-1}A)\geq\mu(A)^{2}.$

Combining (3.14) and (3.15) we obtain $\liminf_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\bar{d}_{(G_{N})}(E\cap g^{-1}E)\geq\mu(A)^{2}=\bar{d}_{(G_{N})}(E)^{2},$ as desired. ∎

###### Remark 3.11.

As was mentioned at the beginning of the section, the results contained herein can be effortlessly carried over to countable cancellative amenable semigroups. Indeed, if $S$ is such a semigroup, then one can embed it into $G:=\\{st^{-1}:s,t\in S\\}$, which is now going to be a countable amenable group (see Proposition 1.17 in [P]). It is straightforward to check that a Følner sequence in $S$ becomes a Følner sequence in $G$, and this is all that is needed to push the results to this more general context.

## 4\. Ergodic sequences and Hindman’s theorem

The goal of this section is to extend Hindman’s covering theorem (Theorem 1.3) to unions of the form $\bigcup_{n=1}^{N}(E-k_{n})$. In particular, we will use ergodic theory to characterize (and provide numerous examples of) the sequences $(k_{n})$ with the property that for any $E\subseteq\mathbb{Z}$ with $d^{*}(E)>0$ one has $d^{*}\left(\bigcup_{n=1}^{N}(E-k_{n})\right)\to 1$ as $N\to\infty$. The ergodic approach can easily be extended to amenable groups.
We will discuss this after the proof of Theorem 4.8.

###### Definition 4.1.

We say that a sequence of integers $(k_{n})_{n\in\mathbb{N}}$ has the _combinatorial sweeping out property_ if for every $E\subseteq\mathbb{Z}$ with $d^{*}(E)>0$ we have

(4.1) $d^{*}\left(\bigcup_{n=1}^{N}(E-k_{n})\right)\xrightarrow[N\to\infty]{}1.$

The class of sequences satisfying (4.1) is quite wide. For example, as we will see below, ergodic sequences have the combinatorial sweeping out property.

###### Definition 4.2.

We say that a sequence of positive integers $(k_{n})$ is an _ergodic sequence_ if for every ergodic measure preserving system $(X,\mathcal{B},\mu,T)$ we have

(4.2) $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mu(A\cap T^{-k_{n}}B)=\mu(A)\mu(B)$

for all $A,B\in\mathcal{B}$.

###### Remark 4.3.

One can show (see Theorem 4.9 below) that a sequence is ergodic if and only if for all $f\in L^{2}(\mu)$ we have

(4.3) $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}T^{k_{n}}f=\int_{X}f\ d\mu,$

where the convergence is with respect to the $L^{2}(\mu)$ norm.

The following definition deals with an ergodic counterpart of the notion of combinatorial sweeping out:

###### Definition 4.4.

We say that a sequence of integers $(k_{n})_{n\in\mathbb{N}}$ has the _ergodic sweeping out property_ if for every ergodic measure preserving system $(X,\mathcal{B},\mu,T)$ and for every $A\in\mathcal{B}$ with $\mu(A)>0$ we have

(4.4) $\mu\left(\bigcup_{n\in\mathbb{N}}T^{-k_{n}}A\right)=1.$

As we will see in Theorem 4.8, the notions of ergodic sweeping out and combinatorial sweeping out, in fact, coincide, so one can use the term sweeping out unambiguously. But first we will show, as promised, that ergodic sequences have the combinatorial sweeping out property.

###### Proposition 4.5.

Let $(k_{n})$ be an ergodic sequence. Then $(k_{n})$ has the combinatorial sweeping out property.

###### Proof.

We first note that if $(k_{n})$ is an ergodic sequence, $(X,\mathcal{B},\mu,T)$ is an ergodic measure preserving system and $A\in\mathcal{B}$ is such that $\mu(A)>0$, then

(4.5) $\mu\left(\bigcup_{n\in\mathbb{N}}T^{-k_{n}}A\right)=1.$

(Otherwise, taking $B=X\setminus\bigcup_{n\in\mathbb{N}}T^{-k_{n}}A$ we would get a contradiction with (4.2).) Now let $E\subseteq\mathbb{Z}$ be such that $d^{*}(E)>0$ and take $\varepsilon>0$. By Theorem 2.8 there exists an ergodic measure preserving system $(X,\mathcal{B},\mu,T)$ and a set $A\in\mathcal{B}$ with $\mu(A)=d^{*}(E)$ satisfying

(4.6) $d^{*}\left(\bigcup_{n=1}^{N}(E-k_{n})\right)\geq\mu\left(\bigcup_{n=1}^{N}T^{-k_{n}}A\right)$

for all $N\in\mathbb{N}$. Using (4.5) and continuity of $\mu$ we see that $(k_{n})$ satisfies (4.1). ∎

We would like to note that in the proof of Proposition 4.5 the ergodic Furstenberg correspondence principle, Theorem 2.8, was utilized. Theorem 2.8 will also play an instrumental role in the proof of the equivalence of ergodic sweeping out and combinatorial sweeping out (see Theorem 4.8). There are sequences with the combinatorial sweeping out property that are not ergodic. A rather cheap example is provided by the sequence $k_{n}:=[\log n]$. While this sequence does not satisfy (4.2), it takes on all nonnegative integer values and hence is sweeping out (by Theorem 1.3). A more interesting example is given by the sequence $k_{n}:=[n^{2}+\log n]$. By [BKQW], $(k_{n})$ is not ergodic.
However, one can show that for any ergodic measure preserving system $(X,\mathcal{B},\mu,T)$ and any sets $A,B\in\mathcal{B}$ with $\mu(A)>0$ and $\mu(B)>0$ there is some $n\in\mathbb{N}$ such that $\mu(A\cap T^{-[n^{2}+\log n]}B)>0$. This implies that $\mu\left(\bigcup_{n\in\mathbb{N}}T^{-[n^{2}+\log n]}A\right)=1$. This fact, together with Theorem 4.8 below, implies that $(k_{n})$ is sweeping out.

###### Definition 4.6.

We say that an invertible measure preserving system $(X,\mathcal{B},\mu,T)$ has a topological model if there exists a measure-theoretically isomorphic system $(\hat{X},\hat{\mathcal{B}},\hat{\mu},\hat{T})$, where $\hat{X}$ is a compact metric space and $\hat{T}$ is a homeomorphism from $\hat{X}$ to itself.

###### Theorem 4.7 (Jewett-Krieger Theorem, [J], [Kr]).

Every ergodic invertible measure preserving system $(X,\mathcal{B},\mu,T)$ has a uniquely ergodic topological model. (A measure preserving system $(X,\mathcal{B},\mu,T)$ is called _uniquely ergodic_ if $X$ is a compact metric space, $T:X\rightarrow X$ is a homeomorphism and $\mu$ is the unique $T$-invariant normalized Borel measure on $X$.)

The following theorem characterizes sequences which are "good" for Hindman’s covering theorem (see Theorem 1.3) and establishes the equivalence between ergodic sweeping out and combinatorial sweeping out:

###### Theorem 4.8.

A sequence of integers has the combinatorial sweeping out property if and only if it has the ergodic sweeping out property.

###### Proof.

In one direction, assume that $(k_{n})$ has the ergodic sweeping out property. Let $E\subseteq\mathbb{Z}$ with $d^{*}(E)>0$. Let $(X,\mathcal{B},\mu,T)$ be an ergodic measure preserving system and $A\in\mathcal{B}$ a set, guaranteed by Theorem 2.8, which satisfy the following special case of (2.6):

$d^{*}\left(\bigcup_{n=1}^{N}(E-k_{n})\right)\geq\mu\left(\bigcup_{n=1}^{N}T^{-k_{n}}A\right)\textrm{ for all }N\in\mathbb{N}.$

Since $\mu(A)=d^{*}(E)>0$ and the system $(X,\mathcal{B},\mu,T)$ is ergodic, equality (4.4) holds. Continuity of the measure $\mu$ and equality (4.4) together imply that $d^{*}\left(\bigcup_{n=1}^{N}(E-k_{n})\right)\xrightarrow[N\to\infty]{}1$ and so $(k_{n})$ is combinatorially sweeping out.

In the other direction, we will show the contrapositive. Assume that there exists an ergodic system $(X,\mathcal{B},\mu,T)$ and a set $A\in\mathcal{B}$ with $\mu(A)>0$ such that $\mu\left(\bigcup_{n\in\mathbb{N}}T^{-k_{n}}A\right)=1-\delta$ for some $\delta>0$. Without loss of generality, one can assume that $(X,\mathcal{B},\mu,T)$ is invertible. Indeed, otherwise we can work with the invertible extension of $(X,\mathcal{B},\mu,T)$. In view of the Jewett-Krieger theorem (Theorem 4.7), we can also assume that $(X,T)$ is a uniquely ergodic topological dynamical system. Since $X$ is a compact metric space, the probability measure $\mu$ is regular. Let $K_{1}\subseteq A$ be a compact set such that $\mu(K_{1})>0$ and $K_{2}\subseteq X\setminus\bigcup_{n\in\mathbb{N}}T^{-k_{n}}A$ a compact subset such that $\mu(K_{2})\geq\frac{\delta}{2}$. Since $T$ is ergodic, von Neumann’s mean ergodic theorem implies

$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}T^{n}f=\int_{X}f\ d\mu,$

where $f=\mathbb{1}_{K_{1}}$ and the convergence is in the $L^{2}(\mu)$-norm.
Thus, there is a subsequence $(N_{k})$ and $X_{0}\subseteq X$ with $\mu(X_{0})=1$ such that for all $x\in X_{0}$ we have $\lim_{k\to\infty}\frac{1}{N_{k}}\sum_{n=1}^{N_{k}}f(T^{n}x)=\int_{X}f\ d\mu.$ Let $x_{0}\in X_{0}$ and consider the set $E:=\\{n\in\mathbb{Z}:T^{n}x_{0}\in K_{1}\\}.$ Note that by the choice of $x_{0}$ and $E$,

$\lim_{k\to\infty}\frac{1}{N_{k}}\sum_{n=1}^{N_{k}}\mathbb{1}_{E}(n)=\lim_{k\to\infty}\frac{1}{N_{k}}\sum_{n=1}^{N_{k}}\mathbb{1}_{K_{1}}(T^{n}x_{0})=\int_{X}\mathbb{1}_{K_{1}}\ d\mu=\mu(K_{1})>0,$

which implies that $d^{*}(E)>0$. We claim that for all $N\in\mathbb{N}$, we have $d^{*}\left(\bigcup_{n=1}^{N}(E-k_{n})\right)\leq 1-\frac{\delta}{2}.$ Indeed, let $N\in\mathbb{N}$. By our choice of $K_{2}$, the set $T^{-k_{1}}K_{1}\cup\dots\cup T^{-k_{N}}K_{1}$ is a compact set disjoint from $K_{2}$, so by Urysohn’s lemma there is a continuous function $f:X\rightarrow[0,1]$ such that $f(x)=1$ if $x\in T^{-k_{1}}K_{1}\cup\dots\cup T^{-k_{N}}K_{1}$ and $f(x)=0$ if $x\in K_{2}$. Thus,

(4.7) $d^{*}\left(\bigcup_{n=1}^{N}(E-k_{n})\right)=\limsup_{L-M\to\infty}\frac{1}{L-M}\sum_{n=M}^{L-1}\mathbb{1}_{T^{-k_{1}}K_{1}\cup\dots\cup T^{-k_{N}}K_{1}}(T^{n}x_{0})\leq\lim_{L-M\to\infty}\frac{1}{L-M}\sum_{n=M}^{L-1}f(T^{n}x_{0})=\int_{X}f\ d\mu\leq 1-\frac{\delta}{2}.$

(Note that we used the fact that for uniquely ergodic systems, $\lim_{L-M\to\infty}\frac{1}{L-M}\sum_{n=M}^{L-1}f(T^{n}x_{0})$ exists for any continuous function $f$ and any $x_{0}\in X$.) ∎

While we have chosen to stick with $\mathbb{Z}$ for the sake of clarity, it is worth mentioning that one can establish a version of Theorem 4.8 for general countable amenable groups. Indeed, it is not hard to define, in total analogy with Definitions 4.1 and 4.2, the notions of combinatorial and ergodic sweeping out for general countable amenable groups. To carry out the amenable generalization of Theorem 4.8, one has to invoke a general form of the Jewett-Krieger theorem due to Rosenthal (see [R]). Note that Definition 4.2 naturally extends to $\mathbb{Z}^{d}$ and even to any amenable group, and can be used to provide numerous examples of ergodic (and hence sweeping out) sequences. One can show, for example, that every subset of positive density of a minimally almost periodic group is ergodic. We now give some examples of ergodic sequences both in $\mathbb{Z}$ and $\mathbb{Z}^{d}$ (see [BLes] and [BKQW]).

* (1) $\\{[bn^{c}]:n\in\mathbb{N}\\}$, where $c\notin\mathbb{Q}$, $c>1$ and $b\neq 0$.

* (2) $\\{[bn^{c}+dn^{a}]:n\in\mathbb{N}\\}$, where $b,d\neq 0$, $b/d\notin\mathbb{Q}$, $c\geq 1$, $a>0$ and $a\neq c$.

* (3) $\\{[bn^{c}(\log n)^{d}]:n\in\mathbb{N}\\}$, where $b\neq 0$, $c\notin\mathbb{Q}$, $c>1$ and $d$ is an arbitrary real number.

* (4) $\\{[bn^{c}(\log n)^{d}]:n\in\mathbb{N}\\}$, where $b\neq 0$, $c\in\mathbb{Q}$, $c>1$ and $d\neq 0$.

* (5) $\\{[bn^{c}+d(\log n)^{a}]:n\in\mathbb{N}\\}$, where $b,d\neq 0$, $c\geq 1$ and $a>1$.

Another class of examples of ergodic sequences is provided by sequences of the form $[g(n)]$, where $g$ is any _tempered function_ (see [BK], Theorem 7.1). (Let $k$ be a non-negative integer. A real-valued function $g$ which is $(k+1)$ times continuously differentiable on $[x_{0},\infty)$, where $x_{0}\geq 0$, is called a _tempered function of order_ $k$ if (a) $g^{(k+1)}(x)$ tends monotonically to zero as $x\to\infty$, and (b) $\lim_{x\to\infty}x|g^{(k+1)}(x)|=+\infty$.) The paper [BKS] provides a class of examples of ergodic sequences involving primes for $\mathbb{Z}^{d}$ actions.
It is clear that these sequences will be sweeping out for $\mathbb{Z}^{d}$ due to a straightforward generalization of Proposition 4.5. Namely, sequences of the form $\\{([\xi_{1}(p_{n})],\dots,[\xi_{d}(p_{n})]):n\in\mathbb{N}\\}$, where $p_{n}$ denotes the $n$-th prime with the standard order, and where $\xi_{1},\dots,\xi_{d}$ are functions in a Hardy field with subpolynomial growth such that either

(4.8) $\lim_{x\to\infty}\frac{\xi(x)}{x^{l+1}}=\lim_{x\to\infty}\frac{x^{l}}{\xi(x)}=0\textrm{ for some }l\in\mathbb{N},\textrm{ or }\lim_{x\to\infty}\frac{\xi(x)}{x}=\lim_{x\to\infty}\frac{\log x}{\xi(x)}=0,$

and such that any combination of the form $\sum_{i=1}^{d}b_{i}\xi_{i}$ also satisfies (4.8) for all $(b_{1},\dots,b_{d})\in\mathbb{R}^{d}\setminus\\{\vec{0}\\}$. A particular case of this would be sequences of the form $\\{([p_{n}^{c_{1}}],\dots,[p_{n}^{c_{d}}]):n\in\mathbb{N}\\}$, where $p_{n}$ denotes the $n$-th prime, and where $c_{1},\dots,c_{d}$ are distinct positive real numbers such that $c_{i}\notin\mathbb{N}$ for all $i$. (This special case was obtained in [BKMST].)

We conclude by showing that strong and weak convergence lead to the same definition of an ergodic sequence. We did not need this fact in this section, but we believe it is of independent interest. Note that Theorem 4.9 can be viewed as a generalization of the well-known fact that the $L^{2}$ version of the mean ergodic theorem follows from its weak convergence version.

###### Theorem 4.9.

For a sequence $(k_{n})\subseteq\mathbb{Z}$, the following are equivalent:

* (1) For every ergodic measure preserving system $(X,\mathcal{B},\mu,T)$ and for all $A,B\in\mathcal{B}$

(4.9) $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\mu(A\cap T^{-k_{n}}B)=\mu(A)\mu(B).$

* (2) For every ergodic measure preserving system $(X,\mathcal{B},\mu,T)$ and for all $f\in L^{2}(\mu)$

(4.10) $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}T^{k_{n}}f=\int_{X}f\ d\mu,$

where the convergence is with respect to the $L^{2}(\mu)$ norm.

###### Proof.

Strong convergence implies weak convergence, so one direction is trivial. In the other direction, notice that for any $x\in\mathbb{T}$, (4.9) implies that $(k_{n}x)_{n\in\mathbb{N}}$ is uniformly distributed in $G_{x}:=\overline{\\{nx:n\in\mathbb{Z}\\}}\subseteq\mathbb{T}$. To see this we argue as follows. First, observe that, by a standard approximation argument, (4.9) implies that for all $f,g\in L^{2}(\mu)$

(4.11) $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\int_{X}f\cdot T^{k_{n}}\bar{g}\ d\mu=\int_{X}f\ d\mu\int_{X}\bar{g}\ d\mu.$

Let $x\in\mathbb{T}\setminus\\{0\\}$, put $X:=G_{x}$ (note that either $G_{x}=\mathbb{T}$ or it is a finite subgroup of $\mathbb{T}$), and let $T:X\rightarrow X$ be the rotation by $x$, i.e. $Tz=z+x$ for all $z\in X$; since $x$ generates a dense subgroup of $G_{x}$, this system, taken with the Haar measure $\nu$ on $G_{x}$, is ergodic. Let $\chi$ be a non-trivial character of the compact abelian group $G_{x}$, and set $f=g=\bar{\chi}$. With these choices, equation (4.11) becomes

(4.12) $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\int_{X}\bar{\chi}(y)\chi(y+k_{n}x)\ d\nu(y)=\int_{X}\chi(y)\ d\nu(y)\int_{X}\bar{\chi}(y)\ d\nu(y).$

Now, simplifying (4.12) we get

(4.13) $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\chi(k_{n}x)=0,$

and since $\chi$ was an arbitrary non-trivial character of $G_{x}$, $(k_{n}x)$ is equidistributed on $G_{x}$, as desired.
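(Two instructive special cases: if $x$ is irrational, then $G_{x}=\mathbb{T}$ and (4.13) is precisely Weyl’s criterion for the equidistribution of $(k_{n}x)$ in $\mathbb{T}$; if $x=\frac{p}{q}$ in lowest terms, then $G_{x}$ is cyclic of order $q$ and (4.13) amounts to the statement that $(k_{n})$ is equidistributed among the residue classes modulo $q$.)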
Then, for any $f\in L^{2}(\mu)$, we have

(4.14) $\left|\left|\frac{1}{N}\sum_{n=1}^{N}T^{k_{n}}f-\int_{X}f\ d\mu\right|\right|^{2}=\left|\left|\frac{1}{N}\sum_{n=1}^{N}T^{k_{n}}f\right|\right|^{2}-\left|\int_{X}f\ d\mu\right|^{2},$

after expanding the inner product and recombining. Next we are going to invoke the classical theorem of Herglotz, which states that the positive definite sequence $a(n)=\langle T^{n}f,f\rangle$ has a representation $a(n)=\int_{\mathbb{T}}e^{2\pi inx}\ d\nu_{f}(x)$, for some finite positive measure $\nu_{f}$ on $\mathbb{T}$. Thus, the right hand side of (4.14) becomes

(4.15) $\frac{1}{N^{2}}\sum_{n,m=1}^{N}\langle T^{k_{n}-k_{m}}f,f\rangle-\left|\int_{X}f\ d\mu\right|^{2}=\frac{1}{N^{2}}\sum_{n,m=1}^{N}\int_{\mathbb{T}}e^{2\pi i(k_{n}-k_{m})x}\ d\nu_{f}(x)-\left|\int_{X}f\ d\mu\right|^{2}=\int_{\mathbb{T}}\left|\frac{1}{N}\sum_{n=1}^{N}e^{2\pi ik_{n}x}\right|^{2}\ d\nu_{f}(x)-\left|\int_{X}f\ d\mu\right|^{2}.$

Using (4.13) for $\chi(x)=e^{2\pi ix}$ (together with the bounded convergence theorem), and invoking ergodicity of $T$, we get

(4.16) $\lim_{N\to\infty}\int_{\mathbb{T}}\left|\frac{1}{N}\sum_{n=1}^{N}e^{2\pi ik_{n}x}\right|^{2}\ d\nu_{f}(x)-\left|\int_{X}f\ d\mu\right|^{2}=\nu_{f}(\\{0\\})-\left|\int_{X}f\ d\mu\right|^{2}=0.$

(In the last step we used the fact that $\nu_{f}(\\{0\\})=\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\int_{\mathbb{T}}e^{2\pi inx}\ d\nu_{f}(x)=\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\langle T^{n}f,f\rangle=\left|\int_{X}f\ d\mu\right|^{2}$.) ∎

###### Remark 4.10.

Theorem 4.9 can be extended to the context of locally compact abelian groups. We omit the details.

## 5\. A characterization of countable amenable weakly mixing groups

In this section, we will establish a characterization of countable amenable weakly mixing groups with the help of the amenable version of Hindman’s covering theorem (Theorem 3.1). Recall that a group $G$ is _weakly mixing_ (or minimally almost periodic) if any ergodic measure preserving action of $G$ on a probability space $\mathds{X}=(X,\mathcal{B},\mu)$ is automatically weakly mixing, i.e., the diagonal action of $G$ on $\mathds{X}\times\mathds{X}$ is ergodic. (In this case, the diagonal action on an arbitrary finite product $\mathds{X}\times\dots\times\mathds{X}$ is also ergodic.) We begin with the following proposition:

###### Proposition 5.1.

Let $G$ be a countable amenable WM group. Let $E\subseteq G$ with $d^{*}(E)>0$ and let $d\in\mathbb{N}$. Then, for all $\varepsilon>0$, there exist $g_{1},\dots,g_{k}\in G$ such that

(5.1) $d^{*}\left(\bigcup_{i=1}^{k}(g_{i},\dots,g_{i})^{-1}\underbrace{(E\times\dots\times E)}_{d\ \textrm{times}}\right)>1-\varepsilon.$

###### Proof.

By Theorem 2.8 there exists an ergodic measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ and a set $A\in\mathcal{B}$ with $\mu(A)=d^{*}(E)$ satisfying (2.6). Since $G$ is a WM group, the measure $\mu\otimes\mu$ will be ergodic for the diagonal action on $X\times X$. Let $(G_{N})$ be a Følner sequence such that $\mu=\textrm{w*-}\lim_{N\to\infty}\frac{1}{|G_{N}|}\sum_{g\in G_{N}}\delta_{T_{g}\omega}$, where $\omega=(\mathbb{1}_{E}(g))_{g\in G}$, as in Section 2. Notice that, for each $k\in\mathbb{N}$, we have

(5.2) $d^{*}\left(\bigcup_{i=1}^{k}(g_{i},\dots,g_{i})^{-1}(E\times\dots\times E)\right)\geq d_{(G_{N}\times\dots\times G_{N})}\left(\bigcup_{i=1}^{k}(g_{i},\dots,g_{i})^{-1}(E\times\dots\times E)\right),$

for all $g_{1},\dots,g_{k}\in G$. Applying the inclusion-exclusion principle we can replace the unions in (5.2) by intersections.
The density of the typical intersection can be computed with a product of densities (note that $d_{(G_{N}\times G_{N})}((E_{1}\cap E_{2})\times(F_{1}\cap F_{2}))=d_{(G_{N})}(E_{1}\cap E_{2})d_{(G_{N})}(F_{1}\cap F_{2})$). Finally, using the definition of $\mu$ we see that the quantity on the right hand side in (5.2) equals $\mu\otimes\dots\otimes\mu\left(\bigcup_{i=1}^{k}(T_{g_{i}}\times\dots\times T_{g_{i}})(A\times\dots\times A)\right).$ Since $G$ is a WM group, the measure $\mu\otimes\dots\otimes\mu$ is ergodic for the diagonal $G$-action on the product space $X\times\dots\times X$. Since $\mu(A)>0$ we have $(\mu\otimes\dots\otimes\mu)(A\times\dots\times A)>0$, whence $(\mu\otimes\dots\otimes\mu)\left(\bigcup_{g\in G}(T_{g}\times\dots\times T_{g})^{-1}(A\times\dots\times A)\right)=1$, so the result follows by continuity of $\mu$. ∎

We will show next that the covering property (5.1) characterizes weakly mixing groups:

###### Theorem 5.2.

Let $G$ be a countable amenable group that is not WM. Then there exists a set $E\subseteq G$ with $d^{*}(E)\in(0,1)$ such that for all $r\in\mathbb{N}$ and any finite subset $\{h_{1},\dots,h_{r}\}$ of $G$ we have (5.3) $d^{*}\left(\bigcup_{i=1}^{r}(h_{i},h_{i})^{-1}(E\times E)\right)\leq C<1,$ where the constant $C$ in (5.3) is independent of $\{h_{1},\dots,h_{r}\}$.

###### Proof.

Before constructing the set $E$, we need to do some preparatory work. First, we observe that, by a corollary of Lemma 3.3 in [BBF] (see equation (5) in [BBF]), for any $B\subseteq G$ and for any Følner sequence $(F_{n})\subseteq G$, there is a sequence $(t_{n})$ such that $d^{*}(B)=d_{F_{n}t_{n}}(B).$ Hence, it suffices to show that for any pair of Følner sequences $(I_{N})\subseteq G$ and $(J_{N})\subseteq G$ there exists $0<C<1$ such that (5.4) $\bar{d}_{(I_{N}\times J_{N})}\left(\bigcup_{i=1}^{r}(h_{i},h_{i})^{-1}(E\times E)\right)\leq C<1$ for all $r\in\mathbb{N}$ and all finite sets $\{h_{1},\dots,h_{r}\}$. Note that this step is essential because general Følner sequences of $G\times G$ need not be of the form $(I_{N}\times J_{N})$. Next, we observe that since $G$ is not WM it must admit a non-trivial finite dimensional representation $\pi:G\rightarrow U(k)$, where $U(k)$ is the unitary group of $k\times k$ complex matrices (see for example [S], Theorem 3.4). Thus, $H=\{\pi(g):g\in G\}$ is a non-trivial subgroup of $U(k)$. Let $X=\overline{H}$, the closure of $H$ in $U(k)$. Then, $X$ is a compact metric group with the topology inherited from $\mathbb{C}^{k^{2}}$. As such, it carries a unique normalized Haar measure (fully supported on $X$) which we denote by $\mu$. Moreover, $G$ acts on $X$ by translations as follows: let $R_{g}x:=\pi(g)\cdot x$. Note that the action $(R_{g})_{g\in G}$ is minimal because $\pi(G)=H$ and $H$ is dense in $X$. Clearly, $(X,\textrm{Borel}(X),\mu,(R_{g})_{g\in G})$ is a uniquely ergodic measure preserving system. Let $d$ be a bi-invariant metric on $X$ (such a metric exists by the Birkhoff–Kakutani Theorem, see [Bi] and [K]). Let $g_{0}\in X$ be such that $d(e,g_{0})=b=\max\{d(e,g):g\in X\}>0$ (since $X$ is compact and non-trivial, $0<b<\infty$). Let $U:=B(e,\frac{b}{16})$ (the open ball of radius $\frac{b}{16}$ centered at $e$). Clearly $\mu(U)>0$ since $\mu$ is fully supported on $X$. Let $U^{\prime}:=B(e,r)$ where $0<r<\frac{b}{16}$ is such that $\mu(\partial U^{\prime})=0$. Such an $r$ must exist (see for example Lemma 2.21 in [BCRZ]). At this point we are ready to define the set $E$.
Namely, choose an arbitrary $x_{0}\in X$ and put (5.5) $E:=\{g\in G:R_{g}x_{0}\in U^{\prime}\}.$ Since $(X,\mu,(R_{g})_{g\in G})$ is uniquely ergodic and $\mu(\partial U^{\prime})=0$, the function $\mathbb{1}_{U^{\prime}}$ can be approximated by continuous functions. From these two facts, it follows that the measures $\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\delta_{R_{g}x_{0}}$ converge in the weak* topology to $\mu$ along any Følner sequence $(F_{N})$. Next, observe that for all $g\in X$, the open set $W:=B(g_{0},\frac{b}{16})\times B(e,\frac{b}{16})$ satisfies (5.6) $(g,g)^{-1}(U\times U)\cap W=\emptyset,$ because otherwise, by the triangle inequality we would get that $d(e,g_{0})\leq d(e,g_{1}u_{1})+d(g_{1}u_{1},g_{1}u_{2})+d(g_{1}u_{2},g_{0})<2\cdot\frac{b}{16}+d(u_{1},u_{2})<\frac{b}{8}+\frac{b}{8}<b$, where $g_{1}\in X$ and $u_{1},u_{2}\in U$ are such that $d(e,g_{1}u_{1})<\frac{b}{16}$ and $d(g_{0},g_{1}u_{2})<\frac{b}{16}$, a contradiction with our choice of $e,g_{0}$. It follows from (5.6) and from the fact that $G$ acts minimally on $X$ that $\overline{\Delta\cdot(U\times U)}\neq X\times X$, where $\Delta$ is the diagonal in $X^{2}$. Thus, $C:=(\mu\otimes\mu)(\Delta\cdot(U\times U))<1.$ Note also that since $\mu$ is fully supported on $X$, $\mu\otimes\mu$ is fully supported on $X\times X$, and so we have $(\mu\otimes\mu)(\Delta\cdot(U\times U))>0$. Let $(I_{N}),(J_{N})$ be two Følner sequences in $G$, and $\{h_{1},\dots,h_{r}\}$ a finite set of elements of $G$. Then, (5.7) $\bar{d}_{(I_{N}\times J_{N})}\left(\bigcup_{i=1}^{r}(h_{i},h_{i})^{-1}(E\times E)\right)=\limsup_{N\to\infty}\frac{1}{|I_{N}||J_{N}|}\sum_{g\in I_{N}}\sum_{h\in J_{N}}\mathbb{1}_{\bigcup_{i=1}^{r}(h_{i},h_{i})^{-1}(E\times E)}(g,h).$ By (5.5), the right hand side of equation (5.7) is equal to (5.8) $\limsup_{N\to\infty}\frac{1}{|I_{N}||J_{N}|}\sum_{g\in I_{N}}\sum_{h\in J_{N}}\mathbb{1}_{\bigcup_{i=1}^{r}(R_{h_{i}}\times R_{h_{i}})^{-1}(U^{\prime}\times U^{\prime})}(R_{g}x_{0},R_{h}x_{0}).$ We now use the inclusion-exclusion principle to separate the variables in each summand. Let $\prod_{i=1}^{t}\mathbb{1}_{R_{h_{i}}U^{\prime}}$, $1\leq t\leq r$, be a typical term obtained through this process. By unique ergodicity of the action $(R_{g})_{g\in G}$ we see that for any Følner sequence $(F_{N})$ $\lim_{N\to\infty}\frac{1}{|F_{N}|}\sum_{g\in F_{N}}\mathbb{1}_{\bigcap_{i=1}^{t}R_{h_{i}}U^{\prime}}(R_{g}x_{0})=\mu\left(\bigcap_{i=1}^{t}R_{h_{i}}U^{\prime}\right).$ Indeed, since $U^{\prime}$ is an open set with $\mu(\partial U^{\prime})=0$, we also have that $\mu(\partial(R_{g}U^{\prime}))=0$ for all $g\in G$, given that $R_{g}(\partial U^{\prime})=\partial(R_{g}U^{\prime})$ as $R_{g}$ is a measure preserving homeomorphism. Consequently, functions of the form $\prod_{i=1}^{t}\mathbb{1}_{R_{g_{i}}U^{\prime}}$, $1\leq t\leq r$, can be approximated by continuous functions. Thus, the limit in equation (5.8) in fact exists and (after performing the inclusion-exclusion principle backwards) is equal to (5.9) $(\mu\otimes\mu)\left(\bigcup_{i=1}^{r}(R_{h_{i}}\times R_{h_{i}})^{-1}(U^{\prime}\times U^{\prime})\right)\leq(\mu\otimes\mu)(\Delta\cdot(U\times U))=C<1,$ which completes the proof. ∎

## 6\. A general form of Hindman’s covering theorem

One may wonder if a version of Hindman’s covering theorem (Theorem 3.1) is valid for discrete amenable semigroups which are not necessarily countable or cancellative. In this section we will show that this is indeed the case.
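Before developing the general theory, it may help to see Theorem 5.2 in action numerically in the simplest non-WM case $G=\mathbb{Z}$ (which admits the non-trivial characters $n\mapsto e^{2\pi in\theta}$). The sketch below, a purely illustrative computation assuming NumPy, takes $x_{0}=0$ and $E=\{n:n\theta\bmod 1\in U^{\prime}\}$ for an arc $U^{\prime}$ of radius $r$, and checks that the density of $\bigcup_{i}(h_{i},h_{i})^{-1}(E\times E)$ over a large box stays below a constant (here roughly $4r$), no matter how many diagonal shifts are used:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = (np.sqrt(5.0) - 1.0) / 2.0      # rotation number (golden ratio)
r, M = 0.05, 2000                       # arc radius of U', box [0, M)^2

def in_E(n):
    """Indicator of n in E = {n : n*theta mod 1 lies within r of 0}."""
    frac = (n * theta) % 1.0
    return np.minimum(frac, 1.0 - frac) < r

g = np.arange(M)
cover = np.zeros((M, M), dtype=bool)
shifts = rng.integers(0, 10**6, size=50)    # diagonal shifts (h_i, h_i)
for count, h_i in enumerate(shifts, start=1):
    e = in_E(g + h_i)                       # (g, h) is covered iff g+h_i and h+h_i lie in E
    cover |= e[:, None] & e[None, :]
    if count % 10 == 0:
        print(f"{count:>2} shifts: density of the union = {cover.mean():.3f} (bound ~ {4 * r})")
```

We now return to the general setting.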
Recall that a discrete semigroup $G$ is left amenable if there exists a left invariant mean $m:\ell^{\infty}(G)\rightarrow\mathbb{C}$. (We say that $m\in\ell^{\infty}(G)^{*}$ is a left _invariant mean_ if it is a continuous linear functional from $\ell^{\infty}(G)$ to $\mathbb{C}$ such that (i) for every $f\in\ell^{\infty}(G)$ and for every $g\in G$ we have $m({}_{g}f)=m(f)$, where ${}_{g}f(x):=f(gx)$ for all $x\in G$, (ii) $m(f)\geq 0$ for any bounded function $f:G\rightarrow[0,\infty)$, and (iii) $m(\mathbb{1}_{G})=1$.) Note that, for discrete countable groups, this is equivalent to the definition of left amenability given in Section 2. In the context of means, a notion of largeness presents itself. Let us denote by $\mathfrak{M}(G)$ the space of left invariant means on $G$. We say a subset $E$ of a discrete left amenable semigroup $G$ is large if $m(\mathbb{1}_{E})>0$ for some mean $m\in\mathfrak{M}(G)$. This leads to the following definition of a notion of upper Banach density that is valid in all amenable semigroups (see Definition 2.7 in [BG]):

###### Definition 6.1.

Let $G$ be an amenable semigroup, and let $E\subseteq G$. The _upper Banach density_ of $E$ is $d^{*}(E):=\sup\{m(\mathbb{1}_{E}):m\in\mathfrak{M}(G)\}.$

Notice that since $\mathfrak{M}(G)$ is weak* compact there is some $m\in\mathfrak{M}(G)$ that achieves the supremum in Definition 6.1. If $G$ is a discrete countable amenable group and $E\subseteq G$, then the upper Banach density of $E$ as given in Definition 2.2 agrees with the one in Definition 6.1. Moreover, in this case, we have $d^{*}(E)=\max\{m(\mathbb{1}_{E}):m\in\mathfrak{M}(G)\}$. We are now in a position to formulate a general version of Hindman’s theorem.

###### Theorem 6.2.

Let $G$ be an amenable semigroup, and let $E$ be a subset of $G$ with $d^{*}(E)>0$. Then, for every $\varepsilon>0$, there exist $k\in\mathbb{N}$ and $g_{1},\dots,g_{k}\in G$ such that $d^{*}(g_{1}^{-1}E\cup\dots\cup g_{k}^{-1}E)>1-\varepsilon$.

The proof of Theorem 6.2 requires a few preliminary results which will be given next. Recall that the space of _weakly almost periodic_ functions on $G$, denoted by $\textrm{WAP}(G)$, is comprised of those functions $f\in\ell^{\infty}(G)$ such that the weak closure of the set of their shifts, i.e. of $\overline{\{_{g}f:g\in G\}}$, is weakly compact (i.e., compact with respect to the weak topology induced by functionals on $\ell^{\infty}(G)$).

###### Theorem 6.3 (Ryll-Nardzewski, cf. [P], page 86).

There is a unique left invariant mean on $\textrm{WAP}(G)$.

We will be using the fact (due to Eberlein, [E]) that for a discrete semigroup $G$ acting on $(X,\mathcal{B},\mu)$ by measure preserving transformations $(T_{g})_{g\in G}$, functions of the form $f(g)=\langle h_{1},T_{g}h_{2}\rangle$, for $h_{1},h_{2}\in L^{2}(\mu)$, are in $\textrm{WAP}(G)$ (see, for example, Theorem 3.1 in [Bu]).

###### Lemma 6.4.

Let $G$ be a discrete semigroup and let $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ be an ergodic measure preserving system. Let $m$ be the unique left invariant mean on $\textrm{WAP}(G)$ and $f_{1},f_{2}\in L^{2}(\mu)$. Then, $m(\langle f_{1},T_{g}f_{2}\rangle)=\int_{X}f_{1}\ d\mu\int_{X}\bar{f}_{2}\ d\mu.$

###### Proof.

Let $F_{2}\in L^{2}(\mu)$ be defined via $\langle F_{1},F_{2}\rangle=m(\langle F_{1},T_{g}f_{2}\rangle)$ for all $F_{1}\in L^{2}(\mu)$. We will show that $F_{2}=\int_{X}\bar{f}_{2}\ d\mu$, which implies the result. First observe that since the action $(T_{g})_{g\in G}$ is ergodic, the only invariant functions are the constants. Next, we show that $T_{h}F_{2}=F_{2}$ for all $h\in G$, which implies $F_{2}$ is a constant by ergodicity.
Indeed, $\langle F_{1},T_{h}F_{2}\rangle=\langle(T_{h})^{*}F_{1},F_{2}\rangle=m(\langle(T_{h})^{*}F_{1},T_{g}f_{2}\rangle)=m(\langle F_{1},T_{hg}f_{2}\rangle)=m(\langle F_{1},T_{g}f_{2}\rangle)=\langle F_{1},F_{2}\rangle.$ Hence, $F_{2}$ is a constant, so $F_{2}=\langle F_{2},\mathbb{1}_{X}\rangle$. Thus, for all $f_{1}\in L^{2}(\mu)$ we have $\int_{X}f_{1}\cdot\bar{F}_{2}\ d\mu=\langle f_{1},F_{2}\rangle=\langle f_{1},\langle F_{2},\mathbb{1}_{X}\rangle\rangle=\langle\langle f_{1},\mathbb{1}_{X}\rangle,F_{2}\rangle=m(\langle\langle f_{1},\mathbb{1}_{X}\rangle,T_{g}f_{2}\rangle)=m\left(\int_{X}f_{1}\ d\mu\int_{X}T_{g}\bar{f}_{2}\ d\mu\right)=\int_{X}f_{1}\ d\mu\int_{X}\bar{f}_{2}\ d\mu,$ so we are done. ∎

###### Remark 6.5.

It follows from Lemma 6.4 that for any (not necessarily ergodic) measure preserving action of a discrete semigroup $G$ on a probability space $(X,\mathcal{B},\mu)$ and any $A\in\mathcal{B}$ we have $m(\mu(A\cap(T_{g})^{-1}A))\geq\mu^{2}(A).$ Indeed, invoking a general form of the ergodic decomposition we have $m(\mu(A\cap(T_{g})^{-1}A))=m\left(\int_{\Omega}\mu_{t}(A\cap(T_{g})^{-1}A)\ d\nu(t)\right),$ where $\mu_{t}$ is an ergodic measure for $\nu$-a.e. $t\in\Omega$. Then, since $m$ is linear, Lemma 6.4 implies $m\left(\int_{\Omega}\mu_{t}(A\cap(T_{g})^{-1}A)\ d\nu(t)\right)=\int_{\Omega}m(\mu_{t}(A\cap(T_{g})^{-1}A))\ d\nu(t)=\int_{\Omega}\mu_{t}(A)^{2}\ d\nu(t).$ Applying Jensen’s inequality we get $\int_{\Omega}\mu_{t}(A)^{2}\ d\nu(t)\geq\left(\int_{\Omega}\mu_{t}(A)\ d\nu(t)\right)^{2}=\mu^{2}(A),$ as desired.

###### Lemma 6.6.

Let $G$ be a discrete semigroup and let $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ be an ergodic measure preserving system. Let $A,B\in\mathcal{B}$ with $\mu(A)>0$ and $\mu(B)>0$. Then, (6.1) $R_{A,B}:=\left\{g\in G:\mu(B\cap(T_{g})^{-1}A)>\frac{\mu(A)\mu(B)}{2}\right\}\neq\emptyset.$

###### Proof.

We proceed by contradiction. Assume that for all $g\in G$ we have $\mu(B\cap(T_{g})^{-1}A)\leq\frac{\mu(A)\mu(B)}{2}$. Let $m$ be the unique left invariant mean on $\textrm{WAP}(G)$. By Lemma 6.4, we have (6.2) $m(\mu(B\cap(T_{g})^{-1}A))=\mu(A)\mu(B).$ Equation (6.2) contradicts the assumption that, for all $g\in G$, $\mu(B\cap(T_{g})^{-1}A)\leq\frac{\mu(A)\mu(B)}{2}$ (given that $m(f)\geq 0$ whenever $f\geq 0$), so we are done. ∎

###### Lemma 6.7.

Let $G$ be a discrete semigroup and $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ an ergodic measure preserving system. Let $A\in\mathcal{B}$ with $\mu(A)>0$. Then, for all $\varepsilon>0$, there are $g_{1},\dots,g_{k}\in G$ such that $\mu\left(\bigcup_{i=1}^{k}(T_{g_{i}})^{-1}A\right)>1-\varepsilon.$

###### Proof.

Let $A\in\mathcal{B}$ with $\mu(A)>0$. We claim that (6.3) $\sup\left\{\mu\left(\bigcup_{g\in B}(T_{g})^{-1}A\right):B\subseteq G\textrm{ and }|B|\leq|\mathbb{N}|\right\}=1.$ We proceed by contradiction. Let us assume that this is not the case. Then there exists $0<\delta<1$ such that (6.4) $\sup\left\{\mu\left(\bigcup_{g\in B}(T_{g})^{-1}A\right):B\subseteq G\textrm{ and }|B|\leq|\mathbb{N}|\right\}=1-\delta.$ Let $0<\varepsilon^{\prime}<\frac{\mu(A)\delta}{2}$, and choose $B\subseteq G$ with $|B|\leq|\mathbb{N}|$ such that (6.5) $\mu\left(\bigcup_{g\in B}(T_{g})^{-1}A\right)\geq 1-\delta-\varepsilon^{\prime}.$ By (6.4), there is some measurable $C\subseteq X\setminus\bigcup_{g\in B}(T_{g})^{-1}A$ with $\mu(C)\geq\delta$. Now, by Lemma 6.6, we can find $g_{0}\in G$ such that $\mu(C\cap(T_{g_{0}})^{-1}A)\geq\frac{\mu(A)\mu(C)}{2}$.
(Notice that, in particular, it follows that $g_{0}\notin B$.) We have (6.6) $\mu\left(\bigcup_{g\in B\cup\{g_{0}\}}(T_{g})^{-1}A\right)\geq 1-\delta-\varepsilon^{\prime}+\mu(C\cap(T_{g_{0}})^{-1}A)\geq 1-\delta-\varepsilon^{\prime}+\frac{\mu(A)\delta}{2}>1-\delta,$ by our choice of $\varepsilon^{\prime}$ and $g_{0}$. This contradicts (6.4), since clearly $B\cup\{g_{0}\}$ is still a countable subset of $G$. Thus, (6.3) holds. To complete the proof, let $\varepsilon>0$ and choose $B\subseteq G$ with $|B|\leq|\mathbb{N}|$ such that $\mu\left(\bigcup_{g\in B}(T_{g})^{-1}A\right)>1-\frac{\varepsilon}{2}.$ Since $\mu$ is continuous and $B$ is countable, there exist $g_{1},\dots,g_{k}\in B$ such that $\mu\left(\bigcup_{i=1}^{k}(T_{g_{i}})^{-1}A\right)>1-\frac{\varepsilon}{2}-\frac{\varepsilon}{2},$ so we are done. ∎

Before proving an ergodic version of Furstenberg’s correspondence principle for means, we formulate two additional results. We start with a version of Theorem 2.3 for means, which we will juxtapose with its ergodic counterpart, Theorem 6.11, below.

###### Theorem 6.8 (Furstenberg correspondence principle for means (cf. [BLei] and [BMc])).

Let $G$ be a discrete amenable semigroup, let $m\in\mathfrak{M}(G)$, and let $E\subseteq G$ with $m(\mathbb{1}_{E})>0$. Then there exists a probability measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ such that $X$ is compact and Hausdorff, $\mathcal{B}=\textrm{Borel}(X)$ and $(T_{g})_{g\in G}$ is a $G$-action on $X$ by continuous self-maps of $X$. Moreover, there is a set $A\in\mathcal{B}$ for which $\mu(A)=m(\mathbb{1}_{E})$, and $\mu$ is such that for all $k\in\mathbb{N}$, $g_{1},\dots,g_{k}\in G$ we have (6.7) $m\left(\mathbb{1}_{E}\prod_{i=1}^{k}\mathbb{1}_{g_{i}^{-1}E}\right)=\mu(A\cap T_{g_{1}}^{-1}A\cap\dots\cap T_{g_{k}}^{-1}A).$

###### Proof.

The proof for discrete amenable groups provided in [BLei] extends verbatim to our context without any major modification. ∎

###### Remark 6.9.

The proof of Theorem 6.8 in [BLei] can be easily adjusted to include unions and complements as in Theorem 2.3.

The other result we need can be found in [P]:

###### Theorem 6.10 (Proposition 0.1 [P]).

The space of left invariant means $\mathfrak{M}(G)$ is a weak*-compact, convex spanning subset of $\ell^{\infty}(G)^{*}$.

We are now in a position to formulate and prove a general version of the ergodic Furstenberg correspondence principle.

###### Theorem 6.11 (Ergodic Furstenberg correspondence principle for means).

Let $E\subseteq G$ be such that $m(\mathbb{1}_{E})>0$ for some mean $m\in\mathfrak{M}(G)$. Then there exists a mean $\tilde{m}\in\mathfrak{M}(G)$, and an ergodic measure preserving system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ such that for all $k\in\mathbb{N}$ and all $g_{1},\dots,g_{k}\in G$ we have (6.8) $\tilde{m}(\mathbb{1}_{E^{w_{0}}\star g_{1}^{-1}E^{w_{1}}\star\dotso\star g_{k}^{-1}E^{w_{k}}})=\mu(A^{w_{0}}\star(T_{g_{1}})^{-1}A^{w_{1}}\star\dots\star(T_{g_{k}})^{-1}A^{w_{k}}),$ where $A\in\mathcal{B}$ is such that $0<\mu(A)=m(\mathbb{1}_{E})\leq\tilde{m}(\mathbb{1}_{E})$, and each of the stars denotes either union or intersection, with the understanding that

* (i) for all $1\leq i\leq k-1$, the operation represented by $\star$ which stands between $E^{w_{i}}$ and $E^{w_{i+1}}$ is the same as the operation appearing between $A^{w_{i}}$ and $A^{w_{i+1}}$;

* (ii) the choices of parentheses which are needed to make the expressions on both sides of formula (6.8) well defined also match.

###### Proof.
First, we remark that, as in the proof of Theorem 6.8, any invariant mean on $G$ is given by a $G$-invariant probability measure on $\beta G$, the Stone–Čech compactification of $G$, via the isomorphism $(\ell^{\infty}(G))^{*}\cong C(\beta G)^{*}$ (see [GJ] for the background on Stone–Čech compactifications). It is easy to see that this isomorphism is behind the formula (6.8). By Choquet’s theorem (which we can apply in view of Theorem 6.10), we can write (6.9) $m=\int_{\textrm{Ext}(\mathfrak{M}(G))}m_{t}\ d\lambda(t),$ for some probability measure $\lambda$ supported on $\textrm{Ext}(\mathfrak{M}(G))$, the set of extreme points of $\mathfrak{M}(G)$. Notice that extreme points of $\mathfrak{M}(G)$ get mapped to extreme points of the set of $G$-invariant probability measures on $\beta G$ via the isomorphism $(\ell^{\infty}(G))^{*}\cong C(\beta G)^{*}$. It is well known that the measures that are extreme points in the set of $G$-invariant probability measures on $\beta G$ are in fact ergodic. It follows from formula (6.9) that, since $m(\mathbb{1}_{E})>0$, we have $m_{t}(\mathbb{1}_{E})>0$ for a set of $t$ of positive $\lambda$-measure. Thus we can choose $\tilde{m}$, an extreme point of $\mathfrak{M}(G)$, for which $\tilde{m}(\mathbb{1}_{E})\geq m(\mathbb{1}_{E})>0$. Using the aforementioned isomorphism, we obtain an ergodic measure $\mu$ for which (6.8) holds. ∎

###### Remark 6.12.

It is worth noticing that the ergodicity of the system $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ in Theorem 6.11 stems from the fact that we passed from the mean $m$ to $\tilde{m}$. If one restricts oneself to only using a given mean $m$, as in Theorem 6.8, ergodicity cannot be guaranteed. We also see in the statement of Theorem 6.11 that moving from $m$ to $\tilde{m}$ does not affect combinatorial applications, as we can require $\tilde{m}(\mathbb{1}_{E})\geq m(\mathbb{1}_{E})$. This situation is similar to the one discussed in the comments at the end of Section 2.

We are now ready for a proof of Theorem 6.2:

###### Proof of Theorem 6.2.

Let $E\subseteq G$ with $d^{*}(E)>0$. Then, there is some $m_{0}\in\mathfrak{M}(G)$ such that $m_{0}(\mathbb{1}_{E})>0$. Let $(X,\mathcal{B},\mu,(T_{g})_{g\in G})$ be an ergodic measure preserving system satisfying the equality (6.8) for some set $A\in\mathcal{B}$ with $\mu(A)>0$ (see Theorem 6.11 above). Let $\varepsilon>0$. By Lemma 6.7, we can find $g_{1},\dots,g_{k}\in G$ such that $\mu\left(\bigcup_{i=1}^{k}(T_{g_{i}})^{-1}A\right)>1-\varepsilon,$ since $\mu(A)>0$. Equality (6.8) then implies that for the mean $\tilde{m}\in\mathfrak{M}(G)$ provided by Theorem 6.11 we have $\tilde{m}(\mathbb{1}_{\bigcup_{i=1}^{k}g_{i}^{-1}E})>1-\varepsilon,$ whence the result follows from the definition of $d^{*}$. ∎

We conclude this section with brief remarks on yet another generalization of Hindman’s covering theorem. Assume that $G$ is a locally compact amenable group (i.e.
that $G$ has a left invariant _topological_ mean, see [Gre] for more details). The Furstenberg correspondence principle which was proved in [BCRZ] (see also [BF]) can be "upgraded" to an ergodic Furstenberg correspondence principle similar to Theorem 2.8 and Theorem 6.8. Based on this enhancement one can prove a version of Hindman’s Theorem for locally compact groups. Before providing the formulation we need two definitions.

###### Definition 6.13.

Let $G$ be a locally compact amenable group and let $E\subseteq G$. We define the upper Banach density of $E$ as follows: (6.10) $d^{*}(E)=\sup\{m(\mathbb{1}_{E}):m\textrm{ is a left-invariant topological mean on }L^{\infty}(G,\mu)\},$ where $\mu$ is a Haar measure on $G$.

###### Definition 6.14.

Let $G$ be a locally compact amenable group. We say that a set $E\subseteq G$ is _substantial_ if $E\supseteq UW$, where $U$ is a non-empty open subset in $G$ containing $\textrm{id}_{G}$ and $W$ is a measurable set with $d^{*}(W)>0$.

###### Theorem 6.15.

Let $G$ be a locally compact amenable group. Let $E\subseteq G$ be a substantial set. Let $\varepsilon>0$. Then there exist $g_{1},\dots,g_{k}\in G$ such that $d^{*}\left(\bigcup_{i=1}^{k}g_{i}^{-1}E\right)>1-\varepsilon.$

## References

* [BBF] M. Beiglböck, V. Bergelson and A. Fish, Sumset phenomenon in countable amenable groups, Adv. Math. 223 (2010), no. 2, 416–432. * [B1] V. Bergelson, Ergodic Ramsey theory, in Logic and combinatorics (Arcata, Calif., 1985), 63–87, Contemp. Math., 65, Amer. Math. Soc., Providence, RI. * [B2] V. Bergelson. Ergodic theory and Diophantine problems. Topics in symbolic dynamics and applications (Temuco, 1997), 167–205, London Math. Soc. Lecture Note Ser., 279, Cambridge Univ. Press, Cambridge, 2000. * [BCRZ] V. Bergelson, J. C. Christopherson, D. Robertson, P. Zorin-Kranich. Finite products sets and minimally almost periodic groups, J. Funct. Anal. 270 (2016), no. 6, 2126–2167. * [BF] V. Bergelson and H. Furstenberg, WM groups and Ramsey theory, Topology Appl. 156 (2009), no. 16, 2572–2580. * [BG] V. Bergelson and D. Glasscock, On the interplay between additive and multiplicative largeness and its combinatorial applications, J. Combin. Theory Ser. A 172 (2020), 105203, 60 pp. * [BHK] V. Bergelson, B. Host and B. Kra, Multiple recurrence and nilsequences, Invent. Math. 160 (2005), no. 2, 261–303. * [BK] V. Bergelson and I. J. Håland Knutson, Weak mixing implies weak mixing of higher orders along tempered functions, Ergodic Theory Dynam. Systems 29 (2009), no. 5, 1375–1416. * [BKMST] V. Bergelson, G. Kolesnik, M. Madritsch, Y. Son and R. Tichy. Uniform distribution of prime powers and sets of recurrence and van der Corput sets in $\mathbb{Z}^{k}$, Israel J. Math. 201 (2014), no. 2, 729–760. * [BKS] V. Bergelson, G. Kolesnik and Y. Son, Uniform distribution of subpolynomial functions along primes and applications, J. Anal. Math. 137 (2019), no. 1, 135–187. * [BLei] V. Bergelson and A. Leibman, Cubic averages and large intersections, in Recent trends in ergodic theory and dynamical systems, 5–19, Contemp. Math., 631, Amer. Math. Soc., Providence, RI. * [BLes] V. Bergelson and E. Lesigne, Van der Corput sets in ${Z}^{d}$, Colloq. Math. 110 (2008), no. 1, 1–49. * [BMc] V. Bergelson and R. McCutcheon, Recurrence for semigroup actions and a non-commutative Schur theorem, in Topological dynamics and applications (Minneapolis, MN, 1995), 205–222, Contemp. Math., 215, Amer. Math. Soc., Providence, RI. * [BMo] V. Bergelson and J.
Moreira, Van der Corput’s difference theorem: some modern developments, Indag. Math. (N.S.) 27 (2016), no. 2, 437–479. * [BMoR] V. Bergelson, J. Moreira and F. K. Richter. Single and multiple recurrence along non-polynomial sequences. arXiv:1711.05729, 2017. * [Bi] G. Birkhoff, A note on topological groups, Compositio Math. 3 (1936), 427–430. * [BKQW] M. Boshernitzan, G. Kolesnik, A. Quas and M. Wierdl. Ergodic averaging sequences, J. Anal. Math. 95 (2005), 63–103. * [Bu] R. B. Burckel, Weakly almost periodic functions on semigroups, Gordon and Breach Science Publishers, New York, 1970. * [C1] G. Choquet, Existence et unicité des représentations intégrales au moyen des points extrémaux dans les cônes convexes, in Séminaire Bourbaki, Vol. 4, Exp. 139, 33–47, Soc. Math. France, Paris. * [C2] G. Choquet, Les cônes convexes faiblement complets dans l’analyse, in Proc. Internat. Congr. Mathematicians (Stockholm, 1962), 317–330, Inst. Mittag-Leffler, Djursholm. * [DHZ] T. Downarowicz, D. Huczek and G. Zhang, Tilings of amenable groups, J. Reine Angew. Math. 747 (2019), 277–298. * [E] W. F. Eberlein, Abstract ergodic theorems and weak almost periodic functions, Trans. Amer. Math. Soc. 67 (1949), 217–240. * [EW] M. Einsiedler and T. Ward, Ergodic theory with a view towards number theory, Graduate Texts in Mathematics, 259, Springer-Verlag London, Ltd., London, 2011. * [FK] R. Feres and A. Katok, Ergodic theory and dynamics of $G$-spaces (with special emphasis on rigidity phenomena), in Handbook of dynamical systems, Vol. 1A, 665–763, North-Holland, Amsterdam. * [Fra] N. Frantzikinakis, Ergodicity of the Liouville system implies the Chowla conjecture, Discrete Anal. 2017, Paper No. 19, 41 pp. * [F1] H. Furstenberg, Prediction Theory, ProQuest LLC, Ann Arbor, MI, 1958. * [F2] H. Furstenberg, Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation, Math. Systems Theory 1 (1967), 1–49. * [F3] H. Furstenberg, Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions, J. Analyse Math. 31 (1977), 204–256. * [F4] H. Furstenberg, Poincaré recurrence and number theory, Bull. Amer. Math. Soc. (N.S.) 5 (1981), no. 3, 211–234. * [F5] H. Furstenberg, Recurrence in ergodic theory and combinatorial number theory, Princeton University Press, Princeton, NJ, 1981. * [FKO] H. Furstenberg, Y. Katznelson and D. Ornstein, The ergodic theoretical proof of Szemerédi’s theorem, Bull. Amer. Math. Soc. (N.S.) 7 (1982), no. 3, 527–552. * [Gre] F. P. Greenleaf, Invariant means on topological groups and their applications, Van Nostrand Mathematical Studies, No. 16, Van Nostrand Reinhold Co., New York, 1969. * [GJ] L. Gillman and M. Jerison, Rings of continuous functions, The University Series in Higher Mathematics, D. Van Nostrand Co., Inc., Princeton, NJ, 1960. * [GLR] A. Gomilko, M. Lemanczyk, T. de la Rue, On Furstenberg systems of aperiodic multiplicative functions of Matomaki, Radziwill and Tao. arXiv preprint: https://arxiv.org/abs/2006.09958. * [GrS] G. Greschonig and K. Schmidt, Ergodic decomposition of quasi-invariant probability measures, Colloq. Math. 84/85 (2000), part 2, 495–514. * [HK] B. Hasselblatt and A. Katok, Principal structures, in Handbook of dynamical systems, Vol. 1A, 1–203, North-Holland, Amsterdam. * [H1] N. Hindman, Finite sums from sequences within cells of a partition of $N$, J. Combinatorial Theory Ser. A 17 (1974), 1–11. * [H2] N. Hindman, On density, translates, and pairwise sums of integers, J. Combin. Theory Ser.
A 33 (1982), no. 2, 147–157. * [HF1] N. Frantzikinakis and B. Host, The logarithmic Sarnak conjecture for ergodic weights, Ann. of Math. (2) 187 (2018), no. 3, 869–931. * [HF2] N. Frantzikinakis and B. Host, Furstenberg systems of bounded multiplicative functions and applications, arXiv preprint: https://arxiv.org/abs/1804.08556. * [J] R. I. Jewett, The prevalence of uniquely ergodic systems, J. Math. Mech. 19 (1969/1970), 717–729. * [K] S. Kakutani, Selected papers. Vol. 2, Contemporary Mathematicians, Birkhäuser Boston, Inc., Boston, MA, 1986. * [Kr] W. Krieger, On unique ergodicity, in Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. II: Probability theory, 327–346, Univ. California Press, Berkeley, CA. * [N] I. Namioka, Følner’s conditions for amenable semi-groups, Math. Scand. 15 (1964), 18–28. * [P] A. L. T. Paterson, Amenability, Mathematical Surveys and Monographs, 29, American Mathematical Society, Providence, RI, 1988. * [Ph] R. R. Phelps, Lectures on Choquet’s theorem, second edition, Lecture Notes in Mathematics, 1757, Springer-Verlag, Berlin, 2001. * [R] A. Rosenthal, Strictly ergodic models and amenable group action. PhD Thesis. Paris 6, 1986. * [S] K. Schmidt, Asymptotic properties of unitary representations and mixing, Proc. London Math. Soc. (3) 48 (1984), no. 3, 445–460. * [Sz] E. Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, Acta Arith. 27 (1975), 199–245. * [V] V. S. Varadarajan, Groups of automorphisms of Borel spaces, Trans. Amer. Math. Soc. 109 (1963), 191–220. * [W] B. Weiss, Strictly ergodic models for dynamical systems, Bull. Amer. Math. Soc. (N.S.) 13 (1985), no. 2, 143–146.
# A Mean Field Game Approach to Equilibrium Pricing with Market Clearing Condition

Masaaki Fujii and Akihiko Takahashi (Quantitative Finance Course, Graduate School of Economics, The University of Tokyo)

(First version: 6 March, 2020. This version: 3 October, 2020.)

(Disclaimer: All the contents expressed in this research are solely those of the authors and do not represent any views or opinions of any institutions. The authors are not responsible or liable in any manner for any losses and/or damages caused by the use of any contents in this research.)

###### Abstract

In this work, we study an equilibrium-based continuous asset pricing problem which seeks to form a price process endogenously by requiring it to balance the flow of sales-and-purchase orders in the exchange market, where a large number of agents $1\leq i\leq N$ are interacting through the market price. Adopting a mean field game (MFG) approach, we find a special form of forward-backward stochastic differential equations of McKean-Vlasov type with common noise whose solution provides a good approximation of the market price. We show the convergence of the net order flow to zero in the large-$N$ limit and obtain the order of convergence in $N$ under some conditions. We also extend the model to a setup with multiple populations where the agents within each population share the same cost and coefficient functions, but these can differ from one population to another.

Keywords: FBSDE of McKean-Vlasov type, common noise, general equilibrium

## 1 Introduction

One of the most important problems in financial economics is to understand how asset price processes are formed through the interaction among a large number of rational competitive agents. In this paper, using a stylized model of a security exchange, we try to explicitly construct an approximate market price process which balances the flow of sales-and-purchase orders from a large number of rational financial institutions. If we directly force the price process to balance the net order flow, the strategies of the agents become strongly coupled and the problem is hardly solvable. In fact, it is even unclear how to make the cost functions of the agents well-defined, since the market price becomes a very complicated recursive functional of the strategies of all the agents, which makes it difficult to guarantee the convexity of the cost functions. In order to circumvent this problem, we make use of the recent developments of mean field games. Since its inception, brought by the pioneering works of Lasry & Lions [20, 21, 22] and Huang, Malhame & Caines [18], mean field game theory has rapidly developed into one of the most actively studied topics in the fields of probability theory, applied mathematics, engineering, finance and economics. The greatest strength of the mean field game approach is to render notoriously difficult problems of stochastic differential games among many agents tractable by transforming them into a simpler form of stochastic control problem. There exist two approaches to mean field games: one is an analytic approach using partial differential equations (PDEs), and the other is a probabilistic approach based on forward-backward stochastic differential equations (FBSDEs). For details of the analytic approach and its applications, the interested readers may consult the works of Bensoussan, Frehse & Yam [3], Gomes, Nurbekyan & Pimentel [14], Achdou et al.
[1], Gomes, Pimentel & Voskanyan [15] and also Kolokoltsov & Malafeyev [19]. On the other hand, the probabilistic approach was developed by the series of works of Carmona & Delarue [4, 5, 6], and the recent two-volume monograph [7, 8] provides its full mathematical details and many references for a wide array of applications of mean field games. Interestingly, from the perspective of equilibrium asset pricing, the number of applications of mean field games is quite limited. In most of the existing literature, the authors have given a response function of the price process exogenously and searched for an approximate Nash equilibrium among the agents. See, for example, applications to optimal trading as well as liquidation of portfolios, exploitation of exhaustible resources and related issues among many agents [10, 11, 12, 23], or an application to electricity pricing with smart grids [2, 9]. In the work [17], the authors treat explicitly the balance of demand and supply in the oil market, but the demand is exogenously given as a function of the oil price. One notable exception is the work of Gomes & Saude [16], in which the authors explicitly force demand and supply to balance and endogenously construct the market-clearing electricity price. They use the analytic approach, and the resultant equilibrium price process becomes deterministic due to the absence of common noise. In the current paper, we extend the work [16] by adopting the probabilistic approach. In order to understand price processes, in particular those of financial assets, it is crucially important to include systemic signals which impact all the agents. We find an interesting form of FBSDEs of McKean-Vlasov type with common noise as a limit problem. Although it involves a dependence on the conditional law, this dependence appears only through a conditional expectation. This allows us to adopt the well-known continuation method of Peng & Wu [24] to prove the existence of a unique strong solution. The resultant candidate for the market price process is derived completely endogenously from the optimal trading strategies of the agents facing systemic information (including the securities’ coupon stream) as well as idiosyncratic noise. Another benefit of the probabilistic approach is that it allows us to quantify the relation between the actual game with a finite number of agents and its large population limit. In a manner similar to the proof of the $\varepsilon$-Nash equilibrium property in standard mean field games, we show that the solution of the mean-field limit problem actually provides asymptotic market clearing in the large-$N$ limit. Under additional integrability conditions, the Glivenko-Cantelli convergence theorem in the Wasserstein distance even provides a specific order of convergence in terms of the number of agents $N$. We also discuss the extension of the model to the situation with multiple populations, where the agents share the same cost and coefficient functions within each population but these can differ from one population to another. This will provide an important tool to study price formation in the presence of different types of agents, such as buy-side and sell-side institutions, for example. The organization of the paper is as follows: After explaining the notation in Section 2, we give an intuitive derivation of the limit problem from the game with a finite number of agents in Section 3, which motivates the readers to study the special type of FBSDEs of MKV-type. The solvability of the FBSDE is studied in Section 4.
Using the derived regularity of the solution, we prove the asymptotic market clearing in Section 5. In Section 6, we discuss the extension of the model to the setup with multiple populations. Finally, in Section 7, we give concluding remarks and discuss further extensions of the model and future directions of research.

## 2 Notation

We introduce $(N+1)$ complete probability spaces: $(\overline{\Omega}^{0},\overline{{\cal F}}^{0},\overline{\mathbb{P}}^{0})\quad{\rm{and}}\quad(\overline{\Omega}^{i},\overline{{\cal F}}^{i},\overline{\mathbb{P}}^{i})_{i=1}^{N},$ endowed with filtrations $\overline{\mathbb{F}}^{i}:=(\overline{{\cal F}}_{t}^{i})_{t\geq 0}$, $i\in\{0,\cdots,N\}$. Here, $\overline{\mathbb{F}}^{0}$ is the completion of the filtration generated by the $d^{0}$-dimensional Brownian motion $\boldsymbol{W}^{0}$ (hence right-continuous) and, for each $i\in\{1,\cdots,N\}$, $\overline{\mathbb{F}}^{i}$ is the complete and right-continuous augmentation of the filtration generated by a $d$-dimensional Brownian motion $\boldsymbol{W}^{i}$ as well as a $\boldsymbol{W}^{i}$-independent $n$-dimensional square-integrable random variable $\xi^{i}$. The random variables $(\xi^{i})_{i=1}^{N}$ are supposed to have the same law. We also introduce the product probability spaces $\Omega^{i}=\overline{\Omega}^{0}\times\overline{\Omega}^{i},\quad{\cal F}^{i},\quad\mathbb{F}^{i}=({\cal F}_{t}^{i})_{t\geq 0},\quad\mathbb{P}^{i},\quad i\in\{1,\cdots,N\},$ where $({\cal F}^{i},\mathbb{P}^{i})$ is the completion of $(\overline{{\cal F}}^{0}\otimes\overline{{\cal F}}^{i},\overline{\mathbb{P}}^{0}\otimes\overline{\mathbb{P}}^{i})$ and $\mathbb{F}^{i}$ is the complete and right-continuous augmentation of $(\overline{{\cal F}}_{t}^{0}\otimes\overline{{\cal F}}_{t}^{i})_{t\geq 0}$. In the same way, we define the complete probability space $(\Omega,{\cal F},\mathbb{P})$ endowed with $\mathbb{F}=({\cal F}_{t})_{t\geq 0}$ satisfying the usual conditions as a product of $(\overline{\Omega}^{i},\overline{{\cal F}}^{i},\overline{\mathbb{P}}^{i};\overline{\mathbb{F}}^{i})_{i=0}^{N}$. Throughout the work, the symbols $L$ and $L_{\varpi}$ denote given positive constants, and the symbol $C$ denotes a general positive constant which may change line by line. When we want to emphasize that $C$ depends only on some specific variables, say $a$ and $b$, we use the symbol $C(a,b)$. For a given constant $T>0$, we use the following notation for frequently encountered spaces:

* $\mathbb{L}^{2}({\cal G};\mathbb{R}^{d})$ denotes the set of $\mathbb{R}^{d}$-valued ${\cal G}$-measurable square integrable random variables.

* $\mathbb{S}^{2}(\mathbb{G};\mathbb{R}^{d})$ is the set of $\mathbb{R}^{d}$-valued $\mathbb{G}$-adapted continuous processes $\boldsymbol{X}$ satisfying $||X||_{\mathbb{S}^{2}}:=\mathbb{E}\bigl[\sup_{t\in[0,T]}|X_{t}|^{2}\bigr]^{\frac{1}{2}}<\infty.$

* $\mathbb{H}^{2}(\mathbb{G};\mathbb{R}^{d})$ is the set of $\mathbb{R}^{d}$-valued $\mathbb{G}$-progressively measurable processes $\boldsymbol{Z}$ satisfying $||Z||_{\mathbb{H}^{2}}:=\mathbb{E}\Bigl[\int_{0}^{T}|Z_{t}|^{2}dt\Bigr]^{\frac{1}{2}}<\infty.$

* ${\cal L}(X)$ denotes the law of a random variable $X$.

* ${\cal P}(\mathbb{R}^{d})$ is the set of probability measures on $(\mathbb{R}^{d},{\cal B}(\mathbb{R}^{d}))$.
* ${\cal P}_{p}(\mathbb{R}^{d})$ with $p\geq 1$ is the subset of ${\cal P}(\mathbb{R}^{d})$ with finite $p$-th moment, i.e., the set of $\mu\in{\cal P}(\mathbb{R}^{d})$ satisfying $M_{p}(\mu):=\Bigl(\int_{\mathbb{R}^{d}}|x|^{p}\mu(dx)\Bigr)^{\frac{1}{p}}<\infty.$

We always endow ${\cal P}_{p}(\mathbb{R}^{d})$, $p\geq 1$, with the $p$-Wasserstein distance $W_{p}$, which makes ${\cal P}_{p}(\mathbb{R}^{d})$ a complete separable metric space. As an important property, for any $\mu,\nu\in{\cal P}_{p}(\mathbb{R}^{d})$, we have (2.1) $W_{p}(\mu,\nu)=\inf\bigl\{\mathbb{E}[|X-Y|^{p}]^{\frac{1}{p}}:{\cal L}(X)=\mu,{\cal L}(Y)=\nu\bigr\},$ where the infimum is taken over all pairs of random variables with laws equal to $\mu$ and $\nu$, respectively. For more details, see Chapter 5 in [7]. We frequently omit the arguments such as $(\mathbb{G},\mathbb{R}^{d})$ in the above definitions when there is no confusion from the context.

## 3 Intuitive Derivation of the Mean Field Problem

In this section, in order to introduce the special form of forward-backward stochastic differential equations of McKean-Vlasov type to be studied in this paper, we give a heuristic derivation of the mean-field limit problem from the corresponding equilibrium problem with a finite number of agents. As a motivating example, we consider the equilibrium-based pricing problem of $n$ types of securities, which are continuously traded in the security exchange in the presence of a large number of participating agents indexed by $i\in\{1,\cdots,N\}$. Every agent is supposed to have many small clients who can only trade directly with the agent via over-the-counter (OTC) markets and have no access to the security exchange. We suppose that each agent $i\in\{1,\cdots,N\}$ tries to solve the problem (3.1) $\inf_{\boldsymbol{\alpha^{i}}\in\mathbb{A}^{i}}J^{i}(\boldsymbol{\alpha}^{i})$ with some cost functional $J^{i}(\boldsymbol{\alpha}^{i}):=\mathbb{E}\Bigl[\int_{0}^{T}f(t,X_{t}^{i},\alpha_{t}^{i},\varpi_{t},c_{t}^{0},c_{t}^{i})dt+g(X_{T}^{i},\varpi_{T},c_{T}^{0},c_{T}^{i})\Bigr],$ subject to the dynamic constraint: $dX_{t}^{i}=\Bigl(\alpha_{t}^{i}+l(t,\varpi_{t},c_{t}^{0},c_{t}^{i})\Bigr)dt+\sigma_{0}(t,\varpi_{t},c_{t}^{0},c_{t}^{i})dW_{t}^{0}+\sigma(t,\varpi_{t},c_{t}^{0},c_{t}^{i})dW_{t}^{i}$ with $X_{0}^{i}=\xi^{i}\in\mathbb{L}^{2}(\overline{{\cal F}}_{0}^{i};\mathbb{R}^{n})$. Here, $(X_{t}^{i})_{t\geq 0}$ is an $\mathbb{R}^{n}$-valued process denoting the time-$t$ position of the $n$ securities for the agent $i$ with the initial position $\xi^{i}$. $(c_{t}^{0})_{t\geq 0}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{0};\mathbb{R}^{n})$ denotes the coupon payments from the securities or the market news commonly available to all the agents, while $(c_{t}^{i})_{t\geq 0}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{i};\mathbb{R}^{n})$ denotes some independent factors affecting only the agent $i$. Moreover, $(c_{t}^{i})_{t\geq 0}$, $1\leq i\leq N$, are also assumed to have a common law. We further suppose $c_{T}^{0}$ and $c_{T}^{i}$ are square integrable in order to handle the terminal cost $g$. Each agent controls $(\alpha_{t}^{i})_{t\geq 0}$, denoting the trading speed through the security exchange. The remaining terms $(l,\sigma_{0},\sigma)$ denote the order flow to the agent from his/her clients through over-the-counter (OTC) markets. $(\varpi_{t})_{t\geq 0}$ is the market price of the $n$ securities.
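To fix ideas, the shape of this state equation is easy to visualize by simulation. The following is a minimal Euler-Maruyama sketch in Python, assuming $n=d^{0}=d=1$ and purely illustrative toy coefficients and price path (none of these specific functional choices come from the model above):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 1.0, 1000
dt = T / K
sqdt = dt ** 0.5

varpi = 1.0 + 0.1 * np.sin(np.linspace(0.0, 2.0 * np.pi, K))  # toy price path, treated as an input
X = np.zeros(K + 1)                                           # inventory, X_0 = xi = 0
for k in range(K):
    alpha = -0.5 * varpi[k]            # some given trading speed (toy feedback rule)
    l = 0.2 * (2.0 - varpi[k])         # toy OTC client order flow, decreasing in the price
    dW0, dW = sqdt * rng.standard_normal(2)
    X[k + 1] = X[k] + (alpha + l) * dt + 0.1 * dW0 + 0.1 * dW

print(f"terminal inventory X_T = {X[-1]:.4f}")
```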
The space of admissible strategies $\mathbb{A}^{i}$ of the agent $i$ is the set of processes $(\alpha^{i}_{t})_{t\geq 0}$ adapted to the complete right-continuous augmentation of the filtration $\bigl(\sigma\{\varpi_{s}:s\leq t\}\vee{\cal F}_{t}^{i}\bigr)_{t\geq 0}$ satisfying $\mathbb{E}\int_{0}^{T}|\alpha_{t}^{i}|^{2}dt<\infty.$ In contrast to the standard optimization problems with a given market price process, we want to understand the fundamental mechanism of the financial market which determines the market price through the equilibrium condition. The equilibrium price $(\varpi_{t})_{t\geq 0}$ adapted to the filtration $\mathbb{F}$ is determined endogenously so that the optimal strategies of the agents $(\widehat{\alpha}^{i}_{t})_{i=1}^{N}$ satisfy the market clearing condition for every $t\in[0,T]$, $\mathbb{P}$-a.s. (3.2) $\sum_{i=1}^{N}\widehat{\alpha}_{t}^{i}=0,$ which denotes the balance point of demand and supply at the security exchange. Although we have already made the simplifying assumption that the cost functions as well as the coefficient functions are common to all the agents, the problem is still hardly solvable. Due to the clearing condition (3.2), we cannot adopt an open-loop equilibrium approach. In particular, $(\varpi_{t})_{t\geq 0}$ becomes a complicated functional of the agents’ trading strategies, and hence the problem for each agent is highly recursive with respect to $(\alpha^{i}_{t})_{t\geq 0,1\leq i\leq N}$. It is even unclear how to guarantee that the cost function is well-defined, i.e., convex with respect to the controls. In order to obtain some insight, let us consider a much simpler situation. It is natural to suppose that the impact on the market price $(\varpi_{t})_{t\in[0,T]}$ from an individual agent becomes negligibly small when $N$ is sufficiently large. Moreover, $(\varpi_{t})_{t\in[0,T]}$ is likely to be given by an $\overline{\mathbb{F}}^{0}$-progressively measurable process, since the effects from the idiosyncratic parts of many agents are expected to cancel out. If this is the case, the problem for each agent reduces to the standard stochastic optimal control problem in a given random environment $(\varpi_{t},c_{t}^{0},c_{t}^{i})_{t\in[0,T]}$ with an $\mathbb{F}^{i}$-adapted trading strategy $(\alpha^{i}_{t})_{t\in[0,T]}$. Let us first investigate this simple problem in detail. We introduce the cost functions $f:[0,T]\times(\mathbb{R}^{n})^{5}\rightarrow\mathbb{R}$, $g:(\mathbb{R}^{n})^{4}\rightarrow\mathbb{R}$, $\overline{f}:[0,T]\times(\mathbb{R}^{n})^{4}\rightarrow\mathbb{R}$ and $\overline{g}:(\mathbb{R}^{n})^{3}\rightarrow\mathbb{R}$, which are measurable functions such that $f(t,x,\alpha,\varpi,c^{0},c):=\langle\varpi,\alpha\rangle+\frac{1}{2}\langle\alpha,\Lambda\alpha\rangle+\overline{f}(t,x,\varpi,c^{0},c),$ $g(x,\varpi,c^{0},c):=-\delta\langle\varpi,x\rangle+\overline{g}(x,c^{0},c).$

###### Assumption 3.1.

(MFG-a) (i) $\Lambda$ is a positive definite $n\times n$ symmetric matrix with $\underline{\lambda}I_{n\times n}\leq\Lambda\leq\overline{\lambda}I_{n\times n}$ in the sense of quadratic forms, where $\underline{\lambda}$ and $\overline{\lambda}$ are some constants satisfying $0<\underline{\lambda}\leq\overline{\lambda}$.
(ii) For any $(t,x,\varpi,c^{0},c)$, $|\overline{f}(t,x,\varpi,c^{0},c)|+|\overline{g}(x,c^{0},c)|\leq L(1+|x|^{2}+|\varpi|^{2}+|c^{0}|^{2}+|c|^{2}).$ (iii) $\overline{f}$ and $\overline{g}$ are continuously differentiable in $x$ and satisfy, for any $(t,x,x^{\prime},\varpi,c^{0},c)$, $|\partial_{x}\overline{f}(t,x^{\prime},\varpi,c^{0},c)-\partial_{x}\overline{f}(t,x,\varpi,c^{0},c)|+|\partial_{x}\overline{g}(x^{\prime},c^{0},c)-\partial_{x}\overline{g}(x,c^{0},c)|\leq L|x^{\prime}-x|,$ and $|\partial_{x}\overline{f}(t,x,\varpi,c^{0},c)|+|\partial_{x}\overline{g}(x,c^{0},c)|\leq L(1+|x|+|\varpi|+|c^{0}|+|c|)$. (iv) The functions $\overline{f}$ and $\overline{g}$ are convex in $x$ in the sense that for any $(t,x,x^{\prime},\varpi,c^{0},c)$, $\overline{f}(t,x^{\prime},\varpi,c^{0},c)-\overline{f}(t,x,\varpi,c^{0},c)-\langle x^{\prime}-x,\partial_{x}\overline{f}(t,x,\varpi,c^{0},c)\rangle\geq\frac{\gamma^{f}}{2}|x^{\prime}-x|^{2},$ $\overline{g}(x^{\prime},c^{0},c)-\overline{g}(x,c^{0},c)-\langle x^{\prime}-x,\partial_{x}\overline{g}(x,c^{0},c)\rangle\geq\frac{\gamma^{g}}{2}|x^{\prime}-x|^{2},$ with some constants $\gamma^{f},\gamma^{g}\geq 0$. (v) $l,\sigma_{0},\sigma$ are measurable functions defined on $[0,T]\times(\mathbb{R}^{n})^{3}$ and are $\mathbb{R}^{n},\mathbb{R}^{n\times d^{0}}$ and $\mathbb{R}^{n\times d}$-valued, respectively. Moreover they satisfy the linear growth condition: $|(l,\sigma_{0},\sigma)(t,\varpi,c^{0},c)|\leq L(1+|\varpi|+|c^{0}|+|c|)$ for any $(t,\varpi,c^{0},c)$. (vi) $\delta\in[0,1)$ is a given constant.

The first term $\langle\varpi,\alpha\rangle$ of $f$ denotes the direct cost incurred by the sales and purchase of the securities, and the second term $\frac{1}{2}\langle\alpha,\Lambda\alpha\rangle$ is some fee to be paid to the exchange depending on the trading speed, or may be interpreted as some internal cost. The first term of $g$ denotes the mark-to-market value at the closing time with some discount factor $\delta<1$. (We shall see that the condition $\delta<1$ is necessary to obtain a well-defined terminal condition for the limit problem.) $\overline{f}$ and $\overline{g}$ denote the running and terminal costs, which are affected by the market price, coupon streams, or the news.

###### Remark 3.1.

If we think of $c^{0}$ as a coupon stream of the securities, one may consider, for example, $\overline{f}(t,x,\varpi,c^{0},c)=-\langle c^{0},x\rangle+\overline{f}^{\prime}(t,x,\varpi,c)$ as a running cost with an appropriate measurable function $\overline{f}^{\prime}$. For securities with a given maturity $T$ with exogenously specified payoff $c^{0}$, such as bonds and futures, it is natural to consider $g(x,c^{0})=\overline{g}(x,c^{0})=-\langle c^{0},x\rangle$ as the terminal cost.

For this problem, the (reduced) Hamiltonian is given by $H(t,x,y,\alpha,\varpi,c^{0},c)=\langle y,\alpha+l(t,\varpi,c^{0},c)\rangle+f(t,x,\alpha,\varpi,c^{0},c).$ Since $\partial_{\alpha}H(t,x,y,\alpha,\varpi,c^{0},c)=y+\varpi+\Lambda\alpha$, the minimizer of the Hamiltonian is (3.3) $\widehat{\alpha}(y,\varpi):=-\overline{\Lambda}(y+\varpi),$ where $\overline{\Lambda}:=\Lambda^{-1}$.
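As a quick sanity check of (3.3), one can verify numerically that $\widehat{\alpha}(y,\varpi)=-\overline{\Lambda}(y+\varpi)$ beats nearby controls. The sketch below, purely illustrative and assuming only NumPy, evaluates the $\alpha$-dependent part of the Hamiltonian for a randomly generated positive definite $\Lambda$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
Lam = A @ A.T + n * np.eye(n)          # a symmetric positive definite Lambda
y, varpi = rng.standard_normal(n), rng.standard_normal(n)

def H_alpha(a):
    """alpha-dependent part of H: <varpi + y, a> + (1/2) <a, Lambda a>."""
    return (varpi + y) @ a + 0.5 * a @ Lam @ a

a_hat = -np.linalg.solve(Lam, y + varpi)     # alpha_hat = -Lambda^{-1} (y + varpi)
for a in (a_hat + 0.1 * rng.standard_normal(n) for _ in range(3)):
    assert H_alpha(a_hat) <= H_alpha(a)      # a_hat is never beaten by a perturbation
print("minimizer check passed:", a_hat)
```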
The adjoint FBSDE associated with the stochastic maximum principle for each agent $1\leq i\leq N$ is thus given by $dX_{t}^{i}=\Bigl(\widehat{\alpha}(Y_{t}^{i},\varpi_{t})+l(t,\varpi_{t},c_{t}^{0},c_{t}^{i})\Bigr)dt+\sigma_{0}(t,\varpi_{t},c_{t}^{0},c_{t}^{i})dW_{t}^{0}+\sigma(t,\varpi_{t},c_{t}^{0},c_{t}^{i})dW_{t}^{i},$ $dY_{t}^{i}=-\partial_{x}\overline{f}(t,X_{t}^{i},\varpi_{t},c_{t}^{0},c_{t}^{i})dt+Z_{t}^{i,0}dW_{t}^{0}+Z_{t}^{i}dW_{t}^{i},$ (3.4) with $X_{0}^{i}=\xi^{i}$ and $Y_{T}^{i}=\partial_{x}g(X_{T}^{i},\varpi_{T},c_{T}^{0},c_{T}^{i})$.

###### Theorem 3.1.

Under Assumption (MFG-a) and a given $(\varpi_{t})_{t\in[0,T]}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{0};\mathbb{R}^{n})$, the problem (3.1) for each agent is uniquely characterized by the FBSDE (3.4), which is strongly solvable with a unique solution $(X^{i},Y^{i},Z^{i,0},Z^{i})\in\mathbb{S}^{2}(\mathbb{F}^{i};\mathbb{R}^{n})\times\mathbb{S}^{2}(\mathbb{F}^{i};\mathbb{R}^{n})\times\mathbb{H}^{2}(\mathbb{F}^{i};\mathbb{R}^{n\times d^{0}})\times\mathbb{H}^{2}(\mathbb{F}^{i};\mathbb{R}^{n\times d})$.

###### Proof.

Since the cost functions are jointly convex in $(x,\alpha)$ and strictly convex in $\alpha$, the problem is the special situation investigated in Section 1.4.4 in [8]. Note that, in our case, the diffusion terms $\sigma_{0},\sigma$ are independent of $(X^{i},\alpha^{i})$. The proof is the direct result of Theorem 1.60 in the same reference. ∎

Using the above solution, the optimal strategy of each agent is given by $\widehat{\alpha}^{i}_{t}=-\overline{\Lambda}(Y_{t}^{i}+\varpi_{t}),\quad t\in[0,T].$ Let us check the market clearing condition. In the current situation, (3.2) is equivalent to $\varpi_{t}=-\frac{1}{N}\sum_{i=1}^{N}Y_{t}^{i},$ which is of course inconsistent with our simplifying assumption that requires $(\varpi_{t})_{t\geq 0}$ to be an $\overline{\mathbb{F}}^{0}$-adapted process. However, in the current setup, for any $t\in[0,T]$, $(Y^{i}_{t})_{i=1}^{N}$ are exchangeable random variables due to the construction of the probability space, the common coefficient functions, and the fact that $(\xi^{i})_{i=1}^{N}$ as well as $(c_{t}^{i},t\in[0,T])_{i=1}^{N}$ are assumed to be i.i.d. Thus De Finetti’s theory of exchangeable sequences of random variables tells us that $\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{i=1}^{N}Y_{t}^{i}=\mathbb{E}\Bigl[Y_{t}^{1}|\bigcap_{k\geq 1}\sigma\{Y_{t}^{j},j\geq k\}\Bigr]\quad{\rm a.s.}$ See, for example, Theorem 2.1 in [8]. It also seems natural to expect that the tail $\sigma$-field is reduced to $\overline{{\cal F}}_{t}^{0}$. Therefore we can expect that, in the large-$N$ limit, the market price of the securities may be given by $\varpi_{t}=-\mathbb{E}[Y_{t}^{1}|\overline{{\cal F}}_{t}^{0}]$.
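The averaging mechanism behind this heuristic is easy to see numerically. In the toy Monte Carlo sketch below (illustrative only; the model $Y^{i}=\sin(W^{0})+\epsilon^{i}$ is an assumption made just for this demonstration, not part of the paper), the empirical average $\frac{1}{N}\sum_{i}Y^{i}$ over agents sharing one realization of the common noise approaches the conditional expectation $\mathbb{E}[Y^{1}|\overline{{\cal F}}^{0}]$:

```python
import numpy as np

rng = np.random.default_rng(3)
W0 = rng.standard_normal()                 # one realization of the common factor
a = np.sin(W0)                             # E[Y^i | F^0] in this toy model
for N in (10, 1_000, 100_000):
    Y = a + rng.standard_normal(N)         # idiosyncratic parts eps^i ~ N(0, 1)
    print(f"N = {N:>6}:  (1/N) sum Y^i = {Y.mean():+.4f}   vs   E[Y | F^0] = {a:+.4f}")
```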
The above observation motivates us to consider the following FBSDE: $dX_{t}=\Bigl(\widehat{\alpha}\bigl(Y_{t},-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]\bigr)+l\bigl(t,-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)\Bigr)dt+\sigma_{0}\bigl(t,-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)dW_{t}^{0}+\sigma\bigl(t,-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)dW_{t}^{1},$ $dY_{t}=-\partial_{x}\overline{f}\bigl(t,X_{t},-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)dt+Z_{t}^{0}dW_{t}^{0}+Z_{t}dW_{t}^{1},$ with $X_{0}=\xi$ and $Y_{T}=\frac{\delta}{1-\delta}\mathbb{E}\bigl[\partial_{x}\overline{g}(X_{T},c_{T}^{0},c_{T})|\overline{{\cal F}}_{T}^{0}\bigr]+\partial_{x}\overline{g}(X_{T},c_{T}^{0},c_{T})$. To simplify the notation, we have omitted the superscript $1$ from $Y^{1}$, $X^{1}$, $\xi^{1}$ and $c^{1}$. Let us remark on the terminal condition. The relation $Y_{T}=\partial_{x}g(X_{T},-\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}],c_{T}^{0},c_{T})$ does not yet fully specify $Y_{T}$. Taking the conditional expectation on both sides gives $\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}]=\delta\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}]+\mathbb{E}\bigl[\partial_{x}\overline{g}(X_{T},c_{T}^{0},c_{T})|\overline{{\cal F}}_{T}^{0}\bigr],$ which implies $\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}]=\frac{1}{1-\delta}\mathbb{E}\bigl[\partial_{x}\overline{g}(X_{T},c_{T}^{0},c_{T})|\overline{{\cal F}}_{T}^{0}\bigr]$. Substituting this expression for $\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}]$ in $\partial_{x}g$, we get the above specification of the terminal condition. This is the FBSDE we are going to study in the following. It is of McKean-Vlasov type with common noise, and similar to the FBSDEs relevant for the extended mean field games. In the following, we are going to prove the existence of a unique solution to the above FBSDE under appropriate conditions and then show that $-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]$ is actually a good approximation of the market price by investigating how accurately it achieves the market clearing condition (3.2) when $N$ increases.

## 4 Solvability of the mean-field FBSDE

We now investigate the solvability of the FBSDE derived in the last section: (4.1) $dX_{t}=\Bigl(\widehat{\alpha}\bigl(Y_{t},-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]\bigr)+l\bigl(t,-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)\Bigr)dt+\sigma_{0}\bigl(t,-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)dW_{t}^{0}+\sigma\bigl(t,-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)dW_{t}^{1},$ $dY_{t}=-\partial_{x}\overline{f}\bigl(t,X_{t},-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr)dt+Z_{t}^{0}dW_{t}^{0}+Z_{t}dW_{t}^{1},$ with $X_{0}=\xi$ and $Y_{T}=\frac{\delta}{1-\delta}\mathbb{E}\bigl[\partial_{x}\overline{g}(X_{T},c_{T}^{0},c_{T})|\overline{{\cal F}}_{T}^{0}\bigr]+\partial_{x}\overline{g}(X_{T},c_{T}^{0},c_{T})$. $\widehat{\alpha}$ is defined as in (3.3).
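As a small numerical sanity check of the terminal-condition algebra above (illustrative only; the two-valued common factor and the toy samples of $\partial_{x}\overline{g}$ are assumptions made for this demonstration), one can verify that $Y_{T}:=\frac{\delta}{1-\delta}\mathbb{E}[\partial_{x}\overline{g}|\overline{{\cal F}}_{T}^{0}]+\partial_{x}\overline{g}$ indeed satisfies $Y_{T}=\delta\,\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}]+\partial_{x}\overline{g}$:

```python
import numpy as np

rng = np.random.default_rng(4)
delta = 0.3
group = rng.integers(0, 2, size=10_000)   # value of the common factor, playing the role of F^0
G = np.where(group == 0, 1.0, -2.0) + rng.standard_normal(10_000)  # toy samples of dg/dx

cond = lambda v: np.array([v[group == j].mean() for j in range(2)])[group]  # E[. | F^0]
Y_T = delta / (1.0 - delta) * cond(G) + G
print(np.max(np.abs(Y_T - (delta * cond(Y_T) + G))))   # ~ 0 up to float rounding
```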
$(c^{0}_{t})_{t\geq 0}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{0};\mathbb{R}^{n})$ and $(c_{t})_{t\geq 0}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{1};\mathbb{R}^{n})$ with square integrable $c_{T}^{0},c_{T}$ are given as inputs. Recall the notational convention $\xi=\xi^{1}$ and $c=c^{1}$.

### 4.1 Unique existence for small $T$

###### Assumption 4.1.

(MFG-b) For any $(t,x,c^{0},c)\in[0,T]\times(\mathbb{R}^{n})^{3}$ and any $\varpi,\varpi^{\prime}\in\mathbb{R}^{n}$, the coefficient functions $l,\sigma_{0},\sigma$ and $\overline{f}$ satisfy $|(l,\sigma_{0},\sigma)(t,\varpi,c^{0},c)-(l,\sigma_{0},\sigma)(t,\varpi^{\prime},c^{0},c)|+|\partial_{x}\overline{f}(t,x,\varpi,c^{0},c)-\partial_{x}\overline{f}(t,x,\varpi^{\prime},c^{0},c)|\leq L_{\varpi}|\varpi-\varpi^{\prime}|.$

Due to the Lipschitz continuity and the absence of $(Z^{0},Z)$ in the diffusion coefficients of the forward SDE, we have the following short-term existence result.

###### Theorem 4.1.

Under Assumptions (MFG-a,b), there exists some constant $\tau>0$, which depends only on $(L,L_{\varpi},\underline{\lambda},\delta)$, such that for any $T\leq\tau$, there exists a unique strong solution $(X,Y,Z^{0},Z)\in\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d^{0}})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d})$ to the FBSDE (4.1).

###### Proof.

Although there exist terms involving $\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]$, one can adopt the standard technique for Lipschitz FBSDEs. See, for example, the proof of Theorem 1.45 in [8]. ∎

### 4.2 Unique existence for general $T$

In order to obtain an existence result for general $T$, we are going to apply the technique developed by Peng & Wu [24]. In the case of the standard optimization problem, the joint convexity in the state and control variables combined with strict convexity in the control variable is enough to obtain the unique existence. Interestingly, however, in our problem we need strict convexity also in the state variable $X$. As we shall see, this is because the term $-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]$, which appears due to the clearing condition, weakens the convexity.

###### Assumption 4.2.

(MFG-c1) (i) The functions $\sigma_{0}$ and $\sigma$ are independent of the argument $\varpi$. (ii) For any $t\in[0,T]$, any random variables $x,x^{\prime},c^{0},c\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ and any sub-$\sigma$-field ${\cal G}\subset{\cal F}$, the function $l$ satisfies the monotonicity condition with some positive constant $\gamma^{l}>0$: $\mathbb{E}\Bigl[\langle l(t,\mathbb{E}[x|{\cal G}],c^{0},c)-l(t,\mathbb{E}[x^{\prime}|{\cal G}],c^{0},c),x-x^{\prime}\rangle\Bigr]\geq\gamma^{l}\mathbb{E}\bigl[|\mathbb{E}[x-x^{\prime}|{\cal G}]|^{2}\bigr].$ (iii) There exists a strictly positive constant $\gamma$ satisfying $0<\gamma\leq\Bigl(\gamma^{f}-\frac{L_{\varpi}^{2}}{4\gamma^{l}}\Bigr)\wedge\gamma^{g}$. Moreover, for any $x,x^{\prime},c^{0},c\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ and any sub-$\sigma$-field ${\cal G}\subset{\cal F}$, the function $\overline{g}$ satisfies $\gamma^{g}\mathbb{E}[|x-x^{\prime}|^{2}]+\frac{\delta}{1-\delta}\mathbb{E}\Bigl[\langle\mathbb{E}\bigl[\partial_{x}\overline{g}(x,c^{0},c)-\partial_{x}\overline{g}(x^{\prime},c^{0},c)|{\cal G}\bigr],x-x^{\prime}\rangle\Bigr]\geq\gamma\mathbb{E}[|x-x^{\prime}|^{2}].$

###### Remark 4.1.
If $l$ and $\partial_{x}\overline{g}$ have separable forms such as $h(x)+h^{c}(c^{0},c)$ with some functions $h$ and $h^{c}$ (with $x$ replaced by the price argument $\varpi$ in the case of $l$), then the conditions (ii) and (iii) are satisfied when the function $h$ is monotone. Economically speaking, the condition (ii) implies that the demand from the individual OTC clients of each agent for the security decreases when its market price rises. The next theorem is the first main existence result.

###### Theorem 4.2.

Under Assumptions (MFG-a,b,c1), there exists a unique strong solution $(X,Y,Z^{0},Z)\in\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d^{0}})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d})$ to the FBSDE $(\ref{fbsde-single-p})$.

###### Proof.

In order to simplify the notation, let us define the functionals $B,F$ and $G$ for any $y,x,c^{0},c\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ by $\displaystyle B(t,y,c^{0},c):=\Bigl{(}-\overline{\Lambda}(y-\mathbb{E}[y|\overline{{\cal F}}_{t}^{0}])+l(t,-\mathbb{E}[y|\overline{{\cal F}}_{t}^{0}],c^{0},c)\Bigr{)},$ $\displaystyle F(t,x,y,c^{0},c):=-\partial_{x}\overline{f}\bigl{(}t,x,-\mathbb{E}[y|\overline{{\cal F}}_{t}^{0}],c^{0},c\bigr{)},$ $\displaystyle G(x,c^{0},c):=\frac{\delta}{1-\delta}\mathbb{E}\bigl{[}\partial_{x}\overline{g}(x,c^{0},c)|\overline{{\cal F}}_{T}^{0}\bigr{]}+\partial_{x}\overline{g}(x,c^{0},c)~{}.$ (4.2) With the convention $\Delta y:=y-y^{\prime}$, $\Delta x:=x-x^{\prime}$, one can easily confirm $\displaystyle\mathbb{E}\bigl{[}\langle B(t,y,c^{0},c)-B(t,y^{\prime},c^{0},c),\Delta y\rangle\bigr{]}\leq-\gamma^{l}\mathbb{E}\bigl{[}\mathbb{E}[\Delta y|\overline{{\cal F}}_{t}^{0}]^{2}\bigr{]}~{},$ $\displaystyle\mathbb{E}\bigl{[}\langle F(t,x,y,c^{0},c)-F(t,x^{\prime},y^{\prime},c^{0},c),\Delta x\rangle\bigr{]}\leq-\Bigl{(}\gamma^{f}-\frac{L_{\varpi}^{2}}{4\gamma^{l}}\Bigr{)}\mathbb{E}[|\Delta x|^{2}]+\gamma^{l}\mathbb{E}\bigl{[}\mathbb{E}[\Delta y|\overline{{\cal F}}_{t}^{0}]^{2}\bigr{]}~{},$ $\displaystyle\mathbb{E}\bigl{[}\langle G(x,c^{0},c)-G(x^{\prime},c^{0},c),\Delta x\rangle\bigr{]}\geq\gamma\mathbb{E}[|\Delta x|^{2}],$ (4.3) where the first estimate follows from (MFG-c1)(ii) and Jensen's inequality, and the second from (MFG-a)(iv), (MFG-b) and the Cauchy-Schwarz and Young inequalities; the cross term is bounded as $\mathbb{E}\bigl{[}L_{\varpi}|\mathbb{E}[\Delta y|\overline{{\cal F}}_{t}^{0}]||\Delta x|\bigr{]}\leq\frac{L_{\varpi}^{2}}{4\gamma^{l}}\mathbb{E}[|\Delta x|^{2}]+\gamma^{l}\mathbb{E}\bigl{[}\mathbb{E}[\Delta y|\overline{{\cal F}}_{t}^{0}]^{2}\bigr{]}$. The third one is a direct consequence of (MFG-c1)(iii). We first make the following hypothesis: there exists some constant $\varrho\in[0,1)$ such that, for any $(I_{t}^{b})_{t\geq 0}$, $(I_{t}^{f})_{t\geq 0}$ in $\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})$ and any $\eta\in\mathbb{L}^{2}({\cal F}_{T}^{1};\mathbb{R}^{n})$, there exists a unique solution $(x^{\varrho},y^{\varrho},z^{0,\varrho},z^{\varrho})\in\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d^{0}})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d})$ to the FBSDE: $\displaystyle dx_{t}^{\varrho}=\bigl{(}\varrho B(t,y_{t}^{\varrho},c_{t}^{0},c_{t})+I_{t}^{b}\bigr{)}dt+\sigma_{0}(t,c_{t}^{0},c_{t})dW_{t}^{0}+\sigma(t,c_{t}^{0},c_{t})dW_{t}^{1}~{},$ $\displaystyle dy_{t}^{\varrho}=-\bigl{(}(1-\varrho)\gamma x_{t}^{\varrho}-\varrho F(t,x_{t}^{\varrho},y_{t}^{\varrho},c_{t}^{0},c_{t})+I_{t}^{f}\bigr{)}dt+z_{t}^{0,\varrho}dW_{t}^{0}+z_{t}^{\varrho}dW_{t}^{1}~{},$ (4.4) with $x_{0}^{\varrho}=\xi$ and $y_{T}^{\varrho}=\varrho G(x_{T}^{\varrho},c_{T}^{0},c_{T})+(1-\varrho)x_{T}^{\varrho}+\eta$.
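Observe that at $\varrho=1$, choosing $I^{b}\equiv I^{f}\equiv 0$ and $\eta=0$ in $(\ref{shifted-0})$, the forward drift becomes $B(t,y_{t},c_{t}^{0},c_{t})$, the backward drift becomes $F(t,x_{t},y_{t},c_{t}^{0},c_{t})=-\partial_{x}\overline{f}\bigl{(}t,x_{t},-\mathbb{E}[y_{t}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}\bigr{)}$, and the terminal condition becomes $y_{T}=G(x_{T},c_{T}^{0},c_{T})$, so that the system coincides with the FBSDE $(\ref{fbsde-single-p})$. Hence it suffices to extend the hypothesis from $\varrho=0$ up to $\varrho=1$.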
Note that when $\varrho=0$ we have a decoupled set of SDE and BSDE and hence the hypothesis trivially holds. Our goal is to extend $\varrho$ up to $1$ by following the continuation method of Peng & Wu [24]. Now, for an arbitrary set of inputs $(x,y,z^{0},z)\in\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})^{2}\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d^{0}})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d})$ and a constant $\zeta\in(0,1)$, consider $\displaystyle dX_{t}=\bigl{[}\varrho B(t,Y_{t},c_{t}^{0},c_{t})+\zeta B(t,y_{t},c_{t}^{0},c_{t})+I_{t}^{b}\bigr{]}dt+\sigma_{0}(t,c_{t}^{0},c_{t})dW_{t}^{0}+\sigma(t,c_{t}^{0},c_{t})dW_{t}^{1}~{},$ $\displaystyle dY_{t}=-\bigl{[}(1-\varrho)\gamma X_{t}-\varrho F(t,X_{t},Y_{t},c_{t}^{0},c_{t})+\zeta(-\gamma x_{t}-F(t,x_{t},y_{t},c_{t}^{0},c_{t}))+I_{t}^{f}\bigr{]}dt$ $\displaystyle\qquad\quad+Z_{t}^{0}dW_{t}^{0}+Z_{t}dW_{t}^{1}~{},$ (4.5) with $X_{0}=\xi$ and $Y_{T}=\varrho G(X_{T},c_{T}^{0},c_{T})+(1-\varrho)X_{T}+\zeta(G(x_{T},c_{T}^{0},c_{T})-x_{T})+\eta$. The existence of the solution $(X,Y,Z^{0},Z)\in\mathbb{S}^{2}\times\mathbb{S}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$ is guaranteed by the previous hypothesis. We are going to prove that the map $(x,y,z^{0},z)\mapsto(X,Y,Z^{0},Z)$ defined above becomes a strict contraction when $\zeta>0$ is chosen small enough. For two sets of inputs $(x,y,z^{0},z)$ and $(x^{\prime},y^{\prime},z^{0\prime},z^{\prime})$, let us denote the corresponding solutions to $(\ref{shifted-1})$ by $(X,Y,Z^{0},Z)$ and $(X^{\prime},Y^{\prime},Z^{0\prime},Z^{\prime})$, respectively. We put $\Delta X_{t}:=X_{t}-X_{t}^{\prime}$, $\Delta Y_{t}:=Y_{t}-Y_{t}^{\prime}$ and similarly for the others. Applying Itô's formula to $\langle\Delta X_{t},\Delta Y_{t}\rangle$ and using the estimates $(\ref{peng-wu-condition})$, we obtain $\displaystyle\mathbb{E}\bigl{[}\langle\Delta X_{T},\Delta Y_{T}\rangle\bigr{]}\leq-\gamma\mathbb{E}\int_{0}^{T}|\Delta X_{t}|^{2}dt$ $\displaystyle\hskip 85.35826pt+\zeta C\mathbb{E}\int_{0}^{T}\Bigl{[}|\Delta Y_{t}|(|\Delta y_{t}|+\mathbb{E}[|\Delta y_{t}||\overline{{\cal F}}_{t}^{0}])+|\Delta X_{t}|(|\Delta x_{t}|+\mathbb{E}[|\Delta y_{t}||\overline{{\cal F}}_{t}^{0}])\Bigr{]}dt~{},$ $\displaystyle\mathbb{E}\bigl{[}\langle\Delta X_{T},\Delta Y_{T}\rangle\bigr{]}\geq(\varrho\gamma+(1-\varrho))\mathbb{E}[|\Delta X_{T}|^{2}]-\zeta C\mathbb{E}\bigl{[}|\Delta X_{T}|(|\Delta x_{T}|+\mathbb{E}[|\Delta x_{T}||\overline{{\cal F}}_{T}^{0}])\bigr{]}~{},$ with some $\varrho$-independent constant $C$. Let us set $\gamma_{c}:=\min(1,\gamma)>0$. Then one easily confirms that $0<\gamma_{c}\leq\varrho\gamma+(1-\varrho)$ for any $\varrho\in[0,1)$.
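Indeed, since $\gamma_{c}\leq 1$ and $\gamma_{c}\leq\gamma$ by definition, we have, for any $\varrho\in[0,1)$, $\displaystyle\varrho\gamma+(1-\varrho)\geq\varrho\gamma_{c}+(1-\varrho)\gamma_{c}=\gamma_{c}~{},$ while the strict positivity follows from $\gamma>0$.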
Then the above estimates yield $\displaystyle\gamma_{c}\mathbb{E}\Bigl{[}|\Delta X_{T}|^{2}+\int_{0}^{T}|\Delta X_{t}|^{2}dt\Bigr{]}$ $\displaystyle\leq$ $\displaystyle\zeta C\mathbb{E}\bigl{[}|\Delta X_{T}|(|\Delta x_{T}|+\mathbb{E}[|\Delta x_{T}||\overline{{\cal F}}_{T}^{0}])\bigr{]}$ $\displaystyle+\zeta C\mathbb{E}\int_{0}^{T}\Bigl{[}|\Delta Y_{t}|(|\Delta y_{t}|+\mathbb{E}[|\Delta y_{t}||\overline{{\cal F}}_{t}^{0}])+|\Delta X_{t}|(|\Delta x_{t}|+\mathbb{E}[|\Delta y_{t}||\overline{{\cal F}}_{t}^{0}])\Bigr{]}dt~{}.$ Using Young's inequality and a new constant $C$, we get $\displaystyle\mathbb{E}[|\Delta X_{T}|^{2}]+\mathbb{E}\int_{0}^{T}|\Delta X_{t}|^{2}dt\leq\zeta C\mathbb{E}\int_{0}^{T}\bigl{(}|\Delta Y_{t}|^{2}+(|\Delta x_{t}|^{2}+|\Delta y_{t}|^{2})\bigr{)}dt+\zeta C\mathbb{E}[|\Delta x_{T}|^{2}]~{}.$ (4.6) Treating $X,X^{\prime}$ as inputs, the standard estimates for Lipschitz BSDEs (see, for example, Theorem 4.2.3 in [25]) give $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta Y_{t}|^{2}+\int_{0}^{T}(|\Delta Z_{t}^{0}|^{2}+|\Delta Z_{t}|^{2})dt\Bigr{]}$ $\displaystyle\qquad\quad\leq C\mathbb{E}\Bigl{[}|\Delta X_{T}|^{2}+\int_{0}^{T}|\Delta X_{t}|^{2}dt\Bigr{]}+\zeta C\mathbb{E}\Bigl{[}|\Delta x_{T}|^{2}+\int_{0}^{T}(|\Delta x_{t}|^{2}+|\Delta y_{t}|^{2})dt\Bigr{]}~{}.$ Combining with $(\ref{eq-pw-1})$ and choosing $\zeta>0$ small, we obtain $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta Y_{t}|^{2}+\int_{0}^{T}(|\Delta Z_{t}^{0}|^{2}+|\Delta Z_{t}|^{2})dt\Bigr{]}\leq\zeta C\mathbb{E}\Bigl{[}|\Delta x_{T}|^{2}+\int_{0}^{T}(|\Delta x_{t}|^{2}+|\Delta y_{t}|^{2})dt\Bigr{]}~{}.$ (4.7) By similar procedures, we also have $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta X_{t}|^{2}\Bigr{]}\leq\zeta C\mathbb{E}\Bigl{[}|\Delta x_{T}|^{2}+\int_{0}^{T}(|\Delta x_{t}|^{2}+|\Delta y_{t}|^{2})dt\Bigr{]}~{}.$ (4.8) From $(\ref{eq-pw-2})$ and $(\ref{eq-pw-3})$, we obtain $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta X_{t}|^{2}+\sup_{t\in[0,T]}|\Delta Y_{t}|^{2}+\int_{0}^{T}(|\Delta Z_{t}^{0}|^{2}+|\Delta Z_{t}|^{2})dt\Bigr{]}$ $\displaystyle\leq\zeta C\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta x_{t}|^{2}+\sup_{t\in[0,T]}|\Delta y_{t}|^{2}+\int_{0}^{T}(|\Delta z_{t}^{0}|^{2}+|\Delta z_{t}|^{2})dt\Bigr{]}.$ Thus there exists $\zeta>0$, independent of the size of $\varrho$, that makes the map $(x,y,z^{0},z)\mapsto(X,Y,Z^{0},Z)$ a strict contraction. Therefore the initial hypothesis holds true for $(\varrho+\zeta)$, which establishes the existence. The uniqueness follows from the next proposition. ∎

###### Proposition 4.1.

Given two sets of inputs $(\xi,c^{0},c),(\xi^{\prime},c^{0\prime},c^{\prime})$, coefficients $(\delta,\Lambda),(\delta^{\prime},\Lambda^{\prime})$ and the coefficient functions $(l,\sigma_{0},\sigma,\overline{f},\overline{g}),(l^{\prime},\sigma_{0}^{\prime},\sigma^{\prime},\overline{f}^{\prime},\overline{g}^{\prime})$ satisfying Assumptions (MFG-a,b,c1), let us denote the corresponding solutions to $(\ref{fbsde-single-p})$ by $(X,Y,Z^{0},Z)$ and $(X^{\prime},Y^{\prime},Z^{0\prime},Z^{\prime})$, respectively. We also define the functionals $(B,F,G)$ and $(B^{\prime},F^{\prime},G^{\prime})$ by $(\ref{BG-notation})$ with the corresponding coefficients, respectively.
Then, we have the following stability result: $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta X_{t}|^{2}+\sup_{t\in[0,T]}|\Delta Y_{t}|^{2}+\int_{0}^{T}(|\Delta Z_{t}^{0}|^{2}+|\Delta Z_{t}|^{2})dt\Bigr{]}$ $\displaystyle\qquad\leq C\mathbb{E}\Bigl{[}|\Delta\xi|^{2}+|\overline{G}|^{2}+\int_{0}^{T}\Bigl{(}|\overline{F}(t)|^{2}+|\overline{B}(t)|^{2}+|\overline{\sigma}_{0}(t)|^{2}+|\overline{\sigma}(t)|^{2}\Bigr{)}dt\Bigr{]}~{},$ where $C$ is a constant depending only on $T$ as well as the Lipschitz constants of the system, and $\displaystyle\overline{B}(t):=B(t,Y_{t}^{\prime},c^{0}_{t},c_{t})-B^{\prime}(t,Y_{t}^{\prime},c_{t}^{0\prime},c_{t}^{\prime}),$ $\displaystyle\overline{F}(t):=F(t,X_{t}^{\prime},Y_{t}^{\prime},c_{t}^{0},c_{t})-F^{\prime}(t,X_{t}^{\prime},Y_{t}^{\prime},c_{t}^{0\prime},c_{t}^{\prime}),$ $\displaystyle(\overline{\sigma}_{0},\overline{\sigma})(t):=(\sigma_{0}(t,c_{t}^{0},c_{t})-\sigma_{0}^{\prime}(t,c_{t}^{0\prime},c_{t}^{\prime}),\sigma(t,c_{t}^{0},c_{t})-\sigma^{\prime}(t,c_{t}^{0\prime},c_{t}^{\prime})),$ $\displaystyle\overline{G}:=G(X_{T}^{\prime},c_{T}^{0},c_{T})-G^{\prime}(X_{T}^{\prime},c_{T}^{0\prime},c_{T}^{\prime})~{},$ and $\Delta\xi:=\xi-\xi^{\prime}$, $\Delta X_{t}:=X_{t}-X_{t}^{\prime}$ and similarly for the other variables.

###### Proof.

Let us put $\Delta B(t):=B(t,Y_{t},c_{t}^{0},c_{t})-B(t,Y_{t}^{\prime},c_{t}^{0},c_{t})$, $\Delta F(t):=F(t,X_{t},Y_{t},c_{t}^{0},c_{t})-F(t,X_{t}^{\prime},Y_{t}^{\prime},c_{t}^{0},c_{t})$ and $\Delta G:=G(X_{T},c_{T}^{0},c_{T})-G(X_{T}^{\prime},c_{T}^{0},c_{T})$. We get by Itô's formula that $\displaystyle\mathbb{E}\bigl{[}\langle\Delta X_{T},\Delta G+\overline{G}\rangle\bigr{]}=\mathbb{E}\Bigl{[}\langle\Delta\xi,\Delta Y_{0}\rangle+\int_{0}^{T}\Bigl{(}\langle\overline{F}(t),\Delta X_{t}\rangle+\langle\overline{B}(t),\Delta Y_{t}\rangle$ $\displaystyle\qquad+\langle\overline{\sigma}_{0}(t),\Delta Z_{t}^{0}\rangle+\langle\overline{\sigma}(t),\Delta Z_{t}\rangle+\bigl{(}\langle\Delta F(t),\Delta X_{t}\rangle+\langle\Delta B(t),\Delta Y_{t}\rangle\bigr{)}\Bigr{)}dt\Bigr{]}~{}.$ Using $(\ref{peng-wu-condition})$, we obtain $\displaystyle\gamma\mathbb{E}\Bigl{[}|\Delta X_{T}|^{2}+\int_{0}^{T}|\Delta X_{t}|^{2}dt\Bigr{]}\leq\mathbb{E}\Bigl{[}\langle\Delta\xi,\Delta Y_{0}\rangle-\langle\Delta X_{T},\overline{G}\rangle$ $\displaystyle\qquad+\int_{0}^{T}\Bigl{(}\langle\overline{F}(t),\Delta X_{t}\rangle+\langle\overline{B}(t),\Delta Y_{t}\rangle+\langle\overline{\sigma}_{0}(t),\Delta Z_{t}^{0}\rangle+\langle\overline{\sigma}(t),\Delta Z_{t}\rangle\Bigr{)}dt\Bigr{]}~{}.$ (4.9) On the other hand, the standard estimates for Lipschitz SDEs and BSDEs give $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta Y_{t}|^{2}+\int_{0}^{T}(|\Delta Z_{t}^{0}|^{2}+|\Delta Z_{t}|^{2})dt\Bigr{]}$ $\displaystyle\qquad\leq C\mathbb{E}\Bigl{[}|\overline{G}|^{2}+\int_{0}^{T}|\overline{F}(t)|^{2}dt\Bigr{]}+C\mathbb{E}\Bigl{[}|\Delta X_{T}|^{2}+\int_{0}^{T}|\Delta X_{t}|^{2}dt\Bigr{]}~{},$ (4.10) $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta X_{t}|^{2}\Bigr{]}\leq C\mathbb{E}\Bigl{[}|\Delta\xi|^{2}+\int_{0}^{T}\bigl{[}|\overline{B}(t)|^{2}+|\overline{\sigma}_{0}(t)|^{2}+|\overline{\sigma}(t)|^{2}\bigr{]}dt\Bigr{]}+C\mathbb{E}\int_{0}^{T}|\Delta Y_{t}|^{2}dt~{}.$ Combining the above inequalities $(\ref{stability-1})$ and $(\ref{stability-y})$ gives $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|\Delta X_{t}|^{2}+\sup_{t\in[0,T]}|\Delta Y_{t}|^{2}+\int_{0}^{T}(|\Delta Z_{t}^{0}|^{2}+|\Delta Z_{t}|^{2})dt\Bigr{]}$ $\displaystyle\leq
C\mathbb{E}\Bigl{[}|\Delta\xi|^{2}+|\overline{G}|^{2}+\int_{0}^{T}\bigl{[}|\overline{F}(t)|^{2}+|\overline{B}(t)|^{2}+|\overline{\sigma}_{0}(t)|^{2}+|\overline{\sigma}(t)|^{2}\bigr{]}dt\Bigr{]}$ $\displaystyle+C\mathbb{E}\Bigl{[}\langle\Delta\xi,\Delta Y_{0}\rangle-\langle\Delta X_{T},\overline{G}\rangle+\int_{0}^{T}\bigl{[}\langle\overline{F}(t),\Delta X_{t}\rangle+\langle\overline{B}(t),\Delta Y_{t}\rangle+\langle\overline{\sigma}_{0}(t),\Delta Z_{t}^{0}\rangle+\langle\overline{\sigma}(t),\Delta Z_{t}\rangle\bigr{]}dt\Bigr{]}~{}.$ Now a simple application of Young's inequality establishes the claim. ∎

###### Corollary 4.1.

Under Assumptions (MFG-a,b,c1), the solution $(X,Y,Z^{0},Z)$ to the FBSDE $(\ref{fbsde-single-p})$ satisfies the following estimate: $\displaystyle\mathbb{E}\Bigl{[}\sup_{t\in[0,T]}|X_{t}|^{2}+\sup_{t\in[0,T]}|Y_{t}|^{2}+\int_{0}^{T}(|Z_{t}^{0}|^{2}+|Z_{t}|^{2})dt\Bigr{]}\leq C\mathbb{E}\Bigl{[}|\xi|^{2}+|\partial_{x}\overline{g}(0,c_{T}^{0},c_{T})|^{2}$ $\displaystyle\hskip 113.81102pt+\int_{0}^{T}\Bigl{(}|\partial_{x}\overline{f}(t,0,0,c_{t}^{0},c_{t})|^{2}+|l(t,0,c_{t}^{0},c_{t})|^{2}+|(\sigma_{0},\sigma)(t,c_{t}^{0},c_{t})|^{2}\Bigr{)}dt\Bigr{]}~{},$ where $C$ is a constant depending only on $T,\delta$ and the Lipschitz constants of the system.

###### Proof.

By a quick inspection of the proof of Proposition 4.1, one can confirm that, as long as there exists a solution $(X^{\prime},Y^{\prime},Z^{0\prime},Z^{\prime})\in\mathbb{S}^{2}\times\mathbb{S}^{2}\times\mathbb{H}^{2}\times\mathbb{H}^{2}$, the primed coefficients need not satisfy Assumptions (MFG-a,b,c1). In particular, by setting $\xi^{\prime}$ and $(l^{\prime},\sigma_{0}^{\prime},\sigma^{\prime},\overline{f}^{\prime},\overline{g}^{\prime})$ all to zero, we have a trivial solution $(X^{\prime},Y^{\prime},Z^{0^{\prime}},Z^{\prime})=(0,0,0,0)$. The desired estimate now follows from Proposition 4.1. ∎

#### Securities of maturity $T$ with exogenously specified payoff

If we consider the exchange markets of bonds and futures, or other financial derivatives with maturity $T$, those securities cease to exist at $T$ after paying an exogenously specified amount of cash $c_{T}^{0}$. In this case, it is natural to take $\delta=0$ and $\displaystyle g(x,c^{0})=\overline{g}(x,c^{0}):=-\langle c^{0},x\rangle~{},$ (4.11) since there is no reason to penalize the outstanding volume at $T$. In this case, the terminal function $g$ in $(\ref{terminal-special})$ is not strictly convex. Fortunately, even in this case, we can prove the unique existence as well as stability results of the same form.

###### Assumption 4.3.

(MFG-c2) (i) The functions $\sigma_{0}$ and $\sigma$ are independent of the argument $\varpi$. (ii) For any $t\in[0,T]$, any random variables $x,x^{\prime},c^{0},c\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ and any sub-$\sigma$-field ${\cal G}\subset{\cal F}$, the function $l$ satisfies the monotone condition with some positive constant $\gamma^{l}>0$: $\displaystyle\mathbb{E}\Bigl{[}\langle l(t,\mathbb{E}[x|{\cal G}],c^{0},c)-l(t,\mathbb{E}[x^{\prime}|{\cal G}],c^{0},c),x-x^{\prime}\rangle\Bigr{]}\geq\gamma^{l}\mathbb{E}\bigl{[}\mathbb{E}[x-x^{\prime}|{\cal G}]^{2}\bigr{]}~{}.$ (iii) $\gamma:=\gamma^{f}-\frac{L_{\varpi}^{2}}{4\gamma^{l}}$ is strictly positive and the terminal function $g$ is given by $(\ref{terminal-special})$ with $\delta=0$.

###### Theorem 4.3.
Under Assumptions (MFG-a,b,c2), there exists a unique strong solution $(X,Y,Z^{0},Z)\in\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{S}^{2}(\mathbb{F}^{1};\mathbb{R}^{n})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d^{0}})\times\mathbb{H}^{2}(\mathbb{F}^{1};\mathbb{R}^{n\times d})$ to the FBSDE $(\ref{fbsde-single-p})$. Moreover, the same form of stability and $\mathbb{L}^{2}$ estimates given in Proposition 4.1 and Corollary 4.1 hold.

###### Proof.

Note that, in this case, the terminal condition for the BSDE is independent of $X_{T}$. Thus, as in Theorem 2.3 of [24], we put $y_{T}^{\varrho}=Y_{T}=-c_{T}^{0}$ in $(\ref{shifted-0})$ and $(\ref{shifted-1})$, respectively. Using the fact that $\langle\Delta X_{T},\Delta Y_{T}\rangle=0$, one can follow the same arguments to get the desired result. The proof of the stability result can also be done in almost exactly the same way. ∎

## 5 Asymptotic Market Clearing

We are now ready to investigate whether our FBSDE $(\ref{fbsde-single-p})$ actually provides a good approximation of the market price and, if so, how accurate it is. By Theorem 3.1, if we use $(-\mathbb{E}\bigl{[}Y_{t}|\overline{{\cal F}}_{t}^{0}\bigr{]})_{t\in[0,T]}$ as the input $(\varpi_{t})_{t\in[0,T]}$, where $(Y_{t})_{t\in[0,T]}$ is the unique solution to the FBSDE $(\ref{fbsde-single-p})$ with the convention $\xi=\xi^{1}$ and $c=c^{1}$, the optimal strategy of the individual agent is given by $\displaystyle\widehat{\alpha}^{i}_{\rm{mf}}(t):=\widehat{\alpha}(Y_{t}^{i},-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}])=-\overline{\Lambda}(Y_{t}^{i}-\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}])$ (5.1) where $(Y^{i}_{t})_{t\in[0,T]}$ is the solution to $(\ref{agent-FBSDE})$ with $(\varpi_{t}=-\mathbb{E}\bigl{[}Y_{t}|\overline{{\cal F}}_{t}^{0}\bigr{]})_{t\in[0,T]}$.

###### Theorem 5.1.

If the conditions for Theorem 4.1, Theorem 4.2 or Theorem 4.3 are satisfied, then we have $\displaystyle\lim_{N\rightarrow\infty}\mathbb{E}\int_{0}^{T}\Bigl{|}\frac{1}{N}\sum_{i=1}^{N}\widehat{\alpha}^{i}_{\rm{mf}}(t)\Bigr{|}^{2}dt=0~{}.$ Moreover, if there exists some constant $\Gamma$ such that $\sup_{t\in[0,T]}\mathbb{E}\bigl{[}|Y_{t}|^{q}\bigr{]}^{\frac{1}{q}}\leq\Gamma<\infty$ for some $q>4$, then there exists some constant $C$ independent of $N$ such that $\displaystyle\mathbb{E}\int_{0}^{T}\Bigl{|}\frac{1}{N}\sum_{i=1}^{N}\widehat{\alpha}^{i}_{\rm{mf}}(t)\Bigr{|}^{2}dt\leq C\Gamma^{2}\epsilon_{N},$ (5.2) where $\epsilon_{N}:=N^{-2/\max(n,4)}\bigl{(}1+\log(N)\mathbb{1}_{\\{n=4\\}}\bigr{)}$.

###### Proof.

Let us consider the following set of FBSDEs with $1\leq i\leq N$ on the filtered probability space $(\Omega,{\cal F},\mathbb{P};\mathbb{F})$ constructed in Section 2.
$\displaystyle d\underline{X}_{t}^{i}$ $\displaystyle=$ $\displaystyle\Bigl{(}-\overline{\Lambda}(\underline{Y}_{t}^{i}-\mathbb{E}[\underline{Y}_{t}^{i}|\overline{{\cal F}}_{t}^{0}])+l(t,-\mathbb{E}[\underline{Y}_{t}^{i}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}^{i})\Bigr{)}dt$ $\displaystyle\qquad+\sigma_{0}(t,-\mathbb{E}[\underline{Y}_{t}^{i}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}^{i})dW_{t}^{0}+\sigma(t,-\mathbb{E}[\underline{Y}_{t}^{i}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}^{i})dW_{t}^{i},$ $\displaystyle d\underline{Y}_{t}^{i}$ $\displaystyle=$ $\displaystyle-\partial_{x}\overline{f}(t,\underline{X}_{t}^{i},-\mathbb{E}[\underline{Y}_{t}^{i}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}^{i})dt+\underline{Z}_{t}^{i,0}dW_{t}^{0}+\underline{Z}_{t}^{i}dW_{t}^{i},$ with $\underline{X}_{0}^{i}=\xi^{i}$ and $\underline{Y}_{T}^{i}=\delta/(1-\delta)\mathbb{E}[\partial_{x}\overline{g}(\underline{X}_{T}^{i},c_{T}^{0},c_{T}^{i})|\overline{{\cal F}}_{T}^{0}]+\partial_{x}\overline{g}(\underline{X}_{T}^{i},c_{T}^{0},c_{T}^{i})$. Thanks to the existence of a unique strong solution and the Yamada-Watanabe theorem for FBSDEs (see Theorem 1.33 in [8]), there exists some measurable function $\Phi$ such that, for every $1\leq i\leq N$, $\displaystyle(\underline{X}^{i}_{t},\underline{Y}^{i}_{t})_{t\in[0,T]}=\Phi\Bigl{(}(c_{t}^{0})_{t\in[0,T]},(W_{t}^{0})_{t\in[0,T]},\xi^{i},(c_{t}^{i})_{t\in[0,T]},(W_{t}^{i})_{t\in[0,T]}\Bigr{)}~{}.$ Hence, conditionally on $\overline{{\cal F}}^{0}$, the processes $(\underline{X}^{i}_{t},\underline{Y}^{i}_{t})_{t\in[0,T]}$ with $1\leq i\leq N$ are independently and identically distributed. In particular, we have $\mathbb{P}$-a.s. $\displaystyle\mathbb{E}[\underline{Y}_{t}^{i}|\overline{{\cal F}}_{t}^{0}]=\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}],\quad\forall t\in[0,T],$ $\displaystyle\mathbb{E}[\partial_{x}\overline{g}(\underline{X}_{T}^{i},c_{T}^{0},c_{T}^{i})|\overline{{\cal F}}_{T}^{0}]=\mathbb{E}[\partial_{x}\overline{g}(X_{T},c_{T}^{0},c_{T})|\overline{{\cal F}}_{T}^{0}]~{}.$ (5.3) Note that, under the convention $\xi^{1}=\xi$ and $c^{1}=c$, we actually have $(\underline{X}^{1},\underline{Y}^{1})=(X,Y)$. From $(\ref{law-identity})$, we conclude that $(X^{i}_{t},Y^{i}_{t},Z^{i,0}_{t},Z^{i}_{t})_{t\in[0,T]}=(\underline{X}^{i}_{t},\underline{Y}^{i}_{t},\underline{Z}^{i,0}_{t},\underline{Z}_{t}^{i})_{t\in[0,T]}$ in $\mathbb{S}^{2}(\mathbb{F}^{i})\times\mathbb{S}^{2}(\mathbb{F}^{i})\times\mathbb{H}^{2}(\mathbb{F}^{i})\times\mathbb{H}^{2}(\mathbb{F}^{i})$.
Therefore, $\displaystyle\frac{1}{N}\sum_{i=1}^{N}\widehat{\alpha}^{i}_{\rm{mf}}(t)=-\overline{\Lambda}\Bigl{(}\frac{1}{N}\sum_{i=1}^{N}\underline{Y}_{t}^{i}-\mathbb{E}[\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}~{}.$ (5.4) We can easily check that $\displaystyle\mathbb{E}\Bigl{[}W_{2}\Bigl{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{\underline{Y}_{t}^{i}},{\cal L}(\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0})\Bigr{)}^{2}\Bigr{|}\overline{{\cal F}}_{t}^{0}\Bigr{]}\leq\frac{2}{N}\sum_{i=1}^{N}\mathbb{E}\bigl{[}|\underline{Y}_{t}^{i}|^{2}|\overline{{\cal F}}_{t}^{0}\bigr{]}+2\mathbb{E}\bigl{[}|\underline{Y}_{t}^{1}|^{2}|\overline{{\cal F}}_{t}^{0}\bigr{]}=4\mathbb{E}\bigl{[}|\underline{Y}_{t}^{1}|^{2}|\overline{{\cal F}}_{t}^{0}\bigr{]}~{}.$ Since $(\underline{Y}_{t}^{i})_{1\leq i\leq N}$ are $\overline{{\cal F}}^{0}_{t}$-conditionally independently and identically distributed and also $\underline{Y}^{1}\in\mathbb{S}^{2}$, the same arguments leading to $(2.14)$ in [8] imply that the pointwise convergence holds: $\displaystyle\lim_{N\rightarrow\infty}\mathbb{E}\Bigl{[}W_{2}\Bigl{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{\underline{Y}_{t}^{i}},{\cal L}(\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0})\Bigr{)}^{2}\Bigr{]}=0~{}.$ (5.5) We are now going to show that the set of functions $(f_{N})_{N\in\mathbb{N}}$, defined by $\displaystyle[0,T]\ni t\mapsto f_{N}(t):=\mathbb{E}\bigl{[}W_{2}\bigl{(}\overline{\mu}_{t},\mu_{t}\bigr{)}^{2}\bigr{]}\in\mathbb{R}$ with $\overline{\mu}_{t}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{\underline{Y}_{t}^{i}}$ and $\mu_{t}:={\cal L}(\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0})$, is precompact in ${\cal C}([0,T];\mathbb{R})$ endowed with the topology of uniform convergence. In fact, uniformly in $N$, $\displaystyle\sup_{t\in[0,T]}|f_{N}(t)|\leq 4\sup_{t\in[0,T]}\mathbb{E}\bigl{[}|\underline{Y}_{t}^{1}|^{2}\bigr{]}\leq C<\infty$ (5.6) where $C$ is given by the estimate in Corollary 4.1. Moreover, for any $0\leq t,s\leq T$, the Cauchy-Schwarz inequality, $(\ref{fN-inequality})$ and the triangle inequality give $\displaystyle|f_{N}(t)-f_{N}(s)|\leq\mathbb{E}\Bigl{[}\Bigl{(}W_{2}(\overline{\mu}_{t},\mu_{t})+W_{2}(\overline{\mu}_{s},\mu_{s})\Bigr{)}^{2}\Bigr{]}^{\frac{1}{2}}\mathbb{E}\Bigl{[}\Bigl{(}W_{2}(\overline{\mu}_{t},\mu_{t})-W_{2}(\overline{\mu}_{s},\mu_{s})\Bigr{)}^{2}\Bigr{]}^{\frac{1}{2}}$ $\displaystyle\quad\leq C\mathbb{E}\Bigl{[}\Bigl{(}W_{2}(\overline{\mu}_{t},\mu_{t})-W_{2}(\overline{\mu}_{s},\mu_{s})\Bigr{)}^{2}\Bigr{]}^{\frac{1}{2}}\leq C\mathbb{E}\Bigl{[}W_{2}(\overline{\mu}_{t},\overline{\mu}_{s})^{2}+W_{2}(\mu_{t},\mu_{s})^{2}\Bigr{]}^{\frac{1}{2}}$ $\displaystyle\quad\leq C\mathbb{E}\Bigl{[}\frac{1}{N}\sum_{i=1}^{N}|\underline{Y}_{t}^{i}-\underline{Y}_{s}^{i}|^{2}+|\underline{Y}_{t}^{1}-\underline{Y}_{s}^{1}|^{2}\Bigr{]}^{\frac{1}{2}}$ $\displaystyle\quad\leq C\mathbb{E}\bigl{[}|\underline{Y}_{t}^{1}-\underline{Y}_{s}^{1}|^{2}\bigr{]}^{\frac{1}{2}}~{},$ uniformly in $N$, where we have used the fact that $(\underline{Y}^{i})_{i\geq 1}$ are conditionally i.i.d. in the last inequality. Since $(\underline{Y}_{t}^{1})_{t\in[0,T]}$ is a continuous process, the above estimate shows that $(f_{N})_{N\in\mathbb{N}}$ is equicontinuous; since we are working on a finite time interval, it is in fact uniformly equicontinuous. Now, the Arzela-Ascoli theorem implies the desired precompactness.
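Let us spell out the final step of this compactness argument: by the precompactness just established, every subsequence of $(f_{N})_{N\in\mathbb{N}}$ admits a further subsequence converging uniformly on $[0,T]$, and by $(\ref{pointwise-conv})$ any such uniform limit must be the zero function; hence the whole sequence converges to zero uniformly.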
Combining with the pointwise convergence $(\ref{pointwise-conv})$, we thus conclude $\displaystyle\lim_{N\rightarrow\infty}\sup_{t\in[0,T]}\mathbb{E}\Bigl{[}W_{2}\Bigl{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{\underline{Y}_{t}^{i}},{\cal L}(\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0})\Bigr{)}^{2}\Bigr{]}=0~{}.$ (5.7) From the definition of the Wasserstein distance $(\ref{def-W})$, we have $\displaystyle\Bigl{|}\frac{1}{N}\sum_{i=1}^{N}\underline{Y}_{t}^{i}-\mathbb{E}[\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0}]\Bigr{|}\leq W_{1}\Bigl{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{\underline{Y}_{t}^{i}},{\cal L}(\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0})\Bigr{)}~{},$ and hence, from $(\ref{alpha-m})$, $\displaystyle\mathbb{E}\int_{0}^{T}\Bigl{|}\frac{1}{N}\sum_{i=1}^{N}\widehat{\alpha}^{i}_{\rm{mf}}(t)\Bigr{|}^{2}dt\leq C\sup_{t\in[0,T]}\mathbb{E}\Bigl{[}W_{2}\Bigl{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{\underline{Y}_{t}^{i}},{\cal L}(\underline{Y}_{t}^{1}|\overline{{\cal F}}_{t}^{0})\Bigr{)}^{2}\Bigr{]}~{}.$ (5.8) The first conclusion now follows from $(\ref{law-conv})$. The latter claim follows directly from the expression $(\ref{law-conv-2})$ and the (Fourth Step) in the proof of Theorem 2.12 in [8]. ∎

Theorem 5.1 justifies our intuitive understanding and shows that the special type of FBSDE $(\ref{fbsde-single-p})$ derived in Section 3 is a reasonable model to approximate the market clearing price. When higher integrability is available, the Glivenko-Cantelli convergence theorem in the Wasserstein distance even provides the specific order of convergence $\epsilon_{N}$ in terms of the number of agents $N$ $(\ref{Glivenko-Cantelli})$. See Theorem 5.8 and Remark 5.9 in [7] for more details.

###### Remark 5.1.

Consider the situation treated in Theorem 4.3, for example, a market model of a Futures contract. If the contract pays one unit of the underlying asset per contract, whose value at maturity is exogenously given by $c_{T}^{0}$, our mean-field limit model $(\ref{fbsde-single-p})$ gives $Y_{T}=-c_{T}^{0}$. This means that the modeled Futures price satisfies $\varpi_{T}=-\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}]=c_{T}^{0}$, which guarantees the convergence of the modeled price to the value of the underlying asset at the maturity $T$. This is a crucially important feature that any market model of this type of securities must satisfy.

## 6 Extension to Multiple Populations

The main limitation of the last model is that there exists only one type of agent, all sharing the same cost functions as well as the same coefficient functions for their state dynamics. Interestingly, it is rather straightforward to extend the model to the situation with multiple populations, where the agents in each population share the same cost and coefficient functions but these functions can differ from population to population. From the perspective of practical applications, this is a big advantage since we can analyze, for example, the interactions between the Sell-side and Buy-side institutions for financial applications, or consumers and producers for economic applications. For general issues of mean field games as well as mean field type control problems in the presence of multiple populations without common noise, see Fujii [13]. Although there exists a common noise in the current model, the conditional law enters only in the form of a conditional expectation. Therefore, as long as the system of FBSDEs is Lipschitz continuous, there exists a unique strong solution at least for small $T$.
For general $T$, although it is rather difficult to find an appropriate set of assumptions, it is still possible in some simple cases. In this section, our main task is to find an appropriate limit model that extends $(\ref{fbsde-single-p})$ to multiple populations, together with sufficient conditions that make the appropriate monotone conditions hold and thereby guarantee the existence of a unique solution. In the following, we shall treat $m$ populations indexed by $p\in\\{1,\cdots,m\\}$. For each $p$, $N_{p}\geq 1$ agents are assumed to belong to the population. We denote by $(p,i)$ the $i$th agent in population $p$. First, let us enlarge the probability space constructed in Section 2. In addition to $(\overline{\Omega}^{0},\overline{{\cal F}}^{0},\overline{\mathbb{P}}^{0};\overline{\mathbb{F}}^{0})$, we introduce $(\overline{\Omega}^{p,i},\overline{{\cal F}}^{p,i},\overline{\mathbb{P}}^{p,i};\overline{\mathbb{F}}^{p,i})$ with $1\leq i\leq N_{p}$ and $1\leq p\leq m$, each of which is generated by $(\xi^{p,i},\boldsymbol{W}^{p,i})$ with a $d$-dimensional Brownian motion $\boldsymbol{W}^{p,i}$ and a $\boldsymbol{W}^{p,i}$-independent $\mathbb{R}^{n}$-valued square integrable random variable $\xi^{p,i}$. For each $p$, $(\xi^{p,i})_{i=1}^{N_{p}}$ are assumed to have a common law. We define $(\Omega^{p,i},{\cal F}^{p,i},\mathbb{P}^{p,i};\mathbb{F}^{p,i})$ as the product of $(\overline{\Omega}^{0},\overline{{\cal F}}^{0},\overline{\mathbb{P}}^{0};\overline{\mathbb{F}}^{0})$ and $(\overline{\Omega}^{p,i},\overline{{\cal F}}^{p,i},\overline{\mathbb{P}}^{p,i};\overline{\mathbb{F}}^{p,i})$. Finally $(\Omega,{\cal F},\mathbb{P};\mathbb{F})$ is defined as a product of all the spaces $(\overline{\Omega}^{0},\overline{{\cal F}}^{0},\overline{\mathbb{P}}^{0};\overline{\mathbb{F}}^{0})$ and $(\overline{\Omega}^{p,i},\overline{{\cal F}}^{p,i},\overline{\mathbb{P}}^{p,i};\overline{\mathbb{F}}^{p,i})$, $1\leq i\leq N_{p},1\leq p\leq m$, and $(\Omega^{i},{\cal F}^{i},\mathbb{P}^{i};\mathbb{F}^{i})$ as a product of $(\overline{\Omega}^{0},\overline{{\cal F}}^{0},\overline{\mathbb{P}}^{0};\overline{\mathbb{F}}^{0})$ and $(\overline{\Omega}^{p,i},\overline{{\cal F}}^{p,i},\overline{\mathbb{P}}^{p,i};\overline{\mathbb{F}}^{p,i})$ with $1\leq p\leq m$. Every probability space is assumed to be complete and every filtration is assumed to be complete and right-continuous, so as to satisfy the usual conditions. As we have done in Section 3, we first assume that the market price of the $n$ securities is given exogenously by $\varpi_{t}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{0};\mathbb{R}^{n})$. Under this setup, we consider the control problem for each agent $(p,i)$ defined by $\inf_{\boldsymbol{\alpha}^{p,i}\in\mathbb{A}^{p,i}}J^{p,i}(\boldsymbol{\alpha}^{p,i})~{},$ (6.1) with $\displaystyle J^{p,i}(\boldsymbol{\alpha}^{p,i}):=\mathbb{E}\Bigl{[}\int_{0}^{T}f_{p}(t,X_{t}^{p,i},\alpha_{t}^{p,i},\varpi_{t},c_{t}^{0},c_{t}^{p,i})dt+g_{p}(X_{T}^{p,i},\varpi_{T},c_{T}^{0},c_{T}^{p,i})\Bigr{]}~{},$ subject to the dynamic constraint: $\displaystyle dX_{t}^{p,i}=\Bigl{(}\alpha_{t}^{p,i}+l_{p}(t,\varpi_{t},c_{t}^{0},c_{t}^{p,i})\Bigr{)}dt+\sigma_{p,0}(t,\varpi_{t},c_{t}^{0},c_{t}^{p,i})dW_{t}^{0}+\sigma_{p}(t,\varpi_{t},c_{t}^{0},c_{t}^{p,i})dW_{t}^{p,i}$ with $X_{0}^{p,i}=\xi^{p,i}$. As before, we assume $(c_{t}^{0})_{t\geq 0}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{0};\mathbb{R}^{n})$ and $(c_{t}^{p,i})_{t\geq 0}\in\mathbb{H}^{2}(\overline{\mathbb{F}}^{p,i};\mathbb{R}^{n})$.
In addition, within each population $p$, the random sources $(c^{p,i}_{t})_{t\geq 0}$ are assumed to have a common law for $1\leq i\leq N_{p}$. The space of admissible strategies $\mathbb{A}^{p,i}$ is $\mathbb{H}^{2}(\mathbb{F}^{p,i};\mathbb{R}^{n})$. The measurable functions $f_{p}:[0,T]\times(\mathbb{R}^{n})^{5}\rightarrow\mathbb{R}$, $g_{p}:(\mathbb{R}^{n})^{4}\rightarrow\mathbb{R}$, $\overline{f}_{p}:[0,T]\times(\mathbb{R}^{n})^{4}\rightarrow\mathbb{R}$ and $\overline{g}_{p}:(\mathbb{R}^{n})^{3}\rightarrow\mathbb{R}$ are given by $\displaystyle f_{p}(t,x,\alpha,\varpi,c^{0},c):=\langle\varpi,\alpha\rangle+\frac{1}{2}\langle\alpha,\Lambda_{p}\alpha\rangle+\overline{f}_{p}(t,x,\varpi,c^{0},c)~{},$ $\displaystyle g_{p}(x,\varpi,c^{0},c):=-\delta\langle\varpi,x\rangle+\overline{g}_{p}(x,c^{0},c)~{}.$

###### Assumption 6.1.

(MFG-A) We assume the following conditions uniformly in $p\in\\{1,\cdots,m\\}$. (i) $\Lambda_{p}$ is a positive definite $n\times n$ symmetric matrix with $\underline{\lambda}I_{n\times n}\leq\Lambda_{p}\leq\overline{\lambda}I_{n\times n}$ in the sense of 2nd-order form, where $\underline{\lambda}$ and $\overline{\lambda}$ are some constants satisfying $0<\underline{\lambda}\leq\overline{\lambda}$. (ii) For any $(t,x,\varpi,c^{0},c)$, $\displaystyle|\overline{f}_{p}(t,x,\varpi,c^{0},c)|+|\overline{g}_{p}(x,c^{0},c)|\leq L(1+|x|^{2}+|\varpi|^{2}+|c^{0}|^{2}+|c|^{2})~{}.$ (iii) $\overline{f}_{p}$ and $\overline{g}_{p}$ are continuously differentiable in $x$ and satisfy, for any $(t,x,x^{\prime},\varpi,c^{0},c)$, $|\partial_{x}\overline{f}_{p}(t,x^{\prime},\varpi,c^{0},c)-\partial_{x}\overline{f}_{p}(t,x,\varpi,c^{0},c)|+|\partial_{x}\overline{g}_{p}(x^{\prime},c^{0},c)-\partial_{x}\overline{g}_{p}(x,c^{0},c)|\leq L|x^{\prime}-x|~{},$ and $|\partial_{x}\overline{f}_{p}(t,x,\varpi,c^{0},c)|+|\partial_{x}\overline{g}_{p}(x,c^{0},c)|\leq L(1+|x|+|\varpi|+|c^{0}|+|c|)$. (iv) The functions $\overline{f}_{p}$ and $\overline{g}_{p}$ are convex in $x$ in the sense that for any $(t,x,x^{\prime},\varpi,c^{0},c)$, $\displaystyle\overline{f}_{p}(t,x^{\prime},\varpi,c^{0},c)-\overline{f}_{p}(t,x,\varpi,c^{0},c)-\langle x^{\prime}-x,\partial_{x}\overline{f}_{p}(t,x,\varpi,c^{0},c)\rangle\geq\frac{\gamma^{f}}{2}|x^{\prime}-x|^{2}~{},$ $\displaystyle\overline{g}_{p}(x^{\prime},c^{0},c)-\overline{g}_{p}(x,c^{0},c)-\langle x^{\prime}-x,\partial_{x}\overline{g}_{p}(x,c^{0},c)\rangle\geq\frac{\gamma^{g}}{2}|x^{\prime}-x|^{2}~{},$ with some constants $\gamma^{f},\gamma^{g}\geq 0$. (v) $l_{p},\sigma_{p,0},\sigma_{p}$ are measurable functions defined on $[0,T]\times(\mathbb{R}^{n})^{3}$ and are $\mathbb{R}^{n},\mathbb{R}^{n\times d^{0}}$ and $\mathbb{R}^{n\times d}$-valued, respectively. Moreover, they satisfy the linear growth condition: $\displaystyle|(l_{p},\sigma_{p,0},\sigma_{p})(t,\varpi,c^{0},c)|\leq L(1+|\varpi|+|c^{0}|+|c|)$ for any $(t,\varpi,c^{0},c)$. (vi) $\delta\in[0,1)$ is a given constant.
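As a simple illustration (these particular specifications are our own and are not used elsewhere in the paper), the quadratic choices $\displaystyle\overline{f}_{p}(t,x,\varpi,c^{0},c):=\frac{\gamma^{f}}{2}|x-c|^{2}~{},\qquad\overline{g}_{p}(x,c^{0},c):=\frac{\gamma^{g}}{2}|x-c|^{2}$ satisfy the conditions (ii)-(iv): the quadratic growth in (ii) is immediate, the derivatives $\partial_{x}\overline{f}_{p}(t,x,\varpi,c^{0},c)=\gamma^{f}(x-c)$ and $\partial_{x}\overline{g}_{p}(x,c^{0},c)=\gamma^{g}(x-c)$ satisfy the Lipschitz and growth bounds in (iii) for any $L\geq\max(\gamma^{f},\gamma^{g})$, and the convexity inequalities in (iv) hold with equality.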
Under Assumption (MFG-A), Theorem 3.1 guarantees that the control problem $(\ref{control-pi})$ for each agent $(p,i)$ is uniquely characterized by $\displaystyle dX_{t}^{p,i}=\Bigl{(}\widehat{\alpha}_{p}(Y_{t}^{p,i},\varpi_{t})+l_{p}(t,\varpi_{t},c_{t}^{0},c_{t}^{p,i})\Bigr{)}dt+\sigma_{p,0}(t,\varpi_{t},c_{t}^{0},c_{t}^{p,i})dW_{t}^{0}+\sigma_{p}(t,\varpi_{t},c_{t}^{0},c_{t}^{p,i})dW_{t}^{p,i},$ $\displaystyle dY_{t}^{p,i}=-\partial_{x}\overline{f}_{p}(t,X_{t}^{p,i},\varpi_{t},c_{t}^{0},c_{t}^{p,i})dt+Z_{t}^{p,i,0}dW_{t}^{0}+Z_{t}^{p,i}dW_{t}^{p,i},$ (6.2) with $X_{0}^{p,i}=\xi^{p,i}$ and $Y_{T}^{p,i}=-\delta\varpi_{T}+\partial_{x}\overline{g}_{p}(X_{T}^{p,i},c_{T}^{0},c_{T}^{p,i})$. We have defined $\widehat{\alpha}_{p}(y,\varpi):=-\overline{\Lambda}_{p}(y+\varpi)$ and $\overline{\Lambda}_{p}:=(\Lambda_{p})^{-1}$ as before. There exists a unique strong solution $(X^{p,i}_{t},Y^{p,i}_{t},Z^{p,i,0}_{t},Z^{p,i}_{t})_{t\in[0,T]}\in\mathbb{S}^{2}(\mathbb{F}^{p,i};\mathbb{R}^{n})\times\mathbb{S}^{2}(\mathbb{F}^{p,i};\mathbb{R}^{n})\times\mathbb{H}^{2}(\mathbb{F}^{p,i};\mathbb{R}^{n\times d^{0}})\times\mathbb{H}^{2}(\mathbb{F}^{p,i};\mathbb{R}^{n\times d})$, and the optimal trading strategy for the agent $(p,i)$ is given by $\displaystyle\widehat{\alpha}_{t}^{p,i}=\widehat{\alpha}_{p}(Y_{t}^{p,i},\varpi_{t})~{},\forall t\in[0,T].$ Let us check the market clearing condition under this setup. In order to balance the demand and supply of securities at the exchange, we need to have $\sum_{p=1}^{m}\sum_{i=1}^{N_{p}}\widehat{\alpha}_{p}(Y^{p,i}_{t},\varpi_{t})=0$. Since $\sum_{p=1}^{m}\sum_{i=1}^{N_{p}}\widehat{\alpha}_{p}(Y^{p,i}_{t},\varpi_{t})=-\sum_{p=1}^{m}\overline{\Lambda}_{p}\sum_{i=1}^{N_{p}}Y_{t}^{p,i}-\bigl{(}\sum_{p=1}^{m}N_{p}\overline{\Lambda}_{p}\bigr{)}\varpi_{t}$, this requires the market price to satisfy $\displaystyle\varpi_{t}=-\Bigl{(}\sum_{p=1}^{m}n_{p}\overline{\Lambda}_{p}\Bigr{)}^{-1}\sum_{p=1}^{m}n_{p}\overline{\Lambda}_{p}\Bigl{(}\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}Y_{t}^{p,i}\Bigr{)}~{},$ where $N=\sum_{p=1}^{m}N_{p}$ and $n_{p}:=N_{p}/N$. At the moment, this is inconsistent with the initial assumption that requires $(\varpi_{t})_{t\geq 0}$ to be $\overline{\mathbb{F}}^{0}$-adapted. However, since for each $1\leq p\leq m$, $(Y^{p,i}_{t})_{i=1}^{N_{p}}$ are $\overline{{\cal F}}^{0}$-conditionally independently and identically distributed, we may follow the same arguments used in Section 3. If we take $N\rightarrow\infty$ while keeping the relative size of populations $n_{p}$ constant, we can expect to obtain $\displaystyle\varpi_{t}=-\hat{\Xi}\sum_{p=1}^{m}\hat{\Lambda}_{p}\mathbb{E}[Y_{t}^{p,1}|\overline{{\cal F}}_{t}^{0}]$ (6.3) in the large population limit, where $\hat{\Lambda}_{p}:=n_{p}\overline{\Lambda}_{p},\qquad\hat{\Xi}:=\Bigl{(}\sum_{p=1}^{m}\hat{\Lambda}_{p}\Bigr{)}^{-1}~{}.$

###### Remark 6.1.

When $\Lambda_{p}=\Lambda$ for every population $p$, one can easily check that $(\ref{market-price-guess})$ becomes $\varpi_{t}=-\sum_{p=1}^{m}n_{p}\mathbb{E}[Y_{t}^{p,1}|\overline{{\cal F}}_{t}^{0}]~{}.$ Since $Y$ in the adjoint equation represents the marginal cost, i.e., the first-order derivative of the value function with respect to the state variable $x$, the above expression of $\varpi$ implies that the market price may be given by the population-weighted average of the marginal benefit (or cost) across all the populations.
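Let us verify this claim by elementary algebra: when $\Lambda_{p}=\Lambda$ for every $p$, we have $\hat{\Lambda}_{p}=n_{p}\overline{\Lambda}$ with $\overline{\Lambda}:=\Lambda^{-1}$ and, since $\sum_{p=1}^{m}n_{p}=1$, $\hat{\Xi}=\Bigl{(}\sum_{p=1}^{m}n_{p}\overline{\Lambda}\Bigr{)}^{-1}=(\overline{\Lambda})^{-1}=\Lambda$. Hence $\hat{\Xi}\hat{\Lambda}_{p}=n_{p}I_{n\times n}$ and $(\ref{market-price-guess})$ indeed reduces to the expression above.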
### 6.1 Limit problem with multiple populations

By the observation we have just made, we are motivated to study the following limit problem with $1\leq p\leq m$: $\displaystyle dX_{t}^{p}$ $\displaystyle=$ $\displaystyle\Bigl{(}\widehat{\alpha}_{p}\bigl{(}Y_{t}^{p},\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}])\bigr{)}+l_{p}\bigl{(}t,\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p}\bigr{)}\Bigr{)}dt$ $\displaystyle+\sigma_{p,0}\bigl{(}t,\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p}\bigr{)}dW_{t}^{0}+\sigma_{p}\bigl{(}t,\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p}\bigr{)}dW_{t}^{p,1},$ $\displaystyle dY_{t}^{p}$ $\displaystyle=$ $\displaystyle-\partial_{x}\overline{f}_{p}\bigl{(}t,X_{t}^{p},\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p}\bigr{)}dt+Z_{t}^{p,0}dW_{t}^{0}+Z_{t}^{p}dW_{t}^{p,1}~{},$ (6.4) with $X_{0}^{p}=\xi^{p}$ and $\displaystyle Y_{T}^{p}=\frac{\delta}{1-\delta}\hat{\Xi}\sum_{q=1}^{m}\hat{\Lambda}_{q}\mathbb{E}\bigl{[}\partial_{x}\overline{g}_{q}(X_{T}^{q},c_{T}^{0},c_{T}^{q})|\overline{{\cal F}}_{T}^{0}\bigr{]}+\partial_{x}\overline{g}_{p}(X_{T}^{p},c_{T}^{0},c_{T}^{p})~{}.$ We put as before $\xi^{p}:=\xi^{p,1}$ and $c^{p}:=c^{p,1}$ to lighten the notation. Here, $\displaystyle\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]):=-\hat{\Xi}\sum_{p=1}^{m}\hat{\Lambda}_{p}\mathbb{E}[Y_{t}^{p}|\overline{{\cal F}}_{t}^{0}],\quad\widehat{\alpha}_{p}(y,\varpi):=-\overline{\Lambda}_{p}(y+\varpi)$ and hence $(\ref{fbsde-multiple-p})$ is actually an $m$-coupled system of FBSDEs of McKean-Vlasov type. One can derive the terminal condition from $Y_{T}^{p}=-\delta\varpi(\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}])+\partial_{x}\overline{g}_{p}(X_{T}^{p},c_{T}^{0},c_{T}^{p})~{},$ (6.5) by applying $\hat{\Lambda}_{p}\mathbb{E}[\,\cdot\,|\overline{{\cal F}}_{T}^{0}]$ and summing over $1\leq p\leq m$: since $\sum_{p=1}^{m}\hat{\Lambda}_{p}\mathbb{E}[Y_{T}^{p}|\overline{{\cal F}}_{T}^{0}]=-\hat{\Xi}^{-1}\varpi(\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}])$ and $\sum_{p=1}^{m}\hat{\Lambda}_{p}=\hat{\Xi}^{-1}$, this yields $-\delta\varpi(\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}])=\frac{\delta}{1-\delta}\hat{\Xi}\sum_{q=1}^{m}\hat{\Lambda}_{q}\mathbb{E}\bigl{[}\partial_{x}\overline{g}_{q}(X_{T}^{q},c_{T}^{0},c_{T}^{q})|\overline{{\cal F}}_{T}^{0}\bigr{]}$, which gives the above specification after substitution into $(\ref{terminal-pre})$. In the following, we use the notation $(X_{t},Y_{t},Z_{t}^{0},Z_{t})_{t\in[0,T]}=\Bigl{(}(X_{t}^{p})_{p=1}^{m},(Y_{t}^{p})_{p=1}^{m},(Z_{t}^{p,0})_{p=1}^{m},(Z_{t}^{p})_{p=1}^{m}\Bigr{)}_{t\in[0,T]}~{}.$ (6.6)

### 6.2 Solvability for small $T$

For small $T$, Lipschitz continuity suffices to guarantee the existence of a unique solution.

###### Assumption 6.2.

(MFG-B) Uniformly in $p\in\\{1,\cdots,m\\}$, for any $(t,x,c^{0},c)\in[0,T]\times(\mathbb{R}^{n})^{3}$ and any $\varpi,\varpi^{\prime}\in\mathbb{R}^{n}$, the coefficient functions $l_{p},\sigma_{p,0},\sigma_{p}$ and $\overline{f}_{p}$ satisfy $\displaystyle|(l_{p},\sigma_{p,0},\sigma_{p})(t,\varpi,c^{0},c)-(l_{p},\sigma_{p,0},\sigma_{p})(t,\varpi^{\prime},c^{0},c)|$ $\displaystyle\qquad\qquad+|\partial_{x}\overline{f}_{p}(t,x,\varpi,c^{0},c)-\partial_{x}\overline{f}_{p}(t,x,\varpi^{\prime},c^{0},c)|\leq L_{\varpi}|\varpi-\varpi^{\prime}|~{}.$

The following theorem follows in exactly the same way as Theorem 4.1.

###### Theorem 6.1.

Under Assumptions (MFG-A,B), there exists some constant $\tau>0$ which depends only on $(L,L_{\varpi},\delta,n_{p},\Lambda_{p})$ such that for any $T\leq\tau$, there exists a unique strong solution $(X,Y,Z^{0},Z)\in\mathbb{S}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n})^{m}\bigr{)}\times\mathbb{S}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n})^{m}\bigr{)}\times\mathbb{H}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n\times d^{0}})^{m}\bigr{)}\times\mathbb{H}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n\times d})^{m}\bigr{)}$ to the FBSDE $(\ref{fbsde-multiple-p})$.

###### Remark 6.2.
Note that the above system of FBSDEs becomes linear-quadratic by choosing $(l_{p},\sigma_{p,0},\sigma_{p},\overline{f}_{p},\overline{g}_{p})$ appropriately. In this case, the problem reduces to solving ordinary differential equations of Riccati type. Therefore, the existence of a solution for a given $T$ can be tested, at least numerically, by checking the absence of a "blow-up" in its solution.

### 6.3 Solvability for general $T$

We now move on to the existence result of a unique solution for general $T$. It is very difficult to find general existence criteria for fully-coupled multi-dimensional FBSDEs. At the moment, in order to apply the well-known method of Peng & Wu, let us impose the following simplifying assumptions.

###### Assumption 6.3.

(MFG-C1) (i) For every $1\leq p\leq m$, the functions $\sigma_{p,0}$ and $\sigma_{p}$ are independent of the argument $\varpi$. (ii) $\Lambda_{p}$=$\Lambda$ and $n_{p}=1/m$ for every $p$. (iii) For any $t\in[0,T]$, any random variables $x^{p},x^{p\prime},c^{0},c^{p}\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ and any sub-$\sigma$-field ${\cal G}\subset{\cal F}$, the functions $(l_{p})_{p=1}^{m}$ satisfy with some positive constant $\gamma^{l}>0$, $\displaystyle\sum_{p=1}^{m}\mathbb{E}\Bigl{[}\bigl{\langle}l_{p}\bigl{(}t,\mathbb{E}[\overline{x}|{\cal G}],c^{0},c^{p}\bigr{)}-l_{p}\bigl{(}t,\mathbb{E}[\overline{x}^{\prime}|{\cal G}],c^{0},c^{p}\bigr{)},x^{p}-x^{p\prime}\bigr{\rangle}\Bigr{]}\geq m\gamma^{l}\mathbb{E}\bigl{[}\mathbb{E}[\overline{x}-\overline{x}^{\prime}|{\cal G}]^{2}\bigr{]},$ where $\overline{x}:=\frac{1}{m}\sum_{p=1}^{m}x^{p}$ and similarly for $\overline{x}^{\prime}$. (iv) There exists a strictly positive constant $\gamma$ satisfying $0<\gamma\leq\Bigl{(}\gamma^{f}-\frac{L_{\varpi}^{2}}{4\gamma^{l}}\Bigr{)}\wedge\gamma^{g}$. Moreover, the functions $(\overline{g}_{p})_{p=1}^{m}$ satisfy for any $x^{p},x^{p\prime},c^{0},c^{p}\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ and any sub-$\sigma$-field ${\cal G}\subset{\cal F}$, $\displaystyle\frac{\delta}{1-\delta}m^{-1}\mathbb{E}\Bigl{[}\bigl{\langle}\sum_{p=1}^{m}\mathbb{E}[\partial_{x}\overline{g}_{p}(x^{p},c^{0},c^{p})-\partial_{x}\overline{g}_{p}(x^{p\prime},c^{0},c^{p})|{\cal G}],\sum_{p=1}^{m}(x^{p}-x^{p\prime})\bigr{\rangle}\Bigr{]}$ $\displaystyle\qquad\quad+\gamma^{g}\sum_{p=1}^{m}\mathbb{E}[|x^{p}-x^{p\prime}|^{2}]\geq\gamma\sum_{p=1}^{m}\mathbb{E}[|x^{p}-x^{p\prime}|^{2}]~{}.$

###### Remark 6.3.

The conditions (iii) and (iv) in the above assumption are rather restrictive. The condition (iii) is satisfied, for example, if $l_{p}$ has a separable form $l_{p}(t,\varpi,c^{0},c^{p})=h(\varpi)+h_{p}(c^{0},c^{p})$ with some function $h$ which is common to every population and strictly monotone. (iv) is also satisfied by requiring a similar structure. Alternatively, since $\partial_{x}\overline{g}_{p}$ is Lipschitz continuous in $x$, the absolute value of the first term is bounded by $\frac{\delta}{1-\delta}\max((L_{p})_{p=1}^{m})\sum_{p=1}^{m}\mathbb{E}|x^{p}-x^{p\prime}|^{2}$, where $L_{p}$ is the Lipschitz constant of $\partial_{x}\overline{g}_{p}$. Thus the condition (iv) is satisfied if $\delta\max((L_{p})_{p=1}^{m})$ is sufficiently small. The next result is the counterpart of Theorem 4.2.

###### Theorem 6.2.
Under Assumptions (MFG-A,B,C1), there exists a unique strong solution $(X,Y,Z^{0},Z)\in\mathbb{S}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n})^{m}\bigr{)}\times\mathbb{S}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n})^{m}\bigr{)}\times\mathbb{H}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n\times d^{0}})^{m}\bigr{)}\times\mathbb{H}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n\times d})^{m}\bigr{)}$ to the FBSDE $(\ref{fbsde-multiple-p})$. Moreover, the same form of stability and $\mathbb{L}^{2}$ estimates given in Proposition 4.1 and Corollary 4.1 hold.

###### Proof.

Under Assumption (MFG-C1), $(\ref{fbsde-multiple-p})$ can be written as $\displaystyle dX_{t}^{p}=\Bigl{\\{}-\overline{\Lambda}\Bigl{(}Y_{t}^{p}-\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}[Y_{t}^{q}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}+l_{p}\Bigl{(}t,-\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}[Y_{t}^{q}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}^{p}\Bigr{)}\Bigr{\\}}dt$ $\displaystyle\qquad\qquad+\sigma_{p,0}(t,c_{t}^{0},c_{t}^{p})dW_{t}^{0}+\sigma_{p}(t,c_{t}^{0},c_{t}^{p})dW_{t}^{p,1},$ $\displaystyle dY_{t}^{p}=-\partial_{x}\overline{f}_{p}\Bigl{(}t,X_{t}^{p},-\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}[Y_{t}^{q}|\overline{{\cal F}}_{t}^{0}],c_{t}^{0},c_{t}^{p}\Bigr{)}dt+Z_{t}^{p,0}dW_{t}^{0}+Z_{t}^{p}dW_{t}^{p,1},$ with $X_{0}^{p}=\xi^{p}$ and $Y_{T}^{p}=\frac{\delta}{1-\delta}\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}\bigl{[}\partial_{x}\overline{g}_{q}(X_{T}^{q},c_{T}^{0},c_{T}^{q})|\overline{{\cal F}}_{T}^{0}\bigr{]}+\partial_{x}\overline{g}_{p}(X_{T}^{p},c_{T}^{0},c_{T}^{p})~{}.$ For each $p$, let us define the functionals $B_{p},F_{p}$ and $G_{p}$ for any $y^{p},x^{p},c^{0},c^{p}\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ with $y:=(y^{p})_{p=1}^{m}$, $x:=(x^{p})_{p=1}^{m}$ and $c:=(c^{p})_{p=1}^{m}$ by $\displaystyle B_{p}(t,y,c^{0},c^{p}):=-\overline{\Lambda}\Bigl{(}y^{p}-\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}[y^{q}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}+l_{p}\Bigl{(}t,-\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}[y^{q}|\overline{{\cal F}}_{t}^{0}],c^{0},c^{p}\Bigr{)}$ $\displaystyle F_{p}(t,x^{p},y,c^{0},c^{p}):=-\partial_{x}\overline{f}_{p}\Bigl{(}t,x^{p},-\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}[y^{q}|\overline{{\cal F}}_{t}^{0}],c^{0},c^{p}\Bigr{)},$ $\displaystyle G_{p}(x,c^{0},c):=\frac{\delta}{1-\delta}\frac{1}{m}\sum_{q=1}^{m}\mathbb{E}[\partial_{x}\overline{g}_{q}(x^{q},c^{0},c^{q})|\overline{{\cal F}}_{T}^{0}]+\partial_{x}\overline{g}_{p}(x^{p},c^{0},c^{p})~{},$ and set $B(t,y,c^{0},c):=(B_{p}(t,y,c^{0},c^{p}))_{p=1}^{m}$, $F(t,x,y,c^{0},c):=(F_{p}(t,x^{p},y,c^{0},c^{p}))_{p=1}^{m}$ and $G(x,c^{0},c):=(G_{p}(x,c^{0},c))_{p=1}^{m}$.
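Note that for $m=1$ these functionals reduce exactly to $(B,F,G)$ defined in $(\ref{BG-notation})$, so the argument below is a direct generalization of the proof of Theorem 4.2.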
With $\Delta y:=y-y^{\prime}$ and $\Delta x:=x-x^{\prime}$, we have from (MFG-C1)(iii), $\displaystyle\mathbb{E}\Bigl{[}\langle B(t,y,c^{0},c)-B(t,y^{\prime},c^{0},c),\Delta y\rangle\Bigr{]}:=\sum_{p=1}^{m}\mathbb{E}\Bigl{[}\langle B_{p}(t,y,c^{0},c)-B_{p}(t,y^{\prime},c^{0},c),\Delta y^{p}\rangle\Bigr{]}$ $\displaystyle\leq-\sum_{p=1}^{m}\mathbb{E}[\langle\Delta y^{p},\overline{\Lambda}\Delta y^{p}\rangle]+\frac{1}{m}\mathbb{E}\Bigl{[}\bigl{\langle}\sum_{p=1}^{m}\mathbb{E}[\Delta y^{p}|\overline{{\cal F}}_{t}^{0}],\overline{\Lambda}\sum_{p=1}^{m}\Delta y^{p}\bigr{\rangle}\Bigr{]}-m\gamma^{l}\mathbb{E}\Bigl{[}\Bigl{(}\frac{1}{m}\sum_{p=1}^{m}\mathbb{E}[\Delta y^{p}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}^{2}\Bigr{]}$ $\displaystyle\leq-m\gamma^{l}\mathbb{E}\Bigl{[}\Bigl{(}\frac{1}{m}\sum_{p=1}^{m}\mathbb{E}[\Delta y^{p}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}^{2}\Bigr{]}.$ (6.7) There exists an orthogonal matrix $P$ such that $P^{\top}\overline{\Lambda}P$ becomes diagonal. Then, working in the new basis $\hat{y}^{p}=P^{\top}\Delta y^{p}$, $1\leq p\leq m$, the last inequality of $(\ref{DelB-ineq})$ can be checked component by component, $1\leq i\leq n$, using the fact that $(\sum_{p=1}^{m}\hat{y}^{p}_{i})^{2}\leq m\sum_{p=1}^{m}|\hat{y}^{p}_{i}|^{2}$. Second, from (MFG-A)(iv), (MFG-B) and the Cauchy-Schwarz and Young inequalities, $\displaystyle\mathbb{E}\Bigl{[}\langle F(t,x,y,c^{0},c)-F(t,x^{\prime},y^{\prime},c^{0},c),\Delta x\rangle\Bigr{]}$ $\displaystyle\hskip 85.35826pt\leq-\Bigl{(}\gamma^{f}-\frac{L_{\varpi}^{2}}{4\gamma^{l}}\Bigr{)}\mathbb{E}[|\Delta x|^{2}]+m\gamma^{l}\mathbb{E}\Bigl{[}\Bigl{(}\frac{1}{m}\sum_{p=1}^{m}\mathbb{E}[\Delta y_{t}^{p}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}^{2}\Bigr{]}.$ (6.8) Finally, from (MFG-A, C1)(iv), we immediately get $\displaystyle\mathbb{E}\Bigl{[}\langle G(x,c^{0},c)-G(x^{\prime},c^{0},c),\Delta x\rangle\Bigr{]}\geq\gamma\mathbb{E}[|\Delta x|^{2}]~{}.$ Now we have established the monotone conditions corresponding to $(\ref{peng-wu-condition})$ for the current model. We can now repeat the same procedures as in the proof of Theorem 4.2 and Proposition 4.1. ∎

Let us also give the corresponding results for securities of maturity $T$ with exogenously specified payoff.

###### Assumption 6.4.

(MFG-C2) (i) For every $1\leq p\leq m$, the functions $\sigma_{p,0}$ and $\sigma_{p}$ are independent of the argument $\varpi$. (ii) $\Lambda_{p}$=$\Lambda$ and $n_{p}=1/m$ for every $p$. (iii) For any $t\in[0,T]$, any random variables $x^{p},x^{p\prime},c^{0},c^{p}\in\mathbb{L}^{2}({\cal F};\mathbb{R}^{n})$ and any sub-$\sigma$-field ${\cal G}\subset{\cal F}$, the functions $(l_{p})_{p=1}^{m}$ satisfy with some positive constant $\gamma^{l}>0$, $\displaystyle\sum_{p=1}^{m}\mathbb{E}\Bigl{[}\bigl{\langle}l_{p}\bigl{(}t,\mathbb{E}[\overline{x}|{\cal G}],c^{0},c^{p}\bigr{)}-l_{p}\bigl{(}t,\mathbb{E}[\overline{x}^{\prime}|{\cal G}],c^{0},c^{p}\bigr{)},x^{p}-x^{p\prime}\bigr{\rangle}\Bigr{]}\geq m\gamma^{l}\mathbb{E}\bigl{[}\mathbb{E}[\overline{x}-\overline{x}^{\prime}|{\cal G}]^{2}\bigr{]},$ where $\overline{x}:=\frac{1}{m}\sum_{p=1}^{m}x^{p}$ and similarly for $\overline{x}^{\prime}$. (iv) $\gamma:=\gamma^{f}-\frac{L_{\varpi}^{2}}{4\gamma^{l}}$ is strictly positive. Moreover, $\delta=0$ and the terminal function $g_{p}$ is given by $\displaystyle g_{p}(x,c^{0})=\overline{g}_{p}(x,c^{0}):=-\langle c^{0},x\rangle$ (6.9) for every $1\leq p\leq m$.

###### Theorem 6.3.
Under Assumptions (MFG-A,B,C2), there exists a unique strong solution $(X,Y,Z^{0},Z)\in\mathbb{S}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n})^{m}\bigr{)}\times\mathbb{S}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n})^{m}\bigr{)}\times\mathbb{H}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n\times d^{0}})^{m}\bigr{)}\times\mathbb{H}^{2}\bigl{(}\mathbb{F}^{1};(\mathbb{R}^{n\times d})^{m}\bigr{)}$ to the FBSDE $(\ref{fbsde-multiple-p})$. Moreover, the same form of stability and $\mathbb{L}^{2}$ estimates given in Proposition 4.1 and Corollary 4.1 hold.

###### Proof.

Using the inequalities $(\ref{DelB-ineq})$ and $(\ref{DelF-ineq})$ with $\sum_{p=1}^{m}\langle\Delta X_{T}^{p},\Delta Y_{T}^{p}\rangle=0$, we can follow the same arguments as in the proof of Theorem 4.3. ∎

### 6.4 Asymptotic market clearing for the multi-population model

In the last part of this section, we investigate the asymptotic market clearing in the presence of multiple populations. As in Section 5, we define $(\varpi_{t})_{t\in[0,T]}$ using the solution to the system of the mean-field FBSDEs: $\displaystyle\varpi_{t}=\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]):=-\hat{\Xi}\sum_{p=1}^{m}\hat{\Lambda}_{p}\mathbb{E}\bigl{[}Y_{t}^{p}|\overline{{\cal F}}_{t}^{0}\bigr{]}$ where $(Y_{t}^{p})_{p=1}^{m}$ is the solution of $(\ref{fbsde-multiple-p})$. In order to test the accuracy of the above $(\varpi_{t})_{t\in[0,T]}$ as a market clearing price, we solve the individual agent problem $(\ref{control-pi})$ with this $\varpi$ as an input. The corresponding individual problem $(\ref{control-pi})$ for the agent $(p,i)$ is characterized by the unique strong solution $(X^{p,i},Y^{p,i},Z^{p,i,0},Z^{p,i})$ of $(\ref{fbsde-pi})$. The optimal strategy for the agent $(p,i)$ is then given by $\displaystyle\widehat{\alpha}_{\rm{mf}}^{p,i}(t):=-\overline{\Lambda}_{p}\Bigl{(}Y_{t}^{p,i}-\hat{\Xi}\sum_{q=1}^{m}\hat{\Lambda}_{q}\mathbb{E}\bigl{[}Y_{t}^{q}|\overline{{\cal F}}_{t}^{0}\bigr{]}\Bigr{)}~{},\forall t\in[0,T]~{}.$

###### Theorem 6.4.

If the conditions for Theorem 6.1, Theorem 6.2 or Theorem 6.3 are satisfied, then we have $\displaystyle\lim_{N\rightarrow\infty}\mathbb{E}\int_{0}^{T}\Bigl{|}\frac{1}{N}\sum_{p=1}^{m}\sum_{i=1}^{N_{p}}\widehat{\alpha}^{p,i}_{\rm{mf}}(t)\Bigr{|}^{2}dt=0~{},$ where $N:=\sum_{p=1}^{m}N_{p}$ and the limit is taken while keeping $(n_{p}:=N_{p}/N)_{1\leq p\leq m}$ constant. Moreover, if there exists some constant $\Gamma$ such that $\sup_{t\in[0,T]}\mathbb{E}\bigl{[}|Y_{t}|^{q}\bigr{]}^{\frac{1}{q}}\leq\Gamma<\infty$ for some $q>4$, then there exists some constant $C$ independent of $N$ such that $\displaystyle\mathbb{E}\int_{0}^{T}\Bigl{|}\frac{1}{N}\sum_{p=1}^{m}\sum_{i=1}^{N_{p}}\widehat{\alpha}^{p,i}_{\rm{mf}}(t)\Bigr{|}^{2}dt\leq C\Gamma^{2}\epsilon_{N},$ where $\epsilon_{N}:=N^{-2/\max(n,4)}\bigl{(}1+\log(N)\mathbb{1}_{\\{n=4\\}}\bigr{)}$.

###### Proof.
By the definition of $\widehat{\alpha}^{p,i}_{\rm{mf}}$, we have $\displaystyle\frac{1}{N}\sum_{p=1}^{m}\sum_{i=1}^{N_{p}}\widehat{\alpha}^{p,i}_{\rm{mf}}(t)$ $\displaystyle=$ $\displaystyle-\frac{1}{N}\sum_{p=1}^{m}\sum_{i=1}^{N_{p}}\overline{\Lambda}_{p}\Bigl{(}Y_{t}^{p,i}-\hat{\Xi}\sum_{q=1}^{m}\hat{\Lambda}_{q}\mathbb{E}[Y_{t}^{q}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}$ (6.10) $\displaystyle=$ $\displaystyle-\sum_{p=1}^{m}\hat{\Lambda}_{p}\Bigl{(}\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}Y_{t}^{p,i}-\mathbb{E}[Y_{t}^{p}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}~{}.$ On the other hand, we have for each $1\leq p\leq m$, $1\leq i\leq N_{p}$, $\displaystyle dX_{t}^{p,i}$ $\displaystyle=$ $\displaystyle\Bigl{(}\widehat{\alpha}_{p}\bigl{(}Y_{t}^{p,i},\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}])\bigr{)}+l_{p}\bigl{(}t,\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p,i}\bigr{)}\Bigr{)}dt$ $\displaystyle+\sigma_{p,0}\bigl{(}t,\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p,i}\bigr{)}dW_{t}^{0}+\sigma_{p}\bigl{(}t,\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p,i}\bigr{)}dW_{t}^{p,i},$ $\displaystyle dY_{t}^{p,i}$ $\displaystyle=$ $\displaystyle-\partial_{x}\overline{f}_{p}\bigl{(}t,X_{t}^{p,i},\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]),c_{t}^{0},c_{t}^{p,i}\bigr{)}dt+Z_{t}^{p,i,0}dW_{t}^{0}+Z_{t}^{p,i}dW_{t}^{p,i}~{},$ with $X_{0}^{p,i}=\xi^{p,i}$ and $\displaystyle Y_{T}^{p,i}=-\delta\varpi(\mathbb{E}[Y_{T}|\overline{{\cal F}}_{T}^{0}])+\partial_{x}\overline{g}_{p}(X_{T}^{p,i},c_{T}^{0},c_{T}^{p,i})~{}.$ By the unique strong solvability and the Yamada-Watanabe theorem, there exists some measurable function $\Phi_{p}$ for each $1\leq p\leq m$ such that, for every $1\leq i\leq N_{p}$, $\displaystyle(Y_{t}^{p,i})_{t\in[0,T]}=\Phi_{p}\Bigl{(}(c_{t}^{0})_{t\in[0,T]},(W_{t}^{0})_{t\in[0,T]},((\mathbb{E}[Y_{t}^{q}|\overline{{\cal F}}_{t}^{0}])_{t\in[0,T]})_{1\leq q\leq m},\xi^{p,i},(c_{t}^{p,i})_{t\in[0,T]},(W^{p,i}_{t})_{t\in[0,T]}\Bigr{)}.$ Hence $(Y_{t}^{p,i})_{t\in[0,T],1\leq i\leq N_{p}}$ are independently and identically distributed conditionally on $\overline{{\cal F}}^{0}$. In particular, we have $\mathbb{E}[Y_{t}^{p,i}|\overline{{\cal F}}_{t}^{0}]=\mathbb{E}[Y_{t}^{p,1}|\overline{{\cal F}}_{t}^{0}]$. We now compare $(X^{p,1}_{t},Y^{p,1}_{t},Z^{p,1,0}_{t},Z^{p,1}_{t})_{t\in[0,T]}$ with $(X^{p}_{t},Y^{p}_{t},Z^{p,0}_{t},Z^{p}_{t})_{t\in[0,T]}$ by treating $\varpi(\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}])$ as an external input. Note that the terminal condition of the latter satisfies the relation $(\ref{terminal-pre})$. Then the standard stability result for Lipschitz FBSDEs implies $(Y^{p,1}_{t})_{t\in[0,T]}=(Y^{p}_{t})_{t\in[0,T]}$ in $\mathbb{S}^{2}(\mathbb{F}^{p,1};\mathbb{R}^{n})$. As a result, we have $\mathbb{E}[Y_{t}^{p}|\overline{{\cal F}}_{t}^{0}]=\mathbb{E}[Y_{t}^{p,1}|\overline{{\cal F}}_{t}^{0}]$. Using the expression $(\ref{clearing-mp})$, we obtain $\displaystyle\frac{1}{N}\sum_{p=1}^{m}\sum_{i=1}^{N_{p}}\widehat{\alpha}^{p,i}_{\rm{mf}}(t)$ $\displaystyle=$ $\displaystyle-\sum_{p=1}^{m}\hat{\Lambda}_{p}\Bigl{(}\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}Y_{t}^{p,i}-\mathbb{E}[Y_{t}^{p,1}|\overline{{\cal F}}_{t}^{0}]\Bigr{)}~{}.$ We can now repeat the last part of the proof of Theorem 5.1. ∎

## 7 Concluding Remarks and Further Extensions

In this work, we have studied the endogenous formation of a market clearing price using a stylized model of a security exchange.
## 7 Concluding Remarks and Further Extensions

In this work, we have studied the endogenous formation of a market clearing price using a stylized model of a security exchange. We have derived a special type of FBSDE of McKean-Vlasov type with common noise whose solution provides a good approximation of the equilibrium price. In addition to the existence of a unique strong solution to the FBSDE, we have proved that the modeled price asymptotically clears the market in the large-$N$ limit. We also gave the order of convergence $\epsilon_{N}$ when the solution of the FBSDE possesses a higher order of integrability. In the following, let us list a further extension of our technique and some interesting topics for future projects:

$\bullet~{}$Dependence on the conditional law of the state: For applications to energy and commodity markets, or economic models with producers and consumers, one may want to study the cost functions $(\overline{f},\overline{g})$ depending on the empirical distribution of the state $X$ of the agents, such as ${\overline{f}}\Bigl{(}t,X_{t}^{i},\frac{1}{N}\sum_{j=1}^{N}\delta_{X_{t}^{j}},\varpi_{t},c^{0}_{t},c^{i}_{t}\Bigr{)}$. Under the setup with conditional independence, the cost function for the limit problem is naturally given by ${\overline{f}}\Bigl{(}t,X_{t},{\cal L}(X_{t}|\overline{{\cal F}}_{t}^{0}),\varpi_{t},c^{0}_{t},c_{t}\Bigr{)}$. Even in this case, the resultant FBSDE $(\ref{fbsde-single-p})$ is solvable, at least for small $T$, if $(\partial_{x}\overline{f},\partial_{x}\overline{g})$ are Lipschitz continuous in the measure argument with respect to the $W_{2}$-distance. Under the stronger assumption guaranteeing the monotone conditions $(\ref{peng-wu-condition})$, one can even achieve the existence of a unique solution for general $T$. As long as the source of common noise is solely from the filtration $\overline{\mathbb{F}}^{0}$ generated by $\boldsymbol{W}^{0}$, we can avoid subtleties regarding the admissibility (the so-called $H$-hypothesis). See Remark 2.10 in [8] as a useful summary of this issue.

$\bullet~{}$Explicit solution: If we choose $\overline{f},\overline{g}$ as quadratic functions and $l,\sigma_{0},\sigma$ as affine functions, we obtain a linear-quadratic mean field game with common noise. In this case, an explicit solution may be available, where the coefficient functions are given as the solutions to differential equations of Riccati type; a minimal numerical sketch of this kind of computation is given after this list.

$\bullet~{}$Property of market price process: It seems interesting to study the properties of the market clearing price theoretically and numerically. For example, if $n=d^{0}$ the equivalent martingale measure (EMM) can be uniquely determined. Based on the payoff distribution $c^{0}$ and the cost functions of the agents $(\overline{f},\overline{g})$, one may study how the market price process under the EMM behaves, for example, the relation between the skew of its implied volatility and the risk-averseness of the agents.
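As a small illustration of the "Explicit solution" point above, the following sketch solves the backward Riccati ODE for the simplest scalar deterministic linear-quadratic problem, $dX_{t}=\alpha_{t}\,dt$ with cost $\int_{0}^{T}(qX_{t}^{2}+r\alpha_{t}^{2})\,dt+gX_{T}^{2}$, for which the value function is $P(t)x^{2}$ and the optimal feedback is $\alpha^{*}(t,x)=-P(t)x/r$. All coefficients are illustrative placeholders; the linear-quadratic mean field game with common noise mentioned in the bullet would additionally couple the Riccati coefficients with the dynamics of the conditional mean $\mathbb{E}[Y_{t}|\overline{{\cal F}}_{t}^{0}]$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Riccati ODE for the scalar LQ problem: P' = P^2 / r - q, P(T) = g.
# Coefficients are illustrative placeholders, not the paper's model.
q, r, g, T = 1.0, 0.5, 2.0, 1.0

def rhs(s, p):
    # time-reversed variable s = T - t, so dP/ds = -(P^2 / r - q)
    return -(p ** 2 / r - q)

sol = solve_ivp(rhs, (0.0, T), [g], dense_output=True)
P = lambda t: float(sol.sol(T - t)[0])

# optimal feedback: a*(t, x) = -P(t) x / r
print([round(P(t), 4) for t in np.linspace(0.0, T, 5)])
```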
## References

* [1] Achdou, Y., Buera, F.J., Lasry, J.M., Lions, P.L. and Moll, B., 2014, Partial differential equation models in macroeconomics, Philosophical Transactions of the Royal Society A, 372:20130397.
* [2] Alasseur, C., Ben Taher, I. and Matoussi, A., 2020, An extended mean field game for storage in smart grids, Journal of Optimization Theory and Applications, 184: 644-670.
* [3] Bensoussan, A., Frehse, J. and Yam, P., 2013, Mean field games and mean field type control theory, SpringerBriefs in Mathematics, NY.
* [4] Carmona, R. and Delarue, F., 2013, Mean field forward-backward stochastic differential equations, Electron. Commun. Probab., Vol. 18, No. 68, pp. 1-15.
* [5] Carmona, R. and Delarue, F., 2013, Probabilistic analysis of mean-field games, SIAM J. Control Optim., Vol. 51, No. 4, pp. 2705-2734.
* [6] Carmona, R. and Delarue, F., 2015, Forward-backward stochastic differential equations and controlled McKean-Vlasov dynamics, The Annals of Probability, Vol. 43, No. 5, pp. 2647-2700.
* [7] Carmona, R. and Delarue, F., 2018, Probabilistic Theory of Mean Field Games with Applications I, Springer International Publishing, Switzerland.
* [8] Carmona, R. and Delarue, F., 2018, Probabilistic Theory of Mean Field Games with Applications II, Springer International Publishing, Switzerland.
* [9] Djehiche, B., Barreiro-Gomez, J. and Tembine, H., 2018, Electricity price dynamics in the smart grid: a mean-field-type game perspective, 23rd International Symposium on Mathematical Theory of Networks and Systems, Hong Kong University of Science and Technology, Hong Kong, July 16-20, 2018.
* [10] Fu, G., Graewe, P., Horst, U. and Popier, A., 2019, A mean field game of optimal portfolio liquidation, available from https://arxiv.org/pdf/1804.04911.pdf.
* [11] Fu, G. and Horst, U., 2018, Mean-field leader-follower games with terminal state constraint, available from https://arxiv.org/pdf/1809.04401.pdf.
* [12] Fu, G., 2019, Extended mean field games with singular controls, available from https://arxiv.org/pdf/1909.04154.pdf.
* [13] Fujii, M., 2019, Probabilistic approach to mean field games and mean field type control problems with multiple populations, working paper, available from https://arxiv.org/pdf/1911.11501.pdf.
* [14] Gomes, D.A., Nurbekyan, L. and Pimentel, E.A., 2015, Economic models and mean-field games theory, Publicações Matemáticas, IMPA, Rio de Janeiro, Brazil.
* [15] Gomes, D.A., Pimentel, E.A. and Voskanyan, V., 2016, Regularity theory for mean-field game systems, SpringerBriefs in Mathematics.
* [16] Gomes, D.A. and Saude, J., 2020, A mean-field game approach to price formation, Dyn Games Appl (2020). https://doi.org/10.1007/s13235-020-00348-x.
* [17] Gueant, O., Lasry, J.M. and Lions, P.L., 2010, Mean field games and oil production, Economica. The Economics of Sustainable Development.
* [18] Huang, M., Malhamé, R.P. and Caines, P.E., 2006, Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle, Commun. Inf. Syst., Vol. 6, No. 3, pp. 221-252.
* [19] Kolokoltsov, V.N. and Malafeyev, O.A., 2019, Many agent games in socio-economic systems: corruption, inspection, coalition building, network growth, security, Springer Series in Operations Research and Financial Engineering.
* [20] Lasry, J.M. and Lions, P.L., 2006, Jeux à champ moyen I. Le cas stationnaire, C. R. Math. Acad. Sci. Paris, 343, pp. 619-625.
* [21] Lasry, J.M. and Lions, P.L., 2006, Jeux à champ moyen II. Horizon fini et contrôle optimal, C. R. Math. Acad. Sci. Paris, 343, pp. 679-684.
* [22] Lasry, J.M. and Lions, P.L., 2007, Mean field games, Jpn. J. Math., Vol. 2, pp. 229-260.
* [23] Lehalle, C.A. and Mouzouni, C., 2019, A mean field game of portfolio trading and its consequences on perceived correlations, available from https://arxiv.org/pdf/1902.09606.pdf.
* [24] Peng, S. and Wu, Z., 1999, Fully coupled forward-backward stochastic differential equations and applications to optimal control, SIAM J. Control Optim., ${\boldsymbol{37}}$, pp. 825-843.
* [25] Zhang, J., 2017, Backward Stochastic Differential Equations, Springer, NY.
2024-09-04T02:54:57.763763
2020-03-06T08:29:37
2003.03072
{ "authors": "Chan Hee Song, Dawn Lawrie, Tim Finin, James Mayfield", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26074", "submitter": "Chan Hee Song", "url": "https://arxiv.org/abs/2003.03072" }
arxiv-papers
# Improving Neural Named Entity Recognition with Gazetteers

Chan Hee Song University of Notre Dame <EMAIL_ADDRESS>Dawn Lawrie HLTCOE, JHU <EMAIL_ADDRESS>Tim Finin UMBC, HLTCOE <EMAIL_ADDRESS>James Mayfield HLTCOE, JHU <EMAIL_ADDRESS>

###### Abstract

The goal of this work is to improve the performance of a neural named entity recognition system by adding input features that indicate a word is part of a name included in a gazetteer. This article describes how to generate gazetteers from the Wikidata knowledge graph, as well as how to integrate the information into a neural NER system. Experiments reveal that the approach yields performance gains in two distinct languages: a high-resource, word-based language, English, and a high-resource, character-based language, Chinese. Experiments were also performed in a low-resource language, Russian, on a newly annotated Russian NER corpus from Reddit, tagged with four core types and twelve extended types. This article reports a baseline score. It is a longer version of a paper in the 33rd FLAIRS conference (?).

## 1 Introduction

Named Entity Recognition (NER) is an important task in natural language understanding that entails spotting mentions of conceptual entities in text and classifying them according to a given set of categories. It is particularly useful for downstream tasks such as information retrieval, question answering, and knowledge graph population. Developing a well-performing, robust NER system can facilitate more sophisticated queries that involve entity types in information retrieval, and more complete extraction of information for knowledge graph population.

Various approaches exist for automated named entity recognition. Older statistical methods use conditional random fields (?), perceptrons (?), and support vector machines (?). More recent approaches have applied deep neural models, beginning with ? (?). Further advances came from the addition of a BiLSTM model with CRF decoding (?), which led to the current state-of-the-art model. Our NER architecture combines recent advances in transfer learning (?) and a BiLSTM-CRF model, producing a BERT-BiLSTM-CRF model. In our model, BERT generates an embedding for each word. This embedding is fed into a multi-layer BiLSTM, which is often jointly trained with a pre-trained encoder at training time. This fine-tunes the encoder to the NER task. At test time, the BiLSTM outputs are decoded using a CRF. Other approaches show similar results, such as adding a character-level CNN to the BiLSTM-CRF (?; ?). We adopt a BERT-based model as the baseline system for comparison.

In the context of natural language understanding, a gazetteer is simply a collection of common entity names, typically organized by their entity type. These have been widely used in natural language processing systems since the early 1990s, for example, the large list of English place names provided by the MUC-5 Message Understanding Conference (?) to support its TIPSTER information extraction task. Initially, these lists were employed to help recognize and process mentions of entities that were places or geo-political regions, hence the name gazetteer. Their use quickly evolved to cover more entity types and subtypes, such as cities, people, organizations, political parties, and religions. Statistical approaches have benefited from using gazetteers as an additional source of information, often because the amount of labeled data for training an NER system tends to be small.
A lack of training data is of particular concern when using neural architectures, which generally require large amounts of training data to perform well. Gazetteers are much easier to produce than labeled training data and can be mined from existing sources. Therefore, it is important to know whether this rich source of information can be effectively integrated into a neural model. This paper first focuses on generating gazetteers from Wikidata, presenting a simple way to gather a large quantity of typed entity names. It then describes how to integrate the gazetteers with a neural architecture by generating features from gazetteers alongside the features from BERT as input to the BiLSTM. We aim to provide an additional external knowledge base to neural systems, similarly to the way people use external knowledge to determine what is an entity and to what category it belongs. Adding gazetteer features (often called lexical features) to neural systems has been shown to improve performance on well-studied datasets like English OntoNotes and CONLL-2003 NER using a closed-world neural system (_i.e.,_ BiLSTM-CRF) (?). We extended this approach and validated that gazetteer features are still beneficial to datasets in a more diverse set of languages and with models that use a pre-trained encoder. For generality, we applied and evaluated our approaches on datasets in three languages: a high-resource, word-based language, English; a high-resource, character-based language, Chinese; and a lower-resource, highly morphological language, Russian. We will first present how our gazetteer is generated from a publicly available data source, Wikidata. Then we will analyze our experimental results.

Type | Description | Examples
---|---|---
PER | Person | Enrico Rastelli, Beyoncé
ORG | Organization | International Jugglers Association
COMM | Commercial Org. | Penguin Magic, Papermoon Diner
POL | Political Organization | Green Party, United Auto Workers
GPE | Geo-political Entity | Morocco, Carlisle, Côte d’Ivoire
LOC | Natural Location | Kartchner Caverns, Lake Erie
FAC | Facility | Stieff Silver Building, Hoover Dam
GOVT | Government Building | White House, 10 Downing St.
AIR | Airport | Ninoy Aquino International, BWI
EVNT | Named Event | WWII, Operation Desert Storm
VEH | Vehicle | HMS Titanic, Boeing 747
COMP | Computer Hard/Software | Nvidia GeForce RTX 2080 Ti, Reunion
MIL | Military Equip. | AK-47, Fat Man, cudgel
MIL_G | Generic Military Equip. | tank, aircraft carrier, rifle
MIL_N | Named Military Equip. | USS Nimitz, 13"/35 caliber gun
CHEM | Chemical | Iron, NaCl, hydrochloric acid
MISC | Other named entity | Dark Star, Lord of the Rings

Table 1: We worked with types where training data was available in several languages, including four core types (in bold) and twelve additional ones.

## 2 Related Work

Our work builds on the neural approach to NER, which was introduced when ? (?) used Long Short-Term Memory (LSTM), achieving just above average performance for English and improvement for German. LSTM was proposed by ? (?), expanded by ? (?), and reached its modern form with ? (?). Recent NER systems have adopted a forward-backward LSTM or BiLSTM, mainly using the BiLSTM-CRF architecture first proposed by ? (?), and now widely studied and augmented. For example, ? (?) and ? (?) augmented the BiLSTM-CRF architecture with a character-level CNN to add additional features to the architecture.
Adding lexical features to the system has been studied widely, mainly by matching words in the dataset to words in pre-gathered gazetteers. ? (?) uses gazetteers during embedding generation; ? (?) uses gazetteers to generate a one-hot encoded match of the words in the data to those in the gazetteers; and ? (?) generates gazetteer embeddings from Wikipedia. ? (?) presents an architecture incorporating gazetteer information for Chinese, which is a language that often has a greater number of false positive matches because it is logographic. However, our approach provides a simple augmentation to existing neural models and demonstrates that Chinese can benefit from gazetteer matches. We adopt the approach of ? (?) to gazetteer matching because of its simplicity and universality in application to many different neural models. We also show that it is applicable to neural models with a deep pre-trained encoder.

Transfer learning architectures have shown significant improvement in various natural language processing tasks such as understanding, inference, question answering, and machine translation. BERT (?) uses stacked bi-directional transformer layers trained on masked word prediction and next sentence prediction tasks. BERT is trained on over 3.3 billion words gathered mainly from Wikipedia and Google Books. By adding a final output layer, BERT can be adapted to many different natural language processing tasks. In this work, we apply BERT to NER and use BiLSTM-CRF as the output layer of BERT. Our approach embodies a simple model that does not require a dataset-specific architecture or feature engineering.

## 3 Gazetteer Creation

We describe the knowledge source we use to create our gazetteers and outline the process we used to automatically produce cleaned gazetteers for the entity types of interest.

Figure 1: Statistics for canonical names for Wikidata entities for each type. Additional lists hold aliases and, for Russian, inflected forms.

### 3.1 Wikidata Knowledge Graph

Our gazetteers were created by extracting canonical names (e.g., Manchester United F.C.) and aliases (e.g., Red Devil, Man U) of entities of a given type (e.g., ORG) from Wikidata (?). Wikidata is a large, collaboratively edited knowledge graph with information drawn from and used by a number of Wikimedia projects, including 310 Wikipedia sites in different languages. Its goal is to integrate entities and knowable facts about them for use in Wikimedia sites in a language-independent manner. Wikidata is multilingual, with all of its strings tagged with a two-letter ISO 639-1 language code. Wikidata currently has more than 900 million statements about 77 million items, supported by an ontology with nearly 2.4 million fine-grained types and more than 7,250 properties. An example item is the entity Q7186 shown in Figure 2. Items have a canonical name, a short description, and a set of aliases in one or more languages. Property statements encode relations between items or between an item and a literal value, and can have metadata including qualifiers (e.g., a period of time during which the property held), provenance information (e.g., the URL of an attesting source), and a rank (e.g., to distinguish a preferred value from alternative or deprecated ones). The data is exposed as RDF triples and can be queried using Wikimedia APIs or SPARQL queries sent to a public query service.
We used the public SPARQL service to get both canonical names (e.g., Johns Hopkins University) and aliases (e.g., JHU, Johns Hopkins, Hopkins) in each of the languages studied for a given entity type (e.g., ORG). In addition, the Wikimedia community has developed many tools for searching for items, exploring the ontology, and updating entries.

### 3.2 Gazetteer Generation

The gazetteer is generated by searching Wikidata via SPARQL queries sent to the public query server, retrieving the canonical names and aliases described above in each of the languages studied for a given entity type. The first step was to construct a mapping from our project's 16 target types shown in Table 1 to Wikidata's fine-grained type system (?). Our types included four common core types (person, organization, geopolitical entity (GPE), location) and twelve additional types (airport, chemical, commercial organization, computer hardware/software, event, facility, government building, military equipment, money, political organization, title, vehicle). The mapping for some types was simple: person corresponds to Wikidata's Q5 and vehicle to Q42889. Others had a complex mapping that eliminates Wikidata subtypes that seemed too specialized (e.g., lunar craters and ice rumples from Wikidata's geographic object) or allows us to retrieve more entity names given the public server's one-minute query timeout.

Figure 2: Wikidata entities have a unique ID, a canonical name, aliases and a short description in one or more languages along with any number of statements representing properties or relations and including qualifiers and provenance metadata.

Figure 3: New training data is generated by replacing existing annotated names with gazetteer names of the same type.

The initial name lists were filtered by type-dependent regular expressions to delete names we thought to be unhelpful (e.g., Francis of Assisi as a person, because historical figures are unlikely to be mentioned in our targeted genres), remove Wikipedia artifacts (e.g., parentheticals), and eliminate punctuation, names that were too short or too long, and duplicate names. Although one could say that these changes bias the gazetteers, there is no reason not to engineer a gazetteer in a way that is most helpful for the data. Wikidata is still being used in an automated way, since we are relying on available labels.

Text | Jack | is | on | Hong | Kong | International | Airport | in | Lantau | Island | , | Hong | Kong
---|---|---|---|---|---|---|---|---|---|---|---|---|---
PER | B-PER | O | O | O | O | O | O | O | O | O | O | O | O
LOC | O | O | O | O | O | O | O | O | B-LOC | I-LOC | O | O | O
GPE | O | O | O | B-GPE | I-GPE | O | O | O | O | O | O | B-GPE | I-GPE
ORG | O | O | O | O | O | I-ORG | I-ORG | O | O | O | O | O | O

Figure 4: This example shows gazetteer matches for the sentence "Jack is on Hong Kong International Airport in Lantau Island, Hong Kong" showing both full and partial matches.

We produced additional lists for Russian using a custom script that generates type-sensitive inflected and familiar forms of canonical names and aliases. For an extreme example, the Russian name for the person Vladimir Vladimirovich Putin (Владимир Владимирович Путин) produces more than 100 variations. The result is a collection of 96 gazetteer files with a total of 15.7M entity names: 4.2M for English, 2.1M for Russian, and 584K for Chinese, with an additional 8.7M Russian names produced by our morphological scripts.
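To make the retrieval step of Section 3.2 concrete, a query of the following shape pulls canonical names and aliases for one entity type and language from the public endpoint. This is only a sketch: the project's actual queries, type mappings, filtering, and timeout handling are more involved, and the choice of Q5 ("human") and English labels is just an example.

```python
import requests

# Minimal sketch of a Wikidata SPARQL gazetteer query: canonical labels
# (rdfs:label) and aliases (skos:altLabel) of direct instances of Q5.
ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?name WHERE {
  { ?item wdt:P31 wd:Q5 ; rdfs:label ?name . }
  UNION
  { ?item wdt:P31 wd:Q5 ; skos:altLabel ?name . }
  FILTER(LANG(?name) = "en")
}
LIMIT 500
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "gazetteer-sketch/0.1"})
names = sorted({b["name"]["value"] for b in resp.json()["results"]["bindings"]})
print(len(names), names[:5])
```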
We kept the gazetteers for canonical names, aliases, and inflected forms separate to facilitate experimentation. We also worked with data downloaded directly from Wikidata, which uses a JSON serialization in which only the immediate types of each entity are provided. The entity Museum of Modern Art, for example, is identified as an instance of an art museum, an art institution and a copyright holder's organisation. To decide to which, if any, of our target types an entity belongs, we constructed a dictionary that maps relevant types to our 16 target types. For our example, this turns out to be three types: ORG, LOC, and FAC. Although doing the mapping sounds daunting given an ontology with more than two million types, it is simplified by exploiting the fact that most of these types do not have any immediate instances. We developed SPARQL queries that identified, for each of our 16 types, all of their subtypes that had one or more immediate instances. The ORG type, for example, has 15,904 subtypes but only 5,962 have immediate instances; the LOC type has only 1,181 subtypes with immediate instances. The resulting dictionary was thus relatively small without losing any information, and was used to quickly recognize entities of interest in the Wikidata dumps as well as to identify their target types.

## 4 Exploiting Gazetteers

### 4.1 Gazetteer Features

To use a gazetteer as a feature in the NER system, words in the dataset are matched against the gazetteer and turned into one-hot vectors for each entity type. Those one-hot vectors are then concatenated with word embeddings generated from other sources. For example, a word embedding of size 768 from BERT is concatenated with the gazetteer one-hot vectors sized to the number of entities $x$. Although each gazetteer represents an entity type, no attempt is made to communicate that type to the Bi-LSTM layer. We use the BIO (Beginning-Inside-Outside) tagging scheme, where B-<type> tags the first token of an entity, I-<type> the subsequent ones, and O tags non-entity tokens. For gazetteer matches, we use two matching schemes: full match and partial match.

* • Full match: an $n$-gram in the dataset matches fully with a gazetteer entry. If there are multiple matches in the same entity category, the longest match is preferred.
* • Partial match: an $n$-gram in the dataset matches partially with the gazetteer. Only partial matches of length greater than one are accepted, except for the PER type, due to the frequency of one-word person names.

As an example, consider a gazetteer that contains {Jack:PER, Lantau Island:LOC, Hong Kong Government:GPE, JFK International Airport:ORG}; Figure 4 shows how full matches and partial matches are handled. Jack is fully matched with PER, so it is tagged B-PER. Lantau Island is also fully matched and tagged LOC. Hong Kong partially matches Hong Kong Government, and since the match length is greater than one, it is considered a match and thus tagged as GPE. Hong Kong International Airport is also partially matched with the gazetteer entry JFK International Airport, so International Airport is tagged with ORG. As seen in the example, the tags of the gazetteer entries are assigned during a partial match. For character-level tokenized text, like Chinese, we forgo partial matching because it produces too many false matches. However, for all other languages, we utilize both full and partial matching, as shown in the experiments. After matching, matches are one-hot encoded, with each tag type assigned a separate one-hot vector. Each token in the text is thus assigned one one-hot vector per tag type. These one-hot vectors are concatenated to the other features, which are fed into the BiLSTM; a minimal sketch of this matching procedure follows.
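The following sketch implements a simplified version of the matching procedure and reproduces the tags of Figure 4. It is illustrative only: the helper name is ours, the PER-specific unigram rule for partial matches is omitted, and longest-match tie-breaking across types is not modeled.

```python
# Simplified gazetteer matcher: full matches first (longer n-grams
# preferred), then partial matches (n >= 2) that copy the B/I tags of the
# matching positions inside a longer gazetteer entry, as in Figure 4.
def gazetteer_features(tokens, gazetteer, types, max_len=6):
    tags = {t: ["O"] * len(tokens) for t in types}
    entries = {tuple(name.split()): etype for name, etype in gazetteer.items()}
    for n in range(max_len, 0, -1):                      # full matches
        for i in range(len(tokens) - n + 1):
            etype = entries.get(tuple(tokens[i:i + n]))
            if etype and all(t == "O" for t in tags[etype][i:i + n]):
                tags[etype][i] = "B-" + etype
                for j in range(i + 1, i + n):
                    tags[etype][j] = "I-" + etype
    for n in range(max_len, 1, -1):                      # partial matches
        for i in range(len(tokens) - n + 1):
            span = tuple(tokens[i:i + n])
            for entry, etype in entries.items():
                if len(entry) <= n:
                    continue
                for k in range(len(entry) - n + 1):
                    if span == entry[k:k + n]:
                        for j in range(n):
                            if tags[etype][i + j] == "O":
                                prefix = "B-" if k + j == 0 else "I-"
                                tags[etype][i + j] = prefix + etype
    return tags

gaz = {"Jack": "PER", "Lantau Island": "LOC",
       "Hong Kong Government": "GPE", "JFK International Airport": "ORG"}
sent = ("Jack is on Hong Kong International Airport "
        "in Lantau Island , Hong Kong").split()
print(gazetteer_features(sent, gaz, ["PER", "LOC", "GPE", "ORG"]))
```

Each of the four tag sequences is then one-hot encoded per token and concatenated to the BERT features.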
### 4.2 Generating Augmented Training Data

We experimented with a second application of gazetteers that uses them to generate additional training data. In this approach, we select sentences from our initial human-annotated training data, replace one or more of the annotated entities with a randomly selected gazetteer entity of the same type, and retrain the system. However, this approach did not produce statistically significant improvements. We developed a script that takes as input a BIO-tagged file, a type, and a gazetteer for that type, and produces a modified version of the file with entity instances of the type replaced with a random entity selected from the gazetteer. Additional arguments control whether all instances of a given entity in the input BIO file are replaced with the same gazetteer entity, and specify the random seed, to support repeatable experiments. Our current experiments were run by replacing entities for all types and allowing a given input entity to be replaced with a different gazetteer entity each time it appears. Figure 3 shows an example with an original annotated sentence from OntoNotes on the left, and a new, generated training instance on the right. A minimal sketch of such a replacement step is given below.
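The helper below is our own illustration of that replacement step; the actual script's command-line interface, file handling, and same-entity consistency option are not reproduced here.

```python
import random

# Replace each entity span of `etype` in a BIO-tagged sentence with a
# random gazetteer name of the same type; `seed` makes runs repeatable.
def augment(tagged, etype, gazetteer, seed=0):
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(tagged):
        token, tag = tagged[i]
        if tag == "B-" + etype:
            j = i + 1
            while j < len(tagged) and tagged[j][1] == "I-" + etype:
                j += 1
            new = rng.choice(gazetteer).split()
            out += list(zip(new, ["B-" + etype] + ["I-" + etype] * (len(new) - 1)))
            i = j
        else:
            out.append((token, tag))
            i += 1
    return out

sent = [("Jim", "B-PER"), ("visited", "O"), ("Boston", "B-GPE"), (".", "O")]
print(augment(sent, "GPE", ["Hong Kong", "Rabat"], seed=1))
```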
## 5 Architecture

Hyperparameter | Value
---|---
BiLSTM layers | $1$
BiLSTM hidden size | $256$
BiLSTM dropout | $.5$
Optimizer | adafactor
Gradient clipping | $1.0$
Learning rate scheduler | cosine decay
BERT layers used | $-4,-3,-2,-1$
Weight decay | $.005$
Mini batch size | $8$

Table 2: Default hyperparameters used in the baseline model

We use a common baseline BiLSTM-CRF model like many sequence-to-sequence closed-world NER systems (?), which includes a stacked bi-directional recurrent neural network with long short-term memory units and a conditional random field decoder, and is similar to ? (?) without the character-level CNN. We combine this system with BERT (?), which is a stack of bi-directional transformer encoders. We keep BERT frozen during training and testing, feeding the text into BERT and concatenating its final four layers as input to our BiLSTM-CRF. In addition, the features generated from gazetteers are concatenated with the outputs from BERT and fed into the BiLSTM-CRF. Table 2 shows the hyperparameters used for our experiments. We did not perform a hyperparameter search.

Figure 5: Architecture used in our models, with baseline components in blue and additional gazetteer features in green.

## 6 Experimental Data Sets

To demonstrate the effectiveness of our new approaches to NER, datasets labeled with names are required. For English and Chinese, there are established datasets. We chose OntoNotes v5.0 (?) because it has a large number of labeled entities. Table 3 contains the statistics for these datasets. Russian, on the other hand, has little labeled data for NER. We chose to create our own dataset of Russian informal text. This section describes the construction of this collection, as well as its tag set and statistics.

The Russian Reddit collection comes from Russian comments collected over 433 threads on the Reddit online discussion forum (?). Reddit organizes threads around a submission that is posted to a channel. The first step in building the collection was to identify Russian threads. Annotators examined threads with at least ten comments in a majority of Cyrillic characters to determine whether the thread was written in Russian. We eliminated images and movies from the thread seeds, as well as seeds from sites primarily devoted to image content; such seeds typically contain few named entities in their comments. Over 30,000 threads met these criteria. Threads were prioritized based on the source of the material in the submission, where newswire and blogs were preferred. Annotators examined around 800 of these threads and identified the language of the comments, 433 of which were in Russian. These comments were automatically sentence-segmented using CoreNLP (?), so that named entity tagging could be performed by annotators at the sentence level. The Dragonfly annotation tool (?) was used to record the entity tags through an in-house Mechanical Turk-like interface.

One goal of this collection was to have a wider variety of entity types so that future research could investigate types that have varying frequencies of attestation. Beyond the common core types, types were chosen that were sufficiently attested in the data. In addition, we desired to have a few subtypes of the common types to be able to experiment with this hierarchical relationship. To assure quality annotations, either sentences were doubly annotated and a third annotator reviewed disagreements, or the sentence was singly annotated and a second annotator reviewed the annotations. The inter-annotator agreement on the doubly annotated text was 53%. The annotators agreed on whether a token was part of a name 63% of the time. Since agreement was measured at the token level, both a name's tag and span had to match exactly. Finally, the collection was split 80-10-10 into train, development, and test, respectively. Table 3 shows the size of the collection, which was labeled with 16 types as shown in Table 1. The frequency of each type is shown in Table 4. The data set is available from https://github.com/hltcoe/rus-reddit-ner-dataset.

Dataset | Type | Train | Test | Dev
---|---|---|---|---
English OntoNotes | Sentences | 82.1k | 9.0k | 12.7k
 | Tokens | 1644.2k | 172.1k | 251.0k
 | Entities | 70.3k | 6.9k | 10.9k
Chinese OntoNotes | Sentences | 37.5k | 4.3k | 6.2k
 | Tokens | 1241.1k | 149.7k | 178.4k
 | Entities | 37.9k | 4.5k | 5.4k
Russian Reddit | Sentences | 22.8k | 3.2k | 3.1k
 | Tokens | 281.7k | 39.3k | 37.9k
 | Core Ent. | 8.1k | 1.1k | 1.0k
 | Extended Ent. | 11.2k | 1.5k | 1.4k

Table 3: Statistics of dataset sizes

Tag Type | English OntoNotes | Chinese OntoNotes | Russian Reddit
---|---|---|---
PER | 27.4k | 14.1k | 3.3k
ORG | 30.0k | 10.1k | 1.1k
COMM | - | - | 409
POL | - | - | 174
GPE | 28.2k | 20.2k | 5.5k
LOC | 2.7k | 2.7k | 451
FAC | - | - | 50
GOVT | - | - | 36
AIR | - | - | 5
EVNT | - | - | 152
VEH | - | - | 63
COMP | - | - | 273
MIL_G | - | - | 260
MIL_N | - | - | 139
CHEM | - | - | 21
MISC | - | - | 1.5k

Table 4: Statistics of datasets by tag type

## 7 Results using Gazetteer Features

We use our models for NER tasks on the English OntoNotes, Chinese OntoNotes, and Russian Reddit datasets. For each, we run our baseline models and the models with added gazetteer features at least ten times, depending on the size of the collection. Smaller collections were run a greater number of times because of the greater variability in the output. Performance is reported as precision (P), recall (R), and their harmonic mean (F1). Statistics of the datasets are shown in Tables 3 and 4.
The statistics for gazetteer coverage for the individual datasets are shown in Table 6. We use only the four core types for English and Chinese because our gazetteer tag types do not include the extended OntoNotes types. However, we experiment with both core and extended types for the Russian dataset. For each experiment, we train for a fixed number of epochs and choose the model that shows the minimum loss on the development set.

### 7.1 English and Chinese OntoNotes

We use the English OntoNotes v5.0 dataset compiled for the CoNLL-2013 shared task (?) and follow the standard train/dev/test split as presented in (?). We use the pre-trained Cased BERT-Base with 12 layers, 768 hidden units, 12 heads, and 110M parameters, available on the Google GitHub. The experiment is run for 10 trials and trained for 30 epochs. The model with the minimum dev set loss is selected and run on the test set. Table 5 shows our experiment results. We compute the p-value of the distribution using a $t$-test. We show that adding gazetteer features increases the F1 score by $0.52$, an improvement that is statistically significant ($p<0.001$). We attribute this to an even coverage of the percentage of entities across the train, dev, and test sets, as seen in Table 6, as well as a high coverage (high 80s) for GPE entities, the entity type with the largest F1 gain.

We use the Chinese OntoNotes v5.0 dataset with four core types compiled for CoNLL-2013, and follow the standard train/dev/test split as before. We use the pre-trained Chinese BERT-Base for simplified and traditional Chinese, which has 12 layers, 768 hidden units, 12 heads, and 110M parameters. The experiment is run for 10 trials and trained for 30 epochs. The model with the minimum loss on the dev set is selected for testing. The gazetteer feature leads to a statistically significant improvement ($p=0.003$), which we attribute to high GPE coverage and even coverage across dataset splits. However, the absolute increase in F1 score is around 0.3, which is lower than for the English dataset. We believe Chinese showed less improvement because of our decision to forgo partial matches, given the high frequency of partially matched n-grams stemming from the language's logographic nature.

Dataset | Model | P | R | F1
---|---|---|---|---
English | Baseline | 92.46 | 91.77 | 92.11 (SD: $0.10$)
 | Gazetteer | 92.82 | 92.44 | 92.63 (SD: $0.12$)
 | +Aliases | 92.69 | 92.50 | 92.59 (SD: $0.11$)
Chinese | Baseline | 83.40 | 84.63 | 84.01 (SD: $0.16$)
 | Gazetteer | 83.91 | 84.72 | 84.31 (SD: $0.23$)
 | +Aliases | 83.84 | 84.76 | 84.30 (SD: $0.25$)

Table 5: Performance of the BERT-BiLSTM-CRF baseline and $+$ gazetteer features on English and Chinese OntoNotes; SD stands for standard deviation

### 7.2 Russian Reddit Dataset

We use the Russian Reddit dataset to evaluate the performance of Russian NER. We use the pre-trained Multilingual Cased BERT-Base with 12 layers, 768 hidden units, 12 heads, and 110M parameters. We use the same baseline BERT-BiLSTM-CRF model with the gazetteer features added. For the Russian Reddit dataset, the experiment is run for 20 trials with 30 epochs. The model with the minimum loss on dev is selected for testing. The different number of trials is due to the smaller size of this dataset, as shown in Table 3. We report experiments with both core types and extended types using the Russian Reddit dataset. Table 7 shows Russian dataset experiments with gazetteers and different tag types.
Dataset | Type | Train | Test | Dev
---|---|---|---|---
English OntoNotes | PER | 38.2% | 44.3% | 37.5%
 | ORG | 19.3% | 17.2% | 19.0%
 | GPE | 88.7% | 86.8% | 87.2%
 | LOC | 26.3% | 23.7% | 30.0%
Chinese OntoNotes | PER | 24.0% | 21.2% | 21.6%
 | ORG | 18.0% | 17.4% | 23.4%
 | GPE | 76.2% | 75.4% | 77.2%
 | LOC | 18.1% | 17.4% | 14.1%
Russian Reddit | PER | 12.5% | 15.4% | 11.4%
 | ORG | 16.9% | 26.6% | 9.8%
 | COMM | 31.4% | 33.3% | 5.3%
 | GPE | 23.4% | 23.1% | 20.2%
 | LOC | 7.4% | 0% | 5.8%
 | FAC | 4.3% | 50.0% | 0%
 | GOVT | 7.7% | 0% | 0%
 | AIR | 0% | 0% | 0%
 | EVNT | 3.4% | 0% | 0%
 | VEH | 6.1% | 11.1% | 20.0%
 | COMP | 24.0% | 17.6% | 0%
 | MIL_G | 0% | 0% | 0%
 | MIL_N | 0% | 0% | 0%
 | CHEM | 0% | 0% | 14.3%
 | MISC | 0% | 0% | 0%

Table 6: Statistics for the entity types and subtypes for each of the three collections. Our OntoNotes data only covered the core types, while our Russian Reddit data included additional types and subtypes.

While the mean of the trials is slightly higher for those with gazetteer features, none of the results shows statistical significance. We attribute this to (a) the lower coverage of our gazetteer for the entities in the dataset, and (b) uneven gazetteer coverage across the train, dev, and test sets, as seen in Table 6. Table 7 also reports results from using inflected and familiar forms of entity canonical names and aliases, as described in Section 3. However, our takeaway here is that adding gazetteer features does not hurt the performance of the neural systems, but only improves it when the gazetteer has high coverage, as can be seen in the English and Chinese experiments.

## 8 Results for Training Data Augmentation

Using our gazetteers to produce additional training data produced mixed results and generally was inconsistent in improving performance for our models using BERT, for either the four core types or the extended set. We ran early experiments using FastText embeddings (?) and found that using our gazetteers with Russian inflections for PER improved performance for most types. However, we did not see the same gains when using BERT.

Tags-Model | P | R | F1
---|---|---|---
C-Baseline | 80.21 | 72.12 | 75.95 (SD: 0.43)
C-Gazetteer | 79.81 | 72.03 | 75.72 (SD: 0.44)
C-Inflected | 79.75 | 72.01 | 75.68 (SD: 0.42)
C-Alias | 79.68 | 72.05 | 75.67 (SD: 0.48)
E-Baseline | 73.36 | 56.88 | 64.08 (SD: 0.44)
E-Gazetteer | 73.33 | 57.05 | 64.17 (SD: 0.58)
E-Inflected | 73.31 | 57.01 | 64.14 (SD: 0.51)
E-Alias | 73.08 | 57.08 | 64.10 (SD: 0.48)

Table 7: Performance of the BERT-BiLSTM-CRF baseline and the model with gazetteer features on the Russian Reddit dataset for the Core (C) and Extended (E) tag sets

This may be due to several reasons. First, both core and extended types are quite broad. Replacing the annotated ORG Harvard University with the gazetteer ORG Disneyland in the sentence "Professor Pinker teaches at Harvard University" seems anomalous to us, and probably also to our model. Second, our experiments were done with relatively small amounts of annotated training data, especially for Russian. While drawing on gazetteer data may help introduce new patterns not present in the training data, such as ORGs beginning with "Association of", the chances of this helping when evaluated with the relatively small test partition are low. Third, entity names were extracted from Wikidata without regard to their utility, including both very prominent entities (e.g., the LOC Atlantic Ocean) and very obscure ones (e.g., Avalonia, a microcontinent in the Paleozoic era).
We plan to further explore this use case by replacing annotated entities with gazetteer entities that are in the same finer-grained Wikidata type. We can readily identify all of the Wikidata types to which a reasonably prominent entity (e.g., Harvard University) belongs. We will select several hundred of these types as targets for gazetteer entity replacement (e.g., Q2385804 – education institution) that are similar to an annotated entity. We can then associate gazetteer entities with these target types, replacing an entity such as "Harvard University" with an entity that is more similar, e.g., "Swarthmore College" or "Loyola Academy". We also plan to limit obscure entities using a measure of prominence derived from Wikidata metrics, such as their number of incoming and outgoing links.

## 9 Conclusion and Future Work

We present a simple way to generate a gazetteer and show how it can be used in neural NER systems. We also present a new Russian NER corpus gathered from Reddit. We show that with enough coverage of the dataset, gazetteer features improve neural NER systems, even systems using deep pre-trained models such as BERT. We hypothesize that how well tuned the signal is between the gazetteer and the training set greatly impacts how much the neural system learns to pay attention to the gazetteer. Modifying the gazetteer based on the training data is a path we plan to explore. In general, we believe gazetteer features should be a standard addition to any NER system, and we show that even with low coverage, the gazetteer features do not hurt the performance of neural NER systems. While our gazetteer data augmentation did not show consistent improvement, we believe that future work on more sophisticated and contextualized replacement schemes will benefit low-resource languages such as Russian. In addition, the noisiness of the gazetteers may have a great impact on performance, since the NER system may learn not to trust a gazetteer that does not assist with tagging a sufficient number of times. Future work will identify techniques to produce gazetteers that are trustworthy relative to the training data, to see if such gazetteers can be shown to be more helpful. Gazetteers and associated software are available from https://github.com/hltcoe/gazetteer-collection.

## Acknowledgments

We thank Johns Hopkins University, the Human Language Technology Center of Excellence and the 2019 SCALE workshop for their hospitality and for facilitating an excellent research environment.

## References

* [Bojanowski et al. 2017] Bojanowski, P.; Grave, E.; Joulin, A.; and Mikolov, T. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5:135–146.
* [Chiu and Nichols 2016] Chiu, J. P., and Nichols, E. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans. of the ACL 4:357–370.
* [Collobert et al. 2011] Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; and Kuksa, P. 2011. Natural language processing (almost) from scratch. JMLR 12(Aug).
* [Devlin et al. 2018] Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
* [Ding et al. 2019] Ding, R.; Xie, P.; Zhang, X.; Lu, W.; Li, L.; and Si, L. 2019. A neural multi-digraph model for Chinese NER with gazetteers. In Proc. ACL, 1462–1467.
* [Finkel, Grenager, and Manning 2005] Finkel, J. R.; Grenager, T.; and Manning, C. 2005.
Incorporating non-local information into information extraction systems by Gibbs sampling. In Proc. ACL, 363–370.

* [Gers, Schmidhuber, and Cummins 2000] Gers, F. A.; Schmidhuber, J.; and Cummins, F. 2000. Learning to forget: Continual prediction with LSTM. Neural Computation 12(10):2451–2471.
* [Ghaddar and Langlais 2016] Ghaddar, A., and Langlais, P. 2016. WikiCoref: An English coreference-annotated corpus of Wikipedia articles. In Proc. LREC.
* [Graves and Schmidhuber 2005] Graves, A., and Schmidhuber, J. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks 18(5-6):602–610.
* [Hammerton 2003] Hammerton, J. 2003. Named entity recognition with long short-term memory. In NAACL, CoNLL '03, 172–175.
* [Hochreiter and Schmidhuber 1997] Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Comput. 9(8):1735–1780.
* [Huang, Xu, and Yu 2015] Huang, Z.; Xu, W.; and Yu, K. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR abs/1508.01991.
* [Lin et al. 2018] Lin, Y.; Costello, C.; Zhang, B.; Lu, D.; Ji, H.; Mayfield, J.; and McNamee, P. 2018. Platforms for non-speakers annotating names in any language. In Proceedings of ACL 2018, System Demonstrations, 1–6. Association for Computational Linguistics.
* [Ma and Hovy 2016] Ma, X., and Hovy, E. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proc. ACL.
* [Ma and Sun 2016] Ma, S., and Sun, X. 2016. A new recurrent neural CRF for learning non-linear edge features. arXiv:1611.04233.
* [Manning et al. 2014] Manning, C. D.; Surdeanu, M.; Bauer, J.; Finkel, J.; Bethard, S. J.; and McClosky, D. 2014. The Stanford CoreNLP natural language processing toolkit. In Proc. ACL Demos.
* [Mayfield, McNamee, and Piatko 2003] Mayfield, J.; McNamee, P.; and Piatko, C. 2003. Named entity recognition using hundreds of thousands of features. In 7th Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CoNLL '03, 184–187. Association for Computational Linguistics.
* [Passos, Kumar, and McCallum 2014] Passos, A.; Kumar, V.; and McCallum, A. 2014. Lexicon infused phrase embeddings for named entity resolution. In Conf. on Computational Natural Language Learning.
* [Pellissier Tanon et al. 2016] Pellissier Tanon, T.; Vrandečić, D.; Schaffert, S.; Steiner, T.; and Pintscher, L. 2016. From Freebase to Wikidata: The great migration. In Int. Conf. on World Wide Web.
* [Pradhan and Ramshaw 2017] Pradhan, S., and Ramshaw, L. 2017. OntoNotes: Large scale multi-layer, multi-lingual, distributed annotation. In Handbook of Linguistic Annotation. Springer. 521–554.
* [Pradhan et al. 2013] Pradhan, S.; Moschitti, A.; Xue, N.; Ng, H. T.; Björkelund, A.; Uryupina, O.; Zhang, Y.; and Zhong, Z. 2013. Towards robust linguistic analysis using OntoNotes. In Conf. on Computational Natural Language Learning.
* [Ratinov and Roth 2009] Ratinov, L., and Roth, D. 2009. Design challenges and misconceptions in named entity recognition. In Conf. on Computational Natural Language Learning, 147–155.
* [Reddit 2019] Reddit. 2019. Reddit web site. http://reddit.com/.
* [Song et al. 2020] Song, C. H.; Lawrie, D.; Finin, T.; and Mayfield, J. 2020. Gazetteer generation for neural named entity recognition. In Florida Artificial Intelligence Research Symposium.
* [Sundheim 1993] Sundheim, B. M. 1993. TIPSTER/MUC-5: Information extraction system evaluation. In Proceedings of the 5th Conference on Message Understanding, 27–44.
Association for Computational Linguistics.

* [Vrandečić and Krötzsch 2014] Vrandečić, D., and Krötzsch, M. 2014. Wikidata: A free collaborative knowledgebase. Commun. ACM 57(10).
2024-09-04T02:54:57.773516
2020-03-06T08:58:25
2003.03085
{ "authors": "Hanaa Zitane, Ali Boutoulout, Delfim F. M. Torres", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26075", "submitter": "Delfim F. M. Torres", "url": "https://arxiv.org/abs/2003.03085" }
arxiv-papers
###### Abstract

We investigate the stability and stabilization concepts for infinite dimensional time fractional differential linear systems in Hilbert spaces with Caputo derivatives. Firstly, based on a family of operators generated by strongly continuous semigroups and on a probability density function, we provide sufficient and necessary conditions for the exponential stability of the considered class of systems. Then, by assuming that the system dynamics is symmetric and uniformly elliptic and by using the properties of the Mittag–Leffler function, we provide sufficient conditions that ensure strong stability. Finally, we characterize an explicit feedback control that guarantees the strong stabilization of a controlled Caputo time fractional linear system through a decomposition approach. Some examples are presented that illustrate the effectiveness of our results.

###### keywords:

fractional differential equations; fractional diffusion systems; Caputo derivative; stability and stabilization in Hilbert spaces; decomposition method.

Submitted: 31 Dec 2019; Revised: 21 and 28 Feb 2020; Accepted: 02 March 2020. Published in: Mathematics 2020, Volume 8, Issue 3, 353. DOI: 10.3390/math8030353

The Stability and Stabilization of Infinite Dimensional Caputo-Time Fractional Differential Linear Systems

Hanaa Zitane 1,†, Ali Boutoulout 2,† and Delfim F. M. Torres 3,†*

* Correspondence: <EMAIL_ADDRESS>; Tel.: +351-234-370-668 (D.F.M.T.)

† The authors contributed equally to this work.

MSC: 26A33; 93D15.

## 1 Introduction

Fractional order calculus is a natural generalization of classical integer order calculus. It deals with integrals and derivatives of an arbitrary real or complex order. Fractional order calculus has become very popular in recent years, due to its demonstrated applications in many fields of applied sciences and engineering, such as the spread of contaminants in underground water, the charge transport in amorphous semiconductors, and the diffusion of pollution in the atmosphere Rahimy (2010); Kilbas et al. (2006); Diethelm (2010). Because it generalizes and includes in the limit the integer order calculus, fractional calculus has the potential to accomplish much more than what integer order calculus achieves Hilfer (2000). In particular, it has proved to be a powerful tool to describe long-term memory and hereditary properties of various dynamical complex processes Sabatier et al. (2007), diffusion processes, such as those found in batteries Gabano and Poinot (2011) and electrochemical and control processes Ichise et al. (1971), to model and control epidemics Rosa and Torres (2019); Silva and Torres (2019), and to model mechanical properties of viscoelastic systems and damping materials, such as stress and strain Bagley and Calico (1991). One can find in the literature several different fractional calculi. Here we use the fractional calculus of Caputo, which was introduced by Michele Caputo in his 1967 paper Caputo (2008). Such calculus appeared, in a natural way, for representing observed phenomena in laboratory experiments and field observations, where the mathematical theory was checked against experimental data. Indeed, the operator introduced by Caputo in 1967, and used by us in the present work, represents an observed linear dissipative mechanism with a time derivative of order 0.15 entering the stress-strain relation Caputo (2008).
More recently, a variational analysis with Caputo operators has been developed, which provides further mathematical substance to the use of Caputo fractional operators Malinowska and Torres (2012); Almeida et al. (2015).

In the analysis and design of control systems, the stability issue has always played an important role Mahmoud and Karaki (2018); Rocha et al. (2018). For a dynamical system, an equilibrium state is said to be stable if the system remains close to this state for small disturbances, while for an unstable system the question is how to stabilize it, especially by a feedback control law Sontag (2012). The stabilization concept for integer order systems and related problems has been considered in several works; see, e.g., Pritchard and Zabczyk (1981); Curtain and Zwart (1995); Triggiani (1975); Balakrishnan (1981) and references cited therein. In Pritchard and Zabczyk (1981), the relationship between the asymptotic behavior of a system, the spectrum properties of its dynamics, and the existence of a Lyapunov functional is provided. Several techniques are considered to study different kinds of stabilization: for example, exponential stabilization is studied via a decomposition method Triggiani (1975), while strong stabilization is developed using the Riccati approach Balakrishnan (1981). As with classical dynamical systems, stability analysis is a central task in the study of fractional dynamical systems, which has attracted the increasing interest of many researchers Silva and Torres (2019); Wojtak et al. (2018). For finite dimensional systems, the stability concept for fractional differential systems equipped with the Caputo derivative is investigated in many works Zhang et al. (2011). In Matignon (1996), Matignon studies the asymptotic behavior of linear fractional differential systems with the Caputo derivative, where the dynamics $A$ is a constant coefficient matrix. In this case, stability is guaranteed if the eigenvalues of the dynamics matrix $A$, $\lambda\in\sigma(A)$, satisfy $|\arg(\lambda)|>\dfrac{\pi\alpha}{2}$ Matignon (1996). Since then, many scholars have carried out further studies on stability for different classes of fractional linear systems Qian et al. (2010); Li et al. (2010). In Qian et al. (2010), stability theorems for fractional differential systems, including linear systems, time-delayed systems, and perturbed systems, are established, while in Li et al. (2010) the authors provide results on Mittag–Leffler stability and propose a Lyapunov direct method, which covers power law stability and exponential stability. See also Matar and Abu Skhail (2018), where the Mittag–Leffler and class-K function stability of fractional differential equations of order $\alpha\in(1,2)$ are investigated. In 2018, the notion of regional stability was introduced for fractional systems in Ge et al. (2018), where the authors study the Mittag–Leffler stability and the stabilization of systems with Caputo derivatives, but only on a sub-region of its geometrical domain. More recently, fractional output stabilization problems for distributed systems in the Riemann–Liouville sense were studied in Zitane et al. (2020, in press, 2019), where feedback controls, which ensure exponential, strong, and weak stabilization of the state fractional spatial derivatives, with real and complex orders, are characterized.
An analysis of the literature shows that existing results on the stability of fractional systems are essentially limited to finite-dimensional fractional order linear systems, while results on infinite-dimensional spaces are a rarity. In contrast, here we investigate global stability and stabilization of infinite dimensional fractional dynamical linear systems in the Hilbert space $L^{2}(\Omega)$ with Caputo derivatives of fractional order $0<\alpha<1$. In particular, we characterize exponential and strong stability for fractional Caputo systems on infinite-dimensional spaces.

The remainder of this paper is organized as follows. In Section 2, some basic knowledge of fractional calculus and some preliminary results, which will be used throughout the paper, are given. In Section 3, we prove results on the global asymptotic and exponential stability of Caputo-time fractional differential linear systems. In contrast with available results in the literature, which are restricted to systems of integer order or to fractional systems in the finite dimensional state space $\mathbb{R}^{n}$, here we study a completely different class of systems: we investigate fractional linear systems where the state space is the Hilbert space $L^{2}(\Omega)$. We also characterize the stabilization of a controlled Caputo diffusion linear system via a decomposition method. Section 4 presents the main conclusions of the work and some interesting open questions that deserve further investigation.

## 2 Preliminaries and Notation

In this section, we introduce several definitions and results of fractional calculus that are used in the sequel.

###### Definition (See Kilbas et al. (2006)).

Let $0<\alpha<1$ and $T>0$. The Caputo derivative of fractional order $\alpha$ for an absolutely continuous function $y(\cdot)$ on $[0,T]$ can be defined as follows: ${}^{C}D_{t}^{\alpha}y(t)=\dfrac{1}{\Gamma(1-\alpha)}\int_{0}^{t}(t-s)^{-\alpha}\dfrac{d}{ds}y(s)\,\mathrm{d}s,$ where $\Gamma(1-\alpha)$ is the Euler Gamma function.

###### Definition (See Zhou and Jiao (2010)).

For any given function $g\in L^{2}(0,T,L^{2}(\Omega))$, we say that a function $y\in C(0,T,L^{2}(\Omega))$ is a mild solution of the system

$\left\\{\begin{array}[]{ll}{}^{C}D_{t}^{\alpha}y(t)=Ay(t)+g(t)&t\in]0,+\infty[\\\ y(0)=y_{0}&y_{0}\in L^{2}(\Omega)\end{array}\right.$ (1)

if it satisfies

$y(t)=S_{\alpha}(t)y_{0}+\int_{0}^{t}(t-s)^{\alpha-1}K_{\alpha}(t-s)g(s)\,\mathrm{d}s,$ (2)

where

$S_{\alpha}(t)=\int_{0}^{+\infty}\Psi_{\alpha}(\theta)S(t^{\alpha}\theta)\,\mathrm{d}\theta$ (3)

and

$K_{\alpha}(t)=\alpha\int_{0}^{+\infty}\theta\Psi_{\alpha}(\theta)S(t^{\alpha}\theta)\,\mathrm{d}\theta$ (4)

with

$\Psi_{\alpha}(\theta)=\frac{1}{\alpha}\theta^{-1-\frac{1}{\alpha}}T_{\alpha}(\theta^{-\frac{1}{\alpha}}),$ (5)

$(S(t))_{t\geq 0}$ the strongly continuous semigroup generated by operator $A$, and $T_{\alpha}$ the probability density function defined on $(0,\infty)$ by $T_{\alpha}(\theta)=\frac{1}{\pi}\sum_{n=1}^{+\infty}(-1)^{n}\theta^{\alpha n-1}\frac{\Gamma(n\alpha+1)}{n!}\sin(n\pi\alpha).$

###### Lemma (See Mainardi et al. (2007)).

The probability density function $T_{\alpha}$ defined on $(0,\infty)$ satisfies $T_{\alpha}(\theta)\geq 0,\quad\theta\in(0,\infty),\quad\text{ and }\int_{0}^{+\infty}T_{\alpha}(\theta)\,\mathrm{d}\theta=1.$

###### Definition (See Erdélyi et al. (1981)).

The Mittag–Leffler function of one parameter is defined as $E_{\eta}(z)=\sum_{n=0}^{+\infty}\frac{z^{n}}{\Gamma(\eta n+1)}\quad\text{with}\quad Re(\eta)>0,\quad z\in\mathbb{C}.$
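The series definition above can be evaluated directly for moderate arguments, which is often enough for sanity checks. The following sketch (ours, not from the paper) truncates the series and verifies that $E_{1}(z)=e^{z}$ and that $E_{1/2}(-t^{1/2})$ decreases, in line with the complete monotonicity results recalled next.

```python
import math

# Truncated series for the one-parameter Mittag-Leffler function; reliable
# only for moderate |z| (for large |z| the partial sums suffer catastrophic
# cancellation and asymptotic expansions should be used instead).
def mittag_leffler(alpha, z, terms=100):
    return sum(z ** n / math.gamma(alpha * n + 1) for n in range(terms))

print(mittag_leffler(1.0, 1.0), math.e)  # E_1(1) = e
print([round(mittag_leffler(0.5, -math.sqrt(t)), 4) for t in (0.5, 1, 2, 4)])
```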
###### Definition (See Erdélyi et al. (1981)).

The Mittag–Leffler function of two parameters is defined as $E_{\eta,\beta}(z)=\sum_{n=0}^{+\infty}\frac{z^{n}}{\Gamma(\eta n+\beta)}\quad\text{with}\quad Re(\eta)>0,\quad\beta>0,\quad z\in\mathbb{C}.$

The Mittag–Leffler function appears naturally in the solution of fractional differential equations and in various applications: see Erdélyi et al. (1981) and references therein. The exponential function is a special case of the Mittag–Leffler function Joshi et al. (2020): for $\beta=1$ one has $E_{\eta,1}(z)=E_{\eta}(z)$ and $E_{1,1}(z)=e^{z}$.

###### Lemma (See Mainardi (2014)).

The Mittag–Leffler function $E_{\alpha}(-t^{\alpha})$ is completely monotonic: for all $0<\alpha<1$, all $n\in\mathbb{N}$, and $t>0$, one has $(-1)^{n}\frac{d^{n}}{dt^{n}}E_{\alpha}(-t^{\alpha})\geq 0.$

###### Lemma (See Schneider (1996)).

The generalized Mittag–Leffler function $E_{\eta,\beta}(-x)$, $x\geq 0$, is completely monotonic for $\eta,\beta>0$ if and only if $\eta\in(0,1]$ and $\beta\geq\eta$.

###### Lemma (See Podlubny (1999)).

Let $\beta>0$, $0<\eta<2$, and $\mu$ be an arbitrary real number such that $\frac{\pi\eta}{2}<\mu<\min\\{\pi,\pi\eta\\}$. Then, the following asymptotic expressions hold:

* • if $|\arg(z)|\leq\mu$ and $|z|>0$, then $|E_{\eta,\beta}(z)|\leq M_{1}(1+|z|)^{(1-\beta)/\eta}e^{Re(z^{\frac{1}{{\eta}}})}+\frac{M_{2}}{1+|z|};$ (6)
* • if $\mu<|\arg(z)|\leq\pi$ and $|z|\geq 0$, then $|E_{\eta,\beta}(z)|\leq\frac{M_{2}}{1+|z|},$ (7)

where $M_{1}$ and $M_{2}$ are positive constants.

## 3 Main Results

Our main goal is to study the stability of, and provide stabilization for, a class of abstract Caputo-time fractional differential linear systems.

### 3.1 Stability of Time Fractional Differential Systems

Let $\Omega$ be an open bounded subset of $\mathbb{R}^{n}$, $n=1,2,3,\ldots$, and let us consider the following abstract time fractional order differential system:

$\left\\{\begin{array}[]{ll}{}^{C}D_{t}^{\alpha}z(t)=Az(t),&t\in\,]0,+\infty[,\\\ z(0)=z_{0},&z_{0}\in L^{2}(\Omega),\end{array}\right.$ (8)

where ${}^{C}D_{t}^{\alpha}$ is the left-sided Caputo fractional derivative of order $0<\alpha<1$, and the second order operator $A:D(A)\subset L^{2}(\Omega)\longrightarrow L^{2}(\Omega)$ is linear with dense domain, has coefficients that do not depend on time $t$, and is the infinitesimal generator of the $C_{0}$-semi-group $(S(t))_{t\geq 0}$ on the Hilbert state space $L^{2}(\Omega)$, endowed with its usual inner product $<\cdot,\cdot>$ and the corresponding norm $\Arrowvert\cdot\Arrowvert$.

The unique mild solution of system (8) can be written, using the representation (2) (with $g\equiv 0$), as $z(t)=S_{\alpha}(t)z_{0},$ where $S_{\alpha}(t)$ is defined by (3). We begin by proving the following lemma, which will be used thereafter.

###### Lemma 3.1.

Let $A$ be the infinitesimal generator of a $C_{0}$-semi-group $(S(t))_{t\geq 0}$ on the Hilbert space $L^{2}(\Omega)$. Assume that there exists a function $h(\cdot)\in L^{2}(0,+\infty;\mathbb{R}^{+})$ satisfying

$\Arrowvert S_{\alpha}(t+s)z\Arrowvert\leq h(t)\Arrowvert S_{\alpha}(s)z\Arrowvert,\quad\forall\,t,s\geq 0,\quad\forall\,z\in L^{2}(\Omega).$ (9)

Then the operators $(S_{\alpha}(t))_{t\geq 0}$ are uniformly bounded.

###### Proof.
To prove that the operators $(S_{\alpha}(t))_{t\geq 0}$ are bounded, we have to show that $$\forall z\in L^{2}(\Omega),\quad\underset{t\geq 0}{\sup}\,\Arrowvert S_{\alpha}(t)z\Arrowvert<\infty.$$ (10) By _reductio ad absurdum_, suppose that (10) does not hold. Then there exist $z\in L^{2}(\Omega)$, $t_{s}>0$, and a sequence $\tau_{n}\longrightarrow+\infty$ satisfying $$\Arrowvert S_{\alpha}(t_{s}+\tau_{n})z\Arrowvert\longrightarrow+\infty\quad\text{as}\quad n\longrightarrow+\infty.$$ (11) From the change-of-variable relation $$\displaystyle\int_{0}^{+\infty}\Arrowvert S_{\alpha}(s+\tau_{n})z\Arrowvert^{2}\,\mathrm{d}s=\displaystyle\int_{\tau_{n}}^{+\infty}\Arrowvert S_{\alpha}(s)z\Arrowvert^{2}\,\mathrm{d}s,$$ whose right-hand side is finite because (9) with $s=0$ gives $\Arrowvert S_{\alpha}(t)z\Arrowvert\leq h(t)\Arrowvert z\Arrowvert$ with $h\in L^{2}(0,+\infty;\mathbb{R}^{+})$, it follows that the right-hand side goes to $0$ as $n\longrightarrow+\infty$. Using Fatou's Lemma yields $$\underset{n\longrightarrow+\infty}{\lim\inf}~\Arrowvert S_{\alpha}(s+\tau_{n})z\Arrowvert=0\quad\forall\,s>0.$$ Therefore, for some $s_{0}<t_{s}$, we may find a subsequence $(\tau_{n_{k}})$ such that $$\underset{k\longrightarrow+\infty}{\lim}\Arrowvert S_{\alpha}(s_{0}+\tau_{n_{k}})z\Arrowvert=0.$$ By virtue of condition (9), one obtains $$\Arrowvert S_{\alpha}(t_{s}+\tau_{n_{k}})z\Arrowvert\leq h(t_{s}-s_{0})\Arrowvert S_{\alpha}(s_{0}+\tau_{n_{k}})z\Arrowvert\underset{k\longrightarrow+\infty}{\longrightarrow}0,$$ which contradicts (11). The intended conclusion then follows from the uniform boundedness principle. ∎

Definition 3.2. Let $z_{0}\in L^{2}(\Omega)$. System (8) is said to be exponentially stable if there exist two strictly positive constants $M>0$ and $\omega>0$ such that $$\Arrowvert z(t)\Arrowvert\leq Me^{-\omega t}\Arrowvert z_{0}\Arrowvert,\quad\forall t\geq 0.$$

The next theorem provides necessary and sufficient conditions for the exponential stability of the abstract fractional order differential system (8).

Theorem 3.3. Suppose that the operators $(S_{\alpha}(t))_{t\geq 0}$ fulfill assumption (9) and $$\forall z\in L^{2}(\Omega),\quad\Arrowvert S_{\alpha}(t+s)z\Arrowvert\leq\Arrowvert S_{\alpha}(t)z\Arrowvert\cdot\Arrowvert S_{\alpha}(s)z\Arrowvert,\quad\forall t,s\geq 0.$$ (12) Then system (8) is exponentially stable if, and only if, for every $z\in L^{2}(\Omega)$ there exists a positive constant $\delta<\infty$ such that $$\displaystyle\int_{0}^{+\infty}\Arrowvert S_{\alpha}(t)z\Arrowvert^{2}\,\mathrm{d}t<\delta.$$ (13)

###### Proof.
One has $$\begin{split}t\Arrowvert S_{\alpha}(t)z\Arrowvert^{2}&=\displaystyle\int_{0}^{t}\Arrowvert S_{\alpha}(t)z\Arrowvert^{2}\,\mathrm{d}s\\ &=\displaystyle\int_{0}^{t}\Arrowvert S_{\alpha}(t-s+s)z\Arrowvert^{2}\,\mathrm{d}s.\end{split}$$ Combining condition (12), Lemma 3.1, and condition (13), one gets $$\begin{split}t\Arrowvert S_{\alpha}(t)z\Arrowvert^{2}&\leq\displaystyle\int_{0}^{t}\Arrowvert S_{\alpha}(s)z\Arrowvert^{2}\Arrowvert S_{\alpha}(t-s)z\Arrowvert^{2}\,\mathrm{d}s\\ &\leq N\delta\Arrowvert z\Arrowvert^{2}\end{split}$$ for some $N>0$. Therefore, for $t$ sufficiently large, it follows that $\Arrowvert S_{\alpha}(t)\Arrowvert<1$. Then there exists $t_{1}>0$ such that $$\ln\Arrowvert S_{\alpha}(t)\Arrowvert<0,\quad\forall t\geq t_{1}.$$ Thus, $$\omega_{0}=\underset{t>0}{\inf}~\dfrac{\ln\Arrowvert S_{\alpha}(t)\Arrowvert}{t}<0.$$ Now, let us show that $$\omega_{0}=\underset{t\longrightarrow+\infty}{\lim}\frac{\ln\Arrowvert S_{\alpha}(t)\Arrowvert}{t}.$$ (14) Let $t_{s}>0$ be a fixed number and $N'=\underset{t\in[0,t_{s}]}{\sup}\Arrowvert S_{\alpha}(t)\Arrowvert$.
Thus, for each $t>t_{s}$, there exists $m\in\mathbb{N}$ such that $mt_{s}\leq t\leq(m+1)t_{s}$. From (12), it follows that $$\begin{split}\Arrowvert S_{\alpha}(t)\Arrowvert&=\Arrowvert S_{\alpha}(mt_{s}+(t-mt_{s}))\Arrowvert\\ &\leq\Arrowvert S_{\alpha}(mt_{s})\Arrowvert\,\Arrowvert S_{\alpha}(t-mt_{s})\Arrowvert,\end{split}$$ which yields $$\frac{\ln\Arrowvert S_{\alpha}(t)\Arrowvert}{t}\leq\frac{\ln\Arrowvert S_{\alpha}(mt_{s})\Arrowvert}{t}+\frac{\ln\Arrowvert S_{\alpha}(t-mt_{s})\Arrowvert}{t}.$$ Using again (12), it follows that $$\frac{\ln\Arrowvert S_{\alpha}(t)\Arrowvert}{t}\leq\frac{mt_{s}}{t}\frac{\ln\Arrowvert S_{\alpha}(t_{s})\Arrowvert}{t_{s}}+\frac{\ln N'}{t}.$$ Since $\dfrac{mt_{s}}{t}\leq 1$ and $t_{s}$ is arbitrary, one obtains $$\underset{t\longrightarrow+\infty}{\lim\sup}~\frac{\ln\Arrowvert S_{\alpha}(t)\Arrowvert}{t}\leq\underset{t>0}{\inf}~\frac{\ln\Arrowvert S_{\alpha}(t)\Arrowvert}{t}\leq\underset{t\longrightarrow+\infty}{\lim\inf}~\frac{\ln\Arrowvert S_{\alpha}(t)\Arrowvert}{t}.$$ Consequently, (14) holds. Hence, we conclude that for all $\omega\in\,]0,-\omega_{0}[$ there exists $M>0$ such that $$\forall z\in L^{2}(\Omega),\quad\Arrowvert S_{\alpha}(t)z\Arrowvert\leq Me^{-\omega t}\Arrowvert z\Arrowvert,\quad\forall t\geq 0,$$ which means that system (8) is exponentially stable. The converse is obvious. ∎

When $\alpha=1$, conditions (9) and (12) are satisfied, and we retrieve from our Theorem 3.3 the results established in Curtain and Zwart (1995); Pritchard and Zabczyk (1981) about the exponential stability of system (8) on $\Omega$, which is equivalent to $$\int_{0}^{+\infty}\Arrowvert S(t)z\Arrowvert^{2}\,\mathrm{d}t<\infty,\quad\forall z\in L^{2}(\Omega).$$

Definition 3.4. Let $z_{0}\in L^{2}(\Omega)$. System (8) is said to be strongly stable if its corresponding solution $z(t)$ satisfies $$\Arrowvert z(t)\Arrowvert\longrightarrow 0\quad\text{as}\quad t\longrightarrow+\infty.$$

In our next theorem, we provide sufficient conditions that guarantee the strong stability of the fractional order differential system (8). The result generalizes the asymptotic stability result established by Matignon (1996) for finite dimensional state spaces, where the system dynamics $A$ is a constant matrix acting on $\mathbb{R}^{n}$. In contrast, here we tackle the stability of a different class of systems: we consider fractional systems where the system dynamics $A$ is a linear operator generating a strongly continuous semigroup on the infinite dimensional state space $L^{2}(\Omega)$.

Theorem 3.5. Let $(\lambda_{p})_{p\geq 1}$ and $(\phi_{p})_{p\geq 1}$ be the eigenvalues and the corresponding eigenfunctions of operator $A$ on $L^{2}(\Omega)$. If $A$ is a symmetric uniformly elliptic operator, then system (8) is strongly stable on $\Omega$.

###### Proof.
Since $A$ is a symmetric uniformly elliptic operator, system (8) admits a weak solution defined by $$z(t)=\sum_{p=1}^{+\infty}E_{\alpha}(\lambda_{p}t^{\alpha})\langle z_{0},\phi_{p}\rangle\phi_{p}\quad\forall\,z_{0}\in L^{2}(\Omega),$$ where the eigenvalues $(\lambda_{p})_{p\geq 1}$ satisfy $$0>\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{j}\geq\cdots,\qquad\lim\limits_{j\longrightarrow\infty}\lambda_{j}=-\infty,$$ and $(\phi_{p})_{p\geq 1}$ forms an orthonormal basis in $L^{2}(\Omega)$ Sakamoto and Yamamoto (2011); Courant and Hilbert (1953).
Using the fact that the function $E_{\alpha}(-t^{\alpha})$ is completely monotonic for all $\alpha\in(0,1)$ and $t>0$ (Lemma 2.6) yields $$\begin{split}\Arrowvert z(t)\Arrowvert&=\left\Arrowvert\displaystyle\sum\limits_{p=1}^{+\infty}E_{\alpha}(\lambda_{p}t^{\alpha})\langle z_{0},\phi_{p}\rangle\phi_{p}\right\Arrowvert\\ &\leq|E_{\alpha}(\lambda_{1}t^{\alpha})|\,\Arrowvert z_{0}\Arrowvert.\end{split}$$ Moreover, from Lemma 2.8, it follows that $$\Arrowvert z(t)\Arrowvert\leq\frac{M_{2}}{1-\lambda_{1}t^{\alpha}}\Arrowvert z_{0}\Arrowvert\longrightarrow 0\quad\text{ as }\quad t\longrightarrow+\infty$$ for some $M_{2}>0$. Hence, system (8) is strongly stable on $\Omega$. ∎

Let us consider, on $\Omega=]0,1[$, the following one-dimensional fractional diffusion system: $$\left\{\begin{array}{lll}{}^{C}D_{t}^{0.5}z(x,t)=\dfrac{\partial^{2}z}{\partial x^{2}}(x,t),&x\in\Omega,\quad t\in\,]0,+\infty[,\\ z(0,t)=z(1,t)=0,&\forall t>0,\\ z(x,0)=z_{0},&x\in\Omega,\end{array}\right.$$ (15) where the second order operator $A=\dfrac{\partial^{2}}{\partial x^{2}}$ has spectrum given by the eigenvalues $\lambda_{p}=-(p\pi)^{2}$, $p\geq 1$, with corresponding eigenfunctions $\phi_{p}(x)=\sqrt{\frac{2}{1+(p\pi)^{2}}}\sin(p\pi x)$, $p\geq 1$. Operator $A$ generates a $C_{0}$-semigroup $(S(t))_{t\geq 0}$ defined by $$S(t)z_{0}=\sum_{p=1}^{+\infty}e^{\lambda_{p}t}\langle z_{0},\phi_{p}\rangle\phi_{p}.$$ Moreover, the solution of system (15) is given by $$S_{0.5}(t)z_{0}=\sum_{p=1}^{+\infty}E_{0.5}(\lambda_{p}t^{0.5})\langle z_{0},\phi_{p}\rangle\phi_{p}.$$ Operator $A$ is symmetric and uniformly elliptic. Consequently, from our Theorem 3.5, we deduce that system (15) is strongly stable on $\Omega$. This is illustrated numerically in Figure 1 for $z(x,0)=\sin(\pi x)$ and $t=0.1$, $t=0.15$, $t=0.2$, $t=1.0$.

Figure 1: The state of system (15) for $z(x,0)=\sin(\pi x)$ at $t=0.1$, $t=0.15$, $t=0.2$, and $t=1.0$, illustrating the fact that (15) is strongly stable on $\Omega=]0,1[$.
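The decay shown in Figure 1 can be reproduced numerically from the spectral solution above. The sketch below evaluates the amplitude $E_{0.5}(\lambda_{1}t^{0.5})$ of the dominant mode by truncating the Mittag–Leffler series; it assumes the mpmath library is available, and it uses high-precision arithmetic because the alternating terms of the series become huge before cancelling. For $\alpha=1/2$ the closed form $E_{1/2}(-x)=e^{x^{2}}\,\mathrm{erfc}(x)$ provides an independent check.

```python
# Numerical illustration of the strong stability of system (15): the
# amplitude of mode p is E_{1/2}(lambda_p t^{1/2}) with lambda_p = -(p*pi)^2.
# Assumes mpmath is installed; the plain float64 series would suffer
# catastrophic cancellation at these argument sizes.
from mpmath import mp, mpf, gamma, pi, erfc, exp

mp.dps = 120  # decimal digits of working precision, generous on purpose

def mittag_leffler(alpha, z, nmax=1000):
    """Truncated series E_alpha(z) = sum_{n>=0} z^n / Gamma(alpha*n + 1).
    Reliable here only thanks to the high working precision."""
    z = mpf(z)
    return sum(z**n / gamma(alpha * n + 1) for n in range(nmax))

lam1 = -(pi**2)  # leading eigenvalue of d^2/dx^2 on ]0,1[ with Dirichlet BC

for t in ("0.1", "0.15", "0.2", "1.0"):
    x = (pi**2) * mpf(t)**mpf("0.5")
    series = mittag_leffler(mpf("0.5"), lam1 * mpf(t)**mpf("0.5"))
    closed = exp(x**2) * erfc(x)  # identity E_{1/2}(-x) = e^{x^2} erfc(x)
    print(f"t = {t:>4}:  E_1/2 = {float(series):.6f}"
          f"  (erfc check: {float(closed):.6f})")
# Both columns agree and decrease monotonically towards 0, consistent with
# Figure 1 and with the algebraic bound M_2/(1 - lambda_1 t^alpha) of Lemma 2.8.
```

Note the slow, algebraic decay: unlike the exponential semigroup case, the Mittag–Leffler amplitude only decays like $t^{-\alpha}$, which is exactly the behaviour quantified by estimate (7).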
### 3.2 Stabilization of Time Fractional Differential Systems

Let $\Omega$ be an open bounded subset of $\mathbb{R}^{n}$, $n=1,2,3,\ldots$. We consider the following Caputo-time fractional differential linear system: $$\left\{\begin{array}{ll}{}^{C}D_{t}^{\alpha}z(t)=Az(t)+Bu(t),&t\in\,]0,+\infty[,\quad 0<\alpha<1,\\ z(0)=z_{0},&z_{0}\in L^{2}(\Omega),\end{array}\right.$$ (16) with the same assumptions on $A$ as in Section 3.1, and where $B$ is a bounded linear operator from $U$ into $L^{2}(\Omega)$, $U$ being the space of controls, assumed to be a Hilbert space. By Definition 2.2, the unique mild solution $z(\cdot)$ of system (16) is given by $$z(t)=S_{\alpha}(t)z_{0}+\displaystyle\int_{0}^{t}(t-s)^{\alpha-1}K_{\alpha}(t-s)Bu(s)\,\mathrm{d}s,$$ (17) where $S_{\alpha}(t)$ and $K_{\alpha}(t)$ are given by (3) and (4), respectively.

Definition 3.6. System (16) is said to be exponentially (respectively, strongly) stabilizable if there exists a bounded operator $K\in\mathcal{L}(L^{2}(\Omega),U)$ such that the system $$\left\{\begin{array}{ll}{}^{C}D_{t}^{\alpha}z(t)=(A+BK)z(t),&t\in\,]0,+\infty[,\\ z(0)=z_{0},&z_{0}\in L^{2}(\Omega),\end{array}\right.$$ (18) is exponentially (respectively, strongly) stable on $\Omega$.

It is clear that exponential stabilization of system (16) implies its strong stabilization. Note that the concept is general: when $\alpha=1$, we recover the classical definitions of stability and stabilization.

Let $(S^{k}(t))_{t\geq 0}$ be the strongly continuous semigroup generated by $A+BK$, where $K\in\mathcal{L}(L^{2}(\Omega),U)$ is the feedback operator. The unique mild solution of the closed-loop system (18) can be written as $z(t)=S^{k}_{\alpha}(t)z_{0}$ with $$S^{k}_{\alpha}(t)=\displaystyle\int_{0}^{+\infty}\Psi_{\alpha}(\theta)S^{k}(t^{\alpha}\theta)\,\mathrm{d}\theta,$$ where $\Psi_{\alpha}(\theta)$ is defined by (5).

Theorem 3.7. Let $A+BK$ generate a strongly continuous semigroup $(S^{k}(t))_{t\geq 0}$ on $L^{2}(\Omega)$. If the operators $(S^{k}_{\alpha}(t))_{t\geq 0}$ satisfy conditions (9) and (12) and if $$\forall z\in L^{2}(\Omega),\quad\displaystyle\int_{0}^{+\infty}\Arrowvert S^{k}_{\alpha}(t)z\Arrowvert^{2}\,\mathrm{d}t<\infty$$ holds, then system (16) is exponentially stabilizable on $\Omega$.

###### Proof.
The proof is similar to that of Theorem 3.3. ∎

Theorem 3.8. Let $(\lambda^{k}_{p})_{p\geq 1}$ and $(\phi^{k}_{p})_{p\geq 1}$ be the eigenvalues and the corresponding eigenfunctions of operator $A+BK$ on $L^{2}(\Omega)$. If $A+BK$ is a symmetric uniformly elliptic operator, then system (16) is strongly stabilizable on $\Omega$.

###### Proof.
The proof is similar to that of Theorem 3.5. ∎

Let us consider, on $\Omega=]0,1[$, the following fractional differential system of order $\alpha=0.2$: $$\begin{cases}{}^{C}D_{t}^{0.2}z(x,t)=\displaystyle\frac{1}{100}\dfrac{\partial^{2}z}{\partial x^{2}}(x,t)+\displaystyle\frac{1}{2}z(x,t)+BKz(x,t),&(x,t)\in\Omega\times]0,+\infty[,\\ z(0,t)=z(1,t)=0,&\forall t>0,\\ z(x,0)=z_{0},&x\in\Omega,\end{cases}$$ (19) with the linear bounded operator $B=I$ and where we take $K=-B^{*}=-I$. The operator $$A+BK=\frac{1}{100}\dfrac{\partial^{2}}{\partial x^{2}}-\frac{1}{2},$$ with spectrum given by the eigenvalues $\lambda^{k}_{p}=-\frac{1}{2}-\frac{1}{100}(p\pi)^{2}$, $p\geq 1$, and corresponding eigenfunctions $\phi^{k}_{p}(x)=\sqrt{\frac{2}{1+(p\pi)^{2}}}\sin(p\pi x)$, $p\geq 1$, generates a $C_{0}$-semigroup $(S^{k}(t))_{t\geq 0}$ defined by $$S^{k}(t)z_{0}=\displaystyle\sum_{p=1}^{+\infty}e^{\lambda^{k}_{p}t}\langle z_{0},\phi^{k}_{p}\rangle\phi^{k}_{p}.$$ Furthermore, the solution of system (19) can be written as $$z(t)=S^{k}_{0.2}(t)z_{0}=\displaystyle\sum_{p=1}^{+\infty}E_{0.2}(\lambda^{k}_{p}t^{0.2})\langle z_{0},\phi^{k}_{p}\rangle\phi^{k}_{p}.$$ It is clear that $A+BK$ is a symmetric and uniformly elliptic operator. Hence, from Theorem 3.8, we deduce that system (19) is strongly stabilizable on $\Omega$, i.e., the system $$\begin{cases}{}^{C}D_{t}^{0.2}z(x,t)=\displaystyle\frac{1}{100}\dfrac{\partial^{2}z}{\partial x^{2}}(x,t)+\displaystyle\frac{1}{2}z(x,t)+Bu(t),&(x,t)\in\Omega\times]0,+\infty[,\\ z(0,t)=z(1,t)=0,&\forall t>0,\\ z(x,0)=z_{0},&x\in\Omega,\end{cases}$$ is strongly stabilizable by the feedback control $u(t)=-B^{*}z(t)$. Figure 2 shows the state $z(x,t)$ of system (19) for $z(x,0)=x(x-1)$: the uncontrolled system is unstable, while under the feedback the state evolves close to $0$ at $t=10$. Numerically, the state is stabilized by $u(t)=-B^{*}z(t)$ with an error equal to $1.75\times 10^{-4}$.

Figure 2: The state of system (19) for $z(x,0)=x(x-1)$ at $t=0$ and $t=10$, illustrating that the system is stabilized by $t=10$ on $\Omega=]0,1[$.
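A quick spectral computation, using only the eigenvalues quoted above, shows why this particular feedback works: the uncontrolled operator $A=\frac{1}{100}\partial_{xx}+\frac{1}{2}$ has eigenvalues $\frac{1}{2}-\frac{(p\pi)^{2}}{100}$, the first two of which are positive, and the feedback $BK=-I$ shifts every eigenvalue down by $1$. The following self-contained sketch (plain Python, no external dependencies) prints both spectra:

```python
# Spectral view of why the feedback u = -z stabilizes example (19).
# A = (1/100) d^2/dx^2 + 1/2 on ]0,1[ with Dirichlet conditions has
# eigenvalues 1/2 - (p*pi)^2/100, and BK = -I shifts each one by -1.
import math

for p in range(1, 6):
    lam_open = 0.5 - (p * math.pi) ** 2 / 100       # uncontrolled operator A
    lam_closed = lam_open - 1.0                     # A + BK with K = -I
    tag = "unstable" if lam_open > 0 else "stable"
    print(f"p={p}:  lambda_p = {lam_open:+.4f} ({tag})"
          f"  ->  lambda_p^k = {lam_closed:+.4f}")
# The first two open-loop modes are unstable (positive eigenvalues), while
# every closed-loop eigenvalue equals -1/2 - (p*pi)^2/100 < 0, which is the
# setting of Theorem 3.8.
```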
### 3.3 Decomposition Method

Now, we study the stabilization of system (16) using the decomposition method, which consists in decomposing the state space and the system according to the spectral properties of operator $A$. Let $\xi>0$ be fixed and assume that $A$ has at most finitely many nonnegative eigenvalues, each with a finite dimensional eigenspace. In other words, assume there exists $l\in\mathbb{N}$ such that $$\sigma(A)=\sigma_{u}(A)\cup\sigma_{s}(A),$$ (20) where $\sigma_{u}(A)=\sigma(A)\cap\{\lambda_{p},~p=1,2,\ldots,l\}$ and $\sigma_{s}(A)=\sigma(A)\cap\{\lambda_{p},~p=l+1,l+2,\ldots\}$, with $\lambda_{l}\geq 0$ and $\lambda_{l+1}\leq-\xi$. Because the sequence $(\phi_{p})_{p\geq 1}$ forms a complete orthonormal basis in $H=L^{2}(\Omega)$, the state space $H$ can be decomposed as $$H=H_{u}\oplus H_{s},$$ (21) where $H_{u}=PH=\mathrm{span}\{\phi_{1},\phi_{2},\ldots,\phi_{l}\}$ and $H_{s}=(I-P)H=\mathrm{span}\{\phi_{l+1},\phi_{l+2},\ldots\}$, with $P\in\mathcal{L}(H)$ the projection operator Kato (1966). Hence, system (16) can be decomposed into the following two sub-systems: $$\begin{cases}{}^{C}D_{t}^{\alpha}z_{u}(t)=A_{u}z_{u}(t)+PBu(t),\\ z_{0u}=Pz_{0},\end{cases}$$ (22) and $$\begin{cases}{}^{C}D_{t}^{\alpha}z_{s}(t)=A_{s}z_{s}(t)+(I-P)Bu(t),\\ z_{0s}=(I-P)z_{0},\end{cases}$$ (23) where $A_{s}$ and $A_{u}$ are the restrictions of $A$ to $H_{s}$ and $H_{u}$, respectively, satisfying $\sigma(A_{s})=\sigma_{s}(A)$ and $\sigma(A_{u})=\sigma_{u}(A)$, and $A_{u}$ is a bounded operator on $H_{u}$. Our next result asserts that the stabilization of system (16) reduces to that of system (22).

Theorem 3.9. Let the spectrum $\sigma(A)$ of $A$ satisfy the spectrum decomposition assumption (20) for some $\xi>0$, and let $A_{s}$ be a symmetric uniformly elliptic operator. If system (22) is strongly stabilizable by the control $$u(t)=D_{u}z_{u}(t)$$ (24) with $D_{u}\in\mathcal{L}(H,U)$ such that $$\Arrowvert z_{u}(t)\Arrowvert\leq C\,t^{-\mu},\quad\mu,C>0,$$ (25) then system (16) is strongly stabilizable by the same feedback control $u(t)=D_{u}z_{u}(t)$.

###### Proof.
Since system (22) is strongly stabilizable by the control (24), inequality (25) yields $$\Arrowvert z_{u}(t)\Arrowvert\longrightarrow 0\quad\text{as}\quad t\longrightarrow+\infty$$ (26) and $$\Arrowvert u(t)\Arrowvert\leq C\Arrowvert D_{u}\Arrowvert t^{-\mu}.$$ (27) Moreover, since $A_{s}$ is a symmetric uniformly elliptic operator, the unique weak solution of system (23) can be written in the space $H_{s}$ as $$z_{s}(t)=\sum_{p=l+1}^{+\infty}E_{\alpha}(\lambda_{p}t^{\alpha})\langle z_{0s},\phi_{p}\rangle\phi_{p}+\sum_{p=l+1}^{+\infty}\displaystyle\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(\lambda_{p}(t-s)^{\alpha})\langle(I-P)Bu(s),\phi_{p}\rangle\phi_{p}\,\mathrm{d}s$$ Sakamoto and Yamamoto (2011).
Using the spectrum decomposition (20), Lemma 2.6, and Lemma 2.7, one has $$E_{\alpha}(\lambda_{p}t^{\alpha})\leq E_{\alpha}(-\xi t^{\alpha})\quad\text{ for all }p\geq l+1$$ (28) and $$E_{\alpha,\alpha}(\lambda_{p}(t-s)^{\alpha})\leq E_{\alpha,\alpha}(-\xi(t-s)^{\alpha})\quad\text{ for all }p\geq l+1.$$ (29) Then, feeding system (23) with the same control $u(t)=D_{u}z_{u}(t)$ and using (27)–(29) together with the Beta integral $\int_{0}^{t}(t-s)^{a-1}s^{-\mu}\,\mathrm{d}s=t^{a-\mu}\,\frac{\Gamma(a)\Gamma(1-\mu)}{\Gamma(a-\mu+1)}$, valid for $a>0$ and $0<\mu<1$, it follows that $$\begin{split}\Arrowvert z_{s}(t)\Arrowvert&\leq E_{\alpha}(-\xi t^{\alpha})\Arrowvert z_{0s}\Arrowvert+C\Arrowvert D_{u}\Arrowvert\Arrowvert I-P\Arrowvert\Arrowvert B\Arrowvert\displaystyle\int_{0}^{t}(t-s)^{\alpha-1}s^{-\mu}E_{\alpha,\alpha}(-\xi(t-s)^{\alpha})\,\mathrm{d}s\\ &\leq E_{\alpha}(-\xi t^{\alpha})\Arrowvert z_{0s}\Arrowvert+C\Arrowvert D_{u}\Arrowvert\Arrowvert I-P\Arrowvert\Arrowvert B\Arrowvert\displaystyle\sum\limits_{n=0}^{+\infty}\displaystyle\int_{0}^{t}\dfrac{(-\xi)^{n}(t-s)^{\alpha n+\alpha-1}s^{-\mu}}{\Gamma(\alpha n+\alpha)}\,\mathrm{d}s\\ &\leq E_{\alpha}(-\xi t^{\alpha})\Arrowvert z_{0s}\Arrowvert+C\Arrowvert D_{u}\Arrowvert\Arrowvert I-P\Arrowvert\Arrowvert B\Arrowvert\displaystyle\sum\limits_{n=0}^{+\infty}\dfrac{(-\xi)^{n}\,\Gamma(1-\mu)\,t^{\alpha n+\alpha-\mu}}{\Gamma(\alpha n+\alpha-\mu+1)}\\ &\leq E_{\alpha}(-\xi t^{\alpha})\Arrowvert z_{0s}\Arrowvert+C\Gamma(1-\mu)\Arrowvert D_{u}\Arrowvert\Arrowvert I-P\Arrowvert\Arrowvert B\Arrowvert\,t^{\alpha-\mu}E_{\alpha,\alpha-\mu+1}(-\xi t^{\alpha}).\end{split}$$ Lemma 2.8 then implies that $$\Arrowvert z_{s}(t)\Arrowvert\leq\frac{M_{1}}{1+\xi t^{\alpha}}\Arrowvert z_{0s}\Arrowvert+C\Gamma(1-\mu)\Arrowvert D_{u}\Arrowvert\Arrowvert I-P\Arrowvert\Arrowvert B\Arrowvert\frac{M_{2}\,t^{\alpha-\mu}}{1+\xi t^{\alpha}}$$ for some $M_{1},M_{2}>0$. Therefore, $$\Arrowvert z_{s}(t)\Arrowvert\longrightarrow 0\quad\text{as}\quad t\longrightarrow+\infty.$$ (30) On the other hand, we have that $$\Arrowvert z(t)\Arrowvert=\Arrowvert z_{s}(t)+z_{u}(t)\Arrowvert\leq\Arrowvert z_{s}(t)\Arrowvert+\Arrowvert z_{u}(t)\Arrowvert.$$ (31) Combining (26), (30), and (31), we deduce the strong stabilization of system (16). ∎

## 4 Conclusions and Future Work

We investigated the stability problem of infinite dimensional time fractional differential linear systems with Caputo derivatives of order $\alpha\in(0,1)$, where the state space is the Hilbert space $L^{2}(\Omega)$. We proved necessary and sufficient conditions for exponential stability and obtained a characterization of asymptotic stability, which is guaranteed when the system dynamics is symmetric and uniformly elliptic. Moreover, some stabilization criteria were also proved. Finally, we investigated the strong stabilization of the system via a decomposition method, where an explicit feedback control is obtained. Illustrative examples were given, showing the effectiveness of the theoretical results. As future work, we intend to extend our results to the class of infinite dimensional time fractional differential nonlinear systems. Various other questions are still open and deserve further investigation, such as studying boundary stability and gradient stability for time fractional differential linear systems, or considering the more recent notion of $\Lambda$-fractional derivative Lazopoulos and Lazopoulos (2019), which would allow a geometrical interpretation.

Each author contributed equally to this paper, and all authors read and approved the final manuscript.
This research was funded by Moulay Ismail University (H.Z.); by Hassan II Academy of Science and Technology, project N 630/2016 (A.B.); and by The Portuguese Foundation for Science and Technology, R&D unit CIDMA, within project UIDB/04106/2020 (D.F.M.T.).

###### Acknowledgements.
This research is part of the first author's Ph.D. project, which is carried out at Moulay Ismail University, Meknes, and began during a one-month visit of Zitane to the R&D Unit CIDMA, Department of Mathematics, University of Aveiro, Portugal, in June 2019. The hospitality of the host institution is gratefully acknowledged. The authors are deeply grateful to three anonymous referees for their suggestions and invaluable comments. The authors declare no conflict of interest.

## References

* Rahimy (2010) Rahimy, M. Applications of fractional differential equations. Appl. Math. Sci. (Ruse) 2010, 4, 2453–2461.
* Kilbas et al. (2006) Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and applications of fractional differential equations; Elsevier Science B.V., Amsterdam, 2006.
* Diethelm (2010) Diethelm, K. The analysis of fractional differential equations; Vol. 2004, Lecture Notes in Mathematics, Springer-Verlag, Berlin, 2010. doi:10.1007/978-3-642-14574-2.
* Hilfer (2000) Hilfer, R. Applications of fractional calculus in physics; World Scientific Publishing Co., Inc., River Edge, NJ, 2000. doi:10.1142/9789812817747.
* Sabatier et al. (2007) Sabatier, J.; Agrawal, O.P.; Machado, J.A.T. Advances in fractional calculus; Springer, Dordrecht, 2007. doi:10.1007/978-1-4020-6042-7.
* Gabano and Poinot (2011) Gabano, J.D.; Poinot, T. Fractional modelling and identification of thermal systems. Signal Process. 2011, 91, 531–541.
* Ichise et al. (1971) Ichise, M.; Nagayanagi, Y.; Kojima, T. An analog simulation of non-integer order transfer functions for analysis of electrode processes. J. Electroanal. Chem. Interfacial Electrochem. 1971, 33, 253–265.
* Rosa and Torres (2019) Rosa, S.; Torres, D.F.M. Optimal control and sensitivity analysis of a fractional order TB model. Stat. Optim. Inf. Comput. 2019, 7, 617–625. doi:10.19139/soic.v7i3.836. arXiv:1812.04507
* Silva and Torres (2019) Silva, C.J.; Torres, D.F.M. Stability of a fractional HIV/AIDS model. Math. Comput. Simulation 2019, 164, 180–190. doi:10.1016/j.matcom.2019.03.016. arXiv:1903.02534
* Bagley and Calico (1991) Bagley, R.L.; Calico, R.A. Fractional order state equations for the control of viscoelastically damped structures. J. Guid. Control Dyn. 1991, 14, 304–311.
* Caputo (2008) Caputo, M. Linear models of dissipation whose $Q$ is almost frequency independent. II. Fract. Calc. Appl. Anal. 2008, 11, 4–14. Reprinted from Geophys. J. R. Astr. Soc. 13 (1967), no. 5, 529–539.
* Malinowska and Torres (2012) Malinowska, A.B.; Torres, D.F.M. Introduction to the fractional calculus of variations; Imperial College Press, London, 2012; pp. xvi+275. doi:10.1142/p871.
* Almeida et al. (2015) Almeida, R.; Pooseh, S.; Torres, D.F.M. Computational methods in the fractional calculus of variations; Imperial College Press, London, 2015; pp. xii+266. doi:10.1142/p991.
* Mahmoud and Karaki (2018) Mahmoud, M.S.; Karaki, B.J. Improved stability analysis and control design of reset systems. IET Control Theory Appl. 2018, 12, 2328–2336. doi:10.1049/iet-cta.2018.5410.
* Rocha et al. (2018) Rocha, D.; Silva, C.J.; Torres, D.F.M. Stability and optimal control of a delayed HIV model. Math. Methods Appl. Sci. 2018, 41, 2251–2260.
doi:10.1002/mma.4207. arXiv:1609.07654
* Sontag (2012) Sontag, E.D. Stability and feedback stabilization. In Mathematics of complexity and dynamical systems. Vols. 1–3; Springer, New York, 2012; pp. 1639–1652. doi:10.1007/978-1-4614-1806-1_105.
* Pritchard and Zabczyk (1981) Pritchard, A.J.; Zabczyk, J. Stability and stabilizability of infinite-dimensional systems. SIAM Rev. 1981, 23, 25–52. doi:10.1137/1023003.
* Curtain and Zwart (1995) Curtain, R.F.; Zwart, H. An introduction to infinite-dimensional linear systems theory; Vol. 21, Texts in Applied Mathematics, Springer-Verlag, New York, 1995. doi:10.1007/978-1-4612-4224-6.
* Triggiani (1975) Triggiani, R. On the stabilizability problem in Banach space. J. Math. Anal. Appl. 1975, 52, 383–403. doi:10.1016/0022-247X(75)90067-0.
* Balakrishnan (1981) Balakrishnan, A.V. Strong stabilizability and the steady state Riccati equation. Appl. Math. Optim. 1981, 7, 335–345. doi:10.1007/BF01442125.
* Wojtak et al. (2018) Wojtak, W.; Silva, C.J.; Torres, D.F.M. Uniform asymptotic stability of a fractional tuberculosis model. Math. Model. Nat. Phenom. 2018, 13, Art. 9, 10 pp. doi:10.1051/mmnp/2018015. arXiv:1801.07059
* Zhang et al. (2011) Zhang, F.; Li, C.; Chen, Y. Asymptotical stability of nonlinear fractional differential system with Caputo derivative. Int. J. Differ. Equ. 2011, Art. ID 635165, 12 pp. doi:10.1155/2011/635165.
* Matignon (1996) Matignon, D. Stability results for fractional differential equations with applications to control processing. In Computational Engineering in Systems Applications, Lille, France, 1996; Vol. 2, pp. 963–968.
* Qian et al. (2010) Qian, D.; Li, C.; Agarwal, R.P.; Wong, P.J.Y. Stability analysis of fractional differential system with Riemann-Liouville derivative. Math. Comput. Modelling 2010, 52, 862–874. doi:10.1016/j.mcm.2010.05.016.
* Li et al. (2010) Li, Y.; Chen, Y.; Podlubny, I. Stability of fractional-order nonlinear dynamic systems: Lyapunov direct method and generalized Mittag-Leffler stability. Comput. Math. Appl. 2010, 59, 1810–1821. doi:10.1016/j.camwa.2009.08.019.
* Matar and Abu Skhail (2018) Matar, M.M.; Abu Skhail, E.S. On stability of nonautonomous perturbed semilinear fractional differential systems of order $\alpha\in(1,2)$. J. Math. 2018, Art. ID 1723481, 10 pp. doi:10.1155/2018/1723481.
* Ge et al. (2018) Ge, F.; Chen, Y.; Kou, C. Regional analysis of time-fractional diffusion processes; Springer, Cham, 2018. doi:10.1007/978-3-319-72896-4.
* Zitane et al. (2020) Zitane, H.; Larhrissi, R.; Boutoulout, A. On the fractional output stabilization for a class of infinite dimensional linear systems. In Recent Advances in Modeling, Analysis and Systems Control: Theoretical Aspects and Applications; Springer, Cham, 2020; pp. 241–259.
* Zitane et al. (in press) Zitane, H.; Larhrissi, R.; Boutoulout, A. Fractional output stabilization for a class of bilinear distributed systems. Rend. Circ. Mat. Palermo (2), in press. doi:10.1007/s12215-019-00429-w.
* Zitane et al. (2019) Zitane, H.; Larhrissi, R.; Boutoulout, A. Riemann Liouville fractional spatial derivative stabilization of bilinear distributed systems. J. Appl. Nonlinear Dyn. 2019, 8, 447–461. doi:10.5890/JAND.2019.09.008.
* Zhou and Jiao (2010) Zhou, Y.; Jiao, F. Existence of mild solutions for fractional neutral evolution equations. Comput. Math. Appl. 2010, 59, 1063–1077. doi:10.1016/j.camwa.2009.06.026.
* Mainardi et al. (2007) Mainardi, F.; Paradisi, P.; Gorenflo, R.
Probability distributions generated by fractional diffusion equations. arXiv preprint arXiv:0704.0320, 2007.
* Erdélyi et al. (1981) Erdélyi, A.; Magnus, W.; Oberhettinger, F.; Tricomi, F.G. Higher transcendental functions. Vol. III; Robert E. Krieger Publishing Co., Inc., Melbourne, Fla., 1981.
* Joshi et al. (2020) Joshi, S.; Mittal, E.; Pandey, R.M. On Euler type integrals involving extended Mittag-Leffler functions. Bol. Soc. Parana. Mat. (3) 2020, 38, 125–134.
* Mainardi (2014) Mainardi, F. On some properties of the Mittag-Leffler function $E_{\alpha}(-t^{\alpha})$, completely monotone for $t>0$ with $0<\alpha<1$. Discrete Contin. Dyn. Syst. Ser. B 2014, 19, 2267–2278. doi:10.3934/dcdsb.2014.19.2267.
* Schneider (1996) Schneider, W.R. Completely monotone generalized Mittag-Leffler functions. Exposition. Math. 1996, 14, 3–16.
* Podlubny (1999) Podlubny, I. Fractional differential equations; Academic Press, Inc., San Diego, CA, 1999.
* Sakamoto and Yamamoto (2011) Sakamoto, K.; Yamamoto, M. Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems. J. Math. Anal. Appl. 2011, 382, 426–447. doi:10.1016/j.jmaa.2011.04.058.
* Courant and Hilbert (1953) Courant, R.; Hilbert, D. Methods of mathematical physics. Vol. I; Interscience Publishers, Inc., New York, N.Y., 1953; pp. xv+561.
* Kato (1966) Kato, T. Perturbation theory for linear operators; Springer-Verlag New York, Inc., New York, 1966.
* Lazopoulos and Lazopoulos (2019) Lazopoulos, K.A.; Lazopoulos, A.K. On the Mathematical Formulation of Fractional Derivatives. Progr. Fract. Differ. Appl. 2019, 5, 261–267. doi:10.18576/pfda/050402.
# Objective-free excitation of quantum emitters with a laser-written micro parabolic mirror

Sergii Morozov (present address: Centre for Nano Optics, University of Southern Denmark, Campusvej 55, Odense M, DK-5230, Denmark) and Stefano Vezzoli, The Blackett Laboratory, Department of Physics, Imperial College London, London SW7 2BW, United Kingdom

Ali Hossain Khan and Iwan Moreels, Department of Chemistry, Ghent University, Krijgslaan 281-S3, Gent 9000, Belgium

Riccardo Sapienza<EMAIL_ADDRESS>The Blackett Laboratory, Department of Physics, Imperial College London, London SW7 2BW, United Kingdom

###### Abstract

The efficient excitation of quantum sources such as quantum dots or single molecules requires high NA optics, which is often a challenge in cryogenics or in ultrafast optics. Here we propose a 3.2 $\mu$m wide parabolic mirror, with a 0.8 $\mu$m focal length, fabricated by direct laser writing on CdSe/CdS colloidal quantum dots, capable of focusing the excitation light to a sub-wavelength spot and of extracting the generated emission by collimating it into a narrow beam. This mirror is fabricated via in-situ volumetric optical lithography, which can be aligned to individual emitters, and it can be easily adapted to other geometries beyond the paraboloid. This compact solid-state transducer from the far field to the emitter has important applications in objective-free quantum technologies.

Preprint: APP20-LE-00202

## I Introduction

Highly confined optical fields are required for the excitation of individual emitters and for the efficient generation of single photons. Confining light to sub-wavelength volumes, to decrease the background signal or to increase the fluence, is a common strategy in many optical domains, such as confocal microscopy, super-resolution microscopy, optical lithography, optical tweezers, ion and atom trapping, high density data storage, and material processing. Lens-based objectives are traditionally used to focus light to a diffraction limited spot, whose size depends on the numerical aperture (NA) Abbe (1883). High NA objectives are bulky and have to be operated at short working distances, which complicates their practical applications, especially at cryogenic temperatures, while their chromatic aberrations and temporal dispersion hamper their use with ultra-short pulses. Objectives can be replaced by reflective architectures, which provide inherent achromaticity and nonparaxial focusing due to the high NA. The simplest reflective objective is a parabolic mirror, which concentrates an incident beam around the geometrical focal point. The resulting focal spot has a size which is on a par with, or better than, that of high quality lens objectives Lieb and Meixner (2001); Stadler et al. (2008). Parabolic mirrors have found multiple applications in confocal microscopy Drechsler et al. (2001), cryostat based single molecule spectroscopy Durand et al. (1999), scanning optical near-field microscopy Sackrow et al. (2008), Raman microscopy Zhang et al. (2009), solar cells Kosten et al. (2013), light-emitting diodes Tanriseven, Maaskant, and Corbett (2008), and nonlinear optics Penjweini et al. (2019). Recently, deep parabolic mirrors covering a $4\pi$ solid angle have gained attention in quantum optics because of their efficient focusing, their ability to trap individual quantum emitters, and their extraction of a collimated beam of single photons Sondermann and Leuchs (2015); Salakhutdinov et al. (2016).
However, future practical applications require the miniaturisation and integration of parabolic mirrors. When dealing with integrated photonics, different strategies to focus light onto quantum emitters and collect their radiation have been explored. Nanofocusing has been achieved with structured metamaterials, for example using a metalens based on plasmonic Fresnel plates Ma and Liu (2010), hypergratings Thongrattanasiri and Podolskiy (2009), and plasmonic metamirrors Ding et al. (2019). Plasmonic antennas have been shown to confine optical fields to zeptoliter volumes, delivering nanoscale resolution for plasmonic direct writing lithography Wang et al. (2016), albeit often hampered by ohmic losses, difficulty in precise nanometric alignment with an emitter, and complicated and expensive fabrication techniques. Deterministic integration of micro optical components and individual quantum emitters can be achieved either by pre- or post-fabrication alignment, combined with lithographic fabrication. By lithographic techniques, various compact optical systems have been fabricated to control and manipulate light at the nanoscale, for example waveguides Shi et al. (2016); Colautti et al. (2020), polarisation rotators Schumann et al. (2014), microdisc resonators Schell et al. (2013), objectives Fischbach et al. (2017); Gissibl et al. (2016), dielectric pillar antennas Au et al. (2019), and pillar microcavities Dousse et al. (2008). High-index solid immersion lenses have been used to improve the coupling to quantum light sources, decreasing the laser excitation spot by a factor $1/n$ and magnifying the photoluminescence image of an emitter by a factor $n$ Sapienza et al. (2015); Sartison et al. (2017); Schmidt et al. (2019). Microscale parabolic antennas have been shown to be an ideal design for directing light from quantum emitters Schell et al. (2014); Morozov et al. (2018); however, the focusing abilities of such compact structures have not been explored.

In this letter, we report a compact parabolic mirror for the sub-wavelength excitation of quantum emitters placed in its focal spot. The mirror also directs the generated photons into a low-divergence beam along the parabola symmetry axis. The parabolic mirror is fabricated by in-situ optical volumetric lithography, which produces paraboloid structures in a single laser exposure step and results in high optical quality surfaces. We experimentally demonstrate that this mirror can focus light to a spot which is comparable to that of a high NA oil immersion objective. With a focal length of 0.8 $\mu$m and a predicted focal spot of 120 nm ($\sim\lambda_{ex}/4$), the micro-mirror acts as an ideal optical transducer from the emitter to free space.

## II Results and Discussion

### II.1 Focusing with a micro parabolic mirror

A parabolic mirror illuminated with a collimated laser beam concentrates the excitation energy in its focal point. If the NA is large enough, the mirror has a sub-wavelength focal spot formed with minimal aberrations Lieb and Meixner (2001), and a broadband response in the visible and near infrared range of the electromagnetic spectrum. While refractive optics is intrinsically limited by the frequency-dependent dielectric constant of its constituents, this is not the case for the parabolic mirror, which is intrinsically achromatic (i.e., different wavelengths always focus in the same plane). Instead, the refractive index experienced at each wavelength will affect the focal spot size (see SI Fig.S2).
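Both the focusing and the achromaticity rest on the defining geometric property of the parabola: every ray parallel to the axis reflects through the focus, independently of wavelength. A few lines of ray tracing verify this textbook property (a generic geometric-optics sketch; the focal length merely matches the device considered next, and numpy availability is assumed):

```python
# Geometric-optics check of the focusing sketched in Fig. 1(a): each ray
# parallel to the axis of the parabola y = x^2 / (4 f) reflects through (0, f).
import numpy as np

f = 0.8  # focal length in micrometres, as for the mirror studied below

for x0 in [0.2, 0.6, 1.0, 1.4]:           # ray offsets within the aperture r < 2f
    y0 = x0**2 / (4 * f)                   # point where the vertical ray hits the dish
    n = np.array([-x0 / (2 * f), 1.0])     # surface normal from dy/dx = x/(2f)
    n = n / np.linalg.norm(n)
    d = np.array([0.0, -1.0])              # incoming ray direction (downwards)
    d_ref = d - 2 * np.dot(d, n) * n       # law of reflection
    t = -x0 / d_ref[0]                     # parameter where the ray crosses x = 0
    y_cross = y0 + t * d_ref[1]
    print(f"offset {x0:.1f} um -> crosses axis at y = {y_cross:.6f} um (focus at {f})")
```

Every printed crossing point equals the focal height exactly; diffraction, of course, turns this geometric point into the finite spot analysed next.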
We consider a micro parabolic mirror with focal length $f=0.8~\mu$m and aperture diameter $d=3.2~\mu$m covering a $2\pi$ solid angle. In such a geometry, the radius $R$ of the dish is twice its height $H$, that is $R=2H=2f$. The dish is filled with a polymer ($n_{p}=1.49$) and illuminated from a glass substrate ($n_{g}=1.52$) with a linearly polarised plane wave at $\lambda_{ex}=442$ nm, as sketched in Fig.1(a). Due to the small size of the mirror, diffraction effects are expected, which can be well captured in finite-difference time-domain (FDTD) numerical simulations. The map of total electric field intensity $|\bm{E}|^{2}$ through the focal spot is plotted in Fig.1(b-d). The focal plane in Fig.1(b) demonstrates that the maximum confinement is achieved in the $y$ direction, which is orthogonal to the excitation polarisation ($x$ here). The intensity cross-sections through the focal spot in the $x$, $y$, and $z$ directions are plotted in Fig.1(e), where the maximum of the normalised total electric field intensity along $z$ lies close to the geometrical focal point, shifted by just 30 nm into the glass substrate. We ascribe this to the small refractive index mismatch between the parabolic mirror filling $n_{p}$ and the glass substrate $n_{g}$ (see SI Fig.S1). The obtained intensity distributions around the focal point have a full width at half maximum (FWHM) in the lateral ($x$ and $y$) and axial ($z$) directions of $\Delta x=222$ nm, $\Delta y=120$ nm, and $\Delta z=247$ nm. The focal spot size scales with the refractive index of the parabolic mirror filling, and thus a much tighter focusing can be reached in comparison with an air-filled parabolic mirror (see SI Fig.S2). Hence, a sub-wavelength focus of $\sim\lambda_{ex}/4$ can be achieved for a focal length of only 0.8 $\mu$m, unaffected by size effects and diffraction. Mirrors with shorter focal lengths drastically decrease the intensity in the focal point (see SI Fig.S3) and lose the ability to collimate the generated emission Morozov et al. (2018). Another important characteristic of high-NA parabolic mirrors is their ability to convert the incident light polarisation from transverse ($x$ here) to longitudinal ($z$ here), allowing for the generation of optical fields with a strong electric field component along the optical axis Drechsler et al. (2001); Debus et al. (2003). The numerical simulations confirm the polarisation conversion, indicating that the $|\bm{E_{x}}|^{2}$ component is the dominant contribution to the intensity in the focal region, while the $|\bm{E_{y}}|^{2}$ and $|\bm{E_{z}}|^{2}$ components have 24 and 4 times lower magnitude, respectively (see SI Fig.S4). Hence, the micro parabolic mirror is also able to excite quantum emitters with electric dipoles oriented perpendicularly to the sample plane, with about $20$% efficiency. Figure 1: Micro parabolic mirror for focusing light at the nanoscale. (a) In geometrical optics, a parabolic mirror concentrates the collimated illumination in its focal point $F$. (b-d) FDTD simulations of the intensity distribution in a micro parabolic mirror with a focal length of 0.8 $\mu$m illuminated with a collimated $x$-polarised plane wave at $\lambda=442$ nm. The distributions of the total electric field intensity are shown in the $x-y$ focal plane in (b), as well as in the orthogonal $x-z$ and $y-z$ planes through the focal point in (c) and (d), respectively. The outline of the parabolic mirror is shown by the orange line.
(e) Sections of the intensity profiles from panels (b-d) demonstrate the sub-wavelength focal spot of the micro parabolic mirror. Figure 2: Fabrication of micro parabolic mirrors by in-situ volumetric lithography. (a) The laser beam was focused below the sample plane to polymerise a parabolic micro structure over a quantum dot layer (not to scale). (b) The unexposed photoresist was washed away, resulting in a polymeric parabolic dish on the glass substrate. (c) The sample was coated with an 80 nm gold layer to form a micro parabolic mirror with quantum dots in its focal plane. (d) The deviation of a micro parabolic mirror surface from a perfect paraboloid shape was obtained by AFM scanning. The black dashed circle represents the mirror aperture. (e) A confocal scan of the parabolic mirror focal plane reveals a quasi-homogeneous photoluminescence intensity distribution, while the photoluminescence of quantum dots outside the aperture (dashed white circle) is completely quenched by the deposited gold layer.

### II.2 Fabrication

We fabricated micro parabolic mirrors over a fluorescent layer composed of colloidal giant-shell CdSe/CdS quantum dots emitting at $\lambda_{em}=650$ nm Christodoulou et al. (2014). The giant-shell configuration of the quantum dots is highly photostable, which helped us to eliminate undesirable artifacts such as photobleaching. The quantum dots are used here to map the intensity distribution in the focal plane of the micro parabolic mirror, as their fluorescence is proportional to the excitation light intensity, provided that it is well below the saturation intensity. A layer of quantum dots was deposited by spin-coating over a glass coverslip prior to the fabrication of the mirror dish. In the fabrication, we followed the one-photon direct laser writing approach described in Morozov et al. (2018). Such one-photon photopolymerization has a power threshold Delrot et al. (2018), so a clear polymerization boundary is formed. Our lithography technique is based on the polymerization of structures with the volumetric intensity profile of a focused Gaussian beam (the blue hourglass-shaped profile in Fig.2). First, we localised the quantum dots by means of scanning confocal microscopy, retrieving the position of the sample plane. Next, we controllably defocused the confocal system using a piezo stage in order to expose the photoresist to a part of the hourglass-shaped intensity iso-surface (Fig.2(a)). This outer part of the intensity iso-surface follows a paraboloid shape, allowing for the polymerisation of parabolic structures in a single laser exposure step. After washing off the unexposed photoresist, a parabolic polymer structure is revealed over the quantum dot layer (Fig.2(b)). In the final step, we sputtered an 80 nm gold layer over the sample to form a metal parabolic mirror (Fig.2(c)). The fabrication technique results in a smooth mirror surface of high optical quality, as demonstrated in Fig.2(d): the deviation of the micro parabolic mirror shape from a perfect paraboloid surface was within $\pm 70$ nm, as extracted from a 3D fit of an atomic force microscope (AFM) scan. A confocal scan in Fig.2(e) shows the focal plane of a fabricated parabolic mirror, where the fluorescence signal originates from the quantum dot layer.
The fluorescence signal within the parabolic mirror aperture is distributed quasi-homogeneously without a dominant intensity spot ($I_{avg}=392$ cnts/px, $I_{std}=96$ cnts/px), while the quantum dot layer fluorescence outside the mirror aperture is completely quenched by the deposited gold layer.

### II.3 Excitation of quantum emitters

The fabricated micro parabolic mirror with a quantum dot fluorescence layer in its focal plane was illuminated with a collimated laser beam at $\lambda_{ex}=442$ nm to demonstrate its focusing properties. The resulting fluorescence intensity profile of the parabolic mirror focal plane (Fig.3(a)) was imaged on a CCD camera using an objective (Nikon Plan Apochromat 100x, NA = 1.45). The obtained intensity distribution in Fig.3(a) is very different from the confocal intensity map presented in Fig.2(e), and a dominant intensity peak is clearly visible at the position of the micro parabolic mirror focal spot. The residual background in Fig.3(b) comes from the fluorescence signal of the quantum dots in the focal plane of the micro parabolic mirror excited by the collimated laser beam. In addition, the intensity profile is modulated by interference rings around the central bright focal spot, which we attribute to the reflection of photoluminescence between the parabolic surface and the focal plane (see SI Fig.S7). In order to characterise the fluorescence intensity distribution in the focal plane of the parabolic mirror (Fig.3(a)), we extract an intensity cross section through the focal spot, as shown by the green line in Fig.3(b). The experimental intensity cross section is characterised by a FWHM of $\Delta_{exp}=554\pm 38$ nm, which is comparable to the simulated focal spot size $\Delta_{sim}=347$ nm. This simulated focal spot, shown by the red line in Fig.3(b), was obtained by a convolution of the parabolic mirror excitation profile presented in Fig.1 with the point spread function $\Delta_{PSF}=224$ nm of the imaging system with NA=1.45 at $\lambda_{em}=650$ nm, estimated from the Abbe diffraction limit (see SI Fig.S5). The difference in FWHM between the experimental and simulated values of the focal spot size is due to the presence of the quantum dot layer, which can scatter both the exciting beam and the emitted light, thus slightly widening the collected spot. To compare the performance of the parabolic mirror, we also directly excited the quantum dot layer at the position of the micro parabolic mirror focal spot with the high NA objective. In Fig.3(b) we present a cross section (blue line) of the photoluminescence intensity excited in this way (see also SI Fig.S6). This profile has the same FWHM of $\Delta_{exp}=555\pm 38$ nm as in the case of focusing with the micro parabolic mirror. While one could conclude that the micro parabolic mirror focusing power is comparable to that of the 1.45 NA objective, we point out that the measured spot size results from (i) excitation through the mirror, with an expected average focal spot size of $\Delta_{par}$=171 nm (Fig.1), and (ii) imaging through an objective with finite resolution. Although we expect a slightly smaller focal spot size for a 1.45 NA objective ($\Delta_{obj}$=152 nm), its measured focal spot would be very similar to that of the micro parabolic mirror (see SI). Therefore, we conclude that the results shown in Fig.3(b) are consistent with the expected focusing power of the micro parabolic mirror, and $\Delta_{exp}=555$ nm is only an upper bound for the size of the focal spot.
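The quoted resolution figures can be cross-checked with simple estimates. The sketch below evaluates the Abbe limit $\lambda/(2\,\mathrm{NA})$ for the imaging PSF and the objective spot, plus a Gaussian-approximation convolution of the mirror focal spot with the PSF; the quadrature sum is only an approximation (the text above convolves the full simulated profile, obtaining 347 nm), so the last number is indicative:

```python
# Back-of-the-envelope reproduction of the resolution numbers quoted above.
# The FWHM-in-quadrature rule is exact only for Gaussian profiles, so it
# underestimates the 347 nm obtained from the full simulated convolution.
import math

NA = 1.45
lam_em, lam_ex = 650.0, 442.0            # emission / excitation wavelengths, nm

psf_em = lam_em / (2 * NA)               # imaging PSF at 650 nm   -> ~224 nm
spot_obj = lam_ex / (2 * NA)             # objective focal spot at 442 nm -> ~152 nm
spot_mirror = (222 + 120) / 2            # mean simulated mirror FWHM -> 171 nm

imaged_mirror = math.sqrt(spot_mirror**2 + psf_em**2)

print(f"imaging PSF       : {psf_em:.0f} nm (paper: 224 nm)")
print(f"objective spot    : {spot_obj:.0f} nm (paper: 152 nm)")
print(f"imaged mirror spot: {imaged_mirror:.0f} nm (full convolution: 347 nm)")
```

The first two lines reproduce the values used in the text exactly, which makes clear that the measured 555 nm is dominated by the finite imaging resolution rather than by the mirror itself.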
Figure 3: Excitation of quantum emitters with the micro parabolic mirror. (a) A collimated laser beam at $\lambda_{ex}=442$ nm causes fluorescence of all quantum dots in the focal plane, while the parabolic mirror reflects the laser beam, exciting quantum dots only in its focal spot and resulting in a bright intensity spot in the centre of the mirror aperture (dashed white circle). (b) The green intensity cross section of the micro parabolic mirror focal spot was acquired along the dotted white line in panel (a). The blue cross section was obtained by focusing with the high NA objective at the position of the parabolic mirror focal spot. The red line shows the simulated focal spot intensity cross section.

### II.4 Directing quantum emission

The photons emitted by the quantum dots in the focal spot are collimated by the micro parabolic mirror. We demonstrate this by imaging the back focal plane of the microscope objective to obtain the Fourier-space momentum distribution of the quantum dot emission. Such a Fourier space image of the micro parabolic mirror focal plane intensity profile from Fig.3(a) is presented in Fig.4(a). The emission of the quantum dots excited with the micro parabolic mirror is directed into a narrow beam along the optical axis, which is completely contained within an NA=0.5 solid angle. The fluorescence beam collimation is characterised in Fig.4(b), which presents the intensity cross sections of the micro parabolic mirror radiation pattern. This result summarizes the full operating principle of the parabolic mirror: first it efficiently couples a directional plane wave onto the emitter, similarly to what is achieved with a conventional objective Morozov et al. (2018), and then it directs the generated photoluminescence back to the far field. The simulation of an isolated single horizontal dipole in the focal spot of the micro parabolic mirror confirms the unidirectional radiation pattern and is plotted as the red curve in Fig.4(b), with a half-power beam width of $\theta_{1/2}^{sim}=8^{\circ}$ at $\lambda_{em}=650$ nm. The green curve in Fig.4(b) is the radiation pattern cross section obtained from Fig.4(a): the micro parabolic mirror excitation with a collimated laser beam in the widefield configuration yields a collimated fluorescence beam with $\theta_{1/2}^{exp}=25^{\circ}$ (after the background subtraction, see SI Fig.S8). The difference in the half-power beam width originates from the layer of quantum dots distributed quasi-homogeneously in the sample plane. The emission of the quantum dots outside the focal spot of the micro parabolic mirror is directional as well; however, it is not aligned with the mirror optical axis. The parabolic mirror reflects their emission at different angles, and thus broadens the radiation pattern in Fig.4. The background would be absent in the case of a single quantum dot in the focal point of the micro parabolic mirror, as we have shown in ref. Morozov et al. (2018). In the current experimental conditions, the background contribution can be reduced by confocal excitation with the high NA objective at the position of the focal spot of the micro parabolic antenna, resulting in collimated emission with $\theta_{1/2}^{exp}=19^{\circ}$ (blue curve in Fig.4(b)). Figure 4: The fluorescence of quantum dots in the focal spot of the micro parabolic mirror is collimated. (a) A Fourier space image of quantum dots in the focal spot of the micro parabolic mirror presented in Fig.3(a). The main lobe of the radiation pattern is completely within NA=0.5.
(b) Radiation patterns in polar coordinates demonstrate the low beam divergence. The radiation pattern obtained using the micro parabolic antenna excitation is shown by the green curve, acquired along the dotted white line in panel (a). The blue line represents the radiation pattern obtained using confocal excitation of the quantum dots in the focal spot of the same micro parabolic mirror. The red line is the simulation of an isolated single $x$-oriented dipole in the focal spot of the micro parabolic mirror.

## III Conclusion

In conclusion, we scaled the concept of a reflective objective based on the parabolic mirror down to the microscale. We fabricated such a compact parabolic mirror with a focal length of only $0.8$ $\mu$m, capable of focusing the excitation light to a sub-wavelength spot and of extracting the fluorescence from nanoscale emitters. The fabrication process is fast, single-shot, and leads to low surface roughness. The volumetric lithography provides fast fabrication of microstructures of high optical quality in sub-second single laser shots over large areas, and can also be aligned with individual emitters. This is an ideal approach to objective-free microscopy, especially for cryogenic and vacuum conditions, and could become a powerful tool in nanoscale quantum optics. The presented design of the micro parabolic mirror covers a 2$\pi$ solid angle around an emitter in the focal point, which limits the focal spot size as well as the photon extraction efficiency. The mirror directs photons out of the sample plane, while more sophisticated designs could be exploited for applications where photons need to be guided in-plane. The in-situ volumetric lithography allows for the fabrication of structures beyond the parabolic shape at the microscale. More complex spatial intensity distributions could be achieved by using wavefront-shaping techniques, such as spatial light modulators (SLMs) or digital micromirror devices (DMDs) Jenness et al. (2010); Tian and Wang (2020).

## Supporting Information

See the Supporting Information for the effect of refractive index mismatch between the parabolic mirror filling and the glass substrate; scaling of the focal spot size with the refractive index of the parabolic dish filling; evolution of the focal spot dimensions with further decreasing parabolic mirror focal length; polarization of the parabolic mirror focal spot; simulation of the experimental intensity distribution in the focal point; interference rings in the focal plane of the parabolic mirror; and measurements and simulations of the radiation pattern.

###### Acknowledgements.
S.M. and R.S. acknowledge funding by EPSRC (EP/P033369 and EP/M013812/1). A.H.K. and I.M. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (714876 PHOCONA).

## Data availability

The data that support the findings of this study are openly available in Figshare at http://doi.org/10.6084/m9.figshare.12357356.

## References

* Abbe (1883) E. Abbe, “The relation of aperture and power in the microscope,” Journal of the Royal Microscopical Society 3, 790–812 (1883). * Lieb and Meixner (2001) M. A. Lieb and A. J. Meixner, “A high numerical aperture parabolic mirror as imaging device for confocal microscopy,” Optics Express 8, 458 (2001). * Stadler et al. (2008) J. Stadler, C. Stanciu, C. Stupperich, and A. J. Meixner, “Tighter focusing with a parabolic mirror,” Optics Letters 33, 681 (2008). * Drechsler et al. (2001) A. Drechsler, M. Lieb, C. Debus, A.
Meixner, and G. Tarrach, “Confocal microscopy with a high numerical aperture parabolic mirror,” Optics Express 9, 637 (2001). * Durand _et al._ (1999) Y. Durand, J. C. Woehl, B. Viellerobe, W. Göhde, and M. Orrit, “New design of a cryostat-mounted scanning near-field optical microscope for single molecule spectroscopy,” Review of Scientific Instruments 70, 1318–1325 (1999). * Sackrow _et al._ (2008) M. Sackrow, C. Stanciu, M. A. Lieb, and A. J. Meixner, “Imaging nanometre-sized hot spots on smooth au films with high-resolution tip-enhanced luminescence and raman near-field optical microscopy,” ChemPhysChem 9, 316–320 (2008). * Zhang _et al._ (2009) D. Zhang, X. Wang, K. Braun, H.-J. Egelhaaf, M. Fleischer, L. Hennemann, H. Hintz, C. Stanciu, C. J. Brabec, D. P. Kern, and A. J. Meixner, “Parabolic mirror-assisted tip-enhanced spectroscopic imaging for non-transparent materials,” Journal of Raman Spectroscopy 40, 1371–1376 (2009). * Kosten _et al._ (2013) E. D. Kosten, J. H. Atwater, J. Parsons, A. Polman, and H. A. Atwater, “Highly efficient GaAs solar cells by limiting light emission angle,” Light: Science & Applications 2, e45–e45 (2013). * Tanriseven, Maaskant, and Corbett (2008) S. Tanriseven, P. Maaskant, and B. Corbett, “Broadband quantum dot micro-light-emitting diodes with parabolic sidewalls,” Applied Physics Letters 92, 123501 (2008). * Penjweini _et al._ (2019) R. Penjweini, M. Weber, M. Sondermann, R. W. Boyd, and G. Leuchs, “Nonlinear optics with full three-dimensional illumination,” Optica 6, 878 (2019). * Sondermann and Leuchs (2015) M. Sondermann and G. Leuchs, “Photon-atom coupling with parabolic mirrors,” in _Engineering the Atom-Photon Interaction_ (Springer International Publishing, 2015) pp. 75–98. * Salakhutdinov _et al._ (2016) V. Salakhutdinov, M. Sondermann, L. Carbone, E. Giacobino, A. Bramati, and G. Leuchs, “Optical trapping of nanoparticles by full solid-angle focusing,” Optica 3, 1181 (2016). * Ma and Liu (2010) C. Ma and Z. Liu, “A super resolution metalens with phase compensation mechanism,” Applied Physics Letters 96, 183103 (2010). * Thongrattanasiri and Podolskiy (2009) S. Thongrattanasiri and V. A. Podolskiy, “Hypergratings: nanophotonics in planar anisotropic metamaterials,” Optics Letters 34, 890 (2009). * Ding _et al._ (2019) F. Ding, Y. Chen, Y. Yang, and S. I. Bozhevolnyi, “Multifunctional metamirrors for broadband focused vector-beam generation,” Advanced Optical Materials 7, 1900724 (2019). * Wang _et al._ (2016) C. Wang, W. Zhang, Z. Zhao, Y. Wang, P. Gao, Y. Luo, and X. Luo, “Plasmonic structures, materials and lenses for optical lithography beyond the diffraction limit: A review,” Micromachines 7, 118 (2016). * Shi _et al._ (2016) Q. Shi, B. Sontheimer, N. Nikolay, A. W. Schell, J. Fischer, A. Naber, O. Benson, and M. Wegener, “Wiring up pre-characterized single-photon emitters by laser lithography,” Scientific Reports 6 (2016). * Colautti _et al._ (2020) M. Colautti, P. Lombardi, M. Trapuzzano, F. S. Piccioli, S. Pazzagli, B. Tiribilli, S. Nocentini, F. S. Cataliotti, D. S. Wiersma, and C. Toninelli, “A 3d polymeric platform for photonic quantum technologies,” Advanced Quantum Technologies , 2000004 (2020). * Schumann _et al._ (2014) M. Schumann, T. Bückmann, N. Gruhler, M. Wegener, and W. Pernice, “Hybrid 2d–3d optical devices for integrated optics by direct laser writing,” Light: Science & Applications 3, e175–e175 (2014). * Schell _et al._ (2013) A. W. Schell, J. Kaschke, J. Fischer, R. Henze, J. Wolters, M. Wegener, and O. 
Benson, “Three-dimensional quantum photonic elements based on single nitrogen vacancy-centres in laser-written microstructures,” Scientific Reports 3 (2013). * Fischbach _et al._ (2017) S. Fischbach, A. Schlehahn, A. Thoma, N. Srocka, T. Gissibl, S. Ristok, S. Thiele, A. Kaganskiy, A. Strittmatter, T. Heindel, S. Rodt, A. Herkommer, H. Giessen, and S. Reitzenstein, “Single quantum dot with microlens and 3d-printed micro-objective as integrated bright single-photon source,” ACS Photonics 4, 1327–1332 (2017). * Gissibl _et al._ (2016) T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nature Photonics 10, 554–560 (2016). * Au _et al._ (2019) T. H. Au, S. Buil, X. Quélin, J.-P. Hermier, and N. D. Lai, “High directional radiation of single photon emission in a dielectric antenna,” ACS Photonics 6, 3024–3031 (2019). * Dousse _et al._ (2008) A. Dousse, L. Lanco, J. Suffczyński, E. Semenova, A. Miard, A. Lemaître, I. Sagnes, C. Roblin, J. Bloch, and P. Senellart, “Controlled light-matter coupling for a single quantum dot embedded in a pillar microcavity using far-field optical lithography,” Physical Review Letters 101 (2008). * Sapienza _et al._ (2015) L. Sapienza, M. Davanço, A. Badolato, and K. Srinivasan, “Nanoscale optical positioning of single quantum dots for bright and pure single-photon emission,” Nature Communications 6, 7833 (2015). * Sartison _et al._ (2017) M. Sartison, S. L. Portalupi, T. Gissibl, M. Jetter, H. Giessen, and P. Michler, “Combining in-situ lithography with 3d printed solid immersion lenses for single quantum dot spectroscopy,” Scientific Reports 7 (2017). * Schmidt _et al._ (2019) M. Schmidt, M. V. Helversen, S. Fischbach, A. Kaganskiy, R. Schmidt, A. Schliwa, T. Heindel, S. Rodt, and S. Reitzenstein, “Deterministically fabricated spectrally-tunable quantum dot based single-photon source,” Optical Materials Express 10, 76 (2019). * Schell _et al._ (2014) A. W. Schell, T. Neumer, Q. Shi, J. Kaschke, J. Fischer, M. Wegener, and O. Benson, “Laser-written parabolic micro-antennas for efficient photon collection,” Applied Physics Letters 105, 231117 (2014). * Morozov _et al._ (2018) S. Morozov, M. Gaio, S. A. Maier, and R. Sapienza, “Metal–dielectric parabolic antenna for directing single photons,” Nano Letters 18, 3060–3065 (2018). * Debus _et al._ (2003) C. Debus, M. A. Lieb, A. Drechsler, and A. J. Meixner, “Probing highly confined optical fields in the focal region of a high NA parabolic mirror with subwavelength spatial resolution,” Journal of Microscopy 210, 203–208 (2003). * Christodoulou _et al._ (2014) S. Christodoulou, G. Vaccaro, V. Pinchetti, F. D. Donato, J. Q. Grim, A. Casu, A. Genovese, G. Vicidomini, A. Diaspro, S. Brovelli, L. Manna, and I. Moreels, “Synthesis of highly luminescent wurtzite CdSe/CdS giant-shell nanocrystals using a fast continuous injection route,” Journal of Materials Chemistry C 2, 3439 (2014). * Delrot _et al._ (2018) P. Delrot, D. Loterie, D. Psaltis, and C. Moser, “Single-photon three-dimensional microfabrication through a multimode optical fiber,” Optics Express 26, 1766 (2018). * Jenness _et al._ (2010) N. J. Jenness, R. T. Hill, A. Hucknall, A. Chilkoti, and R. L. Clark, “A versatile diffractive maskless lithography for single-shot and serial microfabrication,” Optics Express 18, 11754 (2010). * Tian and Wang (2020) Y. Tian and L. Wang, “Complex three-dimensional microparticles from microfluidic lithography,” Electrophoresis (2020).
2024-09-04T02:54:57.793390
2020-03-06T10:32:03
2003.03125
{ "authors": "Radha Kopparti, Tillman Weyde", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26077", "submitter": "Radha Kopparti", "url": "https://arxiv.org/abs/2003.03125" }
arxiv-papers
# Bayesian Weight Priors for Learning Identity Relations Radha Kopparti Department of Computer Science City, University of London London, United Kingdom <EMAIL_ADDRESS> &Tillman Weyde Department of Computer Science City, University of London London, United Kingdom <EMAIL_ADDRESS> ###### Abstract Learning abstract and systematic relations has been an open issue in neural network learning for over 30 years. It has been shown recently that neural networks do not learn relations based on identity and are unable to generalize well to unseen data. The Relation Based Pattern (RBP) approach has been proposed as a solution for this problem. In this work, we extend RBP by realizing it as a Bayesian prior on network weights to model the identity relations. This weight prior leads to a modified regularization term in otherwise standard network learning. In our experiments, we show that the Bayesian weight priors lead to perfect generalization when learning identity based relations and do not impede general neural network learning. We believe that the approach of creating an inductive bias with weight priors can be extended easily to other forms of relations and will be beneficial for many other learning tasks. ## 1 Introduction Humans, including infants, are very effective at learning patterns of relations based on identity from sensory input and systematically applying them to new stimuli, even after a very brief exposure, while current neural networks in their standard form fail to detect these relations in unseen data [Marcus et al., 1999]. It has become evident that there are still relevant limitations to systematic generalization in current neural network architectures [Lake and Baroni, 2018, Marcus, 2018]. Neural networks tend to be good at memorizing the numerical patterns seen in the training set but often fail to extrapolate this representation outside the training set [Liska et al., 2018]. It was found that standard neural networks do not seem to learn identity relations, i.e. the equality of two vectors [Weyde and Kopparti, 2018], which are fundamental for many higher level tasks. A well known study in this direction was conducted by Marcus et al. [1999], where a recurrent neural network failed to distinguish abstract patterns based on equality or identity relations between the input stimuli, although seven-month-old infants showed the ability to distinguish them after a few minutes of exposure. This was followed by a lively exchange on rule learning by neural networks and in human language acquisition in general, where results by Elman [1999], Altmann and Dienes [1999] and Shultz and Bale [2001] could not be reproduced by Vilcu and Hadley [2001, 2005], and Shultz and Bale [2006] disputed claims by Vilcu and Hadley [2005]. Other approaches, such as those by Shastri and Chang [1999], Dominey and Ramus [2000], and Alhama and Zuidema [2016], use specialized network architectures or different problem formulations or evaluation methods. Recently, the Relation Based Patterns (RBP) approach has been introduced as a way to create a suitable inductive bias for the problem of learning identity relations on binary vectors [Weyde and Kopparti, 2018, 2019]. The task is to classify whether two halves of the input vectors are equal,
i.e., $u_{i}=u_{n+i}\;\forall i\in\{1,\dots,n\}$ for an input vector with $2n$ dimensions. The RBP model introduced in Weyde and Kopparti [2018, 2019] is based on the comparison of input neurons that correspond to each other in a relation, e.g. the corresponding dimensions in a pair of binary vectors. For the comparison they introduce Differentiator-Rectifier (DR) units, which calculate the absolute difference of two inputs: $f(x,y)=|x-y|$. For each dimension of the input vectors, a DR unit is introduced. This simplifies the learning problem because the summation of the activations of the DR units is sufficient for generalisable identity detection. The results show no restriction on the learning of other tasks in practice. There are different ways of integrating DR units into neural networks in RBP: Early and Mid Fusion. In _Early Fusion_, DR units are concatenated to the input units, and in _Mid Fusion_ they are concatenated to the hidden layer. In both cases, the existing input and hidden units are unchanged. Adding units with hard-wired connections and an unusual activation function limits the flexibility of the RBP approach. In this work, we introduce a modified RBP structure that can be formulated as a Bayesian prior to model the weight structure of Mid Fusion RBP in a standard feed-forward network setting. ## 2 A Bayesian Approach to Relation Based Patterns In the weight prior approach, we replace each DR unit with two standard neurons and model the fixed weights with a default weight matrix $D$. This matrix contains the weights that enable a dimension-wise comparison of the inputs. A diagram of the structure is given in Figure 1a), where $\alpha$ and $\beta$ are the two input values being compared. A small example of this default matrix is shown in Figure 1b). The incoming connections from two corresponding inputs to a neuron in the hidden layer have values of $1$ and $-1$. For the same pair we use another hidden neuron with inverted signs of the weights, as in rows 1 and 2. With a ReLU activation function, this means that the value of one of the two hidden neurons will be positive if the corresponding input neurons have different activations. We therefore need at least $n$ hidden units in the first hidden layer for a comparison of two $n/2$-dimensional vectors. All other incoming connection weights are set to 0, including the bias. This ensures that in all cases where corresponding inputs are not equal, there will be a positive activation in one of the neurons. a) b) $\displaystyle D=\begin{pmatrix}+1&0&0&-1&0&0\\ -1&0&0&+1&0&0\\ 0&+1&0&0&-1&0\\ 0&-1&0&0&+1&0\\ 0&0&+1&0&0&-1\\ 0&0&-1&0&0&+1\\ 0&0&0&0&0&0\\ &\vdots&&&\vdots&\end{pmatrix}$ Figure 1: a) For each input dimension there are two hidden layer neurons, e.g. $h_{1},h_{2}$ for dimension $1$. $\alpha$ and $\beta$ indicate two input vectors of dimensionality $n$. b) The default weight matrix $D$ for vector dimension $n=3$. Each row corresponds to the incoming weights of a hidden neuron, e.g. the first row to $h_{1}$ in figure 1a). If there are more hidden neurons than pairs of input neurons, the additional rows contain only zeros. The matrix $D$ is then used to define default values of the network weights, i.e. we impose a loss based on the difference between $D$ and the actual weight matrix $W$ that connects the input neurons to the first hidden layer.
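To make this construction concrete, here is a minimal PyTorch sketch (the helper names are ours, not from the paper's code release) that builds the default matrix $D$ of Figure 1b) and measures the deviation of an actual weight matrix $W$ from it; the penalty corresponds to the loss terms formalized in equations (1) and (2) below.

```python
import torch

def default_matrix(n_half: int, n_hidden: int) -> torch.Tensor:
    """Default weights D (cf. Figure 1b): rows 2i and 2i+1 compare input i
    with input n_half + i via weights (+1, -1) and (-1, +1); all other
    entries, including surplus hidden rows, are zero."""
    assert n_hidden >= 2 * n_half, "need two hidden units per input pair"
    D = torch.zeros(n_hidden, 2 * n_half)
    for i in range(n_half):
        D[2 * i, i], D[2 * i, n_half + i] = 1.0, -1.0
        D[2 * i + 1, i], D[2 * i + 1, n_half + i] = -1.0, 1.0
    return D

def erbp_penalty(W: torch.Tensor, D: torch.Tensor, p: int = 2) -> torch.Tensor:
    """Deviation of the first-layer weights W from D: L1 (p=1, Laplacian
    prior) or L2 (p=2, Gaussian prior)."""
    diff = W - D
    return diff.abs().sum() if p == 1 else diff.pow(2).sum()

D = default_matrix(n_half=3, n_hidden=10)  # matches Figure 1b) for n = 3
# Inside a training step (model, lam, x, y are placeholders):
# loss = F.cross_entropy(model(x), y) + lam * erbp_penalty(model.fc1.weight, D)
```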
The loss terms $l_{RBP_{1}}$ and $l_{RBP_{2}}$ for L1 and L2 regularization are defined as: $l_{RBP_{1}}=\sum_{i=1}^{k}|W_{i}-D_{i}|,\qquad l_{RBP_{2}}=\sum_{i=1}^{k}(W_{i}-D_{i})^{2},$ (1) where $k$ is the number of elements in $W$. These loss functions correspond to Bayesian priors on the weights with the mean defined by the values of $D$. The $L2$ loss corresponds to a Gaussian and the $L1$ loss to a Laplacian prior, such that backpropagation maximizes the posterior likelihood of the weights given the data [Williams, 1995]. The overall training loss $l_{t}$ is defined as $l_{t}=l_{c}+\lambda\times l_{RBP}$ (2) where $l_{c}$ is the cross entropy and $\lambda$ is the regularization parameter, corresponding to the inverse of the variance of the prior, effectively regulating the strength of the RBP regularization. We call these methods of embedding the RBP into a standard network ERBP L1 and ERBP L2, respectively. ## 3 Generalizing and Learning of Identity Relations For the task of learning identity relations, we generate synthetic data and use a standard feed-forward neural network. The input vector is binary and the target values are $[0,1]$ for unequal and $[1,0]$ for equal vector halves, as described in section 1. We use a grid search over the hyper-parameters: the number of epochs [10,20,30] and the number of neurons per hidden layer [10,20,30]. For the Bayesian weight prior, we varied the regularization parameter $\lambda$ with values [0.01,0.03,0.1,0.3,1,3,10,30]. We ran a total of 10 simulations using the SGD and Adam optimizers, training for 20 epochs. We used a single hidden layer and a batch size of 1 unless indicated otherwise. The networks have been implemented using the PyTorch library (http://pytorch.org). The train/test split was set to 75% / 25% for all tasks. We tested with vector dimensionalities $n=3,10,30$. We generate all vectors with equal halves and take a random sample of all those with unequal halves to balance the classes. We downsample the size of the dataset to 1000 when it is greater. ### 3.1 Identity Relations Identity is an abstract relation in the sense that it is independent of the actual values of the individual arguments; it depends only on their combined configuration. In this task, pairs of vectors are presented to a feed-forward network and the task is to distinguish whether the two vectors are equal or not. We evaluated the performance of the network in different configurations on a held-out test set and the results are tabulated in Table 1 for different vector dimensions. As observed by Weyde and Kopparti [2018], standard networks do not improve much over random guessing, while ERBP L1 and ERBP L2, as well as Mid Fusion, almost always achieve perfect generalization with a sufficiently strong $\lambda$ (see the next section for details). Table 1: Test set classification accuracy (in %) and standard deviation over 10 simulations (in brackets) using different models for Identity Learning (vector dimensions $n=3,10,30$). The networks were trained with the Adam optimizer for 20 epochs. Type | Standard | Early Fusion | Mid Fusion | ERBP L1 | ERBP L2 ---|---|---|---|---|--- $n=3$ | 55 (1.91) | 65 (1.34) | 100 (0.04) | 100 (0.00) | 100 (0.00) $n=10$ | 51 (1.67) | 65 (1.32) | 100 (0.08) | 100 (0.04) | 100 (0.02) $n=30$ | 50 (1.52) | 65 (1.27) | 100 (0.07) | 100 (0.05) | 100 (0.04) ### 3.2 Parameter Variations We study the effect of several parameters: the number of hidden layers, the choice of optimizer, the regularization factor and the weight initialization on identity relation learning tasks.
Network Depth: We tested identity learning with deeper neural network models, using $h\in\{2,3,4,5\}$ hidden layers. The results are tabulated in Table 2, showing only minor improvements in the network performance for deeper networks. However, ERBP L1 and L2 generalization is consistent and independent of the network depth. Table 2: Test set classification accuracy (in %) with standard deviation (in brackets) for identity learning ($n=3$) using deeper networks. The networks were trained with the Adam optimizer for 20 epochs. Hidden layers | No RBP | Early Fusion | Mid Fusion | ERBP L1 | ERBP L2 ---|---|---|---|---|--- h = 2 | 55 (1.65) | 65 (1.26) | 100 (0.02) | 100 (0.00) | 100 (0.00) h = 3 | 55 (1.67) | 67 (1.14) | 100 (0.03) | 100 (0.00) | 100 (0.00) h = 4 | 58 (1.63) | 68 (1.25) | 100 (0.02) | 100 (0.00) | 100 (0.00) h = 5 | 59 (1.68) | 72 (1.23) | 100 (0.02) | 100 (0.00) | 100 (0.00) Table 3: Accuracy (in %) and standard deviation of identity learning ($n=3$) using the Adam and SGD optimizers for ERBP L1 and L2. SGD can also lead to 100% accuracy on the test set, but needs higher $\lambda$ values. Type | ERBP L1 | ERBP L2 ---|---|--- Adam | 100 (0.00) | 100 (0.00) SGD | 98 (0.06) | 96 (0.05) Optimiser: We used both Stochastic Gradient Descent (SGD) and the Adam optimizer [Kingma and Ba, 2014] for training ERBP L1 and L2. We observed faster convergence and greater improvement in the overall accuracy with Adam compared to SGD. We observed similar results for both ERBP L1 and L2. Table 3 summarizes the results of identity learning for both optimizers with the regularization parameter $\lambda$ set to 1. We observe that SGD does not reach full generalization in this setting; however, it does so at higher values of $\lambda$. Regularization Factor: We varied the regularization factor $\lambda$ in the loss function of the ERBP models. We observed that a factor of 3 or above reliably leads to perfect generalization in the identity learning task. Figure 2 shows how generalization depends on the size of the regularization factor $\lambda$ for the L1 and L2 loss functions when learning identity relations. Figure 2: Test set classification accuracy (in %) of the network with ERBP L1 and L2 when varying the regularization parameter $\lambda$ (shown in logarithmic scale) for identity learning ($n=3$), using the Adam optimizer. ### 3.3 Jointly learning identity relations and bit patterns In order to test whether RBP and ERBP impede other learning tasks, we also tested learning with some non-relational patterns that are based on the values of specific neurons. The simplest form is that one bit determines the target class. This case is learned with 100% generalization performance by a network with 2 additional outputs for the pattern classification. Furthermore, we tested the learning of a classification based on all even or odd elements in the vector being $0$. In this case, we also get perfect generalization for $\lambda$ up to 10. For $\lambda=30$ we see a first deterioration of the accuracy, but there is a wide range of values where both the identity relations and the even/odd classification generalize perfectly. ## 4 Conclusions Identity based relations are a fundamental form of relational learning. In this work, we re-visit the problem of learning identity relations using standard neural networks and show that creating a weight prior on the network weights leads to generalisable solutions for learning identity based relations.
This also did not affect the learning of non-relational patterns in our preliminary experiments, although more thorough testing remains to be done here. We believe that addressing these issues and coming up with effective solutions is necessary for higher level relational learning tasks and is also relevant to problems in general neural network learning. In future work, we would like to extend this approach to other complex relational learning tasks and develop effective ways of creating an inductive bias using weight priors within standard neural network architectures. ## References * Marcus et al. [1999] Gary Marcus, Sujith Vijayan, S. Bandi Rao, and Peter M. Vishton. Rule learning by seven-month-old infants. _Science_, 283(5398):77–80, 1999. * Lake and Baroni [2018] Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In _International Conference on Machine Learning_, pages 2879–2888, 2018. * Marcus [2018] Gary Marcus. Deep learning: a critical appraisal. arXiv:1801.00631, 2018. * Liska et al. [2018] Adam Liska, Germán Kruszewski, and Marco Baroni. Memorize or generalize? Searching for a compositional RNN in a haystack. _CoRR_, abs/1802.06467, 2018. URL http://arxiv.org/abs/1802.06467. * Weyde and Kopparti [2018] Tillman Weyde and Radha Kopparti. Feed-forward neural networks need inductive bias to learn equality relations. arXiv:1812.01662, 2018. * Elman [1999] Jeffrey Elman. Generalization, rules, and neural networks: A simulation of Marcus et al. Technical report, 1999. URL https://crl.ucsd.edu/~elman/Papers/MVRVsimulation.html. * Altmann and Dienes [1999] Gerry Altmann and Zoltan Dienes. Technical comment on rule learning by seven-month-old infants and neural networks. _Science_, 284(5416):875, 1999. * Shultz and Bale [2001] Thomas R. Shultz and Alan C. Bale. Neural network simulation of infant familiarization to artificial sentences: Rule-like behavior without explicit rules and variables. _Infancy_, 2(4):501–536, 2001. * Vilcu and Hadley [2001] Marius Vilcu and Robert F Hadley. Generalization in simple recurrent networks. _Proceedings of the Annual Meeting of the Cognitive Science Society_, 23:1072–1077, 2001. * Vilcu and Hadley [2005] Marius Vilcu and Robert F Hadley. Two apparent ‘counterexamples’ to Marcus: A closer look. _Minds and Machines_, 15(3-4):359–382, 2005. * Shultz and Bale [2006] Thomas R Shultz and Alan C Bale. Neural networks discover a near-identity relation to distinguish simple syntactic forms. _Minds and Machines_, 16(2):107–139, 2006. * Shastri and Chang [1999] Shastri and Chang. A spatiotemporal connectionist model of algebraic rule-learning. Technical Report TR-99-011, International Computer Science Institute, 1999. * Dominey and Ramus [2000] Peter Dominey and Franck Ramus. Neural network processing of natural language: Sensitivity to serial, temporal and abstract structure of language in the infant. _Language and Cognitive Processes_, 15(1):87–127, 2000. * Alhama and Zuidema [2016] Raquel G. Alhama and Willem Zuidema. Pre-wiring and pre-training: What does a neural network need to learn truly general identity rules. _CoCo at NIPS_, 2016. * Weyde and Kopparti [2019] Tillman Weyde and Radha Kopparti. Modelling identity rules with neural networks. _Journal of Applied Logics_, 6(4):745–769, 2019. * Williams [1995] Peter M Williams. Bayesian regularization and pruning using a Laplace prior.
_Neural Computation_, 7(1):117–143, 1995. * Kingma and Ba [2014] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. 2014. URL https://arxiv.org/abs/1412.6980.
2024-09-04T02:54:57.801137
2020-03-06T10:54:46
2003.03130
{ "authors": "Sebastian J. M\\\"uller (1), Franziska Weigl (2), Carina Bezold (1), Ana\n Sancho (2,3), Christian B\\\"acher (1), Krystyna Albrecht (2) and Stephan Gekle\n (1) ((1) Theoretical Physics VI, Biofluid Simulation and Modeling, University\n of Bayreuth, (2) Department of Functional Materials in Medicine and Dentistry\n and Bavarian Polymer Institute (BPI), University of W\\\"urzburg, (3)\n Department of Automatic Control and Systems Engineering, University of the\n Basque Country UPV/EHU, San Sebastian, Spain)", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26078", "submitter": "Sebastian Johannes M\\\"uller", "url": "https://arxiv.org/abs/2003.03130" }
arxiv-papers
Published 2020 in: Biomechanics and Modeling in Mechanobiology DOI: 10.1007/s10237-020-01397-2 Unfortunately, the journal version misses an important contributor. The correct author list is the one given in this document. A hyperelastic model for simulating cells in flow Sebastian J. Müller 1, Franziska Weigl 2, Carina Bezold 1, Ana Sancho 2,3, Christian Bächer 1, Krystyna Albrecht 2 and Stephan Gekle 1 1 Theoretical Physics VI, Biofluid Simulation and Modeling, University of Bayreuth, Universitätsstraße 30, 95440 Bayreuth, Germany 2 Department of Functional Materials in Medicine and Dentistry and Bavarian Polymer Institute (BPI), University of Würzburg, Pleicherwall 2, 97070 Würzburg, Germany 3 Department of Automatic Control and Systems Engineering, University of the Basque Country UPV/EHU, San Sebastian, Spain ###### Abstract In the emerging field of 3D bioprinting, cell damage due to large deformations is considered a main cause for cell death and loss of functionality inside the printed construct. Those deformations, in turn, strongly depend on the mechano-elastic response of the cell to the hydrodynamic stresses experienced during printing. In this work, we present a numerical model to simulate the deformation of biological cells in arbitrary three-dimensional flows. We consider cells as an elastic continuum according to the hyperelastic Mooney–Rivlin model. We then employ force calculations on a tetrahedralized volume mesh. To calibrate our model, we perform a series of FluidFM® compression experiments with REF52 cells, demonstrating that all three parameters of the Mooney–Rivlin model are required for a good description of the experimental data at very large deformations up to $80\,\%$. In addition, we validate the model by comparing to previous AFM experiments on bovine endothelial cells and artificial hydrogel particles. To investigate cell deformation in flow, we incorporate our model into Lattice Boltzmann simulations via an Immersed-Boundary algorithm. In linear shear flows, our model shows excellent agreement with analytical calculations and previous simulation data. Keywords: Hyperelasticity, Cell deformation, Mooney–Rivlin, Atomic Force Microscopy, Shear flow, Lattice-Boltzmann ## 1 Introduction The dynamic behavior of flowing cells is central to the functioning of organisms and forms the base for a variety of biomedical applications. Technological systems that make use of the elastic behavior of cells are, for example, cell sorting [1], real-time deformability cytometry [2, 3] or probing techniques for cytoskeletal mechanics [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. In most, but not all, of these applications cell deformations typically remain rather small. A specific example where large deformations become important is 3D bioprinting. Bioprinting is a technology which, analogously to common 3D printing, pushes a suspension of cells in highly viscous hydrogels—a so-called bioink—through a fine nozzle to create three-dimensional tissue structures. A major challenge in this process lies in the control of large cell deformations and cell damage during printing. Those deformations arise from hydrodynamic stresses in the printer nozzle and ultimately affect the viability and functionality of the cells in the printed construct [16, 17, 18, 19, 20]. How exactly these hydrodynamic forces correlate with cell deformation, however, strongly depends on the elastic behavior of the cell and its interaction with the flowing liquid.
Theoretical and computational modeling efforts in this area have thus far been restricted to pure fluid simulations without actually incorporating the cells [21, 22, 17] or simple 2D geometries [23, 24]. The complexity of cell mechanics and the diversity of possible applications make theoretical modeling of cell mechanics in flow a challenge which, to start with, requires reliable experimental data for large cell deformations. The most appropriate tool to measure cellular response at large deformations is atomic force microscopy (AFM) [25, 26, 27, 28, 8, 29, 30, 31, 32, 33, 34]. AFM cantilevers with pyramidal tips, colloidal probes, or flat geometries are used to indent or compress cells. Therefore, a common approach to characterize the elasticity of cells utilizes the Hertzian theory, which describes the contact between two linear elastic solids [35, p. 90-104], but is limited to the range of small deformations [36]. Experimental measurements with medium-to-large deformations typically show significant deviations from the Hertz prediction, e. g., for cells or hydrogel particles [37]. Instead of linear elasticity, a suitable description of cell mechanics for bioprinting applications requires more advanced hyperelastic material properties. While for simple anucleate fluid-filled cells such as, e.g., red blood cells, theoretical models abound [38, 39, 40, 41, 42], the availability of models for cells including a complex cytoskeleton is rather limited. In axisymmetric geometries, Caille et al. [43] and Mokbel et al. [44] used an axisymmetric finite element model with neo-Hookean hyperelasticity to model AFM and microchannel experiments on biological cells. In shear flow, approximate analytical treatments are possible [45, 46, 47, 48]. Computationally, Gao and Hu [46] carried out 2D simulations while in 3D Lykov et al. [49] utilized a DPD technique based on a bead-spring model. Furthermore, Villone et al. [50, 51] presented an arbitrary Lagrangian-Eulerian approach for elastic particles in viscoelastic fluids. Finally, Rosti et al. [52] and Saadat et al. [53] considered viscoelastic and neo-Hookean finite element models, respectively, in shear flow. In this work, we introduce and calibrate a computational model for fully three-dimensional simulations of cells in arbitrary flows. Our approach uses a Lattice-Boltzmann solver for the fluid and a direct force formulation for the elastic equations. In contrast to earlier works [43, 47, 44, 52, 53] our model uses a three-parameter Mooney–Rivlin elastic energy functional. To demonstrate the need for this more complex elastic model, we carry out extensive FluidFM® indentation experiments for REF52 (rat embryonic fibroblast) cells at large cell deformation up to $80\,\%$ [54]. In addition, our model compares favorably with previous AFM experiments on bovine endothelial cells [43] as well as artificial hydrogel particles [37]. Our model provides a much more realistic force–deformation behavior compared to the small-deformation Hertz approximation, but is still simple and fast enough to allow the simulation of dense cell suspensions in reasonable time. Particularly, our approach is less computationally demanding than conventional finite-element methods which usually require large matrix operations. Furthermore, it is easily extensible and allows, e.g., the inclusion of a cell nucleus by the choice of different elastic moduli for different parts of the volume.
We finally present simulations of our cell model in different flow scenarios using an Immersed-Boundary algorithm to couple our model with Lattice Boltzmann fluid calculations. In a plane Couette (linear shear) flow, we investigate the shear stress dependency of single cell deformation, which we compare to the average cell deformation in suspensions with higher volume fractions, and show that our results in the neo-Hookean limit are in accordance with earlier elastic cell models [47, 52, 53]. ## 2 Theory In general, hyperelastic models are used to describe materials that respond elastically to large deformations [55, p. 93]. Many cell types can be subjected to large reversible shape changes. This section provides a brief overview of the hyperelastic Mooney–Rivlin model implemented in this work. The displacement of a point is given by $\displaystyle u_{i}=y_{i}-x_{i}\>,$ (1) where $x_{i}$ ($i=1,2,3$) refers to the undeformed configuration (material frame) and $y_{i}$ to the deformed coordinates (spatial frame). We define the deformation gradient tensor and its inverse as [55, p. 14,18] $\displaystyle\mathsf{F}_{ij}=\frac{\partial{y_{i}}}{\partial{x_{j}}}=\frac{\partial{u_{i}}}{\partial{x_{j}}}+\delta_{ij}\quad\mathrm{and}\quad\mathsf{F}_{ij}^{-1}=\frac{\partial{x_{i}}}{\partial{y_{j}}}\>.$ (2) Together with the right Cauchy-Green deformation tensor, $\mathsf{C}=\mathsf{F}^{\intercal}\mathsf{F}$ (material description), we can define the following invariants which are needed for the strain energy density calculation below: $\displaystyle J$ $\displaystyle=\det\mathsf{F}$ (3) $\displaystyle I$ $\displaystyle=T_{\mathsf{C}}J^{-2/3}$ (4) $\displaystyle K$ $\displaystyle=\tfrac{1}{2}\left(T_{\mathsf{C}}^{2}-T_{\mathsf{C}^{2}}\right)J^{-4/3}$ (5) Here, $\displaystyle T_{\mathsf{C}}=\mathrm{tr}\,\mathsf{C}\quad\mathrm{and}\quad T_{\mathsf{C}^{2}}=\mathrm{tr}\,\left(\mathsf{C}^{2}\right)$ (6) are the trace of the right Cauchy-Green deformation tensor and its square, respectively. The nonlinear strain energy density of the Mooney–Rivlin model is given by [56, 57] $\displaystyle U=\left[\frac{\mu_{1}}{2}\left(I-3\right)+\frac{\mu_{2}}{2}\left(K-3\right)+\frac{\kappa}{2}\left(J-1\right)^{2}\right]\>,$ (7) where $\mu_{1}$, $\mu_{2}$, and $\kappa$ are material properties. They correspond—for consistency with linear elasticity in the range of small deformations—to the shear modulus $\mu=\mu_{1}+\mu_{2}$ and bulk modulus $\kappa$ of the material and are therefore related to the Young’s modulus $E$ and the Poisson ratio $\nu$ via [55, p. 74] $\displaystyle\mu=\frac{E}{2\left(1+\nu\right)}\quad\mathrm{and}\quad\kappa=\frac{E}{3\left(1-2\nu\right)}\>.$ (8) Through the choice $\mu_{2}=0$ in (7), we recover the simpler and frequently used [47, 53] neo-Hookean strain energy density: $\displaystyle U_{\mathrm{NH}}=\left[\frac{\mu}{2}\left(I-3\right)+\frac{\kappa}{2}\left(J-1\right)^{2}\right]$ (9) As we show later, this can be a sufficient description for some cell types. To control the strength of the second term and quickly switch between neo-Hookean and Mooney–Rivlin strain energy density calculation, we introduce a factor $w\in\left[0,1\right]$ and set $\displaystyle\mu_{1}=w\mu\quad\mathrm{and}\quad\mu_{2}=(1-w)\mu\>$ (10) such that $w=1$, which equals setting $\mu_{2}=0$ in (7), corresponds to the purely neo-Hookean description in (9), while $w<1$ increases the influence of the $\mu_{2}$-term and thus leads to a more pronounced strain hardening as shown in figure S-6 of the Supporting Information. 
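As a quick numerical illustration of equations (3)–(10), the following NumPy sketch (our own helper, not part of the authors' code) evaluates the Mooney–Rivlin strain energy density for a given deformation gradient $\mathsf{F}$; the parameter values are arbitrary examples.

```python
import numpy as np

def mooney_rivlin_energy(F, E=150.0, nu=0.48, w=1.0):
    """Strain energy density U of Eq. (7) for a 3x3 deformation gradient F.

    E [Pa] and nu are Young's modulus and Poisson ratio; w is the
    Mooney-Rivlin ratio, where w = 1 recovers the neo-Hookean form (9)."""
    mu = E / (2.0 * (1.0 + nu))             # shear modulus, Eq. (8)
    kappa = E / (3.0 * (1.0 - 2.0 * nu))    # bulk modulus, Eq. (8)
    mu1, mu2 = w * mu, (1.0 - w) * mu       # Eq. (10)
    C = F.T @ F                             # right Cauchy-Green tensor
    J = np.linalg.det(F)                    # Eq. (3)
    TC, TC2 = np.trace(C), np.trace(C @ C)  # Eq. (6)
    I = TC * J ** (-2.0 / 3.0)              # Eq. (4)
    K = 0.5 * (TC ** 2 - TC2) * J ** (-4.0 / 3.0)  # Eq. (5)
    return (mu1 / 2.0 * (I - 3.0) + mu2 / 2.0 * (K - 3.0)
            + kappa / 2.0 * (J - 1.0) ** 2)

# Sanity check: the undeformed state (F = identity) stores zero energy.
assert abs(mooney_rivlin_energy(np.eye(3))) < 1e-12
```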
## 3 Tetrahedralized cell model In this section we apply the hyperelastic theory of section 2 to a tetrahedralized mesh as shown in figure 1. ### 3.1 Calculation of elastic forces We consider a mesh consisting of tetrahedral elements as depicted in figure 1. The superscript $\alpha$ refers to the four vertices of the tetrahedron. The elastic force acting on vertex $\alpha$ in direction $i$ is obtained from (7) by differentiating the strain energy density $U$ with respect to the vertex displacement as $\displaystyle f_{i}^{\alpha}=-V_{0}\frac{\partial U}{\partial u_{i}^{\alpha}}\>,$ (11) where $V_{0}$ is the reference volume of the tetrahedron. In contrast to Saadat et al. [53], the numerical calculation of the force in our model does not rely on the integration of the stress tensor, but on a differentiation where the calculation of all resulting terms involves only simple arithmetic. Applying the chain rule for differentiation yields: $\displaystyle f_{i}^{\alpha}=-V_{0}\left[\left(\frac{\partial U}{\partial I}\frac{\partial I}{\partial T_{\mathsf{C}}}+\frac{\partial U}{\partial K}\frac{\partial K}{\partial T_{\mathsf{C}}}\right)\frac{\partial T_{\mathsf{C}}}{\partial\mathsf{F}_{kl}}+\left(\frac{\partial U}{\partial I}\frac{\partial I}{\partial J}+\frac{\partial U}{\partial K}\frac{\partial K}{\partial J}+\frac{\partial U}{\partial J}\right)\frac{\partial J}{\partial\mathsf{F}_{kl}}+\frac{\partial U}{\partial K}\frac{\partial K}{\partial T_{\mathsf{C}^{2}}}\frac{\partial T_{\mathsf{C}^{2}}}{\partial\mathsf{F}_{kl}}\right]\frac{\partial\mathsf{F}_{kl}}{\partial u_{i}^{\alpha}}$ (12) The evaluation of (12) requires the calculation of the deformation gradient tensor $\mathsf{F}$, which is achieved by linear interpolation of the coordinates and displacements inside each tetrahedral mesh element, as detailed in the next section. We note that our elastic force calculation is purely local, making it straightforward to employ different elastic models in different regions of the cell and/or to combine it with elastic shell models. This flexibility can be used to describe, e.g., the cell nucleus [43] or an actin cortex [58] surrounding the cell interior. ### 3.2 Interpolation of the displacement field Following standard methods, e.g. Bower [55], we start by interpolating a point $x_{i}$ inside a single tetrahedron using the vertex positions $x_{i}^{\alpha}$ ($\alpha=1,2,3,4$). The interpolation uses an inscribed, dimensionless coordinate system, denoted by $\left(\xi_{1},\xi_{2},\xi_{3}\right)$ with $0\leq\xi_{i}\leq 1$ (Bower [55, p. 481, 483] erroneously states a range of $-1\leq\xi_{i}\leq 1$ for the tetrahedral element), as depicted in figure 1a. One vertex defines the origin while the remaining three indicate the coordinate axes. A set of shape functions, i.e., interpolation functions, $N^{\alpha}\left(\xi_{1},\xi_{2},\xi_{3}\right)$ is employed to interpolate positions inside the tetrahedron volume. An arbitrary point $x_{i}$ inside the element is interpolated as $\displaystyle x_{i}=\sum\limits_{\alpha=1}^{4}N^{\alpha}\left(\xi_{1},\xi_{2},\xi_{3}\right)x_{i}^{\alpha}\>,$ (13) where the shape functions are defined as [55, p. 483]:
$N^{1}\left(\xi_{1},\xi_{2},\xi_{3}\right)=\xi_{1}$ (14), $N^{2}\left(\xi_{1},\xi_{2},\xi_{3}\right)=\xi_{2}$ (15), $N^{3}\left(\xi_{1},\xi_{2},\xi_{3}\right)=\xi_{3}$ (16), $N^{4}\left(\xi_{1},\xi_{2},\xi_{3}\right)=1-\xi_{1}-\xi_{2}-\xi_{3}$ (17). According to (1), the displacement of vertex $\alpha$ in $i$-direction is given by $\displaystyle u_{i}^{\alpha}=y_{i}^{\alpha}-x_{i}^{\alpha}\>.$ (18) Therefore, similar to (13), the displacement at an arbitrary point in the volume can also be expressed in terms of the shape functions and the vertex displacements as $\displaystyle u_{i}=\sum\limits_{\alpha=1}^{4}N^{\alpha}\left(\xi_{1},\xi_{2},\xi_{3}\right)u_{i}^{\alpha}\>.$ (19) The calculation of the deformation gradient tensor according to (2) requires the spatial derivative of the displacement: $\displaystyle\mathsf{F}_{ij}-\delta_{ij}=\frac{\partial u_{i}}{\partial x_{j}}=\frac{\partial u_{i}}{\partial\xi_{k}}\frac{\partial\xi_{k}}{\partial x_{j}}=\mathsf{A}_{ik}\mathsf{B}_{kj}$ (20) By inserting (19) into (20) and evaluating the shape functions, the components of the matrix $\mathsf{A}$ are easily determined to be the difference of the displacements between the origin (vertex 4) and the remaining vertices 1, 2 and 3: $\displaystyle\mathsf{A}_{ik}=u_{i}^{k}-u_{i}^{4}$ (21) Note that due to the linear interpolation $\mathsf{A}_{ik}$ is constant inside a given tetrahedron. The matrix $\mathsf{B}=\mathsf{J}^{-1}$ is the inverse of the Jacobian matrix, obtained similarly to (21) as $\displaystyle\mathsf{J}_{ik}=\frac{\partial x_{i}}{\partial\xi_{k}}=x_{i}^{k}-x_{i}^{4}\>.$ (22) Since $x_{i}$ refers to the reference coordinates, the calculation of the matrices $\mathsf{J}$ and $\mathsf{B}$ has to be performed only once at the beginning of a simulation. With the interpolation of the displacement in each tetrahedron, we can write all derivatives occurring in (12) as follows: $\frac{\partial U}{\partial I}=\frac{\mu_{1}}{2}$, $\frac{\partial U}{\partial K}=\frac{\mu_{2}}{2}$, $\frac{\partial U}{\partial J}=\kappa\left(J-1\right)$, $\frac{\partial I}{\partial T_{\mathsf{C}}}=J^{-2/3}$, $\frac{\partial I}{\partial J}=-\frac{2}{3}T_{\mathsf{C}}J^{-5/3}$, $\frac{\partial K}{\partial T_{\mathsf{C}}}=T_{\mathsf{C}}J^{-4/3}$, $\frac{\partial K}{\partial T_{\mathsf{C}^{2}}}=-\tfrac{1}{2}J^{-4/3}$, $\frac{\partial K}{\partial J}=-\frac{2}{3}\left(T_{\mathsf{C}}^{2}-T_{\mathsf{C}^{2}}\right)J^{-7/3}$, $\frac{\partial T_{\mathsf{C}}}{\partial\mathsf{F}_{il}}=2\mathsf{F}_{il}$, $\frac{\partial T_{\mathsf{C}^{2}}}{\partial\mathsf{F}_{il}}=4\mathsf{F}_{ik}\mathsf{C}_{kl}$, $\frac{\partial J}{\partial\mathsf{F}_{il}}=J\mathsf{F}_{li}^{-1}$, $\frac{\partial\mathsf{F}_{kl}}{\partial u_{i}^{\alpha}}=\delta_{ki}\mathsf{B}_{ml}\left(\delta_{m\alpha}-\delta_{4\alpha}\right)$. Figure 1: (a) The four-noded tetrahedron as mesh element within a local dimensionless coordinate system $\left\{\xi_{1},\xi_{2},\xi_{3}\right\}$, (b) the spherical cell model with its triangulated surface, and (c) its inner tetrahedralized mesh.
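Because of the linear interpolation, the deformation gradient is constant within each tetrahedron and follows from a single small matrix product. The sketch below (our own NumPy helper, mirroring equations (20)–(22)) computes $\mathsf{F}$ from the four reference and deformed vertex positions.

```python
import numpy as np

def deformation_gradient(x_ref, x_def):
    """F for one tetrahedron; x_ref and x_def are 4x3 arrays of the
    reference and deformed vertex coordinates, with vertex 4 (row index 3)
    as the origin of the local xi-coordinate system."""
    u = x_def - x_ref                 # vertex displacements, Eq. (18)
    J = (x_ref[:3] - x_ref[3]).T      # Jacobian, Eq. (22): J_ik = x_i^k - x_i^4
    B = np.linalg.inv(J)              # constant per tetrahedron, precomputable
    A = (u[:3] - u[3]).T              # Eq. (21): A_ik = u_i^k - u_i^4
    return A @ B + np.eye(3)          # Eq. (20): F = A B + identity

# Sanity check: a rigid translation leaves F at the identity.
x_ref = np.array([[1., 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])
assert np.allclose(deformation_gradient(x_ref, x_ref + 0.5), np.eye(3))
```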
### 3.3 Taylor deformation parameter As a measure for the cell deformation, we use the Taylor deformation parameter [59, 60, 61, 53] $\displaystyle D=\frac{a_{3}-a_{1}}{a_{3}+a_{1}}\>,$ (23) where $a_{1}$ and $a_{3}$ are, respectively, the minor and major semi-axis of an ellipsoid corresponding to the inertia tensor of the cell. The Taylor deformation is a good measure for approximately elliptic cell deformations, as they occur in shear flow (cf. section 6). To calculate $D$, first the components of the inertia tensor $\displaystyle\mathsf{\Theta}_{ij}=\int\limits_{V}\left(x_{k}x_{k}\delta_{ij}-x_{i}x_{j}\right){\mathrm{d}{V}}\>,$ (24) where $\vec{x}$ is a vector inside the volume $V$, are calculated using our discretized cell with $N_{\mathrm{tet}}$ tetrahedra as $\displaystyle\mathsf{\Theta}_{ij}=\sum\limits_{l=1}^{N_{\mathrm{tet}}}V_{l}\left(r_{k}^{l}r_{k}^{l}\delta_{ij}-r_{i}^{l}r_{j}^{l}\right)\>.$ (25) The vector $\vec{r}^{\,l}$ denotes the center of mass of the $l^{\mathrm{th}}$ tetrahedron and $V_{l}$ is its current volume. The eigenvalues $\theta_{1}>\theta_{2}>\theta_{3}$ of $\mathsf{\Theta}$ can be used to fit the semi-axes $a_{1}<a_{2}<a_{3}$ of the corresponding ellipsoid: $\displaystyle a_{1}=\sqrt{\frac{5}{2M}\left(-\theta_{1}+\theta_{2}+\theta_{3}\right)}\>,\quad a_{2}=\sqrt{\frac{5}{2M}\left(\theta_{1}-\theta_{2}+\theta_{3}\right)}\>,\quad a_{3}=\sqrt{\frac{5}{2M}\left(\theta_{1}+\theta_{2}-\theta_{3}\right)}$ (26) The prefactor contains the mass $M$ of the ellipsoid (considering uniform mass density) and drops out in the calculation of $D$. ## 4 Comparison of the numerical model to FluidFM® measurements on REF52 cells In this section, we validate compression simulations of our cell model with FluidFM® compression experiments of REF52 cells stably expressing paxillin-YFP [54]. These experiments provide as an output the required force to produce a certain deformation of the cell, which can be directly compared to our model. We start with a detailed description of the experiments and show the suitability of our model to describe the elastic behavior of REF52 cells afterwards. ### 4.1 FluidFM® indentation measurements We perform a series of compression measurements of REF52 cells with a Flex FPM (Nanosurf GmbH, Germany) system that combines the AFM with the FluidFM® technology (Cytosurge AG, Switzerland). In contrast to conventional AFM techniques, FluidFM® uses flat cantilevers that possess a microchannel connected to a pressure system. By applying a suction pressure, cells can be aspirated and retained at the aperture of the cantilever’s tip. A more detailed description of the setup and its functionality is already reported in [31]. All experiments are based on a cantilever with an aperture of $8\,\mathrm{\mu m}$ diameter and a nominal spring constant of $2\,\mathrm{N\,m^{-1}}$. In order to measure the cellular deformation, a cell was sucked onto the tip and compressed between the cantilever and the substrate until a setpoint of $100\,\mathrm{nN}$ was reached. Immediately before the experiment, the cells were detached by using Accutase (Sigma Aldrich) and were therefore in suspension at the time of indentation. In this way, it can be ensured that only a single cell is deformed during each measurement. An example micrograph of the experiment before compression is shown in figure 2.
Analogously to AFM, the primary data in the form of cantilever position (in $\mathrm{m}$) and deflection (in $\mathrm{V}$) has to be converted to force and deformation through the deflection sensitivity (in $\mathrm{m\,V^{-1}}$) and the cantilever’s spring constant. The cellular deformation further requires the determination of the contact point, which we choose as the cantilever position where the measured force starts to increase. The undeformed cell size is obtained as the mean of a horizontal and a vertical diameter measurement using the software ImageJ. Figure 2: Example micrograph showing the FluidFM® cantilever and a cell viewed from the top. Scale bar is $30\,\mathrm{\mu m}$ ### 4.2 Simulation setup The experimental setup of the previous section is easily transferred and implemented for our cell model: the undeformed spherical cell rests on a fixed plate while a second plate approaches from above to compress the cell, as depicted in figure 3 (a and b). In section 5.2 below we will also use a slightly modified version where a sphere indents the cell, as shown in figure 3 (c and d). A repulsive force prevents the cell vertices from penetrating the plates or the spherical indenter. The elastic restoring forces (cf. section 3) acting against this imposed compression are transmitted throughout the whole mesh, deforming the cell. We use meshes consisting of $2000$ to $5000$ vertices and about $10\,000$ to $30\,000$ tetrahedra to build up a spherical structure. More details of the mesh and its generation (section S-2.4) as well as the algorithm (section S-3) are provided in the SI. Figure 3: (a and b) Cell compression simulations: The cell is compressed between a lower, resting, and an upper, moving, plate. (c and d) Colloidal probe cell indentation simulations: The cell rests on a plate, while being indented with a sphere ### 4.3 Results In our FluidFM® experiment series with REF52 cells, the cell radii lie between $7.1\,\mathrm{\mu m}$ and $10.4\,\mathrm{\mu m}$ with an overall average of $8.6(7)\,\mathrm{\mu m}$. In figure 4 we depict the force as a function of the non-dimensionalized deformation, i.e., the absolute compression divided by the cell diameter. The experimental data curves share general characteristics: The force increases slowly in the range of small deformations up to roughly $40\,\%$, while a rapidly increasing force is observed for larger deformations. Although the variation of the cell radius in the different measurements is already taken into account in the deformation, the point of the force upturn differs significantly, which indicates a certain variability in the elastic parameters of the individual cells. We use the compression simulation setup as detailed in section 4.2 to calculate force–deformation curves of our cell model. The Poisson ratio is chosen as $\nu=0.48$. In section S-2.7 of the Supporting Information we show that variations of $\nu$ do not strongly affect the results. A best fit approach is used to determine the Young’s modulus and the ratio of shear moduli $w$ and leads to very good agreement between model prediction and experimental data, as shown in figure 4 as well as section S-1 of the SI. While the general range of force values is controlled by the Young’s modulus, the Mooney–Rivlin ratio $w$ especially defines the point of the force upturn.
We find Young’s moduli in the range $110\,\mathrm{Pa}$ to $160\,\mathrm{Pa}$ and $w=0.25$, $0.5$, and $1$. For very small deformations our hyperelastic model produces the same results as would be expected from a linear elastic model according to the Hertz theory. See the SI (section S-2.5) for further details on the calculation of the force–deformation behavior according to the Hertzian theory. For large deformations, the force rapidly increases due to its nonlinear character, showing strain-hardening behavior and huge deviations from the Hertz theory. Overall, we find an excellent match between simulation and our FluidFM® measurements with REF52 cells. Figure 4: Our numerical model in comparison to our FluidFM® measurements on REF52 cells. The labels give the two fit parameters $E$ and $w$. We find Young’s moduli in the range of $110\,\mathrm{Pa}$ to $160\,\mathrm{Pa}$. The Hertz theory is shown for a Young’s modulus of $180\,\mathrm{Pa}$ ## 5 Comparison of our numerical model to other micromechanical setups In this section, we compare our simulations to axisymmetric calculations using the commercial software Abaqus and validate our cell model with further experimental data for bovine endothelial cells from [43] and very recent data for hydrogel particles from [37]. ### 5.1 Validation with axisymmetric simulations To validate our model numerically, we compare our simulated force–deformation curves to calculations using the commercial software Abaqus [62] (version 6.14). In Abaqus, we use a rotationally symmetric setup consisting of a two-dimensional semicircle, which is compressed between two planes, similar to our simulation setup in section 4.2 and the finite element model utilized in [43]. The semicircle has a radius $r=15\,\mathrm{\mu m}$, a Young’s modulus of $E=2.25\,\mathrm{kPa}$ and a Poisson ratio of $\nu=0.48$. We choose a triangular mesh and the built-in implementation of the hyperelastic neo-Hookean model. In figure 5 we see very good agreement between the results of the two different numerical methods. Figure 5: Comparison of force–deformation curves obtained from our model (red line) with the linear elastic Hertz theory (black line) and the two-dimensional simulation with Abaqus (red squares), showing good agreement between our three-dimensional and the axisymmetric model ### 5.2 Validation with AFM experiments To compare with the AFM experiments of Caille et al. [43], we simulate a cell with radius $15\,\mathrm{\mu m}$ using the setup of section 4.2. For the hydrogel particle indentation [37] we use the setup depicted in figure 3 (c and d) with a particle radius of $40\,\mathrm{\mu m}$ and a radius of the colloidal probe of $26.5\,\mathrm{\mu m}$. The Poisson ratio is chosen as $0.48$ in all simulations and the Young’s modulus is determined using a best fit to the experimental data points. Since the neo-Hookean description appears to be sufficient for these data sets, we further set $w=1$. In figure 6a, we show the experimental data for suspended, round, bovine endothelial cells of five separate measurements from [43] together with the prediction of the Hertz theory for a Young’s modulus of $1000\,\mathrm{Pa}$. Fitting our data with Young’s moduli in the range of $550\,\mathrm{Pa}$ to $2400\,\mathrm{Pa}$, we find good agreement between our calculations and the experimental data. We note that Caille et al.
[43] observed similarly good agreement for their axisymmetric incompressible neo-Hookean FEM simulations which, however, cannot be coupled to external flows, in contrast to the approach presented here. The same procedure is applied to the colloidal probe indentation data of hydrogel particles from [37], showing in figure 6b the experimental data and the prediction of the Hertz theory from [37]. We find excellent agreement between our model calculations for Young’s moduli in the range of $580\pm 100\,\mathrm{Pa}$ and the experimental data. For both systems, figure 6 shows large deviations between the Hertzian theory and the experimental data for medium-to-large deformations. Our model provides a significant improvement in this range. Figure 6: (a) Our numerical model in comparison to experimental measurements of bovine endothelial cells from [43]. The black line depicts the prediction of the Hertz theory for a Young’s modulus of $1000\,\mathrm{Pa}$. (b) Our numerical model in comparison to experimental measurements of hydrogel particles from [37]. The indicated range corresponds to the experimentally found range of $\pm 100\,\mathrm{Pa}$ for the Young’s modulus according to the depicted Hertz model ## 6 Application in shear flow We now apply our model to study the behavior of cells in a plane Couette (linear shear) flow setup and compare the steady cell deformation to other numerical and analytical cell models of Gao et al. [47], Rosti et al. [52] and Saadat et al. [53]. A sketch of the simulation setup is shown in figure 7. For simplicity, we choose $w=1$ to reduce the Mooney–Rivlin description (7) to two free parameters $\mu$ and $\kappa$ (or $E$ and $\nu$), obtaining a compressible neo-Hookean form. We use the Lattice Boltzmann implementation of the open source software package ESPResSo [63, 64]. Coupling between fluid and cell is achieved via the immersed-boundary algorithm [65, 53], which we implemented into ESPResSo [66, 58]. We note here that, in contrast to Saadat et al. [53], we do not subtract the fluid stress within the particle interior. This leads to a small viscous response of the cell material in addition to its elasticity. To obtain (approximately) the limit of a purely elastic particle, we exploit a recently developed method by Lehmann et al. [67] to discriminate between the cell interior and exterior during the simulation. Using this technique, we can tune the ratio between inner and outer viscosity $\lambda$, with $\lambda\to 0$ representing a purely elastic particle. For simplicity, we will nevertheless set $\lambda=1$ in the following, except where otherwise noted. Details of the method are provided in the SI (section S-4.1). As a measure of the deformation, we investigate the Taylor parameter $D$ (23) of our initially spherical cell model in shear flow at different shear rates $\dot{\gamma}$. ### 6.1 Single cell simulation The first simulation setup, a single cell in infinite shear flow, is realized by choosing a simulation box of the dimensions $10\times 15\times 5$ ($x\times y\times z$) in units of the cell radius. The infinite shear flow is approximated by applying a tangential velocity $u_{\mathrm{wall}}$ on the $x$-$z$-planes at $y=0$ in negative and at $y=15$ in positive $x$-direction, as depicted in figure 7.
The tangential wall velocity is calculated using the distance $H$ of the parallel planes and the constant shear rate $\dot{\gamma}$ via $\displaystyle u_{\mathrm{wall}}=\tfrac{1}{2}H\dot{\gamma}\>.$ (27) The box is periodic in $x$ and $z$. A single cell is placed at the center of the simulation box, corresponding to a volume fraction of $\phi=0.0003$. We choose the following parameters: fluid mass density $\varrho=10^{3}\,\mathrm{kg\,m^{-3}}$, dynamic viscosity $\eta=10^{-3}\,\mathrm{Pa\,s}$, and shear rate $\dot{\gamma}=4\,\mathrm{s^{-1}}$. The capillary number is defined by [46] $\displaystyle\mathrm{Ca}=\frac{\eta\dot{\gamma}}{\mu}\>,$ (28) and is used to set the shear modulus $\mu$ of our cell relative to the fluid shear stress $\eta\dot{\gamma}$. Simulation snapshots of the steady state deformation of a single cell in shear flow are depicted as a function of the capillary number in figure 8a. We compare the Taylor deformation parameter $D$ to previous approximate analytical calculations of Gao et al. [47] for a three-dimensional elastic solid in infinite shear flow in figure 8b and see reasonable agreement for our standard case of $\lambda=1$. Reducing the inner viscosity by setting $\lambda=0.05$, i.e. close to the limit of a purely elastic solid, the agreement becomes nearly perfect. Finally, we demonstrate that the elastic particle exhibits a tank-treading motion in section S-4.2. A possibly even more intuitive way to measure cell deformation is the net strain of the cell, which we define as $\displaystyle\Delta\epsilon=\frac{d_{\mathrm{max}}-d_{\mathrm{ref}}}{d_{\mathrm{ref}}}\>.$ (29) It describes the relative stretching of the cell using the maximum elongation $d_{\mathrm{max}}$, i.e., the maximum distance of two cell vertices, and its reference diameter $d_{\mathrm{ref}}=2R$. A strain of $\Delta\epsilon=1$ thus corresponds to an elongation of the cell by an additional $100\,\%$ of its original size. In figure 8c, we depict $\Delta\epsilon$ as a function of $\mathrm{Ca}$. For small capillary numbers, i.e., small shear stresses, a linear stress–strain dependency is observed. Above $\mathrm{Ca}\approx 0.3$, the strain-hardening, nonlinear behavior of the neo-Hookean model can be seen. By stretching the cell up to $280\,\%$ of its initial size, this plot demonstrates again the capability of our model to smoothly treat large deformations. Figure 7: Schematic of the single cell in shear flow. The cell sits in the center of the box and shows an approximately elliptic deformation as well as tank-treading, i.e., a rotation of the membrane around the steady shape in the $x$-$y$-plane Figure 8: (a) Converged shapes of a single cell in a $10\times 15\times 5$ ($x\times y\times z$) simulation box (in units of the cell radius) with a shear flow in $x$-direction as a function of the capillary number $\mathrm{Ca}$. (b) Comparison of our model predictions for a single cell in shear flow to the analytical 3D calculations in figure 7 of Gao et al. [47] in the range of $\mathrm{Ca}\in\left[0.01,2.0\right]$. (c) The relative stretch $\Delta\epsilon$ of our cell model as a function of the capillary number $\mathrm{Ca}$. A linear behavior is found for small capillary numbers up to $\mathrm{Ca}=0.3$, while increasing stress is required for larger deformations due to the strain-hardening quality of the neo-Hookean model.
Lines are a guide to the eye ### 6.2 Multiple cell simulations The second simulation setup, implemented to investigate the multiple particle aspect of our model, consists of $4$ ($8$) cells in a $5\times 8\times 4$ simulation box (in units of the cell radius), corresponding to a volume fraction of $\phi=0.11$ ($\phi=0.22$) occupied by cells. The cells are inserted at random initial positions in the box and the flow parameters are the same as in the first setup (cf. section 6.1). Figure 9a shows simulation snapshots of the cells in suspensions with volume fraction $\phi=0.11$ and $\phi=0.22$ for $\mathrm{Ca}=0.2$. The Taylor deformation of the suspensions, depicted in figure 9b, is calculated as an average over all cells and over time after an initial transient timespan. We find good agreement when comparing the averaged cell deformation in suspension with Rosti et al. [52], Saadat et al. [53]. (a) (b) Figure 9: (a) Multiple cells in a $5\times 8\times 4$ ($x\times y\times z$) simulation box (in units of the cell radius) with a confined shear flow in $x$-direction for a capillary number of $\mathrm{Ca}=0.2$ and $4$ cells corresponding to a volume fraction of $\phi=0.11$, and $8$ cells corresponding to $\phi=0.22$. (b) Averaged deformation of multiple cell simulations with $\phi=0.11$ and $\phi=0.22$ in comparison to data from figure 3 of Rosti et al. [52] and figure 13 of Saadat et al. [53] ## 7 Conclusion We presented a simple but accurate numerical model for cells and other microscopic particles for the use in computational fluid-particle dynamics simulations. The elastic behavior of the cells is modeled by applying Mooney–Rivlin strain energy calculations on a uniformly tetrahedralized spherical mesh. We performed a series of FluidFM® compression experiments with REF52 cells as an example for cells used in bioprinting processes and found excellent agreement between our numerical model and the measurements if all three parameters of the Mooney–Rivlin model are used. In addition, we showed that the model compares very favorably to force versus deformation data from previous AFM compression experiments on bovine endothelial cells [43] as well as colloidal probe AFM indentation of artificial hydrogel particles [37]. At large deformations, a clear improvement compared to Hertzian contact theory has been observed. By coupling our model to Lattice Boltzmann fluid calculations via the Immersed-Boundary method, the cell deformation in linear shear flow as function of the capillary number was found in good agreement with analytical calculations by Gao et al. [47] on isolated cells as well as previous simulations of neo-Hookean and viscoelastic solids [52, 53] at various volume fractions. The presented method together with the precise determination of model parameters by FluidFM® /AFM experiments may provide an improved set of tools to predict cell deformation - and ultimately cell viability - in strong hydrodynamic flows as occurring, e.g., in bioprinting applications. ## Acknowledgements Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) — Project number 326998133 — TRR 225 “Biofabrication” (subproject B07). We gratefully acknowledge computing time provided by the SuperMUC system of the Leibniz Rechenzentrum, Garching. We further acknowledge support through the computational resources provided by the Bavarian Polymer Institute. 
Christian Bächer thanks the Studienstiftung des deutschen Volkes for financial support and acknowledges support by the study program “Biological Physics” of the Elite Network of Bavaria. Furthermore, we thank the laboratory of Professor Alexander Bershadsky at the Weizmann Institute of Science in Israel for providing the REF52 cells stably expressing paxillin-YFP. ## References * Shen et al. [2019] Yigang Shen, Yaxiaer Yalikun, and Yo Tanaka. Recent advances in microfluidic cell sorting systems. _Sensors and Actuators B: Chemical_, 282:268–281, March 2019. ISSN 09254005. doi: 10.1016/j.snb.2018.11.025. * Otto et al. [2015] Oliver Otto, Philipp Rosendahl, Alexander Mietke, Stefan Golfier, Christoph Herold, Daniel Klaue, Salvatore Girardo, Stefano Pagliara, Andrew Ekpenyong, Angela Jacobi, Manja Wobus, Nicole Töpfner, Ulrich F Keyser, Jörg Mansfeld, Elisabeth Fischer-Friedrich, and Jochen Guck. Real-time deformability cytometry: On-the-fly cell mechanical phenotyping. _Nature Methods_, 12(3):199–202, March 2015. ISSN 1548-7091, 1548-7105. doi: 10.1038/nmeth.3281. * Fregin et al. [2019] Bob Fregin, Fabian Czerwinski, Doreen Biedenweg, Salvatore Girardo, Stefan Gross, Konstanze Aurich, and Oliver Otto. High-throughput single-cell rheology in complex samples by dynamic real-time deformability cytometry. _Nature Communications_, 10(1):415, December 2019. ISSN 2041-1723. doi: 10.1038/s41467-019-08370-3. * Kollmannsberger and Fabry [2011] Philip Kollmannsberger and Ben Fabry. Linear and Nonlinear Rheology of Living Cells. _Annual Review of Materials Research_, 41(1):75–97, August 2011. ISSN 1531-7331, 1545-4118. doi: 10.1146/annurev-matsci-062910-100351. * Gonzalez-Cruz et al. [2012] Rafael D Gonzalez-Cruz, Vera C Fonseca, and Eric M Darling. Cellular mechanical properties reflect the differentiation potential of adipose-derived mesenchymal stem cells. _Proc. Nat. Acad. Sci. (USA)_, 109(24):E1523–E1529, 2012. * Huber et al. [2013] F Huber, J Schnau, S Rönicke, P Rauch, K Müller, C Fütterer, and J Käs. Emergent complexity of the cytoskeleton: from single filaments to tissue. _Advances in Physics_, 62(1):1–112, February 2013. * Bongiorno et al. [2014] Tom Bongiorno, Jacob Kazlow, Roman Mezencev, Sarah Griffiths, Rene Olivares-Navarrete, John F McDonald, Zvi Schwartz, Barbara D Boyan, Todd C McDevitt, and Todd Sulchek. Mechanical stiffness as an improved single-cell indicator of osteoblastic human mesenchymal stem cell differentiation. _J Biomechanics_, 47(9):2197–2204, June 2014. * Fischer-Friedrich et al. [2014] Elisabeth Fischer-Friedrich, Anthony A. Hyman, Frank Jülicher, Daniel J. Müller, and Jonne Helenius. Quantification of surface tension and internal pressure generated by single mitotic cells. _Scientific Reports_, 4:6213, August 2014. ISSN 2045-2322. doi: 10.1038/srep06213. * Lange et al. [2015] Janina R Lange, Julian Steinwachs, Thorsten Kolb, Lena A Lautscham, Irina Harder, Graeme Whyte, and Ben Fabry. Microconstriction Arrays for High-Throughput Quantitative Measurements of Cell Mechanical Properties. _Biophys. J._, 109(1):26–34, July 2015. * Fischer-Friedrich et al. [2016] Elisabeth Fischer-Friedrich, Yusuke Toyoda, Cedric J. Cattin, Daniel J. Müller, Anthony A. Hyman, and Frank Jülicher. Rheology of the Active Cell Cortex in Mitosis. _Biophysical Journal_, 111(3):589–600, August 2016. ISSN 00063495. doi: 10.1016/j.bpj.2016.06.008. * Nyberg et al. [2017] Kendra D Nyberg, Kenneth H Hu, Sara H Kleinman, Damir B Khismatullin, Manish J Butte, and Amy C Rowat.
Quantitative Deformability Cytometry: Rapid, Calibrated Measurements of Cell Mechanical Properties. _Biophys. J._, 113(7):1574–1584, October 2017.
* Lange et al. [2017] Janina R Lange, Claus Metzner, Sebastian Richter, Werner Schneider, Monika Spermann, Thorsten Kolb, Graeme Whyte, and Ben Fabry. Unbiased High-Precision Cell Mechanical Measurements with Microconstrictions. _Biophys. J._, 112(7):1472–1480, April 2017.
* Kubitschke et al. [2017] H Kubitschke, J Schnauss, K D Nnetu, E Warmt, R Stange, and J Kaes. Actin and microtubule networks contribute differently to cell response for small and large strains. _New J. Phys._, 19(9):093003–13, September 2017.
* Jaiswal et al. [2017] Devina Jaiswal, Norah Cowley, Zichao Bian, Guoan Zheng, Kevin P. Claffey, and Kazunori Hoshino. Stiffness analysis of 3D spheroids using microtweezers. _PLoS One_, 12(11):e0188346, 2017.
* Mulla et al. [2019] Yuval Mulla, F C MacKintosh, and Gijsje H Koenderink. Origin of Slow Stress Relaxation in the Cytoskeleton. _Phys. Rev. Lett._, 122(21):218102, May 2019.
* Snyder et al. [2015] Jessica Snyder, Ae Rin Son, Qudus Hamid, Chengyang Wang, Yigong Lui, and Wei Sun. Mesenchymal stem cell printing and process regulated cell properties. _Biofabrication_, 7(4):044106, December 2015. ISSN 1758-5090. doi: 10.1088/1758-5090/7/4/044106.
* Blaeser et al. [2015] Andreas Blaeser, Daniela Filipa Duarte Campos, Uta Puster, Walter Richtering, Molly M. Stevens, and Horst Fischer. Controlling Shear Stress in 3D Bioprinting is a Key Factor to Balance Printing Resolution and Stem Cell Integrity. _Advanced Healthcare Materials_, 5(3):326–333, December 2015. ISSN 2192-2640. doi: 10.1002/adhm.201500677.
* Zhao et al. [2015] Yu Zhao, Yang Li, Shuangshuang Mao, Wei Sun, and Rui Yao. The influence of printing parameters on cell survival rate and printability in microextrusion-based 3D cell printing technology. _Biofabrication_, 7(4):045002, November 2015. ISSN 1758-5090. doi: 10.1088/1758-5090/7/4/045002.
* Paxton et al. [2017] Naomi Paxton, Willi Smolan, Thomas Böck, Ferry Melchels, Jürgen Groll, and Tomasz Jungst. Proposal to assess printability of bioinks for extrusion-based bioprinting and evaluation of rheological properties governing bioprintability. _Biofabrication_, 9(4):044107, November 2017. ISSN 1758-5090. doi: 10.1088/1758-5090/aa8dd8.
* Müller et al. [2020] Sebastian J. Müller, Elham Mirzahossein, Emil N. Iftekhar, Christian Bächer, Stefan Schrüfer, Dirk W. Schubert, Ben Fabry, and Stephan Gekle. Flow and hydrodynamic shear stress inside a printing needle during biofabrication. _PLOS ONE_, 15(7):e0236371, July 2020. ISSN 1932-6203. doi: 10.1371/journal.pone.0236371.
* Khalil and Sun [2007] Saif Khalil and Wei Sun. Biopolymer deposition for freeform fabrication of hydrogel tissue constructs. _Materials Science and Engineering: C_, 27(3):469–478, April 2007.
* Aguado et al. [2012] Brian A Aguado, Widya Mulyasasmita, James Su, Kyle J Lampe, and Sarah C Heilshorn. Improving Viability of Stem Cells During Syringe Needle Flow Through the Design of Hydrogel Cell Carriers. _Tissue Engineering Part A_, 18(7-8):806–815, April 2012.
* Tirella et al. [2011] Annalisa Tirella, Federico Vozzi, Giovanni Vozzi, and Arti Ahluwalia. PAM2 (Piston Assisted Microsyringe): A New Rapid Prototyping Technique for Biofabrication of Cell Incorporated Scaffolds. _Tissue Engineering Part C: Methods_, 17(2):229–237, February 2011.
* Li et al. [2015] Minggan Li, Xiaoyu Tian, Janusz A Kozinski, Xiongbiao Chen, and Dae Kun Hwang.
Modeling mechanical cell damage in the bioprinting process employing a conical needle. _J. Mech. Med. Biol._, 15(05):1550073–15, October 2015.
* Lulevich et al. [2003] V. V. Lulevich, I. L. Radtchenko, G. B. Sukhorukov, and O. I. Vinogradova. Deformation Properties of Nonadhesive Polyelectrolyte Microcapsules Studied with the Atomic Force Microscope. _The Journal of Physical Chemistry B_, 107(12):2735–2740, March 2003. ISSN 1520-6106, 1520-5207. doi: 10.1021/jp026927y.
* Lulevich et al. [2006] Valentin Lulevich, Tiffany Zink, Huan-Yuan Chen, Fu-Tong Liu, and Gang-yu Liu. Cell Mechanics Using Atomic Force Microscopy-Based Single-Cell Compression. _Langmuir_, 22(19):8151–8155, September 2006. ISSN 0743-7463, 1520-5827. doi: 10.1021/la060561p.
* Ladjal et al. [2009] Hamid Ladjal, Jean-Luc Hanus, Anand Pillarisetti, Carol Keefer, Antoine Ferreira, and Jaydev P. Desai. Atomic force microscopy-based single-cell indentation: Experimentation and finite element simulation. In _2009 IEEE/RSJ International Conference on Intelligent Robots and Systems_, pages 1326–1332, St. Louis, MO, USA, October 2009. IEEE. ISBN 978-1-4244-3803-7. doi: 10.1109/IROS.2009.5354351.
* Kiss [2011] Robert Kiss. Elasticity of Human Embryonic Stem Cells as Determined by Atomic Force Microscopy. _Journal of Biomechanical Engineering_, 133(10):101009, November 2011. ISSN 0148-0731. doi: 10.1115/1.4005286.
* Hecht et al. [2015] Fabian M Hecht, Johannes Rheinlaender, Nicolas Schierbaum, Wolfgang H Goldmann, Ben Fabry, and Tilman E Schäffer. Imaging viscoelastic properties of live cells by AFM: power-law rheology on the nanoscale. _Soft Matter_, 11(23):4584–4591, 2015.
* Ghaemi et al. [2016] Ali Ghaemi, Alexandra Philipp, Andreas Bauer, Klaus Last, Andreas Fery, and Stephan Gekle. Mechanical behaviour of micro-capsules and their rupture under compression. _Chem. Eng. Sci._, 142(C):236–243, March 2016.
* Sancho et al. [2017] Ana Sancho, Ine Vandersmissen, Sander Craps, Aernout Luttun, and Jürgen Groll. A new strategy to measure intercellular adhesion forces in mature cell-cell contacts. _Sci. Rep._, 7(1):46152–14, April 2017.
* Efremov et al. [2017] Yuri M Efremov, Wen-Horng Wang, Shana D Hardy, Robert L Geahlen, and Arvind Raman. Measuring nanoscale viscoelastic parameters of cells directly from AFM force-displacement curves. _Sci. Rep._, 7(1):1541–14, May 2017.
* Ladjal et al. [2018] Hamid Ladjal, Jean-Luc Hanus, Anand Pillarisetti, Carol Keefer, Antoine Ferreira, and Jaydev P Desai. Atomic force microscopy-based single-cell indentation: Experimentation and finite element simulation. In _2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009)_, pages 1326–1332. IEEE, September 2018.
* Chim et al. [2018] Ya Hua Chim, Louise M Mason, Nicola Rath, Michael F Olson, Manlio Tassieri, and Huabing Yin. A one-step procedure to probe the viscoelastic properties of cells by Atomic Force Microscopy. _Sci. Rep._, 8(1):1–12, September 2018.
* Johnson [2003] Kenneth L. Johnson. _Contact Mechanics_. Cambridge Univ. Press, Cambridge, 9. print edition, 2003. ISBN 978-0-521-34796-9.
* Dintwa et al. [2008] Edward Dintwa, Engelbert Tijskens, and Herman Ramon. On the accuracy of the Hertz model to describe the normal contact of soft elastic spheres. _Granular Matter_, 10(3):209–221, March 2008. ISSN 1434-5021, 1434-7636. doi: 10.1007/s10035-007-0078-7.
* Neubauer et al. [2019] Jens W. Neubauer, Nicolas Hauck, Max J. Männel, Maximilian Seuss, Andreas Fery, and Julian Thiele.
Mechanoresponsive Hydrogel Particles as a Platform for Three-Dimensional Force Sensing. _ACS Applied Materials & Interfaces_, 11(29):26307–26313, July 2019. ISSN 1944-8244, 1944-8252. doi: 10.1021/acsami.9b04312.
* Freund [2014] Jonathan B Freund. Numerical Simulation of Flowing Blood Cells. _Annu. Rev. Fluid Mech._, 46(1):67–95, January 2014.
* Závodszky et al. [2017] Gábor Závodszky, Britt van Rooij, Victor Azizi, and Alfons Hoekstra. Cellular Level In-silico Modeling of Blood Rheology with An Improved Material Model for Red Blood Cells. _Front. Physiol._, 8:061006–14, August 2017.
* Mauer et al. [2018] Johannes Mauer, Simon Mendez, Luca Lanotte, Franck Nicoud, Manouk Abkarian, Gerhard Gompper, and Dmitry A Fedosov. Flow-Induced Transitions of Red Blood Cell Shapes under Shear. _Phys. Rev. Lett._, 121(11):118103, September 2018.
* Guckenberger et al. [2018] Achim Guckenberger, Alexander Kihm, Thomas John, Christian Wagner, and Stephan Gekle. Numerical-experimental observation of shape bistability of red blood cells flowing in a microchannel. _Soft Matter_, 14(11):2032–2043, March 2018.
* Kotsalos et al. [2019] Christos Kotsalos, Jonas Latt, and Bastien Chopard. Bridging the computational gap between mesoscopic and continuum modeling of red blood cells for fully resolved blood flow. _J. Comput. Phys._, 398:108905, December 2019.
* Caille et al. [2002] Nathalie Caille, Olivier Thoumine, Yanik Tardy, and Jean-Jacques Meister. Contribution of the nucleus to the mechanical properties of endothelial cells. _Journal of Biomechanics_, 35(2):177–187, February 2002. ISSN 00219290. doi: 10.1016/S0021-9290(01)00201-9.
* Mokbel et al. [2017] M Mokbel, D Mokbel, A Mietke, N Träber, S Girardo, O Otto, J Guck, and S Aland. Numerical Simulation of Real-Time Deformability Cytometry To Extract Cell Mechanical Properties. _ACS Biomater. Sci. Eng._, 3(11):2962–2973, January 2017.
* Roscoe [1967] R Roscoe. On the rheology of a suspension of viscoelastic spheres in a viscous liquid. _J. Fluid Mech._, 28(02):273–21, March 1967.
* Gao and Hu [2009] Tong Gao and Howard H. Hu. Deformation of elastic particles in viscous shear flow. _Journal of Computational Physics_, 228(6):2132–2151, April 2009. ISSN 00219991. doi: 10.1016/j.jcp.2008.11.029.
* Gao et al. [2011] Tong Gao, Howard H Hu, and Pedro Ponte Castañeda. Rheology of a suspension of elastic particles in a viscous shear flow. _J. Fluid Mech._, 687:209–237, October 2011.
* Gao et al. [2012] Tong Gao, Howard H Hu, and Pedro Ponte Castañeda. Shape Dynamics and Rheology of Soft Elastic Particles in a Shear Flow. _Phys. Rev. Lett._, 108(5):058302–4, January 2012.
* Lykov et al. [2017] Kirill Lykov, Yasaman Nematbakhsh, Menglin Shang, Chwee Teck Lim, and Igor V Pivkin. Probing eukaryotic cell mechanics via mesoscopic simulations. _PLoS Comput Biol_, 13(9):e1005726–22, September 2017.
* Villone et al. [2014] M M Villone, M A Hulsen, P D Anderson, and P L Maffettone. Simulations of deformable systems in fluids under shear flow using an arbitrary Lagrangian Eulerian technique. _Computers & Fluids_, 90(C):88–100, February 2014.
* Villone et al. [2015] M M Villone, G D’Avino, M A Hulsen, and P L Maffettone. Dynamics of prolate spheroidal elastic particles in confined shear flow. _Phys. Rev. E_, 92(6):062303–12, December 2015.
* Rosti et al. [2018] Marco E. Rosti, Luca Brandt, and Dhrubaditya Mitra. Rheology of suspensions of viscoelastic spheres: Deformability as an effective volume fraction. _Physical Review Fluids_, 3(1):012301, January 2018.
ISSN 2469-990X. doi: 10.1103/PhysRevFluids.3.012301.
* Saadat et al. [2018] Amir Saadat, Christopher J. Guido, Gianluca Iaccarino, and Eric S. G. Shaqfeh. Immersed-finite-element method for deformable particle suspensions in viscous and viscoelastic media. _Physical Review E_, 98(6):063316, December 2018. ISSN 2470-0045, 2470-0053. doi: 10.1103/PhysRevE.98.063316.
* Alexandrova et al. [2008] Antonina Y. Alexandrova, Katya Arnold, Sébastien Schaub, Jury M. Vasiliev, Jean-Jacques Meister, Alexander D. Bershadsky, and Alexander B. Verkhovsky. Comparative Dynamics of Retrograde Actin Flow and Focal Adhesions: Formation of Nascent Adhesions Triggers Transition from Fast to Slow Flow. _PLoS ONE_, 3(9):e3234, September 2008. ISSN 1932-6203. doi: 10.1371/journal.pone.0003234.
* Bower [2010] Allan F. Bower. _Applied Mechanics of Solids_. CRC Press, Boca Raton, 2010. ISBN 978-1-4398-0247-2.
* Mooney [1940] M. Mooney. A Theory of Large Elastic Deformation. _Journal of Applied Physics_, 11(9):582–592, September 1940. ISSN 0021-8979, 1089-7550. doi: 10.1063/1.1712836.
* Rivlin [1948] R. S. Rivlin. Large Elastic Deformations of Isotropic Materials. I. Fundamental Concepts. _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_, 240(822):459–490, January 1948. ISSN 1364-503X, 1471-2962. doi: 10.1098/rsta.1948.0002.
* Bächer and Gekle [2019] Christian Bächer and Stephan Gekle. Computational modeling of active deformable membranes embedded in three-dimensional flows. _Phys. Rev. E_, 99(6):062418, June 2019.
* Ramanujan and Pozrikidis [1998] S. Ramanujan and C. Pozrikidis. Deformation of liquid capsules enclosed by elastic membranes in simple shear flow: Large deformations and the effect of fluid viscosities. _Journal of Fluid Mechanics_, 361:117–143, April 1998. ISSN 0022-1120, 1469-7645. doi: 10.1017/S0022112098008714.
* Clausen and Aidun [2010] Jonathan R. Clausen and Cyrus K. Aidun. Capsule dynamics and rheology in shear flow: Particle pressure and normal stress. _Physics of Fluids_, 22(12):123302, December 2010. ISSN 1070-6631, 1089-7666. doi: 10.1063/1.3483207.
* Guckenberger et al. [2016] Achim Guckenberger, Marcel P Schraml, Paul G Chen, Marc Leonetti, and Stephan Gekle. On the bending algorithms for soft objects in flows. _Comput. Phys. Commun._, 207:1–23, October 2016.
* Smith [2009] Michael Smith. _ABAQUS/Standard User’s Manual, Version 6.9_. Dassault Systèmes Simulia Corp, United States, 2009.
* Limbach et al. [2006] H.J. Limbach, A. Arnold, B.A. Mann, and C. Holm. ESPResSo—an extensible simulation package for research on soft matter systems. _Computer Physics Communications_, 174(9):704–727, May 2006. ISSN 00104655. doi: 10.1016/j.cpc.2005.10.005.
* Roehm and Arnold [2012] D. Roehm and A. Arnold. Lattice Boltzmann simulations on GPUs with ESPResSo. _The European Physical Journal Special Topics_, 210(1):89–100, August 2012. ISSN 1951-6355, 1951-6401. doi: 10.1140/epjst/e2012-01639-6.
* Devendran and Peskin [2012] Dharshi Devendran and Charles S Peskin. An immersed boundary energy-based method for incompressible viscoelasticity. _J. Comput. Phys._, 231(14):4613–4642, May 2012.
* Bächer et al. [2017] Christian Bächer, Lukas Schrack, and Stephan Gekle. Clustering of microscopic particles in constricted blood flow. _Phys. Rev. Fluids_, 2(1):013102, January 2017.
* Lehmann et al. [2020] Moritz Lehmann, Sebastian Johannes Müller, and Stephan Gekle.
Efficient viscosity contrast calculation for blood flow simulations using the lattice Boltzmann method. _Int. J. Numer. Meth. Fluids_, 103(18):1–15, April 2020.

Supplementary information is provided in Supplementary_Information.pdf.
2024-09-04T02:54:57.813735
2020-03-06T11:00:12
2003.03133
{ "authors": "Kshitij Tiwari, Ville Kyrki, Allen Cheung, Naohide Yamamoto", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26079", "submitter": "Kshitij Tiwari", "url": "https://arxiv.org/abs/2003.03133" }
arxiv-papers
# DeFINE: Delayed Feedback based Immersive Navigation Environment for Studying Goal-Directed Human Navigation

###### Abstract

With the advent of consumer-grade products for presenting an immersive virtual environment (VE), there is a growing interest in utilizing VEs for testing human navigation behavior. However, preparing a VE still requires a high level of technical expertise in computer graphics and virtual reality, posing a significant hurdle to embracing the emerging technology. To address this issue, this paper presents Delayed Feedback based Immersive Navigation Environment (DeFINE), a framework that allows for easy creation and administration of navigation tasks within customizable VEs via intuitive graphical user interfaces and simple settings files. Importantly, DeFINE has a built-in capability to provide performance feedback to participants during an experiment, a feature that is critically missing in other similar frameworks. To show the usability of DeFINE from both experimentalists’ and participants’ perspectives, a demonstration was made in which participants navigated to a hidden goal location with feedback that differentially weighted speed and accuracy of their responses. In addition, the participants evaluated DeFINE in terms of its ease of use, required workload, and proneness to induce cybersickness. The demonstration exemplified typical experimental manipulations DeFINE accommodates and what types of data it can collect for characterizing participants’ task performance. With its out-of-the-box functionality and potential customizability due to open-source licensing, DeFINE makes VEs more accessible to many researchers.

###### keywords: Virtual reality, Software, Closed-loop, Locomotion, Gamification, Unity

Kshitij Tiwari* and Ville Kyrki (Department of Electrical Engineering and Automation, Aalto University); Allen Cheung (Queensland Brain Institute, The University of Queensland); Naohide Yamamoto* (School of Psychology and Counselling, Queensland University of Technology (QUT))

Orcid IDs: Kshitij Tiwari https://orcid.org/0000-0003-1789-7961; Ville Kyrki https://orcid.org/0000-0002-5230-5549; Allen Cheung https://orcid.org/0000-0001-9770-217X; Naohide Yamamoto https://orcid.org/0000-0001-9734-7470

The authors would like to thank Onur Sari and Ville Sinkkonen for their contributions to the development of the framework presented in this article.

*Corresponding authors—KT: Center of Ubiquitous Computing, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland (email: [email protected]); NY: School of Psychology and Counselling, Queensland University of Technology, Brisbane, Australia (email: [email protected]).

Behavioral researchers are increasingly becoming interested in understanding the underlying mechanisms for goal-directed navigation in humans (Spiers & Maguire, 2006; Cornwell et al., 2008; Pezzulo et al., 2014). Whilst it would be ideal to gather data on navigation in a well-controlled real-world setting, not every researcher has access to such a setup. However, with the advances in virtual reality (VR), it is becoming easier to set up economical game-like environments to test hypotheses about human navigational behaviors. Although VR technologies are becoming more accessible, they are primarily for playing computer games and not for carrying out behavioral experiments.
Thus, to use them for investigating human navigation, researchers need to have working knowledge of graphics design and game engines like Unity (https://unity.com/) and Panda3D (https://www.panda3d.org/), which may not be the case even when they are proficient in general scientific programming. To address this issue, attempts have been made at developing easy-to-use VR frameworks for researchers who are novices at computer graphics and game creation. Notable examples include Python-based Experiment Programming Library (PyEPL; Geller et al., 2007), Maze Suite (Ayaz et al., 2008, 2011), PandaEPL (Solway et al., 2013), Experiments in Virtual Environments (EVE; Grübel et al., 2017), Virtual Reality Experiments (VREX; Vasser et al., 2017), VRmaze (Machado et al., 2019), Unity Experiment Framework (UXF; Brookes et al., 2020), Route Learning and Navigation Test Battery (Wiener et al., 2020), NavWell (Commins et al., 2020), BiomotionLab Toolkit for Unity Experiments (bmlTUX; Bebko & Troje, 2020), and Landmarks (Starrett et al., 2020). These existing frameworks are similar in that they offer some or all of the following functions for conducting behavioral experiments in VR: modeling and rendering virtual environments (VEs), designing the structure of an experiment (e.g., setting the number and order of trials), executing the trials, recording data (e.g., participants’ navigation trajectories), and performing preset analyses of the recorded data. A major difference between the frameworks is in the extent to which they are designed for specific purposes. For example, Maze Suite is specialized for creating and running experiments in standard mazes (i.e., mazes created by dividing an enclosed space into connected paths by walls). By limiting its scope, Maze Suite achieves a high level of ease of use—that is, everything can be done in graphical user interfaces (GUIs) and no coding is necessary on the part of end-users. At the other end of the spectrum, PyEPL and PandaEPL are Python libraries for programming behavioral experiments in general (PyEPL) and spatial navigation experiments in particular (PandaEPL). Because they are not compiled software packages but programming libraries, researchers can create any experiments using PyEPL and PandaEPL, as far as their programming proficiency goes. The other frameworks lie in between, balancing ease-of-use and flexibility to varying degrees by preparing some ready-made functionality while providing users with ways of doing their own coding for fine customization. Some of them are still relatively specific to certain types of experiments (e.g., VREX focuses on indoor environments that consist of connected rooms). The others offer generic modules for experimental design and data recording, leaving high-level features of experiments including stimulus presentation up to users’ programming (e.g., UXF). Despite the different purposes these frameworks are designed to achieve, one aspect shared by them is that they focus on the stimulus–response relationship in examining navigational behavior. That is, participants are presented with a stimulus with which they carry out a navigation response (e.g., walking to a goal location indicated by visual cues), and during and at the completion of the response, they do not receive any feedback on their performance.
Although such a research design is appropriate and even required for investigating certain aspects of navigational behavior (Philbeck & Loomis, 1997; Loomis & Knapp, 2003), it makes it impossible to examine how participants modulate their subsequent response to the stimulus using feedback (Brand, 2008). Importantly, when goal-directed navigation is performed in real-world settings, navigators often receive feedback with which they can adjust their behavior. For example, when they walk in the dark to reach a door and fail to touch its knob, the lack of tactile sensation serves as feedback and informs them that they still have a few more steps to go. In this instance, the navigational behavior should be characterized as consisting of a closed stimulus–response–feedback loop, instead of an open stimulus–response loop as in the foregoing frameworks. Indeed, studies have been conducted to capture human navigation by the stimulus–response–feedback loop. For instance, Carton et al. (2016) have found evidence that humans tend to adapt their trajectory planning horizon (i.e., how far in the future they plan their locomotion trajectory) when they detect potential collision situations. In this case, impending collisions constitute a stimulus that triggers a response, which is witnessed in the form of a shortened planning horizon. The successful avoidance of collision or a failure thereof functions as feedback that can subsequently be used to tune the length of planning horizons. In this manner, experiments on human navigation come in a variety of designs, utilizing both stimulus–response and stimulus–response–feedback loops. To fully accommodate this diversity of the experiments, there is a need for a new VR framework that allows for incorporation of feedback into the experimental design. To this end, we developed the Delayed Feedback based Immersive Navigation Environment (DeFINE).

## 1 DeFINE: Delayed Feedback based Immersive Navigation Environment

DeFINE is freely available to download via GitHub at https://github.com/ktiwari9/define-VR. DeFINE is based on the Unity game engine, and hence, relies heavily on C# as a programming language. All the low-level implementation is already taken care of to minimize the workload of end-users who will use DeFINE in its default settings or with minimal customization as required by their experimental design. DeFINE aims specifically to provide an easy-to-use experimental framework that is based on the stimulus–response–feedback architecture, which can be used to study goal-directed spatial navigation in an immersive three-dimensional space. In order to reduce the burden of researchers when setting up an experiment, DeFINE allows them to make simple alterations of experimental parameters through GUIs and by changing settings files that are in a straightforward JSON format. We provide a short video that succinctly explains how to use the basic functionality of DeFINE “out-of-the-box” (https://youtu.be/OVYiSHygye0). To allow for further customization of the experiment and the framework itself, DeFINE is being released open-source under the MIT license. Thus, interested researchers can modify its code directly. To facilitate this, DeFINE’s codebase is made modular, making it possible to alter a particular pre-programmed functionality by adjusting the code in a specific module only, and also to incorporate a new functionality into the framework by creating a new module and placing it in an appropriate folder. The file hierarchy is kept intuitive for this purpose.
Another video we provide offers a quick tutorial of how to change various elements of DeFINE using the Unity software (https://youtu.be/smIp5n9kyAM). A detailed user manual is also available (https://github.com/ktiwari9/define-VR/blob/master/user_manual.pdf). Currently, DeFINE can be integrated into Unity tasks built for Windows personal computers. It is assumed that DeFINE will be used with a head-mounted display (HMD) such as HTC Vive and Oculus Rift. For example, DeFINE is designed to utilize the HMD’s motion tracking sensors for implementing various methods of participants’ locomotion within VEs (see the locomotion methods section below for details). In addition to the HMD worn by the participants, DeFINE simultaneously presents a VE on a desktop display so that experimentalists can monitor the progress of an experiment. Further details about hardware and software requirements for DeFINE are available in the user manual. The main capabilities and options of DeFINE are detailed below in the following order: (1) the generic experimental structure, (2) time- and accuracy-based feedback, (3) the GUI, (4) a diverse suite of locomotion methods, (5) static and dynamic goals, (6) performance leader-board, and (7) intra-VR surveys.

### 1.1 Experiment Structure

Human behavioral experiments are often defined by a trial–block–session architecture which allows experimentalists to repeat a task multiple times to acquire requisite data (Figure 1). Just like UXF (Brookes et al., 2020), DeFINE adopts this architecture. A trial is an instance of the experiment where participants are presented with a stimulus and their response is recorded. At the end of the trial, the participants receive feedback. To clarify that the feedback is given after the response is made, as opposed to during the trial as the response unfolds, this feedback is referred to as “delayed” in DeFINE. Trials are often repeated multiple times for various purposes (e.g., to measure variability of responses to the same stimulus, to decipher the learning effects over trials, or to train the participants on the task), constituting a block. At the end of the block, the participants are assumed to be familiar with the environment and to have formulated a behavior of choice for the task at hand. In order to evaluate the quality of this behavior, the experimentalists may choose to make some modifications to the environment before proceeding with the next block. The experiment can consist of a single or multiple blocks, and when there are multiple blocks, a single iteration of the task over the blocks is called a session.

Figure 1: Structure of typical human spatial navigation experiments. Participants are presented with a stimulus, to which they provide a response and are given feedback after the trial has been fully executed. This feedback is then used to modify performance in the next trial. Usually, multiple trials are carried out under the same condition, which constitute a block. Once the participants are finished with the block, they progress to the next block which may involve modifications of the condition, and their navigation performance is evaluated again over multiple trials. A single iteration of the task is called a session. This trial–block–session architecture as well as the diagram itself are modeled after those presented by Brookes et al. (2020).

Figure 2: The architecture of the Delayed Feedback based Immersive Navigation Environment (DeFINE).
The left panel shows the low-level functionality provided by DeFINE while the high-level experimentalist-defined functionalities are shown on the right. The surveys/questionnaires are optional, and hence, shown using dashed lines. Modules shown in purple are parsed during a trial, those in red during a block, and those in green during a session. The logo of DeFINE shown at the top was inspired by that of UXF (Brookes et al., 2020), symbolizing that DeFINE is built as a navigation-focused extension of UXF.

DeFINE has primarily two levels of abstraction, as shown by the two columns (demarcated by orange dotted lines) in Figure 2. All modules mentioned on the left are those implemented at the low-level abstraction that come pre-programmed with DeFINE, while those on the right are implemented at a higher level for easy and quick modifications. Such an arrangement streamlines customization of an experiment to suit different research studies because the modules at the low level of abstraction are common across many experiments and thus, the experimentalists can rely on DeFINE’s built-in functionality, which in turn allows them to focus their effort on customizing high-level modules that tend to be unique to each experiment. All modules shown by dotted black lines represent optional modules that can be utilized as seen fit by the experimentalists. In keeping with the trial–block–session architecture as described in Figure 1, all modules clustered under the green polygon (“Begin Session”) represent a session, those under the red polygon (“Begin Block”) represent blocks, and those under the purple polygon (“Begin Trial”) represent trials. All other modules (under either column of abstraction) are preset before the session starts.

The flow of an experiment starts from the top of Figure 2. DeFINE takes care of initializing and rendering an immersive environment and setting up a session, blocks, and trials, but the experimentalists are required to configure the VE (e.g., setting its size) and specify the experimental design (e.g., the number of trials per block) by modifying the settings files in order to make DeFINE behave in the way they need for their experiment. Details of the settings files are described later. If the experimentalists want questionnaires to be filled within the VE, these questionnaires need to be created online, and the respective links to the questionnaires need to be added to the environment settings. Setting up the questionnaires is followed by inputting a participant’s information and selecting preset settings that are appropriate for the experiment. Once the participant information has been entered, the experimentalists can start the experimental session that DeFINE will generate. If there are block-specific settings, those are applied by DeFINE before starting the first trial. The trials follow the stimulus–response–feedback structure, by showing the participant a stimulus in the environment, to which the participant responds, and after the experimentalist-defined end condition has been met, the trial ends and feedback is shown to the participant. A new trial is started automatically after the feedback has been given, until the program reaches the end of the block. At the end of the block, the participant will be shown the questionnaires, if any were specified during the initial setup. During the trials the experimentalists can take down notes, or manually mark the session as bad to indicate that it should not be taken into account for data analysis.
At any point of the session the experimentalists can abort the session, which will also mark the session as bad in the stored session notes. If there are more blocks remaining in the session, the next block is started after DeFINE has applied its specific settings to the environment. After the final block of trials has been completed, and the questionnaires following it have been answered, the session ends and the participant and experimentalists are returned to the startup screen. For each of the trials executed with DeFINE, the following are logged: the participant’s trajectory during the trial, the status and changes of environment variables, the feedback score obtained at the end of the trial (see the next section for details), trial start time, trial end time, total time taken for the trial, and straight-line distance to the goal when the participant ends the response.
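For readers who plan to post-process these logs outside DeFINE, it can help to think of each trial as one record. The following C# sketch is purely illustrative, with field names of our own invention rather than DeFINE’s actual log schema, but it mirrors the quantities listed above.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical per-trial record mirroring what DeFINE logs for each trial.
// Field names are illustrative; DeFINE's actual log files may differ.
public class TrialRecord
{
    public int TrialNumber;
    public DateTime StartTime;              // trial start time
    public DateTime EndTime;                // trial end time
    public float TotalTimeSeconds;          // total time taken for the trial
    public float ResidualDistanceMeters;    // straight-line distance to the goal at response end
    public float FeedbackScore;             // score obtained at the end of the trial
    public List<Vector3> Trajectory = new List<Vector3>();        // positions over the trial
    public List<(float time, bool lightsOn, bool soundOn)> EnvironmentChanges =
        new List<(float, bool, bool)>();    // status and changes of environment variables
}
```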
### 1.2 Feedback

As opposed to other closed-loop systems where real-time feedback may be made available to participants as they carry out a response, DeFINE provides delayed feedback at the end of each trial. This is because continuous feedback would most likely go against the purpose of typical goal-directed navigation experiments. Usually, these experiments are to test participants’ ability to estimate their location relative to a goal by using sensory and other spatial cues in an environment (Loomis & Philbeck, 2008). If the participants were given external feedback about whether they were moving in the right direction for every step they took, this feedback would essentially be a non-spatial cue that would directly aid them in their location estimation. In extreme cases, with continuous feedback, the participants could perform the task by moving in an arbitrary direction without processing the sensory and spatial cues and seeing if that would result in positive feedback. Such a strategy would lead them to take myopic unstructured paths to the goal, causing non-optimal navigation performance. Thus, by default, DeFINE is designed to give performance feedback only after the trial is completed. However, it is possible for experimentalists to modify DeFINE’s source code in Unity and have it provide real-time feedback, if they so choose.

Feedback on a goal-directed navigation behavior can be given in a number of different forms, but DeFINE adopts a reward/cost function that evaluates participants’ performance and provides feedback as gains and losses of scores. It has been shown that feedback of this type is very effective in affecting participants’ behavior and decision making under a variety of conditions (Brand, 2008; Hossain & List, 2012; Yechiam & Hochman, 2014). The reward and cost in the context of navigation can also be defined in various ways, and it is up to experimentalists’ discretion how they formulate the reward/cost function in DeFINE, but one straightforward method would be to define them by using speed and accuracy of navigation. That is, the quicker the participants are in performing a trial, the greater the reward (or the smaller the cost); and the more accurate they are in reaching a goal, the greater the reward (or the smaller the cost). By default, DeFINE implements a reward/cost function of this form. Specifically:

$R=\beta_{1}\exp(-\alpha_{1}t)+\beta_{2}\exp(-\alpha_{2}d)$ (1)

In Equation (1), $t$ refers to the time taken for navigating towards a goal in a trial, and $d$ refers to the residual straight-line distance to the goal from the location at which participants end the trial. $\beta_{1}$ and $\beta_{2}$ are weights used for combining the time and distance into the decaying reward function, which penalizes both the time taken and the residual distance to the goal (i.e., rewarding shorter times and smaller distances with higher scores). $\alpha_{1}$ and $\alpha_{2}$ are factors for scaling the effects of time and distance. If experimentalists choose to use this function in their experiments, they can assign values of their choice to these parameters simply by specifying them in a settings file (details shown later). If they are to calculate the reward/cost scores using their own equation, they can do so by modifying a relevant section of DeFINE’s codes in Unity. It should further be noted that, by changing the relevant codes and implementing their own equation, the experimentalists can use any kind of feedback that does not take the form of a cost/reward function. For example, it is possible to simply present how far away participants are from the goal at the end of each trial.
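To make the default scoring concrete, Equation (1) can be written in a few lines of C#. This is a minimal sketch rather than a verbatim excerpt from DeFINE’s source; in practice the four parameters would be read from the settings file.

```csharp
using System;

public static class Feedback
{
    // Default reward/cost function of Equation (1):
    // R = beta1 * exp(-alpha1 * t) + beta2 * exp(-alpha2 * d),
    // where t is the trial duration (s) and d the residual distance to the goal (m).
    public static double Score(double t, double d,
                               double alpha1, double alpha2,
                               double beta1, double beta2)
    {
        return beta1 * Math.Exp(-alpha1 * t) + beta2 * Math.Exp(-alpha2 * d);
    }
}
```

For example, with the constants used for the accuracy group in the demonstration below, `Feedback.Score(10.0, 0.5, 0.2, 1.0, 0.5, 3.4)` evaluates to about $2.13$.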
### 1.3 Graphical User Interface (GUI)

Utilizing the GUI of UXF (Brookes et al., 2020), DeFINE allows experimentalists to log participant information including, but not limited to, name (or participant identification), age, gender, and educational qualification (Figure 3(a)). Should other personal particulars be required, they can be easily added to the framework by modifying relevant settings files. As an extension to UXF’s original GUI, DeFINE allows the experimentalists to quickly set up the environment of choice with a desired locomotion method (see the next section for details about the locomotion methods). This unique feature can also be scaled and automated to handle multiple combinations of environments and locomotion methods via DeFINE’s auto-pilot mode. In this mode, the experimentalists can provide DeFINE with preset instructions so that it loads specific combinations of the environment and the locomotion method in a specified order. This way, a sequence of participants can be tested automatically, doing away with the need to individually set up an appropriate combination of the environment and the locomotion method for every participant. For example, if an experimental design requires that each participant is shown a different environment, a sequence of environments can be explicitly listed in settings files which will then be autonomously parsed when executing the auto-pilot mode. If each participant is to do trials with a different locomotion method, explicit participant–locomotion method combinations can be listed in the settings files in a similar fashion.

As a significant extension to its predecessor, UXF, DeFINE also provides functionalities to study the role of lighting conditions and auditory cues in spatial navigation. At any point during a trial, experimentalists can toggle the lights and sound of a VE on or off by clicking on the dedicated buttons of the user interface, shown in Figure 3(b). The change of the status of these environmental variables is logged along with the information about participants’ performance in a navigation task (e.g., their position within the environment at a given time point).

(a) Drop-down menus to specify experimental parameters (adapted from UXF; Brookes et al., 2020). (b) The buttons for toggling sound and lights.

Figure 3: DeFINE’s intuitive GUI for experimentalists to set up an experiment.

### 1.4 Locomotion Methods

In order to provide a locomotion suite for participants to perform goal-directed navigation in VR, DeFINE comes equipped with a variety of locomotion methods.

#### 1.4.1 Teleoperated locomotion

To allow teleoperated locomotion, DeFINE is compatible with both keyboard-based and VR-controller-based teleoperation methods. A typical use case may involve the head direction sensors in an HMD being used to update participants’ headings in a VE while the keyboard or the VR controller is used to linearly traverse at a preset velocity (which is to be specified in a settings file by experimentalists). The necessary key-bindings and further details are available in the user manual (https://github.com/ktiwari9/define-VR/blob/master/user_manual.pdf).

#### 1.4.2 Arm-swing locomotion

Arm-swing locomotion is an implementation of walking-in-place locomotion. In this method, the participants walk in place, including swinging their arms in a manner consistent with their pace of walking. It has been shown that such arm swings are effective in having the participants experience a naturalistic sense of locomotion without actually moving in real space (Kunz et al., 2009; Yamamoto et al., 2018). This locomotion method uses the physical movement of the VR controller(s), held by the participants, to determine forward speed in the VE. The movement speed is calculated from the positional difference of the tracked controller(s) between two consecutive frames. This calculation can be done either by requiring movement of two controllers (typically, one in each hand) or by using one controller (or either of the two controllers) that moves more than a given threshold amount between the frames. When the use of the two controllers is required, the forward speed in the environment is set to zero unless both of the controllers exceed the threshold value.

#### 1.4.3 Head-bob locomotion

Head-bob locomotion is another implementation of walking-in-place locomotion. In order to move forward in the VE, the participants need to walk in place, and as they do, their head, and in particular, the HMD, bobs slightly vertically. This locomotion method uses this vertical bobbing to determine the forward velocity. DeFINE tracks the vertical direction of the bobbing and its starting position. Once the direction changes, the participants are considered to have stepped, if the vertical height difference between two successive flexion points exceeds a threshold value specified in the settings of the locomotion method. The detected physical step is then translated into a step in the VE so that the participants walk forward in the VE at a preset velocity. Due to the fact that the HMD is in front of the participants’ face, turning their head up or down causes the HMD to move vertically. In order to avoid reading these vertical movements as steps of the participants, DeFINE also tracks the participants’ rotational head movements about the pitch axis and ignores any “bobs” that are accompanied by rotational head movements that exceed a specified threshold value.
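The step-detection logic just described can be summarized in a short Unity C# sketch. This is our simplified reading of the algorithm, not DeFINE’s actual implementation, and the threshold values are placeholders for what DeFINE reads from its settings files.

```csharp
using UnityEngine;

// Simplified head-bob step detector following the description above.
// Threshold values are illustrative; DeFINE takes them from its settings.
public class HeadBobDetector : MonoBehaviour
{
    public float bobThreshold = 0.02f;   // minimum height difference between flexion points (m)
    public float pitchThreshold = 5f;    // per-frame pitch change above which a "bob" is ignored (deg)
    public float stepLength = 0.5f;      // virtual forward displacement per detected step (m)

    private float lastY, flexionY, lastPitch;
    private int direction = 1;           // +1 while the HMD is rising, -1 while falling

    void Update()
    {
        float y = Camera.main.transform.position.y;
        float pitch = Camera.main.transform.eulerAngles.x;
        float pitchChange = Mathf.Abs(Mathf.DeltaAngle(lastPitch, pitch));
        int newDirection = y >= lastY ? 1 : -1;

        // A change of vertical direction marks a flexion point. Count a step
        // only if the excursion is large enough and the head is not pitching.
        if (newDirection != direction)
        {
            if (Mathf.Abs(y - flexionY) > bobThreshold && pitchChange < pitchThreshold)
                transform.Translate(Vector3.forward * stepLength);  // one step in the VE
            flexionY = y;
            direction = newDirection;
        }
        lastY = y;
        lastPitch = pitch;
    }
}
```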
#### 1.4.4 Physical walking

Physical walking is the only locomotion method in which the participants physically move around in the real world. The movement of the participants is tracked by using the HMD’s motion tracking sensors and the participants’ position in the VE is updated accordingly. Owing to the limited size of a physical area in which the participants’ movement can be tracked (which is typically around $10\times 10$ m), the size of the VE is going to be limited. To alleviate this limitation, modified physical walking methods such as redirected walking are sometimes adopted (Paludan et al., 2016; Nilsson et al., 2018). In these methods, the rotations and translations of the participants are slightly altered between physical and virtual worlds in order to steer the participants away from the edges of the available physical area. However, DeFINE does not utilize these methods because they can induce disruption to mental and neural spatial representations as well as to navigational behavior by causing a mismatch between intended (and physically carried out) movements and consequent virtual movements (Du et al., 2020; Tcheang et al., 2011). In DeFINE, a visible grid barrier, shown in Figure 4, is displayed in the HMD when the participants approach the limits of a configured area in which they can safely move around.

The grid barrier serves two purposes. First and foremost, it prevents the participants from going out of the physical safe area, ensuring their safety. It is advisable that a navigation task in DeFINE be well confined to an area smaller than the safe area so that the participants will never encounter the barriers in the first place. If they do view the barrier, it essentially functions as an extra landmark that informs about the boundary of an environment, which can induce significant bias in their navigational behavior (Cheung, 2014; Mou & Zhou, 2013; Bird et al., 2010). Second, the barrier makes it possible to extend a navigable virtual space beyond the physical safe area, in case it is necessary for an experiment. To do this, the participants hold a trigger button on the VR controller, which locks the VE in place. While the VE is locked, the participants’ physical rotation in real space is not reflected in their virtual heading. Thus, the participants appear to keep facing the same direction in the VE, despite physically turning to face away from the edge of the safe area. The grid barrier remains visible and in correct orientation with respect to the physical safe area, allowing the participants to reorient themselves before continuing. In order to minimize the motion sickness caused by the VE remaining still during the participants’ physical rotation, DeFINE blurs the VE during the rotation. Unlike similar approaches used in the literature (Williams et al., 2007), DeFINE does not require the participants to rotate a fixed amount as long as they steer clear of the physical boundary. Although this method of extending the virtual space can be practical, it must be used with caution because by physically rotating in the locked VE, the participants will most likely be forced to go through a mental process of dissociating real and virtual spaces once and realigning them after the physical rotation. It is very probable that this process will have significant impact on the participants’ mental and neural spatial representations, and in turn, on their subsequent navigational behavior.

Figure 4: The visible blue grid marking the edge of a configured safe area.
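A sketch of the barrier logic for a rectangular safe area centered on the tracking origin might look as follows; the component and the margin value are our own illustration, not DeFINE’s actual code.

```csharp
using UnityEngine;

// Illustrative safe-area barrier: show the grid whenever the tracked HMD
// comes within a warning margin of the configured rectangular safe area.
public class SafeAreaBarrier : MonoBehaviour
{
    public Vector2 halfExtents = new Vector2(2f, 2f);  // half width/depth of the safe area (m)
    public float warningMargin = 0.5f;                 // distance from an edge at which the grid appears
    public GameObject gridBarrier;                     // the blue grid object shown in Figure 4

    void Update()
    {
        Vector3 p = Camera.main.transform.position;    // tracked HMD position
        bool nearEdge = Mathf.Abs(p.x) > halfExtents.x - warningMargin
                     || Mathf.Abs(p.z) > halfExtents.y - warningMargin;
        gridBarrier.SetActive(nearEdge);
    }
}
```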
#### 1.4.5 Teleportation

Teleportation locomotion differs from all of the other locomotion methods in that the participants never move through space, but instead teleport directly to their desired location some distance away. Before and after teleporting, the direction of the participants’ body and head remains unchanged relative to the environment. In DeFINE, the participants teleport by holding the trigger of a VR controller, which brings up the teleportation marker, as seen in Figure 5. Then the participants place the marker on the desired teleportation target location by aiming the controller at the location. Once the participants release the trigger, they are teleported to the marked location, given that the location is on the horizontal X-Z plane and clear of all collision regions around the objects in the environment. A valid target location is indicated by the blue color of the marker, whereas invalid locations turn the marker red. Although teleportation is not a naturalistic method of locomotion, its use is increasingly common in VEs, including those for spatial navigation research (Cherep et al., 2020; Kelly et al., 2020).

Figure 5: Teleportation locomotion with the teleportation marker visible.
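The marker-validity rule (the target must lie on the floor plane and be clear of objects’ collision regions) can be sketched as below. The raycast-plus-overlap test is our guess at one reasonable implementation, not DeFINE’s exact code.

```csharp
using UnityEngine;

// Illustrative teleport-target validation: the target must lie on the
// horizontal X-Z plane (the floor) and be clear of collision regions.
public static class TeleportValidator
{
    public static bool IsValidTarget(Ray controllerRay, float clearanceRadius, out Vector3 target)
    {
        target = Vector3.zero;
        if (!Physics.Raycast(controllerRay, out RaycastHit hit))
            return false;
        if (Vector3.Dot(hit.normal, Vector3.up) < 0.99f)   // require a horizontal floor hit
            return false;
        target = hit.point;
        // Probe slightly above the floor so the floor collider itself does not count.
        Vector3 probe = target + Vector3.up * (clearanceRadius + 0.01f);
        return !Physics.CheckSphere(probe, clearanceRadius);
    }
}
```

The marker color would then simply follow the returned flag: blue for a valid target, red for an invalid one.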
### 1.5 Goal Demarcation

To accommodate a variety of experiments, DeFINE offers two possible ways of demarcating the goal location for a navigation task: firstly, presenting static objects at goal locations (e.g., arrows, exclamation marks, or other similar objects), and secondly, showing dynamic objects like a buzzing fly that can give an imprecise indication of the goal location. An example of the dynamic goal markers is available in the demonstration section below.

### 1.6 Leader-board

Gamification of learning has been shown to increase participant engagement, be it in online programs (Looyestyn et al., 2017) or education (Barata et al., 2013). Thus, as an optional feature, DeFINE is equipped with a leader-board which provides a ranking based on scores obtained using Equation (1) or other equivalent equations implemented by experimentalists (Figure 6). DeFINE keeps track of the scores and displays the ten best scores in the leader-board. A new high-score is indicated with red font in the leader-board, while a score that was not high enough to get to the leader-board is shown at the bottom to illustrate the difference between the latest score and pre-existing scores. If participants are to carry out some practice or training trials first, it may not be appropriate to compare their scores against the pre-existing scores before they become fully familiarized with an experimental task. In that case, it is possible to show a provisional ranking which is not integrated with the leader-board. For clarity, this is labeled with a red Practice tag in the board. Once the practice phase is finished, the actual scores of the participants are integrated into the leader-board that includes their own previous high scores.

(a) A new high-score is shown in red in the leader-board. (b) The latest score, which does not make it to the top ten, is shown at the bottom.

Figure 6: Leader-boards inform participants about their performance in a given trial relative to their own and other participants’ previous trials.

While having a leader-board can motivate participants, it can also cause the conditions of an experiment to differ between the participants. As earlier participants obtain their place in the leader-board, they keep replacing lower scores on it. As such, it gets systematically more difficult for later participants to score high enough to make it to the top-ten scores of all time. Having a leader-board that is seemingly unreachable might provide a different motivation to the later participants than having an easily reachable one would. In order to ensure that each participant can have an equal experimental condition, DeFINE offers two options. First, because the leader-board is an optional feature, experimentalists can choose to remove it entirely. Second, they can use a fake leader-board that behaves like a normal leader-board during the session of one participant, except that the changes to the board are not in fact stored in a log file. Once the next participant begins a session, the board reverts to its original condition, giving subsequent participants the same competitive challenge.
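The top-ten bookkeeping, including the fake variant, reduces to a small amount of logic. The sketch below is ours, not DeFINE’s code; with `persist` set to false it mimics the fake leader-board, whose changes are never written back.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative top-ten leader-board. With persist = false it behaves like
// the "fake" leader-board: scores appear during a session but are not saved,
// so every participant starts from the same board.
public class LeaderBoard
{
    private readonly List<float> scores;
    private readonly bool persist;

    public LeaderBoard(IEnumerable<float> initialScores, bool persist)
    {
        scores = initialScores.OrderByDescending(s => s).Take(10).ToList();
        this.persist = persist;
    }

    // Returns the rank (1-10) of the new score, or -1 if it missed the board.
    public int Submit(float score)
    {
        int rank = scores.Count(s => s > score) + 1;
        if (rank > 10) return -1;
        scores.Insert(rank - 1, score);
        if (scores.Count > 10) scores.RemoveAt(10);
        if (persist) Save();
        return rank;
    }

    private void Save() { /* write the board back to the score log */ }
}
```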
### 1.7 Surveys

Often in behavioral studies, experimentalists would like participants to fill in surveys for quality assurance or other purposes. Some of the most commonly used surveys for such studies using VR include the simulator sickness questionnaire (SSQ; Kennedy et al., 1993) and the NASA task load index (NASA TLX; Hart, 2006). The SSQ studies the onset of simulator sickness symptoms like nausea or headache owing to being immersed in VR. It contains 27 questions and the participants answer each of them using a scale ranging from none (0) to severe (3). The NASA TLX is a survey for evaluating the workload of a given task utilizing six questions. Administering these and any other surveys has been made conveniently possible in DeFINE (Figure 7). The surveys are visible in an HMD to the participants and also on a desktop display to the experimentalists. While questions that have preset choices can be answered directly by the participants using the VR controller(s), questions that require free-form responses are to be typed in by the experimentalists on behalf of the participants.

While DeFINE’s survey system allows the experimentalists to administer surveys while keeping the participants immersed in VR, other systems typically require the participants to take off an HMD to answer the surveys. Thus, if an experiment involves multiple sessions or blocks, each of which contains surveys, the participants need to be re-immersed in VR every time they remove the HMD and put it back on (Schatz et al., 2017). This can be very cumbersome and make the participants feel uncomfortable, possibly inducing cybersickness. An alternative could be that the experimentalists orally ask questions and fill in the surveys on behalf of the participants, but this can feel intrusive to the participants and reduce the sense of immersion in VR because the participants have to directly communicate with the experimentalists who do not belong to the virtual world (Bowman et al., 2002). DeFINE remedies these issues by displaying the surveys in the HMD. To our knowledge, only Grübel et al. (2017) and Regal et al. (2018) implemented a similar system previously.

Figure 7: Filling in surveys whilst being immersed in VR using DeFINE.

## 2 Demonstration

To demonstrate various built-in functionalities of DeFINE and its overall usability for human navigation experiments, we asked actual participants to perform a simple goal-directed navigation task in DeFINE. In each trial, the participants navigated to a hidden goal that was indicated by a dynamic visual cue and received a feedback score on the leader-board. The participants’ navigation response was characterized by the total duration of the response, the residual distance to the goal at the end of the trial, the feedback score, and detailed time-course plots of their locomotion trajectory, exemplifying the types of data DeFINE can record. To implement the trial–block–session architecture, two blocks of trials were used, between which the environment and the dynamic cue were modified. Two ways of deriving the feedback scores, which differentially weighted the response duration and the residual distance, were defined through DeFINE’s settings files. The participants also answered the SSQ and NASA TLX. This was to give a demonstration of DeFINE’s intra-VR survey feature, and also to examine whether the participants tolerated the use of DeFINE in the current experiment, which was of a typical scale for a behavioral experiment.

Figure 8: The default experimentalist view of an environment containing walls and floor with non-repeating textures. A countdown timer and score window help participants keep track of their performance. Light and sound can be toggled by experimentalists. The firefly serves as a noisy visual cue to guide the participants to a goal. The light/audio toggle switches (top right) and experimentalist button descriptors (top left) are only visible to the experimentalists on a desktop running DeFINE, and are not made visible to the participants in an HMD. Except for this difference, the participant and experimentalist views are the same. The arrows specifying X, Y, and Z axes in the lower-left corner are shown in this figure only; they appear neither in the participant nor the experimentalist view. See text for further details.

### 2.1 Method

#### 2.1.1 Participants

Twenty-four participants (15 males, 8 females, and 1 other) took part in this experiment. Twenty-three of them were students of Aalto University, and one was from the vocational university Metropolia, Finland. The mean age of the participants was $24.8\pm 2.9$ years. The participants’ educational background ranged from having graduated from high school to having a master’s degree. All participants gave written informed consent to participate in the experiment and received a movie ticket in return for their participation. The protocol of the experiment was approved by the Aalto University Research Ethics Committee.

#### 2.1.2 Design and materials

The participants were asked to navigate from start to goal positions within a virtual room of $10\times 10$ m with non-repeating textures, as shown in Figure 8. The walls were $4$ m tall. The participants navigated using the controller teleoperation method. Specifically, they turned their head to face the direction they wanted to go and moved in that direction by pressing a button of a VR controller. The participants’ eye height in the virtual room was set at $1.36$ m, which approximately corresponded to their actual eye height while seated in a real room. The participants first went through a block containing $15$ trials in which the room walls were visible. Subsequently, they performed the same navigation task in the second block, in which the walls were removed and the floor texture was extended to the horizon. This block also consisted of $15$ trials. The starting and goal positions were fixed across the trials as well as across the blocks. Relative to the center of the room, in a left-handed coordinate system, the starting position was at ($4.5$ m, $4.5$ m) and the goal position was at ($-3$ m, $-1$ m). At the starting position, the participants directly faced the hidden goal position with an orientation of $-135^{\circ}$ about the vertical Y-axis.
As a dynamic goal marker, a firefly buzzed around the goal position in such a way that its randomly fluctuating flying trajectory had its center directly over the goal position. Specifically, in each frame, the fly’s position along the horizontal X-Z plane was randomly sampled within the radii of 0.75 and 1.5 m from the goal in the first and second blocks, respectively. The height of the fly along the Y-axis was randomly sampled in the range of $0.75$–$1.25$ m. To make the fly move smoothly, its position was incremented with a step size of $5$ mm. For a graphical presentation of the X, Y, and Z axes, see Figure 8. In this manner, the fly represented a noisy visual cue to guide the participants to the goal position. That is, the exact goal position was never revealed to the participants, and instead they were told that the goal was somewhere inside the area delimited by the fly’s trajectory. Hence, the goal position was provided imprecisely to the participants via the noisy visual cue, and also the feedback score.
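The fly’s movement rule translates almost directly into Unity C#. The sketch below follows the sampling scheme just described (per-frame resampling within a radius and height band, with 5-mm increments); the component and field names are ours, not DeFINE’s.

```csharp
using UnityEngine;

// Illustrative firefly goal marker: each frame a target point is sampled
// within a horizontal radius of the hidden goal and a height band, and the
// fly moves toward it in 5-mm steps so its flight appears smooth.
public class FireflyMarker : MonoBehaviour
{
    public Vector3 goalPosition;     // hidden goal (center of the trajectory)
    public float radius = 0.75f;     // 0.75 m in block 1, 1.5 m in block 2
    public float minHeight = 0.75f;
    public float maxHeight = 1.25f;
    public float stepSize = 0.005f;  // 5-mm increment per frame

    void Update()
    {
        Vector2 offset = Random.insideUnitCircle * radius;   // X-Z offset from the goal
        Vector3 target = new Vector3(goalPosition.x + offset.x,
                                     Random.Range(minHeight, maxHeight),
                                     goalPosition.z + offset.y);
        transform.position = Vector3.MoveTowards(transform.position, target, stepSize);
    }
}
```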
##### Participant particulars

The participant information as collected via the GUI (i.e., identification, age, gender, and highest qualification achieved) was recorded.

##### Movement logs

This log file recorded each participant’s X and Z positions and rotation about the Y-axis with time stamps. Owing to the flexibility provided by DeFINE to toggle lights and sounds even during a trial, the status of these parameters was also logged every frame in these logs. A new log was created per trial, along with trial numbers.

##### Trial results

This log file recorded the component-wise and cumulative rewards per participant, along with the distance covered and time elapsed during a trial.

##### Notes

The experimentalists’ notes during the experiment were recorded in this file. For instance, if some participants felt dizzy and opted for early termination, the particular session could be marked as bad and further details could be stored as notes for later use.

#### 2.1.3 Procedure

The participants sat in the middle of a room that was clear of any obstacles. At the outset of the experiment, the participants were asked to fill the SSQ to log their state of health before being immersed in VR. Their age, gender, and the highest qualification achieved were also recorded using DeFINE’s GUI (Figure 3(a)). An experimentalist then put an HMD on the participants’ head (over the spectacles, as and when necessary) and handed them hand-held VR controllers. The participants were run individually. As soon as the participants had verbally confirmed to be ready, the first block was started by the experimentalist. The participants began a trial by leaving the start position, using the controller teleoperation method to navigate, and ended their locomotion by pressing a key on a VR controller when they thought they had reached the goal position. The goal was positioned diagonally across the other side of the room and remained unchanged across trials. The participants then received a score from the trial in a leader-board (Figure 6). The fake leader-board feature was used so that all participants performed the navigation task with the same competitive challenge. The board was filled with made-up scores at the beginning, which were set low to give everyone a reasonable chance of making it to the top ten. Pilot testing was conducted to empirically determine how low was low enough for this purpose. The leader-board was displayed for 10 s (or until the participants pressed the “End Trial” key on the VR controller), and the room was automatically shown from the start position again for the next trial, thereby resetting the scene to the exact same configuration for each trial. The participants completed the trials at their own pace, until reaching the end of the block. The participants were allowed to have a short break between trials. When necessary, the participants were able to skip a trial by pressing a controller key. At the end of the first block, the participants filled the NASA TLX in DeFINE’s form system (i.e., without taking off the HMD) using a 7-point Likert scale. Upon having filled the form, the participants started the second block themselves via a prompt shown on the HMD. Once again the participants performed the trials at their own pace, until filling the NASA TLX one more time at the end of the second block. Filling the form completed the VR part of the experiment.
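For concreteness, the fly’s per-frame behavior described under Design and materials (sampling positions near the goal and smoothing the motion with 5-mm steps) could be sketched as follows. This is a minimal Python sketch; the target-resampling policy, the update loop, and all names are illustrative assumptions, not DeFINE’s actual Unity implementation:

```python
import math
import random

def sample_fly_target(goal_xz, radius, y_range=(0.75, 1.25)):
    # Pick a point within `radius` of the goal on the horizontal X-Z
    # plane (uniform over the disc) at a random height along Y.
    ang = random.uniform(0.0, 2.0 * math.pi)
    r = radius * math.sqrt(random.random())
    x = goal_xz[0] + r * math.cos(ang)
    z = goal_xz[1] + r * math.sin(ang)
    return (x, random.uniform(*y_range), z)

def fly_step(pos, target, step=0.005):
    # Move the fly 5 mm toward the current target; report arrival.
    d = tuple(t - p for p, t in zip(pos, target))
    dist = math.sqrt(sum(c * c for c in d))
    if dist <= step:
        return target, True
    return tuple(p + step * c / dist for p, c in zip(pos, d)), False

# Per-frame loop (sketch): radius 0.75 m in the first block, 1.5 m in
# the second; a new target is drawn whenever the fly reaches one.
goal_xz, pos, target = (-3.0, -1.0), (-3.0, 1.0, -1.0), None
for _ in range(900):  # roughly 10 s at ~90 fps
    if target is None:
        target = sample_fly_target(goal_xz, radius=0.75)
    pos, arrived = fly_step(pos, target)
    if arrived:
        target = None
```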
After taking off the HMD and the controllers, the participants filled the SSQ again to evaluate their simulation sickness after the exposure to the immersive VR. In addition, the participants were invited to provide feedback about the experiment and DeFINE by indicating the degree of agreement with each of the following five statements in a 5-point Likert scale: “Instructions were easy to understand”; “I understood what the score depended on”; “moving in the VE was easy”; “the walls in the practice phase were helpful”; and “filling a form in the VE was easy”. ### 2.2 Results Two participants from each group misunderstood task instructions and simply chased the fly rather than navigating towards the goal it indicated. This was determined through real-time observation of their locomotion patterns on the experimentalist’s desktop (Figure 8). Although it was not intended, this gave another demonstration of the utility of the experimentalist view that allowed the experimentalist to observe participants’ responses as they took place during trials. Due to this behavior, data from the four participants were excluded from analysis. The data presented in this section represent the results of the remaining $10$ participants per group, accounting for $20$ participants in total. In addition, in $0.7\%$ of all trials, the participants accidentally pressed the button to end the trial immediately after it had begun. These trials were also discarded for the analysis presented herewith. For each trial, the total elapsed time, the residual distance to the goal, and the score were derived as dependent measures from log files. The entire trajectory of a participant’s locomotion was also reconstructed from recorded position data. For each dependent measure, data points that were more than three standard deviations away from each participant’s mean of each block were defined as outliers and removed from analysis. This resulted in removal of 1.17% of trials on average. Table 2: Means and standard deviations of the total elapsed time, the residual distance to the goal, and the score as a function of participant group and block. | Time group | | Accuracy group ---|---|---|--- | First block | Second block | | First block | Second block Time (s) | 11.13 (5.93) | 7.81 (4.44) | | 10.97 (5.25) | 13.33 (10.77) Distance (m) | 0.43 (0.18) | 0.67 (0.32) | | 0.63 (0.34) | 0.80 (0.36) Score | 664.97 (239.02) | 718.87 (255.85) | | 624.77 (141.36) | 542.41 (160.26) Note. Standard deviations are shown in parentheses. #### 2.2.1 Time, distance, and score Table 2 shows descriptive statistics of the dependent measures as a function of participant groups and blocks. Overall, participants in the time group performed trials more quickly in the second block than in the first block, and those in the accuracy group showed the opposite pattern. It is likely that this resulted from the types of feedback the participants received in each group—that is, the swiftness of a response was more heavily rewarded in the time group, whereas the closeness to the goal was given more emphasis in the accuracy group. In terms of the residual distance to the goal, both groups performed worse in the second block, reflecting the fact that the navigation task was more challenging in the second block because of the lack of walls and decreased precision of the dynamic goal marker. Capturing these outcomes, the scores increased in the time group and decreased in the accuracy group between the first and second blocks. 
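The outlier screening described above (discarding trials more than three standard deviations from each participant’s per-block mean) could be implemented along the following lines. This is a minimal pandas sketch; the data-frame layout and column names (`participant`, `block`, one column per dependent measure) are assumptions:

```python
import pandas as pd

def drop_outliers(df: pd.DataFrame, measure: str, k: float = 3.0) -> pd.DataFrame:
    # Standardize each trial against its participant-by-block mean and
    # standard deviation, then keep trials within k standard deviations.
    grouped = df.groupby(["participant", "block"])[measure]
    z = (df[measure] - grouped.transform("mean")) / grouped.transform("std")
    return df[z.abs() <= k]

# Applied separately to each dependent measure, e.g.:
# trials = drop_outliers(trials, "elapsed_time")
```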
Figure 9: Mean times participants spent for performing each trial (top row), mean residual distances to the goal (middle row), and mean feedback scores the participants received (bottom row) as a function of participant group, block, and trial number. Error bars represent $\pm 1$ standard error of the mean. Figure 9 shows the elapsed time, residual distance, and score in each trial, providing a more detailed picture of the participants’ performance. In terms of speed, the two groups performed similarly in the first block, but they differed in the second block. Specifically, the time group maintained approximately the same speed throughout the block, performing the trials consistently quicker than the accuracy group. This pattern suggests that the feedback scores affected the participants’ navigation differently in the two groups. On the other hand, with the reward parameters used in this experiment (Table 1), the effects of the scores were less clear on the accuracy of performance. The participants in the accuracy group showed no visible improvement of accuracy in later trials, even though they received scores that rewarded accurate performance. This might indicate that the parameters did not favor accuracy enough to elicit observable change in behavior within a block. Between the blocks, it is also possible that the increase of task difficulty overrode the effects of the scores on navigation performance, as suggested by the overall lower scores in the second block. To statistically examine the time, distance, and score data, they were analyzed separately by mixed analyses of variance (ANOVAs) in which block (first and second) was a within-participant factor and group (time and accuracy) was a between-participant factor. Because the data plotted in Figure 9 suggest that the first trial of the first block yielded rather different results (particularly in time and score), the ANOVAs were run with and without this trial. The two sets of ANOVAs showed the same outcomes, and thus those including the first trial are reported here. The main effect of block on distance was significant, $F(1,18)=10.52,p=0.005,{\eta_{G}}^{2}=0.11$, reflecting the overall decrease of accuracy in the second block. All the other main effects and interactions were not significant, $F\text{s}(1,18)<3.09,p\text{s}>0.096,{\eta_{G}}^{2}\text{s}<0.075$. #### 2.2.2 Trajectory Figure 10: The residual distance to the goal as a function of participant group and time in a trial (first 30 s only). Panels A and B show sample participants’ 30 trials (black lines), one participant from each group. The red lines represent the mean time courses of the residual distance derived from the 30 trials. To calculate the means, trials that lasted shorter than 30 s were extended by using the end values of each such trial. The insets display the corresponding trajectories of each participant from the start (cyan circle) to the goal (red cross). Panels C and D show the mean time courses of each participant (black lines) separately for the two groups. The red lines indicate group means. In calculating them, short time series were dealt with in the same way as in Panels A and B. In addition to the time, distance, and score measures that aggregate participants’ performance in a trial, DeFINE can provide dynamic information about their movement within the trial. To illustrate this feature, the time courses of the residual distance to the goal as well as sample participants’ trajectories are plotted in Figure 10. 
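For reference, the padding rule used for the mean time courses in Figure 10 (extending trials shorter than 30 s by repeating their end values) can be sketched as follows. This is a minimal NumPy sketch; the array layout, the names, and the fixed ~90 fps frame count are assumptions:

```python
import numpy as np

FRAMES = 30 * 90  # 30 s at approximately 90 frames per second

def pad_with_end_value(series: np.ndarray, n_frames: int = FRAMES) -> np.ndarray:
    # Truncate to n_frames, or extend a short trial by repeating its
    # final residual-distance value, as described for Figure 10.
    out = np.full(n_frames, series[-1], dtype=float)
    out[: len(series)] = series[:n_frames]
    return out

def mean_time_course(trials: list) -> np.ndarray:
    # trials: one 1-D array of residual distance per frame, per trial.
    return np.stack([pad_with_end_value(t) for t in trials]).mean(axis=0)
```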
By visualizing the trajectories in this manner, researchers can gain additional insights into the way participants navigate in their experiments. For example, by comparing panels A and B of Figure 10, it can be seen that the trajectories of the sample participant from the time group were more dispersed than those of the sample participant from the accuracy group. Such an observation is easily possible in this visualization, but it is not readily available from the aggregate measures.

#### 2.2.3 Simulation sickness questionnaire (SSQ)

Responses to the SSQ are summarized in Table 3. As shown in the table, the participants scored very low not only before but also after exposure to the VE. Because the scores were very low overall, we used total raw scores for analysis, instead of deriving weighted scores for each sub-scale of the SSQ (Kennedy et al., 1993). The total raw SSQ scores were analyzed by a mixed ANOVA with exposure (before and after) as a within-participant factor and group (time and accuracy) as a between-participant factor. This ANOVA yielded no significant outcomes, $F\text{s}(1,18)<0.22,p\text{s}>0.64,{{\eta_{G}}^{2}}\text{s}<0.010$, suggesting that the SSQ scores did not differ between pre- and post-exposure to the VE as well as between the time and accuracy groups. These results indicate that the use of DeFINE did not induce any major symptoms of cybersickness. Because non-significant results in an ANOVA do not necessarily constitute positive evidence for null hypotheses, we also conducted a Bayes factor analysis to gauge the extent to which the data actually supported the claim that neither exposure nor group had an effect on the SSQ scores (Rouder et al., 2012). When the null model was compared against the full model that included the main effects of exposure and group as well as the interaction between the two, it yielded a Bayes factor of 13.9. This constitutes positive evidence for the null hypothesis (Kass & Raftery, 1995), supporting the conclusion that doing the navigation task in DeFINE (and which group each participant was in) did not cause cybersickness above and beyond what the participants had prior to navigating in the VE.

Table 3: Means and standard deviations of raw scores of the simulator sickness questionnaire (SSQ).
| Pre-exposure | | Post-exposure ---|---|---|--- | Time group | Accuracy group | | Time group | Accuracy group General discomfort | 0.1 (0.32) | 0.2 (0.63) | | 0.2 (0.42) | 0.3 (0.48) Fatigue | 0.4 (0.52) | 0.7 (0.95) | | 0.2 (0.42) | 0.6 (0.70) Boredom | 0.3 (0.67) | 0.1 (0.32) | | 0.3 (0.67) | 0.1 (0.32) Drowsiness | 0.5 (0.71) | 0.2 (0.42) | | 0 (0) | 0.2 (0.63) Headache | 0.1 (0.32) | 0 (0) | | 0.1 (0.32) | 0.2 (0.42) Eyestrain | 0.5 (0.53) | 0.5 (0.97) | | 0.7 (0.67) | 0.5 (0.97) Difficulty focusing | 0.2 (0.42) | 0.4 (0.70) | | 0.1 (0.32) | 0.3 (0.48) Salivation increase/decrease | 0.1 (0.32) | 0.2 (0.42) | | 0.1 (0.32) | 0 (0) Sweating | 0 (0) | 0.1 (0.32) | | 0.2 (0.42) | 0.2 (0.63) Nausea | 0 (0) | 0 (0) | | 0.2 (0.42) | 0.1 (0.32) Difficulty concentrating | 0.3 (0.48) | 0.3 (0.67) | | 0.1 (0.32) | 0.3 (0.48) Mental depression | 0.2 (0.42) | 0.2 (0.42) | | 0.2 (0.42) | 0.2 (0.42) Fullness of the head | 0.3 (0.48) | 0.4 (0.52) | | 0.3 (0.48) | 0.2 (0.42) Blurred vision | 0.1 (0.32) | 0.3 (0.48) | | 0.2 (0.42) | 0.4 (0.52) Dizziness with eyes open/closed | 0.1 (0.32) | 0.2 (0.63) | | 0.2 (0.42) | 0.3 (0.48) Vertigo | 0 (0) | 0 (0) | | 0.1 (0.32) | 0.1 (0.32) Visual flashbacks | 0 (0) | 0 (0) | | 0.3 (0.67) | 0.1 (0.33) Faintness | 0.1 (0.32) | 0 (0) | | 0 (0) | 0.1 (0.32) Breathing awareness | 0.4 (0.52) | 0 (0) | | 0.3 (0.48) | 0.1 (0.32) Stomach awareness | 0 (0) | 0.1 (0.32) | | 0.1 (0.32) | 0.1 (0.32) Loss of appetite | 0 (0) | 0 (0) | | 0 (0) | 0.1 (0.32) Increase of appetite | 0.1 (0.32) | 0.3 (0.48) | | 0.1 (0.32) | 0.4 (0.52) Desire to move bowels | 0.1 (0.32) | 0 (0) | | 0.1 (0.32) | 0 (0) Confusion | 0 (0) | 0.4 (0.70) | | 0.1 (0.32) | 0.1 (0.32) Burping | 0 (0) | 0 (0) | | 0 (0) | 0 (0) Vomiting | 0 (0) | 0 (0) | | 0 (0) | 0.1 (0.32) Others | 0 (0) | 0 (0) | | 0 (0) | 0 (0) Total | 3.9 (3.51) | 4.6 (5.66) | | 4.2 (3.74) | 5.1 (4.12) Note. Standard deviations are shown in parentheses. The possible range of the total score was from 0 to 81. Table 4: Means and standard deviations of scores of the NASA task load index (NASA TLX). | Time group | | Accuracy group ---|---|---|--- | First block | Second block | | First block | Second block Mental demand | 3.0 (1.33) | 3.7 (1.64) | | 2.3 (1.25) | 3.0 (1.41) Physical demand | 1.7 (0.82) | 2.2 (1.14) | | 1.7 (1.25) | 2.7 (1.77) Temporal demand | 3.5 (1.35) | 4.1 (1.79) | | 2.9 (1.60) | 2.9 (1.85) Effort | 3.4 (1.58) | 3.6 (1.71) | | 2.8 (1.14) | 3.2 (1.32) Performance | 3.5 (1.43) | 2.6 (1.35) | | 3.9 (1.60) | 4.0 (1.70) Frustration level | 2.6 (1.71) | 3.2 (1.62) | | 1.9 (1.20) | 2.5 (1.43) Total | 17.7 (6.38) | 19.4 (7.29) | | 15.5 (5.15) | 18.3 (7.89) Note. Standard deviations are shown in parentheses. The possible range of the total score was from 6 to 42. #### 2.2.4 NASA task load index (NASA TLX) Responses to each item of the NASA TLX ranged from one to seven, with smaller scores indicating lower task load. As shown in Table 4, participants generally indicated that doing the navigation task in DeFINE required medium workload. There was some variation of the scores between groups, blocks, and questions. For example, the scores of the temporal demand question suggest that the time group felt stronger time pressure than the accuracy group, which is consistent with the feedback function that put emphasis on speedy response in the time group. In addition, scores in the second block tended to be higher than those in the first block, which corresponds to the fact that the task was made more difficult in the second block. 
In line with these observations, a mixed ANOVA with block (first and second) and question (six questions of the NASA TLX) as within-participant factors and group (time and accuracy) as a between- participant factor yielded a significant interaction between question and group, $F(5,90)=2.96,p=0.035,{\eta_{G}}^{2}=0.049,\epsilon=0.66$ (this ANOVA was corrected for non-sphericity with the Greenhouse-Geisser method when appropriate). The interaction between question and block as well as the main effect of question were also significant, $F(5,90)=3.31,p=0.008,{\eta_{G}}^{2}=0.019$ and $F(5,90)=7.14,p<0.001,{\eta_{G}}^{2}=0.11,\epsilon=0.66$, respectively. The main effect of block was not significant, $F(1,18)=3.58,p=0.075,{\eta_{G}}^{2}=0.018$. The interaction between block and group and the main effect of group were virtually non-existent, $F\text{s}(1,18)<0.36,p\text{s}>0.55,{{\eta_{G}}^{2}}\text{s}<0.010$, suggesting that overall, the two groups tolerated the workload of using DeFINE in a similar way. #### 2.2.5 Participant feedback on the experiment and DeFINE Scores of the participant feedback survey at the end of the experiment are summarized in Table 5. Larger scores denote stronger agreement with the statements. Overall, participants gave high scores, indicating that DeFINE provided an easy-to-use interface for doing the navigation experiment. The scores were analyzed by a mixed ANOVA in which statement (five statements in the survey) was a within-participant factor and group (time and accuracy) was a between-participant factor. The main effect of statement was significant, $F(4,72)=6.51,p<0.001,{\eta_{G}}^{2}=0.20$, which suggests that scores were reliably lower in the statement about the usefulness of walls than in the other statements. The interaction between statement and group and the main effect of group were not significant, $F(4,72)=1.73,p=0.15,{\eta_{G}}^{2}=0.062$ and $F(1,18)=0.18,p=0.67,{\eta_{G}}^{2}=0.003$, respectively, suggesting that there was no overall difference between the groups in the way they responded to the feedback survey. Table 5: Means and standard deviations of scores of the participant feedback survey. | Time group | Accuracy group ---|---|--- Clarity of instructions | 4.2 (0.63) | 4.1 (0.99) Score interpretation | 4.5 (0.71) | 4.0 (1.05) Ease of movement | 4.3 (0.67) | 4.2 (0.79) Usefulness of walls | 2.9 (1.37) | 3.3 (0.82) Ease of filling forms in DeFINE | 3.4 (0.97) | 4.2 (1.03) Total | 19.3 (2.63) | 19.8 (2.62) Note. Standard deviations are shown in parentheses. The possible range of the total score was from 5 to 25. ### 2.3 Discussion The purpose of this demonstration was to showcase the key functionalities of DeFINE—namely, its ability to set up an experiment in a trial–block–session structure, run goal-directed navigation trials in a stimulus–response–feedback loop, and collect both moment-by-moment and aggregate measures of participants’ task performance. In this experiment, the measures included the duration of a trial, the residual distance to the goal, the feedback score given to the participants, and the time course of the participants’ positions within the trial. 
It may be worth noting that among these measures, those that capture temporal information about the participants’ locomotion (i.e., the trial duration and dynamic position data) can be of particular use in future studies, because past studies of goal-directed navigation in small-scale space tended to put emphasis on accuracy or precision of responses with little regard for how quickly they were carried out (e.g., Chen et al., 2017; Chrastil & Warren, 2014; Harris & Wolbers, 2012; Yamamoto, Meléndez, & Menzies, 2014; Yamamoto, Philbeck, et al., 2014). However, it is important to consider the speed of the responses in evaluating their accuracy because there can be a trade-off relationship between them (Bogacz et al., 2010). DeFINE allows researchers to examine the speed and accuracy of navigation either in conjunction, as in the current experiment, or in isolation by setting the parameters of the reward function accordingly (e.g., $\beta_{1}=0$ makes the reward function exclusively focused on accuracy).

A defining feature of DeFINE is that with its default settings, it provides feedback on participants’ performance in each trial. Results from this experiment suggested that by using differential weights on speed and accuracy of navigation in calculating feedback scores, DeFINE has potential for eliciting different responses from the participants. Although the behavioral measures of navigation were largely non-significant in the statistical analyses, the effects of the feedback scores, particularly those on navigation speed, were implied in Figure 9. The time group improved the speed of responses in early trials and kept the same speed during the rest of the experiment because this helped increase and then maintain the feedback scores. On the other hand, the accuracy group appeared to care less for making a speedy response toward the end of the experiment, as the slower speed had little influence on the feedback scores in this group. It is likely that the feedback scores were more effective in affecting the speed because of the specific way in which the current experiment was designed—that is, the participants were self-aware of the speed of their response, but the accuracy was never explicitly revealed to them, making it harder for the participants to improve the accuracy. Importantly, this pattern is a result of one particular installation of DeFINE, and its architecture flexibly enables researchers to set up a suitable balance between the speed and accuracy according to the objectives of their studies. For example, by giving heavier weights to accuracy in the feedback function, researchers can make feedback scores more directly informative about how well participants are reaching a goal. Similarly, by demarcating the goal location more specifically by using different goal markers (e.g., a static marker or a dynamic marker with less variability) and environmental features (e.g., walls that provide spatial cues), researchers can run experiments in which focus is entirely on speed (i.e., accuracy is a given) or subtle changes in accuracy are scrutinized.

This experiment also examined the participants’ experience in using DeFINE. Results from the SSQ indicated that DeFINE caused no major symptoms of cybersickness. Considering that the participants repeatedly experienced sensory conflicts between their vision and body-based (i.e., vestibular and proprioceptive) senses due to the use of the teleoperation method, the absence of cybersickness is notable (Bos et al., 2008).
The NASA TLX showed that the participants found doing the navigation task in DeFINE moderately challenging but not unreasonably taxing. In the feedback survey, the participants gave a positive evaluation to DeFINE itself and the design of the experiment. Generally, these results did not differ between the two groups of the participants, suggesting that DeFINE provided a versatile platform that accommodates different types of experiments.

In sum, this demonstration showed that by using DeFINE in its default settings, we were able to run a human navigation experiment of typical scale, involving 24 participants and consisting of a single continuous session with two blocks of 15 trials each. It exemplified an assortment of data types that DeFINE can collect, which allow for detailed characterization of navigational behavior. Both objective and subjective measures of the participants’ experience indicated that they found DeFINE easy to use and the navigation task it implemented well tolerable in terms of cybersickness and task workload, irrespective of the ways in which they carried out the navigation task (i.e., whether they were implicitly driven to perform it more quickly or accurately via feedback scores). Together, these results validated DeFINE’s capability and potential as a tool for investigating goal-directed navigation in humans under a variety of conditions.

## 3 Conclusions

This paper presented the Delayed Feedback based Immersive Navigation Environment (DeFINE) for studying goal-directed navigation in humans using VR. Although similar frameworks have already been developed (Brookes et al., 2020; Vasser et al., 2017; Commins et al., 2020; Machado et al., 2019; Wiener et al., 2020; Geller et al., 2007; Ayaz et al., 2008, 2011; Solway et al., 2013; Grübel et al., 2017; Bebko & Troje, 2020; Starrett et al., 2020), they are based on an open-loop stimulus–response architecture that omits performance feedback to participants. DeFINE distinguishes itself from the previous frameworks by implementing the closed-loop stimulus–response–feedback architecture as its core element (Figures 1 and 2). The feedback is delayed by default in order to suit the needs of typical navigation experiments, but if necessary, it can be made real-time through relatively simple changes in DeFINE’s code so that the stimulus–response–feedback loop is even more tightly closed.

As discussed in the introduction, the VR frameworks for navigation research mainly differ in whether they are geared toward ease of use by limiting their scope or wide applicability by providing general-purpose tools that demand technical skills of end-users. In this spectrum, DeFINE aims to position itself toward the ease-of-use end by focusing primarily on goal-directed navigation tasks and also by making it possible to set up an experiment mostly through GUIs and simple settings files (demonstrated in the video clips available online). However, this does not mean that coding is absolutely unnecessary or impossible in DeFINE. Indeed, some customization that goes beyond the GUIs and settings files is expected, and for this purpose the software is made open-source and its codebase is modularized. The demonstration experiment showed the utility of DeFINE as a platform for navigation research and its general friendliness to participants in VR experiments. Additionally, this experiment demonstrated DeFINE’s potential as a tool for testing hypotheses about the temporal aspects of navigational behavior.
The optional feature of seamlessly administering surveys within an HMD enhances the immersion of the participants in VR, thereby improving the quality of data collected via DeFINE. The optional leader-board enables investigation of the effect of gamification on spatial navigation. Previous studies have shown its impact in other domains of learning (Barata et al., 2013; Looyestyn et al., 2017), but it is yet to be thoroughly explored for navigation-related applications (Coutrot et al., 2018; Coughlan et al., 2019). These out-of-the-box features of DeFINE, together with its customizability via the Unity software, open up many new possibilities for human navigation research.

## 4 Open Practices Statements

The software used in the experiment reported in this article—the Delayed Feedback based Immersive Navigation Environment (DeFINE)—is available at https://github.com/ktiwari9/define-VR. The data and other materials for the experiment are available upon request. The experiment was not preregistered.

## References

* Ayaz, H., Allen, S. L., Platek, S. M., & Onaral, B. (2008). Maze Suite 1.0: A complete set of tools to prepare, present, and analyze navigational and spatial cognitive neuroscience experiments. Behavior Research Methods, 40, 353–359. doi:10.3758/BRM.40.1.353
* Ayaz, H., Shewokis, P. A., Curtin, A., Izzetoglu, M., Izzetoglu, K., & Onaral, B. (2011). Using MazeSuite and functional near infrared spectroscopy to study learning in spatial navigation. Journal of Visualized Experiments, 56, e3443. doi:10.3791/3443
* Barata, G., Gama, S., Jorge, J., & Gonçalves, D. (2013). Improving participation and learning with gamification. In L. E. Nacke, K. Harrigan, & N. Randall (Eds.), Gamification 2013: Proceedings of the First International Conference on Gameful Design, Research, and Applications (pp. 10–17). New York, NY: Association for Computing Machinery. doi:10.1145/2583008.2583010
* Bebko, A. O., & Troje, N. F. (2020). bmlTUX: Design and control of experiments in virtual reality and beyond. i-Perception, 11(4), 1–12. doi:10.1177/2041669520938400
* Bird, C. M., Capponi, C., King, J. A., Doeller, C. F., & Burgess, N. (2010). Establishing the boundaries: The hippocampal contribution to imagining scenes. Journal of Neuroscience, 30, 11688–11695. doi:10.1523/JNEUROSCI.0723-10.2010
* Bogacz, R., Hu, P. T., Holmes, P. J., & Cohen, J. D. (2010). Do humans produce the speed–accuracy trade-off that maximizes reward rate? The Quarterly Journal of Experimental Psychology, 63, 863–891. doi:10.1080/17470210903091643
* Bos, J. E., Bles, W., & Groen, E. L. (2008). A theory on visually induced motion sickness. Displays, 29, 47–57. doi:10.1016/j.displa.2007.09.002
* Bowman, D. A., Gabbard, J. L., & Hix, D. (2002). A survey of usability evaluation in virtual environments: Classification and comparison of methods. Presence: Teleoperators & Virtual Environments, 11, 404–424. doi:10.1162/105474602760204309
* Brand, M. (2008). Does the feedback from previous trials influence current decisions? A study on the role of feedback processing in making decisions under explicit risk conditions. Journal of Neuropsychology, 2, 431–443. doi:10.1348/174866407x220607
* Brookes, J., Warburton, M., Alghadier, M., Mon-Williams, M., & Mushtaq, F. (2020). Studying human behavior with virtual reality: The Unity Experiment Framework. Behavior Research Methods, 52, 455–463. doi:10.3758/s13428-019-01242-0
* Carton, D., Nitsch, V., Meinzer, D., & Wollherr, D. (2016). Towards assessing the human trajectory planning horizon. PLoS ONE, 11, e0167021. doi:10.1371/journal.pone.0167021
* Chen, X., McNamara, T. P., Kelly, J. W., & Wolbers, T. (2017). Cue combination in human spatial navigation. Cognitive Psychology, 95, 105–144. doi:10.1016/j.cogpsych.2017.04.003
* Cherep, L. A., Lim, A. F., Kelly, J. W., Acharya, D., Velasco, A., Bustamante, E., ... Gilbert, S. B. (2020). Spatial cognitive implications of teleporting through virtual environments. Journal of Experimental Psychology: Applied, 26, 480–492. doi:10.1037/xap0000263
* Cheung, A. (2014). Estimating location without external cues. PLOS Computational Biology, 10, e1003927. doi:10.1371/journal.pcbi.1003927
* Chrastil, E. R., & Warren, W. H. (2014). Does the human odometer use an extrinsic or intrinsic metric? Attention, Perception, & Psychophysics, 76, 230–246. doi:10.3758/s13414-013-0549-3
* Commins, S., Duffin, J., Chaves, K., Leahy, D., Corcoran, K., Caffrey, M., ... Thornberry, C. (2020). NavWell: A simplified virtual-reality platform for spatial navigation and memory experiments. Behavior Research Methods, 52, 1189–1207. doi:10.3758/s13428-019-01310-5
* Cornwell, B. R., Johnson, L. L., Holroyd, T., Carver, F. W., & Grillon, C. (2008). Human hippocampal and parahippocampal theta during goal-directed spatial navigation predicts performance on a virtual Morris water maze. Journal of Neuroscience, 28, 5983–5990. doi:10.1523/JNEUROSCI.5001-07.2008
* Coughlan, G., Coutrot, A., Khondoker, M., Minihane, A. M., Spiers, H., & Hornberger, M. (2019). Toward personalized cognitive diagnostics of at-genetic-risk Alzheimer’s disease. Proceedings of the National Academy of Sciences of the United States of America, 116, 9285–9292. doi:10.1073/pnas.1901600116
* Coutrot, A., Silva, R., Manley, E., de Cothi, W., Sami, S., Bohbot, V. D., ... Spiers, H. J. (2018). Global determinants of navigation ability. Current Biology, 28, 2861–2866. doi:10.1016/j.cub.2018.06.009
* Du, Y., Mou, W., & Zhang, L. (2020). Unidirectional influence of vision on locomotion in multimodal spatial representations acquired from navigation. Psychological Research, 84, 1284–1303. doi:10.1007/s00426-018-1131-3
* Geller, A. S., Schleifer, I. K., Sederberg, P. B., Jacobs, J., & Kahana, M. J. (2007). PyEPL: A cross-platform experiment-programming library. Behavior Research Methods, 39, 950–958. doi:10.3758/BF03192990
* Grübel, J., Weibel, R., Jiang, M. H., Hölscher, C., Hackman, D. A., & Schinazi, V. R. (2017). EVE: A framework for experiments in virtual environments. In T. Barkowsky, H. Burte, C. Hölscher, & H. Schultheis (Eds.), Lecture Notes in Computer Science: Vol. 10523. Spatial Cognition X (pp. 159–176). Cham, Switzerland: Springer International Publishing. doi:10.1007/978-3-319-68189-4_10
* Harris, M. A., & Wolbers, T. (2012). Ageing effects on path integration and landmark navigation. Hippocampus, 22, 1770–1780. doi:10.1002/hipo.22011
* Hart, S. G. (2006). NASA-task load index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50, 904–908. doi:10.1177/154193120605000909
* Hossain, T., & List, J. A. (2012). The behavioralist visits the factory: Increasing productivity using simple framing manipulations. Management Science, 58, 2151–2167. doi:10.1287/mnsc.1120.1544
* Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90, 773–795. doi:10.1080/01621459.1995.10476572
* Kelly, J. W., Ostrander, A. G., Lim, A. F., Cherep, L. A., & Gilbert, S. B. (2020). Teleporting through virtual environments: Effects of path scale and environment scale on spatial updating. IEEE Transactions on Visualization and Computer Graphics, 26, 1841–1850. doi:10.1109/TVCG.2020.2973051
* Kennedy, R. S., Lane, N. E., Berbaum, K. S., & Lilienthal, M. G. (1993). Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. The International Journal of Aviation Psychology, 3, 203–220. doi:10.1207/s15327108ijap0303_3
* Kunz, B. R., Creem-Regehr, S. H., & Thompson, W. B. (2009). Evidence for motor simulation in imagined locomotion. Journal of Experimental Psychology: Human Perception and Performance, 35, 1458–1471. doi:10.1037/a0015786
* Loomis, J. M., & Knapp, J. (2003). Visual perception of egocentric distance in real and virtual environments. In L. J. Hettinger & M. W. Haas (Eds.), Virtual and adaptive environments: Applications, implications, and human performance issues (pp. 21–46). Mahwah, NJ: Lawrence Erlbaum Associates.
* Loomis, J. M., & Philbeck, J. W. (2008). Measuring spatial perception with spatial updating and action. In R. L. Klatzky, B. MacWhinney, & M. Behrmann (Eds.), Embodiment, ego-space, and action (pp. 1–43). New York, NY: Psychology Press.
* Looyestyn, J., Kernot, J., Boshoff, K., Ryan, J., Edney, S., & Maher, C. (2017). Does gamification increase engagement with online programs? A systematic review. PLoS ONE, 12, e0173403. doi:10.1371/journal.pone.0173403
* Machado, M., Lefèvre, N., Philoxene, B., Le Gall, A., Madeleine, S., Fleury, P., ... Besnard, S. (2019). New software dedicated to virtual mazes for human neurocognitive investigations. Journal of Neuroscience Methods, 327, 108388. doi:10.1016/j.jneumeth.2019.108388
* Mou, W., & Zhou, R. (2013). Defining a boundary in goal localization: Infinite number of points or extended surfaces. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 1115–1127. doi:10.1037/a0030535
* Nilsson, N. C., Peck, T., Bruder, G., Hodgson, E., Serafin, S., Whitton, M., ... Rosenberg, E. S. (2018). 15 years of research on redirected walking in immersive virtual environments. IEEE Computer Graphics and Applications, 38(2), 44–56. doi:10.1109/MCG.2018.111125628
* Paludan, A., Elbaek, J., Mortensen, M., Zobbe, M., Nilsson, N. C., Nordahl, R., ... Serafin, S. (2016). Disguising rotational gain for redirected walking in virtual reality: Effect of visual density. In T. Höllerer, V. Interrante, A. Lécuyer, & E. Suma (Eds.), 2016 IEEE Virtual Reality (VR) (pp. 259–260). Piscataway, NJ: Institute of Electrical and Electronics Engineers. doi:10.1109/VR.2016.7504752
* Pezzulo, G., van der Meer, M. A., Lansink, C. S., & Pennartz, C. M. (2014). Internally generated sequences in learning and executing goal-directed behavior. Trends in Cognitive Sciences, 18, 647–657. doi:10.1016/j.tics.2014.06.011
* Philbeck, J. W., & Loomis, J. M. (1997). Comparison of two indicators of perceived egocentric distance under full-cue and reduced-cue conditions. Journal of Experimental Psychology: Human Perception and Performance, 23, 72–85. doi:10.1037/0096-1523.23.1.72
* Regal, G., Schatz, R., Schrammel, J., & Suette, S. (2018). VRate: A Unity3D asset for integrating subjective assessment questionnaires in virtual environments. In 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX) (pp. 1–3). doi:10.1109/QoMEX.2018.8463296
* Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56, 356–374. doi:10.1016/j.jmp.2012.08.001
* Schatz, R., Sackl, A., Timmerer, C., & Gardlo, B. (2017). Towards subjective quality of experience assessment for omnidirectional video streaming. In 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX) (pp. 1–6). doi:10.1109/QoMEX.2017.7965657
* Solway, A., Miller, J. F., & Kahana, M. J. (2013). PandaEPL: A library for programming spatial navigation experiments. Behavior Research Methods, 45, 1293–1312. doi:10.3758/s13428-013-0322-5
* Spiers, H. J., & Maguire, E. A. (2006). Thoughts, behaviour, and brain dynamics during navigation in the real world. NeuroImage, 31, 1826–1840. doi:10.1016/j.neuroimage.2006.01.037
* Starrett, M. J., McAvan, A. S., Huffman, D. J., Stokes, J. D., Kyle, C. T., Smuda, D. N., ... Ekstrom, A. D. (2020). Landmarks: A solution for spatial navigation and memory experiments in virtual reality. Behavior Research Methods. Advance online publication. doi:10.3758/s13428-020-01481-6
* Tcheang, L., Bülthoff, H. H., & Burgess, N. (2011). Visual influence on path integration in darkness indicates a multimodal representation of large-scale space. Proceedings of the National Academy of Sciences of the United States of America, 108, 1152–1157. doi:10.1073/pnas.1011843108
* Vasser, M., Kängsepp, M., Magomedkerimov, M., Kilvits, K., Stafinjak, V., Kivisik, T., ... Aru, J. (2017). VREX: An open-source toolbox for creating 3D virtual reality experiments. BMC Psychology, 5, 4. doi:10.1186/s40359-017-0173-4
* Wiener, J. M., Carroll, D., Moeller, S., Bibi, I., Ivanova, D., Allen, P., & Wolbers, T. (2020). A novel virtual-reality-based route-learning test suite: Assessing the effects of cognitive aging on navigation. Behavior Research Methods, 52, 630–640. doi:10.3758/s13428-019-01264-8
* Williams, B., Narasimham, G., Rump, B., McNamara, T. P., Carr, T. H., Rieser, J., & Bodenheimer, B. (2007). Exploring large virtual environments with an HMD when physical space is limited. In APGV07: Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (pp. 41–48). doi:10.1145/1272582.1272590
* Yamamoto, N., Mach, D. E., Philbeck, J. W., & Van Pelt, J. (2018). Why is the duration of imagined walking underproduced? A dual-representation view on the mental imagery of locomotion. PsyArXiv. doi:10.31234/osf.io/rm298
* Yamamoto, N., Meléndez, J. A., & Menzies, D. T. (2014). Homing by path integration when a locomotion trajectory crosses itself. Perception, 43, 1049–1060. doi:10.1068/p7624
* Yamamoto, N., Philbeck, J. W., Woods, A. J., Gajewski, D. A., Arthur, J. C., Potolicchio, S. J., Jr., & Caputy, A. J. (2014). Medial temporal lobe roles in human path integration. PLoS ONE, 9, e96583. doi:10.1371/journal.pone.0096583
* Yechiam, E., & Hochman, G. (2014). Loss attention in a dual-task setting. Psychological Science, 25, 494–502. doi:10.1177/0956797613510725
# Central leaves on Shimura varieties with parahoric reduction

Jens Hesse, Technische Universität Darmstadt, <EMAIL_ADDRESS>

###### Abstract

We investigate the geometry of the special fiber of the integral model of a Shimura variety with parahoric level at a given prime place. To be more precise, we deal with the definition of central leaves in this situation, their local closedness, and the relationship between the foliations for varying parahoric level. This is connected to the verification of axioms for integral models formulated by He and Rapoport.

###### Contents

1. 1 Background
	1. 1.1 Shimura data of Hodge type
	2. 1.2 Bruhat-Tits buildings
	3. 1.3 Alteration of the Hodge embedding
	4. 1.4 Siegel integral models
	5. 1.5 Construction of the integral model
	6. 1.6 Maps between Shimura varieties
	7. 1.7 Local structure of the integral model
		1. 1.7.1 Generizations and irreducible components
		2. 1.7.2 Normalization and completion
	8. 1.8 Hodge tensors and (lack of) moduli interpretation
		1. 1.8.1 The story for the $\mathbb{C}$-valued points
		2. 1.8.2 The story for the integral model
2. 2 Central leaves in the case of parahoric reduction
	1. 2.1 Definition of central leaves
		1. 2.1.1 An alternative characterization of the central leaves
	2. 2.2 Local closedness of central leaves
		1. 2.2.1 Topological lemmas
		2. 2.2.2 Local closedness
	3. 2.3 Quasi-isogenies of $p$-divisible groups
	4. 2.4 Almost product structure
	5. 2.5 Change of parahoric level
		1. 2.5.1 The change-of-parahoric morphism
		2. 2.5.2 Newton-Igusa variety and change-of-parahoric

Introduction

Shimura varieties are objects of arithmetic geometry (namely varieties over number fields) that naturally arise in the search for generalized, non-abelian reciprocity laws (i.e., in the Langlands program) and as moduli spaces of abelian varieties (with certain extra structures on them). One way of approaching these objects is to try to understand their mod-$p$ reduction (which has to be carefully defined first). Insofar as a moduli interpretation in the above sense exists and continues to exist likewise for the mod-$p$ reduction (there need not be a _literal_ moduli interpretation, but in any event the stratifications in question derive from a close connection to moduli problems), it allows us to stratify the moduli space according to several invariants of the abelian varieties parametrized, e.g., the isomorphism classes of their $p$-torsion. (An important observation is that these stratifications genuinely live in the characteristic $p$ world, making use of Frobenius endomorphisms and so on.) This, very roughly, is the general theme everything in this article revolves around.

More precisely, we will be dealing with Shimura varieties of Hodge type and parahoric level structure, at some fixed prime $v\mid p$ of the number field over which the Shimura variety is defined. Under some reasonably mild assumptions, cf. 1.16, Kisin and Pappas [KP15] constructed a canonical integral model for such a Shimura variety. We try to understand some aspects of the geometry of the special fiber of said integral model, namely the central leaves (roughly, the patches where the isomorphism class of the $p$-divisible group associated with the abelian variety is constant), their local closedness, and how they vary as the (parahoric) level is varied. Let us now go into more detail.
On the integral model $\mathscr{S}_{K}$ ($K$ parahoric level) we have a “universal” abelian scheme (the quotation marks indicating that it is not really universal for some moduli problem on $\mathscr{S}_{K}$, but it comes from a universal abelian scheme via pullback) and we have various kinds of Hodge tensors. We also have a “universal” isogeny chain of abelian schemes tightly connected to the “universal” abelian scheme. We define the _naive central leaves_ on the special fiber $\overline{\mathscr{S}}_{K}$ to be the loci where the isomorphism type of the geometric fibers of the $p$-divisible group associated with the “universal” abelian scheme (the “universal” $p$-divisible group) is constant (alternatively, we may use the “universal” isogeny chain). We arrive at the non-naive version by taking into account the Hodge tensors. This is the content of section 2.1. Next we show

###### Theorem A: (Corollary 2.12)

The central leaves are locally closed.

using a somewhat simpler construction than the one given in [HK17]. By foundational work of Oort [Oor04] we already know the naive central leaves to be locally closed. We show that the central leaves are open and closed inside the naive central leaves. Some basic topological considerations allow us to treat this question in perfected formal neighborhoods, where it can be phrased as a question about $p$-divisible groups with crystalline Tate tensors; this question was answered by Hamacher [Ham17]. This forms section 2.2.

Then we consider (tensor-respecting) self-quasi-isogenies of $p$-divisible groups (with tensors). The main take-away here is that, if we consider the geometric fibers of the “universal” isogeny chain of $p$-divisible groups in the above setting, then

###### Theorem B: (Example 2.21)

The self-quasi-isogenies are independent of the level.

In section 2.4 we recall the almost product structure (under an additional technical assumption 2.23 concerning the Rapoport-Zink uniformization map, which is satisfied e.g. if our reductive group over $\mathbb{Q}_{p}$ is residually split, i.e., has the same rank as its base change to the maximal unramified extension of $\mathbb{Q}_{p}$; the name “residually split” derives from the fact that in this case the maximal reductive quotients of the associated parahoric group schemes are split reductive [Zho18]), which expresses a variant of the Igusa variety (the _Newton-Igusa variety_, with quasi-isogenies instead of isomorphisms) as a product of the Igusa variety and the Rapoport-Zink space.

We apply this to the question of how the central leaves behave under change of the parahoric level. We begin by constructing the change-of-parahoric morphisms on the Shimura varieties, Igusa varieties, Rapoport-Zink spaces and Newton-Igusa varieties. The almost product isomorphism is compatible with these morphisms (Lemma 2.38). From this we can derive

###### Theorem C: (Corollary 2.42)

The map between Igusa varieties for varying parahoric level is an isomorphism.

This implies in particular

###### Theorem D: (Corollary 2.43)

The change-of-parahoric map between central leaves is surjective.

This is the most difficult part of the axioms on integral models He and Rapoport give in [HR17]. Our proof is independent of the one by Rong Zhou in [Zho18], even though we use some of the results he gave in the first version of the cited preprint, which did not contain a proof of the surjectivity.
Positively answering another conjecture by He and Rapoport [HR17, Rmk. 3.4], we show

###### Theorem E: (Corollary 2.48)

The change-of-parahoric map between central leaves is the composition of a flat universal homeomorphism of finite type and a finite étale morphism.

### Acknowledgements

This article essentially is an extract of my doctoral thesis [Hes20] (another extract, dealing with the EKOR stratification, is [Hes20a]; in particular, there is a large overlap between the “Background” sections of the two articles). I thank Torsten Wedhorn for suggesting the topic of the dissertation, for his support, and for patiently answering my questions. Moreover, I thank Eva Viehmann and Paul Hamacher for their hospitality and helpful discussions during a month-long stay in Munich at the TU München. I am also grateful to Timo Richarz and Timo Henkel for numerous helpful discussions. This research was supported by the Deutsche Forschungsgemeinschaft (DFG), project number WE 2380/5.

## 1 Background

### 1.1 Shimura data of Hodge type

This article deals with aspects of the geometry of Shimura varieties (of Hodge type), which are the (systems of) varieties associated with Shimura data (of Hodge type).

###### Definition 1.1.

A _Shimura datum of Hodge type_ is a pair $(G,X)$, where $G$ is a reductive algebraic group over $\mathbb{Q}$ and $X\subseteq\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},G_{\mathbb{R}})$ is a $G(\mathbb{R})$-conjugacy class ($\mathbb{S}:=\operatorname{Res}_{\mathbb{C}/\mathbb{R}}\mathbb{G}_{m,\mathbb{C}}$ being the Deligne torus) subject to the following conditions:

1. (1) For $h\in X$, the induced Hodge structure $\mathbb{S}\xrightarrow{h}G_{\mathbb{R}}\xrightarrow{\mathrm{Ad}}\operatorname{GL}(\operatorname{Lie}(G_{\mathbb{R}}))$ satisfies $\operatorname{Lie}(G_{\mathbb{C}})=\operatorname{Lie}(G_{\mathbb{C}})^{-1,1}\oplus\operatorname{Lie}(G_{\mathbb{C}})^{0,0}\oplus\operatorname{Lie}(G_{\mathbb{C}})^{1,-1}$.
2. (2) $\operatorname{int}(h(i))\colon G^{\mathrm{ad}}_{\mathbb{R}}\to G^{\mathrm{ad}}_{\mathbb{R}}$ is a Cartan involution, i.e., $\{g\in G^{\mathrm{ad}}(\mathbb{C})\;|\;gh(i)=h(i)\overline{g}\}$ is compact. Another way of phrasing this condition: every finite-dimensional real representation $V$ of $G^{\mathrm{ad}}_{\mathbb{R}}$ carries a $G^{\mathrm{ad}}_{\mathbb{R}}$-invariant bilinear form $\varphi$ such that $(u,v)\mapsto\varphi(u,h(i)v)$ is symmetric and positive definite. It is enough to show that this holds for one _faithful_ finite-dimensional real representation $V$.
3. (3) $G^{\mathrm{ad}}$ _cannot_ be non-trivially written as $G^{\mathrm{ad}}\cong H\times I$ over $\mathbb{Q}$ with $\mathbb{S}\to G_{\mathbb{R}}\xrightarrow{\mathrm{proj}}H_{\mathbb{R}}$ trivial.
4. (4) There exists an embedding $(G,X)\hookrightarrow(\operatorname{GSp}(V),S^{\pm})$, where $(\operatorname{GSp}(V),S^{\pm})$ is the Shimura datum associated with a finite-dimensional symplectic $\mathbb{Q}$-vector space $V$ (see below). That is, we have an embedding $G\hookrightarrow\operatorname{GSp}(V)$ of $\mathbb{Q}$-group schemes such that the induced map $\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},G_{\mathbb{R}})\hookrightarrow\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},\operatorname{GSp}(V_{\mathbb{R}}))$ restricts to a map $X\hookrightarrow S^{\pm}$.

###### Example 1.2.

Let $W$ be a finite-dimensional $\mathbb{R}$-vector space.
$\mathbb{R}$-group homomorphisms $\mathbb{S}\to\operatorname{GL}(W)$ then correspond to Hodge decompositions of $W$, i.e., to decompositions $W_{\mathbb{C}}=\oplus_{(p,q)\in\mathbb{Z}^{2}}W_{\mathbb{C}}^{p,q}$, such that $W_{\mathbb{C}}^{p,q}$ is the complex conjugate of $W_{\mathbb{C}}^{q,p}$ for all $(p,q)\in\mathbb{Z}^{2}$. Under this correspondence, $h\colon\mathbb{S}\to\operatorname{GL}(W)$ corresponds to the Hodge decomposition $W_{\mathbb{C}}^{p,q}=\{w\in W_{\mathbb{C}}\;|\;\forall z\in\mathbb{S}(\mathbb{R})=\mathbb{C}^{\times}\colon h(z)w=z^{-p}\bar{z}^{-q}w\}$. Hodge decompositions of $W$ of type $(-1,0)+(0,-1)$ correspond to complex structures on $W$: If $h\colon\mathbb{S}\to\operatorname{GL}(W)$ yields such a Hodge decomposition, then $h(i)$ gives an $\mathbb{R}$-endomorphism $J$ of $W$ with $J\circ J=-\operatorname{id}_{W}$.

Let $V=(V,\psi)$ be a finite-dimensional symplectic $\mathbb{Q}$-vector space. We say that a complex structure $J$ on $V_{\mathbb{R}}$ is positive (resp. negative) if $\psi_{J}:=\psi_{\mathbb{R}}(\_,J\_)$ is a positive definite (resp. negative definite) symmetric bilinear form on $V_{\mathbb{R}}$. Define $S^{+}$ (resp. $S^{-}$) to be the set of positive (resp. negative) complex structures on $(V_{\mathbb{R}},\psi_{\mathbb{R}})$ and $S^{\pm}:=S^{+}\sqcup S^{-}$.

We can make this more concrete: A symplectic basis of $(V_{\mathbb{R}},\psi_{\mathbb{R}})$ is a basis $e_{1},\dotsc,e_{g},e_{-g},\dotsc,e_{-1}$ such that $\psi_{\mathbb{R}}$ is of the form $\begin{pmatrix}&\tilde{I}_{g}\\ -\tilde{I}_{g}&\end{pmatrix}$ with respect to this basis, where $\tilde{I}_{g}=\begin{pmatrix}&&1\\ &\iddots&\\ 1&&\end{pmatrix}$ is the antidiagonal identity matrix. (Occasionally, in particular when doing concrete matrix calculations, it is more convenient to number the basis vectors $1,\dotsc,g,-1,\dotsc,-g$ instead of $1,\dotsc,g,-g,\dotsc,-1$; then the standard symplectic form is given by $\left(\begin{smallmatrix}&I_{g}\\ -I_{g}&\end{smallmatrix}\right)$, $I_{g}$ being the $g\times g$ identity matrix.)

Let $J$ be the endomorphism of $V_{\mathbb{R}}$ of the form $\begin{pmatrix}&-\tilde{I}_{g}\\ \tilde{I}_{g}&\end{pmatrix}$ with respect to this basis. Then $J\in S^{+}$ and what we have described is a surjective map

$\{\text{symplectic bases of }(V_{\mathbb{R}},\psi_{\mathbb{R}})\}\twoheadrightarrow S^{+}.$

In particular we see that $\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}}):=\{f\in\operatorname{GL}(V_{\mathbb{R}})\;|\;\psi_{\mathbb{R}}(f(\_),f(\_))=\psi_{\mathbb{R}}\}$ (by virtue of acting simply transitively on the symplectic bases) acts transitively on $S^{+}\cong\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}})/\operatorname{SpO}(V_{\mathbb{R}},\psi_{\mathbb{R}},J)$ (where we define $\operatorname{SpO}(V_{\mathbb{R}},\psi_{\mathbb{R}},J):=\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}})\cap O(V_{\mathbb{R}},\psi_{J})=U((V_{\mathbb{R}},J),\psi_{J})$ for a fixed choice of $J\in S^{+}$) and therefore the general symplectic group $\operatorname{GSp}(V_{\mathbb{R}},\psi_{\mathbb{R}}):=\{f\in\operatorname{GL}(V_{\mathbb{R}})\;|\;\psi_{\mathbb{R}}(f(\_),f(\_))=c\cdot\psi_{\mathbb{R}}\text{ for some }c\in\mathbb{R}^{\times}\}$ acts transitively on $S^{\pm}$ (note that the element of the form $e_{\pm i}\mapsto e_{\mp i}$ of $\operatorname{GSp}(V_{\mathbb{R}},\psi_{\mathbb{R}})$ for any given choice of symplectic basis $\left(e_{i}\right)_{i}$ permutes $S^{+}$ and $S^{-}$).

###### Definition 1.3.
###### Definition 1.3. Condition (1) of Definition 1.1 implies that the action of $\mathbb{G}_{m,\mathbb{R}}$ (embedded in $\mathbb{S}$ in the natural way) on $\operatorname{Lie}(G_{\mathbb{R}})$ is trivial, so that $h$ induces a homomorphism ${w\colon\mathbb{G}_{m,\mathbb{R}}\to\operatorname{Cent}(G_{\mathbb{R}})}$. This homomorphism is independent of the choice of $h\in X$ and is called the _weight homomorphism_ of $(G,X)$. Moreover, we denote by $\\{\mu\\}$ the $G(\mathbb{C})$-conjugacy class of the cocharacter $\mu_{h}:=h\circ(\operatorname{id}_{\mathbb{G}_{m,\mathbb{C}}},1)\colon\mathbb{G}_{m,\mathbb{C}}\to\mathbb{G}_{m,\mathbb{C}}^{2}\cong\mathbb{S}_{\mathbb{C}}\to G_{\mathbb{C}}$, where $h$ is as above. Obviously, the conjugacy class $\\{\mu\\}$ is independent of the particular choice of $h\in X$.

###### Remark 1.4. Let $L/\mathbb{Q}$ be a field extension such that $G_{L}$ contains a split maximal torus $T$. Let $W:=\operatorname{Norm}_{G(L)}(T)/T$ be the Weyl group. Then the natural map $W\backslash\operatorname{Hom}_{L\text{-grp}}(\mathbb{G}_{m,L},T)\to G(L)\backslash\operatorname{Hom}_{L\text{-grp}}(\mathbb{G}_{m,L},G_{L})$ is bijective. Since the left hand side remains unchanged if we go from $L=\bar{\mathbb{Q}}$ (where as usual $\bar{\mathbb{Q}}$ denotes an algebraic closure of $\mathbb{Q}$) to $L=\mathbb{C}$, we see that $\\{\mu\\}$ contains a cocharacter defined over $\bar{\mathbb{Q}}$ and that we may then also consider $\\{\mu\\}$ as a $G(\bar{\mathbb{Q}})$-conjugacy class.

###### Definition 1.5. The _reflex field_ $\mathbf{E}=\mathbf{E}(G,X)$ of $(G,X)$ is the field of definition of $\\{\mu\\}$, i.e., the fixed field in $\bar{\mathbb{Q}}$ of $\\{\gamma\in\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\;|\;\gamma(\\{\mu\\})=\\{\mu\\}\\}$.

###### Example 1.6. The reflex field of the Shimura datum $(\operatorname{GSp}_{2g,\mathbb{Q}},S^{\pm})$ of Example 1.2 is $\mathbb{Q}$. To wit, one of the cocharacters in the conjugacy class $\\{\mu\\}$ is $\mu(z)=\left(\begin{smallmatrix}z&&&&&\\\ &\ddots&&&&\\\ &&z&&&\\\ &&&1&&\\\ &&&&\ddots&\\\ &&&&&1\end{smallmatrix}\right).$

###### Notation 1.7. We denote the ring of (rational) adeles by $\mathbb{A}:=\mathbb{A}_{\mathbb{Q}}$, the subring of finite adeles by $\mathbb{A}_{f}:=\mathbb{A}_{\mathbb{Q},f}$ and the subring of finite adeles away from some fixed prime $p$ by $\mathbb{A}_{f}^{p}$.

###### Definition and Remark 1.8. Let $K\subseteq G(\mathbb{A}_{f})$ be a compact open subgroup. The _Shimura variety of level $K$ associated with $(G,X)$_ is the double coset space $\operatorname{Sh}_{K}(G,X):=G(\mathbb{Q})\backslash(X\times(G(\mathbb{A}_{f})/K)).$ A priori, this is just a set, but if $K$ is sufficiently small (i.e., “neat” in the sense of [Bor69, Pin90]), $\operatorname{Sh}_{K}(G,X)$ can be canonically written as a finite disjoint union of quotients of hermitian symmetric domains by neat arithmetic subgroups. (If $K$ fails to be sufficiently small, one might very reasonably argue that our definition of the Shimura variety of level $K$ really is the definition of the _coarse_ Shimura variety and that one should be working with stacks instead. Since we will only be interested in sufficiently small level, this is inconsequential for us.) In particular, this gives $\operatorname{Sh}_{K}(G,X)$ the structure of a complex manifold. In fact, by the theorem of Baily-Borel, this complex manifold attains the structure of a quasi-projective complex variety in a canonical way. By work of Deligne, Milne and Borovoi, this variety is already defined (and again in a canonical way) over the reflex field $\mathbf{E}$. So in particular, it is defined over a number field independent of $K$. This is important when varying $K$ and it is the reason why we consider the whole Shimura variety instead of its connected components over $\mathbb{C}$ on their own. It is possible for the Shimura variety to have multiple connected components over $\mathbb{C}$ while being connected over $\mathbf{E}$. More detailed explanations may be found in [Mil05].
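As an illustration of the last two points (a standard example, not spelled out in the original): take $(G,X)=(\operatorname{GL}_{2,\mathbb{Q}},\mathbb{H}^{\pm})=(\operatorname{GSp}_{2,\mathbb{Q}},S^{\pm})$ and $K=K(N):=\ker(\operatorname{GL}_{2}(\hat{\mathbb{Z}})\to\operatorname{GL}_{2}(\mathbb{Z}/N))$ for $N\geq 3$. Strong approximation gives $\pi_{0}(\operatorname{Sh}_{K}(G,X)(\mathbb{C}))\cong\mathbb{Q}^{\times}_{>0}\backslash\mathbb{A}_{f}^{\times}/\det(K)\cong(\mathbb{Z}/N)^{\times},$ so over $\mathbb{C}$ the Shimura variety is a disjoint union of $\varphi(N)$ copies of the modular curve $\Gamma(N)\backslash\mathbb{H}$, while over the reflex field $\mathbf{E}=\mathbb{Q}$ it is connected: the Galois group acts on the component set $(\mathbb{Z}/N)^{\times}$ through the cyclotomic character, hence transitively, and the scheme of connected components is $\operatorname{Spec}\mathbb{Q}(\zeta_{N})$.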
### 1.2 Bruhat-Tits buildings

Let $K$ be a complete discrete valuation field with ring of integers $\mathcal{O}$, uniformizer $\varpi$ and perfect residue field $\kappa:=\mathcal{O}/\varpi$.

###### Notation 1.9. For a (connected) reductive group $G$ over $K$, we denote by $\mathcal{B}(G,K)$ the extended (or enlarged) and by $\mathcal{B}^{\mathrm{red}}(G,K)$ the reduced (i.e., non-extended) Bruhat-Tits building of $G$ over $K$ [BT84]. Moreover, $\mathcal{B}^{\mathrm{abstract}}(G,K)$ denotes the underlying abstract simplicial complex.

###### Remark 1.10. Let $V$ be a finite-dimensional $K$-vector space. As described in [KP15, 1.1.9] (originally in [BT84a]), the points of $\mathcal{B}(\operatorname{GL}(V),K)$ correspond to graded periodic lattice chains $(\mathcal{L},c)$, i.e.,
* • $\emptyset\neq\mathcal{L}$ is a totally ordered set of full $\mathcal{O}$-lattices in $V$ stable under scalar multiplication (i.e., $\Lambda\in\mathcal{L}\iff\varpi\Lambda\in\mathcal{L}$),
* • $c\colon\mathcal{L}\to\mathbb{R}$ is a strictly decreasing function such that $c(\varpi^{n}\Lambda)=c(\Lambda)+n$.

###### Remark 1.11. Fix such an $\mathcal{L}$ and let $\Lambda^{0}\in\mathcal{L}$. Then every homothety class of lattices has a unique representative $\Lambda$ such that $\Lambda\subseteq\Lambda^{0}$ and $\Lambda\not\subseteq\varpi\Lambda^{0}$. Consider such representatives $\Lambda^{i}$ for all of the distinct homothety classes of lattices that make up $\mathcal{L}$. Because $\mathcal{L}$ is totally ordered and $\Lambda^{i}\not\subseteq\varpi\Lambda^{0}$, it follows that $\Lambda^{i}\supseteq\varpi\Lambda^{0}$ for all $i$ and that $\left\\{\Lambda^{i}/\varpi\Lambda^{0}\right\\}_{i}$ is a flag of non-trivial linear subspaces of $\Lambda^{0}/\varpi\Lambda^{0}\cong\kappa^{n}$, where $n:=\dim V$. Consequently, the number $r$ of homothety classes is in $\\{1,\dotsc,n\\}$; it is called the _period length_ (or _rank_) of $\mathcal{L}$. Numbering the $\Lambda^{i}$ in descending order we hence obtain $r$ lattices $\Lambda^{0},\Lambda^{1},\dotsc,\Lambda^{r-1}$ such that $\Lambda^{0}\supsetneqq\Lambda^{1}\supsetneqq\dotsb\supsetneqq\Lambda^{r-1}\supsetneqq\varpi\Lambda^{0}$ (1.12) and $\mathcal{L}$ is given by the strictly descending sequence of lattices $\Lambda^{qr+i}=\varpi^{q}\Lambda^{i},\quad q\in\mathbb{Z},\;0\leq i<r.$
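For orientation, a minimal example (not in the original text): take $V=K^{2}$ with basis $e_{1},e_{2}$ and set $\Lambda^{0}=\mathcal{O}e_{1}\oplus\mathcal{O}e_{2}\supsetneqq\Lambda^{1}=\mathcal{O}e_{1}\oplus\varpi\mathcal{O}e_{2}\supsetneqq\varpi\Lambda^{0},$ so that $\mathcal{L}=\\{\varpi^{q}\Lambda^{0},\varpi^{q}\Lambda^{1}\;|\;q\in\mathbb{Z}\\}$ has period length $r=2=n$. A grading is given, e.g., by $c(\varpi^{q}\Lambda^{0})=q$ and $c(\varpi^{q}\Lambda^{1})=q+\tfrac{1}{2}$ (strictly decreasing for the ordering by inclusion, and satisfying $c(\varpi\Lambda)=c(\Lambda)+1$). The stabilizer of this $\mathcal{L}$ in $\operatorname{GL}_{2}(K)$ is the standard Iwahori subgroup, consisting of the matrices in $\operatorname{GL}_{2}(\mathcal{O})$ that are upper triangular modulo $\varpi$.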
###### Remark 1.13. Let $V$ be a finite-dimensional symplectic $K$-vector space. $\mathcal{B}(\operatorname{GSp}(V),K)$ embeds into the subset of $\mathcal{B}(\operatorname{GL}(V),K)$ consisting of those $(\mathcal{L},c)$ such that $\Lambda\in\mathcal{L}\implies\Lambda^{\vee}\in\mathcal{L}$. Passing to the underlying abstract simplicial complexes means forgetting about the grading $c$ and $\mathcal{B}^{\mathrm{abstract}}(\operatorname{GSp}(V),K)=\\{\mathcal{L}\in\mathcal{B}^{\mathrm{abstract}}(\operatorname{GL}(V),K)\;|\;\Lambda\in\mathcal{L}\implies\Lambda^{\vee}\in\mathcal{L}\\}.$ If $\mathcal{L}\in\mathcal{B}^{\mathrm{abstract}}(\operatorname{GSp}(V),K)$ and $\\{\Lambda^{i}\\}_{i}$ is as in Remark 1.11, then there is an involution $t\colon\mathbb{Z}\to\mathbb{Z}$ with $\left(\Lambda^{i}\right)^{\vee}=\Lambda^{t(i)}$, $t(i+qr)=t(i)-qr$, and $i<j\implies t(i)>t(j)$. So $-a:=t(0)>t(1)>\dotsb>t(r)=-a-r$, which implies $t(i)=-i-a$ (there is only one strictly decreasing integer sequence with these endpoints). Thus $i_{0}-t(i_{0})=2i_{0}+a\in\\{0,1\\}$ for some unique $i_{0}\in\mathbb{Z}$. Hence, upon renumbering the $\Lambda^{i}$, we may assume that $a\in\\{0,1\\}$. We therefore have $\displaystyle\varpi\Lambda^{0}\subsetneqq\Lambda^{r-1}\subsetneqq\Lambda^{r-2}\subsetneqq\dotsb\subsetneqq\Lambda^{0}\subseteq\left(\Lambda^{0}\right)^{\vee}=\Lambda^{-a}\subsetneqq\left(\Lambda^{1}\right)^{\vee}=\Lambda^{-1-a}$ $\displaystyle\subsetneqq\dotsb\subsetneqq\left(\Lambda^{r-1}\right)^{\vee}=\Lambda^{-r+1-a}\subseteq\Lambda^{-r}=\varpi^{-1}\Lambda^{0}.$
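To see the involution in action, here is a worked example (an illustration, not from the original text): let $V=K^{4}$ with symplectic basis $e_{1},e_{2},e_{-2},e_{-1}$ and the antidiagonal symplectic form from Example 1.2, and take the chain of period length $r=2$ given by $\Lambda^{0}=\mathcal{O}^{4}$ and $\Lambda^{1}=\mathcal{O}e_{1}\oplus\mathcal{O}e_{2}\oplus\varpi\mathcal{O}e_{-2}\oplus\varpi\mathcal{O}e_{-1}$. Since the Gram matrix of the form is invertible over $\mathcal{O}$, we get $\left(\Lambda^{0}\right)^{\vee}=\Lambda^{0}$, i.e., $t(0)=0$ and $a=0$; and pairing off the dual basis vectors gives $\left(\Lambda^{1}\right)^{\vee}=\varpi^{-1}\Lambda^{1}=\Lambda^{-1}$, i.e., $t(1)=-1$, in accordance with $t(i)=-i-a$.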
### 1.3 Alteration of the Hodge embedding

###### Notation 1.14. Let $E$ be a finite field extension of $\mathbb{Q}_{p}$. Denote by $\breve{E}$ the completion of the maximal unramified extension of $E$ (hence $\breve{E}=E\cdot\breve{\mathbb{Q}}_{p}$).

###### Remark 1.15. If $E/\mathbb{Q}_{p}$ is unramified, then ${{\cal O}_{\breve{E}}}=W(\bar{\mathbb{F}}_{p})$, $\bar{\mathbb{F}}_{p}$ denoting an algebraic closure of $\mathbb{F}_{p}$ and $W\colon\mathrm{Ring}\to\mathrm{Ring}$ being the ($p$-adic) Witt vectors functor. This generalizes to the ramified case using _ramified Witt vectors_ instead, see e.g. [Haz78, Chap. IV, (18.6.13)] or [Ahs11, Chapter 1].

Let $(G,X)$ be a Shimura datum of Hodge type, let $(G,X)\hookrightarrow(\operatorname{GSp}(V),S^{\pm})$ be an embedding as in Definition 1.1 (4), and let $x\in\mathcal{B}(G,\mathbb{Q}_{p})$ be a point in the Bruhat-Tits building of $G$ over $\mathbb{Q}_{p}$. We consider the associated Bruhat-Tits scheme ${\cal G}_{x}$, i.e., the affine smooth model of $G_{\mathbb{Q}_{p}}$ over $\mathbb{Z}_{p}$ such that ${\cal G}_{x}(\breve{\mathbb{Z}}_{p})\subseteq G(\breve{\mathbb{Q}}_{p})$ is the stabilizer of the facet of $x$ in ${\cal B}(G,\breve{\mathbb{Q}}_{p})=\mathcal{B}(G,\mathbb{Q}^{\mathrm{ur}}_{p})$ (see [Lan00, Prop. 2.1.3]). Let $K_{p}:={\cal G}_{x}(\mathbb{Z}_{p})\subseteq G(\mathbb{Q}_{p})$ and let $K^{p}\subseteq G(\mathbb{A}_{f}^{p})$ be a sufficiently small open compact subgroup. Define $K:=K_{p}K^{p}\subseteq G(\mathbb{A}_{f})$.

###### Assumptions 1.16. From now on, we will always make the following assumptions:
* • $\mathcal{G}_{x}=\mathcal{G}_{x}^{\circ}$ is connected.
* • $G$ splits over a tamely ramified extension of $\mathbb{Q}_{p}$.
* • $p\nmid\\#\pi_{1}(G^{\mathrm{der}})$.

###### Notation 1.17. In order not to make notation overly cumbersome, we usually denote the base change $G_{\mathbb{Q}_{p}}$ of $G$ to $\mathbb{Q}_{p}$ by $G$ again. (Later, we will almost exclusively be dealing with $G_{\mathbb{Q}_{p}}$.)

Under the above assumptions, Kisin and Pappas construct in [KP15, section 1.2] (building on [Lan00]) a toral $G(\breve{\mathbb{Q}}_{p})$\- and $\operatorname{Gal}(\breve{\mathbb{Q}}_{p}/\mathbb{Q}_{p})$-equivariant embedding $\iota\colon\mathcal{B}(G,\breve{\mathbb{Q}}_{p})\to\mathcal{B}(\operatorname{GL}(V),\breve{\mathbb{Q}}_{p}),$ restricting to a map $\iota\colon\mathcal{B}(G,\mathbb{Q}_{p})\to\mathcal{B}(\operatorname{GL}(V),\mathbb{Q}_{p})$ (in this case, this is obvious from Galois-equivariance, since $\mathcal{B}(G,K)=\mathcal{B}(G,K^{\prime})^{\operatorname{Gal}(K^{\prime}/K)}$ if $K^{\prime}/K$ is an unramified extension). $\iota$ being a toral embedding means the following: $\iota$ is isometric after a suitable normalization of the norm on $\mathcal{B}(G,\breve{\mathbb{Q}}_{p})$; moreover it is the canonical extension of a map ${\mathcal{B}^{\mathrm{red}}(G,\breve{\mathbb{Q}}_{p})\to\mathcal{B}^{\mathrm{red}}(\operatorname{GL}(V),\breve{\mathbb{Q}}_{p})}$ such that for each maximal $\breve{\mathbb{Q}}_{p}$-split torus $S\subseteq G$ there exists a maximal $\breve{\mathbb{Q}}_{p}$-split torus $T\subseteq\operatorname{GL}(V)$ such that $G\hookrightarrow\operatorname{GL}(V)$ maps $S$ into $T$ and $\iota$ restricts to a map between the reduced apartments associated with $(G,S)$ and $(\operatorname{GL}(V),T)$, respectively, compatible with translations.

This embedding $\iota$ depends on some choices: By assumption, there exists a tamely ramified Galois extension $\tilde{K}/\mathbb{Q}_{p}$ with finite inertia group such that $G_{\tilde{K}}$ is split reductive. Let $H\to\operatorname{Spec}\mathbb{Z}_{p}$ be the split Chevalley form of $G$ over $\mathbb{Z}_{p}$. Then $\iota$ depends on the choice of
* • an isomorphism $G_{\tilde{K}}\cong H_{\tilde{K}}$,
* • a pinning $\left(T,M,f,R,\Delta,\left(X_{\alpha}\right)_{\alpha\in\Delta}\right)$ of $H$ (cf. [SGA3, Exp. XXIII]; since $\operatorname{Spec}\mathbb{Z}_{p}$ is connected, some technicalities from _loc. cit._ disappear here), which also entails the following: Let $B$ be the Borel subgroup of $H$ corresponding to $\Delta$. When we talk about roots, it is with respect to $T$, and when we talk about positive roots and so on, it is with respect to $(T,B)$. We also fix a hyperspecial vertex $x_{o}$ in $\mathcal{B}(H,\mathbb{Q}_{p})$ with stabilizer $H(\mathbb{Z}_{p})$.
* • for every $\mathbb{Q}_{p}$-irreducible summand $V_{i}$ of the representation $G_{\mathbb{Q}_{p}}\to\operatorname{GL}(V)_{\mathbb{Q}_{p}}$ (note that by reductivity and $\operatorname{char}(\mathbb{Q}_{p})=0$, every finite-dimensional representation of $G_{\mathbb{Q}_{p}}$ is completely reducible) a lattice $\Lambda_{i}=U(\mathfrak{n}^{-})v_{i}\subseteq V_{i}$, where $v_{i}\neq 0$ is a highest weight vector of $V_{i}$ and $\mathfrak{n}^{-}$ is the (strictly) negative root space inside the Lie algebra of $H$ over $\mathbb{Z}_{p}$,
* • a grading $c_{\Lambda_{i}}+t_{i}$ of the lattice chain $\\{p^{n}\Lambda_{i}\\}_{n\in\mathbb{Z}}$ given by real numbers $t_{i}\in\mathbb{R}$. Here ${(c_{\Lambda_{i}}+t_{i})(p^{n}\Lambda_{i})}:=n+t_{i}$.

By [KP15, Lemma 2.3.3], we can (and do) arrange these choices in such a way that $\iota$ factors $\mathcal{B}(G,\breve{\mathbb{Q}}_{p})\xrightarrow{j}\mathcal{B}(\operatorname{GSp}(V),\breve{\mathbb{Q}}_{p})\to\mathcal{B}(\operatorname{GL}(V),\breve{\mathbb{Q}}_{p}),$ where the last map is the canonical toral embedding (whose definition will be clear from Remark 1.13).
Again, $j$ restricts to a map $j\colon\mathcal{B}(G,\mathbb{Q}_{p})\hookrightarrow\mathcal{B}(\operatorname{GSp}(V),\mathbb{Q}_{p}).$ (1.18) Let $y=(\mathcal{L},c)$ be the image of $x\in\mathcal{B}(G,\mathbb{Q}_{p})$ under the map (1.18) and let $\left(\Lambda^{i}\right)_{i},r,a$ be as in Remark 1.13. We define $N_{p}:=\operatorname{Stab}_{\operatorname{GSp}(V)(\mathbb{Q}_{p})}(\mathcal{L})$. Consider the symplectic $\mathbb{Q}$-vector space $V^{\S}:=\bigoplus_{i=-(r-1)-a}^{r-1}V$ (direct sum of symplectic spaces, i.e., if $\psi$ denotes the symplectic form on $V$, then the symplectic form $\psi^{\S}$ on $V^{\S}$ is given by $\bigoplus_{i=-(r-1)-a}^{r-1}\psi$) and the lattice in $V^{\S}_{\mathbb{Q}_{p}}$ $\Lambda^{\S}:=\bigoplus_{i=-(r-1)-a}^{r-1}\Lambda^{i}.$ By replacing $\Lambda^{\S}$ by a homothetic lattice, we may assume that $\Lambda^{\S}\subseteq\left(\Lambda^{\S}\right)^{\vee}$ (hence $\left(\Lambda^{\S}\right)^{\vee\vee}=\Lambda^{\S}$). We have a diagonal embedding $\operatorname{GSp}(V)\hookrightarrow\operatorname{GSp}(V^{\S})$ and (with calligraphic script meaning that we talk about Bruhat-Tits group schemes) $\mathcal{GSP}(V)_{y}=\operatorname{Stab}_{\operatorname{GSp}(V)}(\mathcal{L})=\bigcap\operatorname{Stab}_{\operatorname{GSp}(V)}(\Lambda^{i})\subseteq\operatorname{GSp}(\Lambda^{\S})\subseteq\operatorname{GL}(\Lambda^{\S})\subseteq\mathcal{GL}(V^{\S})_{\Lambda^{\S}},$ so that upon replacing our original embedding $G\hookrightarrow\operatorname{GSp}(V)$ by the embedding $G\hookrightarrow\operatorname{GSp}(V)\hookrightarrow\operatorname{GSp}(V^{\S})$, we can assume that the parahoric subgroup on the Siegel side is given as $\operatorname{Stab}_{\operatorname{GL}(V^{\S})(\mathbb{Q}_{p})}(\Lambda^{\S})\cap\operatorname{GSp}(V^{\S})(\mathbb{Q}_{p}),$ i.e., that our level is essentially given by a single lattice, albeit one that is (in general) not self-dual. This is [KP15, 2.3.15].
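Continuing the running example from Remark 1.13 with $K=\mathbb{Q}_{p}$ and $\varpi=p$ (again only as an illustration): there $r=2$ and $a=0$, so $V^{\S}=V\oplus V\oplus V$ (indices $i=-1,0,1$) and $\Lambda^{\S}=\Lambda^{-1}\oplus\Lambda^{0}\oplus\Lambda^{1}$ with $\Lambda^{-1}=p^{-1}\Lambda^{1}$. Componentwise dualizing gives $\left(\Lambda^{\S}\right)^{\vee}=\Lambda^{1}\oplus\Lambda^{0}\oplus\Lambda^{-1}$, which does not contain $\Lambda^{\S}$; but the homothetic lattice $p\Lambda^{\S}$ satisfies $p\Lambda^{\S}\subseteq\left(p\Lambda^{\S}\right)^{\vee}=p^{-1}\left(\Lambda^{\S}\right)^{\vee}$, illustrating the replacement made above.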
### 1.4 Siegel integral models

With notation as above let $\displaystyle N_{p}$ $\displaystyle:=\operatorname{Stab}_{\operatorname{GSp}(V)(\mathbb{Q}_{p})}(\mathcal{L})\quad\text{(as before)},$ $\displaystyle J_{p}$ $\displaystyle:=\operatorname{Stab}_{\operatorname{GL}(V^{\S})(\mathbb{Q}_{p})}(\Lambda^{\S})\cap\operatorname{GSp}(V^{\S})(\mathbb{Q}_{p}).$ Let $N^{p}\subseteq\operatorname{GSp}(V)(\mathbb{A}_{f}^{p})$ and $J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$ be sufficiently small open compact subgroups, and $N:=N_{p}N^{p}$, $J:=J_{p}J^{p}$. In this subsection, we are going to describe integral models of $\operatorname{Sh}_{N}(\operatorname{GSp}(V),S^{\pm})$ and of $\operatorname{Sh}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ over $\mathbb{Z}_{(p)}$ and relate the two.

###### Remark 1.19. By [RZ96, Definition 6.9], the integral model $\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})$ is given by the moduli problem $(\mathbb{Z}_{(p)}\text{-scheme})\ni S\mapsto\left\\{(A,\bar{\lambda},\eta^{p})\right\\}/{\scriptstyle\cong}$, where:
(a) $A=\left(A_{\Lambda}\right)_{\Lambda\in\mathcal{L}}$ is an $\mathcal{L}$-set of abelian schemes, i.e.,
* • for every $\Lambda\in\mathcal{L}$, an abelian $S$-scheme up to $\mathbb{Z}_{(p)}$-isogeny $A_{\Lambda}$ (i.e., $A_{\Lambda}$ is an object of the category $(\text{abelian }S\text{-schemes})\otimes\mathbb{Z}_{(p)}$, where the category $\mathcal{A}\otimes R$ for $\mathcal{A}$ a preadditive category and $R$ a ring has the same objects as $\mathcal{A}$ and $\operatorname{Hom}_{\mathcal{A}\otimes R}(X,Y)=\operatorname{Hom}(X,Y)\otimes_{\mathbb{Z}}R$ for all objects $X,Y$),
* • for every inclusion $\Lambda_{1}\subseteq\Lambda_{2}$ a $\mathbb{Z}_{(p)}$-isogeny $\rho_{\Lambda_{2},\Lambda_{1}}\colon A_{\Lambda_{1}}\to A_{\Lambda_{2}}$,
* • $\rho_{\Lambda_{3},\Lambda_{1}}=\rho_{\Lambda_{3},\Lambda_{2}}\circ\rho_{\Lambda_{2},\Lambda_{1}}$ if $\Lambda_{1}\subseteq\Lambda_{2}\subseteq\Lambda_{3}$ in $\mathcal{L}$,
* • the height of $\rho_{\Lambda_{2},\Lambda_{1}}$ is $\log_{p}|\Lambda_{2}/\Lambda_{1}|$. Here $\rho_{\Lambda_{2},\Lambda_{1}}$ gives rise to a well-defined homomorphism of $p$-divisible groups, and what we mean is that the kernel of this homomorphism (which is a finite locally free commutative group scheme, which we also refer to simply as the kernel of $\rho_{\Lambda_{2},\Lambda_{1}}$) is to have order $|\Lambda_{2}/\Lambda_{1}|$.
* • For every $\Lambda\in\mathcal{L}$, there is an isomorphism (called _periodicity isomorphism_) $\theta_{\Lambda}\colon A_{\Lambda}\to A_{p\Lambda}$ such that $\rho_{\Lambda,p\Lambda}\circ\theta_{\Lambda}=[p]\colon A_{\Lambda}\to A_{\Lambda}$ is the multiplication-by-$p$ isogeny.
(b) $\bar{\lambda}\colon A\to\tilde{A}$ is a $\mathbb{Q}$-homogeneous principal polarization, i.e., a $\underline{\mathbb{Q}^{\times}}$-orbit of a principal polarization $\lambda\colon A\to\tilde{A}$. Here $\tilde{A}$ is the $\mathcal{L}$-set of abelian schemes over $S$ up to prime-to-$p$ isogeny given by $\tilde{A}_{\Lambda}:=(A_{\Lambda^{\vee}})^{\vee}$. And being a polarization $\lambda$ means being a quasi-isogeny of $\mathcal{L}$-sets $\lambda\colon A\to\tilde{A}$ such that $A_{\Lambda}\xrightarrow{\lambda_{\Lambda}}\tilde{A}_{\Lambda}=(A_{\Lambda^{\vee}})^{\vee}\xrightarrow{\varrho_{\Lambda^{\vee},\Lambda}^{\vee}}(A_{\Lambda})^{\vee}$ is a polarization of $A_{\Lambda}$ for all $\Lambda$. If $\lambda_{\Lambda}$ can be chosen to be an isomorphism up to prime-to-$p$ isogeny for all $\Lambda$, then we speak of a principal polarization. In that case, when referring to $\lambda_{\Lambda}$, we mean a $\lambda_{\Lambda}$ which is an isomorphism up to prime-to-$p$ isogeny.
(c) $\eta^{p}$ is a level-$N^{p}$-structure, i.e. (if $S$ is connected), it is a $\pi_{1}(S,s)$-invariant $N^{p}$-orbit of symplectic similitudes $\eta^{p}\colon V_{\mathbb{A}_{f}^{p}}\to H_{1}(A_{s},\mathbb{A}_{f}^{p})$ (where $s$ is some geometric basepoint and $H_{1}(A_{s},\mathbb{A}_{f}^{p})$ with its $\pi_{1}(S,s)$-action corresponds to the Tate $\mathbb{A}_{f}^{p}$-module of $A$ (cf. [RZ96, 6.8]), which is a smooth $\mathbb{A}_{f}^{p}$-sheaf).
Note that this forces the abelian schemes $A_{\Lambda}$ to be $\tfrac{1}{2}(\dim_{\mathbb{Q}}V)$-dimensional.
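For orientation (a special case, not stated in the original): if the period length is $r=1$ and $\Lambda^{0}$ is self-dual, so that $\mathcal{L}=\\{p^{q}\Lambda^{0}\\}_{q\in\mathbb{Z}}$, then an $\mathcal{L}$-set is, via the periodicity isomorphisms, just a single abelian scheme $A_{\Lambda^{0}}$ up to $\mathbb{Z}_{(p)}$-isogeny, $\bar{\lambda}$ is a $\mathbb{Q}$-homogeneous principal polarization of it, and the moduli problem recovers the classical Siegel moduli problem of principally polarized abelian schemes (up to $\mathbb{Z}_{(p)}$-isogeny) with level-$N^{p}$-structure, i.e., hyperspecial level at $p$.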
###### Definition 1.20. Set $\Lambda^{\S}_{\mathbb{Z}_{(p)}}:=\Lambda^{\S}_{\mathbb{Z}_{p}}\cap V^{\S}_{\mathbb{Q}}=\prod_{i=-(r-1)-a}^{r-1}\Lambda_{\mathbb{Z}_{(p)}}^{i}$. We choose a lattice $\Lambda^{\S}_{\mathbb{Z}}\subseteq V^{\S}$ such that $\Lambda^{\S}_{\mathbb{Z}}\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}=\Lambda^{\S}_{\mathbb{Z}_{(p)}}$ and $\Lambda^{\S}_{\mathbb{Z}}\subseteq(\Lambda^{\S}_{\mathbb{Z}})^{\vee}$.

###### Remark 1.21. Set $d:=\bigl{|}\left(\Lambda_{\mathbb{Z}}^{\S}\right)^{\vee}/\Lambda_{\mathbb{Z}}^{\S}\bigr{|}$. By [Kis10, 2.3.3, 3.2.4], the integral model $\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ is given by the moduli problem $(\mathbb{Z}_{(p)}\text{-schemes})\ni S\mapsto\left\\{(A^{\S},\lambda^{\S},\epsilon^{p})\right\\}/{\scriptstyle\cong}$, where
(a) $A^{\S}$ is an abelian scheme over $S$ up to $\mathbb{Z}_{(p)}$-isogeny,
(b) $\lambda^{\S}\colon A^{\S}\to\left(A^{\S}\right)^{\vee}$ is a polarization of degree $d$ (i.e., the polarization of the (well-defined) associated $p$-divisible group has degree $d$),
(c) $\epsilon^{p}$ is a level-$J^{p}$-structure, i.e. (if $S$ is connected), it is a $\pi_{1}(S,s)$-invariant $J^{p}$-orbit of symplectic similitudes $\epsilon^{p}\colon V^{\S}_{\mathbb{A}_{f}^{p}}\to H_{1}(A^{\S}_{s},\mathbb{A}_{f}^{p})$.
Note that this forces the abelian schemes $A^{\S}$ to be $\tfrac{1}{2}(\dim_{\mathbb{Q}}V^{\S})$-dimensional.

This completes the descriptions of the moduli problems, and we turn to the question of the relationship between the two. Consider (for appropriate $N^{p},J^{p}$; see below) the morphism $\chi\colon\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ given on $S$-valued points by sending $(A,\bar{\lambda},\eta^{p})$ to $(A^{\S},\lambda^{\S},\epsilon^{p})$, where
(a) $\displaystyle A^{\S}:=\prod_{i=-(r-1)-a}^{r-1}A_{\Lambda^{i}}$,
(b) $\displaystyle\lambda^{\S}:=\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{\Lambda^{i}}\right)$,
(c) $\epsilon^{p}$ is the product $\prod_{i=-(r-1)-a}^{r-1}\eta^{p}$, to be interpreted as the product over $\eta^{p}\colon V_{\mathbb{A}_{f}^{p}}\to H_{1}(A_{\Lambda^{i},s},\mathbb{A}_{f}^{p})\cong H_{1}(A_{s},\mathbb{A}_{f}^{p})$, where the isomorphism $H_{1}(A_{\Lambda^{i},s},\mathbb{A}_{f}^{p})\cong H_{1}(A_{s},\mathbb{A}_{f}^{p})$ is by definition the identity for some fixed $i=i_{0}$ and otherwise induced by the transition map $\rho_{\Lambda^{i},\Lambda^{i_{0}}}$.
We need that $N^{p}$ is mapped into $J^{p}$ by $\operatorname{GSp}(V)\hookrightarrow\operatorname{GSp}(V^{\S})$ for this to make sense.
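In the running example ($r=2$, $a=0$; an illustration, not from the original): $\chi$ sends $(A,\bar{\lambda},\eta^{p})$ to the triple consisting of $A^{\S}=A_{\Lambda^{-1}}\times A_{\Lambda^{0}}\times A_{\Lambda^{1}}$, the product polarization built from the $\lambda_{\Lambda^{i}}$ and the duals of the transition maps, and the diagonal level structure. The three factors are $\mathbb{Z}_{(p)}$-isogenous to each other via the transition maps, but remembering all three factors (rather than a single one) is what encodes the parahoric level $N_{p}$ on the Siegel side.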
###### Lemma 1.22. Let $S$ be a scheme, $\ell\neq p$ prime numbers. If $\ell$ does not appear as a residue characteristic of $S$, then the Tate module functors $\displaystyle H_{1}(\\_,\mathbb{Z}_{\ell})$ $\displaystyle\colon(\text{étale }\mathbb{Z}_{\ell}\text{-local systems on }S\text{, from abelian }S\text{-schemes}),$ $\displaystyle H_{1}(\\_,\mathbb{Q}_{\ell})$ $\displaystyle\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{Q}_{\ell}\text{-local systems on }S)$ (cf. [Gro74, III, 5.4 and 6.2] for precise definitions) are faithful, where $H_{1}(\\_,\mathbb{Z}_{\ell})\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{Z}_{\ell}\text{-local systems on }S)$. If only $p$ and $0$ appear as residue characteristics of $S$, then the Tate module functor $H_{1}(\\_,\mathbb{A}_{f}^{p})\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{A}_{f}^{p}\text{-local systems on }S)$ is faithful.

###### Proof: First note that the statements about $H_{1}(\\_,\mathbb{Q}_{\ell})$ and $H_{1}(\\_,\mathbb{A}_{f}^{p})$ follow from the statement about $H_{1}(\\_,\mathbb{Z}_{\ell})$, which is why it is enough to only look at $H_{1}(\\_,\mathbb{Z}_{\ell})$. A homomorphism of abelian $S$-schemes $f\colon A\to B$ vanishes if and only if it vanishes over every (geometric) fiber of $S$: Indeed, if it vanishes fiberwise, then it is flat by the fiber criterion for flatness. Applying that criterion again we see that the closed immersion and fiberwise isomorphism $\ker(f)\hookrightarrow A$ is flat, which means that it is an isomorphism. This way we are reduced to the case where the base is an (algebraically closed) field of characteristic different from $\ell$. In this setting the faithfulness is well-known (the salient point being that the $\ell$-primary torsion is dense). □

###### Lemma 1.23. Let $H$ be a totally disconnected locally compact group (i.e., a locally profinite group; by our definition, locally compact implies Hausdorff) and let $N\subseteq H$ be a compact subgroup. Then $N=\bigcap_{\begin{subarray}{c}N\subseteq J\\\ J\subseteq H\text{ open compact subgrp.}\end{subarray}}J.$ Note that this is (a variant of) a well-known theorem by van Dantzig if $N=\\{1\\}$ [Dan36].

###### Proof: We make use of the following fact [AT08, Prop. 3.1.7]: A Hausdorff space is locally compact and totally disconnected if and only if the open compact sets form a basis of the topology. (Van Dantzig’s theorem is the group version of this, which talks only about a neighborhood basis of the identity and open compact _subgroups_.) First we show that $N$ is contained in some open compact subset $K\subseteq H$. For every $x\in N$ choose a compact open neighborhood $x\in K_{x}\subseteq H$. This is possible by the fact cited above. Then there is a finite subset $I\subseteq N$ such that $N\subseteq\bigcup_{x\in I}K_{x}=:K$. Next, for every $x\in N$ choose an open neighborhood of the identity $U_{x}$ such that $xU_{x}K\subseteq K$. With $N\subseteq U:=\bigcup_{x\in N}xU_{x}$ we obtain $UK\subseteq K$. Replacing $U$ by $U\cap U^{-1}$, we may moreover assume it is symmetric. The subgroup generated by $U$ is open (hence closed) and contained in $K$, hence is an open compact subgroup. Thus $N$ is even contained in an open compact _subgroup_; in other words, we may assume that $H$ is compact, i.e., is a profinite group. Then $H/N$ is compact (Hausdorff quotient spaces of compact spaces are compact again, but for “locally compact” the analogous statement is not true in general!) and totally disconnected, i.e., is a Stone space. (For total disconnectedness: take $x,y\in H$ such that $xN\neq yN$; we show that any subspace $S\subseteq H/N$ containing both $xN$ and $yN$ is disconnected. Let $U\subseteq H/N$ be a neighborhood of $xN$ not containing $yN$. Let $x\in V\subseteq\pi^{-1}(U)$ be open and compact, where $\pi\colon H\to H/N$ is the projection. Then $yN\notin\pi(V)\subseteq H/N$ is open and compact (hence closed) and we have $S=(\pi(V)\cap S)\sqcup S\setminus\pi(V)$ where both $\pi(V)\cap S$ and $S\setminus\pi(V)$ are open in $S$. This shows that $S$ is disconnected.) By the fact cited above, $H/N\supseteq\\{1\\}=\bigcap_{L\subseteq H/N\text{ open compact subset}}L.$ Observe that the quotient map $H\to H/N$ is proper to deduce $N=\bigcap_{\begin{subarray}{c}N\subseteq M\\\ M\subseteq H\text{ open compact subset}\end{subarray}}M.$ Say $M$ is an open and compact subset of $H$ containing $N$.
As we have shown above, there is an open compact subgroup $J\subseteq H$ in between $N$ and $M$, and this is all we need to complete the proof. □

###### Proposition 1.24. For every compact open subgroup $N^{p}\subseteq\operatorname{GSp}(V)(\mathbb{A}_{f}^{p})$ $\chi\colon\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ is a well-defined morphism for all compact open subgroups $N^{p}\subseteq J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$ and is a closed immersion for all sufficiently small compact open subgroups $N^{p}\subseteq J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$.

###### Proof: That $\chi$ is well-defined is clear from the construction. To show the second statement, as in [Del71, Prop. 1.15], it is enough to show that $\mathscr{S}_{N_{p}N^{p}}(\operatorname{GSp}(V),S^{\pm})\to\varprojlim_{J^{p}}\mathscr{S}_{J_{p}J^{p}}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ is a closed immersion, i.e., a proper monomorphism. We begin by proving that it is a monomorphism, i.e., injective on $S$-valued points ($S$ arbitrary $\mathbb{Z}_{(p)}$-scheme). So, say $(A_{1},\lambda_{1},\eta_{1}^{p})$ and $(A_{2},\lambda_{2},\eta_{2}^{p})$ both map to $(A^{\S},\lambda^{\S},\epsilon_{J^{p}}^{p})$. That means precisely that there is an isomorphism of abelian $S$-schemes up to $\mathbb{Z}_{(p)}$-isogeny $\phi\colon\prod_{i=-(r-1)-a}^{r-1}A_{1,\Lambda^{i}}\xrightarrow{\cong}\prod_{i=-(r-1)-a}^{r-1}A_{2,\Lambda^{i}}$ such that $\phi^{\vee}\circ\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{2,\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{2,\Lambda^{i}}\right)\circ\phi=\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{1,\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{1,\Lambda^{i}}\right)$ and $H_{1}(\phi,\mathbb{A}_{f}^{p})\circ\epsilon_{1,J^{p}}^{p}=\epsilon_{2,J^{p}}^{p}\mod{J^{p}}.$ We claim that $\phi$ comes from isomorphisms $\phi_{i}\colon A_{1,\Lambda^{i}}\xrightarrow{\cong}A_{2,\Lambda^{i}}.$ Certainly there is but one candidate for $\phi_{i}$: define $\phi_{i}$ to be the composition $A_{1,\Lambda^{i}}\xrightarrow{\mathrm{incl}}\prod_{i=-(r-1)-a}^{r-1}A_{1,\Lambda^{i}}\xrightarrow{\phi}\prod_{i=-(r-1)-a}^{r-1}A_{2,\Lambda^{i}}\xrightarrow{\mathrm{proj}}A_{2,\Lambda^{i}}.$ Our claim then is that $\phi=\prod_{i=-(r-1)-a}^{r-1}\phi_{i}.$ Apply $H_{1}(\\_,\mathbb{A}_{f}^{p})$ on both sides. For the left hand side, we have $H_{1}(\phi,\mathbb{A}_{f}^{p})=\epsilon_{2,J^{p}}^{p}\circ\left(\epsilon_{1,J^{p}}^{p}\right)^{-1}\mod{J^{p}},$ and the right hand side of this equation is block diagonal. So $H_{1}(\phi,\mathbb{A}_{f}^{p})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{A}_{f}^{p})\mod{J^{p}}.$ Since (by Lemma 1.23, applied to the image $N_{\ell}$ of $N^{p}$ in $\operatorname{GSp}(V^{\S})(\mathbb{Q}_{\ell})$) $N_{\ell}=\bigcap_{\begin{subarray}{c}N_{\ell}\subseteq J_{\ell}\\\ J_{\ell}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{Q}_{\ell})\text{ cpt. open subgrp.}\end{subarray}}J_{\ell},$ it follows that (with $\ell\neq p$) $H_{1}(\phi,\mathbb{Q}_{\ell})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{Q}_{\ell})\mod{N_{\ell}},$ hence (since $N_{\ell}$ acts block-diagonally) that $H_{1}(\phi,\mathbb{Q}_{\ell})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{Q}_{\ell})$. Since $H_{1}(\\_,\mathbb{Q}_{\ell})$ is faithful (Lemma 1.22), this implies $\phi=\prod_{i=-(r-1)-a}^{r-1}\phi_{i}$, as desired.
Next, consider the extension by zero of $\left(H_{1}(\rho_{1/2,\Lambda^{j},\Lambda^{i}},\mathbb{A}_{f}^{p})\right)_{i,j}$ (where for “$1/2$” either “$1$” or “$2$” can be plugged in) to a map $H_{1}(A^{\S},\mathbb{A}_{f}^{p})\to H_{1}(A^{\S},\mathbb{A}_{f}^{p})$. Under the isomorphism given by the $J^{p}$-level structure this corresponds, up to the $J^{p}$-action, to the map $V^{\S}_{\mathbb{A}_{f}^{p}}\to V^{\S}_{\mathbb{A}_{f}^{p}}$ given by mapping the $i$’th copy of $V_{\mathbb{A}_{f}^{p}}$ identically to the $j$’th copy and the rest to zero. Thus $\rho_{1/2,i,j}$ yield the same up to $J^{p}$ after applying $H_{1}(\\_,\mathbb{A}_{f}^{p})$, hence they are equal in the $\mathbb{Z}_{(p)}$-isogeny category. Consequently, $\chi$ is a monomorphism. For properness, we will use the valuative criterion. Let $R$ be a discrete valuation ring with field of fractions $K$ and assume that a $K$-point $A^{\S}=\prod_{i=-(r-1)-a}^{r-1}A_{\Lambda^{i}}$ with its additional structures coming from $(A_{\Lambda^{i}})_{i}$ extends to an $R$-point $\mathcal{A}^{\S}$. Consider the map $A^{\S}\to A_{\Lambda^{i_{0}}}\to A^{\S}$ where the first map is a projection and the second an inclusion. By the Néron mapping property, this extends to a map $\mathcal{A}^{\S}\to\mathcal{A}^{\S}$. Define $\mathcal{A}_{\Lambda^{i_{0}}}$ to be the image of this map. The Néron mapping property also allows us to extend the transition isogenies $\rho_{\Lambda^{i_{0}},\Lambda^{j_{0}}}\colon\allowbreak{A_{\Lambda^{j_{0}}}\to A_{\Lambda^{i_{0}}}}$, $i_{0}\leq j_{0}$, the periodicity isomorphisms, and the polarization. Since $\pi_{1}(\operatorname{Spec}K)$ surjects onto $\pi_{1}(\operatorname{Spec}R)$ (see [Stacks, Tag 0BQM]), extending the level structure away from $p$ is trivial. □ ### 1.5 Construction of the integral model Let $\mathbf{E}$ be the reflex field of $(G,X)$ and $v\mid p$ a place of $\mathbf{E}$. As the first step towards the construction of the integral model, we define $\mathscr{S}_{K}^{-}(G,X)\to\operatorname{Spec}\mathcal{O}_{\mathbf{E},(v)}$ to be the closure of $\operatorname{Sh}_{K}(G,X)$ in $\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})_{\mathcal{O}_{\mathbf{E},(v)}}$ (or, equivalently by Proposition 1.24, in $\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})_{\mathcal{O}_{\mathbf{E},(v)}}$). By this we mean the topological closure with the reduced subscheme structure or, equivalently (since $\operatorname{Sh}_{K}(G,X)$ is reduced), the flat closure (in the sense of [EGA4, § 2.8]). Then we define $\mathscr{S}_{K}(G,X)\to\operatorname{Spec}\mathcal{O}_{\mathbf{E},(v)}$ to be the normalization of $\mathscr{S}_{K}^{-}(G,X)\to\operatorname{Spec}\mathcal{O}_{\mathbf{E},(v)}$. This can be regarded as “the obvious definition”, but it is entirely non- obvious and due to Kisin and Pappas that this really behaves as one would expect from a canonical integral model. These expectations, properly explained in [KP15], mainly concern the satisfaction of an extension property which is a weaker version of the valuative criterion for properness, and the existence of a local model diagram that essentially shows that the integral model is étale-locally isomorphic to the local model described by Pappas and Zhu [PZ13]. Define $E:=\mathbf{E}_{v}$, the $v$-adic completion of $\mathbf{E}$, and denote by $\kappa$ the residue field of $E$, a finite extension of $\mathbb{F}_{p}$. By abuse of notation, we also denote the base change of $\mathscr{S}_{K}(G,X)$ to $\mathcal{O}_{E}$ by $\mathscr{S}_{K}(G,X)$ again. 
By $\overline{\mathscr{S}}_{K}(G,X)$ we denote the base change of $\mathscr{S}_{K}(G,X)$ to $\kappa$.

### 1.6 Maps between Shimura varieties

This section is based on [Zho18], where more detailed explanations may be found. We let $K_{p}^{\prime}=\mathcal{G}^{\prime}(\mathbb{Z}_{p})$ be another parahoric subgroup, where $\mathcal{G}^{\prime}$ is a Bruhat-Tits group scheme with $\mathcal{G}^{\prime}=(\mathcal{G}^{\prime})^{\circ}$. We assume that $K_{p}\subseteq K_{p}^{\prime}$, i.e., that the facet (in the Bruhat-Tits building) associated with $K_{p}^{\prime}$ is contained in the closure of the one associated with $K_{p}$.

###### Theorem 1.25. [Zho18, Theorem 7.1] For sufficiently small $K^{p}$ there exists a morphism $\pi_{K_{p}K^{p},K_{p}^{\prime}K^{p}}\colon\mathscr{S}_{K_{p}K^{p}}(G,X)\to\mathscr{S}_{K_{p}^{\prime}K^{p}}(G,X).$

In moving towards a proof, let us first note that $\operatorname{GSp}(V,\psi)\to\operatorname{GSp}(V^{\S},\psi^{\S})$ factors through $M:=\left\\{\left(g_{i}\right)_{i}\in\prod_{i=-(r-1)-a}^{r-1}\operatorname{GSp}(V,\psi)\;|\;c(g_{-(r-1)-a})=\dotsb=c(g_{r-1})\right\\},$ where $c\colon\operatorname{GSp}(V,\psi)\to\mathbb{G}_{m}$ is the multiplicator homomorphism. There is a natural $X_{M}$ that makes $(M,X_{M})$ into a Shimura datum. Define $H_{p}:=\operatorname{Stab}_{M(\mathbb{Q}_{p})}(\Lambda^{\S}),\quad J_{p}:=\operatorname{Stab}_{\operatorname{GSp}(V^{\S}_{\mathbb{Q}_{p}},\psi^{\S}_{\mathbb{Q}_{p}})(\mathbb{Q}_{p})}(\Lambda^{\S}).$ Then for sufficiently small $H^{p},J^{p}$, we obtain a closed immersion $i\colon\operatorname{Sh}_{H_{p}H^{p}}(M,X_{M})\hookrightarrow\operatorname{Sh}_{J_{p}J^{p}}(\operatorname{GSp}(V^{\S},\psi^{\S}),S^{\S,\pm}).$ $\operatorname{Sh}_{H_{p}H^{p}}(M,X_{M})$ has a moduli interpretation over $\mathbb{Z}_{(p)}$ (essentially: $\\#\\{-(r-1)-a,\dotsc,r-1\\}=2(r-1)+a+1$ abelian schemes up to prime-to-$p$ isogeny endowed with certain polarizations and prime-to-$p$ level structure).

###### Proposition 1.26. [Zho18, Prop. 7.2] If $J^{p}$ is sufficiently small, then $i$ extends to a closed immersion $i\colon\mathscr{S}_{H_{p}H^{p}}(M,X_{M})\to\mathscr{S}_{J_{p}J^{p}}(\operatorname{GSp}(V^{\S},\psi^{\S}),S^{\S,\pm}).$

Now consider the embedding $i\colon\mathcal{B}(G,\mathbb{Q}_{p})\to\mathcal{B}(\operatorname{GSp}(V,\psi),\mathbb{Q}_{p})$. We still have $K_{p}=\mathcal{G}(\mathbb{Z}_{p})$ with $\mathcal{G}=\mathcal{G}_{x}$, $x\in\mathcal{B}(G,\mathbb{Q}_{p})$. Let $\mathfrak{g}$ be the minimal facet in $\mathcal{B}(\operatorname{GSp}(V,\psi),\mathbb{Q}_{p})$ containing $i(x)$. So $\mathfrak{g}$ corresponds to some lattice chain $\Lambda^{0}\supseteq\dotsb\supseteq\Lambda^{r-1}$ as above. We have $\operatorname{Sh}_{K_{p}K^{p}}(G,X)\xrightarrow{\text{cl. imm.}}\operatorname{Sh}_{H_{p}H^{p}}(M,X_{M})_{\mathbf{E}}\xrightarrow{\text{cl. imm.}}\operatorname{Sh}_{J_{p}J^{p}}(\operatorname{GSp}(V^{\S},\psi^{\S}),S^{\S,\pm})_{\mathbf{E}}$ and $\mathscr{S}^{-}_{K}(G,X)$ is defined to be the closure of $\operatorname{Sh}_{K_{p}K^{p}}(G,X)$ in $\mathscr{S}_{J_{p}J^{p}}(\operatorname{GSp}(V^{\S},\psi^{\S}),S^{\S,\pm})_{\mathcal{O}_{\mathbf{E},(v)}}$.

###### Corollary 1.27. [Zho18, Cor. 7.3] $\mathscr{S}^{-}_{K}(G,X)$ also can be described as the closure of $\operatorname{Sh}_{K_{p}K^{p}}(G,X)$ in $\mathscr{S}_{H_{p}H^{p}}(M,X_{M})_{\mathcal{O}_{\mathbf{E},(v)}}$.

Let $\mathfrak{f}^{\prime}$ be a facet of $\mathcal{B}(\operatorname{GSp}(V,\psi),\mathbb{Q}_{p})$ in the closure of $\mathfrak{g}$.
Then $\mathfrak{f}^{\prime}$ corresponds to a “sub-lattice chain” $\Lambda^{i_{1}}\supseteq\dotsb\supseteq\Lambda^{i_{s}}$, $\left\\{i_{1},\dotsc,i_{s}\right\\}\subseteq\\{0,\dotsc,r-1\\}$. Defining $M^{\prime},H_{p}^{\prime}$ as above, we naturally obtain morphisms $M\to M^{\prime}$ and $\omega_{H_{p}H^{p},H_{p}^{\prime}\left.H^{\prime}\right.^{p}}\colon\mathscr{S}_{H_{p}H^{p}}(M,X_{M})\to\mathscr{S}_{H^{\prime}_{p}\left.H^{\prime}\right.^{p}}(M^{\prime},X_{M^{\prime}})$ for suitable levels $H^{p},\left.H^{\prime}\right.^{p}$ away from $p$.

###### Sketch of Proof (of Theorem 1.25): Let $\mathfrak{f}$ be the facet of $K_{p}$ (that is to say $\mathfrak{f}$ is the minimal facet satisfying $x\in\mathfrak{f}$), $\mathfrak{f}^{\prime}$ that of $K_{p}^{\prime}$. Then $\mathfrak{f}^{\prime}\subseteq\overline{\mathfrak{f}}$. Let $x^{\prime}\in\mathfrak{f}^{\prime}$ be so close to $x$ that $\mathfrak{g}^{\prime}\subseteq\overline{\mathfrak{g}}$ in $\mathcal{B}(\operatorname{GSp}(V,\psi),\mathbb{Q}_{p})$, where $\mathfrak{g}$ and $\mathfrak{g}^{\prime}$ denote the minimal facets containing $i(x)$ and $i(x^{\prime})$ respectively. The constructions from above yield the solid part of the diagram $\begin{array}{ccc}\mathscr{S}_{K_{p}K^{p}}^{-}(G,X)&&\mathscr{S}_{K^{\prime}_{p}\left.K^{\prime}\right.^{p}}^{-}(G,X)\\\ \big\downarrow&&\big\downarrow\\\ \mathscr{S}_{H_{p}H^{p}}(M,X_{M})_{\mathcal{O}_{\mathbf{E},(v)}}&\xrightarrow{\;\omega_{H_{p}H^{p},H_{p}^{\prime}\left.H^{\prime}\right.^{p}}\;}&\mathscr{S}_{H_{p}^{\prime}\left.H^{\prime}\right.^{p}}(M^{\prime},X_{M^{\prime}})_{\mathcal{O}_{\mathbf{E},(v)}}\end{array}\qquad(1.28)$ in which the vertical arrows are the closed immersions from above. On the generic fiber we can complete this to a commutative square. By Corollary 1.27, this implies that (1.28) also completes to a commutative square. Now normalize. □

### 1.7 Local structure of the integral model

#### 1.7.1 Generizations and irreducible components

Let $\mathscr{X}\to\operatorname{Spec}\mathcal{O}_{\breve{E}}$ be a flat scheme locally of finite type; denote the special fiber by $X\to\operatorname{Spec}\bar{\mathbb{F}}_{p}$ and the generic fiber by $\mathcal{X}\to\operatorname{Spec}\breve{E}$. We assume that $\mathcal{X}$ is locally integral (e.g. smooth). For example, we can consider $(\mathscr{X},X,\mathcal{X})=(\mathscr{S}^{-}_{K}(G,X)_{{{\cal O}_{\breve{E}}}},\mathscr{S}^{-}_{K}(G,X)_{{{\cal O}_{\breve{E}}}}\otimes_{{{\cal O}_{\breve{E}}}}\bar{\mathbb{F}}_{p},\allowbreak{\operatorname{Sh}_{K}(G,X)\otimes_{E}\breve{E}})$. Let $\bar{x}\in X(\bar{\mathbb{F}}_{p})$.

###### Lemma 1.29. There is a generization $x$ of $\bar{x}$ which lies in the generic fiber $\mathcal{X}$, and is a closed point in there, i.e., $x\in\mathcal{X}(L)$ for a finite extension $L/\breve{E}$.

###### Definition 1.30. We shall call such a point $x$ a _closed point generization_ of $\bar{x}$ for short.

###### Proof: Due to flatness (going-down) there is _some_ generization in the generic fiber; call it $x_{0}$. By [Stacks, Tag 053U] the following set is dense (and in particular non-empty) in the closure of $\\{x_{0}\\}$ in $\mathcal{X}$: $\left\\{x\in\mathscr{X}\;|\;x\text{ is a specialization of }x_{0}\text{ and a closed point generization of }\bar{x}\right\\}.$ □

###### Lemma 1.31. Notation as in the preceding lemma. The specialization $x\leadsto\bar{x}$ can be realized by an ${\cal O}_{L}$-valued point of $\mathscr{X}$.
###### Proof: First off, by [EGA2, 7.1.9], it can be realized by a morphism $\operatorname{Spec}R=\\{\eta,s\\}\to\mathscr{X}$ of ${{\cal O}_{\breve{E}}}$-schemes, where $R$ is a discrete valuation ring such that $L\cong\kappa(\eta)=\operatorname{Quot}(R)$ as field extensions of $\kappa(x)$. We hence get local homomorphisms of local rings ${{\cal O}_{\breve{E}}}\to{\cal O}_{\mathscr{X},\bar{x}}\to R$. Thus the discrete valuation on $L$ defined by $R$ extends the discrete valuation on $\breve{E}$. But there is but one such extension and its valuation ring is ${\cal O}_{L}$ (by definition). □ ###### Lemma 1.32. Mapping $x$ to the unique irreducible component of $\mathscr{X}$ that contains $x$ establishes a surjection from the set of closed point generizations $x$ of $\bar{x}$ to the set of irreducible components of $\mathscr{X}$ containing $\bar{x}$. ###### Proof: If $x_{0}\in\mathcal{X}$ is a generization of $\bar{x}$, then $x_{0}$ lies in a unique irreducible component of $\mathscr{X}$ because $\mathcal{X}$ is locally irreducible. Hence the map described above is well-defined. Now for surjectivity: Given an irreducible component $C$ of $\mathscr{X}$ containing $\bar{x}$, let $x_{0}\in C$ be the generic point. Then $x_{0}$ must be in the generic fiber (else we would be able to find a generization in the generic fiber by going-down). Now go through the proof of Lemma 1.29 with this particular choice of $x_{0}$. □ #### 1.7.2 Normalization and completion For reference, we collect some facts concerning the passage to normalization and completion and in particular how it applies to integral models of Shimura varieties. ###### Fact 1.33. $\mathscr{S}^{-}_{K}(G,X)\to\operatorname{Spec}{{\cal O}_{\breve{E}}}$ is quasi-projective, so in particular of finite type. Hence $\mathscr{S}^{-}_{K}(G,X)$ and $\mathscr{S}^{-}_{K}(G,X)_{{{\cal O}_{\breve{E}}}}$ are excellent. As a consequence the normalization $\mathscr{S}_{K}(G,X)_{{{\cal O}_{\breve{E}}}}\xrightarrow{\nu}\mathscr{S}^{-}_{K}(G,X)_{{{\cal O}_{\breve{E}}}}$ is finite. (This really is a normalization because normalization and completion behave well together in the excellent case (to get from ${\cal O}_{E^{\mathrm{ur}}}$ to ${{\cal O}_{\breve{E}}}$ and from ${\cal O}_{\mathbf{E},(v)}$ to ${\cal O}_{E}$) and because normalization commutes with base change along filtered colimits of smooth morphisms (to get from ${\cal O}_{E}$ to ${\cal O}_{E^{\mathrm{ur}}}$)). We will always denote by $(\;)^{\sim}$ the integral closure of a ring in its total ring of fractions. $\bar{x}$ still denotes an $\bar{\mathbb{F}}_{p}$-valued point of $\mathscr{S}^{-}_{K}(G,X)$. Let $\nu^{-1}(\bar{x})=\\{\bar{y}=\bar{y}_{1},\dotsc,\bar{y}_{n}\\}$. Let $\mathscr{S}^{-}:=\mathscr{S}^{-}_{K}(G,X)_{{{\cal O}_{\breve{E}}}}$ and $\mathscr{S}:=\mathscr{S}_{K}(G,X)_{{{\cal O}_{\breve{E}}}}$. ###### Fact 1.34. $\left(\nu_{*}{\cal O}_{\mathscr{S}^{-}_{K}(G,X)}\right)_{\bar{x}}^{\wedge}=\prod_{j=1}^{n}\widehat{\cal O}_{\mathscr{S},\bar{y}_{j}}$. ###### Fact 1.35. By [Stacks, Tag 0C3B]: $\left(\nu_{*}{\cal O}_{\mathscr{S}^{-}_{K}(G,X)}\right)_{\bar{x}}=\widetilde{\cal O}_{\mathscr{S}^{-},\bar{x}}$. ###### Fact 1.36. By [Stacks, Tag 035P]: $\widetilde{A}=\prod_{\mathfrak{q}\in\operatorname{Min}(A)}(A/\mathfrak{q})^{\sim}$, if $A$ is a reduced ring and $\\#\operatorname{Min}(A)<\infty$. ###### Fact 1.37. $(\;)^{\sim}$ and $(\;)^{\wedge}$ commute in the case of excellence (see e.g. [EGA4, 7.6.1, 7.8.3.1(vii)]). 
For instance, $\left.{\cal O}_{\mathscr{S}^{-},\bar{x}}^{\wedge}\right.^{\sim}=\left.{\cal O}_{\mathscr{S}^{-},\bar{x}}^{\sim}\right.^{\wedge}$ (on the right hand side, we have a completion of an ${\cal O}_{\mathscr{S}^{-},\bar{x}}$-module).

###### Fact 1.38. In the case of excellence, the completion of a normal domain is a normal domain (see e.g. [GW10, 12.50]). Thus we have $\displaystyle\prod_{\mathfrak{q}\in\operatorname{Min}(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}})}(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q})^{\sim}$ $\displaystyle\cong\left.\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}\right.^{\sim}\cong\left.{\cal O}_{\mathscr{S}^{-},\bar{x}}^{\sim}\right.^{\wedge}$ (1.39) $\displaystyle\cong\left(\nu_{*}{\cal O}_{\mathscr{S}^{-}_{K}(G,X)}\right)_{\bar{x}}^{\wedge}\cong\prod_{j=1}^{n}\widehat{\cal O}_{\mathscr{S},\bar{y}_{j}}$ and the rings $\widehat{\cal O}_{\mathscr{S},\bar{y}_{j}}$ are normal domains. Hence we obtain a bijection $\operatorname{Min}(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}})\xleftrightarrow{\;1:1\;}\nu^{-1}(\bar{x})$ such that there exists a numbering $\mathfrak{q}_{1},\dotsc,\mathfrak{q}_{n}$ of the elements of $\operatorname{Min}(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}})$ such that (1.39) restricts to an isomorphism $\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q}_{j}\cong\widehat{\cal O}_{\mathscr{S},\bar{y}_{j}}$ (also see [EGA4, 7.6.2]). Also: $\displaystyle\left.{\cal O}_{\mathscr{S}^{-},\bar{x}}^{\sim}\right.^{\wedge}$ $\displaystyle\cong\left(\prod_{\mathfrak{q}\in\operatorname{Min}({\cal O}_{\mathscr{S}^{-},\bar{x}})}({\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q})^{\sim}\right)^{\wedge}$ $\displaystyle\cong\prod_{\mathfrak{q}\in\operatorname{Min}({\cal O}_{\mathscr{S}^{-},\bar{x}})}\left.\left({\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q}\right)^{\sim}\right.^{\wedge}$ $\displaystyle\cong\prod_{\mathfrak{q}\in\operatorname{Min}({\cal O}_{\mathscr{S}^{-},\bar{x}})}\left.\left({\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q}\right)^{\wedge}\right.^{\sim}$ $\displaystyle\overset{\mathclap{\text{Artin-Rees}}}{\cong}\prod_{\mathfrak{q}\in\operatorname{Min}({\cal O}_{\mathscr{S}^{-},\bar{x}})}\left(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}/\widehat{\mathfrak{q}}\right)^{\sim},\quad\widehat{\mathfrak{q}}=\mathfrak{q}\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}.$ Now by [KP15, 2.1.2, 4.2.2] for all $\mathfrak{q}\in\operatorname{Min}(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}})$ we have that $\left(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q}\right)^{\sim}=\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q}\cong R_{G,\bar{x},\mathfrak{q}}=R_{G}$ is normal and $\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}}/\mathfrak{q}\cong\widehat{\cal O}_{\mathscr{S},\bar{y}}$ for an appropriate choice of $\mathfrak{q}$ (i.e., of $x$). (The notation $R_{G}$ is from [KP15], where it is defined as the formal local ring of the local model.)
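A toy example may help orient the reader here (standard commutative algebra, not from the original; assume $\operatorname{char}(k)\neq 2$): for the nodal curve $C=\operatorname{Spec}k[x,y]/(y^{2}-x^{2}(x+1))$ with node $\bar{x}=(0,0)$, the normalization $\nu\colon\tilde{C}\to C$ has exactly two points over $\bar{x}$, and correspondingly $\widehat{\cal O}_{C,\bar{x}}\cong k[[x,y]]/\bigl{(}(y-xu)(y+xu)\bigr{)},\quad u:=\sqrt{x+1}\in k[[x]]^{\times},$ has exactly two minimal primes, each with normal (even regular) quotient $k[[x]]$; this is the same pattern as the bijection $\operatorname{Min}(\widehat{\cal O}_{\mathscr{S}^{-},\bar{x}})\xleftrightarrow{1:1}\nu^{-1}(\bar{x})$ above.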
### 1.8 Hodge tensors and (lack of) moduli interpretation

We discuss the moduli interpretation of Shimura varieties of Hodge type and the partial extension of this interpretation to the integral model as constructed in Section 1.5. Let $(G,X)$ be a Shimura datum of Hodge type, let $(G,X)\hookrightarrow(\operatorname{GSp}(V),S^{\pm})$ be an embedding as in Definition 1.1 (4).

#### 1.8.1 The story for the $\mathbb{C}$-valued points

###### Lemma 1.40. [Del82, Prop. 3.1] There exist numbers $n,r_{i},s_{i}\in\mathbb{Z}_{\geq 0}$ and tensors $t_{i}\in V^{\otimes r_{i}}\otimes(V^{*})^{\otimes s_{i}}$, $1\leq i\leq n$, such that $G$ is the subgroup of $\operatorname{GSp}(V)$ fixing the $t_{i}$.

###### Remark 1.41. (See [Mil05, Section 7].) For $K$ a compact open subgroup of $G(\mathbb{A}_{f})$, the set $\operatorname{Sh}_{K}(G,X)(\mathbb{C})$ of Definition 1.8 has the following moduli interpretation: isomorphism classes of triples $((W,h),\left(u_{i}\right)_{0\leq i\leq n},\eta K)$, where
(a) $(W,h)$ is a rational Hodge structure of type $(-1,0)+(0,-1)$,
(b) $\pm u_{0}$ is a polarization of $(W,h)$ (i.e., either $u_{0}$ or $-u_{0}$ is a polarization),
(c) $u_{i}\in W^{\otimes r_{i}}\otimes(W^{*})^{\otimes s_{i}}$ for $1\leq i\leq n$,
(d) $\eta K$ is a $K$-orbit of isomorphisms $V\otimes\mathbb{A}_{f}\xrightarrow{\sim}W\otimes\mathbb{A}_{f}$, mapping $\psi$ to an $\mathbb{A}_{f}^{\times}$-multiple of $u_{0}$, and every $t_{i}$ to $u_{i}$ ($i\geq 1$), such that there exists an isomorphism $a\colon W\xrightarrow{\sim}V$ mapping $u_{0}$ to a $\mathbb{Q}^{\times}$-multiple of $\psi$, every $u_{i}$ to $t_{i}$ ($i\geq 1$), and $h$ to an element of $X$ (so ${}^{a}h:=\operatorname{GL}(a)\circ h\colon\mathbb{S}\to\operatorname{GL}(V)$ is in $X$).

###### Sketch of Proof: Given $((W,h),\left(u_{i}\right)_{0\leq i\leq n},\eta K)$ and $a$ as above, we consider the pair $({}^{a}h,a\circ\eta)$. By assumption, ${}^{a}h\in X$ and $a\circ\eta$ is a symplectic similitude fixing the $t_{i}$; hence $({}^{a}h,a\circ\eta)\in X\times G(\mathbb{A}_{f})$. The double quotient now results precisely from the ambiguity in the choices of $a$ and $\eta\in\eta K$. □

###### Remark 1.42. Denote by $\mathbb{Q}(r)$ the rational Hodge structure of type $(-r,-r)$ with underlying vector space $\mathbb{Q}$, and denote the multiplier character $\operatorname{GSp}(V)\to\mathbb{G}_{m}$ as well as its restriction to $G$ by $c$. Then for $h\in X$, $c\circ h\cong\mathbb{Q}(1)$ as rational Hodge structures, and the symplectic form gives an isomorphism $V\cong V^{*}\otimes\mathbb{Q}(1)$. With this, we may also interpret a tensor $t\in V^{\otimes r}\otimes(V^{*})^{\otimes s}$ as a multilinear map $V^{r+s}\to\mathbb{Q}$, and, if $t$ is fixed by $G$, as a morphism $(V,h)^{\otimes(r+s)}\to\mathbb{Q}(r)$ of Hodge structures. Provided $t\neq 0$, this implies $r+s=2r$, i.e. $r=s$.

###### Remark 1.43. Since $A\mapsto H_{1}(A(\mathbb{C}),\mathbb{Q})$ yields an equivalence between the category of complex abelian varieties up to isogeny and the category of polarizable rational Hodge structures of type $(-1,0)+(0,-1)$, we may also take $\operatorname{Sh}_{K}(G,X)(\mathbb{C})$ to be a moduli problem of abelian varieties with extra structure: consider isomorphism classes of triples $(A,\left(u_{i}\right)_{0\leq i\leq n},\eta K)$, where
(a) $A$ is a complex abelian variety up to isogeny,
(b) $\pm u_{0}$ is a polarization of the rational Hodge structure $W:=H_{1}(A(\mathbb{C}),\mathbb{Q})$,
(c) $u_{1},\dotsc,u_{n}$ are as in Remark 1.41,
(d) $\eta K$ is a $K$-orbit of $\mathbb{A}_{f}$-linear isomorphisms $V\otimes\mathbb{A}_{f}\to V_{f}(A):=T_{f}(A)\otimes_{\mathbb{Z}}\mathbb{Q}=\left(\varprojlim_{n}A(\mathbb{C})[n]\right)\otimes_{\mathbb{Z}}\mathbb{Q}$, mapping $\psi$ to an $\mathbb{A}_{f}^{\times}$-multiple of $u_{0}$ and every $t_{i}$ to $u_{i}$ ($i\geq 1$), such that there exists an isomorphism $a\colon H_{1}(A(\mathbb{C}),\mathbb{Q})\xrightarrow{\sim}V$ mapping $u_{0}$ to a $\mathbb{Q}^{\times}$-multiple of $\psi$, $u_{i}$ to $t_{i}$ ($i\geq 1$) and $h$ to an element of $X$.

###### Remark 1.44. When rephrasing the moduli problem appropriately, the Shimura variety as an $\mathbf{E}$-variety is given by such a moduli problem, see [Mil05, Section 14]. In particular, we have a universal abelian scheme $\mathcal{A}\to\operatorname{Sh}_{K}(G,X)$.

#### 1.8.2 The story for the integral model

Integral models of Hodge type Shimura varieties in general seem not to afford nice straightforward moduli interpretations. Still, the moduli interpretation of the $\mathbb{C}$-valued points extends to _some_ degree. First, we do have a generalization of Lemma 1.40 as follows.

###### Proposition 1.45. [Kis10, Prop. 1.3.2] Let $R$ be a discrete valuation ring of mixed characteristic, $M$ a finite free $R$-module, and $\mathcal{G}\subseteq\operatorname{GL}(M)$ a closed $R$-flat subgroup with reductive generic fiber. Then $\mathcal{G}$ is defined by a finite collection of tensors $\left(s_{\alpha}\right)_{\alpha}\subset M^{\otimes}$. Here $M^{\otimes}$ is the direct sum of all the $R$-modules which can be formed from $M$ by taking duals and (finite) tensor products. (Kisin in _loc. cit._ also allows for taking symmetric and exterior powers; as Deligne pointed out [Del11], this is unnecessary.)

We return to the setting of Section 1.3; in particular we consider a Bruhat-Tits scheme $\mathcal{G}:=\mathcal{G}_{x}$. By the proposition just stated, we find a collection of tensors $\left(s_{\alpha}\right)_{\alpha}\subset(\Lambda^{\S}_{\mathbb{Z}_{(p)}})^{\otimes}$ whose pointwise stabilizer is the Zariski closure $G_{\mathbb{Z}_{(p)}}$ of $G$ in $\operatorname{GL}(\Lambda^{\S}_{\mathbb{Z}_{(p)}})$.

###### Remark 1.46. We can for example consider, for every $j\in\\{-(r-1)-a,\dotsc,r-1\\}$, the projection $\operatorname{pr}_{j}\colon\Lambda_{\mathbb{Z}_{(p)}}^{\S}\to\Lambda_{\mathbb{Z}_{(p)}}^{j}$. Since the $G$-action on $\Lambda_{\mathbb{Z}_{(p)}}^{\S}=\prod_{i=-(r-1)-a}^{r-1}\Lambda_{\mathbb{Z}_{(p)}}^{i}$ is diagonal, the $\operatorname{pr}_{j}$ are fixed, i.e., we may count the $\left(\operatorname{pr}_{j}\right)_{j}$ among the $\left(s_{\alpha}\right)_{\alpha}$.

###### Lemma 1.47. $G_{\mathbb{Z}_{(p)}}\otimes_{\mathbb{Z}_{(p)}}\mathbb{Z}_{p}={\cal G}$.

###### Proof: First, $G_{\mathbb{Z}_{(p)}}\otimes_{\mathbb{Z}_{(p)}}\mathbb{Z}_{p}$ is the Zariski closure $G_{\mathbb{Z}_{p}}$ of $G$ in $\operatorname{GL}(\Lambda^{\S}_{\mathbb{Z}_{p}})$ and $G_{\mathbb{Z}_{p}}\otimes\mathbb{Q}_{p}=G$: This follows from [GW10, Lemma 14.6]. According to [Stacks, Tag 056B], $G_{\mathbb{Z}_{p}}$ is the topological closure of $G$ in $\operatorname{GL}(\Lambda^{\S}_{\mathbb{Z}_{p}})$ endowed with the reduced subscheme structure. The special fiber therefore is reduced as well, i.e. ($\mathbb{F}_{p}$ being perfect), geometrically reduced, i.e. (being a group scheme), smooth. $G_{\mathbb{Z}_{p}}$ is flat over $\mathbb{Z}_{p}$ by [GW10, Prop. 14.14]. $G_{\mathbb{Z}_{p}}$ obviously is of finite presentation over $\mathbb{Z}_{p}$.
Hence, $G_{\mathbb{Z}_{p}}$ is smooth over $\mathbb{Z}_{p}$. It also is affine and is the stabilizer of $\Lambda^{\S}_{\mathbb{Z}_{p}}$ (or rather, $G_{\mathbb{Z}_{p}}(\mathbb{Z}_{p})\subseteq G(\mathbb{Q}_{p})$ is the stabilizer of $\Lambda^{\S}_{\mathbb{Z}_{p}}$, and analogously for $G_{\mathbb{Z}_{p}}(\breve{\mathbb{Z}}_{p})\subseteq G(\breve{\mathbb{Q}}_{p})$). □

###### Remarks 1.48.
(1) Obtaining an embedding of the Bruhat-Tits group scheme to which we can apply Proposition 1.45 is the main reason for altering the Hodge embedding in the way outlined in Section 1.3.
(2) We can also interpret the $s_{\alpha}$ as tensors in $((\Lambda^{\S}_{\mathbb{Z}_{(p)}})^{*})^{\otimes}$, and then $G_{\mathbb{Z}_{(p)}}$ also is the Zariski closure of $G$ in $\operatorname{GL}((\Lambda^{\S}_{\mathbb{Z}_{(p)}})^{*})$ (using the contragredient representation).

##### Hodge tensors: generic fiber.

###### Notation 1.49. Recall the universal abelian scheme $h\colon\mathcal{A}\to\operatorname{Sh}_{K}(G,X)$ from Remark 1.44. We consider the local system $\mathcal{L}_{B}:=R^{1}h_{*}^{\mathrm{an}}\underline{\mathbb{Q}}$ on $\operatorname{Sh}_{K}(G,X)_{\mathbb{C}}^{\mathrm{an}}$ and the flat vector bundle $\mathcal{V}_{\mathrm{dR}}:=R^{1}h_{*}\Omega^{\bullet}$ with Gauß-Manin connection $\nabla$.

###### Lemma 1.50. (See [CS17, Lemma 2.3.1].) $\mathcal{L}_{B}$ can be identified with the local system of $\mathbb{Q}$-vector spaces on $\operatorname{Sh}_{K}(G,X)^{\mathrm{an}}$ given by the $G(\mathbb{Q})$-representation $V$ and the $G(\mathbb{Q})$-torsor $p\colon X\times(G(\mathbb{A}_{f})/K)\to G(\mathbb{Q})\backslash(X\times(G(\mathbb{A}_{f})/K))=\operatorname{Sh}_{K}(G,X)(\mathbb{C}).$ Said torsor is isomorphic to the $G(\mathbb{Q})$-torsor $I:=\underline{\mathrm{Isom}}((V,s_{\alpha}),({\cal L}_{B},s_{\alpha,B}))$ that maps an open subset $U\subseteq\operatorname{Sh}_{K}(G,X)^{\mathrm{an}}$ to $I(U)=\\{\beta\colon V\times U\xrightarrow{\sim}\left.{\cal L}_{B}\right|_{U}\;|\;\beta(s_{\alpha})=s_{\alpha,B}\\},$ where the $\left(s_{\alpha,B}\right)_{\alpha}\subset\mathcal{L}_{B}^{\otimes}$ are the global sections corresponding to the $G(\mathbb{Q})$-invariant tensors $\left(s_{\alpha}\right)_{\alpha}$.

###### Remark 1.51. By de Rham’s theorem, $\mathcal{V}_{\mathrm{dR},\mathbb{C}}^{\mathrm{an}}\cong\mathcal{L}_{B}\otimes_{\mathbb{Q}}\mathcal{O}_{\operatorname{Sh}_{K}(G,X)^{\mathrm{an}}}.$ In particular the global sections $\left(s_{\alpha,B}\right)_{\alpha}\subset\mathcal{L}_{B}^{\otimes}$ yield flat global sections $\left(s_{\alpha,\mathrm{dR}}\right)_{\alpha}\subset(\mathcal{V}_{\mathrm{dR},\mathbb{C}}^{\mathrm{an}})^{\otimes}$. All such sections arise from sections in $\mathcal{V}_{\mathrm{dR},\mathbb{C}}^{\otimes}$ (which we will denote the same), see [Kis10, 2.2].

###### Remark 1.52. Let $k/E$ be a field extension embeddable into $\mathbb{C}$, and choose an embedding $\mathbb{Q}_{p}\hookrightarrow\mathbb{C}$ and an $E$-embedding $\sigma\colon\bar{k}\hookrightarrow\mathbb{C}$. Let $x\in\operatorname{Sh}_{K}(G,X)(k)$. By $p$-adic Hodge theory, the embedding $\sigma$ gives rise to isomorphisms $H^{1}_{\mathrm{dR}}(\mathcal{A}_{x}/k)\otimes_{k}\mathbb{C}\xrightarrow{\sim}H^{1}_{B}(\mathcal{A}_{x}(\mathbb{C}),\mathbb{Q})\otimes_{\mathbb{Q}}\mathbb{C}\xrightarrow{\sim}H^{1}_{\mathrm{\acute{e}t}}(\mathcal{A}_{x,\bar{k}},\mathbb{Q}_{p})\otimes_{\mathbb{Q}_{p}}\mathbb{C},$ and by proper base change $(\mathcal{L}_{B})_{x}:=x^{-1}\mathcal{L}_{B}\cong H^{1}_{B}(\mathcal{A}_{x}(\mathbb{C}),\mathbb{Q})$.

###### Notation 1.53.
We denote by $s_{\alpha,B,x}$ the fiber of $s_{\alpha,B}$ at $x$, and by $s_{\alpha,\mathrm{dR},x}$ and $s_{\alpha,\mathrm{\acute{e}t},x}$, respectively, the images of $s_{\alpha,B,x}$ under the above isomorphisms. For the Betti–étale comparison, we do not need to go all the way to $\mathbb{C}$, and we have $s_{\alpha,\mathrm{\acute{e}t},x}\in H^{1}_{\mathrm{\acute{e}t}}(\mathcal{A}_{x,\bar{k}},\mathbb{Q}_{p})$.

###### Remark 1.54.

$(s_{\alpha,\mathrm{dR},x},s_{\alpha,\mathrm{\acute{e}t},x})$ is an absolute Hodge cycle in the sense of [Del82], for every $\alpha$.

###### Lemma 1.55.

[Kis10, Lemma 2.2.1] The $\operatorname{Gal}(\bar{k}/k)$-action on $H^{1}_{\mathrm{\acute{e}t}}(\mathcal{A}_{x,\bar{k}},\mathbb{Q}_{p})$ fixes every $s_{\alpha,\mathrm{\acute{e}t},x}$ and factors through $G(\mathbb{Q}_{p})$. Moreover, $s_{\alpha,\mathrm{dR},x}\in H^{1}_{\mathrm{dR}}(\mathcal{A}_{x}/k)$. In particular, $(s_{\alpha,\mathrm{dR},x},s_{\alpha,\mathrm{\acute{e}t},x})$ is independent of the choices made above.

###### Corollary 1.56.

[Kis10, Lemma 2.2.2] $s_{\alpha,\mathrm{dR}}$ is defined over $E$ for all $\alpha$, i.e., $s_{\alpha,\mathrm{dR}}\in\mathcal{V}_{\mathrm{dR},E}^{\otimes}$.

##### Hodge tensors: special fiber.

###### Notation 1.57.

Denote by $\mathscr{A}_{\mathrm{Siegel}}$ the universal abelian scheme over $\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$, cf. Remark 1.21, and by $\mathscr{A}$ the pullback of $\mathscr{A}_{\mathrm{Siegel}}$ to $\mathscr{S}_{K}(G,X)$. Then the pullback of $\mathscr{A}$ to $\operatorname{Sh}_{K}(G,X)_{E}$ agrees with the pullback of $\mathcal{A}$ of Remark 1.44. We will occasionally call $\mathscr{A}$ the _“universal” abelian scheme_ (with quotation marks).

Let $\bar{x}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$ and let $\mathbb{D}_{\bar{x}}:=\mathbb{D}(\mathscr{A}_{\bar{x}}[p^{\infty}])(W)$ be the Dieudonné module of the associated fiber of the “universal” abelian scheme, $W=W(\bar{\mathbb{F}}_{p})=\breve{\mathbb{Z}}_{p}$. Choose a closed point generization $x\in\operatorname{Sh}_{K}(G,X)(L)$, $L/E$ finite field extension, in the sense of Definition 1.30. Note that $L$ can be embedded into $\mathbb{C}_{p}\cong\mathbb{C}$. Then

$H^{1}_{\mathrm{\acute{e}t}}(\mathscr{A}_{x,\bar{E}},\mathbb{Z}_{p})\cong(T_{p}\mathscr{A}_{x,\bar{E}})^{*}\cong(T_{p}\mathscr{A}_{x,\bar{E}}^{*})(-1)\cong(\Lambda^{\S})^{*},$

allowing us to identify the tensors $s_{\alpha}$ with tensors $s_{\alpha,\mathrm{\acute{e}t},x}\in H^{1}_{\mathrm{\acute{e}t}}(\mathscr{A}_{x,\bar{E}},\mathbb{Z}_{p})^{\otimes}$. Again, these tensors can be shown to be $\operatorname{Gal}(\bar{E}/L)$-invariant. Now we have the $p$-adic comparison isomorphism

$H_{\mathrm{\acute{e}t}}^{1}(\mathscr{A}_{x,\bar{E}},\mathbb{Q}_{p})\otimes_{\mathbb{Q}_{p}}B_{\mathrm{cris}}\cong H^{1}_{\mathrm{cris}}(\mathscr{A}_{\bar{x}}/W)\otimes_{W}B_{\mathrm{cris}}=\mathbb{D}(\mathscr{A}_{\bar{x}}[p^{\infty}])(W)\otimes_{W}B_{\mathrm{cris}},$ (1.58)

and by [KP15, 3.3.8], via this isomorphism, the $s_{\alpha,\mathrm{\acute{e}t},x}$ also correspond to tensors $s_{\alpha,0}:=s_{\alpha,0,\bar{x}}$ in $\mathbb{D}_{\bar{x}}^{\otimes}$. In fact, we get an isomorphism

$(\Lambda^{\S})^{*}\otimes_{\mathbb{Z}_{p}}\breve{\mathbb{Z}}_{p}\cong\mathbb{D}_{\bar{x}}$ (1.59)

identifying $s_{\alpha}\otimes 1$ with $s_{\alpha,0,\bar{x}}$.

##### Globalizing crystalline tensors.

Above we constructed crystalline tensors fiberwise. This will not suffice to understand the local geometry of the special fiber of the integral model. Therefore we now “globalize” the tensors.
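Before doing so, it may help to recall (a standard linear-algebra observation, recorded here only for orientation) why conditions as in Remark 1.46 are tensor conditions in the sense of Proposition 1.45 at all: for a finite free $R$-module $M$ there is a canonical identification

$\operatorname{End}_{R}(M)\;\cong\;M\otimes_{R}M^{*}\;\subseteq\;M^{\otimes},$

so an idempotent such as $\operatorname{pr}_{j}$ is itself an element of $M^{\otimes}$, and a bilinear form on $M$ lies in $M^{*}\otimes_{R}M^{*}\subseteq M^{\otimes}$. Requiring an element of $\operatorname{GL}(M)$ to fix a given finite collection of such tensors is a closed condition, which is the mechanism by which the group schemes above are cut out.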
Let $\bar{x}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$. As in Section 1.7.2, we write $R_{G}:=R_{G,\bar{x}}:=\widehat{\mathcal{O}}_{\mathscr{S}_{K}(G,X)_{{\cal O}_{\breve{E}}},\bar{x}}$. (So $\operatorname{Spf}R_{G}$ is the $\bar{x}$-adic completion of $\mathscr{S}_{K}(G,X)_{{\cal O}_{\breve{E}}}$.)

###### Remark 1.60.

In [KP15, 3.2.12, 3.2.14] a construction of a “universal” deformation $p$-divisible group

$\mathscr{G}_{R_{G}}\to\operatorname{Spec}R_{G}$ (1.61)

is given, characterized by its Dieudonné display, to wit $\mathrm{DDisp}(\mathscr{G}_{R_{G}})=\mathbb{D}_{\bar{x}}\otimes_{W}\hat{W}(R_{G})$ endowed with a natural Dieudonné display structure. $\mathscr{G}_{R_{G}}$ can be identified with the pullback of the $p$-divisible group of the “universal” abelian scheme $\mathscr{A}$.

###### Remark 1.62.

$R_{G}$ is a complete local normal noetherian ring with perfect residue field.

###### Remark 1.63.

By [Lau14, Theorem B], the Dieudonné crystal associated with $\mathrm{DDisp}(\mathscr{G}_{R_{G}})$ coincides with the Dieudonné crystal $\mathbb{D}(\mathscr{G}_{R_{G}})$ of $\mathscr{G}_{R_{G}}$. In fact, $\mathrm{DDisp}(\mathscr{G}_{R_{G}})=\mathbb{D}(\mathscr{G}_{R_{G}})^{\vee}(\hat{W}(R_{G}))$ endowed with a natural Dieudonné display structure, cf. [KP15, 3.1.7]. In particular we get tensors $t_{\alpha,\bar{x}}^{\mathrm{def}}\in\mathbb{D}(\mathscr{G}_{R_{G}})^{\vee}(\hat{W}(R_{G}))^{\otimes}$ corresponding to $s_{\alpha,0,\bar{x}}\otimes 1\in(\mathbb{D}_{\bar{x}}\otimes_{W}\hat{W}(R_{G}))^{\otimes}$, and accordingly tensors $t_{\alpha,\bar{x}}^{\mathrm{def}}\otimes 1\in\mathbb{D}(\mathscr{G}_{R_{G}})^{\vee}(W(R_{G}))^{\otimes}$.

###### Definition 1.64.

(See [Ham17, Definition 2.8].) Let $\mathscr{G}\to S$ be a $p$-divisible group over a formally smooth $\mathbb{F}_{p}$-scheme $S$. Denote by $\mathbb{D}(\mathscr{G})$ the contravariant Dieudonné crystal of $\mathscr{G}$ [BBM82, Def. 3.3.6], and by $\mathds{1}:=\mathbb{D}(\underline{\mathbb{Q}_{p}/\mathbb{Z}_{p}}_{S})$ the unit object in the tensor category of locally free $\mathcal{O}_{S/\mathbb{Z}_{p},\mathrm{CRIS}}$-modules. Note that $\mathbb{D}(\mathscr{G})$ comes with a Frobenius morphism $\mathbb{D}(\mathscr{G})^{(p)}\to\mathbb{D}(\mathscr{G})$, and similarly for $\mathds{1}$.

1. (1) A _tensor_ $t$ of $\mathbb{D}(\mathscr{G})$ is a morphism $\mathds{1}\to\mathbb{D}(\mathscr{G})^{\otimes}$ of locally free $\mathcal{O}_{S/\mathbb{Z}_{p},\mathrm{CRIS}}$-modules.131313$(\;)^{\otimes}$ here (and also in point (2)) is defined as it is defined for $R$-modules in Prop. 1.45.

2. (2) A tensor $t$ of $\mathbb{D}(\mathscr{G})$ is called a _crystalline Tate tensor on $\mathscr{G}$_ if it induces a morphism of $F$-isocrystals $\mathds{1}\to\mathbb{D}(\mathscr{G})[\tfrac{1}{p}]^{\otimes}$.

###### Remarks 1.65.

1. (1) To give a tensor as in the preceding definition therefore means the following: For every $(U,T,i,\delta)$ with

1. (a) an $S$-scheme $U$,
2. (b) a $\mathbb{Z}_{p}$-scheme $T$ on which $p$ is locally nilpotent,
3. (c) a closed $\mathbb{Z}_{p}$-immersion $i\colon U\hookrightarrow T$,
4. (d) a pd-structure $\delta$ on the ideal in $\mathcal{O}_{T}$ defining the immersion $i$, compatible with the canonical pd-structure on $p\mathbb{Z}_{p}$,

functorially to give a morphism $\mathds{1}(U,T,i,\delta)=\mathcal{O}_{T}(T)\to\mathbb{D}(\mathscr{G})(U,T,i,\delta)^{\otimes}$ of $\mathcal{O}_{T}(T)$-modules, i.e., functorially to give elements of $\mathbb{D}(\mathscr{G})(U,T,i,\delta)^{\otimes}$.

2.
(2) Assume $S=\operatorname{Spec}A$, where $A$ is an $\mathbb{F}_{p}$-algebra which has a $p$-basis (e.g., $A$ perfect) _or_ which satisfies [Jon95, (1.3.1.1)], the latter signifying the existence of an ideal $I\subseteq A$ such that * • $A$ is noetherian and $I$-adically complete, * • $A$ is formally smooth as a topological $\mathbb{F}_{p}$-algebra with the $I$-adic topology, * • $A/I$ contains a field with a finite $p$-basis and is finitely generated as an algebra over this field. Also fix a _lift_ $\tilde{A}$ of $A$ in the sense of [Jon95, Def. 1.2.1], i.e., a $p$-adically complete $\mathbb{Z}_{p}$-flat ring $\tilde{A}$ together with an isomorphism $\tilde{A}/p\tilde{A}\cong A$ and a ring endomorphism $\sigma\colon\tilde{A}\to\tilde{A}$ such that $\sigma(a)\equiv a^{p}\mod{p\tilde{A}}$. Note that if $A$ is perfect, then $\tilde{A}:=W(A)$ with the usual Frobenius lift works. Then by [BM90, Prop. 1.3.3] and [Jon95, Cor. 2.2.3] the category of crystals of quasi-coherent $\mathcal{O}_{\operatorname{Spec}(A)/\mathbb{Z}_{p},\mathrm{CRIS}}$-modules is equivalent to the category of $p$-adically complete $\tilde{A}$-modules $M$ endowed with an integrable topologically quasi-nilpotent connection $\nabla\colon M\to M\otimes_{\tilde{A}}\hat{\Omega}_{\tilde{A}}^{1}$. 3. (3) In the setting of (2), say $(M,\nabla)$ is the image of $\mathbb{D}(\mathscr{G})$ under the equivalence. Then $M=\mathbb{D}(\mathscr{G})(\tilde{A})$, and to give a tensor $\mathds{1}\to\mathbb{D}(\mathscr{G})^{\otimes}$ means to give a horizontal section of $M^{\otimes}$. ###### Proposition 1.66. [HK17, Prop. 3.3.1, Cor. 3.3.7] Denote by $\mathscr{G}_{\overline{\mathscr{S}}^{\mathrm{perf}}}$ and $\mathscr{G}_{\hat{\mathscr{S}}}$ the pullbacks of the $p$-divisible group $\mathscr{A}[p^{\infty}]$ to the perfection141414This being the inverse perfection of $\mathscr{S}_{K}(G,X)\otimes_{\mathcal{O}_{E}}\bar{\mathbb{F}}_{p}$ in the terminology of [BG18, Section 5]. $\overline{\mathscr{S}}^{\mathrm{perf}}:=(\mathscr{S}_{K}(G,X)\otimes_{\mathcal{O}_{E}}\bar{\mathbb{F}}_{p})^{\mathrm{perf}}$ and the $p$-adic completion $\hat{\mathscr{S}}$ of $\mathscr{S}_{K}(G,X)\otimes_{\mathcal{O}_{E}}{{\cal O}_{\breve{E}}}$, respectively. For a $p$-divisible group $\mathscr{X}$ over a $p$-adic formal scheme $\mathfrak{S}$ denote by $P(\mathscr{X})$ the locally free $W(\mathcal{O}_{\mathfrak{S}})$-module given by $P(\mathscr{X})(\operatorname{Spf}A)=\mathbb{D}(\mathscr{X}_{A})^{\vee}(W(A))$ for every open affine formal subscheme $\operatorname{Spf}A\subseteq\mathfrak{S}$. Then there exist tensors $\left(t_{\alpha}\right)_{\alpha}\subset P(\mathscr{G}_{\hat{\mathscr{S}}})^{\otimes}$ whose pullback to $P(\mathscr{G}_{R_{G},\bar{x}})^{\otimes}$ coincides with $t_{\alpha,\bar{x}}^{\mathrm{def}}\otimes 1$ for all $\bar{x}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$. Via pullback, these $t_{\alpha}$ yield crystalline Tate tensors $(\bar{t}_{\alpha})_{\alpha}$ on $\mathscr{G}_{\overline{\mathscr{S}}^{\mathrm{perf}}}$. Moreover, if $x\in\mathscr{S}_{K}(G,X)(\mathcal{O}_{L})$, $L/\breve{E}$ finite field extension, is a closed point generization of $\bar{x}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$, then $s_{\alpha,\mathrm{\acute{e}t},x}\in(T_{p}\mathscr{A}_{x,\bar{E}})^{\otimes}$ gets identified with $\bar{t}_{\alpha,\bar{x}}\in\mathbb{D}(\mathscr{A}_{\bar{x}}[p^{\infty}])(W(\bar{\mathbb{F}}_{p}))$ by the $p$-adic comparison isomorphism (1.58). ## 2 Central leaves in the case of parahoric reduction We begin by giving some history. 
The foliation given by the central leaves was introduced by Oort [Oor04, Oor09] in the setting of a $p$-divisible group over a characteristic $p$ scheme (cf. Remark 2.2 below). As already indicated in the introduction, the idea is to consider for every isomorphism class of $p$-divisible groups over an algebraically closed field of characteristic $p$ the locus where the isomorphism class of the geometric fibers of the $p$-divisible group is the given one. It is not at all readily apparent why this would be a reasonable notion in geometrical or topological terms. The key tool here is given by the slope filtrations introduced by Zink [Zin01a], and a key insight is that there is a number $N\geq 0$ such that a $p$-divisible group over an algebraically closed field of characteristic $p$ is in the prescribed isomorphism class if and only if the analogous statement holds for the $p^{N}$-torsion subgroups.

For Shimura varieties of Hodge type and good reduction, the central leaves were studied by Mantovan [Man04, Man05] (PEL cases), Vasiu [Vas08] and Zhang [Zha15] (a good survey), among others. He and Rapoport [HR17] formulated a set of axioms that integral models for Shimura varieties with parahoric level are supposed to satisfy in order for them to merit the label “canonical”. Among these is having a notion of well-behaved central leaves. This is what we investigate in the Hodge type situation. As already mentioned, Zhou [Zho18] concurrently and independently also worked on this and obtained very similar results in a different way, and we use some of his results from the first version of the preprint, which did not yet address the change-of-parahoric map between central leaves. Central leaves in the parahoric level case are also the subject of work by Hamacher and Kim [Ham17, Kim19, HK17], but those papers do not deal with changing the parahoric level.

We still fix a Shimura datum $(G,X)$ of Hodge type, a parahoric subgroup $K_{p}\subseteq G(\mathbb{Q}_{p})$ (associated with a Bruhat-Tits group scheme $\mathcal{G}\to\operatorname{Spec}\mathbb{Z}_{p}$) and a sufficiently small open compact subgroup $K^{p}\subseteq G(\mathbb{A}_{f}^{p})$. We also keep our standard assumptions 1.16 in force.

### 2.1 Definition of central leaves

We use the notation for Hodge tensors, in particular $\left(s_{\alpha}\right)_{\alpha}$ and $\left(s_{\alpha,0}\right)_{\alpha}$, established in Section 1.8.2.

###### Definition 2.1.

Two points $\bar{x}_{1},\bar{x}_{2}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$ lie in the same _central leaf_ if there is an isomorphism $\mathbb{D}_{\bar{x}_{1}}\cong\mathbb{D}_{\bar{x}_{2}}$ of Dieudonné modules that takes $s_{\alpha,0,\bar{x}_{1}}$ to $s_{\alpha,0,\bar{x}_{2}}$. They lie in the same _naive central leaf_ if there is an arbitrary isomorphism $\mathbb{D}_{\bar{x}_{1}}\cong\mathbb{D}_{\bar{x}_{2}}$ of Dieudonné modules (i.e., an isomorphism $\mathscr{A}_{\bar{x}_{1}}[p^{\infty}]\cong\mathscr{A}_{\bar{x}_{2}}[p^{\infty}]$ of $p$-divisible groups).

###### Remark 2.2.

More generally, one can make analogous definitions when given any abelian scheme $\mathcal{A}\to S$ over any $\mathbb{F}_{p}$-scheme $S$ (such that for every point we are given tensors on the Dieudonné module of the corresponding abelian variety). Given two arbitrary points, one compares them after going to a common algebraically closed extension of the residue fields, cf. [Oor04].

###### Notation 2.3.
Recall that we denote the completion of the maximal unramified extension $\mathbb{Q}_{p}^{\mathrm{ur}}$ of $\mathbb{Q}_{p}$ by $\breve{\mathbb{Q}}_{p}$, and its ring of integers accordingly by $\breve{\mathbb{Z}}_{p}$. We set $\breve{K}:=\breve{K}_{p}:=\mathcal{G}(\breve{\mathbb{Z}}_{p})$ and define $\breve{K}_{\sigma}$ to be the graph of the Frobenius $\sigma\colon\breve{K}\to\breve{K}$. So dividing out the action of $\breve{K}_{\sigma}$ (which is mostly how $\breve{K}_{\sigma}$ will make an appearance) means dividing out the action of $\breve{K}$ by $\sigma$-conjugation.

###### Definition 2.4.

The central leaves are the fibers of the map $\Upsilon=\Upsilon_{K}\colon\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})\to G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}$ given as follows: For $\bar{x}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$ there is an isomorphism $\beta\colon(\Lambda^{\S})^{*}\otimes_{\mathbb{Z}_{p}}\breve{\mathbb{Z}}_{p}\cong\mathbb{D}_{\bar{x}}$ (equation (1.59)) sending $s_{\alpha}\otimes 1$ to $s_{\alpha,0,\bar{x}}$. We can hence interpret the Frobenius on $\mathbb{D}_{\bar{x}}$ as an element of $G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}$, where dividing out $\breve{K}_{\sigma}$ rids us of the ambiguity introduced by the choice of the isomorphism $\beta$.

###### Notation 2.5.

We write

$G(\breve{\mathbb{Q}}_{p})\to C(G):=G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}\to B(G):=G(\breve{\mathbb{Q}}_{p})/G(\breve{\mathbb{Q}}_{p})_{\sigma},\qquad b\mapsto[\mkern-2.0mu[b]\mkern-2.0mu]\mapsto[b].$

#### 2.1.1 An alternative characterization of the central leaves

Consider two points $\bar{x}_{1},\bar{x}_{2}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$ in the same central leaf, i.e., such that there is an isomorphism $\mathbb{D}_{\bar{x}_{1}}\cong\mathbb{D}_{\bar{x}_{2}}$ of Dieudonné modules that takes $s_{\alpha,0,\bar{x}_{1}}$ to $s_{\alpha,0,\bar{x}_{2}}$.

$\bar{x}_{j}$ for $j=1,2$ has an associated isogeny chain of abelian schemes in $\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})$, cf. Remark 1.19, and $\mathbb{D}_{\bar{x}_{j}}=\mathbb{D}(\prod_{i=-(r-1)-a}^{r-1}A_{j,i})=\prod_{i=-(r-1)-a}^{r-1}\mathbb{D}(A_{j,i})$, where $(A_{j,i})_{i}$ is the isogeny chain associated with $\bar{x}_{j}$ under $\mathscr{S}_{K}(G,X)\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$, cf. Remark 1.21 and Proposition 1.24. Note that $A_{j}=\prod_{i=-(r-1)-a}^{r-1}A_{j,i}$ (plus extra structure) is the point associated with $\bar{x}_{j}$ under $\mathscr{S}_{K}(G,X)\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$.

###### Lemma 2.6.

Any tensor-preserving isomorphism $\mathbb{D}_{\bar{x}_{1}}\cong\mathbb{D}_{\bar{x}_{2}}$ yields, for all $i$, isomorphisms ${\mathbb{D}(A_{1,i})\cong\mathbb{D}(A_{2,i})}$.

###### Proof:

This follows immediately from Remark 1.46. □

We may thus rephrase our definition of central leaves as follows.

###### Lemma 2.7.

Let $\bar{x}_{1},\bar{x}_{2}\in\mathscr{S}_{K}(G,X)(\bar{\mathbb{F}}_{p})$ and, as above, denote by $A_{j,i}$ the abelian varieties in the associated isogeny chains, $j=1,2$. $\bar{x}_{1}$ and $\bar{x}_{2}$ lie in the same central leaf if and only if there is an isomorphism between the associated rational Dieudonné modules of $A_{j,i}$ (which is common to all $A_{j,i}$, $j$ fixed, $i$ variable) respecting the tensors and identifying the lattices $\mathbb{D}(A_{1,i})$ and $\mathbb{D}(A_{2,i})$ for all $i$.
“Respecting the tensors” means the following: We get an induced identification of the rational Dieudonné modules of $A_{1}$ and $A_{2}$. This identification has to preserve the tensors.

### 2.2 Local closedness of central leaves

#### 2.2.1 Topological lemmas

We begin with some purely topological preliminaries which we shall use to globalize statements about formal neighborhoods.

###### Lemma 2.8.

Let $X$ be a topological space with a subset $A\subseteq X$, and for $x\in X$ let $U_{x}\subseteq X$ be the set of generizations of $x$. Consider the statements

1. (i) $A\cap U_{x}\subseteq U_{x}$ closed (resp. open) for all $x\in X$.
2. (ii) $A\cap U_{x}\subseteq U_{x}$ closed (resp. open) for all closed points $x\in X$.
3. (iii) $A\subseteq X$ stable under specialization (resp. generization).

We have (i)$\implies$(ii),(iii), and if every $x\in X$ has a specialization in $X$ that is a closed point, we also have (ii)$\implies$(iii).

###### Proof:

Denote the closure of $A$ in $X$ by $\operatorname{cl}_{X}(A)$. For “(i)$\implies$(iii)” (with “closed” and “specialization”, respectively) let $x\in A$ and $s\in\operatorname{cl}_{X}(\\{x\\})$. Then $s,x\in U_{s}$, and $s\in\operatorname{cl}_{X}(\\{x\\})\cap U_{s}=\operatorname{cl}_{U_{s}}(\\{x\\})$. By assumption $A\cap U_{s}\subseteq U_{s}$ is stable under specialization, hence $s\in A\cap U_{s}\subseteq A$. The rest is proven along similar lines. □

###### Example 2.9.

$\mathbb{N}\subseteq\mathbb{A}^{1}_{\mathbb{C}}$ satisfies property (i) from the lemma but is not closed (hence not constructible).

###### Lemma 2.10.

Let $X$ be a sober topological space and let $A\subseteq X$ be stable under both generization and specialization. Then $A$ is a union of irreducible components of $X$. If additionally $X$ only has finitely many irreducible components, $A$ is even a union of connected components and is open and closed.

###### Proof:

Consider the unique generic points $\left\\{\eta_{i}\right\\}_{i\in I}$ of the irreducible components of $X$. By assumption, we have, firstly, that if $A$ contains $\eta_{i}$ then it contains the entire irreducible component $\overline{\\{\eta_{i}\\}}$, and, secondly, that if $A$ contains any one point of $\overline{\\{\eta_{i}\\}}$, then it also contains $\eta_{i}$. Similarly, if $A$ contains any one irreducible component $\overline{\\{\eta_{i}\\}}$, it also contains all irreducible components that meet $\overline{\\{\eta_{i}\\}}$. Since any two irreducible components of a connected component are joined by a chain of pairwise meeting irreducible components, $A$ is then a union of connected components; if there are only finitely many, such a union is open and closed. □

#### 2.2.2 Local closedness

To show the local closedness of central leaves, we can content ourselves with a construction in the spirit of Proposition 1.66 but somewhat simpler.
Namely, we pull back (1.61) to obtain

$\mathscr{G}_{\bar{x}}\to\operatorname{Spec}\bar{R}_{G}:=\operatorname{Spec}(R_{G}\otimes_{{{\cal O}_{\breve{E}}}}\bar{\mathbb{F}}_{p})=\operatorname{Spec}(\widehat{\cal O}_{\mathscr{S}_{K}(G,X)_{{{\cal O}_{\breve{E}}}},\bar{x}}\otimes\bar{\mathbb{F}}_{p})=\operatorname{Spec}(\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}).$

The Dieudonné display of $\mathscr{G}_{\bar{x}}$ then is

$\mathrm{DDisp}(\mathscr{G}_{\bar{x}})=\mathbb{D}(\mathscr{G}_{\bar{x}})^{\vee}(\hat{W}(\bar{R}_{G}))=\mathbb{D}_{\bar{x}}\otimes_{\breve{\mathbb{Z}}_{p}}\hat{W}(\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}).$

In particular we can consider the tensors $\left(s_{\alpha,0}\otimes 1\right)_{\alpha}$ on this Dieudonné display and on $\mathbb{D}(\mathscr{G}_{\bar{x}})^{\vee}(W(\bar{R}_{G}))$. Thus we get crystalline Tate tensors $\left(u_{\alpha,\bar{x}}\right)_{\alpha}$ on $\mathbb{D}(\mathscr{G}_{\bar{x},\mathrm{perf}})$, where $\mathscr{G}_{\bar{x},\mathrm{perf}}$ is the pullback of $\mathscr{G}_{\bar{x}}$ to $\bar{R}_{G}^{\mathrm{perf}}$.

###### Theorem 2.11.

The central leaves on $\mathscr{S}_{K}(G,X)\otimes\bar{\mathbb{F}}_{p}$ are open and closed in the naive central leaves.

###### Proof:

We consider the $p$-divisible group $\mathscr{G}_{\bar{x},\mathrm{perf}}$ over $\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}^{\mathrm{perf}}$ together with its crystalline Tate tensors. Note that perfection makes no difference for topological considerations. On the naive central leaves on $\operatorname{Spec}(\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}})$ (topologically identified with $\operatorname{Spec}(\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}^{\mathrm{perf}})$) the $p$-divisible group is geometrically fiberwise constant. By a lemma of Hamacher [Ham17, 2.12], the tensors then are geometrically fiberwise constant as well. Using the fact that $\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}$ is noetherian, we obtain that the central leaf on $\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}$ is closed and open in the corresponding naive central leaf. $\operatorname{Spec}{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}$ has the quotient topology with respect to the fpqc covering $\operatorname{Spec}\widehat{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}\to\operatorname{Spec}{\cal O}_{\mathscr{S}_{K}(G,X)\otimes_{{\cal O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},\bar{x}}$, hence we also get the same result for the uncompleted local ring. By Lemmas 2.8 and 2.10 we conclude that we also get the same result for the central leaves and naive central leaves on $\mathscr{S}_{K}(G,X)\otimes\bar{\mathbb{F}}_{p}$. □

###### Corollary 2.12.

The central leaves on $\mathscr{S}_{K}(G,X)\otimes\bar{\mathbb{F}}_{p}$ are locally closed.

###### Proof:

Because Oort [Oor04] has proven that the naive central leaves on ${\mathscr{S}_{K}(G,X)\otimes\bar{\mathbb{F}}_{p}}$ are locally closed, this is an immediate consequence of the preceding theorem. □

###### Corollary 2.13.
In fact, Oort has shown that the naive central leaf is closed in the naive Newton stratum. So our argument even goes to show that the central leaves are closed in their respective Newton strata (naive and non-naive).

### 2.3 Quasi-isogenies of $p$-divisible groups

Let $b\in G(\breve{\mathbb{Q}}_{p})=\mathcal{G}(\breve{\mathbb{Q}}_{p})\subseteq\operatorname{GL}(\Lambda^{\S})(\breve{\mathbb{Q}}_{p})$ and denote by $b^{\S}$ the image in $\operatorname{GL}(V^{\S})(\breve{\mathbb{Q}}_{p})$. Let $J_{b}$ denote the $\mathbb{Q}_{p}$-reductive group given on $R$-valued points, $R\in(\mathbb{Q}_{p}\text{-alg})$, by

$J_{b}(R):=\left\\{g\in G(R\otimes_{\mathbb{Q}_{p}}\breve{\mathbb{Q}}_{p})=\operatorname{Res}_{\breve{\mathbb{Q}}_{p}/\mathbb{Q}_{p}}(G)(R)\;|\;gb\sigma(g)^{-1}=b\right\\}$

(cf. [RZ96, Prop. 1.12] and [Kim19, Prop. 2.2.6]). $J_{b}(\mathbb{Q}_{p})$ then naturally attains the structure of a locally profinite group. We make it into a formal group scheme $\underline{J_{b}(\mathbb{Q}_{p})}$ over $\operatorname{Spf}\breve{\mathbb{Z}}_{p}$, with $\underline{J_{b}(\mathbb{Q}_{p})}(U)=\mathrm{Map}_{\mathrm{cont}}(U,J_{b}(\mathbb{Q}_{p}))$ for formal test schemes $U\to\operatorname{Spf}\breve{\mathbb{Z}}_{p}$.151515Locally: If $G=\varprojlim_{n}G_{n}$ is a profinite set (or even profinite group), then $\underline{G}:=\varprojlim_{n}\underline{G_{n}}$ with $\underline{G_{n}}$ constant formal (group) scheme.

Let $\mathbb{X}_{b}=\mathbb{X}_{b,K}=\mathbb{X}_{b^{\S}}$ be a polarized $p$-divisible group over $\bar{\mathbb{F}}_{p}$ with a distinguished isomorphism between its Dieudonné (symplectic) module and $(\Lambda^{\S})_{\breve{\mathbb{Z}}_{p}}$ under which the Frobenius is identified with $b$. The existence of such a $p$-divisible group depends on $b$, but we _assume_ that $\mathbb{X}_{b}$ exists from now on (see also 2.35 (2) below). By Dieudonné theory we obtain a bijection

$J_{b^{\S}}(\mathbb{Q}_{p}):=\left\\{g\in\operatorname{GL}(\Lambda^{\S}\otimes\breve{\mathbb{Q}}_{p})\;|\;gb^{\S}\sigma(g)^{-1}=b^{\S}\right\\}\cong\\{\text{(self-)quasi-isogenies of }\mathbb{X}_{b}\text{ (without polarization)}\\}$

(and similarly with polarization, replacing $\operatorname{GL}$ by $\mathrm{(G)Sp}$), and $J_{b}(\mathbb{Q}_{p})\subseteq J_{b^{\S}}(\mathbb{Q}_{p})$ under this bijection corresponds to the _tensor-preserving_ quasi-isogenies of $\mathbb{X}_{b}$.

We need to understand more than this; at least we will need to understand the tensor-preserving quasi-isogenies of $(\mathbb{X}_{b})_{\Omega}$ for all algebraically closed fields $\Omega$ of characteristic $p$. Using internal hom $p$-divisible groups first defined by Chai and Oort, Caraiani and Scholze [CS17, Prop. 4.2.11] worked out the structure of the quasi-isogeny group (albeit in a more special setting than ours; Kim [Kim19] generalized it to our setting).
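Before turning to these internal Hom constructions, two standard examples may serve as a sanity check on the definition of $J_{b}$ (they are classical, cf. [Kot97], and not needed in the sequel). For $b=1$ the defining equation $g\sigma(g)^{-1}=1$ singles out the $\sigma$-fixed points, so that

$J_{1}(\mathbb{Q}_{p})=G(\breve{\mathbb{Q}}_{p})^{\sigma}=G(\mathbb{Q}_{p}),$

and for $G=\operatorname{GL}_{n}$ and $b$ basic of slope $s/n$ with $\gcd(s,n)=1$, one finds $J_{b}(\mathbb{Q}_{p})\cong D^{\times}$, where $D$ is the central division algebra over $\mathbb{Q}_{p}$ of invariant $s/n$ (up to a sign depending on conventions).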
First of all, following [Kim19, section 3.2], we denote by $\operatorname{Qisg}(\mathbb{X}_{b})$ the formal group scheme over $\operatorname{Spf}\breve{\mathbb{Z}}_{p}$ given by

$\operatorname{Nilp}_{\breve{\mathbb{Z}}_{p}}\to\mathrm{Grp},\quad R\mapsto\\{\text{quasi-isogenies of }(\mathbb{X}_{b})_{R/p}\\},$

where $\operatorname{Nilp}_{\breve{\mathbb{Z}}_{p}}$ is the category of $\breve{\mathbb{Z}}_{p}$-algebras on which $p$ is nilpotent (anti-equivalent to the category of affine schemes over $\operatorname{Spf}\breve{\mathbb{Z}}_{p}$161616That is, affine schemes together with a natural transformation of functors $\mathrm{Ring}\to\mathrm{Set}$ from it to $\operatorname{Spf}\breve{\mathbb{Z}}_{p}$.).

###### Definition 2.14.

Let $R$ be a topological ring.

1. (1) $R$ is called _adic_ , if there is an ideal $I$ (called _ideal of definition_) such that $\left\\{I^{n}\right\\}_{n\in\mathbb{N}}$ is a basis of neighborhoods (or, equivalently, basis of _open_ neighborhoods) of $0$.
2. (2) $R$ is called _f-adic_ (or _Huber ring_), if there exists an open subring $R_{0}\subseteq R$ that is adic with finitely generated ideal of definition. Such a subring is called a _ring of definition_.

###### Definition 2.15.

Let $R$ be a ring of characteristic $p$.

1. (1) $R$ is _semiperfect_ , if the Frobenius endomorphism $\Phi\colon R\to R$ is surjective.
2. (2) $R$ is _f-semiperfect_ , if it is semiperfect and the perfection $R^{\flat}:=\varprojlim_{\Phi}R$ (with the inverse limit topology) is f-adic.

There is a functorial construction which assigns to every semiperfect ring $R$ its universal $p$-adically complete pd-thickening $A_{\mathrm{cris}}(R)$ [SW13, Prop. 4.1.3]. Set $B_{\mathrm{cris}}^{+}(R):=A_{\mathrm{cris}}(R)[\frac{1}{p}]$. We have $\mathbb{D}((\mathbb{X}_{b})_{R})(A_{\mathrm{cris}}(R))\cong\mathbb{D}(\mathbb{X}_{b})(\breve{\mathbb{Z}}_{p})\otimes_{\breve{\mathbb{Z}}_{p}}A_{\mathrm{cris}}(R)\cong\Lambda^{\S}\otimes_{\mathbb{Z}_{p}}A_{\mathrm{cris}}(R)$ by construction.

###### Assumption 2.16.

We will assume from now on that $[b]\in B(G)$ is _neutral acceptable_ in the sense of [RV14, Def. 2.3], i.e., $[b]\in B(G,\\{\mu\\})$. (Of course this is a completely harmless assumption with regard to the setting of Shimura varieties.)

There exists an internal tensor-preserving quasi-endomorphism $p$-divisible group:

###### Lemma 2.17.

[Kim19, Lemma 3.1.3] There is a $p$-divisible group $\mathcal{H}_{b}^{G}$ such that for any f-semiperfect $\bar{\mathbb{F}}_{p}$-algebra $R$ there is a natural $\mathbb{Q}_{p}$-linear isomorphism

$\tilde{\mathcal{H}}_{b}^{G}(R)\cong\operatorname{End}_{(s_{\alpha})}((\mathbb{X}_{b})_{R})\bigl{[}\tfrac{1}{p}\bigr{]},$

where $\tilde{\mathcal{H}}_{b}^{G}$ is the universal cover of $\mathcal{H}_{b}^{G}$, and where on the right hand side we have the set of $\gamma\in\operatorname{End}((\mathbb{X}_{b})_{R})[\frac{1}{p}]$ such that the endomorphism of $\Lambda^{\S}\otimes_{\mathbb{Z}_{p}}B_{\mathrm{cris}}^{+}(R)$ induced by $\gamma$ preserves the tensors $s_{\alpha}\otimes 1$.

What we really need is an internal tensor-preserving quasi-isogeny $p$-divisible group. Still following [Kim19], to this end we return to the group sheaf $\operatorname{Qisg}(\mathbb{X}_{b})$ on $\operatorname{Nilp}_{\breve{\mathbb{Z}}_{p}}$ introduced above. This can be realized as a closed formal subscheme of $\tilde{\mathcal{H}}_{b}^{2}$ (where $\mathcal{H}_{b}:=\mathcal{H}_{b^{\S}}=\mathcal{H}_{b^{\S}}^{\operatorname{GL}(V^{\S})}$ (no tensors)).
The closed formal subscheme $\operatorname{Qisg}_{G}(\mathbb{X}_{b})\subseteq\operatorname{Qisg}(\mathbb{X}_{b})$ over $\operatorname{Spf}\breve{\mathbb{Z}}_{p}$ then is defined by

$\operatorname{Qisg}_{G}(\mathbb{X}_{b})=\operatorname{Qisg}(\mathbb{X}_{b})\times_{\tilde{\mathcal{H}}_{b}^{2}}(\tilde{\mathcal{H}}_{b}^{G})^{2}.$

###### Proposition 2.18.

[Kim19, Prop 3.2.4] We have a natural map $\underline{J_{b}(\mathbb{Q}_{p})}\to\operatorname{Qisg}_{G}(\mathbb{X}_{b})$, which has a natural retraction $\operatorname{Qisg}_{G}(\mathbb{X}_{b})\to\underline{J_{b}(\mathbb{Q}_{p})}$ with all fibers isomorphic to $\operatorname{Spf}\breve{\mathbb{Z}}_{p}[\mkern-2.0mu[x_{1}^{p^{-\infty}},\dotsc,x_{d}^{p^{-\infty}}]\mkern-2.0mu]$ as formal schemes, where $d=\langle 2\rho,\nu_{[b]}\rangle$ with $2\rho$ the sum of all the positive roots of $G_{\bar{\mathbb{Q}}_{p}}$ and $\nu_{[b]}$ the dominant171717This requires distinguishing a Weyl chamber as the positive one; this arises from the choice of a convenient maximal torus in [Kim19, just before Rmk. 2.1.4]. representative of the $G(\breve{\mathbb{Q}}_{p})$-conjugacy class of $\nu_{b}\in X_{*}(G_{\breve{\mathbb{Q}}_{p}})_{\mathbb{Q}}$.181818So $b\mapsto\nu_{[b]}$ is the _Newton map_ in the parlance of [Kot97, 256].

###### Corollary 2.19.

Let $R\in\operatorname{Nilp}_{\breve{\mathbb{Z}}_{p}}$ be such that there is no non-trivial continuous homomorphism of $\breve{\mathbb{Z}}_{p}$-algebras $\breve{\mathbb{Z}}_{p}[\mkern-2.0mu[x^{p^{-\infty}}]\mkern-2.0mu]\to R$. Put differently, there is no $r\in R$ having a compatible system of $p$-power roots such that all power series with $\breve{\mathbb{Z}}_{p}$-coefficients in $r$ and its $p$-power roots converge. Then

$\operatorname{Qisg}_{G}(\mathbb{X}_{b})(R)\cong\underline{J_{b}(\mathbb{Q}_{p})}(R).$

###### Proof:

With $d$ as in the proposition, the assumption implies that there is no non-trivial continuous homomorphism of $\breve{\mathbb{Z}}_{p}$-algebras $\breve{\mathbb{Z}}_{p}[\mkern-2.0mu[x_{1}^{p^{-\infty}},\dotsc,x_{d}^{p^{-\infty}}]\mkern-2.0mu]\to R$. Define

$\operatorname{Qisg}_{G}^{\circ}(\mathbb{X}_{b}):=\ker(\operatorname{Qisg}_{G}(\mathbb{X}_{b})\to\underline{J_{b}(\mathbb{Q}_{p})})\cong\operatorname{Spf}\breve{\mathbb{Z}}_{p}[\mkern-2.0mu[x_{1}^{p^{-\infty}},\dotsc,x_{d}^{p^{-\infty}}]\mkern-2.0mu]$

and consider the split exact sequence

$0\to\operatorname{Qisg}_{G}^{\circ}(\mathbb{X}_{b})\to\operatorname{Qisg}_{G}(\mathbb{X}_{b})\to\underline{J_{b}(\mathbb{Q}_{p})}\to 0.$

By the assumption, $\operatorname{Qisg}_{G}^{\circ}(\mathbb{X}_{b})(R)$ contains only the identity, so evaluating the split sequence on $R$-points yields the claimed isomorphism. □

###### Example 2.20.

Say $R\in\operatorname{Nilp}_{\breve{\mathbb{Z}}_{p}}$ has characteristic $p$, i.e., $R$ is an $\bar{\mathbb{F}}_{p}$-algebra and the $p$-adic topology is the discrete topology. Let $r$ be an element of $R$ having a compatible system of $p$-power roots such that all power series with $\breve{\mathbb{Z}}_{p}$-coefficients in $r$ and its $p$-power roots converge. Then $r$ must be nilpotent. Therefore, if $R\in\operatorname{Nilp}_{\breve{\mathbb{Z}}_{p}}$ has characteristic $p$ and is reduced (e.g., is a field), then the conditions of Corollary 2.19 are satisfied.

In particular we deduce:

###### Example 2.21.

Let $L/\bar{\mathbb{F}}_{p}$ be a field extension. Then

$\operatorname{Qisg}_{G}(\mathbb{X}_{b})(L)\cong J_{b}(\mathbb{Q}_{p}).$

### 2.4 Almost product structure

###### Notation 2.22.
We shorten notation by writing

$\mathscr{S}:=\mathscr{S}_{K}(G,X)\otimes_{\mathcal{O}_{\mathbf{E},(v)}}{{\cal O}_{\breve{E}}}\quad\text{and}\quad\overline{\mathscr{S}}:=\mathscr{S}_{K}(G,X)\otimes_{\mathcal{O}_{\mathbf{E},(v)}}\bar{\mathbb{F}}_{p},$

as well as

$\mathscr{S}^{\S}:=\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\pm,\S})\otimes_{\mathbb{Z}_{(p)}}\breve{\mathbb{Z}}_{p}\quad\text{and}\quad\overline{\mathscr{S}}^{\S}:={\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\pm,\S})\otimes_{\mathbb{Z}_{(p)}}\bar{\mathbb{F}}_{p}}.$

Moreover, we denote the central leaf associated with $b$ by $\overline{C}^{b}:=\overline{C}^{b}_{K}:=\Upsilon_{K}^{-1}([\mkern-2.0mu[b]\mkern-2.0mu])\subseteq\overline{\mathscr{S}}$, and by $\overline{C}^{b^{\S}}\subseteq\overline{\mathscr{S}}^{\S}$ the corresponding central leaf in $\overline{\mathscr{S}}^{\S}$.

###### Assumption 2.23.

From now on, we assume that Axiom A from [HK17] holds.

###### Remark 2.24.

To explain Axiom A: Consider the _affine Deligne-Lusztig variety_

$X_{\mu}(b)_{K}(\bar{\mathbb{F}}_{p}):=\left\\{g\breve{K}_{p}\in G(\breve{\mathbb{Q}}_{p})/\breve{K}_{p}\;|\;g^{-1}b\sigma(g)\in\breve{K}_{p}\mathrm{Adm}(\\{\mu\\})\breve{K}_{p}\right\\}$

(here $\mathrm{Adm}(\\{\mu\\})$ is the _admissible subset_ as defined in [PRS13]). Choose a point $\bar{x}\in\overline{\mathscr{S}}(\bar{\mathbb{F}}_{p})$ with a quasi-isogeny $j\colon\mathbb{X}_{b}\to\mathscr{A}_{\bar{x}}[p^{\infty}]$ compatible with the extra structure. We have a map $i^{\S}_{(\bar{x},j)}\colon X_{\mu^{\S}}(b^{\S})(\bar{\mathbb{F}}_{p})\to\overline{\mathscr{S}}^{\S}(\bar{\mathbb{F}}_{p})$ given as follows: For $g\breve{J}_{p}\in X_{\mu^{\S}}(b^{\S})(\bar{\mathbb{F}}_{p})$, there is a $p$-divisible group $g\mathbb{X}_{b}$ isogenous to $\mathbb{X}_{b}$ with Frobenius on the Dieudonné module given by $g^{-1}b^{\S}\sigma(g)$. The quasi-isogeny $g\mathbb{X}_{b}\to\mathbb{X}_{b}\to\mathscr{A}_{\bar{x}}[p^{\infty}]$ lifts to a quasi-isogeny of abelian schemes191919Idea: A lift of a quasi-isogeny $y\colon A[p^{\infty}]\to X$, $A$ abelian scheme, $X$ $p$-divisible group, is given by $A\to A/\ker(y)$. We also get a polarization and a level structure away from $p$ on the new abelian scheme, which is the image of $g\breve{J}_{p}$ under $i^{\S}_{(\bar{x},j)}$. The content of Axiom A (= Assumption 2.23) now is that

$X_{\sigma(\mu)}(b)_{K}(\bar{\mathbb{F}}_{p})\hookrightarrow X_{\mu^{\S}}(b^{\S})(\bar{\mathbb{F}}_{p})\xrightarrow{i^{\S}_{(\bar{x},j)}}\overline{\mathscr{S}}^{\S}(\bar{\mathbb{F}}_{p})$

is to factor through $\overline{\mathscr{S}}^{-}(\bar{\mathbb{F}}_{p})$ such that there is a unique lift $i\colon X_{\sigma(\mu)}(b)_{K}(\bar{\mathbb{F}}_{p})\to\overline{\mathscr{S}}(\bar{\mathbb{F}}_{p})$ with $s_{\alpha,0,i(g\breve{K}_{p})}=s_{\alpha,0,\bar{x}}$ for all $g\breve{K}_{p}\in X_{\sigma(\mu)}(b)_{K}(\bar{\mathbb{F}}_{p})$.

###### Remark 2.25.

Assumption 2.23 holds in the hyperspecial case [Kis17, Thm. 1.4.4], and it holds if $G$ is residually split [Zho18, Prop. 6.4].

###### Remark 2.26.

With Assumption 2.23 in place, the central leaf $\overline{C}^{b}$ of $b$ is non-empty by [HK17, Rmk. 4.3.1].

We review some constructions from the paper [HK17]. By $\overline{\mathrm{Ig}}^{b^{\S}}$ we denote the (special fiber of the) Igusa variety over $\overline{C}^{b^{\S}}$, cf. [CS17, Prop. 1.12]. It parameterizes isomorphisms between the standard $p$-divisible group and the universal one, and it is a perfect scheme over $\bar{\mathbb{F}}_{p}$.
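In moduli-theoretic terms this means, roughly speaking (a paraphrase of the description in [CS17], suppressing the compatibility with the polarizations, which is only demanded up to a $\mathbb{Z}_{p}^{\times}$-scalar), that for perfect $\overline{C}^{b^{\S}}$-algebras $R$ one has

$\overline{\mathrm{Ig}}^{b^{\S}}(R)=\left\\{\rho\colon\mathbb{X}_{b}\otimes_{\bar{\mathbb{F}}_{p}}R\xrightarrow{\;\sim\;}\mathscr{A}_{\mathrm{Siegel}}[p^{\infty}]\times_{\overline{C}^{b^{\S}}}\operatorname{Spec}R\right\\}.$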
Define $\overline{\mathrm{Ig}}^{b,\diamond}:=\overline{\mathrm{Ig}}^{b^{\S}}\times_{\overline{\mathscr{S}}^{\S,\mathrm{perf}}}\overline{\mathscr{S}}^{\mathrm{perf}}$. Furthermore, $\mathfrak{Ig}^{b^{\S}},\mathfrak{Ig}^{b,\diamond}$ are the unique flat lifts202020Locally: Let $R$ be a perfect $\bar{\mathbb{F}}_{p}$-algebra. Then $W_{\mathcal{O}_{E}}(R)$ (ramified Witt vectors, cf. Remark 1.15) is the unique $\varpi$-adically complete ${{\cal O}_{\breve{E}}}$-flat algebra lifting it (cf. [Ahs11, Prop. 1.3.3]). of $\overline{\mathrm{Ig}}^{b^{\S}}$ and $\overline{\mathrm{Ig}}^{b,\diamond}$ over $\operatorname{Spf}{{\cal O}_{\breve{E}}}$, given by the formula

$\mathfrak{Ig}(R):=\overline{\mathrm{Ig}}(R/\varpi)\quad\text{for all }R\in\operatorname{Nilp}_{{{\cal O}_{\breve{E}}}}$ (2.27)

in both cases, where $\varpi$ is a uniformizer of ${{\cal O}_{\breve{E}}}$. By definition, we have an isomorphism

$j\colon\mathbb{X}_{b}\times\overline{\mathrm{Ig}}^{b,\diamond}\xrightarrow{\cong}\mathscr{A}_{\overline{C}^{b}}[p^{\infty}]\times_{\overline{C}^{b}}\overline{\mathrm{Ig}}^{b,\diamond}$

and $\overline{\mathrm{Ig}}^{b}$ is by definition the locus of geometric points of $\overline{\mathrm{Ig}}^{b,\diamond}$ where $j$ respects the crystalline Tate tensors. This is a closed union of connected components of $\overline{\mathrm{Ig}}^{b,\diamond}$ [HK17, Def./Lemma 5.1.1]. $\operatorname{Qisg}(\mathbb{X}_{b})$ acts on $\overline{\mathrm{Ig}}^{b^{\S}}$ and $\mathfrak{Ig}^{b^{\S}}$, $\overline{\mathrm{Ig}}^{b}\to\overline{\mathrm{Ig}}^{b^{\S}}$ is a closed immersion with $\operatorname{Qisg}_{G}(\mathbb{X}_{b})_{\bar{\mathbb{F}}_{p}}$-stable image, and $\mathfrak{Ig}^{b}\to\mathfrak{Ig}^{b^{\S}}$ is a closed immersion with $\operatorname{Qisg}_{G}(\mathbb{X}_{b})$-stable image [HK17, Prop. 5.1.2, Cor 5.1.3].

Let $\mathfrak{M}^{b^{\S}}\to\operatorname{Spf}\breve{\mathbb{Z}}_{p}$ be the Rapoport-Zink space given by

$\mathfrak{M}^{b^{\S}}(R)=\left\\{(\mathscr{X},\rho)\;\middle|\;\begin{tabular}[]{@{}l@{}}$\mathscr{X}/R$ polarized $p$-divisible group and\\\ $\rho\colon\mathbb{X}_{b}\otimes R/p\to\mathscr{X}\otimes R/p$ quasi-isogeny respecting the polarizations\end{tabular}\right\\}.$

Choose a point $\bar{x}\in\overline{\mathscr{S}}(\bar{\mathbb{F}}_{p})$ with a quasi-isogeny $j\colon\mathbb{X}_{b}\to\mathscr{A}_{\bar{x}}[p^{\infty}]$ compatible with the extra structure. $\mathfrak{M}^{b^{\S}}$ comes with the Rapoport-Zink uniformization map $\Theta^{\S}\colon\mathfrak{M}^{b^{\S}}\to\mathscr{S}^{\S}$ depending on this choice [RZ96, (6.3)]: Given $(\mathscr{X},\rho)$ as above, $\rho$ lifts to a quasi-isogeny $\tilde{\rho}\colon\mathbb{X}_{b}\otimes R\to\mathscr{X}$, and there is an abelian scheme $\mathscr{Y}$ with $p$-divisible group $\mathscr{X}$ and a unique lift $\tilde{\mathscr{A}}_{\bar{x}}\otimes R\to\mathscr{Y}$ of $\tilde{\rho}$ for a chosen lift $\tilde{\mathscr{A}}_{\bar{x}}$ of $\mathscr{A}_{\bar{x}}$ to $\breve{\mathbb{Z}}_{p}$. We also get a polarization and a level structure away from $p$ on $\mathscr{Y}$, and $\mathscr{Y}$ with these extra structures is the image of $(\mathscr{X},\rho)$ under $\Theta^{\S}$.

###### Remark 2.28.

The affine Deligne-Lusztig variety is the perfection of the special fiber of the Rapoport-Zink space [Zhu17, Prop. 0.4]. Under this isomorphism, $\Theta^{\S}$ corresponds to $i^{\S}$ from Remark 2.24.
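On $\bar{\mathbb{F}}_{p}$-points, the dictionary of Remark 2.28 is the familiar one from Dieudonné theory (a rough sketch, suppressing duals, Tate twists and the polarization): for $(\mathscr{X},\rho)\in\mathfrak{M}^{b^{\S}}(\bar{\mathbb{F}}_{p})$, the quasi-isogeny $\rho$ identifies the Dieudonné module (chain) of $\mathscr{X}$ with a lattice (chain) of the form $g\cdot\mathbb{D}(\mathbb{X}_{b})(\breve{\mathbb{Z}}_{p})$ inside $\mathbb{D}(\mathbb{X}_{b})(\breve{\mathbb{Z}}_{p})[\tfrac{1}{p}]$ for some $g\in\operatorname{GSp}(V^{\S})(\breve{\mathbb{Q}}_{p})$, and the requirement that $b^{\S}\sigma$ induce the Frobenius of a $p$-divisible group of the correct type on this lattice chain translates into the condition

$g^{-1}b^{\S}\sigma(g)\in\breve{J}_{p}\,\mathrm{Adm}(\\{\mu^{\S}\\})\,\breve{J}_{p}$

defining $X_{\mu^{\S}}(b^{\S})(\bar{\mathbb{F}}_{p})$.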
Define $\mathfrak{M}^{b,\diamond}:=\mathfrak{M}^{b^{\S}}\times_{\mathscr{S}^{\S}}\mathscr{S}$ and define $\Theta^{\diamond}\colon\mathfrak{M}^{b,\diamond}\to\mathscr{S}$ to be the base change of $\Theta^{\S}$. Let $\mathscr{X}^{\diamond}\to\mathfrak{M}^{b,\diamond}$ be the pullback of the universal $p$-divisible group over $\mathfrak{M}^{b^{\S}}$. Then we have two families of tensors on $\mathbb{D}(\mathscr{X}^{\diamond})[\tfrac{1}{p}]$ (or more precisely, two families of maps $\mathfrak{M}^{b,\diamond}(\bar{\mathbb{F}}_{p})\ni\bar{y}\mapsto$ tensor on $\mathbb{D}(\mathscr{X}^{\diamond}_{\bar{y}})[\tfrac{1}{p}]$): 1. (1) $\left(t_{\alpha}^{\diamond}\right)_{\alpha}$ obtained from $\left(s_{\alpha}\right)_{\alpha}$ via $\mathbb{D}(\mathbb{X}_{b}\otimes\mathfrak{M}^{b,\diamond})[\tfrac{1}{p}]\cong\mathbb{D}(\mathscr{X}^{\diamond})[\tfrac{1}{p}]$, and 2. (2) for every $\bar{y}\in\mathfrak{M}^{b,\diamond}(\bar{\mathbb{F}}_{p})$, $\left(u_{\alpha,\bar{y}}^{\diamond}\right)_{\alpha}$ with $u_{\alpha,\bar{y}}^{\diamond}:=(\Theta^{\diamond})^{*}s_{\alpha,0,\Theta^{\diamond}(\bar{y})}$. $\mathfrak{M}^{b}$ is defined to be the formal subscheme of $\mathfrak{M}^{b,\diamond}$ corresponding to the locus where these agree. Also, $\Theta\colon\mathfrak{M}^{b}\to\mathscr{S}$ is defined to be the restriction of $\Theta^{\diamond}$. Moreover, $\mathfrak{M}^{b}$ has a natural $\operatorname{Qisg}_{G}(\mathbb{X}_{b})$-action. By [CS17, § 4], there is an isomorphism $\mathfrak{Ig}^{b^{\S}}\times\mathfrak{M}^{b^{\S}}\xrightarrow{\sim}\mathfrak{X}^{b^{\S}}$ (2.29) with the _Newton-Igusa variety_ $\mathfrak{X}^{b^{\S}}(R)=\left\\{(A,\lambda,\eta^{p};\psi)\;\middle|\;\begin{tabular}[]{@{}l@{}}$(A,\lambda,\eta^{p})\in\mathscr{S}^{\S}(R)$,\\\ $\psi\colon(\mathbb{X}_{b},\lambda_{\mathbb{X}_{b}})\otimes R/p\to(A[p^{\infty}],\lambda)\otimes R/p$ quasi- isogeny\end{tabular}\right\\}.$ ###### Remark 2.30. Let us quickly recall how the isomorphism (2.29) works (suppressing polarizations in the notation for simplicity). Let $(\mathcal{A},\xi)\in\mathfrak{Ig}^{b^{\S}}(R)$ and $(\mathscr{X},\rho)\in\mathfrak{M}^{b^{\S}}(R)$ be given. Consider the composition $\mathscr{X}\xrightarrow{\rho^{-1}}\mathbb{X}_{b}\otimes R\xrightarrow{\xi}\mathcal{A}[p^{\infty}],$ (2.31) where we denote a lift $\mathbb{X}_{b}\otimes R/p\xrightarrow{\rho}\mathscr{X}\otimes R/p$ by $\rho$ again. Lift (2.31) to a quasi-isogeny of abelian schemes $\mathcal{A}^{\prime}\to\mathcal{A}$. Then $(\mathcal{A}^{\prime},\rho)\in\mathfrak{X}^{b^{\S}}(R)$ is our image point. Define $\mathfrak{X}^{b}\subseteq\mathfrak{X}^{b^{\S}}$ to be the image of $\mathfrak{Ig}^{b}\times\mathfrak{M}^{b}$ under the isomorphism (2.29). This comes with a natural $\operatorname{Qisg}_{G}(\mathbb{X}_{b})$-action; cf. [HK17, section 5.2]. We have canonical maps $\pi_{\infty}^{\S}\colon\mathfrak{X}^{b^{\S}}\to\mathscr{S}^{\S}$ and $\pi_{\infty}\colon\mathfrak{X}^{b}\to\mathscr{S}$. ###### Remark 2.32. By [HK17, Thm. 
5.2.6 (1)], $\pi_{\infty,\bar{\mathbb{F}}_{p}}^{\mathrm{perf}}\colon\mathfrak{X}^{b,\mathrm{perf}}_{\bar{\mathbb{F}}_{p}}\to\overline{\mathscr{S}}^{\mathrm{perf}}$ represents the moduli problem

$(\text{perfect affine }\overline{\mathscr{S}}^{\mathrm{perf}}\text{-schemes})\to\mathrm{Set},\qquad(\operatorname{Spec}(R)\xrightarrow{Q}\overline{\mathscr{S}}^{\mathrm{perf}})\mapsto\begin{Bmatrix}\psi\colon(\mathbb{X}_{b})_{R}\to(\mathscr{G}_{\overline{\mathscr{S}}^{\mathrm{perf}}})_{Q}\text{ quasi-isogeny}\\\ \text{compatible with crystalline Tate-tensors}\end{Bmatrix}$

with $\mathscr{G}_{\overline{\mathscr{S}}^{\mathrm{perf}}}$ and crystalline Tate tensors on the right hand side as in Proposition 1.66.

### 2.5 Change of parahoric level

Now we consider the question of how the central leaves behave when the level at $p$ is changed from $K_{p}$ to a larger $K_{p}^{\prime}$, where $K_{p}$ and $K_{p}^{\prime}$ are associated with points $x,x^{\prime}\in\mathcal{B}(G,\mathbb{Q}_{p})$ as described in Chapter 1. Define $K:=K_{p}K^{p}$, as usual, and $K^{\prime}:=K_{p}^{\prime}K^{p}$. Let $b\in G(\breve{\mathbb{Q}}_{p})$.

Let $(\mathcal{L},c)$ and $(\mathcal{L}^{\prime},c^{\prime})$ be the graded lattice chains that are the images of $x$ and $x^{\prime}$, respectively, under the embedding of buildings $\iota\colon\mathcal{B}(G,\mathbb{Q}_{p})\hookrightarrow\mathcal{B}(\operatorname{GSp}(V),\mathbb{Q}_{p})$ described in Section 1.2. Note that the gradings $c$ and $c^{\prime}$ play no role for our purposes.

###### Remarks 2.33.

1. (1) Observe that $K_{p}$ and $K_{p}^{\prime}$ depend only on the minimal facets $\mathfrak{f}$ and $\mathfrak{f}^{\prime}$ with $x\in\mathfrak{f}$ and $x^{\prime}\in\mathfrak{f}^{\prime}$, respectively. The inclusion $K_{p}\subseteq K_{p}^{\prime}$ means precisely that $\mathfrak{f}^{\prime}\subseteq\overline{\mathfrak{f}}$. We can move $x^{\prime}$ arbitrarily close to $x$ without altering $K_{p}^{\prime}$. The minimal facets $\mathfrak{g}$ and $\mathfrak{g}^{\prime}$ of $\mathcal{B}(\operatorname{GSp}(V),\mathbb{Q}_{p})$ containing $\iota(x)$ and $\iota(x^{\prime})$, respectively, do _not_ depend only on $\mathfrak{f},\mathfrak{f}^{\prime}$. Still, for $x^{\prime}$ sufficiently close to $x$, we may assume that $\mathfrak{g}^{\prime}\subseteq\overline{\mathfrak{g}}$ (if it were not so, we would not be able to move $\iota(x^{\prime})$ arbitrarily close to $\iota(x)$ — but recall that $\iota$ is continuous), i.e., that $\mathcal{L}^{\prime}$ is a thinned out version of $\mathcal{L}$. Say $\mathcal{L}$ is given by $\Lambda^{0}\supsetneqq\Lambda^{1}\supsetneqq\dotsb\supsetneqq\Lambda^{r-1}\supsetneqq p\Lambda^{0}$ (as in (1.12)); then $\mathcal{L}^{\prime}$ is given by $(\Lambda^{i_{j}})_{j}$ for some $0\leq i_{1}<i_{2}<\dotsb<i_{s}<r$.

2. (2) In proofs, we can often reduce to the case where “$\mathcal{L}$ and $\mathcal{L}^{\prime}$ differ by one element”, which of course is supposed to mean that $s=r-2$ in this notation.

###### Notation 2.34.

We define $N_{p},N_{p}^{\prime},J_{p},J_{p}^{\prime},\Lambda^{\S},\left.\Lambda^{\prime}\right.^{\S},V^{\S},\left.V^{\prime}\right.^{\S}$ as in Section 1.4.
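To illustrate the “thinning out” of Remark 2.33 (1) in the simplest linear situation (an illustration for $\operatorname{GL}_{2}$ only, not for our $G$): the full lattice chain $\mathbb{Z}_{p}^{2}\supsetneqq\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}\supsetneqq p\mathbb{Z}_{p}^{2}$ has as simultaneous stabilizer of its members the Iwahori subgroup

$\left\\{\begin{pmatrix}a&b\\ pc&d\end{pmatrix}\in\operatorname{GL}_{2}(\mathbb{Z}_{p})\;\middle|\;a,b,c,d\in\mathbb{Z}_{p}\right\\},$

while the thinned out chain $\mathbb{Z}_{p}^{2}\supsetneqq p\mathbb{Z}_{p}^{2}$ has the larger stabilizer $\operatorname{GL}_{2}(\mathbb{Z}_{p})$. Dropping lattices from the chain thus enlarges the stabilizer, exactly as in the passage from $K_{p}$ to the larger $K_{p}^{\prime}$.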
#### 2.5.1 The change-of-parahoric morphism

As explained in Section 1.6, we obtain (for $K^{p}$ sufficiently small and some compact open subgroup $N^{p}\subseteq\operatorname{GSp}(V)(\mathbb{A}_{f}^{p})$) a commutative diagram

$\begin{array}{ccc}\mathscr{S}_{K_{p}K^{p}}(G,X)&\xrightarrow{\;\pi_{K_{p}K^{p},K_{p}^{\prime}K^{p}}\;}&\mathscr{S}_{K^{\prime}_{p}K^{p}}(G,X)\\ \downarrow\scriptstyle\text{finite}&&\downarrow\scriptstyle\text{finite}\\ \mathscr{S}_{N_{p}N^{p}}(\operatorname{GSp}(V),S^{\pm})_{\mathcal{O}_{\mathbf{E},(v)}}&\xrightarrow{\;\pi_{N_{p}N^{p},N_{p}^{\prime}N^{p}}\;}&\mathscr{S}_{N_{p}^{\prime}N^{p}}(\operatorname{GSp}(V),S^{\pm})_{\mathcal{O}_{\mathbf{E},(v)}}\end{array}$

###### Notation 2.35.

1. (1) The change-of-parahoric map $\pi_{K,K^{\prime}}:=\pi_{K_{p}K^{p},K_{p}^{\prime}K^{p}}$ restricts to a change-of-parahoric map $\Upsilon_{K_{p}K^{p}}^{-1}([\mkern-2.0mu[b]\mkern-2.0mu])\to\Upsilon_{K_{p}^{\prime}K^{p}}^{-1}([\mkern-2.0mu[b]\mkern-2.0mu])$ between leaves, which we denote by $\left.\pi_{K,K^{\prime}}\right|_{\Upsilon_{K}^{-1}([\mkern-2.0mu[b]\mkern-2.0mu])}$ or simply by $\pi_{K,K^{\prime}}$ again.
2. (2) We choose a base point $\bar{x}_{b}\in\overline{C}^{b}_{K}(\bar{\mathbb{F}}_{p})$ and take its image under $\pi_{K,K^{\prime}}$ as a base point $\bar{x}_{b}^{\prime}\in\overline{C}^{b}_{K^{\prime}}(\bar{\mathbb{F}}_{p})$. By $\mathbb{X}_{b,K}$ and $\mathbb{X}_{b,K^{\prime}}$, respectively, we denote the corresponding polarized $p$-divisible groups.
3. (3) Denote by $\mathscr{A}_{K}$ the universal polarized abelian scheme over $\overline{\mathscr{S}}_{K}$. By slight abuse of notation, we will also use the same notation for its pullback to $\overline{\mathscr{S}}_{K}^{\mathrm{perf}}$.

###### Remark 2.36.

We have morphisms $\mathbb{X}_{b,K}\to\mathbb{X}_{b,K^{\prime}}$ and, more generally, $\mathscr{A}_{K}\to\mathscr{A}_{K^{\prime}}$ (lying over $\overline{\mathscr{S}}_{K}\to\overline{\mathscr{S}}_{K^{\prime}}$). This follows from the “thinning out” interpretation, Remark 2.33 (1).

#### 2.5.2 Newton-Igusa variety and change-of-parahoric

###### Lemma 2.37.

There are natural change-of-parahoric morphisms

$\mathfrak{Ig}^{b}_{K}\to\mathfrak{Ig}^{b}_{K^{\prime}},\quad\mathfrak{M}^{b}_{K}\to\mathfrak{M}^{b}_{K^{\prime}},\quad\mathfrak{X}^{b}_{K}\to\mathfrak{X}^{b}_{K^{\prime}}$

lying over the change-of-parahoric morphism $\mathscr{S}_{K}\to\mathscr{S}_{K^{\prime}}$ and compatible with the Siegel embeddings.

###### Proof:

Consider the universal isomorphism

$j\colon\mathbb{X}_{b,K}\times\overline{\mathrm{Ig}}_{K}^{b,\diamond}\xrightarrow{\cong}\mathscr{A}_{K,\overline{C}^{b}_{K}}[p^{\infty}]\times_{\overline{C}^{b}_{K}}\overline{\mathrm{Ig}}_{K}^{b,\diamond}=\mathscr{A}_{K}[p^{\infty}]\times_{\overline{\mathscr{S}}_{K}^{\mathrm{perf}}}\overline{\mathrm{Ig}}_{K}^{b,\diamond}$

and the diagram

$\begin{array}{ccc}\mathbb{X}_{b,K}\times\overline{\mathrm{Ig}}^{b,\diamond}_{K}&\xrightarrow{\;\cong\;}&\mathscr{A}_{K}[p^{\infty}]\times_{\overline{\mathscr{S}}_{K}^{\mathrm{perf}}}\overline{\mathrm{Ig}}_{K}^{b,\diamond}\\ \downarrow&&\downarrow\\ \mathbb{X}_{b,K^{\prime}}\times\overline{\mathrm{Ig}}^{b,\diamond}_{K}&\overset{\cong}{\dashrightarrow}&\mathscr{A}_{K^{\prime}}[p^{\infty}]\times_{\overline{\mathscr{S}}_{K^{\prime}}^{\mathrm{perf}}}\overline{\mathrm{Ig}}_{K}^{b,\diamond}\end{array}$

with vertical arrows induced by the morphisms of Remark 2.36. The dashed arrow exists on $\overline{\mathrm{Ig}}^{b}_{K}$, i.e., $\mathbb{X}_{b,K^{\prime}}\times\overline{\mathrm{Ig}}^{b}_{K}\xrightarrow{\cong}\mathscr{A}_{K^{\prime}}[p^{\infty}]\times_{\overline{\mathscr{S}}_{K^{\prime}}^{\mathrm{perf}}}\overline{\mathrm{Ig}}_{K}^{b}$ exists.
The reason is that the tensors are respected on $\overline{\mathrm{Ig}}_{K}^{b}$, which in particular forces $\mathbb{X}_{b,K}\times\overline{\mathrm{Ig}}^{b,\diamond}_{K}\xrightarrow{\cong}\mathscr{A}_{K}[p^{\infty}]\times_{\overline{\mathscr{S}}_{K}^{\mathrm{perf}}}\overline{\mathrm{Ig}}_{K}^{b,\diamond}$ to be “diagonal”, similarly to the proof of Proposition 1.24. This isomorphism $\mathbb{X}_{b,K^{\prime}}\times\overline{\mathrm{Ig}}^{b}_{K}\xrightarrow{\cong}\mathscr{A}_{K^{\prime}}[p^{\infty}]\times_{\overline{\mathscr{S}}_{K^{\prime}}^{\mathrm{perf}}}\overline{\mathrm{Ig}}_{K}^{b}$ now corresponds to a morphism $\overline{\mathrm{Ig}}_{K}^{b}\to\overline{\mathrm{Ig}}_{K^{\prime}}^{b}$. By construction (2.27) of $\mathfrak{Ig}^{b}$, we also get a corresponding morphism $\mathfrak{Ig}_{K}^{b}\to\mathfrak{Ig}_{K^{\prime}}^{b}$. For $\mathfrak{X}$ and $\mathfrak{M}$, it is very similar. □

###### Lemma 2.38.

The isomorphism $\mathfrak{Ig}^{b}\times\mathfrak{M}^{b}\cong\mathfrak{X}^{b}$ is compatible with change-of-parahoric, where the change-of-parahoric map for $\mathfrak{Ig}^{b}\times\mathfrak{M}^{b}$ is by definition the product of those for $\mathfrak{Ig}^{b}$ and $\mathfrak{M}^{b}$, respectively.

###### Proof:

All of $\mathfrak{Ig}^{b},\mathfrak{M}^{b},\mathfrak{X}^{b}$ are embedded into the corresponding Siegel versions as closed formal subschemes, compatible with change-of-parahoric. Hence we may assume that we are in the Siegel case. In that case, the isomorphism has a very concrete description, cf. Remark 2.30, from which the lemma is clear. □

###### Definition 2.39.

By $\overline{\mathscr{S}}_{K}^{b}:=\overline{\mathscr{S}}_{K}^{[b]}$ we denote the Newton stratum associated with ${[b]\in B(G)}$, i.e., $\bar{S}_{K,[b]}$ in the notation of [HR17].

###### Lemma 2.40.

Let $\Omega/\mathbb{F}_{p}$ be an algebraically closed field. The action of $J_{b}(\mathbb{Q}_{p})$ on the fibers of $\mathfrak{X}_{K}^{b}(\Omega)\to\overline{\mathscr{S}}^{b}_{K}(\Omega)$ is simply transitive and the change-of-parahoric map $\mathfrak{X}_{K}^{b}(\Omega)\to\mathfrak{X}_{K^{\prime}}^{b}(\Omega)$ is $J_{b}(\mathbb{Q}_{p})$-equivariant.

###### Proof:

The moduli description of Remark 2.32 gives us that the action is simply transitive upon noting that

$\operatorname{Qisg}_{G}(\mathbb{X}_{b,K})(\Omega)\cong J_{b}(\mathbb{Q}_{p})\cong\operatorname{Qisg}_{G}(\mathbb{X}_{b,K^{\prime}})(\Omega),$

both isomorphisms being instances of Example 2.21. Equivariance follows from the description of the action of $J_{b}(\mathbb{Q}_{p})$ and of the change-of-parahoric map in terms of lattice chains in Dieudonné theory: Given a point $Q\colon\operatorname{Spec}\Omega\to\overline{\mathscr{S}}_{K}^{b}$, we consider the map between the fiber of $Q$ under $\mathfrak{X}^{b}_{K}(\Omega)\to\overline{\mathscr{S}}^{b}_{K}(\Omega)$ and the fiber of the image of $Q$ under $\mathfrak{X}^{b}_{K^{\prime}}(\Omega)\to\overline{\mathscr{S}}^{b}_{K^{\prime}}(\Omega)$. We identify the lattice chain associated with $Q$ with a standard lattice chain such that the Frobenius is identified with a $b^{\prime}\in G(\breve{\mathbb{Q}}_{p})$, $b^{\prime}\equiv b\mod\breve{K}_{p,\sigma}$. Elements of $J_{b}(\mathbb{Q}_{p})$ then act naturally on the common rational Dieudonné module. The action on the fiber of $Q$ is given by altering the quasi-isogenies appearing in the description of that set by the quasi-isogeny obtained this way.
Passing from $K$ to $K^{\prime}$, i.e., from $Q$ to the image of $Q$, means leaving out parts of the lattice chain and enlarging $\breve{K}_{p}$ (so one still has the same $b^{\prime}$). □

###### Proposition 2.41.

The change-of-parahoric map $\mathfrak{X}^{b}_{K,\bar{\mathbb{F}}_{p}}\to\mathfrak{X}^{b}_{K^{\prime},\bar{\mathbb{F}}_{p}}$ is surjective.

###### Proof:

We check that $\mathfrak{X}^{b}_{K}(\Omega)\to\mathfrak{X}^{b}_{K^{\prime}}(\Omega)$ is surjective for every algebraically closed field $\Omega/\mathbb{F}_{p}$. Considering the diagram

$\begin{array}{ccc}\mathfrak{X}^{b}_{K}(\Omega)&\longrightarrow&\mathfrak{X}^{b}_{K^{\prime}}(\Omega)\\ \downarrow&&\downarrow\\ \overline{\mathscr{S}}_{K}^{b}(\Omega)&\longrightarrow&\overline{\mathscr{S}}_{K^{\prime}}^{b}(\Omega)\end{array}$

this follows from the preceding lemma and the fact212121This follows simply from the surjectiveness of $\overline{\mathscr{S}}_{K}\to\overline{\mathscr{S}}_{K^{\prime}}$ and the commutativity of the triangle with vertices $\overline{\mathscr{S}}_{K}$, $\overline{\mathscr{S}}_{K^{\prime}}$ and $B(G)$, the non-horizontal maps being the Newton maps. that the change-of-parahoric map between Newton strata is surjective. □

###### Corollary 2.42.

The map $\overline{\mathrm{Ig}}_{K}^{b}\to\overline{\mathrm{Ig}}_{K^{\prime}}^{b}$ is an isomorphism.

###### Proof:

Lemma 2.38 and Proposition 2.41 imply that it is surjective. Also, it is the restriction to closed subschemes of the isomorphism $\overline{\mathrm{Ig}}_{K}^{b^{\S}}\to\overline{\mathrm{Ig}}_{K^{\prime}}^{b^{\S}}$. Since all involved schemes are reduced, the corollary follows. □

###### Corollary 2.43.

The change-of-parahoric morphism between central leaves is surjective.

###### Proof:

We have a diagram

$\begin{array}{ccc}\overline{\mathrm{Ig}}_{K}^{b}&\xrightarrow{\;\cong\;}&\overline{\mathrm{Ig}}_{K^{\prime}}^{b}\\ \downarrow&&\downarrow\\ \Upsilon^{-1}_{K}(b)&\longrightarrow&\Upsilon^{-1}_{K^{\prime}}(b)\end{array}$

where we already know that all maps but the lower horizontal one are surjective. □

###### Corollary 2.44.

The separable rank of $\Upsilon^{-1}_{K}(b)\to\Upsilon^{-1}_{K^{\prime}}(b)$, i.e., the number of geometric points of the fibers, is finite and constant.

###### Proof:

We have

$\Upsilon^{-1}_{K}(b)\cong\overline{\mathrm{Ig}}_{K}^{b}/\operatorname{Aut}(\mathbb{X}_{b,K})\cong\overline{\mathrm{Ig}}_{K^{\prime}}^{b}/\operatorname{Aut}(\mathbb{X}_{b,K})\quad\text{and}\quad\Upsilon^{-1}_{K^{\prime}}(b)\cong\overline{\mathrm{Ig}}_{K^{\prime}}^{b}/\operatorname{Aut}(\mathbb{X}_{b,K^{\prime}}),$

so that all fibers are isomorphic to $\operatorname{Aut}(\mathbb{X}_{b,K^{\prime}})/\operatorname{Aut}(\mathbb{X}_{b,K})$. □

###### Corollary 2.45.

The change-of-parahoric map between central leaves is finite.

###### Proof:

We have just seen it to be quasi-finite. Corollary 2.13, combined with the properness of the change-of-parahoric itself and therefore of the change-of-parahoric map restricted to Newton strata (similar argument as in footnote 21), implies the properness of the change-of-parahoric map between central leaves. Being proper and quasi-finite, this map is finite. □

###### Remark 2.46.

By [Kim19, Cor. 5.3.1], leaves are equidimensional and smooth, and the dimension of a leaf depends only on the Newton stratum it is in; in particular, if we consider a change-of-parahoric map between leaves, $\pi_{K,K^{\prime}}\colon\Upsilon^{-1}_{K}(y)\to\Upsilon_{K^{\prime}}^{-1}(y^{\prime}),$ then $\dim\Upsilon^{-1}_{K}(y)=\dim\Upsilon_{K^{\prime}}^{-1}(y^{\prime})$.

###### Corollary 2.47.

The change-of-parahoric map between central leaves is finite locally free.

###### Proof:

Combine Corollary 2.45 with the preceding remark and [GW10, Cor.
14.128]222222Note that “$y\in Y$” may be replaced by “$y\in f(X)$” in the statement of [GW10, Cor. 14.128].. □ ###### Corollary 2.48. The change-of-parahoric morphism between central leaves is the composition of a flat universal homeomorphism of finite type and a finite étale morphism. ###### Proof: This follows from what has been established about the morphism by using [Mes72, Lemma 4.8]. □ ## References * [1] P. Hamacher and W. Kim “$l$-adic étale cohomology of Shimura varieties of Hodge type with non-trivial coefficients” In _Math. Ann._ 375.3-4 Springer, Berlin/Heidelberg, 2019, pp. 973–1044 * [2] M. Kisin and G. Pappas “Integral models of Shimura varieties with parahoric level structure” In _Publ. Math., Inst. Hautes Étud. Sci._ 128 Springer, Berlin/Heidelberg; Institut des Hautes Études Scientifiques, Bures-sur-Yvette, 2018, pp. 121–218 * [Ahs11] Tobias Ahsendorf “$\mathcal{O}$-displays and $\pi$-divisible formal $\mathcal{O}$-modules”, 2011 URL: http://nbn-resolving.de/urn:nbn:de:hbz:361-24713520 * [AT08] Alexander Arhangel’skii and Mikhail Tkachenko “Topological groups and related structures” Hackensack, NJ: World Scientific; Paris: Atlantis Press, 2008 * [BBM82] Pierre Berthelot, Lawrence Breen and William Messing “Théorie de Dieudonné cristalline. II” 930, Lecture Notes in Mathematics Springer-Verlag, Berlin, 1982, pp. x+261 DOI: 10.1007/BFb0093025 * [BG18] Alessandra Bertapelle and Cristian D. González-Avilés “On the perfection of schemes” In _Expo. Math._ 36.2 Elsevier, Munich, 2018, pp. 197–220 * [BM90] Pierre Berthelot and William Messing “Théorie de Dieudonné cristalline. III. Théorèmes d’équivalence et de pleine fidélité” In _The Grothendieck Festschrift, Vol. I_ 86, Progr. Math. Birkhäuser Boston, Boston, MA, 1990, pp. 173–247 * [Bor69] Armand Borel “Introduction aux groupes arithmétiques”, Publications de l’Institut de Mathématique de l’Université de Strasbourg, XV. Actualités Scientifiques et Industrielles, No. 1341 Hermann, Paris, 1969 * [BT84] François Bruhat and Jacques Tits “Groupes réductifs sur un corps local. II. Schémas en groupes. Existence d’une donnée radicielle valuée.” In _Publ. Math., Inst. Hautes Étud. Sci._ 60 Springer, Berlin/Heidelberg; Institut des Hautes Études Scientifiques, Bures-sur-Yvette, 1984, pp. 1–194 * [BT84a] François Bruhat and Jacques Tits “Schémas en groupes et immeubles des groupes classiques sur un corps local” In _Bull. Soc. Math. Fr._ 112 Société Mathématique de France (SMF), Paris, 1984, pp. 259–301 DOI: 10.24033/bsmf.2006 * [CS17] Ana Caraiani and Peter Scholze “On the generic part of the cohomology of compact unitary Shimura varieties.” In _Ann. Math. (2)_ 186.3 Princeton University, Mathematics Department, Princeton, NJ; Mathematical Sciences Publishers (MSP), Berkeley, CA, 2017, pp. 649–766 DOI: 10.4007/annals.2017.186.3.1 * [Dan36] D. Dantzig “Zur topologischen Algebra. III: Brouwersche und Cantorsche Gruppen” In _Compos. Math._ 3 Cambridge University Press, Cambridge; London Mathematical Society, London, 1936, pp. 408–426 * [Del11] Pierre Deligne “Letter to Kisin”, http://people.math.binghamton.edu/adrian/Letter_Deligne.pdf, 2011 * [Del71] Pierre Deligne “Travaux de Shimura” In _Séminaire Bourbaki, 23ème année (1970/71), Exp. No. 389_ 244, Lecture Notes in Math. Springer, Berlin, 1971, pp. 123–165 * [Del82] Pierre Deligne “Hodge cycles on abelian varieties. (Notes by J. S. Milne).”, Hodge cycles, motives, and Shimura varieties, Lect. Notes Math. 900, 9-100 (1982)., 1982 * [EGA2] A. 
* [EGA2] A. Grothendieck “Éléments de géométrie algébrique. II. Étude globale élémentaire de quelques classes de morphismes” In _Inst. Hautes Études Sci. Publ. Math._, 1961 URL: http://www.numdam.org/item?id=PMIHES_1961__8__222_0
* [EGA4] Alexander Grothendieck “Éléments de géométrie algébrique IV: Étude locale des schémas et des morphismes de schémas” Publications mathématiques de l’I.H.É.S., 1964–1967
* [Gro74] Alexandre Grothendieck “Groupes de Barsotti-Tate et cristaux de Dieudonné” Séminaire de Mathématiques Supérieures, No. 45 (Été, 1970) Les Presses de l’Université de Montréal, Montreal, Que., 1974
* [GW10] Ulrich Görtz and Torsten Wedhorn “Algebraic geometry I” Schemes with examples and exercises, Advanced Lectures in Mathematics Vieweg + Teubner, Wiesbaden, 2010 DOI: 10.1007/978-3-8348-9722-0
* [Hai05] Thomas J. Haines “Introduction to Shimura varieties with bad reduction of parahoric type” In _Harmonic analysis, the trace formula, and Shimura varieties. Proceedings of the Clay Mathematics Institute 2003 summer school, Toronto, Canada, June 2–27, 2003_ Providence, RI: American Mathematical Society (AMS), 2005, pp. 583–642
* [Ham17] Paul Hamacher “The almost product structure of Newton strata in the deformation space of a Barsotti-Tate group with crystalline Tate tensors” In _Math. Z._ 287.3-4, 2017, pp. 1255–1277 DOI: 10.1007/s00209-017-1867-2
* [Haz78] Michiel Hazewinkel “Formal groups and applications”, Pure and Applied Mathematics, 78. New York-San Francisco-London: Academic Press. XXII, 573 p., 1978
* [Hes20] Jens Hesse “Central leaves and EKOR strata on Shimura varieties with parahoric reduction”, 2020 URL: http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-115430
* [Hes20a] Jens Hesse “EKOR strata on Shimura varieties with parahoric reduction” Preprint, 2020 arXiv:2003.04738 [math.AG]
* [HK17] P. Hamacher and W. Kim “$l$-adic étale cohomology of Shimura varieties of Hodge type with non-trivial coefficients” Preprint, 2017 arXiv:1711.07123v1 [math.NT]
* [HR17] X. He and M. Rapoport “Stratifications in the reduction of Shimura varieties” In _Manuscr. Math._ 152.3-4 Springer, Berlin/Heidelberg, 2017, pp. 317–343 DOI: 10.1007/s00229-016-0863-x
* [Jon95] A. J. de Jong “Crystalline Dieudonné module theory via formal and rigid geometry” In _Inst. Hautes Études Sci. Publ. Math._, 1995, pp. 5–96 URL: http://www.numdam.org/item?id=PMIHES_1995__82__5_0
* [Kim19] Wansu Kim “On central leaves of Hodge-type Shimura varieties with parahoric level structure” In _Math. Z._ 291.1-2 Springer, Berlin/Heidelberg, 2019, pp. 329–363
* [Kis10] Mark Kisin “Integral models for Shimura varieties of abelian type” In _J. Amer. Math. Soc._ 23.4, 2010, pp. 967–1012 DOI: 10.1090/S0894-0347-10-00667-3
* [Kis17] Mark Kisin “Mod $p$ points on Shimura varieties of abelian type” In _J. Am. Math. Soc._ 30.3 American Mathematical Society (AMS), Providence, RI, 2017, pp. 819–914
* [Kot92] Robert E. Kottwitz “Points on some Shimura varieties over finite fields” In _J. Amer. Math. Soc._ 5.2, 1992, pp. 373–444 DOI: 10.2307/2152772
* [Kot97] Robert E. Kottwitz “Isocrystals with additional structure. II” In _Compositio Math._ 109.3, 1997, pp. 255–339 DOI: 10.1023/A:1000102604688
* [KP15] M. Kisin and G. Pappas “Integral models of Shimura varieties with parahoric level structure” Preprint, 2015 arXiv:1512.01149v2 [math.AG]
* [Lan00] Erasmus Landvogt “Some functorial properties of the Bruhat-Tits building” In _J. Reine Angew. Math._ 518 De Gruyter, Berlin, 2000, pp. 213–241 DOI: 10.1515/crll.2000.006
* [Lan96] Erasmus Landvogt “A compactification of the Bruhat-Tits building” 1619, Lecture Notes in Mathematics Springer-Verlag, Berlin, 1996, pp. viii+152 DOI: 10.1007/BFb0094594
* [Lau14] Eike Lau “Relations between Dieudonné displays and crystalline Dieudonné theory” In _Algebra Number Theory_ 8.9, 2014, pp. 2201–2262 DOI: 10.2140/ant.2014.8.2201
* [Man04] Elena Mantovan “On certain unitary group Shimura varieties” Variétés de Shimura, espaces de Rapoport-Zink et correspondances de Langlands locales In _Astérisque_, 2004, pp. 201–331
* [Man05] Elena Mantovan “On the cohomology of certain PEL-type Shimura varieties” In _Duke Math. J._ 129.3, 2005, pp. 573–610 DOI: 10.1215/S0012-7094-05-12935-0
* [Mes72] William Messing “The crystals associated to Barsotti-Tate groups: with applications to abelian schemes”, Lecture Notes in Mathematics, Vol. 264 Springer-Verlag, Berlin-New York, 1972
* [Mil05] J. S. Milne “Introduction to Shimura varieties” In _Harmonic analysis, the trace formula, and Shimura varieties. Proceedings of the Clay Mathematics Institute 2003 summer school, Toronto, Canada, June 2–27, 2003_ Providence, RI: American Mathematical Society (AMS), 2005, pp. 265–378
* [Oor04] Frans Oort “Foliations in moduli spaces of abelian varieties” In _J. Amer. Math. Soc._ 17.2, 2004, pp. 267–296 DOI: 10.1090/S0894-0347-04-00449-7
* [Oor09] Frans Oort “Foliations in moduli spaces of abelian varieties and dimension of leaves” In _Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Vol. II_ 270, Progr. Math. Birkhäuser Boston, Inc., Boston, MA, 2009, pp. 465–501 DOI: 10.1007/978-0-8176-4747-6_15
* [OZ02] Frans Oort and Thomas Zink “Families of $p$-divisible groups with constant Newton polygon” In _Doc. Math._ 7, 2002, pp. 183–201
* [Pin90] Richard Pink “Arithmetical compactification of mixed Shimura varieties” 209, Bonner Mathematische Schriften [Bonn Mathematical Publications] Universität Bonn, Mathematisches Institut, 1990
* [PRS13] Georgios Pappas, Michael Rapoport and Brian Smithling “Local models of Shimura varieties, I. Geometry and combinatorics” In _Handbook of moduli. Vol. III_ 26, Adv. Lect. Math. (ALM) Int. Press, Somerville, MA, 2013, pp. 135–217
* [PZ13] Georgios Pappas and Xinwen Zhu “Local models of Shimura varieties and a conjecture of Kottwitz” In _Invent. Math._ 194.1 Springer, Berlin/Heidelberg, 2013, pp. 147–254 DOI: 10.1007/s00222-012-0442-z
* [Rap05] Michael Rapoport “A guide to the reduction modulo $p$ of Shimura varieties” In _Formes automorphes (I). Actes du Semestre du Centre Émile Borel, Paris, France, 17 février au 11 juillet 2000_ Paris: Société Mathématique de France, 2005, pp. 271–318
* [RV14] Michael Rapoport and Eva Viehmann “Towards a theory of local Shimura varieties” In _Münster J. Math._ 7.1 Universität Münster, Mathematical Institutes, Münster, 2014, pp. 273–326
* [RZ96] M. Rapoport and Th. Zink “Period spaces for $p$-divisible groups” 141, Annals of Mathematics Studies Princeton University Press, Princeton, NJ, 1996 DOI: 10.1515/9781400882601
* [SGA3] “Schémas en groupes”, Séminaire de Géométrie Algébrique du Bois Marie 1962/64 (SGA 3). Dirigé par M. Demazure et A. Grothendieck. Lecture Notes in Mathematics, Vols. 151–153 Springer-Verlag, Berlin-New York, 1970
* [Stacks] The Stacks Project Authors “Stacks Project”, http://stacks.math.columbia.edu, 2020
* [SW13] Peter Scholze and Jared Weinstein “Moduli of $p$-divisible groups” In _Camb. J. Math._ 1.2 International Press of Boston, Somerville, MA, 2013, pp.
145–237 DOI: 10.4310/CJM.2013.v1.n2.a1 * [Vas08] Adrian Vasiu “Level $m$ stratifications of versal deformations of $p$-divisible groups” In _J. Algebraic Geom._ 17.4, 2008, pp. 599–641 DOI: 10.1090/S1056-3911-08-00495-5 * [Wor13] D. Wortmann “The $\mu$-ordinary locus for Shimura varieties of Hodge type” Preprint, 2013 arXiv:1310.6444v1 [math.AG] * [Zha15] C. Zhang “Stratifications and foliations for good reductions of Shimura varieties of Hodge type” Preprint, 2015 arXiv:1512.08102v1 [math.AG] * [Zho18] R. Zhou “Mod-$p$ isogeny classes on Shimura varieties with parahoric level structure” Preprint, 2018 arXiv:1707.09685v2 [math.NT] * [Zhu17] Xinwen Zhu “Affine Grassmannians and the geometric Satake in mixed characteristic.” In _Ann. Math. (2)_ 185.2 Princeton University, Mathematics Department, Princeton, NJ; Mathematical Sciences Publishers (MSP), Berkeley, CA, 2017, pp. 403–492 * [Zin01] Thomas Zink “A Dieudonné theory for $p$-divisible groups.” In _Class field theory – its centenary and prospect. Proceedings of the 7th MSJ International Research Institute of the Mathematical Society of Japan, Tokyo, Japan, June 3–12, 1998_ Tokyo: Mathematical Society of Japan, 2001, pp. 139–160 * [Zin01a] Thomas Zink “On the slope filtration” In _Duke Math. J._ 109.1, 2001, pp. 79–95 DOI: 10.1215/S0012-7094-01-10913-7 * [Zin02] Thomas Zink “The display of a formal $p$-divisible group.” In _Cohomologies $p$-adiques et applications arithmétiques (I)_ Paris: Société Mathématique de France, 2002, pp. 127–248
(Anti-)Deuteron production in pp collisions at $\sqrt{s}=13$ TeV

ALICE Collaboration (see Appendix A for the list of collaboration members)

Abstract: The study of (anti-)deuteron production in pp collisions has proven to be a powerful tool to investigate the formation mechanism of loosely bound states in high energy hadronic collisions. In this paper the production of (anti-)deuterons is studied as a function of the charged particle multiplicity in inelastic pp collisions at $\sqrt{s}=13$ TeV using the ALICE experiment. Thanks to the large number of accumulated minimum bias events, it has been possible to measure (anti-)deuteron production in pp collisions up to the same charged particle multiplicity ($\mathrm{d}N_{ch}/\mathrm{d}\eta\sim 26$) as measured in p–Pb collisions at similar centre-of-mass energies. Within the uncertainties, the deuteron yield in pp collisions resembles the one in p–Pb interactions, suggesting a common formation mechanism behind the production of light nuclei in hadronic interactions. In this context the measurements are compared with the expectations of coalescence and Statistical Hadronisation Models (SHM).

## 1 Introduction

High energy collisions at the Large Hadron Collider (LHC) create a suitable environment for the production of light (anti-)nuclei. In ultra-relativistic heavy-ion collisions light (anti-)nuclei are abundantly produced [1, 2, 3], but in elementary pp collisions their production is lower [4, 5, 1, 6]. As a consequence, only a few detailed measurements of the (anti-)nuclei production rate in pp collisions exist. However, with the recently collected large data sample it is now possible to perform more differential measurements of light (anti-)nuclei production as a function of multiplicity and transverse momentum. In this paper, we present a detailed study of the multiplicity dependence of (anti-)deuteron production in pp collisions at $\sqrt{s}=13$ TeV, the highest collision energy delivered at the LHC so far.

The production mechanism of light (anti-)nuclei in high energy hadronic collisions is not completely understood. However, two groups of models have turned out to be particularly useful, namely Statistical Hadronisation Models (SHM) and coalescence models. The SHMs, which assume particle production according to the thermal equilibrium expectation, have been very successful in explaining the yields of light (anti-)nuclei along with other hadrons in Pb–Pb collisions [7], suggesting a common chemical freeze-out temperature for light (anti-)nuclei and other hadron species. The ratio between the $p_{\mathrm{T}}$-integrated yields of deuterons and protons (d/p ratio) in Pb–Pb collisions remains constant as a function of centrality, but rises in pp and p–Pb collisions with increasing multiplicity, finally reaching the value observed in Pb–Pb [1, 8, 9]. The constant d/p ratio in Pb–Pb collisions as a function of centrality is consistent with thermal production, suggesting that the chemical freeze-out temperature in Pb–Pb collisions does not vary with centrality [10]. Assuming thermal production in pp collisions as well, the lower d/p ratio would indicate a lower freeze-out temperature [10]. On the other hand, the ratio between the $p_{\mathrm{T}}$-integrated yields of protons and pions (p/$\pi$ ratio) does not show a significant difference between pp and Pb–Pb collisions [11, 12].
Also, for p–Pb collisions the freeze-out temperature obtained with SHMs using only light-flavoured particles is constant with multiplicity, and its value is similar to that obtained in Pb–Pb collisions [13]. Thus, the increase of the d/p ratio with multiplicity for smaller systems cannot be explained within the scope of the grand-canonical SHM, as is done in the case of Pb–Pb. It is also not consistent with a simple SHM that the d/p and p/$\pi$ ratios behave differently as a function of multiplicity, even though numerator and denominator differ in both cases by one unit of baryon number. Nonetheless, a process similar to the canonical suppression of strange particles might be worth considering also for baryons. A recent calculation within the SHM approach with exact conservation of baryon number, electric charge, and strangeness focuses on this aspect [14].

In coalescence models (anti-)nuclei are formed by nucleons close in phase-space [15]. In this approach, the coalescence parameter $B_{2}$ quantitatively describes the production of (anti-)deuterons. $B_{2}$ is defined as

$B_{2}\left(p_{\mathrm{T}}^{p}\right)=E_{d}\frac{\mathrm{d}^{3}N_{d}}{\mathrm{d}p_{d}^{3}}\bigg{/}\left(E_{p}\frac{\mathrm{d}^{3}N_{p}}{\mathrm{d}p_{p}^{3}}\right)^{2}=\frac{1}{2\pi p_{\mathrm{T}}^{d}}\frac{\mathrm{d}^{2}N_{d}}{\mathrm{d}y\mathrm{d}p^{d}_{\mathrm{T}}}\;\bigg{/}\left(\frac{1}{2\pi p_{\mathrm{T}}^{p}}\frac{\mathrm{d}^{2}N_{p}}{\mathrm{d}y\mathrm{d}p_{\mathrm{T}}^{p}}\right)^{2},$ (1)

where $E$ is the energy, $p$ is the momentum, $p_{\mathrm{T}}$ is the transverse momentum and $y$ is the rapidity. The labels $p$ and $d$ denote properties of protons and deuterons, respectively. The invariant spectra of the (anti-)protons are evaluated at half of the transverse momentum of the deuterons, so that $p_{\mathrm{T}}^{p}=p_{\mathrm{T}}^{d}/2$. Neutron spectra are assumed to be equivalent to proton spectra, since neutrons and protons belong to the same isospin doublet. Since the coalescence process is expected to occur at the late stage of the collision, the parameter $B_{2}$ is related to the emission volume. In a simple coalescence approach, which describes uncorrelated particle emission from a point-like source, $B_{2}$ is expected to be independent of $p_{\mathrm{T}}$ and multiplicity. However, it has been observed that $B_{2}$ at a given transverse momentum decreases as a function of multiplicity, suggesting that the nuclear emission volume increases with multiplicity [2, 16, 9]. In Pb–Pb collisions the $B_{2}$ parameter as a function of $p_{\mathrm{T}}$ shows an increasing trend, which is usually attributed to position-momentum correlations caused by radial flow or hard scatterings [17, 18]. Such an increase of $B_{2}$ as a function of $p_{\mathrm{T}}$ has in fact also been observed in pp collisions at $\sqrt{s}=7$ TeV [6]. However, if pp collisions are studied in separate intervals of multiplicity, $B_{2}$ is found to be almost constant as a function of $p_{\mathrm{T}}$ [8]. Similarly, $B_{2}$ does not depend on $p_{\mathrm{T}}$ in multiplicity-selected p–Pb collisions [9]. Moreover, the highest multiplicities reached in pp collisions are comparable with those obtained in p–Pb collisions and not too far from peripheral Pb–Pb collisions. Therefore, the measurement of $B_{2}$ as a function of $p_{\mathrm{T}}$ in finer multiplicity intervals in pp collisions at $\sqrt{s}=13$ TeV gives the opportunity to compare different collision systems and to evaluate the dependence on the system size.
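As a purely illustrative aside (not part of the original analysis), Eq. (1) is straightforward to evaluate numerically. The minimal Python sketch below, with invented placeholder spectra rather than ALICE data, computes $B_{2}$ from binned invariant spectra, interpolating the proton spectrum at $p_{\mathrm{T}}^{p}=p_{\mathrm{T}}^{d}/2$ as in the text:

```python
import numpy as np

def coalescence_b2(pt_d, spec_d, pt_p, spec_p):
    """Evaluate Eq. (1): B2 = (deuteron invariant yield) / (proton invariant yield)^2,
    with the proton spectrum interpolated at pT^p = pT^d / 2.
    Spectra are 1/(2 pi pT) d2N/(dy dpT)."""
    spec_p_at_half = np.interp(pt_d / 2.0, pt_p, spec_p)
    return spec_d / spec_p_at_half**2

# Invented numbers for illustration only (not measured values):
pt_d   = np.array([1.0, 1.5, 2.0])       # deuteron pT (GeV/c)
spec_d = np.array([4e-4, 1.5e-4, 5e-5])  # deuteron invariant yield
pt_p   = np.array([0.5, 0.75, 1.0])      # proton pT (GeV/c)
spec_p = np.array([0.12, 0.07, 0.04])    # proton invariant yield

print(coalescence_b2(pt_d, spec_d, pt_p, spec_p))
```

Note that the proton yield enters squared because, as stated above, the neutron spectrum is assumed identical to the proton spectrum.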
The paper is organized as follows. Section 2 discusses the details of the ALICE detector. Section 3 describes the data sample used for the analysis and the corresponding event and track selection criteria. Section 4 presents the data analysis steps in detail, such as raw yield extraction and various corrections, as well as the systematic uncertainty estimation. In Section 5, the results are presented and discussed. Finally, conclusions are given in Section 6. ## 2 The ALICE detector A detailed description of the ALICE detectors can be found in [19] and references therein. For the present analysis the main sub-detectors used are the V0, the Inner Tracking System (ITS), the Time Projection Chamber (TPC) and the Time-of-Flight (TOF), which are all located inside a 0.5 T solenoidal magnetic field. The V0 detector [20] is formed by two arrays of scintillation counters placed around the beampipe on either side of the interaction point: one covering the pseudorapidity range $2.8<\eta<5.1$ (V0A) and the other one covering $-3.7<\eta<-1.7$ (V0C). The collision multiplicity is estimated using the counts in the V0 detector, which is also used as trigger detector. More details will be given in Section 3. The ITS [21], designed to provide high resolution track points in the proximity of the interaction region, is composed of three subsystems of silicon detectors placed around the interaction region with a cylindrical symmetry. The Silicon Pixel Detector (SPD) is the subsystem closest to the beampipe and is made of two layers of pixel detectors. The third and the fourth layers consist of Silicon Drift Detectors (SDD), while the outermost two layers are equipped with double-sided Silicon Strip Detectors (SSD). The inner radius of the SPD, 3.9 cm, is essentially given by the radius of the beam pipe, while the inner field cage of the TPC limits the radial span of the entire ITS to be 43 cm. The ITS covers the pseudorapidity range $|\eta|<0.9$ and it is hermetic in azimuth. The same pseudorapidity range is covered by the TPC [22], which is the main tracking detector, consisting of a hollow cylinder whose axis coincides with the nominal beam axis. The active volume, filled with a Ne/CO2/N2 gas mixture (Ar/CO2/N2 in 2016), at atmospheric pressure, has an inner radius of about 85 cm, an outer radius of about 250 cm, and an overall length along the beam direction of 500 cm. The gas is ionised by charged particles traversing the detector and the ionisation electrons drift, under the influence of a constant electric field of $\sim$ 400 V/cm, towards the endplates, where their position and arrival time are measured. The trajectory of a charged particle is estimated using up to 159 combined measurements (clusters) of drift times and radial positions of the ionisation electrons. The charged-particle tracks are then formed by combining the hits in the ITS and the reconstructed clusters in the TPC. The TPC is used for particle identification by measuring the specific energy loss ($\mathrm{d}E/\mathrm{d}x$) in the TPC gas. The TOF system [23] covers the full azimuth for the pseudorapidity interval $|\eta|<0.9$. The detector is based on the Multi-gap Resistive Plate Chambers (MRPCs) technology and it is located, with a cylindrical symmetry, at an average distance of 380 cm from the beam axis. The particle identification is based on the difference between the measured time-of-flight and its expected value, computed for each mass hypothesis from track momentum and length. 
The overall resolution on the time-of-flight of particles is about 80 ps. A precise starting signal for the TOF system can also be provided by the T0 detector, consisting of two arrays of Cherenkov counters, T0A and T0C, which cover the pseudorapidity regions $4.61<\eta<4.92$ and $-3.28<\eta<-2.97$, respectively [24]. Alternatively, the start time can be provided by the TOF itself, or the bunch-crossing time can be used, as described in [24].

## 3 Data sample

The data samples used in this work consist of approximately 950 million minimum bias pp events collected during the LHC proton runs in 2016 and 2017. The data were collected using a minimum-bias trigger requiring at least one hit in both V0 detectors. Moreover, the timing information of the V0 scintillators is used for the offline rejection of events triggered by interactions of the beam with the residual gas in the LHC vacuum pipe. To ensure the best possible performance of the detector, events with more than one reconstructed primary interaction vertex (pile-up events) were rejected.

The production of primary (anti-)deuterons is measured around mid-rapidity. In particular, the spectra are provided within a rapidity window of $|y|<0.5$. To ensure that all tracks have the maximal length, only those in the pseudorapidity interval $|\eta|<0.8$ are selected. In order to guarantee good track momentum and $\mathrm{d}E/\mathrm{d}x$ resolution in the relevant $p_{\mathrm{T}}$ ranges, the selected tracks are required to have at least 70 reconstructed points in the TPC and two points in the ITS. In addition, at least one of the ITS points has to be measured by the SPD in order to ensure, for the selected tracks, a resolution better than 300 $\mu$m on the distance of closest approach to the primary vertex in the plane perpendicular (DCAxy) and parallel (DCAz) to the beam axis [19]. Furthermore, it is required that the $\chi^{2}$ per TPC reconstructed point is less than 4, and tracks originating from kink topologies of weak decays are rejected.

Data are divided into ten multiplicity classes, labelled by Roman numerals from I to X, going from the highest to the lowest multiplicity. However, in this analysis classes IV and V are merged into a single class to achieve a better statistical precision. The multiplicity classes are determined from the sum of the V0 signal amplitudes and defined in terms of percentiles of the INEL$>0$ pp cross section, where INEL$>$0 events are defined as collisions with at least one charged particle in the pseudorapidity region $|\eta|<1$ [25]. The mean charged particle multiplicity $\left<\mathrm{d}N_{ch}/\mathrm{d}\eta\right>$ for each class is reported in Table 2.

## 4 Data analysis

### 4.1 Raw yield extraction

The identification of (anti-)deuterons is performed with two different methods, depending on their transverse momentum. For $p_{\mathrm{T}}<1$ GeV/c, the identification is done using a measurement of the $\mathrm{d}E/\mathrm{d}x$ in the TPC only. In particular, for each $p_{\mathrm{T}}$ interval the number of (anti-)deuterons is extracted through a fit to the $n_{\sigma}$ distribution with a Gaussian with two exponential tails. Here, $n_{\sigma}$ is the difference between the measured TPC $\mathrm{d}E/\mathrm{d}x$ and the expected one for (anti-)deuterons, divided by the TPC $\mathrm{d}E/\mathrm{d}x$ resolution. However, for $p_{\mathrm{T}}\geq 1$ GeV/c it is more difficult to separate (anti-)deuterons from other charged particles with this technique.
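A minimal sketch of such an $n_{\sigma}$-based selection is shown below; the constant relative d$E$/d$x$ resolution and all numerical values are illustrative assumptions, not the actual ALICE calibration:

```python
def n_sigma_tpc(dedx_measured, dedx_expected, rel_resolution=0.07):
    """n_sigma as defined in the text: the difference between the measured
    and expected dE/dx, divided by the dE/dx resolution (here modelled,
    as a simplifying assumption, by a constant relative resolution)."""
    sigma = rel_resolution * dedx_expected
    return (dedx_measured - dedx_expected) / sigma

# Hypothetical track: measured dE/dx compared with the deuteron expectation
dedx_measured = 310.0            # arbitrary units
dedx_expected_deuteron = 295.0   # expected dE/dx for a deuteron at this momentum
ns = n_sigma_tpc(dedx_measured, dedx_expected_deuteron)
print(ns, abs(ns) < 3.0)         # value and 3-sigma compatibility
```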
For these higher transverse momenta, the particle identification is therefore performed using the TOF detector. The squared mass of the particle is computed as $m^{2}=p^{2}\left(t_{\mathrm{TOF}}^{2}/L^{2}-1/c^{2}\right)$, where $t_{\mathrm{TOF}}$ is the measured time-of-flight, $L$ is the length of the track and $p$ is the momentum of the particle. In order to reduce the background, only the candidates with a d$E$/d$x$ measured in the TPC compatible within 3$\sigma$ with the expected value for an (anti-)deuteron are selected. The squared-mass distributions are fitted with a Gaussian function with an exponential tail for the signal. A significant background is present for $p_{\mathrm{T}}\geq 1.8$ GeV/c and is modelled with two exponential functions. In the range where the background is negligible, the raw yield is extracted by directly counting the candidates. Otherwise, the squared-mass distribution is fitted with the described model, using an extended-maximum-likelihood approach. The (anti-)deuteron yield is then obtained from the corresponding fit parameter.

### 4.2 Efficiency and acceptance correction

A correction for the tracking efficiency and the detector acceptance must be applied to obtain the true yield. The correction is evaluated from Monte Carlo (MC) simulated events. The events are generated using the standard generator PYTHIA8 (Monash 2013) [26]. However, PYTHIA8 does not handle the production of nuclei; therefore, it is necessary to inject (anti-)deuterons into the simulated events. In each pp collision one deuteron or one anti-deuteron is injected, randomly chosen from a flat rapidity distribution in the range $|y|<1$ and a flat $p_{\mathrm{T}}$ distribution in the range $p_{\mathrm{T}}\in[0,10]$ GeV/c. The correction is defined as the ratio between the number of reconstructed (anti-)deuterons in the rapidity range $|y|<0.5$ and in the pseudorapidity interval $|\eta|<0.8$, and the number of generated ones in $|y|<0.5$. The correction is computed separately for deuterons and anti-deuterons and for the TPC and TOF analyses.

Another correction is related to the trigger efficiency. All the selected events are required to have at least one charged particle in the acceptance, i.e. in the pseudorapidity region $|\eta|<1$ (INEL$>$0) [25]. Due to the imperfection of the trigger, some INEL$>$0 events are wrongly rejected (event loss). Consequently, all the (anti-)deuterons produced in the erroneously rejected events are lost as well (signal loss). Therefore, it is necessary to correct the spectra for the event and the signal losses. Event loss is more relevant at low multiplicity and almost negligible at high multiplicity ($\sim$ 12% for multiplicity class X and $<$ 1‰ for multiplicity class I). The corrections are computed from MC simulations, because both the number of rejected events and the number of (anti-)deuterons produced in those same events are known. However, it is not possible to count the number of lost (anti-)deuterons directly, because the artificial injection of one (anti-)deuteron per event would bias the number of lost candidates that can be extracted from this MC data set. Instead, the numbers of lost pions, kaons and protons are extracted from a different MC data set and these values are then extrapolated to the deuteron mass. The standard transport code used in ALICE simulations is GEANT3. However, it is known from other ALICE analyses on nuclei that GEANT4 provides a more realistic transport of (anti-)nuclei. The GEANT3 response is hence scaled to the GEANT4 one to take this effect into account.
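Schematically, and under the simplifying assumption that the corrections described in this section factorise, the correction chain for a single $p_{\mathrm{T}}$ bin could be sketched as follows (every number is a placeholder, not a measured value):

```python
# Placeholder inputs for one multiplicity class and one pT bin:
raw_yield     = 1.2e3   # raw (anti-)deuteron counts in the bin
acc_times_eff = 0.55    # acceptance x tracking efficiency from MC
signal_kept   = 0.97    # fraction of the signal surviving the trigger
events_kept   = 0.99    # fraction of INEL>0 events kept by the trigger
n_events      = 5.0e7   # number of selected events in the class
dpt, dy       = 0.1, 1.0  # bin widths in pT (GeV/c) and rapidity

# Correct the event count for event loss, then the yield for efficiency,
# acceptance and signal loss, and normalise per event and per bin width:
n_events_corrected = n_events / events_kept
d2n_dy_dpt = raw_yield / (acc_times_eff * signal_kept
                          * n_events_corrected * dpt * dy)
print(d2n_dy_dpt)
```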
Moreover, the spectra obtained with TOF are further corrected for the TPC-TOF matching efficiency using a data-driven approach. This correction was evaluated for the analysis of the (anti-)deuteron production in the p–Pb data sample collected in 2013 [9]. In that year, not all modules of the Transition Radiation Detector (TRD), which is located between the TPC and the TOF, had been installed yet. It was therefore possible to quantify the effect of the presence of the TRD by comparing the (anti-)deuteron yields in the regions where the TRD modules were present with those in the regions where they were not yet installed. This correction was also verified with Run 2 data, by comparing the yields extracted with the TPC with those extracted with the TOF in the $p_{\mathrm{T}}$ region where both techniques can be used.

### 4.3 Subtraction of secondary deuterons

Secondary deuterons are produced in the interaction of particles with the detector material, and their contribution must be subtracted from the total measured deuteron yield. The production of secondary anti-deuterons, however, is extremely rare due to baryon number conservation. Hence, the correction is applied only to the deuteron spectra. The fraction of primary deuterons is evaluated via a fit to the DCAxy distribution of the data, as described in [1]. The template for primary deuterons is obtained from the measured DCAxy of anti-deuterons. The template for secondary deuterons is instead obtained from MC simulations. The production of secondary deuterons is more relevant at low $p_{\mathrm{T}}$ (at $p_{\mathrm{T}}=0.7$ GeV/c the fraction of secondary deuterons is $\sim$ 40%) and decreases exponentially with the transverse momentum ($<$ 5% for $p_{\mathrm{T}}=1.4$ GeV/c). The only other known possible contribution to secondary deuterons is the decay ${}^{3}_{\Lambda}\mathrm{H}\rightarrow\mathrm{d}+\mathrm{p}+\pi^{-}$. However, ${}^{3}_{\Lambda}\mathrm{H}$ production has not yet been observed in pp collisions, and its production yield is therefore lower than that of $^{3}$He, which is less than a thousandth of the deuteron production rate [6].

### 4.4 Systematic uncertainties

A list of all the sources of systematic uncertainty is shown in Table 1. The values are reported for the multiplicity classes I and X, for the lowest and highest $p_{\mathrm{T}}$ values. The track selection criteria are a source of systematic uncertainty. In this category we include all the contributions related to the single-track selection: DCA, number of clusters in the TPC and, for the TOF analysis, the width of the $\mathrm{d}E/\mathrm{d}x$ selection applied in the TPC. These uncertainties are evaluated by varying the relevant selections, as done in [8]. At low $p_{\mathrm{T}}$ ($p_{\mathrm{T}}<1$ GeV/c) the contribution is 2% for deuterons, due to the DCAz and DCAxy selections, which influence the estimation of the fraction of primary deuterons, while for anti-deuterons this systematic uncertainty is around 1%. It increases with $p_{\mathrm{T}}$, and the growth is more pronounced at low multiplicity. The systematic uncertainty on the signal extraction is evaluated by directly counting the (anti-)deuteron candidates; it is obtained by varying the interval in which the direct counting is performed. Its contribution is $\sim$ 1% at low $p_{\mathrm{T}}$ and increases with $p_{\mathrm{T}}$. Another source of systematic uncertainty is given by the incomplete knowledge of the material budget of the detector in the Monte Carlo simulations.
The effect is evaluated by comparing different MC simulations in which the material budget was increased and decreased by 4.5%. This value corresponds to the uncertainty on the determination of the material budget by measuring photon conversions. This particular systematic uncertainty is below 1%. The imperfect knowledge of the hadronic interaction cross section of (anti-)deuterons with the material contributes to the systematic uncertainty as well. Its effect is evaluated with the same data-driven approach used to investigate the TOF matching efficiency, as described in section 4.2. Half of the correction, corresponding to the 1$\sigma$ confidence interval, is taken as its uncertainty, contributing 4% to the systematic uncertainty for deuterons and 7.5% for anti-deuterons. Similarly, an uncertainty related to the ITS-TPC matching is considered. It is evaluated from the difference between the ITS-TPC matching efficiencies in data and MC, and its contribution is less than 2.5%. Finally, a source of systematic uncertainty results from the signal loss correction. It is assumed to be half of the difference between the signal-loss correction (described in section 4.2) and 1. It is strongly dependent on the event multiplicity: it is negligible at high multiplicity (multiplicity classes from I to VII) and contributes up to 6% in the lowest multiplicity class (class X). Where present, it decreases with $p_{\mathrm{T}}$.

Table 1: Summary of the main contributions to the systematic uncertainties for the extreme multiplicity classes I and X. Values in brackets refer to anti-deuterons; where no bracketed value is given, the uncertainty is common to deuterons and anti-deuterons. More details about the sources of the uncertainties can be found in the text.

Source | Class I, $p_{\mathrm{T}}=0.7$ GeV/$c$ | Class I, $p_{\mathrm{T}}=3.8$ GeV/$c$ | Class X, $p_{\mathrm{T}}=0.7$ GeV/$c$ | Class X, $p_{\mathrm{T}}=2.6$ GeV/$c$
---|---|---|---|---
Track selection | 2% (1%) | 2% (3%) | 2% (1%) | 5% (6%)
Signal extraction | 1% | 7% (7%) | 1% | 5% (5%)
Material budget | $<1\%$ | $<1\%$ | $<1\%$ | $<1\%$
TPC-TOF matching | 4% (7.5%) | 4% (7.5%) | 4% (7.5%) | 4% (7.5%)
ITS-TPC matching | 1% | 2.5% | 1% | 2.5%
Signal loss | - | - | 6% | 3%
Total | 5% (8%) | 9% (11%) | 8% (10%) | 10% (12%)

Figure 1: Transverse-momentum spectra of deuterons (top) and anti-deuterons (bottom) measured in pp collisions at $\sqrt{s}=13$ TeV in different multiplicity classes (circles) and in INEL$>$0 events (squares). The mean charged-particle multiplicities for classes I and X are reported in the figures, and all the values for the multiplicity classes can be found in Table 2. For the analyses in multiplicity classes, the multiplicity increases moving from the bottom of the figure upwards. The statistical uncertainties are represented by vertical bars while the systematic uncertainties are represented by boxes. The dashed lines are individual fits with a Lévy-Tsallis function [27].

## 5 Results and Discussion

The transverse momentum spectra of deuterons and anti-deuterons in different multiplicity classes as well as in INEL$>$0 pp collisions are reported in Figure 1. The spectra normalised to inelastic pp collisions (INEL) are included in the data provided with this paper. The mean charged-particle multiplicity $\left<\mathrm{d}N_{ch}/\mathrm{d}\eta\right>$ for each class is reported in Table 2.
The spectra exhibit a slight hardening with increasing multiplicity: the slope of the spectra becomes less steep and the mean transverse momentum $\left<p_{\mathrm{T}}\right>$ moves towards higher values. This effect is similar to that observed in Pb–Pb collisions, where it is explained by radial flow increasing with centrality [1, 28]. However, in pp collisions the hardening is less pronounced. The ratio between the spectra of anti-deuterons and deuterons for all the multiplicity classes under study is reported in Figure 2. The ratio is compatible with unity, within uncertainties, in all multiplicity classes.

To calculate the integrated yield ($\mathrm{d}N/\mathrm{d}y$) and the mean $p_{\mathrm{T}}$, the spectra have been fitted with the Lévy-Tsallis function [27, 29, 30]:

$\frac{\mathrm{d}^{2}N}{\mathrm{d}y\,\mathrm{d}p_{\mathrm{T}}}=\frac{\mathrm{d}N}{\mathrm{d}y}\frac{p_{\mathrm{T}}\left(n-1\right)\left(n-2\right)}{nC[nC+m\left(n-2\right)]}\left(1+\frac{m_{\mathrm{T}}-m}{nC}\right)^{-n},$ (2)

where $m$ is the particle rest mass (i.e. the mass of the deuteron), $m_{\mathrm{T}}=\sqrt{m^{2}+p_{\mathrm{T}}^{2}}$ is the transverse mass, while $n$, $\mathrm{d}N/\mathrm{d}y$ and $C$ are free fit parameters. The Lévy-Tsallis function is used to extrapolate the spectra in the unmeasured regions of $p_{\mathrm{T}}$. One contribution to the systematic uncertainty is obtained by shifting the data points to the upper border of their systematic uncertainty and to the corresponding lower border. The difference between these values and the reference one is taken as an uncertainty, which amounts to $\sim$ 11%. Another contribution to the systematic uncertainty is estimated by using alternative fit functions, such as simple exponentials depending on $p_{\mathrm{T}}$ and $m_{\mathrm{T}}$ as well as a Boltzmann function, and is found to be $\sim$ 3%. The two contributions are summed in quadrature. The extrapolation amounts to 25% of the total yield in the highest multiplicity class, where the widest $p_{\mathrm{T}}$ range is measured, and increases up to 35% in the lowest multiplicity class. The statistical uncertainty on the integrated yield is obtained by moving the data points randomly within their statistical uncertainties, using a Gaussian probability distribution centered at the measured data point, with a standard deviation corresponding to the statistical uncertainty. In the unmeasured regions at low and high $p_{\mathrm{T}}$, the value of the fit function at a given $p_{\mathrm{T}}$ is considered. In this case the statistical uncertainty is estimated using a Monte Carlo method to propagate the uncertainties on the fit parameters. Following the same procedure, the $\langle p_{\mathrm{T}}\rangle$ and its statistical and systematic uncertainties are computed. The resulting mean $p_{\mathrm{T}}$ and $\mathrm{d}N/\mathrm{d}y$, as well as the parameters of the individual Lévy-Tsallis fits, are listed in Table 2.

Figure 2: Ratio between the transverse momentum spectra of anti-deuterons and deuterons in different multiplicity classes. The statistical uncertainties are represented by vertical bars while the systematic uncertainties are represented by boxes.

Table 2: Summary of the relevant information about the multiplicity classes and the fits to the measured transverse momentum spectra of anti-deuterons. $\langle\mathrm{d}N_{ch}/\mathrm{d}\eta\rangle$ is the mean pseudorapidity density of the primary charged particles [25].
$n$ and $C$ are the parameters of the Lévy-Tsallis fit function [27]. $\mathrm{d}N/\mathrm{d}y$ is the integrated yield, with statistical uncertainties, multiplicity-uncorrelated and multiplicity-correlated systematic uncertainties (see the text for details). $\langle p_{\mathrm{T}}\rangle$ is the mean transverse momentum.

Multiplicity | $\langle\mathrm{d}N_{ch}/\mathrm{d}\eta\rangle$ | $n$ | $C$ (GeV) | $\mathrm{d}N/\mathrm{d}y\left(\times 10^{-4}\right)$ | $\langle p_{\mathrm{T}}\rangle$ (GeV/c)
---|---|---|---|---|---
class I | 26.02 $\pm$ 0.35 | 7 $\pm$ 3 | 0.37 $\pm$ 0.05 | 16.0 $\pm$ 0.4 $\pm$ 0.5 $\pm$ 1.8 | 1.57 $\pm$ 0.08 $\pm$ 0.05 $\pm$ 0.03
II | 20.02 $\pm$ 0.27 | 7 $\pm$ 3 | 0.32 $\pm$ 0.04 | 12.2 $\pm$ 0.2 $\pm$ 0.4 $\pm$ 1.4 | 1.43 $\pm$ 0.04 $\pm$ 0.04 $\pm$ 0.02
III | 16.17 $\pm$ 0.22 | 6 $\pm$ 2 | 0.27 $\pm$ 0.03 | 9.4 $\pm$ 0.1 $\pm$ 0.3 $\pm$ 1.1 | 1.31 $\pm$ 0.03 $\pm$ 0.03 $\pm$ 0.04
IV + V | 12.91 $\pm$ 0.13 | 8 $\pm$ 3 | 0.27 $\pm$ 0.03 | 7.13 $\pm$ 0.08 $\pm$ 0.20 $\pm$ 0.79 | 1.21 $\pm$ 0.02 $\pm$ 0.01 $\pm$ 0.03
VI | 10.02 $\pm$ 0.14 | 7 $\pm$ 2 | 0.23 $\pm$ 0.03 | 5.34 $\pm$ 0.07 $\pm$ 0.20 $\pm$ 0.59 | 1.12 $\pm$ 0.02 $\pm$ 0.01 $\pm$ 0.03
VII | 7.95 $\pm$ 0.11 | 6 $\pm$ 2 | 0.19 $\pm$ 0.03 | 3.99 $\pm$ 0.07 $\pm$ 0.20 $\pm$ 0.44 | 1.06 $\pm$ 0.02 $\pm$ 0.01 $\pm$ 0.03
VIII | 6.32 $\pm$ 0.09 | 17 $\pm$ 13 | 0.23 $\pm$ 0.03 | 2.73 $\pm$ 0.04 $\pm$ 0.06 $\pm$ 0.30 | 0.98 $\pm$ 0.01 $\pm$ 0.01 $\pm$ 0.03
IX | 4.50 $\pm$ 0.07 | 10 $\pm$ 5 | 0.19 $\pm$ 0.03 | 1.64 $\pm$ 0.03 $\pm$ 0.06 $\pm$ 0.19 | 0.92 $\pm$ 0.01 $\pm$ 0.01 $\pm$ 0.03
X | 2.55 $\pm$ 0.04 | 10 $\pm$ 5 | 0.15 $\pm$ 0.02 | 0.59 $\pm$ 0.02 $\pm$ 0.04 $\pm$ 0.07 | 0.82 $\pm$ 0.01 $\pm$ 0.02 $\pm$ 0.02
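For reference, Eq. (2) can be evaluated directly. The sketch below implements the Lévy-Tsallis form with parameter values of the same order as the class I entries in Table 2; it is an illustration, not the fitting code used in the analysis:

```python
import numpy as np

M_DEUTERON = 1.8756  # deuteron mass in GeV/c^2

def levy_tsallis(pt, dndy, n, C, m=M_DEUTERON):
    """d2N/(dy dpT) according to Eq. (2)."""
    mt = np.sqrt(m * m + pt * pt)
    norm = dndy * pt * (n - 1.0) * (n - 2.0) / (n * C * (n * C + m * (n - 2.0)))
    return norm * (1.0 + (mt - m) / (n * C)) ** (-n)

# Parameters of the order of the class I values in Table 2:
pt = np.linspace(0.05, 6.0, 600)
spectrum = levy_tsallis(pt, dndy=16.0e-4, n=7.0, C=0.37)

# By construction, the pT integral approximately recovers dN/dy
# (up to the truncation of the pT range):
print((spectrum * (pt[1] - pt[0])).sum())
```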
The coalescence parameter as a function of the transverse momentum is shown in Figure 3. The transverse momentum spectra needed for the $B_{2}$ computation are taken from Ref. [31]. The $B_{2}$ values for INEL$>$0 collisions deviate significantly from the transverse-momentum-independent behaviour expected in the simplest implementation of the coalescence model. However, it has been shown [8] that the multiplicity-integrated coalescence parameter is distorted because deuterons are biased more towards higher multiplicity than protons, and consequently have harder $p_{\mathrm{T}}$ spectra than expected from inclusive protons. The coalescence parameter evaluated in fine multiplicity classes is consistent with a flat behaviour, in agreement with the expectation of the simple coalescence model.

Figure 3: Coalescence parameter $B_{2}$ for anti-deuterons for different multiplicity classes (circles) and for INEL$>$0 collisions (squares). For the analyses in multiplicity classes, the multiplicity decreases moving from the bottom of the figure upwards. The statistical uncertainties are represented by vertical bars while the systematic uncertainties are represented by boxes. $B_{2}$ is shown as a function of $p_{\mathrm{T}}/A$, with $A=2$ being the mass number of the deuteron.

The evolution of the coalescence parameter as a function of the charged particle multiplicity is sensitive to the production mechanism of deuterons. Recent formulations of the coalescence model [32, 33] implement an interplay between the size of the collision system and the size of the light nuclei produced via coalescence. Figure 4 shows how $B_{2}$, for a fixed transverse momentum interval, evolves in different systems as a function of the charged particle multiplicity. $B_{2}$ is shown at $p_{\mathrm{T}}/A=0.75$ GeV/$c$, a value measured in all the analyses; however, the trend is the same for other $p_{\mathrm{T}}$ values. The measurements are compared with the model descriptions detailed in [33]. The two descriptions use different parameterisations for the size of the source. Parameterisation A uses the ALICE measurements of system radii $R$ from HBT studies as a function of multiplicity [34]. These values are fitted with the function:

$R=a\;\langle\mathrm{d}N/\mathrm{d}\eta\rangle^{1/3}+b,$ (3)

where $a$ and $b$ are free parameters. In Parameterisation B the free parameters $a$ and $b$ in Eq. 3 are fixed to reproduce the $B_{2}$ of deuterons in Pb–Pb collisions at $\sqrt{s_{\mathrm{NN}}}=2.76$ TeV in the centrality class 0–10%. The first parameterisation (dashed red line) describes the measured $B_{2}$ in pp and p–Pb collisions well, while it overestimates the measurements in Pb–Pb collisions. However, as outlined by the authors in [33], a more refined parameterisation of the HBT radius evolution through different systems might reduce the observed discrepancy. The parameterisation of the source size fixed to the $B_{2}$ measurement in central Pb–Pb collisions already departs from the measurements in peripheral Pb–Pb collisions, and it underestimates the coalescence parameter for small colliding systems.

Figure 4: Coalescence parameter $B_{2}$ at $p_{\mathrm{T}}/A=0.75$ GeV/$c$ as a function of multiplicity in pp collisions at $\sqrt{s}=13$ TeV (anti-deuterons) and at $\sqrt{s}=7$ TeV [8] (average of deuterons and anti-deuterons), in p–Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV [9] (deuterons) and in Pb–Pb collisions at $\sqrt{s_{\mathrm{NN}}}=2.76$ TeV [1] (deuterons). The statistical uncertainties are represented by vertical bars while the systematic uncertainties are represented by boxes. The two lines are theoretical predictions based on two different parameterisations of the HBT radius; see text for details.

Figure 5 shows the ratio of the $p_{\mathrm{T}}$-integrated yields of deuterons and protons for different multiplicities in different collision systems and at different energies. The ratio increases monotonically with multiplicity for pp and p–Pb collisions and eventually saturates for Pb–Pb collisions. The experimental data are compared with a SHM prediction. In this implementation of the model, called the Canonical Statistical Model (CSM), exact conservation of baryon number ($B$), electric charge ($Q$), and strangeness ($S$) is enforced using the recently developed THERMAL-FIST package [14]. The calculations with the CSM are performed using 155 MeV for the chemical freeze-out temperature, $B$ = $Q$ = $S$ = 0 and two different values of the correlation volume, expressed in terms of rapidity units $\mathrm{d}V/\mathrm{d}y$ and corresponding to one and three units of rapidity, respectively. The model qualitatively reproduces the trend observed in data. This might suggest that for small collision systems the light (anti-)nuclei production could be canonically suppressed and that a canonical correlation volume might exist. The correlation volume required to describe the measurements is larger than one unit of rapidity. However, such a canonical suppression should also affect the p/$\pi$ ratio in a similar way, and this is not observed in the experimental measurements [11, 35].
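As an aside on the source-size input entering these comparisons, the sketch below shows how the two free parameters of Eq. (3) could be constrained by a simple least-squares fit. The $(\langle\mathrm{d}N/\mathrm{d}\eta\rangle, R)$ points are invented for illustration and are not the HBT radii of [34]:

```python
import numpy as np
from scipy.optimize import curve_fit

def source_radius(dndeta, a, b):
    """Eq. (3): R = a * <dN/deta>^(1/3) + b."""
    return a * np.cbrt(dndeta) + b

# Invented example points (multiplicity, radius in fm):
dndeta = np.array([3.0, 8.0, 16.0, 26.0])
radii  = np.array([1.1, 1.5, 1.9, 2.2])

(a, b), _ = curve_fit(source_radius, dndeta, radii)
print(a, b)                        # fitted parameters
print(source_radius(20.0, a, b))   # extrapolated radius at <dN/deta> = 20
```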
A full coalescence calculation, taking into account the interplay between the system size and the width of the wave function of the produced (anti-)deuterons, is also able to describe the measured trend of the d/p ratio [36], and it describes the data consistently better than the CSM for all system sizes.

Figure 5: Ratio between the $p_{\mathrm{T}}$-integrated yields of deuterons and protons (sum of protons and anti-protons) for different multiplicities in pp collisions at $\sqrt{s}=13$ TeV (anti-deuterons) and at $\sqrt{s}=7$ TeV [8] (deuterons), in p–Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV [9] (deuterons) and in Pb–Pb collisions at $\sqrt{s_{\mathrm{NN}}}=2.76$ TeV [1] (deuterons). The statistical uncertainties are represented by vertical bars while the systematic uncertainties are represented by boxes. The two black lines are the theoretical predictions of the Thermal-FIST statistical model [14] for two sizes of the correlation volume $V_{C}$, while the magenta line represents the expectation from a coalescence model [36].

## 6 Conclusions

The results on (anti-)deuteron production presented in this paper display a smooth evolution with multiplicity across different reaction systems, in agreement with the measurements of other light-flavoured hadrons. This suggests that a common physics process might be able to describe the production of nuclei in all hadronic collision systems. Coalescence and statistical hadronisation models are able to describe qualitatively the observed trends of the d/p ratio and of $B_{2}$ as a function of the charged particle multiplicity. However, with the precision of the current measurements it is not possible to distinguish which mechanism drives the (anti-)deuteron production. On the other hand, it is not clear whether the CSM would be able to describe simultaneously the d/p and the p/$\pi$ ratios with the same chemical freeze-out conditions. No substantial differences are seen in the dependence of nuclei production on the charged multiplicity in pp and p–Pb collisions, and with the Pb–Pb data sample collected in Run 2 it will also be possible to perform a direct comparison with peripheral Pb–Pb collisions. With the enhanced luminosity in Run 3, it will be possible to measure pp collisions with multiplicities similar to those observed in mid-central Pb–Pb collisions. It will be interesting to see whether ALICE can confirm this dependence when measuring nuclei production in pp and Pb–Pb collisions at the same multiplicity.

## Acknowledgements

The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex. The ALICE Collaboration gratefully acknowledges the resources and support provided by all Grid centres and the Worldwide LHC Computing Grid (WLCG) collaboration. The ALICE Collaboration acknowledges the following funding agencies for their support in building and running the ALICE detector: A. I.
Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation (ANSL), State Committee of Science and World Federation of Scientists (WFS), Armenia; Austrian Academy of Sciences, Austrian Science Fund (FWF): [M 2467-N36] and Nationalstiftung für Forschung, Technologie und Entwicklung, Austria; Ministry of Communications and High Technologies, National Nuclear Research Center, Azerbaijan; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Financiadora de Estudos e Projetos (Finep), Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Universidade Federal do Rio Grande do Sul (UFRGS), Brazil; Ministry of Education of China (MOEC) , Ministry of Science & Technology of China (MSTC) and National Natural Science Foundation of China (NSFC), China; Ministry of Science and Education and Croatian Science Foundation, Croatia; Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Cubaenergía, Cuba; Ministry of Education, Youth and Sports of the Czech Republic, Czech Republic; The Danish Council for Independent Research | Natural Sciences, the VILLUM FONDEN and Danish National Research Foundation (DNRF), Denmark; Helsinki Institute of Physics (HIP), Finland; Commissariat à l’Energie Atomique (CEA), Institut National de Physique Nucléaire et de Physique des Particules (IN2P3) and Centre National de la Recherche Scientifique (CNRS) and Région des Pays de la Loire, France; Bundesministerium für Bildung und Forschung (BMBF) and GSI Helmholtzzentrum für Schwerionenforschung GmbH, Germany; General Secretariat for Research and Technology, Ministry of Education, Research and Religions, Greece; National Research, Development and Innovation Office, Hungary; Department of Atomic Energy Government of India (DAE), Department of Science and Technology, Government of India (DST), University Grants Commission, Government of India (UGC) and Council of Scientific and Industrial Research (CSIR), India; Indonesian Institute of Science, Indonesia; Centro Fermi - Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi and Istituto Nazionale di Fisica Nucleare (INFN), Italy; Institute for Innovative Science and Technology , Nagasaki Institute of Applied Science (IIST), Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) and Japan Society for the Promotion of Science (JSPS) KAKENHI, Japan; Consejo Nacional de Ciencia (CONACYT) y Tecnología, through Fondo de Cooperación Internacional en Ciencia y Tecnología (FONCICYT) and Dirección General de Asuntos del Personal Academico (DGAPA), Mexico; Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), Netherlands; The Research Council of Norway, Norway; Commission on Science and Technology for Sustainable Development in the South (COMSATS), Pakistan; Pontificia Universidad Católica del Perú, Peru; Ministry of Science and Higher Education and National Science Centre, Poland; Korea Institute of Science and Technology Information and National Research Foundation of Korea (NRF), Republic of Korea; Ministry of Education and Scientific Research, Institute of Atomic Physics and Ministry of Research and Innovation and Institute of Atomic Physics, Romania; Joint Institute for Nuclear Research (JINR), Ministry of Education and Science of the Russian Federation, National Research Centre Kurchatov Institute, Russian Science Foundation and Russian Foundation for Basic Research, Russia; Ministry of Education, Science, Research and Sport of the Slovak Republic, Slovakia; National Research 
2024-09-04T02:54:57.876967
2020-03-06T13:52:46
2003.03206
{ "authors": "Yuanhang Zhang, Shuang Yang, Jingyun Xiao, Shiguang Shan, Xilin Chen", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26082", "submitter": "Yuanhang Zhang", "url": "https://arxiv.org/abs/2003.03206" }
arxiv-papers
# Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition

Yuanhang Zhang1,2, Shuang Yang1, Jingyun Xiao1,2, Shiguang Shan1,2, Xilin Chen1,2

1 Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
2 University of Chinese Academy of Sciences, Beijing 100049, China

This work was done by Yuanhang Zhang during his internship at the Institute of Computing Technology, Chinese Academy of Sciences.

###### Abstract

Recent advances in deep learning have heightened interest among researchers in the field of visual speech recognition (VSR). Currently, most existing methods equate VSR with automatic lip reading, which attempts to recognise speech by analysing lip motion. However, human experience and psychological studies suggest that we do not always fix our gaze on each other's lips during a face-to-face conversation, but rather scan the whole face repetitively. This inspires us to revisit a fundamental yet somewhat overlooked problem: can VSR models benefit from reading extraoral facial regions, i.e. beyond the lips? In this paper, we perform a comprehensive study to evaluate the effects of different facial regions with state-of-the-art VSR models, including the mouth, the whole face, the upper face, and even the cheeks. Experiments are conducted on both word-level and sentence-level benchmarks with different characteristics. We find that despite the complex variations of the data, incorporating information from extraoral facial regions, even the upper face, consistently benefits VSR performance. Furthermore, we introduce a simple yet effective method based on Cutout to learn more discriminative features for face-based VSR, hoping to maximise the utility of information encoded in different facial regions. Our experiments show clear improvements over existing state-of-the-art methods that use only the lip region as input, a result we believe will provide the VSR community with new and exciting insights.

## I Introduction

Visual speech recognition (VSR) is the task of recognising speech by analysing video sequences of people speaking. A robust VSR system has a variety of useful applications, such as silent speech interfaces [29], audio-visual speech recognition (AVSR) in noisy environments [1], face liveness detection, and so on. Its performance has progressed significantly over the past few years, thanks to several successful deep learning architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). It also benefits from the emergence of large-scale, in-the-wild audiovisual datasets, from which deep neural networks can automatically learn strong representations that outperform previous hand-crafted features.

Traditionally, the term "visual speech recognition" is used almost interchangeably with "lip reading" within the VSR community, since it is usually believed that lip shapes and lip motion contain almost all the information correlated with speech. The information in other facial regions is considered weak by default, and not helpful for VSR in practical use, due to the diversity of the speaker's pose and other variations in the facial region that are unrelated to speech production. Accordingly, as part of the dataset creation pipeline, almost all researchers crop regions-of-interest (RoIs) around the mouth after obtaining face bounding boxes and landmarks.
By observing only the cropped RoI, the model is expected to focus on fine-grained discrimination of "clean" motion signals within the most relevant lip region, and not be distracted by other parts of the face, whose utilities are less obvious. Some common practices for RoI selection in previous work are depicted in Fig. 1.

Figure 1: Common practices for RoI selection in VSR. To date, there is no clear consensus on a best practice, resulting in very different RoIs in different works (see Sec. II-A). Top row: examples of frames from talking face videos. Bottom row: some examples of cropped RoIs in prior work, from left to right: [3, 4, 9, 17, 27, 1, 35].

However, this convention of explicit mouth RoI cropping inevitably raises many questions. Firstly, lip motion is not the only visual signal we can rely on to decode speech. Research on the advantage of performing speechreading with the whole face has a long history [28, 6]. In particular, movements of articulatory muscles such as the orbicularis oris (which is near the lips) lead to skin movements, often reflected by the cheeks being pushed and pulled, as well as changes in the visibility of the nasolabial folds. The now widely adopted term "speechreading", used in place of "lip reading", implies precisely the contribution of extraoral facial regions, such as the tongue, teeth, and cheeks, to the speech perception task. Evidence from psychology studies and our experience in human communication also suggest that we do not in fact focus on the speaker's lips all the time throughout a conversation, even in very noisy environments. Instead, we scan different regions of the person's face periodically [31].

Secondly, if we do use a mouth RoI, there are many factors that need to be considered: how much of the face should the RoI cover? Will increased spatial context improve performance by supplying more information, or hinder performance by distracting the network with information unrelated to speech? Should we apply additional corrections to handle pose variation? Answers to these questions are not evident or straightforward, and there has been no consensus or universal guideline for choosing RoIs until now. In fact, RoIs are often chosen based on intuition and the researchers' experience. The choices can be very different for different datasets and different methods, making transfer learning and cross-dataset evaluation difficult. Meanwhile, this specialised RoI selection step separates VSR from other face-related tasks, hindering further joint analysis. Using mouth inputs alone makes VSR an isolated problem, while including other facial regions opens up the possibility of various research in affective analysis and visual speech understanding. For example, joint modeling of non-verbal behaviour and verbal behaviour has been shown to be beneficial for learning adaptive word representations [32].

Finally, data encountered in real-world applications may not have the same luxury that some academic datasets enjoy, where data is biased towards frontal or near-frontal views, and high-resolution mouth crops are easily obtainable. Models trained on such well-behaved, cropped data may not perform as well when presented with the various challenges in practice. This severely limits the potential usage scenarios of VSR systems.
Motivated by these observations, we conduct a comparative study on input RoI selection, to (a) quantitatively estimate the contribution of different facial regions to the VSR task, and (b) determine whether state-of-the-art VSR networks, when presented with complex in-the-wild data, still benefit from additional clues within extraoral regions. Previous studies have explored different RoIs in the lower face [17, 24], and demonstrated the importance of suitable RoI coverage and selection. However, these attempts are limited to the lower face, and to relatively small datasets with narrow vocabularies. For this study, we approach the problem with state-of-the-art deep VSR models trained on large-scale "in-the-wild" VSR datasets, which depict many real-world variations, such as pose, lighting, scale, background clutter, makeup, expression, different speaking manners, etc. This is a much fairer reflection of real-world scenarios. Besides, we propose Cutout as an effective approach to encourage the model to utilise all facial regions. This allows researchers to sidestep the ambiguous choice of selecting appropriate RoIs, enhances the visual features, and increases the model's robustness to mild occlusion within the mouth or other facial regions.

## II Related Work

### II-A Visual Speech Recognition

Visual speech recognition (VSR), commonly referred to as automatic lip reading, is a classical problem in computer vision and has received increased interest over recent years. The combination of deep learning approaches and large-scale audiovisual datasets has been highly successful, achieving remarkable word recognition rates and even surpassing human performance. The deep network approach has become increasingly common and mainstream [9, 4, 35]. These methods retrace the success of the CNN-LSTM-DNN (CLDNN) paradigm [22] in the automatic speech recognition (ASR) community. However, though many works have reported state-of-the-art results on multiple challenging datasets, few have investigated the influence of input RoI selection, which can be a nuisance due to variations in mouth shapes, face geometry, and pose, as well as the relatively unknown effect of spatial context and extraoral regions. Unlike face recognition, where face frontalisation is a recognised need for pose invariance and has been thoroughly investigated as an important part of the pipeline, VSR researchers tend to specify RoIs based on their own experience, and in a dataset-specific manner.

In some controlled-environment datasets, e.g. OuluVS [37] and GRID [11], the subjects remain relatively still while speaking. Lombard GRID [2] uses a head-mounted camera which the speakers face directly for the frontal view, essentially removing head motion entirely. The Lip Reading in the Wild dataset [9] is of short duration, with small or no face scale variation within clips, and loose registration is enforced by aligning nose centers. A fixed, mouth-centered (and sometimes affine-transformed) rectangular RoI is preferable on these datasets, because the faces are usually stable, frontal or near-frontal, and of uniform sizes. In contrast, OuluVS2 [3] is a multi-view dataset, and RoIs are processed separately for each camera viewpoint. Chung and Zisserman [10] are the first to investigate large-scale deep lip reading in profile, and use an extended bounding box covering the whole face to account for significant pose variations.
Yang et al. [35] determine the RoI size based on the distance between the nose and the mouth center and the width of the speaker's lips, which also effectively handles pose variations.

There have been some previous attempts to address the RoI selection problem explicitly. The most relevant work is [17], which performs experiments with rectangular lower-face regions of different spatial context and resolution in a connected-digits recognition task. [24] experiments with optical flow features in non-rectangular RoIs after removing head motion, and obtains better results than with rectangular RoIs. However, these works are limited by the amount of data used and by model capacity, and only investigate the utility of the lower face. We adopt face inputs, and mimic real-world scenarios by experimenting on large-scale, in-the-wild VSR benchmarks that are an order of magnitude larger than those used in the above two papers.

### II-B Human Perception Studies

Our intuition that we do not fix our gaze on the speaker's lip region when communicating with others is supported by a number of psychology studies on human gaze behaviour during visual and audio-visual speech perception [30, 31, 18]. It has been reported that human gaze patterns usually involve repetitive transitioning between the eyes and the mouth, even at high noise levels and when audio is absent [30]. Interestingly, [18] suggests that in a visual-only scenario, speechreading accuracy was related to the difficulty of the presented sentences and individual proficiency, but not to the proportion of gaze time at a specific part of the face. The role of the upper face, which is less intuitive for VSR, has also been studied [13]. Studies in audiovisual speech perception show that head and eyebrow motion can help discriminate prosodic contrasts [12], which may be helpful for the VSR task. Moreover, in tonal languages like Mandarin Chinese, movements of the neck, head, and mouth might be related to the lexical tones of syllables [7]. In the context of deep-learning based VSR, [38] has recently shown that using video information from the cropped lip region together with pinyin (Chinese syllable) information yields a $0.85\%$ reduction in tone prediction error, supporting this hypothesis.

## III Exploring the Influence of RoIs on VSR

In this section, we first introduce the deep VSR architectures used in this work, and four manually selected RoIs (some of which include extraoral regions) to be experimented on. Next, we introduce Cutout as a simple strategy to enhance face-based VSR. Finally, we introduce a few visualisation methods we use to diagnose the resulting models.

### III-A Model Architecture

3D-ResNet18. The 3D-ResNet18 backbone, which holds the current state of the art on the LRW dataset [27, 26], is used for all word-level experiments. It consists of a spatiotemporal convolution layer and an 18-layer 2D residual network which gradually reduces spatial dimensionality, and yields a $512$-dimensional average-pooled vector for each frame. We use a $2$-layer bidirectional Gated Recurrent Unit (Bi-GRU) with $1024$ hidden units as the recurrent backend, and do not experiment with further regularisation such as Dropout and recurrent batch normalisation [26], as this deviates from the main objective of this paper.
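To make the word-level architecture concrete, a minimal PyTorch sketch is given below. It is illustrative rather than a faithful reproduction of our implementation: the stem kernel sizes, the reuse of a torchvision ResNet-18 trunk, and the temporal average pooling before the classifier are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class WordLevelVSR(nn.Module):
    """Sketch of a 3D-ResNet18 frontend with a 2-layer Bi-GRU backend."""

    def __init__(self, num_classes=500):  # e.g. the 500 LRW word classes
        super().__init__()
        # Spatiotemporal stem: greyscale video -> 64 feature maps per frame.
        self.stem = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2),
                         padding=(0, 1, 1)),
        )
        # Per-frame 2D ResNet-18 trunk (its own stem and classifier removed).
        r18 = models.resnet18()
        self.trunk = nn.Sequential(r18.layer1, r18.layer2, r18.layer3, r18.layer4)
        self.pool = nn.AdaptiveAvgPool2d(1)   # -> 512-d vector per frame
        self.gru = nn.GRU(512, 1024, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 1024, num_classes)

    def forward(self, x):                         # x: (B, 1, T, H, W)
        f = self.stem(x)                          # (B, 64, T, H', W')
        b, c, t, h, w = f.shape
        f = f.transpose(1, 2).reshape(b * t, c, h, w)
        f = self.pool(self.trunk(f)).flatten(1)   # (B*T, 512)
        f = f.reshape(b, t, -1)                    # (B, T, 512)
        out, _ = self.gru(f)                       # (B, T, 2048)
        return self.fc(out.mean(dim=1))            # average over time -> logits
```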
LipNet. For sentence-level VSR, we use the LipNet architecture, which achieves state-of-the-art performance on the GRID corpus [4]. With only three spatiotemporal convolutional layers in the frontend, this lightweight model should be less prone to overfitting on such a small-scale dataset. We also use temporal augmentation and Dropout as recommended by the authors.

### III-B Description of Manually Selected RoIs

For fixed-size mouth crops, there are two dominant approaches: one uses fixed bounding box coordinates, and the other uses mouth-centered crops. To make this study comprehensive and self-contained, we experiment with both choices (although we eventually found no difference; see Table I).

Figure 2: Illustration of the sub-face RoIs defined in this paper (panels, left to right: original frame, landmark detection, aligned face). We train baselines on the whole face (blue), the upper face (purple), the cheeks (orange), and the mouth (red).

For face-based models, we first crop and align the faces using detected eye and nose landmarks. These parts undergo less non-rigid deformation, which allows us to capture more significant motion in the lower face, including the cheeks, the mouth, and the jaw. At a low computational cost, this allows for more stable face tracks, and helps the network capture information in different spatial patches more consistently. However, it should be noted that face motion consists of rigid head movement (yaw, pitch, and roll) and non-rigid movement (e.g. frowning, raising eyebrows, and skin movement). Aligning the face retains yaw and pitch, but removes roll rotation entirely. To this end, we also experiment with the whole face and upper face cropped directly by fixed coordinates on LRW, since the faces are of uniform sizes and loosely aligned. The input sizes we choose are those commonly adopted in prior work, $112\times 112$ and $100\times 100$. We do not elaborate on the effects of spatial downsampling, since lower RoI resolution does not always yield poorer performance, as long as it remains above an acceptable level (e.g. $50\times 50$ pixels) [5, 15, 35]. Besides, datasets collected in the wild already contain inherent scale and image quality variations in any case.

To crop the cheeks without revealing too much information from other regions, we crop a rectangular tile from the aligned face, which has no in-plane rotation. The vertical center of the RoI is set to the mean $y$ coordinate of the 18th, 27th, 57th, and 59th landmarks after transformation, as sketched below. Finally, we do not experiment with the jaw and the lower neck. Although these parts are directly related to the vibrations of the vocal tract, they are not always visible, and such an RoI would be difficult to define.
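As an illustration of how this crop can be implemented, the sketch below extracts the cheek strip from an aligned face given the transformed landmarks. The $40$-pixel strip height mirrors the crop listed in Tables I and II; the 68-point landmark convention (and hence the 0-based indices) is an assumption, since the exact indices depend on the landmark detector used.

```python
import numpy as np
import cv2

def crop_cheek_roi(aligned_face, landmarks, strip_height=40, out_size=(112, 112)):
    """Crop a horizontal cheek strip from an aligned (roll-free) face.

    aligned_face: (H, W, C) image after similarity alignment.
    landmarks:    (68, 2) array of transformed (x, y) points.
    """
    h = aligned_face.shape[0]
    # Vertical centre: mean y of the 18th, 27th, 57th and 59th landmarks
    # (0-based indices 17, 26, 56 and 58 under a 68-point convention).
    cy = int(round(landmarks[[17, 26, 56, 58], 1].mean()))
    top = int(np.clip(cy - strip_height // 2, 0, h - strip_height))
    strip = aligned_face[top:top + strip_height]   # full-width strip
    return cv2.resize(strip, out_size)             # upsample for the CNN
```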
### III-C Enhancing Face-Based VSR with Cutout

Our motivation for using the face as input is that it covers the other three sub-RoIs, and thus has the potential to provide a strong "holistic" representation. If the network is able to remain robust against the speech-irrelevant variations that human faces present, including pose, lighting, makeup, etc., we can benefit from additional information in other facial regions all at once. However, a network has only limited capacity. On a fixed budget, the goals of achieving invariance to nuisance factors and keeping strong discrimination capabilities are inherently complementary, and recognition performance may suffer if either is not sufficiently realised. Indeed, our experiments will later show that the face models already achieve performance comparable to lip-based models. However, when we inspect the convolutional responses of vanilla face-based models (see Fig. 5), we find that the network tends to focus on the lip region and does not sufficiently utilise other parts of the face.

Inspired by the observation that vanilla face-based models are able to achieve unexpectedly good performance, we try to enhance performance further by asking the model to learn more discriminative features that utilise signals spread across the whole face. An intuitive idea is to create a strong patch-based ensemble, which has been shown feasible for facial expression recognition [19]. However, for VSR this would be computationally expensive and impractical for deployment, since we are dealing with spatiotemporal data. Moreover, the redundancy between patches would burden optimisation. Therefore, we propose to apply Cutout [14], a regularisation technique which has been popular with image CNNs. It augments the dataset with partially occluded versions of its samples, and has already been successfully applied to image classification and object detection [39]. Our motivation is that this "adversarial erasing" process should help the model pay more attention to less prominent speech-related motion signals in extraoral regions. During training, a patch within the face region is randomly zeroed. Note that Cutout is applied at identical spatial positions across the video, since we expect the same facial region to stay in roughly the same position after decent alignment. Over the course of the training process, the model encounters many masked versions of the same video, and eventually learns discriminative features for all parts of the face, which it should be able to combine to its advantage during inference. Cutout fits seamlessly into the training process, and can be performed efficiently as a data augmentation step at no additional cost, as illustrated below.
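A minimal sketch of this augmentation follows. The patch size and the zero fill value are illustrative choices; the essential property, as described above, is that a single spatial mask is shared by every frame of a clip, with a new mask sampled for each training clip.

```python
import torch

def video_cutout(clip, patch_size=44):
    """Apply Cutout to a video with the mask shared across time.

    clip: float tensor of shape (T, C, H, W), an aligned face track.
    """
    t, c, h, w = clip.shape
    # One patch centre for the whole clip, so the same facial region
    # is erased in every frame.
    cy = int(torch.randint(h, (1,)))
    cx = int(torch.randint(w, (1,)))
    y1, y2 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x1, x2 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    clip = clip.clone()
    clip[:, :, y1:y2, x1:x2] = 0.0
    return clip
```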
### III-D Interpreting Model Behaviour

To explore what our models pay attention to throughout the video, we apply three visualisation techniques that can provide insights into what the network focuses on for the task of visual speech recognition, similar to the use of gaze trackers in human psychology experiments.

Feature maps. Feature maps are filter responses to input images and to outputs from previous layers, and can provide insight into the intermediate representations learned by the model. Looking at responses from increasingly deeper layers can give us a sense of how the model combines low-level features into higher-level concepts that are useful for recognition.

Saliency maps. We use guided backpropagation saliency visualisation [25], which has previously been utilised to interpret lip reading [4, 9] and image classification models. Specifically, let $\mathbf{x}\in\mathbb{R}^{T\times H\times W\times C}$ be an input video, $V$ be the alphabet or vocabulary, and $\phi$ be the non-linear function that underlies the deep recognition network. For word-level classification, the network outputs a score for the $i$-th class in the vocabulary, $p(i\mid\mathbf{x})=\phi(\mathbf{x})_{i}$, where $1\leq i\leq|V|$. We compute its gradient with respect to the input $\mathbf{x}$ using guided backpropagation. Likewise, for sentence-level VSR, we compute the likelihood of the greedily decoded output $\mathbf{y}\in(V\cup\{\text{\textvisiblespace}\})^{*}$, which is $\prod_{t}p(\mathbf{y}_{t}\mid\mathbf{x})$, and differentiate it with respect to the input video sequence $\mathbf{x}$ to obtain the saliency maps. Eventually we obtain a tensor of the same size as the input, which depicts the spatial regions the model bases its prediction on at different timesteps.

Spatiotemporal masking. This approach, adapted from the patch-masking method for visualising the receptive fields of 2D ConvNets [36], has been used to visualise important space-time video patches in audio-visual speech enhancement models [16]. In our experiments, we mask each frame at the identical position for spatially aligned faces, using a small $7\times 7$ patch in a sliding-window fashion, and measure how overall accuracy is affected by computing the performance drop, e.g. $\Delta_{\text{accuracy}}$. This process results in a heatmap depicting the contribution of different facial regions to recognition.
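A sketch of this masking procedure is shown below. The `evaluate` helper, which returns accuracy for a model on a set of clips and labels, is a placeholder; the patch size and stride follow the $7\times 7$ setting described above.

```python
import torch

@torch.no_grad()
def occlusion_heatmap(model, evaluate, clips, labels, patch=7, stride=7):
    """Slide a patch over all frames and record the accuracy drop per position.

    clips: (N, T, C, H, W) aligned face tracks.
    Returns an (H // stride, W // stride) heatmap of performance drops.
    """
    base_acc = evaluate(model, clips, labels)
    h, w = clips.shape[-2:]
    heatmap = torch.zeros(h // stride, w // stride)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = clips.clone()
            # Zero the same spatial window in every frame of every clip.
            masked[..., y:y + patch, x:x + patch] = 0.0
            heatmap[i, j] = base_acc - evaluate(model, masked, labels)
    return heatmap  # large values mark regions the model relies on
```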
## IV Experiments and Results

We train and evaluate on three VSR benchmarks, which cover tonal and atonal languages as well as in-the-wild and scripted speech: the Lip Reading in the Wild (LRW) dataset, the recent LRW-1000 dataset, and the GRID audiovisual corpus. In this section, we first briefly introduce the three datasets we use, and present some implementation details. Next, we compare recognition performance using the four manually selected RoIs described in Sec. III-B. We highlight the benefits of incorporating extraoral regions, in particular by using aligned entire faces. Finally, we present results of our best-performing model, which combines Cutout with face inputs, and make a few useful remarks.

### IV-A Datasets

LRW. LRW [9] is a challenging "in-the-wild" English word-level lip reading dataset derived from BBC news collections, with $500$ classes and over $500,000$ instances, of which $25,000$ are reserved for testing.

LRW-1000. LRW-1000 [35] is a $1000$-class, large-scale, naturally distributed Mandarin Chinese word-level lip reading dataset, also derived from TV broadcasts. With over $700,000$ word instances, it is even more challenging, with significant pose, scale, background clutter, word length, and inter-speaker variations. Note that whole faces were not provided in the initial release of LRW-1000. We will release face tracks for LRW-1000 in due course, along with annotations that have undergone another round of manual cleaning, as well as corresponding baselines. (For now, we train and evaluate our models with the original word alignment and landmark annotations in [35] for fair comparison. Note that the original paper used ResNet-34, while this work uses ResNet-18, which is currently more popular due to fewer parameters and better performance.)

GRID. The GRID audiovisual corpus [11], released in 2006, is a popular benchmark for sentence-level VSR. It consists of video recordings from $34$ speakers (one speaker's data is unavailable due to technical reasons), yielding $33,000$ utterances. All sentences follow a fixed grammar.

### IV-B Implementation Details

Data preprocessing. We detect faces and facial landmarks with the open-source SeetaFace2 toolkit [23], and align faces with similarity transformations using upper-face landmarks [8], which are smoothed using a temporal Gaussian kernel of width $3$; a sketch of this procedure is given below. For LRW and LRW-1000, the faces are resized to $122\times 122$ and randomly cropped to $112\times 112$ during training. For GRID, the faces are resized to $100\times 100$.
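The sketch below illustrates these two preprocessing steps, temporal landmark smoothing and similarity-transform alignment. Interpreting the width-$3$ kernel as a Gaussian sigma, and using OpenCV's `estimateAffinePartial2D` (which fits a 4-DoF similarity transform) with a hand-picked canonical template, are both simplifying assumptions.

```python
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter1d

def smooth_landmarks(landmarks, sigma=3):
    """Temporally smooth a (T, K, 2) landmark track along the time axis."""
    return gaussian_filter1d(landmarks, sigma=sigma, axis=0)

def align_face(frame, pts, template, size=(122, 122)):
    """Warp a frame so its upper-face points match a canonical template.

    pts, template: (K, 2) arrays of source and target (x, y) coordinates,
    e.g. eye corners and nose tip, which deform little during speech.
    """
    m, _ = cv2.estimateAffinePartial2D(pts.astype(np.float32),
                                       template.astype(np.float32))
    return cv2.warpAffine(frame, m, size)
```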
### IV-C Experiments on Manually Selected RoIs

Baseline results on the manually defined RoIs are shown in Tables I, II, and III. We analyze the results step by step for the rest of this subsection.

TABLE I: Evaluation on the LRW dataset with different RoI choices. The second group uses directly cropped (upper) faces while the third applies face alignment.

Region | Resolution | Accuracy | Description
---|---|---|---
Mouth | $88\times 88$ | $83.30\%$ | Fixed bounding box [21, 26]
Mouth | $88\times 88$ | $83.30\%$ | Mouth-centered [9]
Face | $112\times 112$ | $83.46\%$ | Nose-centered, $7/8$ original size
Upper face | $112\times 112$ | $42.28\%$ | Resized upper half from above
Face | $112\times 112$ | $83.10\%$ | Aligned with eye & nose landmarks [8]
Cheeks | $112\times 112$ | $62.49\%$ | Resized $40\times 112$ crop from above
Upper face | $112\times 112$ | $48.33\%$ | Resized upper half
Face (CBAM) | $112\times 112$ | $83.14\%$ |
Face (Cutout) | $112\times 112$ | $\bf 85.02\%$ |

TABLE II: Evaluation on LRW-1000 with different RoI choices.

Region | Resolution | Accuracy | Description
---|---|---|---
Mouth | $88\times 88$ | $38.64\%$ | Mouth-centered, no roll (as in [35])
Face | $112\times 112$ | $41.71\%$ | Aligned with eye & nose landmarks
Upper face | $112\times 112$ | $15.84\%$ | Resized upper half
Cheeks | $112\times 112$ | $32.50\%$ | Resized $40\times 112$ crop from above
Face (Cutout) | $112\times 112$ | $\bf 45.24\%$ |
Upper face | $112\times 112$ | $13.58\%$ | Front-end loaded from LRW and fixed

TABLE III: Evaluation on the GRID corpus with different RoI choices. A $5$-gram, character-level language model is used during beam search. “CER” stands for Character Error Rate. The lower the error rates, the better.

Region | Resolution | WER | CER | Description
---|---|---|---|---
Mouth | $100\times 50$ | $4.8\%$ [4] | $1.9\%$ | Affine warped
Mouth | $100\times 50$ | $4.7\%$ | $1.9\%$ | Above (reproduced)
Face | $100\times 100$ | $3.1\%$ | $1.3\%$ | Aligned with eye & nose landmarks
Upper face | $100\times 50$ | $14.4\%$ | $7.4\%$ | Upper half of above
Cheeks | $100\times 50$ | $6.8\%$ | $3.1\%$ | Resized $36\times 100$ crop from above
Face (Cutout) | $100\times 100$ | $\bf 2.9\%$ | $\bf 1.2\%$ |

Effectiveness of including extraoral regions. Experiment results clearly show that the upper face and cheeks carry useful information, since recognition rates are far above chance. Counterintuitively, the upper face achieves nearly half the accuracy of the face and mouth models. To ensure that the model has learned useful discriminative features instead of some unknown inherent bias within the dataset, we conduct an additional experiment, transferring from LRW to LRW-1000 while keeping the front-end fixed. Intuitively, if the front-end has learned spurious clues that are irrelevant to the task itself, it should behave poorly on a dataset it has not been trained on. However, despite significant differences between the two datasets in terms of quality and language, classification accuracy remains far above chance with the front-end fixed, with only a $2.26\%$ absolute performance loss.

Seeing how the upper face and cheeks convey useful information for VSR, by feeding the model with the entire face, we would expect it to benefit from the additional spatial context, which is indeed the case. Using the entire face instead of only the mouth region yields a $1.6\%$ WER reduction on GRID, a $3.07\%$ improvement on LRW-1000, and a $0.16\%$ improvement on LRW (when directly cropped faces are used). The slight performance regression on LRW with aligned faces will be discussed next.

Making a case for face alignment. For LRW, there are two sets of face-based experiments, one with face alignment, and the other using faces directly cropped by fixed coordinates. We observe a small performance degradation on LRW when we align faces to our canonical template (a $0.2\%$ to $0.3\%$ drop relative to mouth and $7/8$ original resolution direct crops). Since recognition performance using the upper face actually benefits from alignment, it can be argued that the performance drop is most likely due to the slightly lower mouth resolution (about $70\times 70$), and not the removal of roll during the alignment process. This is not desirable, but acceptable, and there may be room for improvement if we adopt higher resolution inputs and improve landmarking quality. Therefore, we consider aligning the faces into stable face tracks to be beneficial, and use aligned faces for the remaining experiments. In any case, alignment is also necessary for structured facial region erasing with Cutout.

Failure modes. As an illustrative example, we further compare confusions made by the model under each crop setting in Table IV and Table V for the two English datasets. Overall, short words with less context and words that invoke weak extraoral motion perform worst. We observe that the words best predicted by the upper face are long words, such as “Westminster” and “Temperatures”, and cheeks are good at making predictions for words that invoke significant extraoral motion. The face-based model, which achieves comparable but slightly inferior performance to the mouth-based model, fails to discriminate words with only subtle differences, such as “benefits” and “benefit”.
Recognising such words correctly requires analysing tongue movement, which is hindered by the lowered overall resolution. However, on the GRID corpus, the face-based model seems to have identified idiosyncratic speaking styles, allowing it to make correct predictions even for short words and letters that are not easy to distinguish, and eventually to obtain results better than cropped mouths.

TABLE IV: Top predicted words, worst predicted words, and pairs exhibiting the highest confusion in LRW under each crop setting.

Top predicted words (accuracy):

Mouth | Face | Cheeks | Upper Face
---|---|---|---
AGREEMENT (1) | ACCUSED (1) | AFTERNOON (0.98) | WESTMINSTER (0.96)
ALLEGATIONS (1) | AGREEMENT (1) | WEEKEND (0.98) | TEMPERATURES (0.92)
BEFORE (1) | BEFORE (1) | WELFARE (0.98) | AFTERNOON (0.88)
PERHAPS (1) | CAMPAIGN (1) | WESTMINSTER (0.98) | SUNSHINE (0.88)
PRIME (1) | FOLLOWING (1) | INFORMATION (0.96) | DESCRIBED (0.86)

Worst predicted words (accuracy):

Mouth | Face | Cheeks | Upper Face
---|---|---|---
ASKED (0.58) | WORLD (0.6) | REALLY (0.32) | GREAT (0.18)
BRITAIN (0.58) | ANSWER (0.58) | COULD (0.3) | OTHER (0.18)
MATTER (0.58) | BECAUSE (0.58) | GETTING (0.3) | UNTIL (0.18)
SPEND (0.58) | COURT (0.58) | MAKES (0.3) | WHICH (0.18)
TAKEN (0.58) | PERSON (0.58) | MATTER (0.3) | BRING (0.16)

Highest-confusion pairs (target / estimated, # errors):

Mouth | Face | Cheeks | Upper Face
---|---|---|---
SPEND / SPENT (11) | BENEFITS / BENEFIT (14) | CLAIMS / GAMES (11) | MILLION / BILLION (11)
PRESS / PRICE (11) | PRESS / PRICE (12) | CHALLENGE / CHANGE (10) | BENEFITS / BENEFIT (10)
WORST / WORDS (10) | LIVING / GIVING (12) | SYRIAN / SYRIA (10) | EVERYONE / EVERYBODY (10)
PRICE / PRESS (10) | WORST / WORDS (11) | GROUND / AROUND (9) | TAKEN / SECOND (9)
SERIES / SERIOUS (9) | THEIR / THERE (10) | INDUSTRY / HISTORY (9) | TERMS / TIMES (9)

TABLE V: Examples of predictions on GRID under each crop setting. Prediction errors are highlighted in red. GT: Ground Truth; UF: Upper Face; C: Cheeks; M: Mouth; F: (Aligned) Face.

GT | lay white in u four now
---|---
UF | lay white in u four now
C | lay white at o four now
M | lay white at o four now
F | lay white in u four now

GT | lay white in q five again
---|---
UF | set white at t five again
C | lay white at q five again
M | lay white at q five again
F | lay white in q five again

Superiority of using entire faces for cross-dataset transfer. A byproduct of using the entire face rather than specifying sub-RoIs is that the visual front-end can be transferred across datasets easily, with no explicit sub-RoI cropping after face detection and registration, which is already very routine in face recognition. Empirically, we found that transferring across face crops is much better than transferring across mouth crops from LRW to LRW-1000. When both are fine-tuned for $9$ epochs, the mouth-based model suffers from source and target domain discrepancies, resulting in fluctuating validation loss and accuracy (best value $2.8482/38.40\%$), whereas the face-based model transfers stably and converges to a better result (best value $2.7202/40.35\%$).

### IV-D Experiments on Enhancing Face Inputs with Cutout

Results of combining Cutout with aligned faces can also be found in the previous tables. This strategy is extremely powerful, enabling state-of-the-art performance on all three benchmarks. In particular, Cutout significantly reduces overfitting on LRW, to the point where the model starts to underfit, as can be seen from the training curves in Fig. 3.
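The augmentation itself is straightforward to implement. The snippet below is a minimal sketch of Cutout applied to a video clip as we use it, i.e. one mask position shared by all frames of the spatially aligned face track; the function name and tensor layout are our own illustrative choices rather than code from our release:

```python
import torch

def video_cutout(frames: torch.Tensor, patch_size: int = 56) -> torch.Tensor:
    """Zero out one square patch at the same location in every frame.

    frames: (T, H, W) or (T, C, H, W) tensor, e.g. T x 112 x 112 aligned faces.
    """
    h, w = frames.shape[-2], frames.shape[-1]
    # The patch centre is sampled anywhere in the image, so the patch may lie
    # partially outside it, and the mouth is only sometimes fully occluded.
    cy = int(torch.randint(0, h, (1,)))
    cx = int(torch.randint(0, w, (1,)))
    y1, y2 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x1, x2 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    out = frames.clone()
    out[..., y1:y2, x1:x2] = 0.0  # identical spatiotemporal mask
    return out
```

Erasing the same region across time matters: a mask that moved from frame to frame would rarely hide the mouth for a whole utterance, and would therefore put far less pressure on the model to exploit extraoral cues.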
This behaviour is radically different from models trained on faces or mouths without Cutout, where the network eventually reaches near $100\%$ accuracy on the training data, and it proves that this harsh regularisation policy is indeed useful for VSR. Below we elaborate on a few interesting observations.

Figure 3: Accuracy curves on LRW as training progresses (only the first 75k steps of the temporal convolution backend stage is shown). Compared to vanilla mouth and face based models, which easily achieve nearly $100\%$ accuracy on the training set, Cutout (denoted by red curves) significantly reduces overfitting.

Effect of the Cutout patch size. The size of the masked out region is an important hyperparameter for Cutout. We experiment with four different sizes: $70\times 70$, $56\times 56$, $42\times 42$, and $28\times 28$, which are $5/8$, $1/2$, $3/8$, and $1/4$ the scale of the whole face, respectively. Experiments on LRW show that among those a $56\times 56$ patch is most effective (see Fig. 4). This is probably because it is approximately the average size of the mouth, and the possibility of the entire mouth being blocked allows for more efficient utilisation of extraoral regions. Since we adopt the same canonical template, we use $1/2$-size masks for all datasets (i.e. $50\times 50$ for GRID).

Figure 4: Ablation results on LRW with different Cutout patch sizes (validation and test accuracy for patch widths $W/4$ to $5W/8$, against the no-Cutout baselines). We achieve best validation accuracy with patches half the size of the input.

Figure 5: Visualisations of model behaviour on LRW, LRW-1000, and GRID with Cutout applied. In particular, note how models trained with Cutout preserve fine-grained features, and yield clearly visible saliency around the cheeks. (a) ResNet layer 3 feature maps for “hundred”; (b) Saliency maps for “bin blue”; (c) Saliency maps for “er ling yi qi” (twenty-seventeen).

Visualising key facial regions. Fig. 5 provides some saliency visualisations which show that models with Cutout can learn more discriminative features. Fig. 5(a) is generated by extracting feature maps from the third ResNet layer and performing max-pooling across channels, and Fig. 5(b)(c) (GRID and LRW-1000) by computing back-prop saliency. The derived maps are colored by intensity and overlaid onto the original frames. Each face thumbnail corresponds to a time step, and the two rows can be compared side-by-side to see the effects of introducing Cutout, especially in regions highlighted with a dotted red box. For example, for LRW the convolutional responses are no longer confined to the lips, and for GRID there is stronger saliency in the cheeks, the eyebrows, and other facial muscles. The third row shows that after transferring from LRW to LRW-1000, saliency in extraoral regions persists, which means that the learned facial features generalize well and are relatively robust.

We also identify regions that are important to the network’s predictions by spatiotemporal masking on the entire test set of LRW, and the results are depicted in Fig. 6. It can be seen that for both models, the area that leads to the most significant performance degradation when masked is still the lip region, which agrees with common intuition. In addition, the model trained with Cutout is also affected by occlusion in extraoral regions such as the cheeks and the upper face, showing that the model has learned strong visual features that encode these weak signals to complement lip motion.
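The masking probe used for Fig. 6 is equally simple to reproduce. Below is a sketch of the sliding-window evaluation; the function name, stride, and model interface are our own illustrative assumptions:

```python
import torch

@torch.no_grad()
def occlusion_heatmap(model, videos, labels, patch: int = 7, stride: int = 7):
    """Slide a patch x patch zero mask over every frame (same position in
    all frames) and record the accuracy drop at each location."""
    h, w = videos.shape[-2], videos.shape[-1]
    base = (model(videos).argmax(-1) == labels).float().mean()
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            masked = videos.clone()
            masked[..., y:y + patch, x:x + patch] = 0.0
            acc = (model(masked).argmax(-1) == labels).float().mean()
            heat[i, j] = base - acc  # larger drop => more important region
    return heat
```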
More importantly, while the plain model suffers up to a $40\%$ performance drop even when only a $7\times 7$ patch is occluded, the drop remains below $2\%$ for the model trained with Cutout. This observation again strongly supports the usefulness of extraoral regions.

Figure 6: Important facial regions determined by spatiotemporal masking. Left: without Cutout. Middle: an aligned face for reference. Right: with Cutout. Regions that result in a larger accuracy drop when occluded are colored brighter, and are more crucial for model prediction.

Comparison with attention-based RoI selection. The attention mechanism, inspired by human vision, has been widely used for implicit RoI selection and fine-grained image recognition. Here we compare Cutout with a Convolutional Block Attention Module (CBAM) [33] augmented baseline on LRW, where we plug CBAM into the ResNet blocks. Here, we use $5\times 5$ instead of $7\times 7$ kernels for the spatial attention modules to accommodate the smaller feature maps. Although CBAM is a powerful spatial-channel attention mechanism which has achieved remarkable performance on ImageNet classification and robust remote heart rate estimation [20], results show that the attention-augmented model is only marginally better than the baseline on LRW. We believe this is because the subtle movements in extraoral regions are too weak to be captured with the attention mechanism, and the model is still biased towards lip motion.

Performance across pose. We are also interested in how well the Cutout-augmented model can handle pose variations. We turn to LRW-1000, where the data is divided into different difficulty levels according to yaw rotation. From Table VI, we can see that the model trained with Cutout outperforms the no-augmentation face-based baseline by about $2\%$ on both the easy (yaw $\geq 20^{\circ}$) and the medium ($\geq 40^{\circ}$) subsets, but degrades slightly on the hard subset ($\geq 60^{\circ}$). We believe this is mainly because effective areas are smaller in the hard setting, where there are more frames in profile view. The effective regions are more likely to be erased or occluded when there is significant yaw rotation.

TABLE VI: Performance w.r.t. Pose on LRW-1000

Methods | Easy | Medium | Hard | All
---|---|---|---|---
Mouth (ResNet34) [35] | $24.89\%$ | $20.76\%$ | $15.9\%$ | $38.19\%$
Mouth | $25.95\%$ | $21.55\%$ | $18.36\%$ | $38.64\%$
Face | 28.87% | 28.45% | $\mathbf{27.21\%}$ | 41.71%
Face (Cutout) | $\mathbf{31.74\%}$ | $\mathbf{30.04\%}$ | 26.89% | $\mathbf{45.24\%}$

## V Conclusion

In this paper, we have investigated a previously overlooked problem in VSR: the use of extraoral information. We demonstrate that extraoral regions in the face, such as the upper face and the cheeks, can also be included to boost performance. We show that using simple Cutout augmentation with aligned face inputs can yield stronger features, and vastly improve recognition performance by forcing the model to learn the less obvious extraoral cues from data. Beyond VSR, our findings also have clear implications for other speech-related vision tasks, such as realistic talking face generation, face spoofing detection, and audio-visual speech enhancement. Next steps include extending the method to sentence-level VSR, where more contextual clues are available, increasing input resolution, and eliminating the need for explicit face alignment.

## VI Acknowledgments

We would like to thank Chenhao Wang and Mingshuang Luo for their extensive help with data processing.
This work is supported in part by the National Key R&D Program of China (No. 2017YFA0700804), and Natural Science Foundation of China (No. 61702486, 61876171). ## References * [1] T. Afouras, J. S. Chung, A. Senior, O. Vinyals, and A. Zisserman. Deep audio-visual speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2018. * [2] N. Alghamdi, S. Maddock, R. Marxer, J. Barker, and G. J. Brown. A corpus of audio-visual lombard speech with frontal and profile views. The Journal of the Acoustical Society of America, 143(6):EL523–EL529, 2018. * [3] I. Anina, Z. Zhou, G. Zhao, and M. Pietikäinen. OuluVS2: A multi-view audiovisual database for non-rigid mouth motion analysis. In 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2015, Ljubljana, Slovenia, May 4-8, 2015, pages 1–5, 2015. * [4] Y. M. Assael, B. Shillingford, S. Whiteson, and N. de Freitas. LipNet: End-to-end sentence-level lipreading. CoRR, abs/1611.01599, 2016. * [5] H. L. Bear, R. W. Harvey, B. Theobald, and Y. Lan. Resolution limits on visual speech recognition. In 2014 IEEE International Conference on Image Processing, ICIP 2014, Paris, France, October 27-30, 2014, pages 1371–1375, 2014. * [6] C. Benoit, T. Guiard-Marigny, B. Le Goff, and A. Adjoudani. Which components of the face do humans and machines best speechread? In Speechreading by humans and machines, pages 315–328. Springer, 1996. * [7] T. H. Chen and D. W. Massaro. Seeing pitch: Visual information for lexical tones of mandarin-chinese. The Journal of the Acoustical Society of America, 123(4):2356–2366, 2008. * [8] J. S. Chung, A. Jamaludin, and A. Zisserman. You said that? In British Machine Vision Conference 2017, BMVC 2017, London, UK, September 4-7, 2017, 2017. * [9] J. S. Chung and A. Zisserman. Lip reading in the wild. In Computer Vision - ACCV 2016 - 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II, pages 87–103, 2016. * [10] J. S. Chung and A. Zisserman. Lip reading in profile. In British Machine Vision Conference 2017, BMVC 2017, London, UK, September 4-7, 2017, 2017. * [11] M. Cooke, J. Barker, S. Cunningham, and X. Shao. An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America, 120(5):2421–2424, 2006. * [12] E. Cvejic, J. Kim, and C. Davis. Prosody off the top of the head: Prosodic contrasts can be discriminated by head motion. Speech Communication, 52(6):555–564, 2010. * [13] C. Davis and J. Kim. Audio-visual speech perception off the top of the head. Cognition, 100(3):B21–B31, 2006. * [14] T. Devries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. CoRR, abs/1708.04552, 2017. * [15] L. Dungan, A. Karaali, and N. Harte. The impact of reduced video quality on visual speech recognition. In 2018 IEEE International Conference on Image Processing, ICIP 2018, Athens, Greece, October 7-10, 2018, pages 2560–2564, 2018. * [16] A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein. Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation. ACM Trans. Graph., 37(4):112:1–112:11, 2018. * [17] A. Koumparoulis, G. Potamianos, Y. Mroueh, and S. J. Rennie. Exploring ROI size in deep learning based lipreading. In Auditory-Visual Speech Processing, AVSP 2017, Stockholm, Sweden, 25-26 August 2017., pages 64–69, 2017. * [18] C. R. 
Lansing and G. W. McConkie. Word identification and eye fixation locations in visual and visual-plus-auditory presentations of spoken sentences. Perception & psychophysics, 65(4):536–552, 2003. * [19] Y. Li, J. Zeng, S. Shan, and X. Chen. Patch-gated CNN for occlusion-aware facial expression recognition. In 24th International Conference on Pattern Recognition, ICPR 2018, Beijing, China, August 20-24, 2018, pages 2209–2214, 2018. * [20] X. Niu, X. Zhao, H. Han, A. Das, A. Dantcheva, S. Shan, and X. Chen. Robust remote heart rate estimation from face utilizing spatial-temporal attention. In 14th IEEE International Conference on Automatic Face & Gesture Recognition, FG 2019, Lille, France, May 14-18, 2019, pages 1–8, 2019\. * [21] S. Petridis, T. Stafylakis, P. Ma, F. Cai, G. Tzimiropoulos, and M. Pantic. End-to-end audiovisual speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2018, Calgary, AB, Canada, April 15-20, 2018, pages 6548–6552, 2018. * [22] T. N. Sainath, O. Vinyals, A. W. Senior, and H. Sak. Convolutional, long short-term memory, fully connected deep neural networks. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015, pages 4580–4584, 2015. * [23] SeetaTech. SeetaFace2. https://github.com/seetafaceengine/SeetaFace2, 2019. * [24] J. Shiraishi and T. Saitoh. Optical flow based lip reading using non rectangular ROI and head motion reduction. In 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2015, Ljubljana, Slovenia, May 4-8, 2015, pages 1–6, 2015. * [25] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. A. Riedmiller. Striving for simplicity: The all convolutional net. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings, 2015\. * [26] T. Stafylakis, M. H. Khan, and G. Tzimiropoulos. Pushing the boundaries of audiovisual word recognition using residual networks and LSTMs. Computer Vision and Image Understanding, 176-177:22–32, 2018. * [27] T. Stafylakis and G. Tzimiropoulos. Combining residual networks with LSTMs for lipreading. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017, pages 3652–3656, 2017. * [28] W. H. Sumby and I. Pollack. Visual contribution to speech intelligibility in noise. The journal of the acoustical society of america, 26(2):212–215, 1954. * [29] K. Sun, C. Yu, W. Shi, L. Liu, and Y. Shi. Lip-Interact: Improving mobile device interaction with silent speech commands. In The 31st Annual ACM Symposium on User Interface Software and Technology, UIST 2018, Berlin, Germany, October 14-17, 2018, pages 581–593, 2018. * [30] E. Vatikiotis-Bateson, I.-M. Eigsti, S. Yano, and K. G. Munhall. Eye movement of perceivers during audiovisualspeech perception. Perception & psychophysics, 60(6):926–940, 1998. * [31] M. L.-H. Võ, T. J. Smith, P. K. Mital, and J. M. Henderson. Do the eyes really have it? dynamic allocation of attention when viewing moving faces. Journal of vision, 12(13):3–3, 2012. * [32] Y. Wang, Y. Shen, Z. Liu, P. P. Liang, A. Zadeh, and L. Morency. Words can shift: Dynamically adjusting word representations using nonverbal behaviors. 
In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019., pages 7216–7223, 2019. * [33] S. Woo, J. Park, J. Lee, and I. S. Kweon. CBAM: convolutional block attention module. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII, pages 3–19, 2018\. * [34] K. Xu, D. Li, N. Cassimatis, and X. Wang. LCANet: End-to-end lipreading with cascaded attention-ctc. In 13th IEEE International Conference on Automatic Face & Gesture Recognition, FG 2018, Xi’an, China, May 15-19, 2018, pages 548–555. IEEE Computer Society, 2018. * [35] S. Yang, Y. Zhang, D. Feng, M. Yang, C. Wang, J. Xiao, K. Long, S. Shan, and X. Chen. LRW-1000: A naturally-distributed large-scale benchmark for lip reading in the wild. In 14th IEEE International Conference on Automatic Face & Gesture Recognition, FG 2019, Lille, France, May 14-18, 2019, pages 1–8, 2019\. * [36] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I, pages 818–833, 2014. * [37] G. Zhao, M. Barnard, and M. Pietikäinen. Lipreading with local spatiotemporal descriptors. IEEE Trans. Multimedia, 11(7):1254–1265, 2009. * [38] Y. Zhao, R. Xu, and M. Song. A cascade sequence-to-sequence model for chinese mandarin lip reading. In C. Xu, M. S. Kankanhalli, K. Aizawa, S. Jiang, R. Zimmermann, and W. Cheng, editors, MMAsia ’19: ACM Multimedia Asia, Beijing, China, December 16-18, 2019, pages 32:1–32:6. ACM, 2019. * [39] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random erasing data augmentation. CoRR, abs/1708.04896, 2017.
2024-09-04T02:54:57.888713
2020-03-06T14:01:01
2003.03220
{ "authors": "Ozan \\c{C}atal, Samuel Wauthier, Tim Verbelen, Cedric De Boom, Bart\n Dhoedt", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26083", "submitter": "Ozan \\c{C}atal", "url": "https://arxiv.org/abs/2003.03220" }
arxiv-papers
# Deep Active Inference for Autonomous Robot Navigation

Ozan Çatal, Samuel Wauthier, Tim Verbelen, Cedric De Boom, & Bart Dhoedt IDLab, Department of Information Technology Ghent University – imec Ghent, Belgium <EMAIL_ADDRESS>

###### Abstract

Active inference is a theory that underpins the way biological agents perceive and act in the real world. At its core, active inference is based on the principle that the brain is an approximate Bayesian inference engine, building an internal generative model to drive agents towards minimal surprise. Although this theory has shown interesting results with grounding in cognitive neuroscience, its application remains limited to simulations with small, predefined sensor and state spaces. In this paper, we leverage recent advances in deep learning to build more complex generative models that can work without a predefined state space. State representations are learned end-to-end from real-world, high-dimensional sensory data such as camera frames. We also show that these generative models can be used to engage in active inference. To the best of our knowledge, this is the first application of deep active inference to a real-world robot navigation task.

## 1 Introduction

Active inference and the free energy principle underpin the way our brain – and natural agents in general – work. The core idea is that the brain entertains a (generative) model of the world which allows it to learn cause and effect and to predict future sensory observations. It does so by constantly minimising its prediction error or “surprise”, either by updating the generative model, or by inferring actions that will lead to less surprising states. As such, the brain acts as an approximate Bayesian inference engine, constantly striving for homeostasis. There is ample evidence (Friston, 2012; Friston et al., 2013a; 2014) that different regions of the brain actively engage in variational free energy minimisation. Theoretical grounds indicate that even the simplest of life forms act in a free energy minimising way (Friston, 2013).

Although there is a large body of work on active inference for artificial agents (Friston et al., 2006; 2009; 2017; 2013b; Cullen et al., 2018), experiments are typically done in a simulated environment with predefined and simple state and sensor spaces. Recently, research has been done on using deep neural networks as an implementation of the active inference generative model, resulting in the umbrella term “deep active inference”. However, so far all of these approaches have only been tested on fairly simple, simulated environments (Ueltzhöffer, 2018; Millidge, 2019; Çatal et al., 2019). In this paper, we apply deep active inference to a robot navigation task with high-dimensional camera observations, and deploy it on a mobile robot platform. To the best of our knowledge, this is the first time that active inference is applied to a real-world robot navigation task.

In the remainder of this paper we will first introduce the active inference theory in Section 2. Next, we show how we implement active inference using deep neural networks in Section 3, and discuss initial experiments in Section 4.

## 2 Active Inference

Active inference is a process theory of the brain that utilises the concept of free energy (Friston, 2013) to describe the behaviour of various agents. It stipulates that all agents act in order to minimise their own uncertainty about the world. This uncertainty is expressed as Bayesian surprise, or alternatively the variational free energy.
In this context, this is characterised by the difference between what an agent imagines about the world and what it has perceived about the world (Friston, 2010). More concretely, the agent builds a generative model $P(\tilde{{\bm{o}}},\tilde{{\bm{s}}},\tilde{{\bm{a}}})$, linking together the agent’s internal belief states ${\bm{s}}$ with the perceived actions ${\bm{a}}$ and observations ${\bm{o}}$ in the form of a joint distribution. We use a tilde to denote a sequence of variables through time. This generative model can be factorised as in Equation 1.

$P(\tilde{{\bm{o}}},\tilde{{\bm{s}}},\tilde{{\bm{a}}})=P(\tilde{{\bm{a}}})P({\bm{s}}_{0})\prod_{t=1}^{T}P({\bm{o}}_{t}|{\bm{s}}_{t})P({\bm{s}}_{t}|{\bm{s}}_{t-1},{\bm{a}}_{t-1})$ (1)

The free energy or Bayesian surprise is then defined as:

$\begin{split}F&=\mathbb{E}_{Q}[\log Q(\tilde{{\bm{s}}})-\log P(\tilde{{\bm{o}}},\tilde{{\bm{s}}},\tilde{{\bm{a}}})]\\ &=D_{\mathrm{KL}}(Q(\tilde{{\bm{s}}})\|P(\tilde{{\bm{s}}},\tilde{{\bm{a}}}|\tilde{{\bm{o}}}))-\log P(\tilde{{\bm{o}}})\\ &=D_{\mathrm{KL}}(Q(\tilde{{\bm{s}}})\|P(\tilde{{\bm{s}}},\tilde{{\bm{a}}}))-\mathbb{E}_{Q}[\log P(\tilde{{\bm{o}}}|\tilde{{\bm{s}}})]\end{split}$ (2)

Here, $Q(\tilde{{\bm{s}}})$ is an approximate posterior distribution. The second equality shows that the free energy is equivalent to the (negative) evidence lower bound (ELBO) (Kingma & Welling, 2013; Rezende et al., 2014). The final equality frames the problem of free energy minimisation as explaining the world from the agent’s beliefs whilst minimising the complexity of accurate explanations (Friston et al., 2016).

Crucially, in active inference, agents will act according to the belief that they will keep minimising surprise in the future. This means agents will infer policies that yield minimal expected free energy in the future, with a policy $\pi$ being the sequence of future actions ${\bm{a}}_{t:t+H}$ starting at current time step $t$ with a time horizon $H$. This principle is formalised in Equation 3, with $\sigma$ being the softmax function with precision parameter $\gamma$.

$\begin{split}P(\pi)&=\sigma(-\gamma G(\pi))\\ G(\pi)&=\sum_{\tau=t}^{t+H}G(\pi,\tau)\end{split}$ (3)

Expanding the expected free energy functional $G(\pi,\tau)$ we get Equation 4. Using the factorisation of the generative model from Equation 1, we approximate $Q({\bm{o}}_{\tau},{\bm{s}}_{\tau}|\pi)\approx P({\bm{o}}_{\tau}|{\bm{s}}_{\tau})Q({\bm{s}}_{\tau}|\pi)$.

$\begin{split}G(\pi,\tau)&=\mathbb{E}_{Q({\bm{o}}_{\tau},{\bm{s}}_{\tau}|\pi)}[\log Q({\bm{s}}_{\tau}|\pi)-\log P({\bm{o}}_{\tau},{\bm{s}}_{\tau}|\pi)]\\ &=\mathbb{E}_{Q({\bm{o}}_{\tau},{\bm{s}}_{\tau}|\pi)}[\log Q({\bm{s}}_{\tau}|\pi)-\log P({\bm{o}}_{\tau}|{\bm{s}}_{\tau},\pi)-\log P({\bm{s}}_{\tau}|\pi)]\\ &=D_{\mathrm{KL}}(Q({\bm{s}}_{\tau}|\pi)\|P({\bm{s}}_{\tau}))+\mathbb{E}_{Q({\bm{s}}_{\tau})}[H(P({\bm{o}}_{\tau}|{\bm{s}}_{\tau}))]\end{split}$ (4)

Note that, in the final equality, we substitute $P({\bm{s}}_{\tau}|\pi)$ by $P({\bm{s}}_{\tau})$, a global prior distribution on the so-called “preferred” states of the agent. This reflects the fact that the agent has prior expectations about the states it will reach. Hence, minimising expected free energy entails both realising preferences and minimising the ambiguity of the visited states.

## 3 Deep active inference

Figure 1: The various components of the agent rolled out through time.
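To make Equations 3 and 4 concrete, the following sketch evaluates the expected free energy of a candidate policy by rolling a generative model forward with Monte Carlo samples. All interfaces (the transition prior, the likelihood, and the preferred-state prior) are hypothetical stand-ins for the networks introduced below, not the exact implementation:

```python
import torch
import torch.distributions as D

def expected_free_energy(prior, likelihood, preferred, s_t, policy,
                         n_samples: int = 10) -> torch.Tensor:
    """Monte Carlo estimate of G(pi) from Eqs. (3)-(4).

    prior(s, a)   -> Normal over the next state (transition model, assumed)
    likelihood(s) -> distribution over observations (assumed)
    preferred     -> Normal over "preferred" states P(s) (assumed given)
    """
    G = torch.zeros(())
    s = s_t.expand(n_samples, -1)  # propagate several state particles
    for a in policy:
        q_s = prior(s, a)  # Q(s_tau | pi)
        # KL term: divergence of imagined states from preferred states
        G = G + D.kl_divergence(q_s, preferred).mean()
        s = q_s.rsample()
        # Ambiguity term: expected entropy of the observation model
        G = G + likelihood(s).entropy().mean()
    return G
```

Policies would then be scored with $\sigma(-\gamma G(\pi))$ as in Equation 3, or, for hard selection, the policy with the lowest $G$ is simply picked.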
We minimise the variational free energy by minimising both the negative log likelihood of observations and the KL divergence between the state transition model and the observation model. The inferred hidden state is characterised as a multivariate Gaussian distribution. In current treatments of active inference, the state spaces are typically fixed upfront, either completely (Friston et al., 2009; Millidge, 2019) or partially (Ueltzhöffer, 2018). However, this does not scale well to more complex tasks, as it is often difficult to design meaningful state spaces for such problems. Therefore, we allow the agent to learn by itself what the exact parameterisation of its belief space should be. We enable this by using deep neural networks to generate the various necessary probability distributions for our agent.

We approximate the variational posterior distribution for a _single_ timestep $Q({\bm{s}}_{t}|{\bm{s}}_{t-1},{\bm{a}}_{t-1},{\bm{o}}_{t})$ with a network $q_{\phi}({\bm{s}}_{t}|{\bm{s}}_{t-1},{\bm{a}}_{t-1},{\bm{o}}_{t})$. Similarly, we approximate the likelihood model $P({\bm{o}}_{t}|{\bm{s}}_{t})$ with the network $p_{\xi}({\bm{o}}_{t}|{\bm{s}}_{t})$ and the prior $P({\bm{s}}_{t}|{\bm{s}}_{t-1},{\bm{a}}_{t-1})$ with the network $p_{\theta}({\bm{s}}_{t}|{\bm{s}}_{t-1},{\bm{a}}_{t-1})$. Each of the networks outputs a multivariate normal distribution with a diagonal covariance matrix using the reparameterisation trick (Kingma & Welling, 2013). These neural networks cooperate in a way similar to a VAE, where the fixed standard normal prior is replaced by the learnable prior $p_{\theta}$, the decoder by $p_{\xi}$, and the encoder by $q_{\phi}$, as visualised in Figure 1. These networks are trained end-to-end using the free energy formula from the previous section as an objective.

$\forall t:\underset{\phi,\theta,\xi}{\text{minimise}}:-\log p_{\xi}({\bm{o}}_{t}|{\bm{s}}_{t})+D_{\mathrm{KL}}(q_{\phi}({\bm{s}}_{t}|{\bm{s}}_{t-1},{\bm{a}}_{t-1},{\bm{o}}_{t})\|p_{\theta}({\bm{s}}_{t}|{\bm{s}}_{t-1},{\bm{a}}_{t-1}))$ (5)

As in a conventional VAE (Kingma & Welling, 2013), the negative log likelihood (NLL) term in the objective punishes reconstruction error, forcing the model to capture relevant information about the belief state in the posterior output, while the KL term pulls the prior output towards the posterior output, forcing the prior and posterior to agree on the content of the belief state in a way that still allows the likelihood model to reconstruct the current observation.

We can now use the learned models to engage in active inference, and infer which action the agent has to take next. This is done by generating imagined trajectories for different policies using $p_{\theta}$ and $p_{\xi}$, calculating the expected free energy $G$, and selecting the action of the policy that yields the lowest $G$. The policies to evaluate can be predefined, generated through random shooting, using the cross-entropy method (Boer et al., 2005), or by building a search tree.

## 4 Experiments

We validate our deep active inference approach on a real-world robot navigation task. First, we collect a dataset consisting of two hours’ worth of real-world action-observation sequences by driving a Kuka Youbot base platform up and down the aisles of a warehouse lab. Camera observations are recorded with a front-mounted Intel Realsense RGB-D camera, without taking into account the depth information. The x, y and angular velocities are recorded as actions at a recording frequency of 10Hz.
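As an illustration of how the objective in Equation 5 can be optimised in practice, here is a minimal PyTorch-style sketch of the loss over a subsequence; the network interfaces mirror the hypothetical ones used earlier and are not the exact released implementation:

```python
import torch
import torch.distributions as D

def free_energy_loss(posterior, prior, likelihood, o_seq, a_seq, s_prev):
    """Eq. (5) summed over a sequence of (action, observation) pairs.

    posterior(s_prev, a_prev, o_t) -> Normal q_phi over s_t   (assumed)
    prior(s_prev, a_prev)          -> Normal p_theta over s_t (assumed)
    likelihood(s_t)                -> distribution p_xi over o_t (assumed)
    """
    loss = torch.zeros(())
    for a_prev, o_t in zip(a_seq, o_seq):
        q = posterior(s_prev, a_prev, o_t)
        p = prior(s_prev, a_prev)
        s_t = q.rsample()  # reparameterisation trick
        nll = -likelihood(s_t).log_prob(o_t).mean()  # reconstruction term
        kl = D.kl_divergence(q, p).mean()            # complexity term
        loss = loss + nll + kl
        s_prev = s_t
    return loss
```

An optimiser such as Adam would then back-propagate through the reparameterised samples to update $\phi$, $\theta$, and $\xi$ jointly.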
The models are trained on a subsampled version of the data, resulting in a train set with data points every 200ms. Next, we instantiate the neural networks $q_{\phi}$ and $p_{\xi}$ as a convolutional encoder and decoder network, and $p_{\theta}$ using an LSTM. These are trained with the Adam optimizer using the objective function from Equation 5 for 1M iterations. We use a minibatch size of 128 and a sequence length of 10 timesteps. A detailed overview of all hyperparameters is given in the appendix.

We utilise the same approach as in Çatal et al. (2020) for our imaginary trajectories and planning. The agent has access to three base policies to pick from: drive straight, turn left, and turn right. Actions from these policies are propagated to the learned models at different time horizons $H=10$, $25$ or $55$. For each resulting imaginary trajectory, the expected free energy $G$ is calculated. Finally, the trajectory with the lowest $G$ is picked, and the first action of the chosen policy is executed, after which the imaginary planning restarts. The robot’s preferences are given by demonstration, using the state distribution of the robot while driving in the middle of the aisle. This should encourage the robot to navigate in the aisles.

At each trial the robot is placed at a random starting position with a random orientation, and tasked to navigate to the preferred position. Figure 2 presents a single experiment as an illustrative example. Figure 2(a) shows the reconstructed preferred observation from the given preferred state, while Figure 2(b) shows the trial’s start state from an actual observation. Figure 2(c) shows the imagined results of either following the policy “always turn right”, “always go straight” or “always turn left”. Figure 2(d) is the result of utilising the planning method explained above. Additional examples can be found in the supplementary material. The robot indeed turns and keeps driving in the middle of the aisle, until it reaches the end and then turns around. (A movie demonstrating the results is available at https://tinyurl.com/smvyk53.) When one perturbs the robot by pushing it, it will again recover and continue to the middle of the aisle.

(a) Preferred state. (b) Start state. (c) Imaginary future trajectories for different policies, i.e. going straight ahead (top), turning right (middle), turning left (bottom). (d) Actually followed trajectory.

Figure 2: Experimental results: Figure (a) shows the target observation in imagined (reconstructed) space. (b) The start observation of the trial. Figure (c) shows different imaginary planning results, whilst (d) shows the actually followed trajectory.

## 5 Conclusion

In this paper we present how we can implement a generative model for active inference using deep neural networks. We show that we are able to successfully execute a simple navigation task on a real-world robot with our approach. As future work we want to allow the robot to continuously learn from past autonomous behaviour, effectively “filling the gaps” in its generative model. Also, how to define the “preferred state” distributions and which policies to evaluate remain open research challenges for more complex tasks and environments.

## References

* Boer et al. (2005) Pieter-Tjerk Boer, Dirk Kroese, Shie Mannor, and Reuven Rubinstein. A tutorial on the cross-entropy method. _Annals of Operations Research_, 134:19–67, 02 2005. doi: 10.1007/s10479-005-5724-z.
* Çatal et al. (2019) Ozan Çatal, Johannes Nauta, Tim Verbelen, Pieter Simoens, and Bart Dhoedt.
Bayesian policy selection using active inference. In _Workshop on “Structure & Priors in Reinforcement Learning” at ICLR 2019 : proceedings_, pp. 9, 2019. * Çatal et al. (2020) Ozan Çatal, Tim Verbelen, Johannes Nauta, Cedric De Boom, and Bart Dhoedt. Learning perception and planning with deep active inference. In _IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Barcelona, Spain_ , pp. In Press, 2020. * Cullen et al. (2018) Maell Cullen, Ben Davey, Karl J. Friston, and Rosalyn J. Moran. Active inference in openai gym: A paradigm for computational investigations into psychiatric illness. _Biological Psychiatry: Cognitive Neuroscience and Neuroimaging_ , 3(9):809 – 818, 2018. ISSN 2451-9022. doi: https://doi.org/10.1016/j.bpsc.2018.06.010. URL http://www.sciencedirect.com/science/article/pii/S2451902218301617. Computational Methods and Modeling in Psychiatry. * Friston (2010) Karl Friston. The free-energy principle: A unified brain theory? _Nature Reviews Neuroscience_ , 11(2):127–138, 2010. ISSN 1471003X. doi: 10.1038/nrn2787. URL http://dx.doi.org/10.1038/nrn2787. * Friston (2012) Karl Friston. A free energy principle for biological systems. _Entropy_ , 14(11):2100–2121, 2012. ISSN 1099-4300. doi: 10.3390/e14112100. URL https://www.mdpi.com/1099-4300/14/11/2100. * Friston et al. (2006) Karl Friston, James Kilner, and Lee Harrison. A free energy principle for the brain. _Journal of Physiology Paris_ , 100(1-3):70–87, 2006. ISSN 09284257. doi: 10.1016/j.jphysparis.2006.10.001. * Friston et al. (2013a) Karl Friston, Philipp Schwartenbeck, Thomas Fitzgerald, Michael Moutoussis, Tim Behrens, and Raymond Dolan. The anatomy of choice: active inference and agency. _Frontiers in Human Neuroscience_ , 7:598, 2013a. ISSN 1662-5161. doi: 10.3389/fnhum.2013.00598. URL https://www.frontiersin.org/article/10.3389/fnhum.2013.00598. * Friston et al. (2013b) Karl Friston, Philipp Schwartenbeck, Thomas FitzGerald, Michael Moutoussis, Timothy Behrens, and Raymond J. Dolan. The anatomy of choice: active inference and agency. _Frontiers in Human Neuroscience_ , 7(September):1–18, 2013b. ISSN 1662-5161. doi: 10.3389/fnhum.2013.00598. URL http://journal.frontiersin.org/article/10.3389/fnhum.2013.00598/abstract. * Friston et al. (2016) Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, John O’Doherty, and Giovanni Pezzulo. Active inference and learning. _Neuroscience & Biobehavioral Reviews_, 68:862 – 879, 2016. ISSN 0149-7634. doi: https://doi.org/10.1016/j.neubiorev.2016.06.022. URL http://www.sciencedirect.com/science/article/pii/S0149763416301336. * Friston et al. (2017) Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, and Giovanni Pezzulo. Active inference: A Process Theory. _Neural Computation_ , 29:1–49, 2017. ISSN 1530888X. doi: 10.1162/NECO˙a˙00912. * Friston (2013) Karl J Friston. Life as we know it. _Journal of the Royal Society Interface_ , 2013. * Friston et al. (2009) Karl J. Friston, Jean Daunizeau, and Stefan J. Kiebel. Reinforcement learning or active inference? _PLOS ONE_ , 4(7):1–13, 07 2009. doi: 10.1371/journal.pone.0006421. URL https://doi.org/10.1371/journal.pone.0006421. * Friston et al. (2014) Karl J Friston, Philipp Schwartenbeck, Thomas F. Fitzgerald, Michael Moutoussis, Timothy W. Behrens, and Raymond J. Dolan. The anatomy of choice: dopamine and decision-making. In _Philosophical Transactions of the Royal Society B: Biological Sciences_ , 2014. * Kingma & Welling (2013) Diederik P. Kingma and Max Welling. 
Auto-encoding variational bayes. _CoRR_, abs/1312.6114, 2013. URL http://arxiv.org/abs/1312.6114.
* Millidge (2019) Beren Millidge. Deep active inference as variational policy gradients. _CoRR_, abs/1907.03876, 2019. URL http://arxiv.org/abs/1907.03876.
* Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Eric P. Xing and Tony Jebara (eds.), _Proceedings of the 31st International Conference on Machine Learning_, volume 32 of _Proceedings of Machine Learning Research_, pp. 1278–1286, Bejing, China, 22–24 Jun 2014. PMLR. URL http://proceedings.mlr.press/v32/rezende14.html.
* Ueltzhöffer (2018) Kai Ueltzhöffer. Deep active inference. _Biological Cybernetics_, 112(6):547–573, Dec 2018. ISSN 1432-0770. doi: 10.1007/s00422-018-0785-7. URL https://doi.org/10.1007/s00422-018-0785-7.

## Appendix A Neural architecture

| Layer | Neurons/Filters | Activation function
---|---|---|---
Posterior | Convolutional | 8 | Leaky ReLU
 | Convolutional | 16 | Leaky ReLU
 | Convolutional | 32 | Leaky ReLU
 | Convolutional | 64 | Leaky ReLU
 | Convolutional | 128 | Leaky ReLU
 | Concat | N.A. | N.A.
 | Linear | 2 x 128 states | Softplus
Likelihood | Linear | 128 x 8 x 8 | Leaky ReLU
 | Convolutional | 128 | Leaky ReLU
 | Convolutional | 64 | Leaky ReLU
 | Convolutional | 32 | Leaky ReLU
 | Convolutional | 16 | Leaky ReLU
 | Convolutional | 8 | Leaky ReLU
Prior | LSTM cell | 400 | Leaky ReLU
 | Linear | 2 x 128 states | Softplus

Table 1: Neural network architectures. All convolutional layers have a 3x3 kernel. The convolutional layers in the Likelihood model have a stride and padding of 1 to ensure that they preserve the input shape. Upsampling is done by nearest neighbour interpolation. The concat step concatenates the processed image pipeline with the vector inputs ${\bm{a}}$ and ${\bm{s}}$.

## Appendix B Hyperparameters

| Parameter | Value
---|---|---
Learning | learning rate | 0.0001
 | batch size | 128
 | train iterations | 1M
 | sequence length | 10
Planning | $\gamma$ | 100
 | D (Çatal et al., 2020) | 1
 | K (Çatal et al., 2020) | 10, 25, 55
 | N (Çatal et al., 2020) | 5
 | $\rho$ (Çatal et al., 2020) | 0.001

Table 2: Overview of the model hyperparameters.

## Appendix C Detailed Planning example

A movie demonstrating the results is available at https://tinyurl.com/smvyk53.

Figure 3: Trial preferred state
Figure 4: Short term planning
Figure 5: Middle long term planning
Figure 6: Long term planning
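For completeness, the receding-horizon planning loop of Section 4 can be summarised in code. This is a schematic sketch reusing the hypothetical `expected_free_energy` helper sketched in Section 3, not the released implementation; the action encoding is likewise illustrative:

```python
def plan_step(prior, likelihood, preferred, s_t,
              base_policies=("straight", "left", "right"),
              horizons=(10, 25, 55)):
    """Evaluate each base policy over several horizons and return the
    first action of the imagined trajectory with the lowest G."""
    best_G, best_action = float("inf"), None
    for action in base_policies:
        for H in horizons:
            policy = [action] * H  # repeat the base action over the horizon
            G = expected_free_energy(prior, likelihood, preferred, s_t, policy)
            if float(G) < best_G:
                best_G, best_action = float(G), action
    return best_action  # execute one step, observe, then replan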
2024-09-04T02:54:57.902017
2020-03-06T14:35:20
2003.03239
{ "authors": "Mutian He, Yangqiu Song, Kun Xu, Dong Yu", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26084", "submitter": "Mutian He", "url": "https://arxiv.org/abs/2003.03239" }
arxiv-papers
# On the Role of Conceptualization in Commonsense Knowledge Graph Construction

Mutian He1, Yangqiu Song1, Kun Xu2, Dong Yu 2 1Hong Kong University of Science and Technology 2Tencent {mhear<EMAIL_ADDRESS> <EMAIL_ADDRESS>

###### Abstract

Commonsense knowledge graphs (CKGs) like Atomic and ASER are substantially different from conventional KGs, as they consist of a much larger number of nodes formed by loosely-structured text. Although this enables them to handle highly diverse queries in natural language related to commonsense, it leads to unique challenges for automatic KG construction methods. Besides identifying relations absent from the KG between nodes, such methods are also expected to explore absent nodes represented by text, in which different real-world things, or entities, may appear. To deal with the innumerable entities involved with commonsense in the real world, we introduce _conceptualization_ to CKG construction, i.e., viewing entities mentioned in text as instances of specific concepts or vice versa. We build synthetic triples by conceptualization, and further formulate the task as triple classification, handled by a discriminatory model with knowledge transferred from pretrained language models and fine-tuned by negative sampling. Experiments demonstrate that our methods can effectively identify plausible triples and expand the KG by triples of both new nodes and edges with high diversity and novelty.

## 1 Introduction

Commonsense knowledge, such as knowing that a trophy could not fit in a suitcase because the trophy is too big, is implicitly acknowledged among human beings through real-life experience rather than systematic learning. As a result, artificial intelligence meets difficulties in capturing such commonsense. To deal with the issue, commonsense knowledge graphs (CKGs) like ConceptNet (Speer et al., 2017), Atomic (Sap et al., 2019), ASER (Zhang et al., 2019a), etc. have been proposed. Such graphs are aimed at collecting and solidifying the implicit commonsense in the form of triples $\langle h,r,t\rangle$, with the head node _h_ and the tail node _t_ connected by a relation (i.e., edge) _r_.

However, a key difference between traditional knowledge graphs (like WordNet, Freebase, etc.) and CKGs is that commonsense is often difficult to represent as two strictly formed nodes connected by a specific relation. Instead, recent approaches represent a node with loosely-structured text, either annotated by humans or retrieved from text corpora. Nodes are then linked by one of some predefined types of relations, such as the triple $\langle$ _h_ : I’m hungry, _r_ : Result, _t_ : I have lunch $\rangle$ in ASER. (Words in ASER are lemmatized, but in this paper we always show the original text for easier understanding.)

| ASER | Atomic
---|---|---
#Nodes | 194.0M | 309.5K
#Triples | 64.4M | 877.1K
#Relation Types | 15 | 9
Average Degree | 0.66 | 5.67
Entity Coverage | 52.33% | 6.98%
Average Distinct Entity | 0.026 | 0.082

Table 1: Statistics for recently proposed commonsense knowledge graphs. Entity Coverage is calculated as the proportion of the top 1% most frequent entities in Probase that are mentioned by nodes in each CKG. Average Distinct Entity is given by the average number of distinct Probase entities per node in each CKG. The core version of ASER is used for these two results.

Such CKGs, storing an exceptionally large number of triples, as shown in Table 1, are capable of representing a much broader range of knowledge and handling flexible queries related to commonsense.
However, the complexity of real-world commonsense is still immense. Particularly, with innumerable eventualities involved with commonsense in the real world, it is prohibitively costly for a CKG to cover all of them as nodes; and even when they are covered, acquiring the corresponding relation between each pair of nodes is of quadratic difficulty. Such a situation is demonstrated by the sparsity of edges in current CKGs in Table 1: even automatic extraction methods, as in ASER, fail to capture edges between most nodes. Therefore, alternative KG construction methods are in need.

Figure 1: A sample for conceptualization in CKG. Given a commonsense reasoning problem like (a), even though the corresponding triple (c) is not in the KG, a triple (b) which is present in the KG could be used through abstraction. This is done by identifying the real-world entities of trophy and brown suitcase in the text of (c), and then substituting them using IsA relations. Following the same idea, (c) can be produced from (b) to be included in the CKG via instantiation.

As nodes are represented by text, utilizing semantic information becomes critical for CKG construction. Such semantic information can be leveraged by large pretrained language models like BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019): These models capture rich semantic knowledge from unsupervised pretraining, which can be transferred to a wide range of downstream tasks to reach impressive results. Therefore, efforts have been made to extract knowledge from such models for KG construction. COMeT (Bosselut et al., 2019) is a notable attempt which fine-tuned GPT on CKG triples to predict tails from heads and relations. However, it is often observed that such generative neural models suffer from a diversity and novelty issue: They tend to overproduce high-probability samples similar to those in the training set, while failing to cover a broader range of possible triples absent in a CKG with diverse entities involved. In contrast, discriminatory methods incorporate language models into KG completion tasks such as link prediction and triple classification to evaluate whether a triple is valid, and could be extended to arbitrary text for triples on various KGs (Malaviya et al., 2019; Davison et al., 2019; Yao et al., 2019). However, it would be computationally infeasible to identify new plausible triples on recent large CKGs if we aimlessly explored new nodes and evaluated each possible triple, without leveraging existent nodes and triples as in generative methods. Therefore, all methods above have certain shortcomings.

Particularly, we observe that current methods miss the variation of real-world entities, which is a critical factor for the diversity of nodes. For example, a CKG may cover the node I eat apple and its relevant edges, but the node I eat banana might be missing or its edges incomplete. As shown in Table 1, a large portion of the most common real-world entities in Probase are never mentioned in recent CKGs like ASER, not to mention edges related to those entities. Moreover, as demonstrated by the low average number of distinct entities per node, directly expanding the scale of CKGs would not be a cost-effective way to cover diverse entities. To relieve this issue, we posit the importance of a specific element of human commonsense, conceptualization, which, though found useful for certain natural language understanding tasks (Song et al., 2011; Wang et al., 2015; Hua et al., 2015), has not been investigated in depth in this area.
As observed by psychologists, “concepts are the glue that holds our mental world together” (Murphy, 2004). Human beings are able to make reasonable inferences by utilizing the IsA relationship between real-world concepts and instances. For example, without knowing what a _floppy disk_ is, given that it is a memory device, people may infer that it may store data and be readable by a computer, etc. From this viewpoint, instead of directly building triples with countless entities, a CKG can be broadly expanded to handle various queries, as shown in Figure 1, by such substitution of instances in text with the corresponding concepts (i.e., abstraction), or vice versa (i.e., instantiation), given an extra CKG of IsA relations.

However, such conceptualization is never strict induction or deduction that is guaranteed to be true. As shown in Figure 2, it is still a challenging task to determine whether a triple built from conceptualization is reasonable, and it requires both the context within the triple and a broader range of commonsense. Such a discriminatory problem can be viewed as a particular case of the well-studied task of KG completion. Therefore, we propose to formulate our problem as a triple classification task, one of the standard tasks for KG completion (Socher et al., 2013). The difference is that, instead of considering arbitrary substitution of the head or tail with existent nodes, we apply conceptualization as described in Section 2.1, and train our model by negative sampling as discussed in Section 2.2. We leverage the rich semantic information with large pretrained language models by fine-tuning them as discriminators. In this way, the models are expected to take triples with arbitrary nodes as inputs and evaluate whether the triple is reasonable.

Figure 2: A sample from Atomic for discriminating conceptualization. Some conceptualizations are valid, like replacing milk with beverage in the tail node, while others, like replacing it with dairy, are invalid in the context: with commonsense, one would often want something to drink after eating cookies, while the general concept of dairy is not relevant in such a scenario, and dairy products are not all drinkable.

To conclude, our contributions are three-fold:

1. We introduce conceptualization to CKG construction to explore a broader range of triples.
2. We formulate conceptualization-based CKG construction as a triple classification task to be performed on synthetic triples.
3. We propose a method for the task by fine-tuning pretrained language models with negative sampling.

Our code and pipeline are available at github.com/mutiann/ccc.

## 2 Methodologies

Our methodologies of CKG construction are based on the idea that, given a set of ground truth triples as seeds, new triples can be built from them by abstraction or instantiation of entities mentioned in the head or tail node, i.e., substituting a mentioned entity with a more general or more specific one, using the particular commonsense of _IsA_ relations. Therefore, we need a CKG, K, viewed as seeds, and a conceptualization KG, C, both denoting a set of triples $\langle h,r,t\rangle$, while in C, _r_ is always _IsA_.

### 2.1 Conceptualization

Since there are diverse ways to conceptualize an entity from commonsense, C must sufficiently cover various real-world entities connected by IsA relations.
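For intuition, the structures assumed throughout this section can be held in memory as plain sets and maps; the toy sketch below is purely illustrative (the example triples are ours, not entries from the actual graphs):

```python
from collections import defaultdict

# K: the seed CKG, a set of <head, relation, tail> triples.
K = {("PersonX eats cookies", "xWant", "to drink milk")}

# C: the conceptualization KG, where every relation is IsA; indexing it in
# both directions makes abstraction (instance -> concept) and
# instantiation (concept -> instance) lookups direct.
abstractions = defaultdict(list)    # entity  -> [more general concepts]
instantiations = defaultdict(list)  # concept -> [more specific instances]
for instance, concept in [("milk", "beverage"), ("milk", "dairy"),
                          ("juice", "beverage")]:
    abstractions[instance].append(concept)
    instantiations[concept].append(instance)
```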
To meet this coverage requirement, we choose Probase (Wu et al., 2012), which is a large-scale KG that consists of 17.9M nodes and 87.6M edges, extracted from various corpora, and has been shown to be suitable for conceptualization (Song et al., 2011). A single entity may have various ways of being abstracted or instantiated, with different typicality. For example, either Linux or BeOS is an operating system, and a pen is either a writing tool or an enclosure for animals, though for both examples the two choices are not equally common. Since Probase is extracted from real corpora, the frequency with which text showing the triple $\langle h,IsA,t\rangle$ appears in the source corpora can demonstrate how common the relation is. Such frequencies $f$ are given by Probase along with each triple, forming 4-tuples of $\langle h,IsA,t,f\rangle$. The frequency information in Probase allows us to balance between IsA relations of different typicality and filter out noise from rare relations in the graph.

Figure 3: A sample of identifying entities. All noun phrases are identified as possible entities on which substitutions are proposed. The phrases include basketball, basketball team, team, player, and professional player, but not professional, which is tagged as an adjective here.

With C prepared, conceptualization can then be performed on any mentioned real-world entity in the head or tail nodes of each triple, which could be a single noun or a noun phrase, serving as a subject, an object, or even a modifier, as shown in Figure 3. What leads to more complexity is that although nodes in C are all real-world entities in some context, the word or phrase could be used in different manners within a triple of interest. Therefore, for raw text as in Atomic, we choose to perform dependency parsing on each node with spaCy (Honnibal and Montani, 2017). Then we identify all nouns or noun phrases that are also present in C as possible candidates. The method to identify entities is given in Algorithm 1: We iterate through each noun $w$ as the root of the entity, and choose all contiguous sequences of words within the range of the subtree corresponding to $w$ in the dependency tree, to ensure that all entities rooted by $w$ (possibly with different modifiers) are collected. We then query the possible abstractions and instantiations of each candidate with Probase, and, for each result, add the substituted text and the corresponding frequency into the list of results. If the text in the CKG is given in its original form (i.e., not lemmatized, unlike ASER), we further use a set of rules to inflect the returned substitution $s$, and to modify the determiner (if any) in the returned text, so as to avoid any false statistical clues of grammatical mistakes introduced by the substitution.
Input:
W $=[w_{1},w_{2},...,w_{n}]$: a node represented by a sequence of words
P $=[p_{1},p_{2},...,p_{n}]$: POS tags of the words in W
D: dependency tree for W
C $=\{\langle x,IsA,y,f\rangle\}$: Probase of IsA relations
Result:
S $=[W^{\prime}_{1},W^{\prime}_{2},...]$: list of substituted word sequences
F $=[f_{1},f_{2},...]$: list of frequencies for each substitution

$S\leftarrow[]$; $F\leftarrow[]$
for $k\in[1,n]$ do
  if $p_{k}\in\{\mathrm{noun,propn}\}$ then
    $T\leftarrow$ subtree of $w_{k}$ in $D$
    $L\leftarrow\mathrm{min}_{x\in T}$ {index of $x$ in $W$}; $R\leftarrow\mathrm{max}_{x\in T}$ {index of $x$ in $W$}
    foreach $(l,r)$ with $L\leq l\leq k\leq r\leq R$ do
      $E\leftarrow[w_{l},...,w_{r}]$
      $A\leftarrow\{(s,f)\,|\,\langle E,IsA,s,f\rangle\in C\}$; $I\leftarrow\{(s,f)\,|\,\langle s,IsA,E,f\rangle\in C\}$
      foreach $(s,f)\in A\cup I$ do
        $S$.add($[w_{1},...,w_{l-1}]+s+[w_{r+1},...,w_{n}]$); $F$.add($f$)
      end foreach
    end foreach
  end if
end for

Algorithm 1: IdentifyConceptualization

### 2.2 Discriminator We aim to build a model capable of evaluating whether a triple, possibly with its head or tail conceptualized, is valid. However, as in the well-studied field of KG completion, we face the difficulty that the evaluation must be learned using only the positive ground truth present in K, except that in our case not only unseen edges but also unseen nodes need to be considered. For such KG completion tasks, it is commonly assumed that the validity of triples not present in K is unknown, and the method of negative sampling is applied. In this way, synthetic triples are built by substitution (a.k.a. corruption) of the head or tail of present triples, often using random nodes in the KG. These triples are viewed as more likely to be invalid than the original ones and labelled as members of a negative set $D_{-}$. In combination with the triples in K as the positive set $D_{+}$, the model can be trained on such pseudo-labelled data in a self-supervised manner and evaluated by the classification accuracy on triples in K held out from $D_{+}$ and the corresponding negative samples generated in the same way (Bordes et al., 2013; Socher et al., 2013; Nickel et al., 2011). Although there could be false negatives, it has been demonstrated that models trained in this way can successfully identify valid triples (though possibly labelled negative) missing from the KG. Moreover, it has been discovered that, instead of uniform sampling, using negative samples similar to valid ones leads to better performance (Cai and Wang, 2018; Wang et al., 2018; Zhang et al., 2019b). Therefore, we further propose to sample the substitution of a node from its conceptualized versions, which might be missing in the original KG. This fits into previous KG completion methods if we view the conceptualized new nodes as isolated nodes. We expect that in this way the model can better evaluate conceptualizations of triples. To generate negative samples, two different settings are applied. 1. Node Substitution (NS): The common corruption method, as in Bordes et al. (2013), that substitutes the head or tail (each with 0.5 probability) with a random node from the KG. For a CKG like Atomic, in which head nodes and tail nodes, as well as tail nodes from triples with different relations, can often be easily distinguished from each other (for instance, the head node in Atomic always starts with Person X, and tail nodes for the relation type xAttr, i.e., attributes of Person X, are often adjectives), we follow Socher et al.
(2013) to pick random heads only from other heads, and random tails only from other tails appearing in triples with the same relation. 2. Entity Conceptualization (EC): To enable the model to identify false triples with inappropriate conceptualization, we randomly choose the head or tail (each with 0.5 probability) and corrupt the node as in Section 2.1 by substituting an entity in the node with its abstraction or instantiation. This method ensures that the substituted nodes are often plausible. We then use the triples with the head or tail substituted as negative samples. In particular, we make use of the frequencies returned by Algorithm 1 as weights (or unnormalized probabilities), based on which we sample from the possible conceptualized nodes, as shown in Algorithm 2. In this way, we strike a balance between the diversity and the typicality of the _IsA_ relations used.

Input:
N: a node, represented by a sequence of words
P: POS tags of the words in N
D: dependency tree for N
C: Probase
Result:
$N^{\prime}$: corrupted node

$S$, $F$ $\leftarrow$ IdentifyConceptualization($N$, $P$, $D$, $C$)
$W\leftarrow F/\sum F$
$k\sim\mathrm{Categorical}(W)$
$N^{\prime}\leftarrow S_{k}$

Algorithm 2: BuildSampleEC

Building negative samples with both settings, we expect the model to be capable of discriminating whether a triple, possibly corrupted in either way, is valid. To reduce noise in training and evaluation, negative samples are filtered to ensure that they differ from all positive samples, which matches the filtered setting of Bordes et al. (2013). To make the best use of the semantic information in the textual descriptions of nodes, we apply widely used transformer-based pretrained language models like BERT as the discriminator. In particular, the structure of our task matches the next sentence prediction (NSP) task in Devlin et al. (2018). As a result, we follow a similar setting that takes pairs of sentences, separated by a special [SEP] token and marked by different token type IDs, as inputs, with _h_ as the first sentence and the concatenation of _r_ and _t_ as the second sentence. Binary classification is then performed by a fully-connected layer taking the final representation corresponding to the [CLS] token, as shown in Figure 4. All parameters in the model, except those in the final fully-connected layer for binary classification, can be initialized from the pretrained model. The model is then fine-tuned using the positive and negative samples mentioned above, with a 1:1 frequency during training, using the binary cross-entropy loss below, based on the output $s$, a scalar after a logistic sigmoid activation indicating the confidence that the input is valid: $L=-\sum_{(x,y)\in D_{+}\cup D_{-}}(y\log s+(1-y)\log(1-s)).$ (1) Figure 4: Architecture of the BERT-based discriminator model. Raw text is fed into the model to predict the binary label y. All layers except the last fully-connected one are pretrained but not frozen. ## 3 Experiments ### 3.1 Datasets Two different datasets, Atomic and ASER, which are typical CKGs using open-form text as nodes, are used in our experiments. Earlier CKGs such as ConceptNet (Speer et al., 2017) are not discussed, since in ConceptNet, unlike the more recent CKGs, only simple text, mostly noun phrases, is used as nodes, and previous work can already reach close-to-human results (Saito et al., 2018). #### 3.1.1 Atomic Atomic is a CKG of 877K triples on cause-and-effect relations between everyday activities (Sap et al., 2019).
Within Atomic, head nodes, or base events, are extracted from text corpora, in the form of some person's action or state (e.g., Person X is on the basketball team). The dataset further categorizes the cause-and-effect relations into nine types, covering the intentions, prerequisites, and impacts with regard to the agent Person X and other people. The tail nodes are then written in open-form text by crowd-sourced workers under these categories. In this way, a broad range of eventualities and their relations are covered by the dataset. We follow the original data split of Atomic in our experiments. #### 3.1.2 ASER ASER (Zhang et al., 2019a) is a large-scale CKG whose nodes represent verb-centric eventualities matching certain patterns (e.g., s-v-o for I love dogs and s-v-v-o for I want to eat an apple) extracted from various corpora. Relations of 15 types between nodes are extracted as well, identified by matching the text with language patterns such as “$E_{1}$, because $E_{2}$” for Reason and “$E_{1}$, $E_{2}$ instead” for ChosenAlternative. In total, 194.0M nodes and 64.4M edges are extracted in ASER. In our experiments, we use the core release of ASER with 27.6M nodes and 10.4M edges. Triples of the Co-Occurrence type and isolated nodes are further removed to create a smaller and cleaner KG with 1.4M nodes and 1.1M edges. Triples are then randomly split into train, dev, and test sets at 8:1:1. ### 3.2 Settings To build our CKG Construction by Conceptualization (CCC) discriminator, we follow the scheme for fine-tuning BERT on downstream tasks (Devlin et al., 2018), and use the pretrained 12-layer BERT-base model on GTX1080 GPUs with 8GB memory. To evaluate the impact of the two different ways of producing negative samples given in Section 2.2, and to trade off between the model's capability of discriminating triples in general and of specifically identifying inappropriate conceptualization, we perform experiments with different percentages of negative samples built by conceptualization, i.e., the EC setting. Specifically, models with 50%, 75%, and 87.5% of negative samples created by EC (and the rest by NS) are trained and reported. For evaluation, negative samples are generated for the dev and test samples as well by both methods, forming the EC and NS dev and test sets with 1:1 positive and negative samples. Under EC, triples with nodes containing no entities to be conceptualized (e.g., I am fine, for which Algorithm 1 returns empty results) are ignored. Nevertheless, 79.65% and 83.56% of the triples in the dev and test sets are collected in the EC set for Atomic and ASER respectively, showing that a majority of triples can be conceptualized. Test results of the models at the best EC dev accuracy are reported. ### 3.3 Baselines We train COMeT (available at github.com/atcbosselut/comet-commonsense) and KG-BERT (available at github.com/yao8839836/kg-bert) on the two datasets as our baselines. In particular, our model degenerates into KG-BERT with 0% EC samples, as the NS setting is what KG-BERT is trained under. Since COMeT itself is not a discriminative model, we use its perplexity per token as the score given to each triple (unlike the case in Malaviya et al. (2019), conceptualization would not significantly change the length of a triple, so we only use the Normalized setting), and use the dev set to find the classification threshold with the best accuracy.
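As an illustration of this threshold selection, the following is a minimal sketch that scans all cut-offs of a perplexity score for the best dev accuracy; the data layout and function name are our own assumptions, not the released pipeline.

```python
import numpy as np

def best_threshold(scores, labels):
    """scores: perplexity per token (lower = more plausible); labels: 1 = valid.
    Returns the cut-off below which triples are predicted valid, and its accuracy."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=int)
    order = np.argsort(s)
    s, y = s[order], y[order]
    n = len(s)
    # candidate thresholds: below the smallest score, midpoints, above the largest
    thresholds = np.concatenate(([s[0] - 1.0], (s[:-1] + s[1:]) / 2, [s[-1] + 1.0]))
    best_acc, best_thr = 0.0, thresholds[0]
    for i, thr in enumerate(thresholds):
        # the i lowest-perplexity triples are predicted valid
        acc = (y[:i].sum() + (1 - y[i:]).sum()) / n
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_thr, best_acc
```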
| Model | ASER EC | ASER NS | Atomic EC | Atomic NS |
|---|---|---|---|---|
| COMeT | 0.6388 | 0.5869 | 0.6927 | 0.5730 |
| KG-BERT | 0.7091 | 0.8018 | 0.7669 | 0.6575 |
| CCC-50 | 0.8716 | 0.7775 | 0.9016 | 0.7840 |
| CCC-75 | 0.8995 | 0.7250 | 0.9221 | 0.7446 |
| CCC-87.5 | 0.9156 | 0.6635 | 0.9355 | 0.6980 |
| CCC-75-scratch | 0.8284 | 0.5587 | 0.8579 | 0.5003 |
| CCC-75-RoBERTa | 0.8999 | 0.6938 | 0.9305 | 0.7350 |

Table 2: Accuracy on the EC and NS test sets for the baselines and our models on the two datasets. CCC denotes our model, with the attached number giving the percentage of EC training samples. ### 3.4 Results #### 3.4.1 Triple Classification Test accuracies of triple classification under both methods are given in Table 2. As shown by the results, COMeT lacks discriminative power, which is consistent with the results in Malaviya et al. (2019). KG-BERT, which has been successfully applied to traditional KGs, produces satisfactory results on CKGs as well, while our methods outperform both baselines by a large margin on the EC tests. Hence, introducing conceptualization during training is demonstrated to be effective for creating a model capable of identifying false conceptualizations. In particular, the percentage of EC samples in training is critical for the trade-off between the EC and NS tasks: an increased EC percentage leads to better EC results, but the NS results drop. The Atomic CCC models reach better results on NS than KG-BERT, which is possibly due to the fact that Atomic nodes are mostly about everyday activities, in contrast to ASER, which covers a broader range of topics. Under EC training, the model therefore sees a more diverse set of nodes, which may help it generalize in the NS test. #### 3.4.2 Ablation Studies We perform ablation studies to examine the importance of pretraining and model selection. With the model trained from scratch on our task without using pretrained parameters, the performance drops significantly, as shown by the CCC-75-scratch results in Table 2. We also attempted to use RoBERTa, an alternative pretrained language model that improves on BERT training and has demonstrated better performance on downstream tasks (Liu et al., 2019). However, the results using the pretrained RoBERTa-base model (CCC-75-RoBERTa) are generally on par with our model using BERT. This may be explained by the facts that BERT is sufficient in our current settings, that RoBERTa uses a larger batch size while the batch size on our GPU is more limited, and that the NSP pretraining task, which exactly matches the input scheme of our task, is used in BERT but absent in RoBERTa.

| Metric | ASER COMeT | ASER CCC-75 | Atomic COMeT | Atomic CCC-75 |
|---|---|---|---|---|
| N/Seed | 10 | 8.28 | 10 | 5.02 |
| Dist-N | 24.68% | 96.57% | 6.49% | 51.26% |
| Dist-1 | 1.62% | 6.30% | 0.63% | 8.34% |
| Dist-2 | 10.56% | 15.45% | 2.87% | 49.76% |
| N/T N | 88.48% | 98.60% | 10.30% | 94.65% |
| N/U N | 93.38% | 99.18% | 69.17% | 96.66% |
| Dist-N-Norm | 10.74% | 84.16% | 4.35% | 46.36% |
| N/T N-Norm | 9.37% | 86.01% | 5.12% | 87.95% |
| N/U N-Norm | 65.72% | 95.02% | 58.71% | 93.01% |

Table 3: Diversity and novelty of the generations; larger values are better. All rows except N/Seed are given in percentage.
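For concreteness, the diversity and novelty metrics of Table 3 can be computed along the following lines. This is only a sketch: the tokenization and the exact normalization convention (here, dividing by the number of produced nodes, following the statement in Section 3.4.3 below) are our reading of the description, not the released evaluation code.

```python
def diversity_novelty(generated, train_nodes):
    """generated: list of generated nodes, each a list of tokens;
    train_nodes: set of training-node strings."""
    texts = [" ".join(toks) for toks in generated]
    n = len(texts)
    distinct_nodes = set(texts)
    distinct_words = {w for toks in generated for w in toks}
    distinct_bigrams = {tuple(toks[i:i + 2]) for toks in generated
                        for i in range(len(toks) - 1)}
    novel = [t for t in texts if t not in train_nodes]   # not seen in training
    return {
        "Dist-N": len(distinct_nodes) / n,
        "Dist-1": len(distinct_words) / n,
        "Dist-2": len(distinct_bigrams) / n,
        "N/T N": len(novel) / n,
        "N/U N": len(set(novel)) / len(distinct_nodes),
    }
```

The same function, applied after stripping determiners, auxiliary verbs, and pronouns from each node, would yield the -Norm variants of the metrics.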
| | head | tail |
|---|---|---|
| Seed | another promises him a scholarship | his parents own a successful business |
| COMeT | – | he never gets it; he does not get it; he never gets one; he could not pay it; he does not receive it |
| CCC-75 | another promises him a grant; another promises him an award | his parents own a successful shop; his parents own a successful bank; his parents own a successful hotel |

Table 4: Sample ASER generations given the seed. In this sample the head and tail are connected by the relation of Concession, i.e., although. #### 3.4.3 Generations We generate triples using the test sets of ASER and Atomic as seeds, both by COMeT with 10-beam search and by the CCC-75 model via conceptualization, and apply various metrics to the results, as shown in Table 3. Both methods may produce a large number of triples, as given by the number of generations per seed, N/Seed. For diversity, we report Dist-1 (number of distinct words per node), Dist-2 (number of distinct bigrams per node), and Dist-N (number of distinct nodes per node). Due to the different numbers of generated triples, the results are all normalized by the number of nodes. (Results from Bosselut et al. (2019) of test perplexity are reproduced in our experiments; differences on the diversity metrics are due to the fact that we use 10-beam search instead, for a fair comparison.) Novelty is measured by N/T N, the proportion of novel nodes, i.e. those not present in the training set, among all produced nodes, and N/U N, the proportion of novel distinct nodes among all distinct nodes. Moreover, since generative methods may produce nodes of essentially the same meaning with slight changes in form, we also normalize the produced nodes by removing structural words like determiners, auxiliary verbs, pronouns, etc. We then report the metrics above applied to generations after such normalization, denoted as Dist-N-Norm, N/T N-Norm, and N/U N-Norm, respectively. Furthermore, samples of generations by both models are shown in Table 4. The diversity results clearly demonstrate that a majority of generations by COMeT are similar to each other given a certain head node and relation, as the numbers of distinct nodes, words, and bigrams are all relatively low. The novelty results further show that the generated nodes are often similar to those seen in the training set as well. It can be particularly observed that, although the original diversity and novelty metrics appear acceptable, which is consistent with Bosselut et al. (2019), the results drop sharply when the generations are normalized. This indicates that COMeT may produce slightly different nodes paraphrasing each other, as shown in Table 4, where four of the five generated tails are similar to each other (saying he does not get it). This is not the case for CCC: the generated nodes mostly discuss different entities, and the results are thus often diverse and novel. ## 4 Related Work Automatic construction of structured KGs is a well-studied task, and a number of learning-based methods have been proposed, including KG embedding methods based on translational distances (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015; Wang et al., 2014; Shang et al., 2019) and semantic matching (Nickel et al., 2011; Socher et al., 2013; Yang et al., 2014; Trouillon et al., 2016), typically trained by negative sampling techniques and applied to tasks like link prediction and triple classification.
Furthermore, graph neural networks can be used to better capture structural information (Schlichtkrull et al., 2018), GANs are applied to improve negative sampling by mining more difficult examples (Cai and Wang, 2018; Wang et al., 2018; Zhang et al., 2019b), and textual information from the nodes can be leveraged (Wang and Li, 2016; Xie et al., 2016; Xiao et al., 2017; An et al., 2018). Textual information is more critical on CKGs with nodes carrying complicated eventualities, often in open-form text. Therefore, Li et al. (2016) proposed to score ConceptNet triples by neural sequence models taking text inputs so as to discover new triples, while Saito et al. (2018) and Sap et al. (2019) further proposed to generate tail nodes by a sequence-to-sequence LSTM model with head and relation as inputs. Recently, powerful large pretrained models like BERT and GPT-2 have been proposed (Devlin et al., 2018; Radford et al., 2019), from which, as observed by Trinh and Le (2018) and Radford et al. (2019), rich knowledge including commonsense can be extracted. Therefore, different ways of KG construction have been introduced on such models as downstream tasks: in KG-BERT, BERT was fine-tuned for KG completion tasks like link prediction and triple classification (Yao et al., 2019); COMeT used GPT-based models to generate tails (Bosselut et al., 2019); LAMA directly predicted masked words in triples on various KGs by BERT (Petroni et al., 2019); Davison et al. (2019) considered both the generation of new tails and the scoring of given triples; Malaviya et al. (2019) utilized both structural and semantic information for CKG construction on link prediction tasks. ## 5 Conclusion We introduce conceptualization to commonsense knowledge graph construction and propose a novel method for the task, generating new triples by conceptualization and examining them with a discriminator transferred from pretrained language models. Future studies will focus on strategies of conceptualization and its role in natural language and commonsense by deep learning approaches. ## References * An et al. (2018) Bo An, Bo Chen, Xianpei Han, and Le Sun. 2018. Accurate text-enhanced knowledge graph representation learning. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 745–755, New Orleans, Louisiana. Association for Computational Linguistics. * Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In _Proceedings of the 26th International Conference on Neural Information Processing Systems_, pages 2787–2795. * Bosselut et al. (2019) Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 4762–4779, Florence, Italy. Association for Computational Linguistics. * Cai and Wang (2018) Liwei Cai and William Yang Wang. 2018. KBGAN: Adversarial learning for knowledge graph embeddings. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 1470–1480, New Orleans, Louisiana. Association for Computational Linguistics. * Davison et al.
(2019) Joe Davison, Joshua Feldman, and Alexander Rush. 2019. Commonsense knowledge mining from pretrained models. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 1173–1178, Hong Kong, China. Association for Computational Linguistics. * Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. _Computing Research Repository_, arXiv:1810.04805. * Honnibal and Montani (2017) Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. * Hua et al. (2015) Wen Hua, Zhongyuan Wang, Haixun Wang, Kai Zheng, and Xiaofang Zhou. 2015. Short text understanding through lexical-semantic analysis. In _2015 IEEE 31st International Conference on Data Engineering_, pages 495–506. * Ji et al. (2015) Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 687–696, Beijing, China. Association for Computational Linguistics. * Li et al. (2016) Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1445–1455, Berlin, Germany. Association for Computational Linguistics. * Lin et al. (2015) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In _Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence_, pages 2181–2187. * Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. _Computing Research Repository_, arXiv:1907.11692. * Malaviya et al. (2019) Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. 2019. Commonsense knowledge base completion with structural and semantic context. _Computing Research Repository_, arXiv:1910.02915. Version 2. * Murphy (2004) Gregory Murphy. 2004. _The big book of concepts_. MIT press, Cambridge, MA. * Nickel et al. (2011) Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In _Proceedings of the 28th International Conference on Machine Learning_, pages 809–816. * Petroni et al. (2019) Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. * Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. * Saito et al. (2018) Itsumi Saito, Kyosuke Nishida, Hisako Asano, and Junji Tomita.
2018. Commonsense knowledge base completion and generation. In _Proceedings of the 22nd Conference on Computational Natural Language Learning_, pages 141–150, Brussels, Belgium. Association for Computational Linguistics. * Sap et al. (2019) Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In _Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence_, pages 3027–3035. * Schlichtkrull et al. (2018) Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In _The Semantic Web – 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, Proceedings_, pages 593–607. * Shang et al. (2019) Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. In _Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence_, pages 3060–3067. * Socher et al. (2013) Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In _Proceedings of the 26th International Conference on Neural Information Processing Systems_, pages 926–934. * Song et al. (2011) Yangqiu Song, Haixun Wang, Zhongyuan Wang, Hongsong Li, and Weizhu Chen. 2011. Short text conceptualization using a probabilistic knowledgebase. In _Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence_, pages 2330–2336. * Speer et al. (2017) Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In _Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence_, pages 4444–4451. * Trinh and Le (2018) Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. _Computing Research Repository_, arXiv:1806.02847. * Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In _Proceedings of the 33rd International Conference on Machine Learning_, pages 2071–2080. * Wang et al. (2018) Peifeng Wang, Shuangyin Li, and Rong Pan. 2018. Incorporating GAN for negative sampling in knowledge representation learning. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence_, pages 2005–2012. * Wang et al. (2014) Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In _Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence_, pages 1112–1119. * Wang and Li (2016) Zhigang Wang and Juanzi Li. 2016. Text-enhanced representation learning for knowledge graph. In _Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence_, pages 1293–1299. * Wang et al. (2015) Zhongyuan Wang, Kejun Zhao, Haixun Wang, Xiaofeng Meng, and Ji-Rong Wen. 2015. Query understanding through knowledge-based conceptualization. In _Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence_, pages 3264–3270. * Wu et al. (2012) Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Q Zhu. 2012. Probase: A probabilistic taxonomy for text understanding.
In _Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data_, pages 481–492. * Xiao et al. (2017) Han Xiao, Minlie Huang, Lian Meng, and Xiaoyan Zhu. 2017. SSP: Semantic space projection for knowledge graph embedding with text descriptions. In _Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence_, pages 3104–3110. * Xie et al. (2016) Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In _Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence_, pages 2659–2665. * Yang et al. (2014) Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. _Computing Research Repository_, arXiv:1412.6575. * Yao et al. (2019) Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. _Computing Research Repository_, arXiv:1909.03193. * Zhang et al. (2019a) Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2019a. ASER: A large-scale eventuality knowledge graph. _Computing Research Repository_, arXiv:1905.00270. * Zhang et al. (2019b) Yongqi Zhang, Quanming Yao, Yingxia Shao, and Lei Chen. 2019b. NSCaching: Simple and efficient negative sampling for knowledge graph embedding. In _2019 IEEE 35th International Conference on Data Engineering_, pages 614–625.
2024-09-04T02:54:57.912432
2020-03-06T14:45:46
2003.03243
{ "authors": "M. Redies, F. R. Lux, J.-P. Hanke, P. M. Buhl, S. Bl\\\"ugel, Y.\n Mokrousov", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26085", "submitter": "Matthias Redies", "url": "https://arxiv.org/abs/2003.03243" }
arxiv-papers
# Mixed topology ring states for Hall effect and orbital magnetism in skyrmions of Weyl semimetals

M. Redies<EMAIL_ADDRESS>Peter Grünberg Institut and Institute for Advanced Simulation, Forschungszentrum Jülich and JARA, 52425 Jülich, Germany Department of Physics, RWTH Aachen University, 52056 Aachen, Germany F. R. Lux Peter Grünberg Institut and Institute for Advanced Simulation, Forschungszentrum Jülich and JARA, 52425 Jülich, Germany Department of Physics, RWTH Aachen University, 52056 Aachen, Germany J.-P. Hanke Institute of Physics, Johannes Gutenberg-University Mainz, 55099 Mainz, Germany P. M. Buhl Institute of Physics, Johannes Gutenberg-University Mainz, 55099 Mainz, Germany S. Blügel Peter Grünberg Institut and Institute for Advanced Simulation, Forschungszentrum Jülich and JARA, 52425 Jülich, Germany Y. Mokrousov<EMAIL_ADDRESS>Peter Grünberg Institut and Institute for Advanced Simulation, Forschungszentrum Jülich and JARA, 52425 Jülich, Germany Institute of Physics, Johannes Gutenberg-University Mainz, 55099 Mainz, Germany

###### Abstract While skyrmion lattices are attracting increasing attention owing to their properties driven by real-space topology, magnetic Weyl semimetals with complex $k$-space topology are likewise moving into the focus of research. We consider Hall transport properties and orbital magnetism of skyrmion lattices imprinted in topological semimetals, by employing a minimal model of a mixed Weyl semimetal which, as a function of the magnetization direction, exhibits two Chern insulator phases separated by a Weyl state. We find that while the orbital magnetization is topologically robust and the Hall transport properties exhibit a behavior consistent with that expected for the recently discovered chiral Hall effect Lux _et al._ (2020), their evolution in the region of the Chern insulator gap is largely determined by the properties of the so-called mixed topology ring states, emerging in domain walls that separate the skyrmion core from the ferromagnetic background. In particular, we show that these localized ring states possess a robust orbital chirality which reverses sign as a function of the skyrmion radius, thereby mediating a smooth switching dynamics of the orbital magnetization. We speculate that while the emergent ring states can possibly play a role in the physics of Majorana states, probing their properties experimentally can provide insights into the details of skyrmionic spin structures. ## I Introduction Topological chiral spin textures such as magnetic skyrmions have established themselves as an exciting platform for the realization of novel physical effects and for innovative ideas in the realm of practical applications of magnetic systems Lai _et al._ (2017); Zhang _et al._ (2015a); Tomasello _et al._ (2015); Zhang _et al._ (2020a); Zázvorka _et al._ (2019); Bourianoff _et al._ (2018); Pinna _et al._ (2019); Zhang _et al._ (2015b); Luo _et al._ (2018). On the other side of the topology scale, magnetic materials which exhibit non-trivial $k$-space topology in the electronic structure $-$ such as quantum anomalous Hall insulators Deng _et al._ (2020); Chang _et al._ (2013), or antiferromagnetic topological insulators Otrokov _et al._ (2019); Niu _et al._ (2020) $-$ are at the heart of current research in solid state physics and spintronics.
Bringing together the benefits of skyrmions, such as their efficient dynamics and stability, with the advantages of $k$-topological materials, normally associated with dissipationless transport and topological protection, appears to be an exciting avenue to pursue. To date, however, skyrmions have been mostly realized in metallic ferromagnetic materials that do not exhibit a distinct non-trivial $k$-space topology Miao _et al._ (2014); Münzer _et al._ (2010). And while the interest in skyrmions realized in insulating materials is rising Zhang _et al._ (2020b); Kong and Zang (2013); Onose _et al._ (2012), the emergence of global Chern insulating states in skyrmion lattices has recently been shown theoretically Lado and Fernández-Rossier (2015); Hamamoto _et al._ (2015); Göbel _et al._ (2018, 2017, 2019). At the same time, another flavor of $k$-space topology, exhibited by magnetic Weyl semimetals, is gaining increasing attention, and the number of specific material candidates which exhibit distinct topological band crossings in their electronic structure is constantly growing Wang _et al._ (2017); Liu _et al._ (2019). In three dimensions, Weyl semimetals host band crossings known as Weyl nodes, which appear in pairs and carry opposite non-zero topological charges. Recently, it was suggested that in two dimensions (2D) it is natural to interpret the emergence of topological band crossings in the context of mixed Weyl points, emphasizing that their topology and properties can be classified most naturally by including the magnetization direction $\hat{\mathbf{m}}$ and the mixed components of the Berry curvature tensor into the topological analysis Hanke _et al._ (2017a); Niu _et al._ (2019). The 2D mixed Weyl semimetals thus behave in many aspects similarly to the 3D Weyl semimetals in $(\mathbf{k},\hat{\mathbf{m}})$-space. Recently, it was realized that the presence of Weyl points in spin textures in 2D or 3D Weyl semimetals can have a drastic effect on their magneto-electric properties, orbital magnetism, and dynamics, see e.g. Refs. [Araki, 2020; Araki and Nomura, 2018; Lux _et al._ , 2018; Hanke _et al._ , 2017a; Niu _et al._ , 2019]. In this work, by performing explicit tight-binding calculations of skyrmion lattices imprinted into a 2D Weyl semimetal, we attempt to understand the role that the complex mixed topology can have on the Hall transport properties and orbital magnetism in these systems. Our main finding is the demonstration that the properties of skyrmions of mixed Weyl semimetals, whose chemical potential resides in the vicinity of the Weyl points, are largely determined by so-called ring states, which are localized at the skyrmion boundary and carry an orbital moment of a specific orbital chirality. While we discuss how the transport properties and orbital magnetization in these systems can be used to gain insights into the details of the spin distribution, we also consider a range of phenomena where ring states can be utilized for shaping skyrmion dynamics and for mediating the emergence of novel topological phases. This article is structured as follows. In Sec. II, we describe the tight-binding model and parameters used in this paper. The details of the setup and transport calculations are given in Sec. III. In Sec. IV we present and discuss the results of our calculations, providing an outlook in Sec. V.
## II Model In order to assess the transport properties of 2D skyrmions in mixed Weyl semimetals we choose the tight-binding model such that the underlying electronic structure of the ferromagnetic host exhibits a mixed Weyl point in the extended phase space of the magnetization direction and $k$-space. A similar model has been used to study the topological properties of ferromagnetic mixed Weyl semimetals in the past Hanke _et al._ (2017a). The tight-binding Hamiltonian of this model on a honeycomb lattice with two structurally inequivalent atoms per unit cell and two spin-split orbitals per site reads: $\begin{split}H&=\,\lambda\sum_{i\epsilon\zeta}\left(\hat{\mathbf{m}}_{i}\cdot\boldsymbol{\sigma}\right)_{\epsilon\zeta}c_{i\epsilon}^{\dagger}c_{i\zeta}\\\ -t\sum_{\left<ij\right>\epsilon}c_{i\epsilon}^{\dagger}c_{j\epsilon}&+it_{\text{so}}\sum_{\left<ij\right>\epsilon\zeta}\left[\hat{\mathbf{e}}_{z}\cdot\left(\boldsymbol{\sigma}\times\hat{\mathbf{d}}_{ij}\right)\right]_{\epsilon\zeta}c_{i\epsilon}^{\dagger}c_{j\zeta}.\end{split}$ (1) The properties of this model of magnetic graphene with Rashba spin-orbit interaction have been extensively studied in the past for collinear ferromagnetic and antiferromagnetic cases Castro _et al._ (2008); Matte _et al._ (2009); Qiao _et al._ (2010). In Eq. (1), $t$ is the magnitude of the nearest-neighbour hopping, $t_{\text{so}}$ is the magnitude of the Rashba-like spin-orbit coupling Bychkov and Rashba (1984), and $\lambda$ characterizes the magnitude of the Stoner exchange splitting Stoner (1936, 1938). In the above expression the indices $i$ and $j$ run over nearest-neighbor atoms, while $\epsilon$ and $\zeta$ mark the spin channel. Further, $\hat{\mathbf{e}}_{z}$ is the unit vector in the $z$-direction (out of the two-dimensional plane), while $\hat{\mathbf{d}}_{ij}$ is the unit vector connecting atom sites $i$ and $j$. The operators $c_{i\epsilon}^{\dagger}$ and $c_{i\epsilon}$ are the creation and annihilation operators of an electron on site $i$ with spin $\epsilon$. The vector of Pauli matrices is denoted as $\boldsymbol{\sigma}$, while $\hat{\mathbf{m}}_{i}$ is the unit vector of the magnetization direction at atom $i$, which generally varies in real space when a spin texture is present. Throughout this work, we choose $t=-1.0$ eV, $t_{so}=0.4$ eV and $\lambda=1.4$ eV in order to realize a mixed Weyl point in the electronic structure for the ferromagnetic case when all $\hat{\mathbf{m}}_{i}$ are aligned along a single direction $\hat{\mathbf{m}}$. The bandstructure of the model for the out-of-plane direction of the ferromagnetic magnetization with these parameters is shown in Fig. 1(a). The emergence of the mixed Weyl point upon varying the magnetization direction is shown in Fig. 1(c). The topologically non-trivial character of this crossing point can be shown by computing the flux of the Berry curvature tensor with components in $k$\- and $\theta$-space, which quantifies the mixed topological charge of the mixed Weyl point to be $+2$ for $\theta=\pi/2$, and $-2$ for $\theta=-\pi/2$ Niu _et al._ (2019). Figure 1: (a) The bandstructure of the ferromagnetic model with an out-of-plane magnetization. The green dashed line marks the region $[-0.47,-0.46]$ eV. (b) The bandstructure of a skyrmion with the parameterization described in the text. The red-shaded area in (a) and (b) indicates the band gap region of about 0.6 eV of the ferromagnetic model. (c) A section of the bandstructure is shown for different angles $\theta$ that the magnetization in the ferromagnetic model makes with the $z$-axis.
(d) The magnetization is characterized by the polar angle $\theta$ and the azimuthal angle $\phi$. The skyrmion is placed in the $xy$-plane. (e,f) The Hall conductance (e) and orbital magnetization (f) of the ferromagnetic model with an out-of-plane magnetization (along $+z$) as a function of band filling. Red shaded areas coincide with the band gap marked in (a). All plots except for (b) and (d) are for $\phi=0$. ## III Computational approach Similarly to our previous work on the transport properties of chiral bobbers Redies _et al._ (2019), we calculate the Hall conductance and orbital magnetization of the system by employing the $k$-space Berry curvature formalism, with the only non-vanishing component of the Berry curvature tensor in $k$-space being $\Omega_{xy}^{n}=-2\imaginary\innerproduct{\frac{\partial u_{n\bm{k}}}{\partial k_{x}}}{\frac{\partial u_{n\bm{k}}}{\partial k_{y}}},$ (2) where $u_{n\bm{k}}$ is the lattice-periodic part of the Bloch wave function of the band $n$ for a Bloch vector $\bm{k}$. The intrinsic Hall conductance (in units of $e^{2}/h$) is then evaluated as the Brillouin zone integral of the Berry curvature of all occupied states Blügel _et al._ (2017) $\sigma_{xy}=\sum_{n}^{occ}\int\limits_{\rm BZ}\Omega_{xy}^{n}(\bm{k})d\bm{k}.$ (3) For an insulator the Hall conductance is quantized and proportional to an integer Chern number $\mathcal{C}:=\int_{S}\Omega_{xy}dS/2\pi=-\sigma_{xy}/2\pi$. We calculate the out-of-plane component of the orbital magnetization (OM) in the system, $M_{orb}$, employing the modern theory of orbital magnetism Thonhauser _et al._ (2005); Xiao _et al._ (2005); Thonhauser (2011), according to which $\begin{split}M_{orb}=&M^{LC}_{orb}+M^{IC}_{orb}=\frac{e}{2\hbar c}\imaginary\int\limits_{\rm BZ}\frac{d\bm{k}}{(2\pi)^{2}}\\\ &\sum_{n}^{occ.}\matrixelement{\frac{\partial u_{n\bm{k}}}{\partial k_{x}}}{\times\left(H_{\bm{k}}+\varepsilon_{n\bm{k}}-2\mu\right)}{\frac{\partial u_{n\bm{k}}}{\partial k_{y}}},\end{split}$ (4) where $M^{LC}_{orb}$ and $M^{IC}_{orb}$ are the local-circulation and itinerant-circulation parts of the OM, respectively, $\varepsilon_{n\bm{k}}$ is the energy of the $n$-th Bloch state at $\bm{k}$, and $H_{\bm{k}}$ is the lattice-periodic part of the Hamiltonian, $H_{\bm{k}}:=e^{-i\bm{k}\bm{r}}He^{i\bm{k}\bm{r}}$. In order to compute the derivatives of the Bloch states in the tight-binding basis we employ first-order perturbation theory: $\ket{\frac{\partial u_{n\bm{k}}}{\partial k_{i}}}=\sum_{m\neq n}\frac{\matrixelement{u_{m\bm{k}}}{\nicefrac{{\partial H_{\bm{k}}}}{{\partial k_{i}}}}{u_{n\bm{k}}}}{\varepsilon_{n\bm{k}}-\varepsilon_{m\bm{k}}+i\eta}\ket{u_{m\bm{k}}},$ (5) which results in the well-known gauge-invariant expressions for the Berry curvature and OM Yao _et al._ (2004); Thonhauser (2011). In order to avoid divergences for (nearly) degenerate states, a broadening $\eta=10^{-8}\mbox{ Hartree}$ is introduced. The calculated Hall conductance (HC) and OM of the model as a function of band filling for the out-of-plane magnetization are shown in Fig. 1(e) and (f), respectively. As evident from these plots, at half filling the system is a Chern insulator with a Chern number of $-2$. Therefore, the variation of the OM in the topologically non-trivial gap is linear in the chemical potential $\mu$, according to the relation valid for insulators Thonhauser (2011): $\frac{dM_{orb}}{d\mu}=\frac{e}{(2\pi)^{2}\hbar c}\,\sigma_{xy},$ (6) where $e$ is the electron charge and $c$ is the speed of light.
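To make the model and this computational approach concrete, the following is a minimal numerical sketch of the ferromagnetic Bloch Hamiltonian of Eq. (1) and of a lattice evaluation of the Chern number. The nearest-neighbour distance, the choice of bond vectors, and the basis ordering are our own conventions, and the Chern number is obtained here with the standard link-variable (Fukui-Hatsugai-Suzuki) method rather than the adaptive integration of Eqs. (2)-(3) used in the text; the overall sign of the result depends on such conventions.

```python
import numpy as np

# Pauli matrices and identity in spin space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

t, t_so, lam = -1.0, 0.4, 1.4   # hopping, Rashba SOC, exchange (eV), as in the text
a = 1.0                          # nearest-neighbour distance (our choice of units)

# vectors from an A site to its three B neighbours, and honeycomb lattice vectors
deltas = a * np.array([[0.0, 1.0],
                       [np.sqrt(3) / 2, -0.5],
                       [-np.sqrt(3) / 2, -0.5]])
a1 = np.array([np.sqrt(3) * a, 0.0])
a2 = np.array([np.sqrt(3) * a / 2, 1.5 * a])

def bloch_hamiltonian(k, m):
    """4x4 Bloch Hamiltonian of Eq. (1) for a collinear magnetization m,
    in the (assumed) basis ordering (A,up), (A,dn), (B,up), (B,dn)."""
    hop = np.zeros((2, 2), dtype=complex)
    for d in deltas:
        dhat = d / np.linalg.norm(d)
        # -t hopping plus the Rashba term  i t_so  e_z . (sigma x d_hat)
        hop += np.exp(1j * k @ d) * (-t * s0 + 1j * t_so * (sx * dhat[1] - sy * dhat[0]))
    exch = lam * (m[0] * sx + m[1] * sy + m[2] * sz)   # Stoner exchange  lambda m.sigma
    H = np.zeros((4, 4), dtype=complex)
    H[:2, :2] = exch
    H[2:, 2:] = exch
    H[:2, 2:] = hop
    H[2:, :2] = hop.conj().T
    return H

def chern_number(m, n_occ=2, N=60):
    """Chern number of the lowest n_occ bands via the lattice link-variable
    method, on an N x N grid spanning one Brillouin zone."""
    B = 2 * np.pi * np.linalg.inv(np.array([a1, a2])).T   # reciprocal vectors (rows)
    u = np.empty((N + 1, N + 1, 4, n_occ), dtype=complex)
    for i in range(N + 1):
        for j in range(N + 1):
            k = (i / N) * B[0] + (j / N) * B[1]
            _, vec = np.linalg.eigh(bloch_hamiltonian(k, m))
            u[i, j] = vec[:, :n_occ]                      # occupied eigenvectors

    def link(ua, ub):            # U(1) link from the occupied-subspace overlap
        det = np.linalg.det(ua.conj().T @ ub)
        return det / abs(det)

    flux = 0.0
    for i in range(N):
        for j in range(N):       # Berry flux through each plaquette
            flux += np.angle(link(u[i, j], u[i + 1, j]) * link(u[i + 1, j], u[i + 1, j + 1])
                             * link(u[i + 1, j + 1], u[i, j + 1]) * link(u[i, j + 1], u[i, j]))
    return flux / (2 * np.pi)

# chern_number(m=[0, 0, 1]) should give a Chern number of magnitude 2 at half
# filling (cf. Fig. 1(e)), with the sign flipping for m = [0, 0, -1].
```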
The emergence of the metallic point in the spectrum of the model is due to the change in the Chern number from $-2$ to $+2$ at half filling upon reversing the direction of $\hat{\mathbf{m}}$. Based on the model (1) we imprint the skyrmion lattice by varying the direction of $\hat{\mathbf{m}}_{i}$ in real space according to the parametrization of the skyrmions discussed below. We consider a hexagonal lattice of skyrmions where in the center of each unit cell we place a Néel skyrmion. Skyrmions of this type can be stabilized experimentally in the presence of a small external magnetic field, the effect of which on the electronic properties we do not take into account. The chemical unit cell contains two atoms, which are located at $(0,\pm a,0)^{T}$. The lattice vectors of the chemical unit cell are $(l_{\text{chem}},0,0)^{T}$ and $l_{\text{chem}}\cdot(\frac{1}{2},\frac{\sqrt{3}}{2},0)^{T}$, where $l_{\text{chem}}=\sqrt{3}a$ and $a$ is the distance between the inequivalent atoms in the chemical unit cell. The unit cell of the skyrmion lattice is hexagonal and contains 1568 atoms. The lattice vectors of the supercell are $(l_{\text{mag}},0,0)^{T}$ and $l_{\text{mag}}\cdot(\frac{1}{2},\frac{\sqrt{3}}{2},0)^{T}$, where $l_{\text{mag}}=28\,l_{\text{chem}}$. This corresponds to 28 chemical unit cells between the centers of neighboring skyrmions and, accordingly, 1568 atoms in the unit cell. We use an adaptive integration scheme to evaluate the integrals in Eqs. (3) and (4). This leads to the convergence of the results presented in Figs. 3 and 4 with 6144 $k$-points, and of the results presented in Fig. 1(e,f) with 16042 $k$-points. The full source code is available as open source Redies (2020). ## IV Results ### IV.1 Emergence of mixed topology ring states in skyrmions of mixed Weyl semimetals Here, we analyze the electronic structure of a skyrmion lattice of a mixed Weyl semimetal at half filling, and in the following the zero of energy is associated with the position of the Fermi energy for the case when exactly two electronic states per structural unit cell are occupied. For the ferromagnetic system this corresponds to the position of the metallic Weyl point in the spectrum when the magnetization is in-plane, see Fig. 1(c). Correspondingly, in the limit of very large skyrmions, when the regions with homogeneous out-of-plane magnetization in the skyrmion center and in-between the skyrmions are large, we expect the emergence of electronic states around zero energy. These states are expected to be localized in the region where the magnetization lies in-plane, i.e., within the domain wall separating the skyrmion center from the outside region. Figure 2: Emergence of mixed topology ring states (MTRS) in a skyrmion lattice of a mixed Weyl semimetal with skyrmion radius of $R_{\text{max}}=\frac{1}{2}\,l_{\rm mag}$. (a-c) The local density of states (LDOS) in real space integrated over the energy range from $-$0.3 to $+$0.3 eV, corresponding to the gap of the ferromagnetic system in Fig. 1(a,b). The magnitude of the integrated LDOS is indicated in the color bar. The arrows to the right indicate the spin direction along the orange line in the $(xy)$-plane. Here the dark red (blue) arrows indicate the magnetization direction $\mathbf{m}=+\hat{\mathbf{e}}_{z}$ ($\mathbf{m}=-\hat{\mathbf{e}}_{z}$). In (a) $d=0.2$, in (b) $d=0.5$ and in (c) $d=0.8$. The LDOS marks the formation of MTRS. (d) The LDOS integrated in the energy range $[-0.47,-0.46]\mbox{ eV}$, away from the gap, marked in green in Fig. 1(a). The magnetic structure is identical to (c). In this energy region the MTRS are absent.
In the center of the skyrmion $\mathbf{m}=+\mathbf{e}_{z}$, while $\mathbf{m}=-\mathbf{e}_{z}$ at the edges of the unit cell. We verify this by performing explicit calculations of the electronic structure of the skyrmion lattice with the model described above. The Néel skyrmions of radius $R$ (with the maximal value being $R_{max}=l_{\rm mag}/2$) are parameterized by the angle $\alpha$ that a spin at the distance $r$ from the center of the skyrmion makes with the $z$-axis, while the parameter $d$ is introduced to tune the width of the domain wall region where the spins lie in-plane, so that a larger $d$ results in a larger region of “flat” magnetization (see examples in Fig. 2(a-c) for different values of $d$). Note that the rate of change of the magnetization outside of the in-plane regions does not change. We use this parametrization to artificially tune the internal width of the states which emerge in the “flat” region. Below we consider the case of $R=R_{max}/2$ when introducing the parameterization in terms of $r$ and $d$ as follows: $\alpha(r,d)=\pi\frac{\beta(r/R_{max},d)-\beta(1,d)}{\beta(0,d)-\beta(1,d)}$, with $\beta(x,d)=\operatorname{atan}\left(R_{max}(6x-3d-3)\right)+\operatorname{atan}\left(R_{max}(6x+3d-3)\right)$. As we show below, this parameterization allows us to manipulate the spatial spread of states associated with the transition between the two out-of-plane domains. In order to analyze the spatial localization of the electronic states, we compute the space-resolved local density of states (LDOS), presenting the results in Fig. 2. Namely, we look at the LDOS of the states which appear in the electronic structure of the skyrmion in the gap of the out-of-plane ferromagnetic system between $-$0.3 and $+$0.3 eV, see Fig. 1(a,b). In Fig. 2(a-c) the spatial distribution of the skyrmion's LDOS, which has been integrated in this energy region, is shown for different widths of the domain wall as controlled by the parameter $d$. Quite remarkably, our calculations show that, irrespective of the domain wall width, the LDOS vanishes exactly outside of the domain wall $-$ i.e., in the center of the skyrmion, and in the region between the skyrmions. This is in sharp contrast to all other states of the skyrmion lattice which lie outside of the gap in the “metallic” region: as an example, in Fig. 2(d) the LDOS of the system, which has been integrated over the energy region $[-0.47,-0.46]$ eV (i.e. outside of the gap, as marked with a green dashed line in Fig. 1(a)), is finite at every point in the unit cell, which marks the delocalized character of the constituent states. The precise localization of the gap states in the domain wall, observed irrespective of the domain wall width, allows us to directly associate them with the emergence of the band crossing in the electronic structure of the model ferromagnetic system for the in-plane magnetization. Recently it was shown that this band crossing acquires a non-trivial topological character once the magnetization direction $\hat{\mathbf{m}}$ is included into the topological analysis. In this sense the ferromagnetic 2D model that we study is an example of what is called a mixed Weyl semimetal Hanke _et al._ (2017a); Niu _et al._ (2019) $-$ a term motivated by the necessity of including the so-called mixed Berry curvature into the analysis of the band topology. Since the gap states that we observe arise as a result of the non-trivial mixed topology of our model, we refer to them as mixed topology ring states (MTRS).
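For reference, the profile $\alpha(r,d)$ defined above can be evaluated with the following short sketch; the value of $R_{max}$ and the sampling radii are illustrative choices.

```python
import numpy as np

R_max = 14.0   # half of l_mag in units of l_chem (illustrative value)

def beta(x, d):
    return (np.arctan(R_max * (6 * x - 3 * d - 3))
            + np.arctan(R_max * (6 * x + 3 * d - 3)))

def alpha(r, d):
    """Polar angle of the magnetization at distance r from the skyrmion center;
    larger d widens the flat in-plane ("ring") region of the wall."""
    x = r / R_max
    return np.pi * (beta(x, d) - beta(1.0, d)) / (beta(0.0, d) - beta(1.0, d))

for d in (0.2, 0.5, 0.8):
    # alpha interpolates between pi at r = 0 and 0 at r = R_max
    print(d, np.round(alpha(np.linspace(0.0, R_max, 6), d), 3))
```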
It has been shown that, depending on the symmetry of the model, the mixed Weyl points can be associated with a transition between Chern insulating phases with different Chern numbers arising for different directions of $\hat{\mathbf{m}}$ Hanke _et al._ (2017a); Niu _et al._ (2019). This is exactly the case for our model: as the direction of $\hat{\mathbf{m}}$ is changed from along the $z$-axis to the opposite direction, the Chern number at half filling changes sign from $-2$ to $+2$. The skyrmion lattice that we consider thus presents a lattice of domains with opposite Chern numbers separated by the domain walls, and the MTRS can be naturally interpreted as topological edge states localized at the boundary separating the two domains. ### IV.2 Hall conductance and orbital magnetization in skyrmions of a mixed Weyl semimetal In this section we investigate the possible influence of MTRS on the Hall transport properties and orbital magnetization exhibited by the skyrmion lattices of mixed Weyl semimetals. In order to do that, we parametrize the magnetization distribution within the skyrmion of radius $R$ such that $\alpha(r)=\pi\left(1-\frac{\delta(r)-\delta(0)}{\delta(R_{\text{max}})-\delta(0)}\right)$ with $\delta(r)=-\operatorname{atan}\left(a\cdot(r-R)\right)+\frac{\pi}{2}$. Unlike the previous parameterization used in Section IV.1, which exhibits a pronounced flat region in the middle of the wall, the parametrization we use from here on is designed to model most closely the skyrmions in systems which favor an out-of-plane magnetization direction, leading to domain walls of roughly constant magnetization gradient across the wall. When compared to the approach used in the preceding subsection, the parametrization employed from now on is very close to that routinely used to model skyrmion profiles in micromagnetic studies Romming _et al._ (2015). It is important to note that the MTRS emerge for both types of the profile parametrization. We first consider the case of $R=R_{max}/2$, and show the skyrmion profile along the path between the skyrmion centers for various values of $a$ in Fig. 3(a). We present the results of our calculations of the Hall conductance and orbital magnetization in the system in Fig. 3(b,c) as a function of band filling and parameter $a$, the latter of which controls the domain wall width and correspondingly the MTRS spread in real space. Concerning the overall energy dependence for all values of $a$, while the HC exhibits a symmetric structure with respect to the middle of the bulk gap at $E_{F}=0$ eV, the OM is antisymmetric, which originates in the symmetry of the ferromagnetic band structure and the properties of the OM around the mixed Weyl points Niu _et al._ (2019). Overall $-$ excluding the region of the bulk gap, marked with a shaded area in Fig. 3(b,c), which will be considered in detail below $-$ the computed HC appears to be extremely sensitive to the domain wall width. This manifestly “non-topological” behavior of the HC stands in contrast to the expectations for the topological Hall effect in this system, which would be insensitive to the details of the spin distribution and determined solely by the overall topological charge, the latter remaining constant in our calculations. The observed sensitivity of the HC to the domain wall width, i.e.
to the magnitude of the magnetization gradient within it, can best be understood by referring to the novel phenomenon of the chiral Hall effect Lux _et al._ (2020), which has recently been shown to be prominent in chiral skyrmions with interfacial spin-orbit coupling, and which originates in the change of the magnetization within the walls as given by their chirality along the line which passes through the center of the skyrmion. Within the theory of the chiral Hall effect, emerging already for spin-spiral solutions Lux _et al._ (2020), the chiral signal is proportional to the sense of the spin chirality among the neighboring spins, $\mathbf{S}_{i}\times\mathbf{S}_{j}$, and thus the change in sign of the Hall conductivity upon changing the sign of the spin chirality serves as one of the trademarks of the chiral Hall effect Kipp _et al._ (2020). In Fig. 3(d) we present the results of the HC calculations as a function of the domain wall width, but assuming that the domain wall has the opposite sense of spin rotation to that shown in Fig. 3(a). While the topological charge of the skyrmion does not depend on the chosen chirality, the sign of the HC is opposite for the two opposite chiralities for almost all values of $a$ and the majority of energies. This underpins the interpretation of the observed HC as a chiral Hall effect originating in the domain walls of the skyrmions. A perfect reversal of the HC at a given energy with chirality (i.e. reversal in sign but not in magnitude) is not expected given the very strong spin-orbit interaction, which makes the electronic structures of the two flavors of skyrmions different. On the other hand, the overall value of the orbital magnetization is remarkably insensitive to the domain wall width. While the emergence of a chiral orbital magnetization, arising in analogy to the chiral Hall effect, would also be expected for interfacial skyrmions Lux _et al._ (2020), for our particular choice of the model and symmetric parametrization of the skyrmion profile the chiral part of the OM vanishes by symmetry, thus reducing to the well-known effect of topological orbital magnetization Hoffmann _et al._ (2015); Hanke _et al._ (2016); dos Santos Dias _et al._ (2016); Hanke _et al._ (2017b); Lux _et al._ (2018). The topological OM remains extremely robust with respect to changes in the details of the spin distribution, as visible in Fig. 3(c), and its behavior is manifestly not very sensitive to the change in the spin chirality of the skyrmion (not shown). Figure 3: Transport properties of a skyrmion lattice of a mixed Weyl semimetal with $R=R_{max}/2$. (a) The real-space cut through the skyrmion profile along the path connecting two skyrmion centers for different values of parameter $a$ (see text for more details). (b,c) The dependence of the Hall conductance $\sigma_{xy}$ (b) and orbital magnetization $M$ (c) on the band filling and parameter $a$. The inset in (b) zooms into the region of quantization where the MTRS reside. (d) The HC of skyrmions with the profile shown in (a) but with a reversed sense of spin rotation in the walls (see the sketch shown as an inset). In all plots the color indicates the value of $a$ in accordance with the color scale shown in (a). In the following, we focus on the region in energy which corresponds to the position of the bulk gap of the model for the out-of-plane magnetization, marked with a shaded area in Fig. 3(b,c), and zoomed into in the inset of Fig. 3(b).
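For completeness, a minimal sketch of the wall profile used in this subsection, together with a reversal of the spin chirality obtained by flipping the in-plane components of the Néel texture (which leaves the topological charge intact), could look as follows; all names are illustrative.

```python
import numpy as np

def alpha_wall(r, R, a_par, R_max):
    """Profile of Sec. IV.2: a_par sets the wall width (larger = sharper wall)."""
    delta = lambda rr: -np.arctan(a_par * (rr - R)) + np.pi / 2
    return np.pi * (1.0 - (delta(r) - delta(0.0)) / (delta(R_max) - delta(0.0)))

def neel_spin(x, y, R, a_par, R_max, chirality=+1):
    """Néel-skyrmion spin direction at the in-plane point (x, y); chirality=-1
    reverses the sense of spin rotation in the wall."""
    r = np.hypot(x, y)
    th = alpha_wall(r, R, a_par, R_max)
    phi = np.arctan2(y, x)          # Néel texture: the spin tilts radially
    return np.array([chirality * np.sin(th) * np.cos(phi),
                     chirality * np.sin(th) * np.sin(phi),
                     np.cos(th)])
```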
The behavior of the HC, shown in the inset, displays isolated spikes on the background of a plateau value of $-2$ $\frac{e^{2}}{h}$. The latter value can be traced back to the quantized value of the HC for the out-of-plane ferromagnetic model, see Fig. 1(e), since for the given value of $R$ the overall out-of-plane magnetization in the unit cell is positive. On the other hand, the spikes at isolated positions correspond to the contributions of the ring states to the HC. These contributions become somewhat more pronounced as the width of the domain wall increases, owing to the increased spread of the ring states and the increased probability of electron hopping among ring states positioned in different unit cells. In order to investigate this effect in more detail, we study the dependence of the HC and OM on the radius of the skyrmion $R$, while assuming a very large value of the parameter $a$, thus localizing the ring states in a narrow region of the domain wall, see the sketches in Fig. 5. The overall behavior of the HC and OM with $R$, discussed below, does not qualitatively depend on the chosen value of $a$.

Figure 4: Transport properties of a skyrmion lattice of a mixed Weyl semimetal in the region of the bulk gap as a function of the relative radius $R/R_{\text{max}}$, for a value of the parameter $a$ of $10^{5}$. (a) The energy dependence of the orbital magnetization as a function of $R/R_{\text{max}}$. The color of each line corresponds to the magnitude of the relative radius according to the color bar. (b) The energy dependence of the Hall conductance as a function of $R/R_{\text{max}}$. The HC of the systems with $R/R_{\text{max}}\leq 0.7$ is plotted with a solid line “$-$”, with $R/R_{\text{max}}=0.8$ with a dashed line “$---$”, with $R/R_{\text{max}}=0.9$ with a dash-dotted line “$-\cdot-$”, and with $R/R_{\text{max}}=1$ with a dotted line “$\cdot\cdot\cdot$”.

Figure 5: The evolution of the band structure around the Fermi energy in the region of the MTRS (in green, scale on the left) and of the orbital magnetization $M$ (in blue, scale on top, as a function of band filling) with respect to $R/R_{\text{max}}$. The positions of the jumps in the saw-tooth pattern of the orbital magnetization coincide with the positions of the MTRS. The sketches mark the division of the unit cell into regions of roughly upward magnetization (tinted blue) and roughly downward magnetization (tinted red), with $R/R_{\text{max}}=0.3$ in (d), $R/R_{\text{max}}=0.8$ in (e), and $R/R_{\text{max}}=1$ in (f).

We first discuss the evolution of the HC with the relative radius $R/R_{\text{max}}$, presented in Fig. 4(b). Over a very large range of radii, $R<0.8R_{\text{max}}$, the conductance is ideally quantized to $-2$ $\frac{e^{2}}{h}$ in the gap, despite a large number of MTRS, see Fig. 5(a,b). In this case the Hall effect is dominated by the ferromagnetic background in between the skyrmions with $\hat{\mathbf{m}}$ along $+z$, which provides a Chern number of $-2$. The ring states are strongly localized in the wall region (which explains the flat dispersion of the ring states visible in Fig. 5(a)), do not hybridize with each other across the unit cell boundaries, and do not contribute to the HC. In the other limit of $R=R_{\text{max}}$, see Fig. 5(f) for a sketch, the film comprises mainly areas of the texture with $\hat{\mathbf{m}}$ along $-z$, which explains the mostly quantized value of the HC of $+2$ $\frac{e^{2}}{h}$.
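Before analyzing the crossover between these two limits, it is useful to make the role of the wall-sharpness parameter $a$ concrete. The following minimal Python sketch is our own illustration, not part of the original calculations; the radial grid and the three values of $a$ are chosen arbitrarily. It evaluates the profile parametrization defined above and prints the out-of-plane magnetization $m_{z}(r)=\cos\alpha(r)$, reproducing the trend of Fig. 3(a): the larger $a$, the narrower the region around $r=R$ in which the magnetization rotates and in which the MTRS reside.

```python
import numpy as np

def delta(r, a, R):
    # delta(r) = -atan(a * (r - R)) + pi/2, as in the profile definition above
    return -np.arctan(a * (r - R)) + np.pi / 2

def alpha(r, a, R, R_max):
    # polar angle of the magnetization: alpha = pi at the skyrmion center
    # (m along -z), alpha = 0 at r = R_max (background with m along +z)
    return np.pi * (1.0 - (delta(r, a, R) - delta(0.0, a, R))
                    / (delta(R_max, a, R) - delta(0.0, a, R)))

R_max = 1.0                      # half the distance between skyrmion centers
R = 0.5 * R_max                  # the R = R_max / 2 case of Fig. 3
r = np.linspace(0.0, R_max, 11)  # coarse radial grid, for printing only

for a in (5.0, 20.0, 1.0e5):     # soft, intermediate, and near-step walls
    m_z = np.cos(alpha(r, a, R, R_max))
    print(f"a = {a:8g}:  m_z(r) = {np.round(m_z, 2)}")
```

For the near-step wall ($a=10^{5}$), $m_{z}$ jumps from $-1$ to $+1$ essentially at $r=R$, which is the regime used for Figs. 4 and 5.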
The crossover point of the HC from $\sigma_{xy}=-2\,\frac{e^{2}}{h}$ to $\sigma_{xy}=2\,\frac{e^{2}}{h}$, positioned between $R/R_{\text{max}}=0.7$ and $R/R_{\text{max}}=0.8$, coincides with the point where the out-of-plane component of the magnetization integrated over the unit cell changes sign as a result of the competition between domains with $\hat{\mathbf{m}}$ along $+z$ and domains with $\hat{\mathbf{m}}$ along $-z$. The region of $0.8R_{\text{max}}\leq R\leq R_{\text{max}}$ is the region where the MTRS interact with each other strongly across the unit cell boundaries, exhibit a strong dispersion (see Fig. 5(b)), and give rise to a drastic variation in the HC. In the limit of $R=R_{\text{max}}$ the MTRS are localized in the unit cell corners, thus interacting noticeably along the unit cell boundaries, which gives rise to a small but finite dispersion of the bands (see Fig. 5(c)) and peak-like contributions to the HC.

We come now to the discussion of the OM of the skyrmion lattice exhibited in response to the variation of the radius. This behavior, presented in Fig. 4(a), is distinctly different from that exhibited by the Hall conductance. Clearly, in the limiting cases of $R=0$ and $R=R_{\text{max}}$ the slope of the OM as a function of energy in the gap has to be opposite, as follows from Eq. (6). However, while in the case of the HC the transition between the two Chern insulator cases is relatively abrupt, the same transition in terms of the OM is quite smooth, with the overall slope of the OM changing continuously as a function of $R$, and the OM curves being quite smooth as a function of energy when compared to the case of the HC. This can be understood by looking in detail at the contributions of the MTRS to the OM variation in the gap, Fig. 5(a-c). Here, we observe that although the slope of the OM in between the groups of MTRS is strictly constant and consistent with the sign of the quantized value of the HC in the same region, Fig. 5(b), the overall OM curve for all values of $R$ displays a characteristic saw-tooth shape, where the overall slope is “renormalized” by the contributions from the MTRS. Interestingly, these contributions are consistent in sign among all MTRS for a given $R$: they are positive in the region of $R<0.8R_{\text{max}}$, and negative for $R\geq 0.8R_{\text{max}}$. This “coherence” of the MTRS is the reason why the slope of the OM increases consistently and gradually as $R$ goes from zero to $R_{\text{max}}$.

According to our calculations, the OM of the MTRS originates predominantly from the local-circulation part, which is consistent with the strong localization of the ring states. In a sense, the ring states can be envisaged as a group of localized electronic states which have a common, unique sense of bound orbital-current circulation around the skyrmion center $-$ which we can naturally characterize by a specific sense of orbital chirality. And while in the case of the Hall effect it is the Hall conductance that changes abruptly at the point of the transition between the two regimes of out-of-plane magnetized domains, the corresponding marker in terms of orbital magnetism is the orbital chirality of the ring states, which exhibits a sharp transition. Manifestly, as follows from our calculations of the OM with reversed spin chirality of the skyrmions, while the number and exact energetic positions of the states depend on the width of the domain wall for both spin chiralities, the orbital chirality of the MTRS remains a robust quantity even under a change in the skyrmion chirality.
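The crossover radius quoted above can be checked with a back-of-the-envelope estimate. Assuming a square magnetic unit cell of side $2R_{\text{max}}$ centered on the skyrmion and a step-like wall (the large-$a$ limit of the profile above), so that $m_{z}=-1$ inside the radius $R$ and $m_{z}=+1$ outside, the cell-averaged out-of-plane magnetization is $\langle m_{z}\rangle=1-\pi R^{2}/(2R_{\text{max}}^{2})$, which changes sign at $R/R_{\text{max}}=\sqrt{2/\pi}\approx 0.80$ $-$ consistent with the crossover of the HC between $R/R_{\text{max}}=0.7$ and $0.8$. A short numerical sketch of our own (the square cell and step-like wall are our simplifying assumptions):

```python
import numpy as np

# Assumed geometry: square unit cell of side 2*R_max centered on the skyrmion,
# with a step-like wall (the a -> infinity limit of the profile above).
R_max = 1.0
x = np.linspace(-R_max, R_max, 1001)
X, Y = np.meshgrid(x, x)
rr = np.hypot(X, Y)              # distance from the skyrmion center

for ratio in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    m_z = np.where(rr < ratio * R_max, -1.0, 1.0)
    print(f"R/R_max = {ratio:.1f}:  <m_z> = {m_z.mean():+.3f}")

# analytic sign change: <m_z> = 1 - pi * R**2 / (2 * R_max**2) = 0
print("analytic crossover at R/R_max =", np.sqrt(2.0 / np.pi))  # ~0.798
```

The averaged $m_{z}$ is positive at $R/R_{\text{max}}=0.7$ and already slightly negative at $0.8$, in line with the position of the Chern-number reversal discussed above.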
## V Discussion

In our work, we have considered the intrinsic Hall effect and orbital magnetism exhibited by skyrmion lattices of mixed Weyl semimetals by referring to the Berry phase framework. Our main finding is the discovery of a special type of ring states which form as a result of the complex mixed topology in real-space textures of these materials. These electronic states reside in the domain wall region, which serves as the boundary between the two Chern insulator phases and separates the skyrmion cores from the ferromagnetic background. We have shown that the degree of localization of the ring states and the strength of their inter-cell interaction can be controlled by the parameters which determine the texture details. In turn, the ring states mediate the transition between the two Chern insulating phases occurring upon switching of the magnetization direction from out-of-plane to the opposite, when, e.g., an external magnetic field changes its strength and sign in experiment. This concerns the Hall conductance, but above all the orbital magnetization of the samples, as the ring states carry a specific orbital chirality and orbital momentum.

The scaling of the Hall conductance as a function of the skyrmions' domain wall width is a strong indicator of a prominent chiral Hall effect; such a behavior has recently been predicted and could have ramifications for the unambiguous electrical detection of magnetic skyrmions Lux _et al._ (2020). In simplified model systems, the emergence of this effect can be understood from the interplay of spin-orbit interaction and exchange coupling, which gives rise to an effective magnetic field Nakabayashi and Tatara (2014). This phenomenology is similar to the topological Hall effect (THE) Bruno _et al._ (2004), but in contrast it materializes already at leading order in perturbation theory, whereas the THE is subleading Lux _et al._ (2020). From the viewpoint of Weyl semimetals, the coaction of chiral magnetism and spin-orbit coupling (which is responsible for the chiral Hall effect) is directly linked to an emergent chiral anomaly Ilan _et al._ (2020). Our model elevates this phenomenological interpretation to a more realistic setting. To which degree this explanation can be upheld will be the subject of future investigations.

Ultimately, the ring states arise as a result of a strong variation in the electronic structure around a specific magnetization direction, and thus it is rewarding to explore in the future what impact the ring states can have on the local real-space variation of the spin-orbit torques and the Dzyaloshinskii-Moriya interaction, as well as on current-induced skyrmion dynamics Hanke _et al._ (2017a); Niu _et al._ (2019); Araki (2020); Hanke _et al._ (2020); Liu _et al._ (2013); Kurebayashi and Nomura (2019); Kurebayashi and Nagaosa (2019), where the ring states can play the role of local “hooks” strongly coupling skyrmions to applied currents. On the other hand, the unique orbital properties of the ring states lend themselves as a possible platform for obtaining detailed information on the skyrmion parameters with, e.g., XMCD-type techniques.
Additionally, the peculiar topological nature of these one-dimensional edge states emerging in skyrmions of Weyl semimetals hints at the exceptional role that the ring states can play in mediating the physics of Majorana states emerging upon deposition of skyrmions on superconductors Pershoguba _et al._ (2016); Güngördü _et al._ (2018); Palacio-Morales _et al._ (2019); Yang _et al._ (2016); Rex _et al._ (2019), which presents an exciting future direction to pursue.

## VI Acknowledgements

We gratefully acknowledge computing time on the supercomputers JUQUEEN and JURECA at Jülich Supercomputing Center, and at the JARA-HPC cluster of RWTH Aachen. We acknowledge funding under SPP 2137 “Skyrmionics” of Deutsche Forschungsgemeinschaft (DFG), and the DARPA TEE program through grant MIPR# HR0011831554 from DOI. We gratefully acknowledge financial support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant No. 856538, project “3D MAGiC”).

## References

* Lux _et al._ (2020) F. R. Lux, F. Freimuth, S. Blügel, and Y. Mokrousov, Physical Review Letters 124, 096602 (2020).
* Lai _et al._ (2017) P. Lai, G. P. Zhao, H. Tang, N. Ran, S. Q. Wu, J. Xia, X. Zhang, and Y. Zhou, Scientific Reports 7, 45330 (2017).
* Zhang _et al._ (2015a) X. Zhang, G. P. Zhao, H. Fangohr, J. P. Liu, W. X. Xia, J. Xia, and F. J. Morvan, Scientific Reports 5, 7643 (2015a).
* Tomasello _et al._ (2015) R. Tomasello, E. Martinez, R. Zivieri, L. Torres, M. Carpentieri, and G. Finocchio, Scientific Reports 4, 6784 (2015).
* Zhang _et al._ (2020a) X. Zhang, Y. Zhou, K. Mee Song, T.-E. Park, J. Xia, M. Ezawa, X. Liu, W. Zhao, G. Zhao, and S. Woo, Journal of Physics: Condensed Matter 32, 143001 (2020a).
* Zázvorka _et al._ (2019) J. Zázvorka, F. Jakobs, D. Heinze, N. Keil, S. Kromin, S. Jaiswal, K. Litzius, G. Jakob, P. Virnau, D. Pinna, K. Everschor-Sitte, L. Rózsa, A. Donges, U. Nowak, and M. Kläui, Nature Nanotechnology 14, 658 (2019).
* Bourianoff _et al._ (2018) G. Bourianoff, D. Pinna, M. Sitte, and K. Everschor-Sitte, AIP Advances 8, 055602 (2018).
* Pinna _et al._ (2019) D. Pinna, G. Bourianoff, and K. Everschor-Sitte, arXiv:1811.12623 [cond-mat] (2019).
* Zhang _et al._ (2015b) X. Zhang, M. Ezawa, and Y. Zhou, Scientific Reports 5, 9400 (2015b).
* Luo _et al._ (2018) S. Luo, M. Song, X. Li, Y. Zhang, J. Hong, X. Yang, X. Zou, N. Xu, and L. You, Nano Letters 18, 1180 (2018).
* Deng _et al._ (2020) Y. Deng, Y. Yu, M. Z. Shi, Z. Guo, Z. Xu, J. Wang, X. H. Chen, and Y. Zhang, Science 367, 895 (2020).
* Chang _et al._ (2013) C.-Z. Chang, J. Zhang, X. Feng, J. Shen, Z. Zhang, M. Guo, K. Li, Y. Ou, P. Wei, L.-L. Wang, Z.-Q. Ji, Y. Feng, S. Ji, X. Chen, J. Jia, X. Dai, Z. Fang, S.-C. Zhang, K. He, Y. Wang, L. Lu, X.-C. Ma, and Q.-K. Xue, Science 340, 167 (2013).
* Otrokov _et al._ (2019) M. M. Otrokov, I. I. Klimovskikh, H. Bentmann, D. Estyunin, A. Zeugner, Z. S. Aliev, S. Gaß, A. U. B. Wolter, A. V. Koroleva, A. M. Shikin, M. Blanco-Rey, M. Hoffmann, I. P. Rusinov, A. Y. Vyazovskaya, S. V. Eremeev, Y. M. Koroteev, V. M. Kuznetsov, F. Freyse, J. Sánchez-Barriga, I. R. Amiraslanov, M. B. Babanly, N. T. Mamedov, N. A. Abdullayev, V. N. Zverev, A. Alfonsov, V. Kataev, B. Büchner, E. F. Schwier, S. Kumar, A. Kimura, L. Petaccia, G. Di Santo, R. C. Vidal, S. Schatz, K. Kißner, M. Ünzelmann, C. H. Min, S. Moser, T. R. F. Peixoto, F. Reinert, A. Ernst, P. M. Echenique, A. Isaeva, and E. V. Chulkov, Nature 576, 416 (2019).
* Niu _et al._ (2020) C.
Niu, H. Wang, N. Mao, B. Huang, Y. Mokrousov, and Y. Dai, Physical Review Letters 124, 066401 (2020). * Miao _et al._ (2014) B. F. Miao, L. Sun, Y. W. Wu, X. D. Tao, X. Xiong, Y. Wen, R. X. Cao, P. Wang, D. Wu, Q. F. Zhan, B. You, J. Du, R. W. Li, and H. F. Ding, Physical Review B 90, 174411 (2014). * Münzer _et al._ (2010) W. Münzer, A. Neubauer, T. Adams, S. Mühlbauer, C. Franz, F. Jonietz, R. Georgii, P. Böni, B. Pedersen, M. Schmidt, A. Rosch, and C. Pfleiderer, Physical Review B 81, 041203(R) (2010). * Zhang _et al._ (2020b) L.-C. Zhang, Y. A. Onykiienko, P. M. Buhl, Y. V. Tymoshenko, P. Čermák, A. Schneidewind, J. R. Stewart, A. Henschel, M. Schmidt, S. Blügel, D. S. Inosov, and Y. Mokrousov, Physical Review Research 2, 013063 (2020b). * Kong and Zang (2013) L. Kong and J. Zang, Physical Review Letters 111, 067203 (2013). * Onose _et al._ (2012) Y. Onose, Y. Okamura, S. Seki, S. Ishiwata, and Y. Tokura, Physical Review Letters 109, 037603 (2012). * Lado and Fernández-Rossier (2015) J. L. Lado and J. Fernández-Rossier, Physical Review B 92, 115433 (2015). * Hamamoto _et al._ (2015) K. Hamamoto, M. Ezawa, and N. Nagaosa, Physical Review B 92, 115417 (2015). * Göbel _et al._ (2018) B. Göbel, A. Mook, J. Henk, and I. Mertig, The European Physical Journal B 91, 179 (2018). * Göbel _et al._ (2017) B. Göbel, A. Mook, J. Henk, and I. Mertig, Physical Review B 96, 060406(R) (2017). * Göbel _et al._ (2019) B. Göbel, A. Mook, J. Henk, and I. Mertig, Physical Review B 99, 060406(R) (2019). * Wang _et al._ (2017) S. Wang, B.-C. Lin, A.-Q. Wang, D.-P. Yu, and Z.-M. Liao, Advances in Physics: X 2, 518 (2017). * Liu _et al._ (2019) D. F. Liu, A. J. Liang, E. K. Liu, Q. N. Xu, Y. W. Li, C. Chen, D. Pei, W. J. Shi, S. K. Mo, P. Dudin, T. Kim, C. Cacho, G. Li, Y. Sun, L. X. Yang, Z. K. Liu, S. S. P. Parkin, C. Felser, and Y. L. Chen, Science 365, 1282 (2019). * Hanke _et al._ (2017a) J.-P. Hanke, F. Freimuth, C. Niu, S. Blügel, and Y. Mokrousov, Nature Communications 8, 1479 (2017a). * Niu _et al._ (2019) C. Niu, J.-P. Hanke, P. M. Buhl, H. Zhang, L. Plucinski, D. Wortmann, S. Blügel, G. Bihlmayer, and Y. Mokrousov, Nature Communications 10, 3179 (2019). * Araki (2020) Y. Araki, Annalen der Physik 532, 1900287 (2020). * Araki and Nomura (2018) Y. Araki and K. Nomura, Phys. Rev. Applied 10, 014007 (2018). * Lux _et al._ (2018) F. R. Lux, F. Freimuth, S. Blügel, and Y. Mokrousov, Communications Physics 1, 60 (2018). * Castro _et al._ (2008) E. V. Castro, N. M. R. Peres, T. Stauber, and N. A. P. Silva, Physical Review Letters 100, 186803 (2008). * Matte _et al._ (2009) H. S. S. R. Matte, K. S. Subrahmanyam, and C. N. R. Rao, The Journal of Physical Chemistry C 113, 9982 (2009). * Qiao _et al._ (2010) Z. Qiao, S. A. Yang, W. Feng, W.-K. Tse, J. Ding, Y. Yao, J. Wang, and Q. Niu, Phys. Rev. B 82, 161414(R) (2010). * Bychkov and Rashba (1984) Y. A. Bychkov and E. I. Rashba, Journal of Physics C: Solid State Physics 17, 6039 (1984). * Stoner (1936) E. C. Stoner, Proceedings of the Royal Society of London. Series A - Mathematical and Physical Sciences 154, 656 (1936). * Stoner (1938) E. C. Stoner, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 165, 372 (1938). * Redies _et al._ (2019) M. Redies, F. R. Lux, J.-P. Hanke, P. M. Buhl, G. P. Müller, N. S. Kiselev, S. Blügel, and Y. Mokrousov, Physical Review B 99, 140407(R) (2019). * Blügel _et al._ (2017) S. Blügel, Y. Mokrousov, T. Schäpers, Y. Ando, G. Bihlmeyer, I. Aguilera, C. Friedrich, L. Plucinski, P. Rüssmann, P. 
Mavropoulos, G. Mussler, N. S. Kiselev, B. Zimmermann, J. Chico, A. Feoktystov, K. Nemkovski, A. Kovacs, R. E. Dunin-Borkowski, and D. P. DiVincenzo, eds., _Topological matter - topological insulators, skyrmions and majoranas_, Lecture notes of the IFF spring school ; (Forschungszentrum, Zentralbibliothek, Jülich, 2017). * Thonhauser _et al._ (2005) T. Thonhauser, D. Ceresoli, D. Vanderbilt, and R. Resta, Physical Review Letters 95, 137205 (2005). * Xiao _et al._ (2005) D. Xiao, J. Shi, and Q. Niu, Physical Review Letters 95, 137204 (2005). * Thonhauser (2011) T. Thonhauser, International Journal of Modern Physics B 25, 1429 (2011). * Yao _et al._ (2004) Y. Yao, L. Kleinman, A. H. MacDonald, J. Sinova, T. Jungwirth, D.-s. Wang, E. Wang, and Q. Niu, Physical Review Letters 92, 037204 (2004). * Redies (2020) M. Redies, “Stb,” https://github.com/MRedies/STB (2020). * Romming _et al._ (2015) N. Romming, A. Kubetzka, C. Hanneken, K. von Bergmann, and R. Wiesendanger, Physical Review Letters 114, 177203 (2015). * (46) “As a check, in order to confirm that we obtain non-trivial results for a spin distribution corresponding to realistic skyrmions, by assuming magnetic parameters typical for the electronic model at hand, and by applying a small external magnetic field, we have generated a stable hexagonal lattice of skyrmions using the spirit code (www.spirit.de), finding that for most energies the computed HC for such a more realistic structure is very close to that corresponding to a skyrmion lattice with the same size but parametrized in a way used in Sec. IV.B,” . * Kipp _et al._ (2020) J. Kipp, K. Samanta, F. R. Lux, M. Merte, J.-P. Hanke, M. Redies, F. Freimuth, S. Blügel, M. Ležaić, and Y. Mokrousov, arXiv:2007.01529 [cond-mat] (2020), arXiv: 2007.01529. * Hoffmann _et al._ (2015) M. Hoffmann, J. Weischenberg, B. Dupé, F. Freimuth, P. Ferriani, Y. Mokrousov, and S. Heinze, Physical Review B 92, 020401(R) (2015). * Hanke _et al._ (2016) J.-P. Hanke, F. Freimuth, A. K. Nandy, H. Zhang, S. Blügel, and Y. Mokrousov, Physical Review B 94, 121114(R) (2016). * dos Santos Dias _et al._ (2016) M. dos Santos Dias, J. Bouaziz, M. Bouhassoune, S. Blügel, and S. Lounis, Nature Communications 7, 13613 (2016). * Hanke _et al._ (2017b) J.-P. Hanke, F. Freimuth, S. Blügel, and Y. Mokrousov, Scientific Reports 7, 41078 (2017b). * Nakabayashi and Tatara (2014) N. Nakabayashi and G. Tatara, New Journal of Physics 16, 015016 (2014). * Bruno _et al._ (2004) P. Bruno, V. K. Dugaev, and M. Taillefumier, Physical Review Letters 93, 096806 (2004). * Ilan _et al._ (2020) R. Ilan, A. G. Grushin, and D. I. Pikulin, Nature Reviews Physics 2, 29 (2020). * Hanke _et al._ (2020) J.-P. Hanke, F. Freimuth, B. Dupé, J. Sinova, M. Kläui, and Y. Mokrousov, Physical Review B 101, 014428 (2020). * Liu _et al._ (2013) Y.-H. Liu, Y.-Q. Li, and J. H. Han, Physical Review B 87, 100402(R) (2013). * Kurebayashi and Nomura (2019) D. Kurebayashi and K. Nomura, Scientific Reports 9, 5365 (2019). * Kurebayashi and Nagaosa (2019) D. Kurebayashi and N. Nagaosa, Phys. Rev. B 100, 134407 (2019). * Pershoguba _et al._ (2016) S. S. Pershoguba, S. Nakosai, and A. V. Balatsky, Physical Review B 94, 064513 (2016). * Güngördü _et al._ (2018) U. Güngördü, S. Sandhoefner, and A. A. Kovalev, Physical Review B 97, 115136 (2018). * Palacio-Morales _et al._ (2019) A. Palacio-Morales, E. Mascot, S. Cocklin, H. Kim, S. Rachel, D. K. Morr, and R. Wiesendanger, Science Advances 5, eaav6600 (2019). * Yang _et al._ (2016) G. Yang, P. Stano, J. Klinovaja, and D. 
Loss, Physical Review B 93, 224505 (2016). * Rex _et al._ (2019) S. Rex, I. V. Gornyi, and A. D. Mirlin, Physical Review B 100, 064504 (2019).
2024-09-04T02:54:57.924517
2020-02-18T01:54:36
2003.03259
{ "authors": "Mingxiao Li, Jingwei Ling, Yang He, Usman A. Javid, Shixin Xue and\n Qiang Lin", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26086", "submitter": "Mingxiao Li", "url": "https://arxiv.org/abs/2003.03259" }
arxiv-papers
# Lithium niobate photonic-crystal electro-optic modulator

Mingxiao Li, Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627
Jingwei Ling, Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627
Yang He, Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627
Usman A. Javid, Institute of Optics, University of Rochester, Rochester, NY 14627
Shixin Xue, Institute of Optics, University of Rochester, Rochester, NY 14627
Qiang Lin <EMAIL_ADDRESS> Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627; Institute of Optics, University of Rochester, Rochester, NY 14627

###### Abstract

Modern advanced photonic integrated circuits require dense integration of high-speed electro-optic functional elements on a compact chip that consumes only moderate power. Energy efficiency, operation speed, and device dimension are thus crucial metrics underlying almost all current developments of photonic signal processing units. Recently, thin-film lithium niobate (LN) has emerged as a promising platform for photonic integrated circuits. Here we take an important step towards miniaturizing functional components on this platform, reporting probably the smallest high-speed LN electro-optic modulators, based upon photonic crystal nanobeam resonators. The devices exhibit a significant tuning efficiency of up to 1.98 GHz/V and a broad modulation bandwidth of 17.5 GHz, with a tiny electro-optic modal volume of only 0.58 $\mu{\rm m}^{3}$. The modulators enable efficient electro-optic driving of high-Q photonic cavity modes in both adiabatic and non-adiabatic regimes, and allow us to achieve electro-optic switching at 11 Gb/s with a bit-switching energy as low as 22 fJ. The demonstration of energy-efficient and high-speed electro-optic modulation at the wavelength scale lays a crucial foundation for realizing large-scale LN photonic integrated circuits that are of immense importance for broad applications in data communication, microwave photonics, and quantum photonics.

## Introduction

High-speed electro-optic modulation underlies many important applications ranging from optical communication Wooten00, microwave photonics Capmany19, computing Vladimir15, and frequency metrology Diddams18, to quantum photonics Smith17. A variety of approaches have been employed for electro-optic modulation, such as carrier plasma dispersion Reed10; Keeler18, electro-absorption Nakano18; Michel08, and the Pockels effect Wooten00; Koos15. The latter is particularly interesting since the Pockels effect offers ultrafast and pure refractive-index modulation over an extremely broad optical spectrum without introducing extra loss. The best-known electro-optic Pockels material is probably lithium niobate (LN), which has been widely used in telecommunication Wooten00. Recently, thin-film monolithic LN Gunter12; Bowers18 has emerged as a promising platform, where low-loss and high-quality photonic integration together with the strong Pockels effect enables superior modulation performance Gunter07; Reano13; Reano14; Fathpour15; Fathpour16; Amir17; Loncar18; Prather18; Loncar18_2; Shayan18; Fathpour18; Cai19; Cai19_2; Loncar19, showing great potential as an excellent medium for photonic integrated circuits and future photonic interconnects.

Figure 1: Design of the LN photonic crystal EOM. (a) Schematic of the LN photonic crystal EOM.
(b) The structure of the unit cell (top: top view; bottom: cross-section view). The LN photonic crystal nanobeam has a width of $w=1200$ nm, a layer thickness of $t=300$ nm, and a partially etched wing layer with a thickness of 150 nm. The elliptical hole has dimensions of $h_{x}=270$ nm and $h_{y}=490$ nm, and a fully etched depth of 300 nm. The full cross section is shown in (d). (c) Dispersion property of the partially etched LN photonic crystal nanobeam, simulated by the finite element method (FEM). The blue open circles show the dielectric and air bands. The red solid and open circles denote the fundamental and second-order TE-like cavity modes shown in (f) and (g). Our simulations show that there exists another mode with an eigenfrequency within the band gap (gray open circles). This mode, however, has only a negligible perturbation on the dielectric mode due to its distinctive spatial symmetry, thus not affecting the quality of the defect cavity mode. (d) Cross-section schematic of the EOM structure, where the arrow profile shows the RF electric field distribution and the color profile shows the optical cavity mode field distribution, both simulated by the FEM method. (e) Lattice constant as a function of position, which is optimized for low insertion loss together with a high radiation-limited optical Q. (f) Top view of the FEM-simulated optical mode field profile of the fundamental TE-like cavity mode $TE^{0}_{01}$. The left inset shows the orientation of the LN crystal, where the optical axis is along the $z$ direction. (g) Simulated optical mode field profile of the second-order TE-like cavity mode $TE^{1}_{01}$.

Power efficiency is crucial for the application of an electro-optic modulator (EOM) and depends sensitively on the physical size of the device Miller17. Scaling an EOM down to a small footprint would reduce the device capacitance and thus decrease the switching energy Miller17; Sorger15, which is indispensable for all practical applications. Among various device geometries, photonic crystal nanoresonators are particularly beneficial in this regard, given their exceptional capability of controlling light confinement and light-matter interactions on the sub-wavelength scale. In the past decade, photonic-crystal EOMs have been developed on various material platforms such as silicon Notomi09; Baba12; Notomi14, GaAs Jelena11, InP Notomi19, polymers Krauss09; Chen16, and ITO AlanX19. For lithium niobate, however, the EOMs developed so far Wooten00; Gunter07; Reano13; Reano14; Fathpour15; Fathpour16; Amir17; Loncar18; Prather18; Loncar18_2; Shayan18; Fathpour18; Cai19; Cai19_2; Loncar19 generally exhibit large dimensions, leading to significant power consumption for driving the EOMs. Although attempts have been made to explore the electro-optic effect in LN photonic crystals Bernal12; Bernal12_3; Bernal14, the low device quality and poor optoelectronic integration unfortunately seriously limit the operation speed. To date, it remains an open challenge to realize a high-speed and energy-efficient modulator at the wavelength scale on the monolithic LN platform.

Figure 2: Scanning electron microscopic (SEM) image of a fabricated EOM device. (a) Full SEM image of the whole device structure. The region highlighted in red is the electrode used to drive the photonic crystal nanoresonator. That highlighted in blue indicates the large metal pad used for contacting the RF probe. The green region indicates the electrode footprint to which the design can be shrunk in the future.
(b) Zoom-in image of the photonic crystal resonator and electrodes, corresponding to the dashed rectangular region in (a). (c) Further zoom-in image showing the detailed structure of the photonic crystal defect cavity, corresponding to the dashed rectangular region in (b).

Here we report high-speed and energy-efficient LN photonic crystal EOMs, which exhibit a tiny electro-optic modal volume of only $\sim$0.58 $\mu{\rm m}^{3}$, the smallest among all high-speed LN EOMs ever reported Wooten00; Gunter07; Reano13; Reano14; Fathpour15; Fathpour16; Amir17; Loncar18; Prather18; Loncar18_2; Shayan18; Fathpour18; Cai19; Cai19_2; Loncar19. The sub-wavelength-scale EOM cavity enables compact optoelectronic integration to achieve not only a high electro-optic tuning efficiency of up to 16 pm/V (corresponding to 1.98 GHz/V), significantly beyond other LN EOM resonators Gunter07; Reano13; Reano14; Fathpour15; Amir17; Loncar18; Fathpour18; Loncar19, but also a large modulation bandwidth of up to 17.5 GHz that reaches the photon-lifetime limit of the EOM cavity. The fully on-chip design achieves a full-swing extinction ratio of 11.5 dB. With these devices, we are able to realize efficient driving of the optical mode in both the adiabatic sideband-unresolved and non-adiabatic sideband-resolved regimes, and to observe the transition in between. As an example application, we demonstrate electro-optic switching of a non-return-to-zero signal at a rate of 11 Gb/s, with a switching energy as low as 22 fJ/bit that is more than one order of magnitude smaller than other LN EOMs Wooten00; Gunter07; Reano13; Reano14; Fathpour15; Fathpour16; Amir17; Loncar18; Prather18; Loncar18_2; Shayan18; Fathpour18; Cai19; Cai19_2; Loncar19. To the best of our knowledge, this is the smallest LN EOM ever demonstrated with combined high-speed and energy-efficient operation.

## Device design and fabrication

Recently, there have been significant advances in high-Q LN photonic-crystal nanoresonators Liang17; Li19; Amir19; Mingxiao192, which led to the demonstration of intriguing phenomena and functionalities such as photorefraction quenching Liang17, harmonic generation Li19, piezo-optomechanics Amir19, and all-optical resonance tuning Mingxiao192. For the EOM, we adopt a one-dimensional photonic-crystal nanobeam as the basic underlying structure (Fig. 1(a)), since it supports compact optical and electrical integration to enhance the electro-optic response. Due to the high permittivity of LN at radio frequencies, the commonly used fully surrounding air cladding Liang17; Mingxiao192; Amir19 is not suitable for an EOM, since it would significantly reduce the coupling between the optical and electric fields. To maximize the electro-optic interaction, we utilize a partially etched structure with a rib-waveguide-like cross section, leaving a 150-nm-thick wing layer for the electrodes to sit on (Fig. 1(a) and (d)). Although the breaking of the mirror symmetry along the normal direction of the device plane considerably alters the band gap of the photonic crystal (Fig. 1(c)), optimization of the photonic potential via an appropriate pattern of the lattice constant (Fig. 1(e)) is still able to produce a well-confined point-defect cavity, with a simulated optical Q of $\sim 10^{5}$ for the fundamental transverse-electric-like (TE-like) cavity mode, $TE^{0}_{01}$, shown in Fig. 1(f).
The cavity mode exhibits an extremely small electro-optic modal volume of $1.52(\lambda/n)^{3}\sim 0.58~{}\mu{\rm m}^{3}$ (where $n$ is the refractive index of LN), which is, to the best of our knowledge, the smallest among all LN EOMs ever reported Wooten00; Gunter07; Reano13; Reano14; Fathpour15; Fathpour16; Amir17; Loncar18; Prather18; Loncar18_2; Shayan18; Fathpour18; Cai19; Cai19_2; Loncar19. The photonic crystal cavity is oriented such that the dominant optical field is parallel to the optical axis of the underlying LN medium (Fig. 1(f)), so as to take advantage of the largest electro-optic component $r_{33}$ of lithium niobate. The electrodes are designed to be placed close to the photonic-crystal resonator (Fig. 1(d)) to maximize the in-plane electric field ${E_{z}}$, while preventing potential loss induced by metal absorption. The compact device structure design results in a significant electro-optic tuning efficiency of 1.81 GHz/V, simulated by the finite element method. The electrodes are designed to have a length of 30 $\mu$m to ensure full coverage of the applied electric field over the entire photonic crystal structure. Numerical simulations show that the device exhibits a small capacitance of $C\approx 22$ fF, which is more than one order of magnitude smaller than other LN EOMs Wooten00; Gunter07; Reano13; Reano14; Fathpour15; Fathpour16; Amir17; Loncar18; Prather18; Loncar18_2; Shayan18; Fathpour18; Cai19; Cai19_2; Loncar19. Therefore, we expect our devices to have a much higher energy efficiency, as will be shown in the following sections.

Figure 3: Experimental testing setup. Light is coupled into and out of the EOM chip via one lensed fiber. The inset shows an optical microscopic image of an EOM with the RF probe in contact. The equipment in the highlighted dashed box is used for characterizing the performance of electro-optic modulation. VOA: variable optical attenuator; MZI: Mach-Zehnder interferometer; EDFA: erbium-doped fiber amplifier; BPF: band-pass filter; MNA: microwave network analyzer; PRBS: pseudo-random binary sequence source.

For simplicity of testing, the EOM is designed such that light is coupled into and out of the EOM via only one side of the cavity (Fig. 1(a)). As such, the photonic-crystal mirror on the right side of the defect cavity is designed for 100% reflection, while that on the left side has a decreased number of holes (Fig. 1(e)) to enable partial reflection/transmission, with the hole number optimized for critical coupling to the cavity. To support on-chip integration, light is coupled to the EOM cavity via an on-chip waveguide (Fig. 1(a)), where an injector section (Fig. 1(e)), with the lattice constant varying from 450 to 550 nm, is designed and placed in front of the left mirror to reduce the coupling loss.

The devices were fabricated on a $300$-nm-thick x-cut single-crystalline LN thin film bonded on a 3-$\mu$m silicon dioxide layer sitting on a silicon substrate (from NanoLN). The photonic crystal hole structure was patterned with ZEP-520A positive resist via electron-beam lithography and then transferred to the LN layer with an Ar+ plasma milling process to etch down the full 300-nm depth. The resist residue was removed by a further O+ plasma etching. A second exposure was then performed to define the waveguide structure, which was partially etched by 150 nm with the same process.
After the residue removal, we used diluted hydrofluoric acid to undercut the buried oxide layer to form a suspended photonic crystal membrane structure [Fig. 1(d)]. The metal electrode layer (10 nm Ti/500 nm Au) was deposited by an electron-beam evaporator and the electrode structure was formed by a lift-off process via ZEP-520A. Figure 2 shows a fabricated device. The large metal pads (highlighted in the blue box) are used simply as the contacts for the air-coplanar probe (Formfactor Acp65-A-GSG-100) for applying the RF driving signal (see also the inset of Fig. 3). The impedance of the metallic structure is optimized to minimize the coupling loss of the RF signal from the pads to the device. The high quality of the device fabrication, as indicated by the device images, implies a high performance of the EOM, as we will show below.

## Device characterization and electro-optic properties

Figure 4: Linear optical property of a fabricated LN photonic crystal EOM. (a) Laser-scanned transmission spectrum in the telecom band. (b) Detailed transmission spectrum of the fundamental TE-like cavity mode $TE^{0}_{01}$ at a wavelength of 1554.47 nm, with the experimental data shown in blue and the theoretical fitting shown in red.

Figure 5: Electro-optic tuning property of an LN photonic crystal EOM. (a) Recorded transmission spectrum of the EOM cavity as a function of the applied DC voltage from 0 to 4.5 V, with a voltage step of 0.5 V. (b) Recorded resonance shift as a function of the applied DC voltage, where the experimental data are shown as black dots and the blue line is a linear fit to the data.

To characterize the optical and electro-optic properties of the devices, light from a continuous-wave tunable laser (Santec TSL-510) was launched onto the chip via a lensed fiber. The light reflected from the EOM was collected by the same lensed fiber, routed by a circulator, and then delivered to a photodiode for detection. Figure 3 illustrates the schematic of the experimental testing setup, where the inset shows an optical image of the device with the RF probe in contact. The insertion loss from the on-chip coupling waveguide to the photonic-crystal cavity is measured to be around 2.2 dB, calibrated by subtracting the facet coupling loss and the circulator transmission loss. To characterize the performance of high-speed modulation, the majority of the modulated light output was amplified by an erbium-doped fiber amplifier to boost the power, passed through a bandpass filter to remove the amplifier noise, and was then detected by a high-speed detector (New Focus 1024). The detector output was recorded either by a microwave network analyzer (Keysight N5235B) for characterizing the modulation bandwidth or by a sampling oscilloscope module (Keysight 54754A) to record the eye diagram of the switching signal.

Figure 4(a) shows the transmission spectrum of an EOM when the laser is scanned in the telecom band. The device exhibits a resonance at 1554.47 nm, which corresponds to the fundamental TE-like cavity mode $TE^{0}_{01}$ (Fig. 1(f)). As shown in Fig. 4(b), the $TE^{0}_{01}$ mode exhibits a high loaded optical Q of $1.34\times 10^{5}$, which is very close to our numerical simulation, indicating the negligible impact of the electrodes on the optical quality. The cavity resonance exhibits a coupling depth of 93%, corresponding to a full-swing extinction ratio of 11.5 dB. This value can be improved in the future by further optimizing the partially reflective photonic-crystal mirror (Fig. 1(e)).
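As a quick consistency check of these figures (and anticipating the tuning slope reported below, which the abstract quotes as 1.98 GHz/V), the following short Python sketch evaluates the standard relations $\Delta\nu=\nu/Q$ for the loaded linewidth, $ER=-10\log_{10}(1-d)$ for a coupling depth $d$, and $|\Delta\nu|=(c/\lambda^{2})\Delta\lambda$ for the wavelength-to-frequency conversion, using only numbers stated in the text:

```python
import math

c = 299_792_458.0            # speed of light (m/s)
lam = 1554.47e-9             # resonance wavelength of the fundamental TE-like mode (m)
Q_loaded = 1.34e5            # measured loaded optical Q

nu = c / lam                                      # optical frequency, ~193 THz
print(f"loaded linewidth ~ {nu / Q_loaded / 1e9:.2f} GHz")          # ~1.44 GHz

depth = 0.93                                      # measured coupling depth
print(f"extinction ratio ~ {-10 * math.log10(1 - depth):.1f} dB")   # ~11.5 dB

slope = 16e-12                                    # tuning slope, 16 pm/V
print(f"tuning slope ~ {c / lam**2 * slope / 1e9:.2f} GHz/V")       # ~1.98 GHz/V
```

The computed linewidth of roughly 1.4 GHz is the value referred to in the modulation experiments below.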
The device also exhibits a second-order TE-like cavity mode $TE^{1}_{01}$ (Fig. 1(g)) at 1604.13 nm (not shown) with a loaded optical Q of $3.03\times 10^{4}$.

To show the electro-optic tuning property, we applied a DC voltage to the chip and monitored the cavity transmission spectrum of the $TE^{0}_{01}$ mode. As shown in Fig. 5(a), the cavity resonance tunes smoothly with the applied voltage, without any degradation of the lineshape or coupling depth, clearly showing the pure dispersive electro-optic tuning expected from the Pockels effect. We have applied a voltage of 25 V to the device (not shown in the figure) and did not observe any degradation. Figure 5(b) shows a clear linear dependence of the induced resonance wavelength shift on the applied voltage, from which we obtained a tuning slope of 16 pm/V (corresponding to a frequency tuning slope of 1.98 GHz/V), close to our design. This value is significantly larger than those of other LN EOM resonators Gunter07; Reano13; Reano14; Fathpour15; Amir17; Loncar18; Fathpour18; Loncar19, which primarily benefits from the strong optical field confinement, the large optical and electric field overlap, and the resulting compact optical and electrical integration offered by our devices.

## Electro-optic modulation

Figure 6: Electro-optic modulation of a high-Q optical cavity resonance. (a) Recorded transmission spectra of the $TE^{0}_{01}$ cavity mode with the RF driving signal at 7 different powers from 0 to 12 mW, with a power step of 2 mW, modulated at 0.6 GHz. (b) Same as (a) but with a modulation frequency of 2.0 GHz. (c) Detailed spectrum with the RF driving signal at 2.0 GHz and a power of 16 mW. The gray lines are Lorentzian lineshapes, and their sum is shown as the red line, fitted by the theory. The dashed lines mark the spacing of the sidebands. (d) Recorded transmission spectra at different RF modulation frequencies from 0.4 to 3.0 GHz, with a frequency step of 0.2 GHz. The RF driving power is 16 mW.

The high efficiency of electro-optic tuning together with the high optical quality of the EOM resonator enables efficient electrical driving of the optical mode into different dynamic regimes. To show this phenomenon, we applied a sinusoidal RF signal at a certain frequency to the EOM and monitored the transmission spectrum of the device by scanning the laser back and forth across the cavity resonance. The laser wavelength is scanned at a repetition rate of $\sim$15 Hz, so we primarily monitored the time-averaged cavity transmission. When the EOM is driven at a modulation frequency of 600 MHz, much smaller than the cavity linewidth of 1.4 GHz, increasing the driving power simply broadens the transmission spectrum into one with two shallow side lobes, as shown in Fig. 6(a), with the broadened spectral linewidth dependent on the driving power. This is a typical signature of resonance modulation in the sideband-unresolved regime, where the cavity resonance adiabatically follows the electric driving signal in a sinusoidal fashion, resulting in a broadened average transmission spectrum as shown in Fig. 6(a). When the modulation frequency is increased to 2.0 GHz, greater than the cavity linewidth, the cavity is too slow to follow the electro-optic modulation, which results in the frequency conversion of photons into sidebands with a frequency separation equal to the modulation frequency. Consequently, the transmission spectrum transforms into a multi-resonance spectrum, as shown in Fig. 6(b).
Increasing the electrical driving power now does not perturb the positions of the resonance dips, but rather changes their relative magnitudes, since the magnitudes of the created sidebands depend on the driving amplitude Usman19. This phenomenon is shown more clearly in Fig. 6(c), where a driving power of 16 mW (corresponding to a peak-to-peak driving voltage of ${\rm V_{pp}}=2.5$ V) splits the cavity resonance into five resonances with notable magnitudes (black curve), resulting in a cavity transmission with five side lobes (blue curve).

Electro-optic modulation enables arbitrary modulation of the cavity resonance within the bandwidth allowed by the driving circuit. This is in strong contrast to piezoelectric acoustic modulation, which is confined to the vicinity of the mechanical resonance frequency Amir19; Loncar196; Piazza19. Such flexibility allows us to observe the direct transition between the adiabatic and non-adiabatic driving regimes simply by continuously sweeping the modulation frequency across the cavity linewidth. Figure 6(d) shows an example. When the modulation frequency is below 1.0 GHz, the transmission spectrum remains fairly similar regardless of the modulation frequency, as expected from the adiabatic driving discussed above. However, when the modulation frequency is tuned above 1.0 GHz towards the cavity linewidth, the two side lobes move towards each other and the spectral shape is considerably distorted, until around 1.8 GHz, where the transmission spectrum splits into three lobes, with the two side lobes located about 1.8 GHz from the center. A further increase of the modulation frequency shifts the two side lobes apart accordingly, with decreased amplitude, while the position of the center lobe remains unchanged, as expected from the non-adiabatic driving. The flexible electro-optic modulation shown here may offer a convenient method for controlling the spectrotemporal properties of photons inside the cavity and for creating exotic quantum states Usman19 that are crucial for quantum photonic applications.

Figure 7: High-speed electro-optic switching. (a) Measured scattering parameter $S_{21}$ for one device with Q around 14000 (blue) and another with Q around 20000 (orange). The gray dashed line marks the 3-dB cutoff frequency, and the gray regions represent the bandwidth limits of the two devices, respectively. The inset shows the $S_{11}$ reflection scattering parameter for both devices. (b) and (c) Eye diagrams of the photonic crystal EOM, measured with a $2^{7}-1$ NRZ PRBS at a driving voltage of ${\rm V_{pp}}=2$ V.

## Electro-optic switching

The electro-optic modulation demonstrated in the previous section indicates the potential for high-speed operation of the EOMs. To show this feature, we selected another similar device on the same chip, which has a lower loaded optical Q of 14000. Figure 7(a) shows the electro-optic modulation response of the device (blue curve), which exhibits a 3-dB modulation bandwidth up to around 17.5 GHz. This value essentially reaches the photon-lifetime limit of the EOM cavity ($\sim$11 ps), as the electrode circuit has a much broader spectral response, as indicated by the flat $S_{11}$ reflection spectrum shown in the inset of Fig. 7(a). As the modulation bandwidth is primarily set by the optical Q of the device, it can be engineered flexibly for different application purposes, simply by choosing a device with an appropriate optical Q. The orange curve in Fig.
7(a) shows another example of a device with an optical Q of 20000, which exhibits a 3-dB bandwidth of about 12.5 GHz.

The broad modulation bandwidth of these devices thus enables high-speed electro-optic switching. As an example application, we applied a non-return-to-zero (NRZ) signal with a ($2^{7}-1$)-bit pseudo-random binary sequence (PRBS) to an EOM with a ${\rm V_{pp}}$ of 2.0 V. Figures 7(b) and (c) show the recorded eye diagrams at two different bit rates of 9 and 11 Gb/s, respectively, which show clear open eyes. The demonstrated bit rate is currently limited by our PRBS generator (Agilent 70843B), which has a maximum bit rate of 12 Gb/s. However, the negligible degradation observed between Fig. 7(b) and (c) implies that the EOM could operate at higher bit rates, which is left for future demonstration. The bit switching energy for an NRZ signal is given by $\frac{1}{4}CV_{\rm pp}^{2}$ Miller17, which is about 22 fJ/bit in our EOM. This value is the smallest switching energy ever reported for LN EOMs Wooten00; Gunter07; Reano13; Reano14; Fathpour15; Fathpour16; Amir17; Loncar18; Prather18; Loncar18_2; Shayan18; Fathpour18; Cai19; Cai19_2; Loncar19, clearly showing the high energy efficiency of our devices.

## Discussion and conclusion

The energy efficiency of the LN photonic crystal EOM can be further improved, since our current devices are not optimized. For example, the capacitance of our device can be significantly decreased, since the majority of the metallic parts in the current devices are used for coupling the RF driving signal, which can be removed in a future on-chip integration design. The 50-$\mu$m width of the electrode (Fig. 2, red box) is used primarily for impedance matching to the large metal pad for probe contact, which can be decreased to 3 $\mu$m for fully on-chip operation Notomi19. On the other hand, the 30-$\mu$m length of the electrode is overly conservative, since it covers the full length of the photonic crystal structure, including the injector, mirrors, and the cavity (Fig. 1(e) and Fig. 2). Essentially, only the 10-$\mu$m-long point-defect cavity requires electric driving to achieve electro-optic modulation. Therefore, the electrodes can be shrunk to $10\times 3~{\mu{\rm m}}^{2}$, which would reduce the capacitance considerably to $\sim$0.27 fF ($\sim$1.0 fF if including the integrated wires Notomi19), according to our FEM simulations. In addition, the electrodes are currently placed far from the photonic crystal cavity so as to leave the optical mode intact and achieve a high optical Q. For the application of high-speed electro-optic switching, our simulations show that the electrode-waveguide spacing can be decreased to 1.5 $\mu$m for an optical Q of $\sim$5000 (corresponding to a modulation bandwidth of $\sim$45 GHz), which would improve the modulation efficiency to 2.38 GHz/V. We expect that these optimizations would significantly improve the energy efficiency of the LN photonic crystal EOM, further decreasing the switching energy down to the sub-femtojoule level.

In the current EOMs shown above, light is coupled into and out of the EOM via the same side of the cavity, which is not convenient in practice, since a circulator is required to separate the modulated output light from the laser input. This can be changed simply by engineering the photonic crystal mirror on the other side of the cavity to function as the output port.
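For reference, the switching energies quoted in this and the preceding paragraphs follow directly from the NRZ formula above. A one-line Python sketch evaluating it for the measured capacitance and for the optimized electrode capacitances estimated above (the latter being simulation projections, not measured values):

```python
def nrz_switching_energy(C, V_pp):
    # E_bit = (1/4) * C * V_pp**2 for NRZ driving (Miller17)
    return 0.25 * C * V_pp**2

for label, C in [("measured device", 22e-15),
                 ("optimized, with integrated wires", 1.0e-15),
                 ("optimized electrodes only", 0.27e-15)]:
    E = nrz_switching_energy(C, 2.0)          # V_pp = 2.0 V, as in the experiment
    print(f"{label}: {E * 1e15:.2f} fJ/bit")  # 22.00, 1.00, 0.27 fJ/bit
```

The last two lines illustrate why the optimized designs discussed above are projected to reach the sub-femtojoule level.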
In summary, we have demonstrated high-speed LN EOMs with a broad modulation bandwidth of 17.5 GHz, a significant tuning efficiency of up to 1.98 GHz/V, and an electro-optic modal volume as small as 0.58 $\mu{\rm m}^{3}$. We believe this is the first LN EOM ever reported with such combined device characteristics and modulation performance. With these devices, we are able to demonstrate efficient electrical driving of a high-Q cavity mode in both the adiabatic and non-adiabatic regimes and to observe the transition in between. We are also able to achieve high-speed electro-optic switching at 11 Gb/s, with a switching energy as low as 22 fJ/bit. The demonstration of an energy-efficient and high-speed EOM at the wavelength scale marks an important step towards device miniaturization and high-density photonic integration on the monolithic LN platform, which is expected to find broad applications in communication, computing, microwave signal processing, and quantum photonic information processing.

## Funding Information

National Science Foundation (NSF) (EFMA-1641099, ECCS-1810169, and ECCS-1842691); the Defense Threat Reduction Agency-Joint Science and Technology Office for Chemical and Biological Defense (grant No. HDTRA11810047).

## Acknowledgment

The authors thank Professor Hui Wu and Professor Wayne Knox for the use of their equipment. They also thank Wuxiucheng Wang, Lejie Lu, and Ming Gong for valuable discussions and help with testing. This work was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure (National Science Foundation, ECCS-1542081).

## References

* (1) E. L. Wooten, K. M. Kissa, A. Yi-Yan, E. J. Murphy, D. A. Lafaw, P. F. Hallemeier, D. Maack, D. V. Attanasio, D. J. Fritz, G. J. McBrien, and D. E. Bossi, “A review of lithium niobate modulators for fiber-optic communications systems,” IEEE J. Sel. Top. Quant. Electron. 6, 69 (2000).
* (2) D. Marpaung, J. Yao, and J. Capmany, “Integrated microwave photonics,” Nature Photon. 13, 80 (2019).
* (3) C. Sun, M. T. Wade, Y. Lee, J. S. Orcutt, L. Alloatti, M. S. Georgas, A. S. Waterman, J. M. Shainline, R. R. Avizienis, S. Lin, B. R. Moss, R. Kumar, F. Pavanello, A. H. Atabaki, H. M. Cook, A. J. Ou, J. C. Leu, Y.-H. Chen, K. Asanović, R. J. Ram, M. A. Popović, and V. M. Stojanović, “Single-chip microprocessor that communicates directly using light,” Nature 528, 534-538 (2015).
* (4) D. R. Carlson, D. D. Hickstein, W. Zhang, A. J. Metcalf, F. Quinlan, S. A. Diddams, and S. B. Papp, “Ultrafast electro-optic light with subcycle control,” Science 361, 1358 (2018).
* (5) M. Karpiński, M. Jachura, L. J. Wright, and B. J. Smith, “Bandwidth manipulation of quantum light by an electro-optic time lens,” Nature Photon. 11, 53 (2017).
* (6) G. T. Reed, G. Mashanovich, F. Y. Gardes, and D. J. Thomson, “Silicon optical modulators,” Nature Photon. 4, 518 (2010).
* (7) M. G. Wood, S. Campione, S. Parameswaran, T. S. Luk, J. R. Wendt, D. K. Serkland, and G. A. Keeler, “Gigahertz speed operation of epsilon-near-zero silicon photonic modulators,” Optica 5, 233 (2018).
* (8) J. Ozaki, Y. Ogiso, and S. Nakano, “High-speed modulator for next-generation large-capacity coherent optical networks,” NTT Technical Reviews 16(4), 1 (2018).
* (9) J. Liu, M. Beals, A. Pomerene, S. Bernardis, R. Sun, J. Cheng, L. C. Kimerling, and J. Michel, “Waveguide-integrated, ultralow-energy GeSi electro-absorption modulators,” Nature Photon. 2, 433 (2008).
* (10) S.
2024-09-04T02:54:57.935208
2020-03-06T16:23:42
2003.03298
{ "authors": "Shubham Gupta", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26087", "submitter": "Shubham Gupta", "url": "https://arxiv.org/abs/2003.03298" }
arxiv-papers
# Certain Diophantine tuples in imaginary quadratic fields Shubham Gupta, Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad 211 019, India<EMAIL_ADDRESS>###### Abstract. Let $K$ be an imaginary quadratic field and $\mathcal{O}_{K}$ be its ring of integers. A set $\\{a_{1},a_{2},\cdots,a_{m}\\}\subset\mathcal{O}_{K}\setminus\\{0\\}$ is called a Diophantine $m$-tuple in $\mathcal{O}_{K}$ with $D(-1)$ if $a_{i}a_{j}-1=x_{ij}^{2}$, where $x_{ij}\in\mathcal{O}_{K}$ for all $i,j$ such that $1\leq i<j\leq m$. Here we prove the non-existence of Diophantine $m$-tuples in $\mathcal{O}_{K}$ with $D(-1)$ for $m>36$. ###### Key words and phrases: Diophantine tuples, Imaginary quadratic fields, Pell equation, Simultaneous approximation. ###### 2010 Mathematics Subject Classification: Primary: 11D09, 11R11, Secondary: 11J68. ## 1\. Introduction A set $\\{a_{1},a_{2},\cdots,a_{m}\\}$ of $m$ positive integers is called a Diophantine $m$-tuple with $D(n)$ if $a_{i}a_{j}+n=x_{ij}^{2}$, where $x_{ij}\in\mathbb{Z}$ and $n\in\mathbb{Z}$, for all $1\leq i<j\leq m$. Diophantus found a set of four positive rationals {1/16, 33/16, 17/4, 105/16} with the above property for $n=1$. The first Diophantine $4$-tuple with $D(1)$, namely $\\{1,3,8,120\\}$, was found by Fermat. Baker and Davenport [2] proved that this particular quadruple cannot be extended to a Diophantine $5$-tuple with $D(1)$. From now on, whenever we say an $m$-tuple, we mean a Diophantine $m$-tuple as above. Let $\\{a,b,c\\}$ be a $3$-tuple with $D(1)$. If there exists a $d\in\mathbb{N}$ such that $\\{a,b,c,d\\}$ is a $4$-tuple with $D(1)$, then there exist $x,y,z\in\mathbb{Z}$ such that $ad+1=x^{2},\quad bd+1=y^{2},\quad\text{and }cd+1=z^{2}.$ Hence we get an elliptic curve $E$ over $\mathbb{Q}$: $E:(xyz)^{2}=(ad+1)(bd+1)(cd+1).$ Since the number of integral points on an elliptic curve over $\mathbb{Q}$ is finite ([13, page 176]), the number of possible choices of $d$ is finite. Over the years, many researchers have found examples of $3$- and $4$-tuples. In 2001, Dujella [5] proved that there are at most finitely many Diophantine $8$-tuples with $D(1)$ and that no Diophantine $9$-tuple with $D(1)$ exists. In 2004, he improved this result and proved that no Diophantine $6$-tuple with $D(1)$ exists and that there are at most finitely many Diophantine $5$-tuples with $D(1)$ (see [6]). There was a ‘folklore’ conjecture that no Diophantine $5$-tuple with $D(1)$ exists. This was recently (in 2019) settled by B. He et al. [9] in a pioneering work. Let $S(n)=\max\\{|A|:A\text{ is a Diophantine }m\text{-tuple with }D(n)\\}.$ Thus, from the work of He et al., $S(1)\leq 4$. Dujella and Fuchs [7] showed that there do not exist Diophantine $5$-tuples with $D(-1)$. Dujella, Fuchs and Filipin [8] also proved that there exist at most finitely many Diophantine $4$-tuples with $D(-1)$. Furthermore, they showed that any such Diophantine $4$-tuple $\\{a_{1},\cdots,a_{4}\\}$ with $D(-1)$ must satisfy $a_{4}<10^{903}$. This bound was further reduced to $3.01\times 10^{60}$ by Trudgian [14]. ###### Definition 1.1. A set $\\{a_{1},a_{2},\cdots,a_{m}\\}\subset\mathcal{O}_{K}\setminus\\{0\\}$ is called a Diophantine $m$-tuple in $\mathcal{O}_{K}$ with $D(n)$ if $a_{i}a_{j}+n=x_{ij}^{2}$, $x_{ij}\in\mathcal{O}_{K}$ for all $1\leq i<j\leq m$. For the remainder of the article, $m$ and $n$ carry the same meaning as in Definition 1.1 above.
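The defining property is easy to test computationally. The following minimal Python sketch (our own illustration, not part of the paper) checks Fermat's quadruple over $\mathbb{Z}$ and the $D(-1)$ quadruple $\\{1,2,5,-24\\}$ in $\mathbb{Z}[i]$ exhibited at the end of Section 5; it uses the fact that a rational integer is a square in $\mathbb{Z}[i]$ exactly when it or its negative is a perfect square.

```python
from math import isqrt

def is_square_in_Zi(v):
    # A rational integer v is a square in Z[i] iff v or -v is a perfect
    # square, since (a + b*i)^2 is a rational integer only when a*b = 0.
    w = abs(v)
    return isqrt(w) ** 2 == w

def is_diophantine_tuple(tup, n, square_test):
    # Check that a_i * a_j + n passes the square test for all pairs i < j.
    return all(square_test(tup[i] * tup[j] + n)
               for i in range(len(tup)) for j in range(i + 1, len(tup)))

# Fermat's quadruple with D(1) over the rational integers:
print(is_diophantine_tuple([1, 3, 8, 120], 1,
                           lambda v: v >= 0 and isqrt(v) ** 2 == v))  # True
# The D(-1) quadruple {1, 2, 5, -24} in Z[i] from Section 5:
print(is_diophantine_tuple([1, 2, 5, -24], -1, is_square_in_Zi))      # True
```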
In 1997, Dujella proved that there does not exist a Diophantine $4$-tuple in $\mathbb{Z}[i]$ with $D(a+bi)$, where $b$ is odd or $a\equiv b\equiv 2\pmod{4}$ (see [4]). For $n=1$, Adzaga [1] proved that $m\leq 42$. For $n=-1$, Soldo studied the extension of certain triples to quadruples (see [11], [12]). In this paper, we study the existence of $m$-tuples with $D(-1)$ and obtain the following: ###### Theorem 1.1. Let $K$ be an imaginary quadratic field and $\mathcal{O}_{K}$ be its ring of integers. Then there does not exist a Diophantine $m$-tuple with $D(-1)$ for $m>36$ in $\mathcal{O}_{K}$. Here is a brief outline of how we proceed to prove the above result. We employ techniques similar to those of Adzaga [1]. Let $\\{a,b,c\\}$ be a triple in $\mathcal{O}_{K}$ with $D(-1)$. If $d\in\mathcal{O}_{K}$ is such that $\\{a,b,c,d\\}$ is a quadruple with $D(-1)$, then we get a system of Pellian equations. Using the solutions of these Pellian equations and a result of Jadrijević-Ziegler [10], we will get an upper bound on $d$ in terms of $c$, provided $\\{a,b,c,d\\}$ satisfies some conditions. Further, using the regularity condition (see Section 4 below) on $\\{a,b,c,d\\}$, one gets a lower bound, i.e., $d\geq g(a)$ for some function $g$ in terms of $a$. We use SAGE for the computations and prove Theorem 1.1 by contradiction: the lower and upper bounds on $d$ will give the desired contradiction. ## 2\. System of Pellian equations Let $K=\mathbb{Q}(\sqrt{-D})$ with $D$ a square-free positive integer. We know that $\mathcal{O}_{K}=\mathbb{Z}[\omega]=\\{a+b\omega:a,b\in\mathbb{Z}\\}$, where $\omega=\begin{cases}\sqrt{-D}&\text{if $-D\equiv 2,3\pmod{4}$},\\\ \dfrac{1+\sqrt{-D}}{2}&\text{if $-D\equiv 1\pmod{4}$}.\end{cases}$ If $\alpha=\left(a+\dfrac{b}{2}\right)+\dfrac{b}{2}\sqrt{-D}\in\mathcal{O}_{K}$, then the norm of $\alpha$ is $||\alpha||=\left(a+\dfrac{b}{2}\right)^{2}+\dfrac{Db^{2}}{4},$ and in particular if $\alpha=a+b\sqrt{-D}$, then $||\alpha||=a^{2}+Db^{2}.$ The absolute value of $\alpha\in\mathcal{O}_{K}$ (denoted by $|\alpha|$) is defined as $|\alpha|=\sqrt{||\alpha||}$. When $D=1$, the units in $\mathbb{Z}[i]$ are $\\{\pm 1,\pm i\\}$; when $D=3$, the units are $\left\\{\pm 1,\dfrac{\pm 1\pm\sqrt{-3}}{2}\right\\}$; otherwise the units are $\\{\pm 1\\}$. Notation. Throughout, a triple $\\{a,b,c\\}$ will denote a Diophantine $3$-tuple in $\mathcal{O}_{K}$ with property $D(-1)$ such that $0<|a|\leq|b|\leq|c|$, and similarly for other tuples. Let $r,s,t\in\mathcal{O}_{K}$ be such that $r=\sqrt{ab-1},\ s=\sqrt{ac-1}~~\text{and}~~t=\sqrt{bc-1},$ where $\\{a,b,c\\}$ is a triple as above. ###### Lemma 2.1. Let $\mathcal{A}=\\{a_{1},a_{2},a_{3},\cdots,a_{m}\\}$ be an $m$-tuple in $\mathcal{O}_{K}$ with $D(-1)$. Then, for $m\geq 4$, $a_{i}a_{j}$ is not a square in $\mathcal{O}_{K}$ for all $1\leq i<j\leq m$. Also, for $m\geq 4$, $a_{i}a_{j}$ is not a square in $K$. ###### Proof. Suppose $\\{a,b\\}$ is a pair in $\mathcal{A}$ such that $ab=x^{2}$, where $x\in\mathcal{O}_{K}\setminus\\{0\\}$. Then $ab-1=r^{2}=x^{2}-1\Rightarrow 1=x^{2}-r^{2}=(x-r)(x+r)\Rightarrow x=0\ \text{or}\ r=0.$ As $x\neq 0$, we get $r=0$ and hence $ab=1$. If $D=1$, then $\\{a,b\\}=\\{i,-i\\}$, and it follows that if $\\{a,b,c\\}$ is a triple, then $c$ has to be one of $\\{\pm i\\}$. One can easily check that no such $\\{a,b,c\\}$ is a triple in $\mathcal{O}_{K}$ with $D(-1)$. On the other hand, when $D=3$, we have $a,b\in\Big\\{\dfrac{\pm 1\pm\sqrt{-3}}{2}\Big\\}$. This implies that if $\\{a,b,c\\}$ is a triple, then $c$ is one of $\\{\pm 1\\}$.
Thus only two pairs $\Big\\{\dfrac{1+\sqrt{-3}}{2},\dfrac{1-\sqrt{-3}}{2}\Big\\}$ and $\Big\\{\dfrac{-1+\sqrt{-3}}{2},\dfrac{-1-\sqrt{-3}}{2}\Big\\}$ survive. The corresponding triples are $\Big\\{\dfrac{1+\sqrt{-3}}{2},\dfrac{1-\sqrt{-3}}{2},1\Big\\}~~\text{and}~~\Big\\{\dfrac{-1+\sqrt{-3}}{2},\dfrac{-1-\sqrt{-3}}{2},-1\Big\\}.$ Note also that these pairs $\Big\\{\dfrac{1+\sqrt{-3}}{2},\dfrac{1-\sqrt{-3}}{2}\Big\\}$ and $\Big\\{\dfrac{-1+\sqrt{-3}}{2},\dfrac{-1-\sqrt{-3}}{2}\Big\\}$ cannot be extended to a quadruple. Now if $D\neq 1,3$, then the units are $\pm 1$, so either $a=b=1$ or $a=b=-1$, which is impossible as the elements of a pair are distinct. Hence $ab$ is not a square in $\mathcal{O}_{K}$. Now suppose $ab$ were a square in $K$, say $ab=y^{2}$ with $y\in K$. Then $y$ is a root of the monic polynomial $x^{2}-ab\in\mathcal{O}_{K}[x]$, and since $\mathcal{O}_{K}$ is integrally closed, $y\in\mathcal{O}_{K}$, contradicting the fact that $ab$ is not a square in $\mathcal{O}_{K}$. Hence $ab$ is not a square in $K$. ∎ Let us suppose $\\{a,b,c\\}$ extends to a quadruple $\\{a,b,c,d\\}$. Thus there exist $x,y,z\in\mathcal{O}_{K}$ such that $ad-1=x^{2},\quad bd-1=y^{2},\quad cd-1=z^{2}.$ Thus we obtain a system of Pellian equations: $az^{2}-cx^{2}=c-a$ (2.1) $bz^{2}-cy^{2}=c-b$ (2.2) with $d=\dfrac{z^{2}+1}{c}$. ## 3\. Upper bound on $d$ in terms of $c$ Let $\\{a,b,c,d\\}$ be a quadruple. We will see that if $|c|$ exceeds a suitable power of $|b|$, then $|d|$ is bounded by a power of $|c|$. In 1998, Bennett [3] proved a theorem on the simultaneous approximation of square roots of rationals which are close to one. Jadrijević and Ziegler proved the following analogue of Bennett's theorem. ###### Lemma 3.1. $($Jadrijević-Ziegler [10, Theorems 7.3 and 7.4]$)$ Let $\theta_{i}=\sqrt{1+\dfrac{a_{i}}{T}}$, $i=1,2$, with $a_{1},a_{2}$ distinct algebraic integers in $K$, and let $T$ be any algebraic integer of $K$. Further, let $M=\max\\{|a_{1}|,|a_{2}|\\}$, $|T|>M$, $a_{0}=0$ and $L=\dfrac{27}{16|a_{1}|^{2}|a_{2}|^{2}|a_{1}-a_{2}|^{2}}(|T|-M)^{2}>1.$ Then $\max\\{|\theta_{1}-p_{1}/q|,|\theta_{2}-p_{2}/q|\\}>c_{1}|q|^{-\lambda}$ (3.1) for all algebraic integers $p_{1},p_{2},q\in K$, where $\displaystyle\lambda$ $\displaystyle=1+\dfrac{\log P}{\log L},\quad c_{1}^{-1}=4pP(\max\\{1,2l\\})^{\lambda-1},$ $\displaystyle l$ $\displaystyle=\dfrac{27|T|}{64(|T|-M)},\quad p=\sqrt{\dfrac{2|T|+3M}{2|T|-2M}},$ $\displaystyle P$ $\displaystyle=16\dfrac{|a_{1}|^{2}|a_{2}|^{2}|a_{1}-a_{2}|^{2}}{\min\\{|a_{1}|,|a_{2}|,|a_{1}-a_{2}|\\}^{3}}(2|T|+3M).$ ###### Lemma 3.2. Let $(x,y,z)$ be a solution of the system of equations (2.1) and (2.2). Assume $|c|>4|b|$ and $|a|\geq 2$. Let $\theta_{1}^{(1)}=\pm\dfrac{s}{a}\sqrt{\dfrac{a}{c}},\quad\theta_{1}^{(2)}=-\theta_{1}^{(1)}$ and $\theta_{2}^{(1)}=\pm\dfrac{t}{b}\sqrt{\dfrac{b}{c}},\quad\theta_{2}^{(2)}=-\theta_{2}^{(1)},$ with the signs chosen so that $\Big|\theta_{1}^{(1)}-\dfrac{sx}{az}\Big|\leq\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|$ and $\Big|\theta_{2}^{(1)}-\dfrac{ty}{bz}\Big|\leq\Big|\theta_{2}^{(2)}-\dfrac{ty}{bz}\Big|$. Then $\Big|\theta_{1}^{(1)}-\dfrac{sbx}{abz}\Big|\leq\dfrac{|s||c-a|}{|a|\sqrt{|ac|}}\times\dfrac{1}{|z|^{2}}<\dfrac{21|c|}{16|a|}\times\dfrac{1}{|z|^{2}}$ (3.2) and $\Big|\theta_{2}^{(1)}-\dfrac{tay}{abz}\Big|\leq\dfrac{|t||c-b|}{|b|\sqrt{|bc|}}\times\dfrac{1}{|z|^{2}}<\dfrac{21|c|}{16|a|}\times\dfrac{1}{|z|^{2}}.$ (3.3) ###### Proof. We prove inequality (3.2); inequality (3.3) is proven similarly.
Consider $\Big|\theta_{1}^{(1)}-\dfrac{sx}{az}\Big|=\dfrac{\Big|\theta_{1}^{(1)}-\dfrac{sx}{az}\Big|\times\Big|\theta_{1}^{(1)}+\dfrac{sx}{az}\Big|}{\Big|\theta_{1}^{(1)}+\dfrac{sx}{az}\Big|}=\dfrac{\Big|\Big(\theta_{1}^{(1)}\Big)^{2}-\dfrac{s^{2}x^{2}}{a^{2}z^{2}}\Big|}{\Big|\theta_{1}^{(1)}+\dfrac{sx}{az}\Big|}.$ We substitute $\theta_{1}^{(2)}=-\theta_{1}^{(1)}$ in the above and get $\displaystyle\dfrac{\Big|\Big(\theta_{1}^{(1)}\Big)^{2}-\dfrac{s^{2}x^{2}}{a^{2}z^{2}}\Big|}{\Big|\theta_{1}^{(1)}+\dfrac{sx}{az}\Big|}$ $\displaystyle=\Big|\dfrac{s^{2}}{a^{2}}\Big|\times\Big|\dfrac{a^{2}}{s^{2}}\times\Big(\theta_{1}^{(1)}\Big)^{2}-\dfrac{x^{2}}{z^{2}}\Big|\times\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|^{-1}$ $\displaystyle=\Big|\dfrac{s^{2}}{a^{2}}\Big|\times\Big|\dfrac{a}{c}-\dfrac{x^{2}}{z^{2}}\Big|\times\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|^{-1}$ $\displaystyle=\Big|\dfrac{s^{2}}{a^{2}}\Big|\times\dfrac{|az^{2}-cx^{2}|}{|c||z|^{2}}\times\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|^{-1}$ $\displaystyle=\Big|\dfrac{s^{2}}{a^{2}}\Big|\times\dfrac{|c-a|}{|c||z|^{2}}\times\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|^{-1}.$ Next, we bound the last factor from below. By the choice of signs, $\displaystyle 2\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|$ $\displaystyle\geq\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|+\Big|\theta_{1}^{(1)}-\dfrac{sx}{az}\Big|$ $\displaystyle\geq\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}-\Big(\theta_{1}^{(1)}-\dfrac{sx}{az}\Big)\Big|$ $\displaystyle=\Big|\theta_{1}^{(2)}-\theta_{1}^{(1)}\Big|=2\Big|\dfrac{s}{a}\sqrt{\dfrac{a}{c}}\Big|.$ Thus $\Big|\theta_{1}^{(2)}-\dfrac{sx}{az}\Big|\geq\Big|\dfrac{s}{a}\sqrt{\dfrac{a}{c}}\Big|.$ This implies that $\Big|\theta_{1}^{(1)}-\dfrac{sbx}{abz}\Big|\leq\dfrac{|s||c-a|}{|a|\sqrt{|ac|}}\times\dfrac{1}{|z|^{2}}.$ To prove the other part of inequality (3.2), we want to show that $|\sqrt{ac-1}|\times|c-a|<(21/16)\times|c|\times\sqrt{|ac|}$, and this holds if and only if $\Big|\sqrt{1-\dfrac{1}{ac}}\Big|<\dfrac{21}{16}\times\dfrac{|c|}{|c-a|}.$ Now $|c|>4|b|\geq 4|a|$ implies that $\dfrac{21}{16}\times\dfrac{|c|}{|c-a|}\geq\dfrac{21}{20}$, and then $\displaystyle\Big|\sqrt{1-\dfrac{1}{ac}}\Big|$ $\displaystyle=$ $\displaystyle\sqrt{\Big|1-\dfrac{1}{ac}\Big|}$ $\displaystyle\leq$ $\displaystyle\sqrt{1+\dfrac{1}{|ac|}}<\dfrac{\sqrt{17}}{4}$ $\displaystyle<$ $\displaystyle\dfrac{21}{20}$ $\displaystyle\leq$ $\displaystyle\dfrac{21}{16}\times\dfrac{|c|}{|c-a|}.$ ∎ Thus from Lemma 3.2 we conclude that $\displaystyle\Big|\theta_{1}^{(2)}+\dfrac{sbx}{abz}\Big|$ $\displaystyle=$ $\displaystyle\Big|\theta_{1}^{(1)}-\dfrac{sbx}{abz}\Big|$ $\displaystyle\leq$ $\displaystyle\dfrac{|s||c-a|}{|a|\sqrt{|ac|}}\times\dfrac{1}{|z|^{2}}$ $\displaystyle<$ $\displaystyle\dfrac{21|c|}{16|a|}\times\dfrac{1}{|z|^{2}},$ and $\displaystyle\Big|\theta_{2}^{(2)}+\dfrac{tay}{abz}\Big|$ $\displaystyle=$ $\displaystyle\Big|\theta_{2}^{(1)}-\dfrac{tay}{abz}\Big|$ $\displaystyle\leq$ $\displaystyle\dfrac{|t||c-b|}{|b|\sqrt{|bc|}}\times\dfrac{1}{|z|^{2}}$ $\displaystyle<$ $\displaystyle\dfrac{21|c|}{16|a|}\times\dfrac{1}{|z|^{2}}.$ ###### Lemma 3.3. Let $\\{a,b,c,d\\}$ be a quadruple such that $|b|\geq(3/2)|a|$, $|b|\geq 22$, $|a|\geq 2$ and $|c|>|b|^{16}$. Then $|d|<(3956)^{10}|c|^{24}.$ ###### Proof. Let $\theta_{1}=\dfrac{s}{a}\sqrt{\dfrac{a}{c}}$ and $\theta_{2}=\dfrac{t}{b}\sqrt{\dfrac{b}{c}}$.
Then $\displaystyle\theta_{1}$ $\displaystyle=$ $\displaystyle\sqrt{\dfrac{s^{2}a}{a^{2}c}}=\sqrt{1+\dfrac{(-b)}{abc}},~~\text{and}$ $\displaystyle\theta_{2}$ $\displaystyle=$ $\displaystyle\sqrt{\dfrac{t^{2}b}{b^{2}c}}=\sqrt{1+\dfrac{(-a)}{abc}}.$ If we write $a_{1}=-b$, $a_{2}=-a$, $T=abc$ and $M=|b|$, then we claim that $l=\dfrac{27|abc|}{64(|abc|-|b|)}<\dfrac{1}{2}.$ Proving this claim is equivalent to showing that $27|abc|<32(|abc|-|b|)$, which holds if and only if $|ac|>(32/5)$. By hypothesis, $|ac|\geq|b|\geq 22>(32/5)$, and thus the claim holds. Now $p=\sqrt{\dfrac{2|abc|+3|b|}{2|abc|-2|b|}}=\sqrt{1+\dfrac{5}{2(|ac|-1)}}\leq\sqrt{\dfrac{47}{42}}.$ Since $l<\dfrac{1}{2}$, we have $c_{1}^{-1}=4pP$, which gives $c_{1}\geq\dfrac{1}{4P\sqrt{47/42}}=\dfrac{\sqrt{42}}{\sqrt{47}(4P)}.$ Consider now $P=16\times\dfrac{|-b|^{2}|-a|^{2}|-b+a|^{2}}{\min\\{|-a|,|-b|,|-a+b|\\}^{3}}\times\Big(2|abc|+3|b|\Big).$ Since $|-b+a|\geq|b|-|a|\geq\Big(\dfrac{3}{2}\times|a|-|a|\Big)=\dfrac{|a|}{2},$ we get $\min\\{|a|,|b|,|a-b|\\}\geq\dfrac{|a|}{2}$. Thus $P\leq 128\cdot\dfrac{|b|^{2}|a|^{2}|b-a|^{2}|b|(2|ac|+3)}{|a|^{3}}.$ Hence $P\leq\dfrac{128|b|^{3}|b-a|^{2}(2|ac|+3)}{|a|}.$ (3.4) Let us now look at $L=\dfrac{27}{16|-b|^{2}|-a|^{2}|-b+a|^{2}}\times\Big(|abc|-|b|\Big)^{2}=\dfrac{27(|ac|-1)^{2}}{16|a|^{2}|b-a|^{2}}.$ We claim that $L>1$, which is equivalent to showing $27(|ac|-1)^{2}>16|a|^{2}|b-a|^{2}$. This holds if and only if $3\sqrt{3}(|ac|-1)>4|a||b-a|$, which is equivalent to $\dfrac{3\sqrt{3}}{4}\times(|ac|-1)>|a||b-a|.$ Since $|ac|-1>|a||b|^{3}-1>2|a|^{2}|b|-1>|a||b|+|a|^{2}\geq|ab-a^{2}|=|a||b-a|,$ the claim is validated. Clearly $P>1$ and so $\lambda>1$. In fact, $\lambda<1.8$. Indeed, observe that $\lambda=1+\dfrac{\log P}{\log L}<1.8$ holds if and only if $P<L^{0.8}$, which is equivalent to $P<\Big(\dfrac{27}{16}\Big)^{0.8}\times\Bigg(\dfrac{|ac|-1}{|a|(|b-a|)}\Bigg)^{1.6}.$ In view of inequality (3.4), it suffices to show $\dfrac{128|b|^{3}|b-a|^{2}(2|ac|+3)}{|a|}<\Big(\dfrac{27}{16}\Big)^{0.8}\cdot\Big(\dfrac{|ac|-1}{|a||b-a|}\Big)^{1.6}.$ After rearranging the above inequality, $128|b|^{3}|b-a|^{3.6}|a|^{0.6}(2|ac|+3)<\Big(\dfrac{27}{16}\Big)^{0.8}(|ac|-1)^{1.6}.$ We see that it suffices to show $128|b|^{3}|b-a|^{3.6}(9/4)|a|^{0.6}<\Big(\dfrac{27}{16}\Big)^{0.8}(|ac|-1)^{0.6},$ (3.5) as $|ac|-1>\dfrac{4}{9}(2|ac|+3)$. Since the function $f(t)=(t-1)^{0.6}-t^{0.6}+1$ vanishes at $t=1$ and is increasing, we have $|ac|^{0.6}-1<(|ac|-1)^{0.6}$. Thus (using $|c|>|b|^{16}$) $|a|^{0.6}|b|^{9.6}-1=|a|^{0.6}|b|^{(16)\cdot(0.6)}-1<|ac|^{0.6}-1<(|ac|-1)^{0.6}.$ For proving inequality (3.5), it therefore suffices to show $128\times(9/4)|b|^{3}|b-a|^{3.6}|a|^{0.6}<\Big(\dfrac{27}{16}\Big)^{0.8}(|b|^{9.6}-1).$ (3.6) Since we have $|a|\leq\dfrac{2}{3}|b|$, $\displaystyle\Big(\dfrac{16}{27}\Big)^{0.8}\times 128\times(9/4)|b|^{3}|b-a|^{3.6}|a|^{0.6}$ $\displaystyle<$ $\displaystyle\Big(\dfrac{16}{27}\Big)^{0.8}\times 128\times(9/4)|b|^{3}(5/3)^{3.6}\cdot|b|^{3.6}\cdot\Big|\dfrac{2b}{3}\Big|^{0.6}$ $\displaystyle<$ $\displaystyle 936|b|^{7.2}.$ Thus inequality (3.6) holds if $936|b|^{7.2}<|b|^{9.6}-1$. This is obvious since the function $g(t)=t^{9.6}-936t^{7.2}-1$ is an increasing function for $t\geq 15.5$ and $g(18)>0$. Hence our claim is proved.
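The numerical constants appearing in this claim can be double-checked directly; the following is a small Python sanity check of ours (floating point precision is more than sufficient at this scale), not part of the original paper:

```python
# Verify the constant 936 and the behaviour of g(t) = t^9.6 - 936 t^7.2 - 1.
c = (16 / 27) ** 0.8 * 128 * (9 / 4) * (5 / 3) ** 3.6 * (2 / 3) ** 0.6
print(c)                                  # ~934.8, indeed below 936

g = lambda t: t ** 9.6 - 936 * t ** 7.2 - 1
t_crit = (936 * 7.2 / 9.6) ** (1 / 2.4)   # unique zero of g'(t) for t > 0
print(t_crit)                             # ~15.35, so g increases for t >= 15.5
print(g(18) > 0)                          # True, hence g(t) > 0 for all t >= 18
```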
Proceeding further, with $\theta_{1},\theta_{2}$ as above, take $p_{1}=\pm sbx$, $p_{2}=\pm tay$, $q=abz$ (with suitably chosen signs), and upon applying Lemmas 3.1 and 3.2, we get $\dfrac{21}{16}\cdot\dfrac{|c|}{|a|}\cdot\dfrac{1}{|z|^{2}}>\dfrac{\sqrt{42}}{\sqrt{47}(4P)}|abz|^{-\lambda}.$ From inequality (3.4), we get $\dfrac{21}{16}\cdot\dfrac{|c|}{|a|}\cdot\dfrac{1}{|z|^{2}}>\dfrac{\sqrt{42}|a||abz|^{-\lambda}}{\sqrt{47}(4\cdot 128)\cdot|b|^{3}|b-a|^{2}(2|ac|+3)}.$ This implies that $\dfrac{21}{16}\dfrac{4\sqrt{47}\times 128}{\sqrt{42}}\dfrac{|c|}{|a|^{2}}|b|^{3}|b-a|^{2}(2|ac|+3)\cdot|ab|^{\lambda}>|z|^{2-\lambda}>|z|^{0.2}.$ Hence $|z|^{0.2}<712|c|\cdot 3\cdot|ac||b-a|^{2}|b|^{3+\lambda}|a|^{\lambda-2}<712\times 3|c|^{2}\cdot(2/3)|b|(5/3)^{2}|b|^{2}|b|^{4.8}.$ Using $|c|>|b|^{16}$, i.e. $|b|<|c|^{1/16}$, one further gets $|z|^{0.2}<3956\cdot|c|^{2}|b|^{7.8}<3956|c|^{2.49}.$ Hence $|z|<(3956)^{5}|c|^{12.45}$, and finally $|d|=\dfrac{|z^{2}+1|}{|c|}\leq\dfrac{|z|^{2}+1}{|c|}\leq\dfrac{(3956)^{10}|c|^{24.9}+1}{|c|}<3956^{10}|c|^{24}.$ ∎ ## 4\. Lower bound on $d$ A triple $\\{a,b,c\\}$ is said to be regular if $c=a+b\pm 2r$ (see the notation above). If $\\{a,b,c,d\\}$ is a quadruple, then this regularity criterion yields a lower bound on $d$ in terms of $a$. The following lemmas make this precise. ###### Lemma 4.1. Let $\\{a,b,c,d\\}$ be a quadruple with $5<|a|\leq|b|\leq|c|\leq|d|$. Then at least one of $\\{a,b,c\\}$ and $\\{a,b,d\\}$ is not regular. ###### Proof. Suppose, if possible, that both $\\{a,b,c\\}$ and $\\{a,b,d\\}$ are regular, say $c=a+b+2r$ and $d=a+b-2r$. Substituting the value of $r$ gives $cd-1=(a-b)^{2}+3$. As $\\{c,d\\}$ is a pair in $\mathcal{O}_{K}$ with $D(-1)$, there exists $z\in\mathcal{O}_{K}$ such that $cd-1=z^{2}$. Thus $z^{2}=(a-b)^{2}+3$ and therefore $3=(z-(a-b))(z+(a-b))$. We set $X=(z-(a-b))$ and $Y=(z+(a-b))$. Then $XY=3$ (4.1) and $X+Y=2z.$ (4.2) Taking norms on both sides of (4.1), we get $||X||\times||Y||=||3||=9$. Case (i): $||X||=1$ or $||Y||=1$. Assume that $||X||=1$; then $X$ is a unit. If $D=1$, then by equation (4.1), $(X,Y)\in\\{(1,3),(-1,-3),(i,-3i),(-i,3i)\\}$. This implies that $X+Y=\pm 4,\pm 2i$, and therefore $z=\pm 2,\pm i$ by equation (4.2). Since $cd-1=z^{2}$ and $c,d\neq 0$, we must have $cd=5$. Thus we get $|d|\leq 5$, which contradicts our hypothesis. If $D=3$, then again using equation (4.1), we get $\begin{split}(X,Y)\in\Bigg\\{(1,3),(-1,-3),\Big(\dfrac{1+\sqrt{-3}}{2},\dfrac{3(1-\sqrt{-3})}{2}\Big),\Big(\dfrac{1-\sqrt{-3}}{2},\dfrac{3(1+\sqrt{-3})}{2}\Big),\\\ \Big(\dfrac{-1+\sqrt{-3}}{2},\dfrac{3(-1-\sqrt{-3})}{2}\Big),\Big(\dfrac{-1-\sqrt{-3}}{2},\dfrac{3(-1+\sqrt{-3})}{2}\Big)\Bigg\\}.\end{split}$ From equation (4.2), it follows that $2z=\pm 4,\pm 2\pm\sqrt{-3}$. Since $z\in\mathcal{O}_{K}$, we must have $z=\pm 2$. Thus $cd=5$, which implies $|d|\leq 5$, a contradiction. If $D\neq 1,3$, then $(X,Y)\in\\{(1,3),(-1,-3)\\}$ by equation (4.1). Again using equation (4.2), we get $2z=\pm 4$ and hence $cd=5$. Again this gives $|d|\leq 5$, a contradiction. Case (ii): $||X||=||Y||=3$. If $D=1$, then $||X||=3=a_{1}^{2}+b_{1}^{2}$ with $a_{1},b_{1}\in\mathbb{Z}$, which is not possible. If $D=2$, then $||X||=3=a_{1}^{2}+2b_{1}^{2}$ with $a_{1},b_{1}\in\mathbb{Z}$. This implies that $\displaystyle(X,Y)\in$ $\displaystyle\Big\\{\Big(1+\sqrt{-2},1-\sqrt{-2}\Big),\Big(1-\sqrt{-2},1+\sqrt{-2}\Big),$ $\displaystyle\Big(-1+\sqrt{-2},-1-\sqrt{-2}\Big),\Big(-1-\sqrt{-2},-1+\sqrt{-2}\Big)\Big\\}.$ Then $z=\pm 1$ and therefore $cd=2$.
We conclude that $|d|\leq 2$, again a contradiction. If $D>3$ and $D\equiv 1,2\pmod{4}$, then $||X||=a_{1}^{2}+Db_{1}^{2}=3$ with $a_{1},b_{1}\in\mathbb{Z}$, which is again not possible. If $D=3$, then $||X||=\Big(a_{1}+\dfrac{b_{1}}{2}\Big)^{2}+\dfrac{3b_{1}^{2}}{4}=3$ with $a_{1},b_{1}\in\mathbb{Z}$. From equation (4.1), we get $\begin{split}(X,Y)\in\Bigg\\{\Big(\dfrac{3}{2}+\dfrac{\sqrt{-3}}{2},\dfrac{3}{2}-\dfrac{\sqrt{-3}}{2}\Big),\Big(\dfrac{-3}{2}+\dfrac{\sqrt{-3}}{2},\dfrac{-3}{2}-\dfrac{\sqrt{-3}}{2}\Big),\Big(\dfrac{3}{2}-\dfrac{\sqrt{-3}}{2},\dfrac{3}{2}+\dfrac{\sqrt{-3}}{2}\Big),\\\ \Big(\dfrac{-3}{2}-\dfrac{\sqrt{-3}}{2},\dfrac{-3}{2}+\dfrac{\sqrt{-3}}{2}\Big),\Big(\sqrt{-3},-\sqrt{-3}\Big),\Big(-\sqrt{-3},\sqrt{-3}\Big)\Bigg\\}.\end{split}$ Using equation (4.2), $2z=0,\pm 3$. Since $z\in\mathcal{O}_{K}$, we get $z=0$ and therefore $cd=1$. This implies that $|d|\leq 1$, which is a contradiction. In the same way, we can prove the lemma for $D\geq 7$ with $D\equiv 3\pmod{4}$. ∎ ###### Lemma 4.2. Let $\\{a,b,c,d\\}$ be a quadruple with $10\leq|a|\leq|b|\leq|c|\leq|d|$. Then $|d|\geq\dfrac{|ab|}{(330/65)}\geq\dfrac{|a|^{2}}{(330/65)}$. ###### Proof. By Lemma 4.1, at least one of $\\{a,b,c\\}$ and $\\{a,b,d\\}$ is not regular; we may assume that $\\{a,b,d\\}$ is not regular (otherwise the argument below applies with $c$ in place of $d$, and $|d|\geq|c|$). Define $c_{\pm}=a+b+d-2abd\pm 2rxy,$ where $x,y\in\mathcal{O}_{K}$ are such that $ad-1=x^{2}$ and $bd-1=y^{2}$. Claim: $c_{\pm}\neq 0$. Suppose $c_{+}=0$ or $c_{-}=0$. This implies that $a+b+d(1-2ab)=\mp 2rxy$. Squaring and rearranging this equation, we get $d^{2}-2d(a+b)+(a-b)^{2}+4=0$. Therefore $d=a+b+2r$ or $d=a+b-2r$. Since $\\{a,b,d\\}$ is not regular, this is a contradiction. Consider $c_{+}c_{-}=(a+b+d-2abd)^{2}-4(rxy)^{2}=a^{2}+b^{2}+d^{2}-2ab-2ad-2bd+4$. Therefore $|c_{+}c_{-}|\leq|a|^{2}+|b|^{2}+|d|^{2}+2|ab|+2|ad|+2|bd|+4\leq 10|d|^{2}$, using $|a|\leq|b|\leq|d|$ and $4\leq|d|^{2}$; also $|c_{+}+c_{-}|=2|a+b+d-2abd|$. We may assume that $|c_{+}|\geq|c_{-}|$. Since $2|c_{+}|=|c_{+}|+|c_{+}|\geq|c_{+}|+|c_{-}|\geq|c_{+}+c_{-}|=2|a+b+d-2abd|$, this implies that $|c_{+}|\geq|a+b+d-2abd|.$ We have $10\leq|a|\leq|b|\leq|c|\leq|d|$, from which it follows that $|a+b+d|\leq 3|d|\leq\dfrac{3}{99}\cdot|abd|$ (as $|ab|\geq 100>99$). Thus $|c_{+}|\geq|a+b+d-2abd|\geq 2|abd|-|a+b+d|\geq 2|abd|-(3/99)|abd|=\dfrac{65}{33}\cdot|abd|.$ We have proved that $|c_{+}c_{-}|\leq 10|d|^{2}$, which gives $|c_{-}|\leq\dfrac{10|d|^{2}}{|c_{+}|}\leq\dfrac{10|d|^{2}}{(65/33)|abd|}=\dfrac{(330)|d|}{(65)|ab|}$. Since $c_{-}\neq 0$, we have $|c_{-}|\geq 1$ (the norm of a nonzero algebraic integer is at least $1$), and this implies that $\dfrac{330|d|}{65|ab|}\geq 1$. Hence $|d|\geq\dfrac{|ab|}{(330/65)}\geq\dfrac{|a|^{2}}{(330/65)}.$ ∎ ## 5\. Proof of the main theorem Let $\\{a,b,c,d,e\\}$ be a quintuple with $|e|<15$. For $D<226$, one can check by computer that no such quintuple exists, and for $D\geq 226$, it is easily seen that $a,b,c,d,e\in\mathbb{Z}$. Therefore, if $ab-1=(x+y\sqrt{-D})^{2}$, then $2xy=0$, which gives either $x=0$ or $y=0$. Now if $x=0$, then $ab-1=-Dy^{2}$ with $y\neq 0$ (since $ab\neq 1$ by Lemma 2.1), so $|ab-1|\geq 226$; on the other hand, $|ab-1|\leq|ab|+1<226$. Hence $x=0$ is not possible, and thus $y=0$. We conclude that if $\\{a,b,c,d,e\\}$ is a quintuple, then $|e|\geq 15$. Similarly, one can check that if $\\{a,b,c,d\\}$ is a quadruple, then $|d|\geq 12$. Let $\mathcal{A}=\\{a_{1},a_{2},\cdots,a_{m}\\}$ be a Diophantine $m$-tuple in $\mathcal{O}_{K}$ with $D(-1)$ such that $m\geq 37$; we may assume $|a_{1}|\leq|a_{2}|\leq\cdots\leq|a_{m}|$. Then $\\{a_{4},a_{5},a_{6},a_{7}\\}$ is a quadruple, and from Lemma 4.2, we get $|a_{7}|\geq\dfrac{|a_{4}a_{5}|}{(330/65)}\geq\dfrac{12\cdot 15}{(330/65)}>35$.
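This numerical bound, and the iterated bounds derived in the remainder of the proof, are easy to verify; the following is our own Python sanity check, not part of the paper:

```python
q = 330 / 65                                # the constant from Lemma 4.2
print(12 * 15 / q)                          # ~35.45 > 35: the bound on |a_7|
# Iterating Lemma 4.2 five times (a_7 -> a_10 -> ... -> a_22) squares the
# bound at each step, as used below:
print(35 ** 32 / q ** 31 > 10 ** 27)        # True: the bound on |a_22|
print((q ** 31 * 3956 ** 10) ** (1 / 8))    # ~1.7e7, far below 10**27
```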
By applying Lemma 4.2 to the quadruples $\\{a_{7},a_{8},a_{9},a_{10}\\},\\{a_{10},a_{11},a_{12},a_{13}\\}$, $\cdots$, $\\{a_{19},a_{20},a_{21},a_{22}\\}$ respectively, we get the following inequalities: $\displaystyle|a_{10}|\geq\dfrac{|a_{7}|^{2}}{(330/65)},$ $\displaystyle|a_{13}|\geq\dfrac{|a_{10}|^{2}}{(330/65)}\geq\dfrac{|a_{7}|^{4}}{(330/65)^{3}},$ and, continuing in this way, $\displaystyle|a_{22}|\geq\dfrac{|a_{7}|^{32}}{(330/65)^{31}}.$ Consider the quadruples $\\{a_{4},a_{7},a_{22},a_{22+k}\\}$ for $k>0$. Since $\\{a_{1},a_{2},a_{3},a_{4}\\}$ is a quadruple, $|a_{4}|\geq 12$. The quadruple $\\{a_{4},a_{5},a_{6},a_{7}\\}$ implies that $|a_{7}|\geq|a_{5}|\geq 15$, and from Lemma 4.2, $|a_{7}|\geq\dfrac{|a_{4}a_{5}|}{(330/65)}\geq\dfrac{15|a_{4}|}{(330/65)}>\dfrac{3|a_{4}|}{2}$. The inequality $|a_{22}|>|a_{7}|^{16}$ holds if $\dfrac{|a_{7}|^{32}}{(330/65)^{31}}>|a_{7}|^{16}$, and this holds if $|a_{7}|>24$, which is satisfied since $|a_{7}|>35$. By Lemma 3.3, $|a_{22+k}|<3956^{10}|a_{22}|^{24}\quad\text{for }k>0.$ (5.1) Again we apply Lemma 4.2 to the quadruples $\\{a_{22},a_{23},a_{24},a_{25}\\},\\{a_{25},a_{26},a_{27},a_{28}\\}$, $\cdots$, $\\{a_{34},a_{35},a_{36},a_{37}\\}$ respectively, and get the following inequalities: $\displaystyle|a_{25}|\geq\dfrac{|a_{22}|^{2}}{(330/65)},$ $\displaystyle|a_{28}|\geq\dfrac{|a_{25}|^{2}}{(330/65)}\geq\dfrac{|a_{22}|^{4}}{(330/65)^{3}},$ and, continuing in this way, $\displaystyle|a_{37}|\geq\dfrac{|a_{22}|^{32}}{(330/65)^{31}}.$ From inequality (5.1), $3956^{10}|a_{22}|^{24}>|a_{37}|$. Claim: $\dfrac{|a_{22}|^{32}}{(330/65)^{31}}>3956^{10}|a_{22}|^{24}$. This is equivalent to showing $|a_{22}|^{8}\geq(330/65)^{31}\cdot 3956^{10}$, and this inequality holds if $|a_{22}|>1.8\times 10^{7}$. Since $|a_{22}|\geq\dfrac{|a_{7}|^{32}}{(330/65)^{31}}\geq\dfrac{35^{32}}{(330/65)^{31}}>10^{27}$, our claim is proved. Finally, we get $3956^{10}|a_{22}|^{24}>|a_{37}|\geq\dfrac{|a_{22}|^{32}}{(330/65)^{31}}>3956^{10}|a_{22}|^{24},$ which is a contradiction. Hence $m\leq 36$. This completes the proof. We have an example of a quadruple in $\mathbb{Z}[i]$ with $D(-1)$, namely $\\{1,2,5,-24\\}$. Unfortunately, we do not know about the existence of Diophantine $m$-tuples in $\mathcal{O}_{K}$ with $D(-1)$ for $m\geq 5$. ## 6\. Acknowledgement The author is indebted to Prof. Kalyan Chakraborty for his suggestions and for carefully going through the manuscript. The author is also thankful to Dr. A. Hoque for introducing him to this area and for his encouragement throughout. It is also a pleasure to acknowledge Mr. Mohit Mishra and Mr. Rishabh Agnihotri for their support throughout the preparation of this manuscript and for providing all required assistance. ## References * [1] N. Adzaga, On the size of Diophantine m-tuples in imaginary quadratic number rings, Bull. Math. Sci. 9(3) (2019), 1950020. * [2] A. Baker and H. Davenport, The equations $3x^{2}-2=y^{2}$ and $8x^{2}-7=z^{2}$, Quart. J. Math. Oxford Ser. (2) 20 (1969), 129-137. * [3] M. A. Bennett, On the number of solutions of simultaneous Pell equations, J. Reine Angew. Math. 498 (1998), 173-199. * [4] A. Dujella, The problem of Diophantus and Davenport for Gaussian integers, Glas. Mat. Ser. III 32 (1997), 1-10. * [5] A. Dujella, An absolute bound for the size of Diophantine m-tuples, J. Number Theory 89 (2001), 126-150. * [6] A. Dujella, There are only finitely many Diophantine quintuples, J. Reine Angew. Math. 566 (2004), 183-214. * [7] A. Dujella and C. Fuchs, Complete solution of a problem of Diophantus and Euler, J. London Math. Soc. 71 (2005), 33-52. * [8] A. Dujella, A. Filipin and C.
Fuchs, Effective solution of the D(-1)-quadruple conjecture, Acta Arith. 128 (2007), 319-338. * [9] B. He, A. Togbé, V. Ziegler, There is no Diophantine quintuple, Trans. Amer. Math. Soc. 371 (2019), 6665-6709. * [10] B. Jadrijević and V. Ziegler, A system of relative Pellian equations and a related family of relative Thue equations, Int. J. Number Theory 2(4) (2006), 569-590. * [11] I. Soldo, On the extensibility of $D(-1)$-triples {1, b, c} in the ring $\mathbb{Z}[\sqrt{-t}],t>0$, Studia Sci. Math. Hungar. 50 (2013), 296-330. * [12] I. Soldo, $D(-1)$-triples of the form {1, b, c} in the ring $\mathbb{Z}[\sqrt{-t}],t>0$, Bull. Malays. Math. Sci. Soc. 39 (2016), 1201-1224. * [13] J. H. Silverman and J. Tate, Rational points on elliptic curves, Undergraduate Texts in Mathematics, Springer-Verlag, New York, 1992. * [14] T. Trudgian, Bounds on the number of Diophantine quintuples, J. Number Theory 157 (2015), 233-249.
2024-09-04T02:54:57.945316
2020-03-06T17:45:31
2003.03323
{ "authors": "Louisa Seelbach Benkner and Stephan Wagner", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26088", "submitter": "Louisa Seelbach Benkner", "url": "https://arxiv.org/abs/2003.03323" }
arxiv-papers
# On the Collection of Fringe Subtrees in Random Binary Trees

Louisa Seelbach Benkner (1) and Stephan Wagner (2,3)

(1) Department für Elektrotechnik und Informatik, Universität Siegen, Hölderlinstrasse 3, 57076 Siegen, Germany,<EMAIL_ADDRESS>(2) Department of Mathematical Sciences, Stellenbosch University, Private Bag X1, Matieland 7602, South Africa,<EMAIL_ADDRESS>(3) Department of Mathematics, Uppsala Universitet, Box 480, 751 06 Uppsala, Sweden,<EMAIL_ADDRESS>

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 731143 and the DFG research project LO 748/10-1 (QUANT-KOMP). ###### Abstract A fringe subtree of a rooted tree is a subtree consisting of one of the nodes and all its descendants. In this paper, we are specifically interested in the number of non-isomorphic trees that appear in the collection of all fringe subtrees of a binary tree. This number is analysed under two different random models: uniformly random binary trees and random binary search trees. In the case of uniformly random binary trees, we show that the number of non-isomorphic fringe subtrees lies between $c_{1}n/\sqrt{\ln n}(1+o(1))$ and $c_{2}n/\sqrt{\ln n}(1+o(1))$ for two constants $c_{1}\approx 1.0591261434$ and $c_{2}\approx 1.0761505454$, both in expectation and with high probability, where $n$ denotes the size (number of leaves) of the uniformly random binary tree. A similar result is proven for random binary search trees, but the order of magnitude is $n/\ln n$ in this case. Our proof technique can also be used to strengthen known results on the number of distinct fringe subtrees (distinct in the sense of ordered trees). This quantity is of the same order of magnitude in both cases, but with slightly different constants in the upper and lower bounds. ###### Keywords: Uniformly Random Binary Trees, Random Binary Search Trees, Fringe Subtrees, Tree Compression ## 1 Introduction A subtree of a rooted tree that consists of a node and all its descendants is called a _fringe subtree_. Fringe subtrees are a natural object of study in the context of random trees, and there are numerous results for various random tree models, see e.g. [3, 9, 11, 13]. Fringe subtrees are of particular interest in computer science: One of the most important and widely used lossless compression methods for rooted trees is to represent a tree as a directed acyclic graph, which is obtained by merging nodes that are roots of identical fringe subtrees. This compressed representation of the tree is often referred to simply as the _minimal DAG_, and its size (number of nodes) is the number of distinct fringe subtrees occurring in the tree. Compression by minimal DAGs has found numerous applications in various areas of computer science, as for example in compiler construction [2, Chapter 6.1 and 8.5], unification [25], symbolic model checking (binary decision diagrams) [7], information theory [21, 28] and XML compression and querying [8, 20]. In this work, we investigate the number of fringe subtrees in random binary trees, i.e. random trees in which each node has either exactly two children or none.
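As a concrete illustration (our own sketch, not taken from the paper), the number of distinct ordered fringe subtrees, and hence the size of the minimal DAG, can be computed by hashing fringe subtrees bottom-up; here a binary tree is encoded as `()` for a leaf and as a pair `(left, right)` otherwise:

```python
def minimal_dag_size(tree):
    """Number of distinct ordered fringe subtrees of a binary tree,
    i.e. the number of nodes of its minimal DAG."""
    seen = set()

    def canon(t):
        # Build a hashable canonical key for the fringe subtree rooted here;
        # the order of the children matters for ordered trees.
        key = () if t == () else (canon(t[0]), canon(t[1]))
        seen.add(key)
        return key

    canon(tree)
    return len(seen)

# A tree with 4 leaves whose two subtrees coincide:
t = (((), ()), ((), ()))
print(minimal_dag_size(t))  # 3: the leaf, ((), ()), and the whole tree
```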
So far, this problem has mainly been studied with respect to ordered fringe subtrees in random ordered binary trees: A _uniformly random ordered binary tree_ of size $n$ (with $n$ leaves) is a random tree whose probability distribution is the uniform probability distribution on the set of ordered binary trees of size $n$. In [19], Flajolet, Sipala and Steyaert proved that the expected number of distinct ordered fringe subtrees in a uniformly random ordered binary tree of size $n$ is asymptotically equal to $c\cdot n/\sqrt{\ln n}$, where $c$ is the constant $2\sqrt{\ln 4/\pi}$. This result of Flajolet et al. was extended to unranked labelled trees in [6] (for a different constant $c$). Moreover, an alternative proof of the result of Flajolet et al. was presented in [26] in the context of simply-generated families of trees. Another important type of random trees is given by so-called _random binary search trees_: A random binary search tree of size $n$ is a binary search tree built by inserting the keys $\\{1,\dots,n\\}$ according to a uniformly chosen random permutation on $\\{1,\dots,n\\}$. Random binary search trees naturally arise in theoretical computer science, see e.g. [12]. In [17], Flajolet, Gourdon and Martinez proved that the expected number of distinct ordered fringe subtrees in a random binary search tree of size $n$ is $O(n/\ln n)$. This result was improved in [10] by Devroye, who showed that the asymptotic order $\Theta(n/\ln n)$ holds. Moreover, the result of Devroye was generalized from random binary search trees to a broader class of random ordered binary trees in [27], where the problem of estimating the expected number of distinct ordered fringe subtrees in random binary trees was considered in the context of so-called leaf-centric binary tree sources, which were introduced in [23, 28] as a general framework for modeling probability distributions on the set of ordered binary trees of size $n$. In this work, we focus on estimating the number of _non-isomorphic_ fringe subtrees in random ordered binary trees, where we call two binary trees non-isomorphic if they are distinct as unordered binary trees. This question arises quite naturally for example in the context of XML compression: Here, one distinguishes between so-called document-centric XML, for which the corresponding XML document trees are ordered, and data-centric XML, for which the corresponding XML document trees are unordered. Understanding the interplay between ordered and unordered structures has thus received considerable attention in the context of XML (see, for example, [1, 5, 29]). In particular, in [24], it was investigated whether tree compression can benefit from unorderedness. To this end, so-called _unordered minimal DAGs_ were considered. An unordered minimal DAG of a binary tree is a directed acyclic graph obtained by merging nodes that are roots of isomorphic fringe subtrees, i.e. of fringe subtrees which are identical as unordered trees. From such an unordered minimal DAG, an unordered representation of the original tree can be uniquely retrieved. The size of this compressed representation is the number of non-isomorphic fringe subtrees occurring in the tree. So far, only some worst-case estimates comparing the size of a minimal DAG to the size of its corresponding unordered minimal DAG are known: Among other things, it was shown in [24] that the size of an unordered minimal DAG of a binary tree can be exponentially smaller than the size of the corresponding (ordered) minimal DAG.
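The unordered minimal DAG can be obtained from the same bottom-up hashing idea by first putting the two children of each node into a canonical order, so that isomorphic fringe subtrees receive the same key. A minimal sketch of ours, using the same tree encoding as above (`()` for a leaf, `(left, right)` otherwise):

```python
def unordered_minimal_dag_size(tree):
    """Number of non-isomorphic fringe subtrees of a binary tree,
    i.e. the number of nodes of its unordered minimal DAG."""
    seen = set()

    def canon(t):
        if t == ():
            key = ()
        else:
            left, right = canon(t[0]), canon(t[1])
            # Sort the children canonically so that mirrored subtrees merge.
            key = (left, right) if repr(left) <= repr(right) else (right, left)
        seen.add(key)
        return key

    canon(tree)
    return len(seen)

# Two mirrored fringe subtrees with 3 leaves each:
t = (((), ((), ())), (((), ()), ()))
print(unordered_minimal_dag_size(t))  # 4, versus 5 distinct ordered ones
```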
However, no average-case estimates comparing the size of the minimal DAG of a binary tree to the size of the corresponding unordered minimal DAG are known so far. In particular, in [24] it is stated as an open problem to estimate the expected number of non-isomorphic fringe subtrees in a uniformly random ordered binary tree of size $n$ and conjectured that this number asymptotically grows as $\Theta(n/\sqrt{\ln n})$. In this work, as one of our main theorems, we settle this open conjecture by proving upper and lower bounds of order $n/\sqrt{\ln n}$ for the number of non-isomorphic fringe subtrees which hold both in expectation and with high probability (i.e., with probability tending to $1$ as $n\to\infty$). Our approach can also be used to obtain an analogous result for random binary search trees, though the order of magnitude changes to $\Theta(n/\ln n)$. Again, we have upper and lower bounds in expectation and with high probability. Our two main theorems read as follows. ###### Theorem 1 Let $F_{n}$ be the total number of non-isomorphic fringe subtrees in a uniformly random ordered binary tree with $n$ leaves. For two constants $c_{1}\approx 1.0591261434$ and $c_{2}\approx 1.0761505454$, the following holds: 1. (i) $\displaystyle c_{1}\frac{n}{\sqrt{\ln n}}(1+o(1))\leq\mathbb{E}(F_{n})\leq c_{2}\frac{n}{\sqrt{\ln n}}(1+o(1))$, 2. (ii) $\displaystyle c_{1}\frac{n}{\sqrt{\ln n}}(1+o(1))\leq F_{n}\leq c_{2}\frac{n}{\sqrt{\ln n}}(1+o(1))$ with high probability. ###### Theorem 2 Let $G_{n}$ be the total number of non-isomorphic fringe subtrees in a random binary search tree with $n$ leaves. For two constants $c_{3}\approx 1.5470025923$ and $c_{4}\approx 1.8191392203$, the following holds: 1. (i) $\displaystyle c_{3}\frac{n}{\ln n}(1+o(1))\leq\mathbb{E}(G_{n})\leq c_{4}\frac{n}{\ln n}(1+o(1))$, 2. (ii) $\displaystyle c_{3}\frac{n}{\ln n}(1+o(1))\leq G_{n}\leq c_{4}\frac{n}{\ln n}(1+o(1))$ with high probability. To prove the above Theorems 1 and 2, we refine techniques from [26]. Our proof technique also applies to the problem of estimating the number of distinct ordered fringe subtrees in uniformly random binary trees or in random binary search trees. In this case, upper and lower bounds for the expected value have already been proven by other authors. Our new contribution is to show that they also hold with high probability. ###### Theorem 3 Let $H_{n}$ denote the total number of distinct fringe subtrees in a uniformly random ordered binary tree with $n$ leaves. Then, for the constant $c=2\sqrt{\ln 4/\pi}\approx 1.3285649405$, the following holds: 1. (i) $\displaystyle\mathbb{E}(H_{n})=c\frac{n}{\sqrt{\ln n}}(1+o(1))$, 2. (ii) $\displaystyle H_{n}=c\frac{n}{\sqrt{\ln n}}(1+o(1))$ with high probability. Here, the first part (i) was already shown in [19] and [26], part (ii) is new. Similarly, we are able to strengthen the results of [10] and [27]: ###### Theorem 4 Let $J_{n}$ be the total number of distinct fringe subtrees in a random binary search tree with $n$ leaves. For two constants $c_{5}\approx 2.4071298335$ and $c_{6}\approx 2.7725887222$, the following holds: 1. (i) $\displaystyle c_{5}\frac{n}{\ln n}(1+o(1))\leq\mathbb{E}(J_{n})\leq c_{6}\frac{n}{\ln n}(1+o(1))$, 2. (ii) $\displaystyle c_{5}\frac{n}{\ln n}(1+o(1))\leq J_{n}\leq c_{6}\frac{n}{\ln n}(1+o(1))$ with high probability. The upper bound in part (i) can already be found in [17] and [10]. 
Moreover, a lower bound of the form $\mathbb{E}(J_{n})\geq\frac{\alpha n}{\ln n}(1+o(1))$ was already shown in [10] for the constant $\alpha=(\ln 3)/2\approx 0.5493061443$ and in [27] for the constant $\alpha\approx 0.6017824584$. So our new contributions are part (ii) and the improvement of the lower bound on $\mathbb{E}(J_{n})$. ## 2 Preliminaries Let $\mathcal{T}$ denote the set of ordered binary trees, i.e. of ordered rooted trees such that each node has either exactly two or no children. We define the _size_ $|t|$ of a binary tree $t\in\mathcal{T}$ as the number of leaves of $t$ and by $\mathcal{T}_{k}$ we denote the set of binary trees of size $k$ for every integer $k\geq 1$. It is well known that $|\mathcal{T}_{k}|=C_{k-1}$, where $C_{k}$ denotes the $k$-th _Catalan number_ [18]: We have $\displaystyle C_{k}=\frac{1}{k+1}\binom{2k}{k}=\frac{4^{k}}{\sqrt{\pi}k^{3/2}}(1+O(1/k)),$ (1) where the asymptotic growth of the Catalan numbers follows from Stirling's Formula [18]. Analogously, let $\mathcal{U}$ denote the set of unordered binary trees, i.e. of unordered rooted trees such that each node has either exactly two or no children. The _size_ $|u|$ of an unordered tree $u\in\mathcal{U}$ is again the number of leaves of $u$ and by $\mathcal{U}_{k}$ we denote the set of unordered binary trees of size $k$. We have $|\mathcal{U}_{k}|=W_{k}$, where $W_{k}$ denotes the $k$-th _Wedderburn-Etherington number_. Their asymptotic growth is $\displaystyle W_{k}\sim A\cdot k^{-3/2}\cdot b^{k},$ (2) for certain positive constants $A,b$ [4, 16]. In particular, we have $b\approx 2.4832535362$. A _fringe subtree_ of a binary tree is a subtree consisting of a node and all its descendants. For a binary tree $t$ and a given node $v\in t$, let $t(v)$ denote the fringe subtree of $t$ rooted at $v$. Two fringe subtrees are called _distinct_ if they are distinct as ordered binary trees. Every tree $t\in\mathcal{T}$ can be considered as an element of $\mathcal{U}$ by simply forgetting the ordering on $t$'s nodes. If two binary trees $t_{1},t_{2}$ correspond to the same unordered tree $u\in\mathcal{U}$, we call them _isomorphic_. Thus, we obtain a partition of $\mathcal{T}$ into isomorphism classes. If two binary trees $t_{1},t_{2}\in\mathcal{T}$ belong to the same isomorphism class, we can obtain $t_{1}$ from $t_{2}$ and vice versa by reordering the children of some of $t_{1}$'s (respectively, $t_{2}$'s) inner nodes. An inner node $v$ of an ordered or unordered binary tree $t$ is called a _symmetrical node_ if the fringe subtrees rooted at $v$'s children are isomorphic. Let $\operatorname{sym}(t)$ denote the number of symmetrical nodes of $t$. The cardinality of the automorphism group of $t$ is given by $\lvert\operatorname{\mathrm{Aut}}(t)\rvert=2^{\operatorname{sym}(t)}$. Thus, by the orbit-stabilizer theorem, there are $2^{k-1-\operatorname{sym}(t)}$ many ordered binary trees in the isomorphism class of $t\in\mathcal{T}_{k}$, and likewise $2^{k-1-\operatorname{sym}(t)}$ many ordered representations of $t\in\mathcal{U}_{k}$. We consider two types of probability distributions on the set of ordered binary trees of size $n$: * (i) The _uniform probability distribution_ on $\mathcal{T}_{n}$, that is, every binary tree of size $n$ is assigned the same probability $\frac{1}{C_{n-1}}$. A random variable taking values in $\mathcal{T}_{n}$ according to the uniform probability distribution is called a _uniformly random (ordered) binary tree_ of size $n$.
* (ii) The probability distribution induced by the so-called _Binary Search Tree Model_ (see e.g. [12, 17]): The corresponding probability mass function $P_{\operatorname{bst}}:\mathcal{T}_{n}\rightarrow[0,1]$ is given by $\displaystyle P_{\operatorname{bst}}(t)=\prod_{v\in t\atop|t(v)|>1}\frac{1}{|t(v)|-1},$ (3) for every $n\geq 1$. A random variable taking values in $\mathcal{T}_{n}$ according to this probability mass function is called a _random binary search tree_ of size $n$. Before we start with proving our main results, we need two preliminary lemmas on the number of fringe subtrees in uniformly random ordered binary trees and in random binary search trees: ###### Lemma 1 Let $a,\varepsilon$ be positive real numbers with $\varepsilon<\frac{1}{3}$. For every positive integer $k$ with $a\ln n\leq k\leq n^{\varepsilon}$, let $\mathcal{S}_{k}\subset\mathcal{T}_{k}$ be a set of ordered binary trees with $k$ leaves. We denote the cardinality of $\mathcal{S}_{k}$ by $s_{k}$. Let $X_{n,k}$ denote the (random) number of fringe subtrees with $k$ leaves in a uniformly random ordered binary tree with $n$ leaves that belong to $\mathcal{S}_{k}$. Moreover, let $Y_{n,\varepsilon}$ denote the (random) number of arbitrary fringe subtrees with more than $n^{\varepsilon}$ leaves in a uniformly random ordered binary tree with $n$ leaves. We have 1. (1) $\mathbb{E}(X_{n,k})=s_{k}4^{1-k}n\big{(}1+O(k/n)\big{)}$ for all $k$ with $a\ln n\leq k\leq n^{\varepsilon}$, the $O$-constant being independent of $k$, 2. (2) $\mathbb{V}(X_{n,k})=s_{k}4^{1-k}n(1+O(k^{-1/2}))$ for all $k$ with $a\ln n\leq k\leq n^{\varepsilon}$, again with an $O$-constant that is independent of $k$, 3. (3) $\mathbb{E}(Y_{n,\varepsilon})=O(n^{1-\varepsilon/2})$ and 4. (4) with high probability, the following statements hold simultaneously: * (i) $|X_{n,k}-\mathbb{E}(X_{n,k})|\leq s_{k}^{1/2}2^{-k}n^{1/2+\varepsilon}$ for all $k$ with $a\ln n\leq k\leq n^{\varepsilon}$, * (ii) $Y_{n,\varepsilon}\leq n^{1-\varepsilon/3}$. We emphasize (since it will be important later) that the inequality in part (4), item (i), does not only hold with high probability for each individual $k$, but that it is satisfied with high probability for all $k$ in the given range simultaneously. ###### Proof (1) Recall first that the number of ordered binary trees with $n$ leaves is the Catalan number $C_{n-1}=\frac{1}{n}\binom{2n-2}{n-1}$. We observe that every occurrence of a fringe subtree in $\mathcal{S}_{k}$ in a tree with $n$ leaves can be obtained by choosing an ordered tree with $n-k+1$ leaves, picking one of the leaves and replacing it by a tree in $\mathcal{S}_{k}$. Thus the total number of occurrences is $\frac{1}{n-k+1}\binom{2n-2k}{n-k}\cdot(n-k+1)\cdot s_{k}=\binom{2n-2k}{n-k}s_{k}.$ Consequently, the average number is $\mathbb{E}(X_{n,k})=\frac{\binom{2n-2k}{n-k}s_{k}}{\frac{1}{n}\binom{2n-2}{n-1}}=s_{k}4^{1-k}n\big{(}1+O(k/n)\big{)},$ by Stirling’s formula (the $O$-constant being independent of $k$ in the indicated range). (2) The variance is determined in a similar fashion: we first count the total number of pairs of fringe subtrees in $\mathcal{S}_{k}$ that appear in the same ordered tree with $n$ leaves. Each such pair can be obtained as follows: take an ordered tree with $n-2k+2$ leaves, pick two leaves, and replace them by fringe subtrees in $\mathcal{S}_{k}$. 
The total number is thus $\frac{1}{n-2k+2}\binom{2n-4k+2}{n-2k+1}\cdot\binom{n-2k+2}{2}\cdot s_{k}^{2}=\frac{n-2k+1}{2}\binom{2n-4k+2}{n-2k+1}s_{k}^{2},$ giving us $\mathbb{E}\Big{(}\binom{X_{n,k}}{2}\Big{)}=\frac{\frac{n-2k+1}{2}\binom{2n-4k+2}{n-2k+1}s_{k}^{2}}{\frac{1}{n}\binom{2n-2}{n-1}}=s_{k}^{2}4^{2-2k}\frac{n^{2}}{2}\big{(}1+O(k/n)\big{)},$ again by Stirling’s formula. The second moment and the variance are now derived from this formula in a straightforward fashion: We find $\displaystyle\mathbb{E}(X_{n,k}^{2})=2\mathbb{E}\Big{(}\binom{X_{n,k}}{2}\Big{)}+\mathbb{E}(X_{n,k})=\left(s_{k}^{2}4^{2-2k}n^{2}+s_{k}4^{1-k}n\right)\big{(}1+O(k/n)\big{)},$ and thus, as $s_{k}/C_{k-1}\leq 1$, $\displaystyle\mathbb{V}(X_{n,k})$ $\displaystyle=\mathbb{E}(X_{n,k}^{2})-\mathbb{E}(X_{n,k})^{2}=O(s_{k}^{2}4^{2-2k}nk)+s_{k}4^{1-k}n(1+O(k/n))$ $\displaystyle=O\left(\frac{s_{k}^{2}}{C_{k-1}^{2}}\frac{n}{k^{2}}\right)+\frac{s_{k}}{C_{k-1}}\frac{n}{\sqrt{\pi}k^{3/2}}(1+O(1/k))=s_{k}4^{1-k}n(1+O(1/k^{1/2})).$ (3) To obtain the estimate for $\mathbb{E}(Y_{n,\varepsilon})$, we observe that the average total number of fringe subtrees with $k$ leaves is $\frac{\binom{2n-2k}{n-k}\cdot\frac{1}{k}\binom{2k-2}{k-1}}{\frac{1}{n}\binom{2n-2}{n-1}}=O\Big{(}\frac{n^{3/2}}{k^{3/2}(n-k+1)^{1/2}}\Big{)},$ where the estimate follows from Stirling’s formula again for $k>n^{\varepsilon}$. Summing over all $k$, we get $\mathbb{E}(Y_{n,\varepsilon})=O\Big{(}n^{3/2}\sum_{n^{\varepsilon}<k\leq n}\frac{1}{k^{3/2}(n-k+1)^{1/2}}\Big{)}=O(n^{1-\varepsilon/2}).$ (4) For the second part, we apply Chebyshev’s inequality to obtain concentration of $X_{n,k}$: $\mathbb{P}\Big{(}\big{|}X_{n,k}-\mathbb{E}(X_{n,k})\big{|}\geq s_{k}^{1/2}2^{-k}n^{1/2+\varepsilon}\Big{)}\leq\frac{\mathbb{V}(X_{n,k})}{s_{k}4^{-k}n^{1+2\varepsilon}}=O\big{(}n^{-2\varepsilon}\big{)}.$ Hence, by the union bound, the probability that the stated inequality fails for any $k$ in the given range is only $O(n^{-\varepsilon})$, proving that the first statement holds with high probability. Finally, Markov’s inequality implies that $\mathbb{P}\big{(}Y_{n,\varepsilon}>n^{1-\varepsilon/3}\big{)}\leq\frac{\mathbb{E}(Y_{n,\varepsilon})}{n^{1-\varepsilon/3}}=O(n^{-\varepsilon/6}),$ showing that the second inequality holds with high probability as well. $\scriptstyle\blacksquare$ For the number of fringe subtrees in random binary search trees, a very similar lemma holds: ###### Lemma 2 Let $a,\varepsilon$ be positive real numbers with $\varepsilon<\frac{1}{3}$ and let $n$ and $k$ denote positive integers. Moreover, for every $k$, let $\mathcal{S}_{k}\subset\mathcal{T}_{k}$ be a set of ordered binary trees with $k$ leaves and let $p_{k}$ denote the probability that a random binary search tree is contained in $\mathcal{S}_{k}$, that is, $p_{k}=\sum P_{\operatorname{bst}}(t)$, where the sum is taken over all binary trees in $\mathcal{S}_{k}$. Let $X_{n,k}$ denote the (random) number of fringe subtrees with $k$ leaves in a random binary search tree with $n$ leaves that belong to $\mathcal{S}_{k}$. Moreover, let $Y_{n,\varepsilon}$ denote the (random) number of arbitrary fringe subtrees with more than $n^{\varepsilon}$ leaves in a random binary search tree with $n$ leaves. 
We have * (1) $\mathbb{E}(X_{n,k})=\frac{2p_{k}n}{k(k+1)}$ for $1\leq k<n$, * (2) $\mathbb{V}(X_{n,k})=O(p_{k}n/k^{2})$ for all $k$ with $a\ln n\leq k\leq n^{\varepsilon}$, where the $O$-constant is independent of $k$, * (3) $\mathbb{E}(Y_{n,\varepsilon})=2n/\lceil n^{\varepsilon}\rceil-1=O(n^{1-\varepsilon})$ and * (4) with high probability, the following statements hold simultaneously: * (i) $|X_{n,k}-\mathbb{E}(X_{n,k})|\leq p_{k}^{1/2}k^{-1}n^{1/2+\varepsilon}$ for all $k$ with $a\ln n\leq k\leq n^{\varepsilon}$, * (ii) $Y_{n,\varepsilon}\leq n^{1-\varepsilon/2}$. ###### Proof (1) In order to estimate $\mathbb{E}(X_{n,k})$, we define $Z_{n,k}$ as the (random) number of arbitrary fringe subtrees with $k$ leaves in a random binary search tree with $n$ leaves. That is, $Z_{n,k}=X_{n,k}$ for $\mathcal{S}_{k}=\mathcal{T}_{k}$. Applying the law of total expectation, we find $\displaystyle\mathbb{E}(X_{n,k})=\sum_{m=0}^{n}\mathbb{E}(X_{n,k}\mid Z_{n,k}=m)\mathbb{P}(Z_{n,k}=m).$ As $X_{n,k}$ conditioned on $Z_{n,k}=m$ for some integer $m$ is binomially distributed with parameters $m$ and $p_{k}$, we find $\mathbb{E}(X_{n,k}\mid Z_{n,k}=m)=mp_{k}$ and hence $\displaystyle\mathbb{E}(X_{n,k})=p_{k}\sum_{m=0}^{n}m\mathbb{P}(Z_{n,k}=m)=p_{k}\mathbb{E}(Z_{n,k}).$ With $\mathbb{E}(Z_{n,k})=\frac{2n}{k(k+1)}$ (see for example [14]), the statement follows. (2) In order to estimate $\mathbb{V}(X_{n,k})$, we apply the law of total variance: $\displaystyle\mathbb{V}(X_{n,k})=\mathbb{V}(\mathbb{E}(X_{n,k}\mid Z_{n,k}))+\mathbb{E}(\mathbb{V}(X_{n,k}\mid Z_{n,k})).$ Again as $X_{n,k}$ conditioned on $Z_{n,k}=m$ for some integer $m$ is binomially distributed with parameters $m$ and $p_{k}$, we find $\mathbb{E}(X_{n,k}\mid Z_{n,k})=p_{k}Z_{n,k}$ and $\mathbb{V}(X_{n,k}\mid Z_{n,k})=Z_{n,k}p_{k}(1-p_{k})$. Thus, we have $\displaystyle\mathbb{V}(X_{n,k})=\mathbb{V}(p_{k}Z_{n,k})+\mathbb{E}(Z_{n,k}p_{k}(1-p_{k}))=\mathbb{V}(Z_{n,k})p_{k}^{2}+\mathbb{E}(Z_{n,k})p_{k}(1-p_{k}).$ With $\mathbb{E}(Z_{n,k})=\frac{2n}{k(k+1)}$ and $\mathbb{V}(Z_{n,k})=\frac{2(k-1)(4k^{2}-3k-4)n}{(k+1)^{2}k(2k-1)(2k+1)},$ (see for example [14]), this yields $\displaystyle\mathbb{V}(X_{n,k})=\frac{2(k-1)(4k^{2}-3k-4)np_{k}^{2}}{(k+1)^{2}k(2k-1)(2k+1)}+\frac{2np_{k}(1-p_{k})}{k(k+1)}=O\left(\frac{np_{k}}{k^{2}}\right).$ (3) In order to estimate $\mathbb{E}(Y_{n,\varepsilon})$, first observe that $\displaystyle\mathbb{E}(Y_{n,\varepsilon})=\sum_{k>n^{\varepsilon}}\mathbb{E}(Z_{n,k}).$ With $\mathbb{E}(Z_{n,k})=\frac{2n}{k(k+1)}$ for $n^{\varepsilon}<k<n$ and $\mathbb{E}(Z_{n,n})=1$, this yields $\displaystyle\mathbb{E}(Y_{n,\varepsilon})=\sum_{n^{\varepsilon}<k\leq n-1}\frac{2n}{k(k+1)}+1=\frac{2n}{\lceil n^{\varepsilon}\rceil}-1=O(n^{1-\varepsilon}).$ (4) For the second part of the statement, we apply Chebyshev’s inequality to obtain: $\displaystyle\mathbb{P}\left(\left|X_{n,k}-\frac{2np_{k}}{k(k+1)}\right|\geq p_{k}^{1/2}n^{1/2+\varepsilon}k^{-1}\right)\leq\frac{\mathbb{V}(X_{n,k})k^{2}}{p_{k}n^{1+2\varepsilon}}=O(n^{-2\varepsilon}).$ Hence, by the union bound, the probability that the stated inequality fails for any $k$ in the given range is $O(n^{-\varepsilon})$, proving that the given statement holds with high probability. Furthermore, with Markov’s inequality, we find $\displaystyle\mathbb{P}(Y_{n,\varepsilon}>n^{1-\varepsilon/2})\leq\frac{\mathbb{E}(Y_{n,\varepsilon})}{n^{1-\varepsilon/2}}=O(n^{-\varepsilon/2}).$ Thus, the second inequality holds with high probability as well. 
$\scriptstyle\blacksquare$ ## 3 Fringe Subtrees in Uniformly Random Binary Trees ### 3.1 Ordered Fringe Subtrees We provide the proof of Theorem 3 first, since it is simplest and provides us with a template for the other proofs. Basically, it is a refinement of the proof for the corresponding special case of Theorem 3.1 in [26]. In the following sections, we refine the argument further to prove Theorems 1, 2 and 4. ###### Proof (Proof of Theorem 3) We prove the statement in two steps: In the first step, we show that the upper bound $H_{n}\leq cn/\sqrt{\ln n}(1+o(1))$ holds for $c=2\sqrt{\ln 4/\pi}$ both in expectation and with high probability. In the second step, we prove the corresponding lower bound. _The upper bound:_ Let $k_{0}=\log_{4}n$. The number $H_{n}$ of distinct fringe subtrees in a uniformly random ordered binary tree with $n$ leaves equals (i) the number of such distinct fringe subtrees of size at most $k_{0}$ plus (ii) the number of such distinct fringe subtrees of size greater than $k_{0}$. We upper-bound (i) by the number of all ordered binary trees of size at most $k_{0}$ (irrespective of their occurrence as fringe subtrees), which is $\displaystyle\sum_{k=0}^{k_{0}-1}C_{k}=O\left(\frac{4^{k_{0}}}{k_{0}^{3/2}}\right)=O\left(\frac{n}{(\ln n)^{3/2}}\right).$ This upper bound holds deterministically. Furthermore, we upper-bound (ii) by the total number of fringe subtrees of size greater than $k_{0}$ occurring in the tree: We apply Lemma 1 with $a=1/\ln 4$ and $\varepsilon=1/6$ and let $\mathcal{S}_{k}$ denote the set $\mathcal{T}_{k}$, such that $s_{k}=C_{k-1}$, to obtain: $\displaystyle\left(\sum_{k_{0}<k\leq n^{\varepsilon}}X_{n,k}\right)+Y_{n,\varepsilon}$ $\displaystyle=\frac{n}{\sqrt{\pi}}\sum_{k_{0}<k\leq n^{\varepsilon}}k^{-3/2}\left(1+O(k^{-1})\right)+O(n^{1-\varepsilon/3})$ $\displaystyle=\frac{2\sqrt{\ln 4}}{\sqrt{\pi}}\cdot\frac{n}{\sqrt{\ln n}}+O(n/(\ln n)^{3/2}),$ in expectation and with high probability as well, as the estimate from Lemma 1 (part (4)) holds with high probability simultaneously for all $k$ in the given range. As we have $\displaystyle H_{n}\leq\sum_{k\leq k_{0}}C_{k-1}+\Big{(}\sum_{k_{0}<k\leq n^{\varepsilon}}X_{n,k}\Big{)}+Y_{n,\varepsilon},$ we can combine the two bounds to obtain the upper bound on $H_{n}$ stated in Theorem 3, both in expectation and with high probability. _The lower bound:_ Again, let $k_{0}=\log_{4}n$ and $\varepsilon=\frac{1}{6}$. From the first part of the proof, we find that the main contribution to the total number of fringe subtrees in a uniformly random binary tree of size $n$ comes from fringe subtrees of sizes $k$ with $k_{0}<k\leq n^{\varepsilon}$. Hence, in order to lower-bound the number $H_{n}$ of distinct fringe subtrees in a uniformly random binary tree with $n$ leaves, we only count distinct fringe subtrees of sizes $k$ with $k_{0}<k\leq n^{\varepsilon}$ and show that we did not overcount too much in the first part of the proof by upper-bounding this number by the total number of fringe subtrees of sizes $k$. To this end, let $X_{n,k}^{(2)}$ denote the number of pairs of identical fringe subtrees of size $k$ in a uniformly random ordered binary tree of size $n$. Each such pair can be obtained as follows: Take an ordered tree with $n-2k+2$ leaves, pick two leaves, and replace them by the same ordered binary tree of size $k$. 
The total number of such pairs of identical fringe subtrees of size $k$ is thus $\displaystyle C_{n-2k+1}\cdot\binom{n-2k+2}{2}\cdot C_{k-1}=\frac{4^{n-k}}{2\pi k^{3/2}}(n-2k+1)^{1/2}(1+O(1/k)).$ By dividing by $C_{n-1}$, i.e. the total number of binary trees of size $n$, we thus obtain the expected value: $\displaystyle\mathbb{E}(X_{n,k}^{(2)})=\frac{1}{C_{n-1}}\frac{4^{n-k}}{2\pi k^{3/2}}(n-2k+1)^{1/2}(1+O(1/k))=O(4^{-k}n^{2}k^{-3/2}).$ Thus, we find $\displaystyle\sum_{k_{0}<k\leq n^{\varepsilon}}\mathbb{E}(X_{n,k}^{(2)})=O\left(n^{2}\frac{4^{-k_{0}}}{k_{0}^{3/2}}\right)=O\left(\frac{n}{(\ln n)^{3/2}}\right).$ If a binary tree of size $k$ occurs $m$ times as a fringe subtree in a uniformly random binary tree of size $n$, it contributes $m-\binom{m}{2}$ to the random variable $X_{n,k}-X_{n,k}^{(2)}$. Since $m-\binom{m}{2}\leq 1$ for all non-negative integers $m$, we find that $X_{n,k}-X_{n,k}^{(2)}$ is a lower bound on the number of distinct fringe subtrees with $k$ leaves. Hence, we have $\displaystyle H_{n}\geq\sum_{k_{0}<k\leq n^{\varepsilon}}X_{n,k}-\sum_{k_{0}<k\leq n^{\varepsilon}}X_{n,k}^{(2)}.$ The second sum is $O(n/(\ln n)^{3/2})$ in expectation and thus with high probability as well by the Markov inequality. As the first sum is $\frac{2\sqrt{\ln 4}}{\sqrt{\pi}}\cdot\frac{n}{\sqrt{\ln n}}(1+o(1)),$ both in expectation and with high probability by our estimate from the first part of the proof, the statement of Theorem 3 follows. $\scriptstyle\blacksquare$ As the main idea of the proof is to split the number of distinct fringe subtrees into the number of distinct fringe subtrees of size at most $k_{0}$ plus the number of distinct fringe subtrees of size greater than $k_{0}$ for some suitably chosen integer $k_{0}$, this type of argument is called a _cut- point argument_ and the integer $k_{0}$ is called the _cut-point_ (see [17]). This basic technique is applied in several previous papers to similar problems (see for instance [10], [17], [26], [27]). Moreover, we remark that the statement of Theorem 3 can be easily generalized to simply generated families of trees. ### 3.2 Unordered Fringe Subtrees In this subsection, we prove Theorem 1. For this, we refine the cut-point argument we applied in the proof of Theorem 3: In particular, for the lower bound on $F_{n}$, we need a result due to Bóna and Flajolet [4] on the number of automorphisms of a uniformly random ordered binary tree. It is stated for random phylogenetic trees in [4], but the two probabilistic models are equivalent. ###### Theorem 5 ([4], Theorem 2) Consider a uniformly random ordered binary tree $T_{k}$ with $k$ leaves, and let $A_{k}=\lvert\operatorname{\mathrm{Aut}}(T_{k})\rvert$ be the cardinality of its automorphism group. The logarithm of this random variable satisfies a central limit theorem: For certain positive constants $\gamma$ and $\sigma_{1}$, we have $\mathbb{P}(A_{k}\leq 2^{\gamma k+\sigma_{1}\sqrt{k}x})\overset{k\to\infty}{\to}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt$ for every real number $x$. The numerical value of the constant $\gamma$ is $0.2710416936$. With Theorem 5, we are able to upper-bound the probability that two fringe subtrees of the same size are isomorphic in our proof of Theorem 1: ###### Proof (Proof of Theorem 1) We prove the statement in two steps: First, we show that the upper bound on $F_{n}$ stated in Theorem 1 holds both in expectation and with high probability, then we prove the respective lower bound. 
_The upper bound:_ The proof for the upper bound in Theorem 1 exactly matches the first part of the proof of Theorem 3, except that we choose a different cut-point: Let $k_{0}=\log_{b}n$, where $b\approx 2.4832535362$ is the constant in the asymptotic formula (2) for the Wedderburn-Etherington numbers. We then find $\displaystyle F_{n}\leq\sum_{k<k_{0}}W_{k}+\Big{(}\sum_{k_{0}\leq k\leq n^{\epsilon}}X_{n,k}\Big{)}+Y_{n,\epsilon}=\frac{2\sqrt{\ln b}}{\sqrt{\pi}}\cdot\frac{n}{\sqrt{\ln n}}+O(n(\ln n)^{-3/2}),$ both in expectation and with high probability, where the estimates for $X_{n,k}$ and $Y_{n,\varepsilon}$ follow again from Lemma 1. We have $2\sqrt{\ln b}/\sqrt{\pi}\approx 1.0761505454$. _The lower bound:_ As a consequence of Theorem 5, the probability that the cardinality of the automorphism group of a uniformly random binary tree $T_{k}$ of size $k$ satisfies $\lvert\operatorname{\mathrm{Aut}}(T_{k})\rvert\leq 2^{\gamma k-k^{3/4}}$ tends to $0$ as $k\to\infty$. We define $\mathcal{S}_{k}$ as the set of ordered trees with $k$ leaves that do not satisfy this inequality, so that $s_{k}=|\mathcal{S}_{k}|=C_{k-1}(1+o(1))$. Our lower bound is based on counting only fringe subtrees in $\mathcal{S}_{k}$ for suitable $k$. The reason for this choice is that we have an upper bound on the number of ordered binary trees in the same isomorphism class for every tree in $\mathcal{S}_{k}$. Recall that the number of possible ordered representations of an unordered binary tree $t$ with $k$ leaves is given by $2^{k-1}/\lvert\operatorname{\mathrm{Aut}}(t)\rvert$ by the orbit-stabiliser theorem. Hence, the number of ordered binary trees in the same isomorphism class as a tree $t\in\mathcal{S}_{k}$ is bounded above by $2^{k-1-\gamma k+k^{3/4}}$. Now set $k_{1}=\frac{1+\delta}{1+\gamma}\log_{2}n$ for some positive constant $\delta<\frac{2}{3}$, and consider only fringe subtrees that belong to $\mathcal{S}_{k}$, where $k_{1}\leq k\leq n^{\delta/2}$. By Lemma 1, the number of such fringe subtrees in a random ordered binary tree with $n$ leaves is $s_{k}4^{1-k}n(1+O(k/n+s_{k}^{-1/2}2^{k}n^{(\delta-1)/2}))$ both in expectation and with high probability. Since $s_{k}=C_{k-1}(1+o(1))$, the number of fringe subtrees that belong to $\mathcal{S}_{k}$ in a random ordered binary tree of size $n$ becomes $\frac{n}{\sqrt{\pi k^{3}}}(1+o(1))$. We show that most of these trees are the only representatives of their isomorphism classes as fringe subtrees. To this end, we consider all fringe subtrees in $\mathcal{S}_{k}$ for some $k$ that satisfies $k_{1}\leq k\leq n^{\delta/2}$. Let the sizes of the isomorphism classes of trees in $\mathcal{S}_{k}$ be $r_{1},r_{2},\ldots,r_{\ell}$, so that $r_{1}+r_{2}+\cdots+r_{\ell}=s_{k}$. By definition of $\mathcal{S}_{k}$, we have $r_{i}\leq 2^{k-1-\gamma k+k^{3/4}}$ for every $i$. Let us condition on the event that their number $X_{n,k}$ is equal to $N$ for some $N\leq n$. Each of these $N$ fringe subtrees $S_{1},S_{2},\ldots,S_{N}$ follows a uniform distribution among the elements of $\mathcal{S}_{k}$, so the probability of being in an isomorphism class with $r_{i}$ elements is $r_{i}/s_{k}$. Moreover, the $N$ fringe subtrees are also all independent. Let $X_{n,k}^{(2)}$ be the number of pairs of isomorphic trees among the fringe subtrees with $k$ leaves. 
We have $\displaystyle\mathbb{E}\big{(}X_{n,k}^{(2)}|X_{n,k}=N\big{)}$ $\displaystyle=\binom{N}{2}\sum_{i}\Big{(}\frac{r_{i}}{s_{k}}\Big{)}^{2}\leq\frac{n^{2}}{2s_{k}^{2}}\sum_{i}r_{i}^{2}\leq\frac{n^{2}}{s_{k}}2^{k-2-\gamma k+k^{3/4}}.$ Since this holds for all $N$, the law of total expectation yields $\mathbb{E}\big{(}X_{n,k}^{(2)}\big{)}\leq\frac{n^{2}}{s_{k}}2^{k-2-\gamma k+k^{3/4}}=\sqrt{\pi}n^{2}k^{3/2}2^{-k-\gamma k+k^{3/4}}(1+o(1)).$ Since $k\geq k_{1}=\frac{1+\delta}{1+\gamma}\log_{2}n$, we find that $\mathbb{E}\big{(}X_{n,k}^{(2)}\big{)}\leq n^{2}2^{-(1+\gamma)k+O(k^{3/4})}\leq n^{1-\delta}\exp\big{(}O((\ln n)^{3/4})\big{)}.$ Thus $\sum_{k_{1}\leq k\leq n^{\delta/2}}\mathbb{E}\big{(}X_{n,k}^{(2)}\big{)}\leq n^{1-\delta/2}\exp\big{(}O((\ln n)^{3/4})\big{)}=o(n/\sqrt{\ln n}).$ As in the previous proof, we see that $X_{n,k}-X_{n,k}^{(2)}$ is a lower bound on the number of non-isomorphic fringe subtrees with $k$ leaves. This gives us $F_{n}\geq\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}-\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}^{(2)}.$ The second sum is negligible since it is $o(n/\sqrt{\ln n})$ in expectation and thus also with high probability by the Markov inequality. For the first sum, a calculation similar to that for the upper bound shows that it is $\frac{2\sqrt{(1+\gamma)\ln 2}}{\sqrt{\pi(1+\delta)}}\cdot\frac{n}{\sqrt{\ln n}}(1+o(1)),$ both in expectation and with high probability. Since $\delta$ is arbitrary, we can choose any constant smaller than $\frac{2\sqrt{(1+\gamma)\ln 2}}{\sqrt{\pi}}\approx 1.0591261434$ for $c_{1}$. $\scriptstyle\blacksquare$ ## 4 Fringe Subtrees in Random Binary Search Trees In this section, we prove our results presented in Theorem 2 and Theorem 4 on the number of distinct, respectively, non-isomorphic fringe subtrees in a random binary search tree. In order to show the respective lower bounds of Theorem 2 and Theorem 4, we need two theorems similar to Theorem 5: The first one shows that the logarithm of the random variable $B_{k}=P_{\operatorname{bst}}(T_{k})^{-1}$, where $T_{k}$ denotes a random binary search tree of size $k$, satisfies a central limit theorem and is needed to estimate the probability that two fringe subtrees in a random binary search tree are identical. The second one transfers the statement of Theorem 5 from uniformly random binary trees to random binary search trees and is needed in order to estimate the probability that two fringe subtrees in a random binary search tree are isomorphic. The first of these two central limit theorems is shown in [15]: ###### Theorem 6 ([15], Theorem 4.1) Consider a random binary search tree $T_{k}$ with $k$ leaves, and let $B_{k}=P_{\operatorname{bst}}(T_{k})^{-1}$. The logarithm of this random variable satisfies a central limit theorem: For certain positive constants $\mu$ and $\sigma_{2}$, we have $\displaystyle\mathbb{P}\left(B_{k}\leq 2^{\mu k+\sigma_{2}\sqrt{k}x}\right)\overset{k\to\infty}{\to}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt$ for every real number $x$. The numerical value of the constant $\mu$ is $\displaystyle\mu=\sum_{k=1}^{\infty}\frac{2\log_{2}k}{(k+1)(k+2)}\approx 1.7363771368.$ The second of these two central limit theorems follows from a general theorem devised by Holmgren and Janson [22]: Let $f:\mathcal{T}\rightarrow\mathbb{R}$ denote a function mapping an ordered binary tree to a real number. 
Moreover, given such a mapping $f$, define $\mathcal{F}:\mathcal{T}\rightarrow\mathbb{R}$ by $\displaystyle\mathcal{F}(t)=\sum_{v\in t}f(t(v)).$ The theorem by Holmgren and Janson states: ###### Theorem 7 ([22], Theorem 1.14) Let $T_{k}$ be a random binary search tree of size $k$. If $\displaystyle\sum_{k=1}^{\infty}\frac{\mathbb{V}(f(T_{k}))^{1/2}}{k^{3/2}}<\infty,\quad\lim_{k\rightarrow\infty}\frac{\mathbb{V}(f(T_{k}))}{k}=0\quad\text{ and }\quad$ $\displaystyle\sum_{k=1}^{\infty}\frac{\mathbb{E}(f(T_{k}))^{2}}{k^{2}}<\infty,$ then for certain constants $\nu$ and $\sigma\geq 0$, we have $\mathbb{E}(\mathcal{F}(T_{k}))\sim\nu k\text{ and }\mathbb{V}(\mathcal{F}(T_{k}))\sim\sigma^{2}k.$ Moreover, if $\sigma\neq 0$, then $\displaystyle\mathbb{P}(\mathcal{F}(T_{k})\leq\nu k+\sigma\sqrt{k}x)\overset{k\to\infty}{\to}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt$ for every real number $x$. In particular, we have $\displaystyle\nu=\sum_{k=1}^{\infty}\frac{2}{k(k+1)}\mathbb{E}(f(T_{k})).$ Note that in [22], the equivalent binary search model is considered that allows binary trees to have unary nodes, so that the index of summation has to be shifted in the sum defining $\nu$. Moreover, note that if we set $f(t)=\log_{2}(|t|-1)$ for $|t|>1$ and $f(t)=0$ otherwise, we have $\displaystyle\mathcal{F}(t)=\sum_{v\in t}f(t(v))=\sum_{v\in t\atop|t(v)|>1}\log_{2}(|t(v)|-1)=\log_{2}(P_{\operatorname{bst}}(t)^{-1}),$ by definition of $P_{\operatorname{bst}}$ in (3), and thus Theorem 6 follows as a special case of Theorem 7. This special case is also considered in Example 8.13 of [22]. As our main application of Theorem 7, we transfer the statement of Theorem 5 from uniformly random binary trees to random binary search trees, that is, we show that if the random number $A_{k}=\lvert\operatorname{\mathrm{Aut}}(T_{k})\rvert$ denotes the size of the automorphism group of a random binary search tree $T_{k}$ with $k$ leaves, then the logarithm of this random variable satisfies a central limit theorem as well. For this, we define the function $f:\mathcal{T}\rightarrow\mathbb{R}$ in Theorem 7 by $\displaystyle f(t)=\begin{cases}1\quad&\text{if the root of }t\text{ is a symmetrical node,}\\\ 0\quad&\text{otherwise.}\end{cases}$ We thus have $\displaystyle\mathcal{F}(t)=\sum_{v\in t}f(t(v))=\operatorname{sym}(t),$ that is, $\mathcal{F}(t)$ evaluates to the number of symmetrical nodes in $t$. Recall that $2^{\operatorname{sym}(t)}$ equals the size of the automorphism group $\operatorname{\mathrm{Aut}}(t)$ of $t$. It is not difficult to check that $f$ satisfies the conditions of Theorem 7: As $f(t)\in\\{0,1\\}$ for every $t\in\mathcal{T}$, we have $\mathbb{E}(f(T_{k})^{2})=\mathbb{E}(f(T_{k}))\in[0,1]$ and thus $\mathbb{V}(f(T_{k}))\in[0,1]$ as well, so that the assumptions of Theorem 7 are satisfied. In order to determine the corresponding value $\nu$, we start with estimating the expectation $\mathbb{E}(f(T_{k}))$: If $k$ is odd, then $\mathbb{E}(f(T_{k}))=0$, as the fringe subtrees rooted at the root’s children cannot be of the same size in this case and thus cannot be isomorphic. If $k$ is even, then $\mathbb{E}(f(T_{k}))$ equals the probability that these two subtrees are of the same size $\frac{k}{2}$ (which is $\frac{1}{k-1}$) times the probability that these two subtrees of size $\frac{k}{2}$ are isomorphic. 
In order to estimate the latter probability, let $P_{k}^{r}$ for positive integers $k$ and $r$ denote the probability that $2^{r}$ random binary search trees of size $k$ are isomorphic, and let $\delta(k)=1$ if $k$ is even and $\delta(k)=0$ otherwise. We find that $P_{k}^{r}$ satisfies the following recurrence relation: $\displaystyle P_{k}^{r}=\smashoperator[]{\sum_{i=1}^{\lfloor\frac{k-1}{2}\rfloor}}\Big{(}\frac{2}{k-1}\Big{)}^{\\!2^{r}}\\!P_{i}^{r}P_{k-i}^{r}+\delta(k)\Big{(}\frac{1}{k-1}\Big{)}^{\\!2^{r}}\\!\Big{(}2^{2^{r}-1}\big{(}P_{k/2}^{r}\big{)}^{2}\\!-\\!\Big{(}2^{2^{r}-1}-1\Big{)}P_{k/2}^{r+1}\Big{)},$ with $P_{k}^{r}=1$ for $k\in\\{1,2,3\\}$ and every positive integer $r$. To see that this recurrence relation holds, first consider the case that $k$ is odd: If all the $2^{r}$ trees are isomorphic, then the respective sizes of the fringe subtrees rooted at the root nodes’ children must coincide, that is, there are integers $i$ and $k-i$ with $1\leq i\leq\lfloor\frac{k-1}{2}\rfloor$, such that for each of the $2^{r}$ trees, one of those subtrees is of size $i$ while the other is of size $k-i$. This holds with probability $(2/(k-1))^{2^{r}}$. Moreover, all of the $2^{r}$ subtrees of size $i$ (respectively, $k-i$) have to be isomorphic, which holds with probability $P_{i}^{r}$ (respectively, $P_{k-i}^{r}$). If $k$ is even, we furthermore have to consider the case that $i=\frac{k}{2}$, which holds with probability $(1/(k-1))^{2^{r}}$: In this case pick the first of the $2^{r}$ trees and let $t_{1}$ (respectively, $t_{2}$) denote the fringe subtree of size $\frac{k}{2}$ rooted at the root node’s left (respectively, right) child. For all of the other $2^{r}-1$ trees, one of the fringe subtrees rooted at the root node’s children has to be isomorphic to $t_{1}$, while the other has to be isomorphic to $t_{2}$: This holds with probability $(P_{k/2}^{r})^{2}$. Moreover, for each of those $2^{r}-1$ many trees, we can choose whether the subtree rooted at the root’s left child or right child is isomorphic to $t_{1}$, which gives us $2^{2^{r}-1}$ many possibilities. However, in the case that $t_{1}$ is isomorphic to $t_{2}$ as well (which means that all the $2^{r+1}$ subtrees are isomorphic, which holds with probability $P_{k/2}^{r+1}$), this means some overcounting, which is taken into account by the final term. Thus, the recursion for $P_{k}^{r}$ follows. We find for a random binary search tree $T_{k}$ of size $k$: $\displaystyle\mathbb{E}(f(T_{k}))=\begin{cases}\frac{1}{k-1}P^{1}_{k/2}\quad&\text{if }k\text{ is even,}\\\ 0\quad&\text{otherwise.}\end{cases}$ Thus, we have $\displaystyle\nu=\sum_{k=1}^{\infty}\frac{2}{k(k+1)}\mathbb{E}(f(T_{k}))=\sum_{k=1}^{\infty}\frac{P_{k}^{1}}{k(2k+1)(2k-1)}\approx 0.3795493473,$ where the numerical value for $\nu$ can be determined using the recurrence relation for $P_{k}^{r}$. We remark that the constant $\sigma^{2}$ can also be evaluated numerically. It is approximately $0.115$, thus in particular not $0$, but we do not need its precise value. The following theorem now follows from Theorem 7: ###### Theorem 8 Consider a random binary search tree $T_{k}$ with $k$ leaves, and let $A_{k}=\lvert\operatorname{\mathrm{Aut}}(T_{k})\rvert$ be the cardinality of its automorphism group. 
The logarithm of this random variable satisfies a central limit theorem: for certain positive constants $\nu$ and $\sigma_{3}$, we have $\displaystyle\mathbb{P}(A_{k}\leq 2^{\nu k+\sigma_{3}\sqrt{k}x})\overset{k\to\infty}{\to}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt$ for every real number $x$. The numerical value of $\nu$ is $\nu\approx 0.3795493473$. ### 4.1 Ordered Fringe Subtrees in Random Binary Search Trees We are now able to prove Theorem 4: ###### Proof (Proof of Theorem 4) _The upper bound:_ Let $k_{0}=\log_{4}n$. We upper-bound the number $J_{n}$ of distinct fringe subtrees in a random binary search tree with $n$ leaves as follows: The number of distinct fringe subtrees with fewer than $k_{0}$ leaves is trivially bounded from above by the number of all binary trees of size at most $k_{0}$ (irrespective of their occurrence as fringe subtrees), which is $\displaystyle\sum_{k<k_{0}}C_{k-1}=O\left(\frac{4^{k_{0}}}{k_{0}^{3/2}}\right)=O\left(\frac{n}{(\ln n)^{3/2}}\right).$ This upper bound holds deterministically. The number of distinct fringe subtrees with at least $k_{0}$ leaves is upper-bounded by the total number of fringe subtrees with at least $k_{0}$ leaves: For this, we apply Lemma 2 with $a=\frac{1}{\ln 4}$, $\varepsilon<\frac{1}{3}$ and $\mathcal{S}_{k}=\mathcal{T}_{k}$, so that $p_{k}=1$ for $k_{0}\leq k\leq n^{\varepsilon}$. Thus, both in expectation and with high probability, as the estimate from Lemma 2 (part (4)) holds with high probability simultaneously for all $k$ in the given range, we obtain: $\displaystyle\left(\sum_{k_{0}\leq k\leq n^{\varepsilon}}X_{n,k}\right)+Y_{n,\varepsilon}$ $\displaystyle=\sum_{k_{0}\leq k\leq n^{\varepsilon}}\frac{2n}{k(k+1)}(1+O(kn^{-1/2+\varepsilon}))+O(n^{1-\frac{\varepsilon}{2}})$ $\displaystyle=2\ln 4\cdot\frac{n}{\ln n}(1+o(1)).$ Hence, we find that $J_{n}$ is both in expectation and with high probability bounded from above by $\displaystyle J_{n}\leq\sum_{k<k_{0}}C_{k}+\left(\sum_{k_{0}\leq k\leq n^{\varepsilon}}X_{n,k}\right)+Y_{n,\varepsilon}=2\ln 4\cdot\frac{n}{\ln n}(1+o(1)).$ The numerical value of the constant is $c_{6}=2\ln(4)\approx 2.7725887222$. _The lower bound:_ As a consequence of Theorem 6, the probability that $P_{\operatorname{bst}}(T_{k})^{-1}\leq 2^{\mu k-k^{3/4}}$ for a random binary search tree $T_{k}$ of size $k$ tends to $0$ for $k\rightarrow\infty$. Let $\mathcal{S}_{k}$ denote the set of binary trees of size $k$ that do not satisfy this inequality: Thus, every binary tree $t\in\mathcal{S}_{k}$ satisfies $P_{\operatorname{bst}}(t)\leq 2^{-\mu k+k^{3/4}}$ and we have $p_{k}=1+o(1)$. In order to prove the lower bound, we only consider fringe subtrees in $\mathcal{S}_{k}$ for suitable $k$: Thus, we can suitably upper- bound the probability that two fringe subtrees of size $k$ in a random binary search tree are identical. Let $\delta$ denote a positive constant with $\delta<\frac{2}{3}$, let $k_{1}=(1+\delta)\mu^{-1}\log_{2}n$ and let $k_{1}\leq k\leq n^{\delta/2}$. By Lemma 2, the number of such fringe subtrees in a random binary search tree with $n$ leaves is $2np_{k}/(k(k+1))$ in expectation and $\displaystyle\frac{2np_{k}}{k(k+1)}(1+O(p_{k}^{-1/2}n^{(\delta-1)/2}k))$ with high probability. Furthermore, let $X^{(2)}_{n,k}$ denote the (random) number of pairs of identical fringe subtrees among the fringe subtrees with $k$ leaves that belong to $\mathcal{S}_{k}$, for $k_{1}\leq k\leq n^{\delta/2}$. Let us condition on the event that $X_{n,k}=N$ for some nonnegative integer $N\leq n$. 
Those $N$ fringe subtrees are all independent random binary search trees, and the probability that such a fringe subtree equals a given binary tree $t\in\mathcal{S}_{k}$ is $P_{\operatorname{bst}}(t)/p_{k}$. Thus, we have $\displaystyle\mathbb{E}(X_{n,k}^{(2)}\mid X_{n,k}=N)=\binom{N}{2}\sum_{t\in\mathcal{S}_{k}}\left(\frac{P_{\operatorname{bst}}(t)}{p_{k}}\right)^{2}\leq\frac{n^{2}}{2}\frac{1}{p_{k}^{2}}\sum_{t\in\mathcal{S}_{k}}P_{\operatorname{bst}}(t)^{2}.$ As by assumption, $P_{\operatorname{bst}}(t)\leq 2^{-\mu k+k^{3/4}}$ for every $t\in\mathcal{S}_{k}$, the expected value is upper-bounded by $\displaystyle\mathbb{E}(X_{n,k}^{(2)}\mid X_{n,k}=N)\leq\frac{n^{2}}{2}\frac{2^{-\mu k+k^{3/4}}}{p_{k}^{2}}\sum_{t\in\mathcal{S}_{k}}P_{\operatorname{bst}}(t)=\frac{n^{2}}{p_{k}}2^{-\mu k-1+k^{3/4}}.$ Since this upper bound for $\mathbb{E}(X_{n,k}^{(2)}\mid X_{n,k}=N)$ holds independently of $N$, the law of total expectation yields $\displaystyle\mathbb{E}(X_{n,k}^{(2)})=\sum_{N=0}^{n}\mathbb{E}(X_{n,k}^{(2)}\mid X_{n,k}=N)\mathbb{P}(X_{n,k}=N)\leq\frac{n^{2}}{p_{k}}2^{-\mu k-1+k^{3/4}}.$ With $p_{k}=1+o(1)$, we obtain $\displaystyle\mathbb{E}(X_{n,k}^{(2)})\leq n^{2}2^{-\mu k-1+k^{3/4}}(1+o(1))=n^{2}2^{-\mu k+O(k^{3/4})}.$ As $k\geq k_{1}=\frac{1+\delta}{\mu}\log_{2}n$, we find $\displaystyle\mathbb{E}(X_{n,k}^{(2)})\leq n^{2}2^{-(1+\delta)\log_{2}n+O(k^{3/4})}\leq n^{1-\delta}2^{O((\log_{2}n)^{3/4})}.$ Thus, $\displaystyle\sum_{k_{1}\leq k\leq n^{\delta/2}}\mathbb{E}(X_{n,k}^{(2)})\leq n^{1-\delta/2}2^{O((\log_{2}n)^{3/4})}=o(n/\ln n).$ The (random) number $J_{n}$ of distinct fringe subtrees in a random binary search tree of size $n$ is lower-bounded by the number of distinct fringe subtrees of sizes $k$ for $k_{1}\leq k\leq n^{\delta/2}$ that belong to $\mathcal{S}_{k}$, and this number is again lower-bounded by the sum over $X_{n,k}-X_{n,k}^{(2)}$ for $k_{1}\leq k\leq n^{\delta/2}$. We thus have $\displaystyle J_{n}\geq\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}-\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}^{(2)}.$ The second sum is $o(n/\ln n)$ in expectation and hence by Markov’s inequality with high probability as well. The first sum can be estimated using Lemma 2, as in the proof of the upper bound, which yields $\displaystyle\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}=\frac{2\mu\ln 2}{1+\delta}\cdot\frac{n}{\ln n}(1+o(1)),$ in expectation and with high probability. Since $\delta$ can be chosen arbitrarily, the desired statement holds for any constant $\displaystyle c_{5}<2\mu\ln 2\approx 2.4071298335.$ ### 4.2 Unordered Fringe Subtrees in Random Binary Search Trees It remains to prove Theorem 2: ###### Proof (Proof of Theorem 2) _The upper bound_ : The proof for the upper bound exactly matches the first part of the proof of Theorem 4, except that we choose the cut-point $k_{0}=\log_{b}(n)$, where $b\approx 2.4832535362$ is the constant determining the asymptotic growth of the Wedderburn-Etherington numbers. _The lower bound_ : Let $T_{k}$ denote a random binary search tree with $k$ leaves. As a consequence of Theorem 8, the probability that $\lvert\operatorname{\mathrm{Aut}}(T_{k})\rvert=A_{k}\leq 2^{\nu k-k^{3/4}}$ tends to $0$ as $k\rightarrow\infty$. Moreover, by Theorem 6 the probability that $B_{k}=P_{\operatorname{bst}}(T_{k})^{-1}\leq 2^{\mu k-k^{3/4}}$ tends to $0$ for $k\rightarrow\infty$. 
Let $\mathcal{S}_{k}$ denote the set of ordered binary trees with $k$ leaves for which neither of the two inequalities is satisfied: Thus, every binary tree $t\in\mathcal{S}_{k}$ satisfies $P_{\operatorname{bst}}(t)\leq 2^{-\mu k+k^{3/4}}$ and $\lvert\operatorname{\mathrm{Aut}}(t)\rvert\geq 2^{\nu k-k^{3/4}}$, and we have $p_{k}=1+o(1)$. By the orbit-stabilizer theorem, we find that the number of ordered binary trees in the same isomorphism class as a tree $t\in\mathcal{S}_{k}$ is bounded from above by $\displaystyle\frac{2^{k-1}}{\lvert\operatorname{\mathrm{Aut}}(t)\rvert}\leq\frac{2^{k-1}}{2^{\nu k-k^{3/4}}}=2^{(1-\nu)k-1+k^{3/4}}.$ In order to prove the lower bound, we only consider fringe subtrees in $\mathcal{S}_{k}$ for suitable $k$: Thus, we are able to suitably upper-bound the probability that two fringe subtrees of size $k$ in a random binary search tree are identical as unordered binary trees. Let $\delta<\frac{2}{3}$, let $k_{1}=(1+\delta)\log_{2}n/(\mu+\nu-1)$ and let $k_{1}\leq k\leq n^{\delta/2}$. By Lemma 2, the number of fringe subtrees of size $k$ with $k_{1}\leq k\leq n^{\delta/2}$ is $2np_{k}/(k(k+1))$ in expectation and $\displaystyle\frac{2np_{k}}{k(k+1)}(1+O(p_{k}^{-1/2}n^{(\delta-1)/2}k))$ with high probability. For $k_{1}\leq k\leq n^{\delta/2}$, let $X_{n,k}^{(2)}$ denote the (random) number of pairs of isomorphic binary trees among the fringe subtrees of size $k$ that belong to $\mathcal{S}_{k}$. Moreover, let $l$ denote the number of isomorphism classes of binary trees in $\mathcal{S}_{k}$ and for each isomorphism class, pick one representative: Let $t_{1},t_{2},\dots,t_{l}$ denote those representatives. If a binary tree $t$ is in the same isomorphism class as tree $t_{i}$, then $P_{\operatorname{bst}}(t)=P_{\operatorname{bst}}(t_{i})$ and $\lvert\operatorname{\mathrm{Aut}}(t)\rvert=\lvert\operatorname{\mathrm{Aut}}(t_{i})\rvert$. In particular, if $t\in\mathcal{T}_{k}$ is isomorphic to a representative $t_{i}$, then $t\in\mathcal{S}_{k}$ as well, that is, all binary trees that are isomorphic to a binary tree in $\mathcal{S}_{k}$ are automatically contained in $\mathcal{S}_{k}$ as well. As there are $2^{k-1}/\lvert\operatorname{\mathrm{Aut}}t_{i}\rvert$ many trees in the same isomorphism class as the binary tree $t_{i}$, we find $\displaystyle\sum_{i=1}^{l}P_{\operatorname{bst}}(t_{i})\frac{2^{k-1}}{\lvert\operatorname{\mathrm{Aut}}t_{i}\rvert}=\sum_{t\in\mathcal{S}_{k}}P_{\operatorname{bst}}(t)=p_{k}.$ Let us condition on the event that $X_{n,k}=N$ for some integer $0\leq N\leq n$. Those $N$ fringe subtrees are all independent random binary search trees, and the probability that such a fringe subtree is isomorphic to a given binary tree $t_{i}\in\mathcal{S}_{k}$ is $(P_{\operatorname{bst}}(t_{i})/p_{k})\cdot(2^{k-1}/\lvert\operatorname{\mathrm{Aut}}(t_{i})\rvert)$. Thus, we find $\displaystyle\mathbb{E}(X_{n,k}^{(2)}\mid X_{n,k}=N)\\!=\\!\binom{N}{2}\sum_{i=1}^{l}\left(\frac{2^{k-1}P_{\operatorname{bst}}(t_{i})}{\lvert\operatorname{\mathrm{Aut}}t_{i}\rvert p_{k}}\right)^{2}\\!\leq\frac{n^{2}}{2}\frac{1}{p_{k}^{2}}\sum_{i=1}^{l}\left(\frac{2^{k-1}P_{\operatorname{bst}}(t_{i})}{\lvert\operatorname{\mathrm{Aut}}t_{i}\rvert}\right)^{2}\\!.$ As $t_{1},\dots,t_{l}\in\mathcal{S}_{k}$, we have $P_{\operatorname{bst}}(t_{i})\leq 2^{-\mu k+k^{3/4}}$ and thus $2^{k-1}/\lvert\operatorname{\mathrm{Aut}}t_{i}\rvert\leq 2^{(1-\nu)k-1+k^{3/4}}$ for every $i\in\\{1,2,\dots,l\\}$. 
Hence $\displaystyle\mathbb{E}(X_{n,k}^{(2)}\mid X_{n,k}=N)$ $\displaystyle\leq\frac{n^{2}2^{(1-\nu-\mu)k+2k^{3/4}-2}}{p_{k}^{2}}\sum_{i=1}^{l}\left(\frac{2^{k-1}P_{\operatorname{bst}}(t_{i})}{\lvert\operatorname{\mathrm{Aut}}t_{i}\rvert}\right)$ $\displaystyle=p_{k}^{-1}n^{2}2^{(1-\nu-\mu)k+2k^{3/4}-2}.$ As this upper bound on the expectation is independent of $N$, we find by the law of total expectation: $\displaystyle\mathbb{E}(X_{n,k}^{(2)})\leq p_{k}^{-1}n^{2}2^{(1-\nu-\mu)k+2k^{3/4}-2}=n^{2}2^{-(\nu+\mu-1)k+2k^{3/4}-2}(1+o(1)).$ With $k\geq k_{1}=(1+\delta)/(\mu+\nu-1)\log_{2}n$, we obtain $\displaystyle\mathbb{E}(X_{n,k}^{(2)})\leq n^{2}2^{-(1+\delta)\log_{2}n+O((\log_{2}n)^{3/4})}=n^{1-\delta}2^{O((\log_{2}n)^{3/4})}.$ Thus, $\displaystyle\sum_{k_{1}\leq k\leq n^{\delta/2}}\mathbb{E}(X_{n,k}^{(2)})\leq n^{1-\frac{\delta}{2}}2^{O((\log_{2}n)^{3/4})}=o(n/\ln n).$ Analogously as in the previous proofs, we lower-bound the random number $G_{n}$ of non-isomorphic fringe subtrees in a random binary search tree of size $n$ by the number of such fringe subtrees of sizes $k$ for $k_{1}\leq k\leq n^{\delta/2}$ that belong to $\mathcal{S}_{k}$, and this number is again lower-bounded by the sum over $X_{n,k}-X_{n,k}^{(2)}$ for $k_{1}\leq k\leq n^{\delta/2}$ by the inclusion-exclusion principle. We thus have $\displaystyle G_{n}\geq\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}-\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}^{(2)}.$ The second sum is $o(n/\ln n)$ in expectation and hence by Markov’s inequality with high probability as well. The first sum is bounded similarly as in the estimate for the upper bound, which yields $\displaystyle\sum_{k_{1}\leq k\leq n^{\delta/2}}X_{n,k}=\frac{2(\mu+\nu-1)\ln 2}{1+\delta}\cdot\frac{n}{\ln n}(1+o(1)).$ Since $\delta$ can again be chosen arbitrarily, the desired statement holds for any constant $c_{3}<2(\mu+\nu-1)\ln 2\approx 1.5470025923.$ $\scriptstyle\blacksquare$ ## 5 Open Problems The following natural question arises from our results: Is it possible to determine constants $\alpha_{1},\alpha_{2},\alpha_{3}$ with $c_{1}\leq\alpha_{1}\leq c_{2}$, $c_{3}\leq\alpha_{2}\leq c_{4}$ and $c_{5}\leq\alpha_{3}\leq c_{6}$, such that $\displaystyle\mathbb{E}(F_{n})=\frac{\alpha_{1}n}{\sqrt{\log n}}(1+o(1)),\ \mathbb{E}(G_{n})=\frac{\alpha_{2}n}{\log n}(1+o(1)),\ \mathbb{E}(J_{n})=\frac{\alpha_{3}n}{\log n}(1+o(1)),$ respectively, and $\displaystyle\frac{F_{n}}{n/\sqrt{\log n}}\overset{P}{\to}\alpha_{1},\ \frac{G_{n}}{n/\log n}\overset{P}{\to}\alpha_{2},\ \text{and}\ \frac{J_{n}}{n/\log n}\overset{P}{\to}\alpha_{3}\ ?$ In order to prove such estimates, it seems essential to gain a better understanding of the random variables $P_{\operatorname{bst}}(T_{k})^{-1}$ and $\lvert\operatorname{\mathrm{Aut}}(T_{k})\rvert$, in particular their distributions further away from the mean values, for random binary search trees or uniformly random ordered binary trees $T_{k}$ of size $k$. ## References * [1] Serge Abiteboul, Pierre Bourhis, and Victor Vianu. Highly expressive query languages for unordered data trees. Theory of Computing Systems, 57(4):927–966, 2015. * [2] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley series in computer science / World student series edition. Addison-Wesley, 1986. * [3] David Aldous. Asymptotic fringe distributions for general families of random trees. The Annals of Applied Probability, 1(2):228–266, 1991. * [4] Miklós Bóna and Philippe Flajolet. 
Isomorphism and symmetries in random phylogenetic trees. Journal of Applied Probability, 46(4):1005–1019, 2009.
* [5] Iovka Boneva, Radu Ciucanu, and Slawek Staworko. Schemas for unordered XML on a DIME. Theory of Computing Systems, 57(2):337–376, 2015.
* [6] Mireille Bousquet-Mélou, Markus Lohrey, Sebastian Maneth, and Eric Noeth. XML compression via DAGs. Theory of Computing Systems, 57(4):1322–1371, 2015.
* [7] Randal E. Bryant. Symbolic boolean manipulation with ordered binary-decision diagrams. ACM Computing Surveys, 24(3):293–318, 1992.
* [8] Peter Buneman, Martin Grohe, and Christoph Koch. Path queries on compressed XML. In Johann Christoph Freytag et al., editors, Proceedings of the 29th Conference on Very Large Data Bases, VLDB 2003, pages 141–152. Morgan Kaufmann, 2003.
* [9] Florian Dennert and Rudolf Grübel. On the subtree size profile of binary search trees. Combinatorics, Probability and Computing, 19(4):561–578, 2010.
* [10] Luc Devroye. On the richness of the collection of subtrees in random binary search trees. Information Processing Letters, 65(4):195–199, 1998.
* [11] Luc Devroye and Svante Janson. Protected nodes and fringe subtrees in some random trees. Electronic Communications in Probability, 19:1–10, 2014.
* [12] Michael Drmota. Random Trees: An Interplay Between Combinatorics and Probability. Springer Publishing Company, Incorporated, 1st edition, 2009.
* [13] Qunqiang Feng and Hosam M. Mahmoud. On the variety of shapes on the fringe of a random recursive tree. Journal of Applied Probability, 47(1):191–200, 2010.
* [14] Qunqiang Feng, Hosam M. Mahmoud, and Alois Panholzer. Phase changes in subtree varieties in random recursive and binary search trees. SIAM Journal on Discrete Mathematics, 22(1):160–184, 2008.
* [15] James Allen Fill. On the distribution of binary search trees under the random permutation model. Random Structures & Algorithms, 8(1):1–25, 1996.
* [16] Steven R. Finch and Gian-Carlo Rota. Mathematical Constants. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2003.
* [17] Philippe Flajolet, Xavier Gourdon, and Conrado Martínez. Patterns in random binary search trees. Random Structures & Algorithms, 11(3):223–244, 1997.
* [18] Philippe Flajolet and Robert Sedgewick. Analytic Combinatorics. Cambridge University Press, 2009.
* [19] Philippe Flajolet, Paolo Sipala, and Jean-Marc Steyaert. Analytic variations on the common subexpression problem. In Proceedings of the 17th International Colloquium on Automata, Languages and Programming, ICALP 1990, volume 443 of Lecture Notes in Computer Science, pages 220–234. Springer, 1990.
* [20] Markus Frick, Martin Grohe, and Christoph Koch. Query evaluation on compressed trees (extended abstract). In Proceedings of the 18th Annual IEEE Symposium on Logic in Computer Science, LICS 2003, pages 188–197. IEEE Computer Society Press, 2003.
* [21] Moses Ganardi, Danny Hucke, Markus Lohrey, and Louisa Seelbach Benkner. Universal tree source coding using grammar-based compression. IEEE Transactions on Information Theory, 65(10):6399–6413, 2019.
* [22] Cecilia Holmgren and Svante Janson. Limit laws for functions of fringe trees for binary search trees and random recursive trees. Electronic Journal of Probability, 20:1–51, 2015.
* [23] John C. Kieffer, En-Hui Yang, and Wojciech Szpankowski. Structural complexity of random binary trees. In Proceedings of the 2009 IEEE International Symposium on Information Theory, ISIT 2009, pages 635–639. IEEE, 2009.
* [24] Markus Lohrey, Sebastian Maneth, and Carl Philipp Reh. Compression of unordered XML trees. In 20th International Conference on Database Theory, ICDT 2017, March 21-24, 2017, Venice, Italy, pages 18:1–18:17, 2017.
* [25] Mike Paterson and Mark N. Wegman. Linear unification. Journal of Computer and System Sciences, 16(2):158–167, 1978.
* [26] Dimbinaina Ralaivaosaona and Stephan G. Wagner. Repeated fringe subtrees in random rooted trees. In Proceedings of the Twelfth Workshop on Analytic Algorithmics and Combinatorics, ANALCO 2015, pages 78–88. SIAM, 2015.
* [27] Louisa Seelbach Benkner and Markus Lohrey. Average case analysis of leaf-centric binary tree sources. In 43rd International Symposium on Mathematical Foundations of Computer Science, MFCS 2018, August 27-31, 2018, Liverpool, UK, pages 16:1–16:15, 2018.
* [28] Jie Zhang, En-Hui Yang, and John C. Kieffer. A universal grammar-based code for lossless compression of binary trees. IEEE Transactions on Information Theory, 60(3):1373–1386, 2014.
* [29] Sen Zhang, Zhihui Du, and Jason Tsong-Li Wang. New techniques for mining frequent patterns in unordered trees. IEEE Transactions on Cybernetics, 45(6):1113–1125, 2015.
# Multi-Time-Scale Convolution for Emotion Recognition from Speech Audio Signals

Eric Guizzo, Tillman Weyde, Jack Barnett Leveson

###### Abstract

Robustness against temporal variations is important for emotion recognition from speech audio, since emotion is expressed through complex spectral patterns that can exhibit significant local dilation and compression on the time axis depending on speaker and context. To address this and potentially other tasks, we introduce the multi-time-scale (MTS) method to create flexibility towards temporal variations when analyzing time-frequency representations of audio data. MTS extends convolutional neural networks with convolution kernels that are scaled and re-sampled along the time axis, to increase temporal flexibility without increasing the number of trainable parameters compared to standard convolutional layers. We evaluate MTS and standard convolutional layers in different architectures for emotion recognition from speech audio, using 4 datasets of different sizes. The results show that the use of MTS layers consistently improves the generalization of networks of different capacity and depth, compared to standard convolution, especially on smaller datasets.

Index Terms— Convolutional Neural Network, Scale Invariance, Speech Emotion Recognition

## 1 Introduction

Convolutional Neural Networks (CNNs) have been extremely successful in recent years in a number of audio processing tasks, such as source separation, audio denoising, speech enhancement, and speech and music transcription [1, 2, 3, 4]. CNNs have also been extensively adopted for speech emotion recognition (SER) [5, 6, 7]. Convolutional networks benefit from translation invariance of the processing along the time and frequency axes of a spectrogram or other time-frequency representations. However, in speech there are also variations in the speed of articulation between speakers, and even of the same speaker in different situations. The main idea in this work is therefore to allow the same kernel to be matched in multiple versions that are scaled differently on the time axis. We implement this in a self-contained layer architecture, the multi-time-scale (MTS) convolution layer, which does not increase the number of parameters yet increases the temporal flexibility of our networks compared to standard CNNs. Separate treatment of the dimensions is useful for speech processing with time-frequency representations, as opposed to image processing, where scaling is normally applied to both dimensions. The contributions of our work are specifically:

* a convolution layer design for audio emotion recognition that learns locally scale-invariant features in the time dimension
* an evaluation of our approach on 4 emotion-labelled speech datasets with 4 different network architectures
* an analysis of the experimental results, confirming the effectiveness of the MTS approach.

The remainder of the paper is organized as follows: Section 2 contains a brief review of relevant background literature, Section 3 introduces the architecture of the multi-time-scale convolution layer, Section 4 presents the experimental results we obtained, and Section 5 provides the conclusion of this paper.

## 2 Related Work

Scale-invariance in convolutional neural networks has been addressed in a number of ways. The most common approach for audio by far is data augmentation [8, 9], which is frequently done by generating time-stretched variants of the training data.
This procedure is usually part of a pipeline of different transformations, as in [10], which has proven effective in various tasks. However, in this approach the different scales in the data need to be learned by different filters in the network. Therefore, greater network capacity is required and there is no guarantee that scale-invariance is consistently achieved. Another strategy for scale-invariance in neural networks is to design it into the training and inference methods, so that it is applied consistently and without the need for additional training examples. There are many existing approaches to achieve this. The majority of them use a pyramidal structure, in which the scale is progressively narrowed along the network. [11] use parallel models trained with images at descending resolutions and then combine the obtained predictions as an ensemble model. [12] achieve scale invariance with multiple loss functions, separately computed in layers with different resolutions within the network. Inception networks [12] use parallel convolution layers with different filter sizes, matching features at different scales, but also increasing the number of variables in the network. [13] propose a convolutional architecture in which a scaling factor is learned by the network for every layer. The majority of studies of scale-invariance in neural networks are focused on computer vision tasks. In the acoustic domain, in addition to data augmentation techniques, scale-invariance can also be addressed through specific hard-coded transforms [14] that are robust to some extent to scale variations. Nevertheless, since they are hard-coded, these methods need manual intervention and are usually highly task-specific, while embedding scale-invariance in the models provides a more generic solution that can be applied to multiple domains. The work of [15] is an exception to this trend. They show that a network with $n$ identically-sized filters performs worse than a network with the same number of filters, but split into $3$ different sizes. Nevertheless, their models learn independent filters at different scales, increasing the number of free parameters. Locally scale-invariant convolutional neural networks, as introduced for image recognition in [16], are similar to our approach. This method consists of performing feature extraction through multiple parallel convolution layers, whose outputs are locally merged through max-pooling. This produces a self-contained structure that can substitute for a canonical convolution layer. The key feature of their approach is the possibility of matching a feature at multiple scales without increasing the number of free variables in the network. It permits introducing several re-scaled parallel branches at different points in the network, providing higher flexibility than pyramidal architectures.

## 3 Method

Our approach is similar to [16], but specifically adapted to the audio domain, where we analyse 2D magnitude spectrograms of speech audio. Since the time and frequency dimensions are of a different nature in this representation, we treat them independently. Here, we focus on SER and address only time-scaling, while image processing techniques apply re-scaling to both dimensions with the same factor. The core of our architecture is the multi-time-scale convolution layer (MTS), a custom 2D-convolution layer that can replace a standard convolution layer in a CNN design.
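A compact PyTorch sketch of the forward pass of such a layer is given below. This is an independent illustration, not the released implementation linked further below: the class name, the weight initialization and the default scale factors are our own choices, and the cross-scale weight averaging applied after each training update is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTSConv2d(nn.Module):
    """Illustrative multi-time-scale convolution: one learned kernel is
    convolved at several time-stretched versions, the resulting feature
    maps are re-sampled to a common shape, and the maximum over scales
    is taken at every time-frequency point."""

    def __init__(self, in_ch, out_ch, kernel_size=(10, 5),
                 scales=(0.7, 1.0, 1.428)):   # scales must contain 1.0
        super().__init__()
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_ch, in_ch, *kernel_size))
        self.scales = scales
        self.t = kernel_size[0]               # kernel length in time

    def forward(self, x):                     # x: (batch, ch, time, freq)
        maps = []
        for s in self.scales:
            w = self.weight
            if s != 1.0:                      # re-sample kernel in time only
                w = F.interpolate(
                    w, size=(max(2, round(self.t * s)), w.shape[-1]),
                    mode='bilinear', align_corners=False)
            maps.append(F.conv2d(x, w))
        # bring every map to the shape of the unscaled one, then merge
        # by taking the maximum over scales at each point
        ref = maps[self.scales.index(1.0)].shape[-2:]
        maps = [F.interpolate(m, size=ref, mode='bilinear',
                              align_corners=False) for m in maps]
        return torch.stack(maps).amax(dim=0)
```

A module along these lines can be dropped in wherever an `nn.Conv2d` would otherwise be used, which is the sense in which the layer replaces a standard convolution.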
The main feature of MTS is that it uses multiple versions of the learned kernel that are re-sampled on the time axis and performs parallel convolutions with them. This method enables the network to detect patterns at multiple time scales.

Fig. 1: Example architecture of a Multi-Time-Scale convolution layer with 3 scale factors.

Figure 1 shows the architecture of one MTS layer with 3 parallel branches. In this example, the 2D spectrogram input is convolved in parallel with the original kernel (in the center) and 2 time-stretched versions of the kernel (on both sides). The latter are generated by re-sampling the original kernel, applying linear interpolation. It is possible to independently apply different scaling factors for the 2 dimensions. These parallel convolutions produce 3 different feature maps, matching the feature of the original kernel at 3 different time scales. After this stage, the scaled feature maps are re-sampled again (applying linear interpolation) to match the shape of the original feature map. Then, a 3D max-pooling function is applied to merge the feature maps, selecting the scale with the maximal result in every time-frequency point. Therefore, the pooled feature map maintains the same dimensions as the feature map generated by the original kernel. During training we average the weights of the original kernel and its scaled versions after each update. There is no constraint by design on the number of parallel branches that can be added to an MTS layer, and MTS layers with different numbers of branches can be placed at various positions in the network. It is possible to fine-tune the scaling factors layer-by-layer. This approach provides a high degree of flexibility in the network design and enables scale invariance without increasing the number of free parameters. We have implemented this method in PyTorch as open source (https://github.com/ericguizzo/multi_time_scale). Our method is different from [16] in that it re-scales only one dimension and that we re-sample the kernels. Although re-sampling the data or the kernel is equivalent up to numerical variations, our method is somewhat more efficient. Moreover, [16] augment test data by re-scaling. At least for SER tasks, we believe that this practice would not give a good estimate of the generalization capabilities of the models, and thus we test without augmentation.

## 4 Evaluation

We have evaluated the performance of MTS on 4 benchmark datasets for speech emotion recognition:

1. EMODB, a database of German emotional speech [17]. 10 speakers, German language, 535 utterances, 25 min of audio, 7 emotion labels: angry, bored, disgusted, anxious/fearful, happy, sad, neutral. Actors pronounce 10 different sentences which could be used in everyday communication.
2. RAVDESS, the Ryerson Audio Visual Database of Emotional Speech and Song [18]. 24 speakers, English language, 2542 utterances, 2:47 hours of audio, 8 emotion labels: happy, sad, angry, fearful, surprised, disgusted, calm, neutral. Actors pronounce 2 sentences: “Kids are talking by the door” and “Dogs are sitting by the door”.
3. TESS, the Toronto Emotional Speech Set [19]. 2 speakers, English language, 2800 utterances, 1:36 hours of audio, 7 emotion labels: happy, sad, angry, disgusted, neutral, pleasant surprise, fearful. Actors say “Say the word …” followed by 200 different words.
4. IEMOCAP, the Interactive Emotional Dyadic Motion Capture Database [20]. 5 speakers, English language, 7529 utterances, 9:32 hours of audio, 10 emotion labels: neutral, angry, happy, excited, sad, frustrated, fearful, surprised, disgusted, other. Actors perform improvisations or scripted scenarios on defined topics.

For each dataset we keep only the audio information and the emotion labels, discarding any other types of data. We also discard the “song” data from RAVDESS. IEMOCAP is the only highly imbalanced dataset; we therefore removed the rarest labels from it, keeping only the neutral, angry, happy and sad samples. Every sound file is pre-processed in 3 consecutive stages: re-sampling to 16 kHz, Short-Time Fourier Transform (STFT) and normalization. For the EMODB, RAVDESS and TESS datasets every file is zero-padded to obtain equally-sized data. Since the IEMOCAP dataset contains longer recordings, we segmented them into 4-second frames with 2-second overlap. The STFT is computed using 20 ms sliding windows with 10 ms overlap. Then, we normalize the magnitude spectra to zero mean and unit standard deviation.

Table 1: Accuracy results for all datasets. N is the number of audio recordings per dataset. A1-A4 are the network architectures. The usage factors relate to the scaling factors in the same row. The best results per dataset are highlighted in bold font.

| Dataset | N | Type | A1 | A2 | A3 | A4 | Best scale factors | Use of parallel branches |
|---|---|---|---|---|---|---|---|---|
| EMODB | 535 | Standard | 64.3 | 66.26 | 66.91 | 62.75 | n/a | n/a |
| EMODB | 535 | MTS | 66.5 | **70.97** | 70.68 | 66.28 | 0.7, 1, 1.428 | 0.47, 0.05, 0.48 |
| RAVDESS | 1440 | Standard | 42.09 | 39.84 | 42.56 | 47.41 | n/a | n/a |
| RAVDESS | 1440 | MTS | 47.85 | 44.95 | 51.32 | **55.85** | 0.5, 1, 2 | 0.45, 0.06, 0.49 |
| TESS | 2800 | Standard | 47.45 | 49.6 | 50.61 | 40.78 | n/a | n/a |
| TESS | 2800 | MTS | 51.76 | 48.75 | **53.05** | 51.71 | 0.5, 0.7, 1, 1.428, 2 | 0.41, 0.04, 0.05, 0.07, 0.43 |
| IEMOCAP | 5531 | Standard | 48.93 | 50.48 | 49.0 | 54.96 | n/a | n/a |
| IEMOCAP | 5531 | MTS | 49.0 | 50.84 | 49.86 | **55.01** | 0.5, 0.7, 1, 1.428, 2 | 0.39, 0.04, 0.04, 0.05, 0.48 |

We divide every dataset using approximately 70% of the data as the training, 20% as the validation and 10% as the test set. Furthermore, we perform every experiment with 4-fold cross-validation. We make sure that samples from the same speaker appear only in the same set, in order to get a meaningful measure of the models’ capability to generalize to new speakers, because new speakers are likely to produce patterns at different speeds. For this and other reasons, our results are not directly comparable to most published results. Many results are computed with randomly-split training, validation and test sets, without separating speakers, as in [21]. Many rely on different preprocessing [22, 23], on different architectures [22] or use multi-modal features rather than only audio [23]. Rather than aiming at state-of-the-art classification accuracy for these datasets, we focus on evaluating the performance of MTS layers compared to standard convolution with the same number of channels, i.e., without increasing the number of trainable variables. Therefore, we arranged our experiments to obtain consistent results within our set-up, with the same conditions for all datasets. We perform this comparison for 4 different CNN architectures with different capacity:

1. A1: Convolution (1 channel, [10,5] kernel) - fully connected (200 neurons) - fully connected output layer.
2. A2: Convolution (10 channels, [10,5] kernel) - fully connected (200 neurons) - fully connected output layer.
3. A3: Convolution (10 channels, [10,5] kernel) - max pooling ([2,2] kernel) - convolution (10 channels, [10,5] kernel) - fully connected (200 neurons) - fully connected output layer.
4. A4: AlexNet: 5 convolutions and max pooling, 2 fully connected layers. See [24] for a detailed description.

The kernel dimensions above are in the form [time, frequency]. The activation function is ReLU for hidden units and softmax for output units. In all experiments we use the Adam optimizer with L2 regularization and the cross-entropy loss. We perform a grid search to find the best regularization parameter. We train for a maximum of 500 epochs, applying early stopping with a patience of 10 epochs on the validation loss. In architectures A1, A2 and A3, MTS is applied to all convolutional layers, while in A4 only the first 2 layers are augmented with MTS. We tested MTS with 3, 5 and 7 parallel branches, using logarithmically spaced scale factors in these combinations: (0.25, 1, 4), (0.5, 1, 2), (0.7, 1, 1.428), (0.8, 1, 1.25), (0.9, 1, 1.111), (0.95, 1, 1.053), (0.25, 0.5, 1, 2, 4), (0.5, 0.7, 1, 1.428, 2), (0.8, 0.9, 1, 1.111, 1.25), (0.25, 0.5, 0.7, 1, 1.428, 2, 4), (0.7, 0.8, 0.9, 1, 1.111, 1.25, 1.428). In each experiment, we apply the same combination of stretch factors to all MTS-enabled layers.

Table 1 shows the results we obtained for all datasets and all architectures. The first 3 columns show the dataset, the total number of data points and the type of convolution layer(s). Columns A1-A4 show the mean test accuracy across folds obtained with each architecture (as listed above). The last 2 columns refer to the MTS model with the best accuracy in a row and show the scaling factors applied to each parallel branch of MTS and their average percentage of use. The results clearly show that MTS consistently improves the generalization for all datasets. We reach a maximum improvement of 8.04 percentage points (RAVDESS), with an average of 3.78 and a standard deviation of 3.45 across all datasets and architectures. For all model/architecture combinations except one (A2 with TESS), MTS outperforms standard convolution. We performed a two-sided Wilcoxon signed-rank test comparing the standard and MTS results, which shows statistical significance with $p<0.001$. The mean improvement is higher for the smaller datasets, which confirms that enabling pattern recognition at different time scales with MTS improves generalization. Considering the general scarcity of emotion-labelled speech data, this is a desirable feature for SER applications. The best-performing models on different datasets used different combinations of scaling factors. In particular, for the smaller datasets applying only 3 factors gives the best results. Architectures with 5 parallel branches perform better for the larger datasets. MTS models tend to use mostly 2 scale factors (see the last column of Table 1). In every case, at least 2 parallel branches give a high contribution, confirming that MTS is actually matching patterns at multiple time scales. We found that MTS is more effective at larger kernel sizes: in an experiment with an MTS version of ResNet18, where most kernels are very small (3x3), we achieved no improvement with MTS. Training an MTS-enabled network generally takes longer than a standard CNN. In a test with architecture A2, it took on average 1.3 times longer per epoch to train MTS models with 3 branches and 1.52 times longer for MTS models with 5 branches.
Moreover, MTS networks need on average more epochs to converge (27.85 vs 32.26 epochs for standard CNN vs MTS, averaged overall). We also tested modified variants of MTS:

* Applying a penalty to the re-sampled feature maps, to give the model a preference for the unscaled kernel.
* Performing the training using standard convolution layers and substituting them with MTS layers with shared weights only at inference time.
* Concatenating the used scaling factor for each time-frequency point to the output feature map of an MTS layer.

Each of these modifications reduced the performance of MTS models. Therefore, we kept the simplest variant described above.

## 5 Conclusions

In this paper, we propose a multi-time-scale convolution layer (MTS) for CNNs applied to audio analysis, specifically emotion recognition from speech. The MTS performs parallel 2D-convolutions using a standard kernel and its re-sampled versions to match patterns at different time scales. This method enables the network to learn, to some extent, time-scale-invariant features without increasing its number of trainable parameters or the number of training examples. We evaluated our approach on speech emotion recognition with unknown speakers, using 4 different datasets and applying it to networks of different size and structure. We found a consistent and statistically significant improvement in test accuracy across all datasets and models, up to 8.04 percentage points for RAVDESS and on average 3.78 across all datasets and architectures. MTS is particularly effective on smaller datasets, which makes it well suited for speech emotion recognition, where labelled data is scarce. As future developments, we intend to test the effectiveness of MTS more extensively with larger datasets and to explore more architectures and different resampling techniques. Furthermore, we are going to apply the concept of MTS in the context of convolution-based generative models, extending our multi-branch approach also to transposed convolutions.

## References

* [1] Andreas Jansson, Eric J. Humphrey, Nicola Montecchio, Rachel M. Bittner, Aparna Kumar, and Tillman Weyde, “Singing voice separation with deep U-Net convolutional networks,” in ISMIR, 2017, pp. 745–751.
* [2] Szu-Wei Fu, Yu Tsao, and Xugang Lu, “SNR-aware convolutional neural network modeling for speech enhancement,” in Interspeech, 2016, pp. 3768–3772.
* [3] Dimitri Palaz, Ronan Collobert, et al., “Analysis of CNN-based speech recognition system using raw speech as input,” Tech. Rep., Idiap, 2015.
* [4] Rachel M. Bittner, Brian McFee, Justin Salamon, Peter Li, and Juan Pablo Bello, “Deep salience representations for f0 estimation in polyphonic music,” in ISMIR, 2017, pp. 63–70.
* [5] Abdul Malik Badshah, Jamil Ahmad, Nasir Rahim, and Sung Wook Baik, “Speech emotion recognition from spectrograms with deep convolutional neural network,” in PlatCon, 2017, pp. 1–5.
* [6] Qirong Mao, Ming Dong, Zhengwei Huang, and Yongzhao Zhan, “Learning salient features for speech emotion recognition using convolutional neural networks,” IEEE Multimedia, vol. 16, no. 8, pp. 2203–2213, 2014.
* [7] George Trigeorgis, Fabien Ringeval, Raymond Brueckner, Erik Marchi, Mihalis A. Nicolaou, Björn Schuller, and Stefanos Zafeiriou, “Adieu features? End-to-end speech emotion recognition using a deep convolutional recurrent network,” in ICASSP, 2016, pp. 5200–5204.
* [8] Justin Salamon and Juan Pablo Bello, “Deep convolutional neural networks and data augmentation for environmental sound classification,” IEEE Signal Processing Letters, vol. 24, no. 3, pp. 279–283, 2017.
* [9] Brian McFee, Eric J. Humphrey, and Juan Pablo Bello, “A software framework for musical data augmentation,” in ISMIR, 2015, pp. 248–254.
* [10] Jan Schlüter and Thomas Grill, “Exploring data augmentation for improved singing voice detection with neural networks,” in ISMIR, 2015, pp. 121–126.
* [11] Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan, “Object detection with discriminatively trained part-based models,” IEEE PAMI, vol. 32, no. 9, pp. 1627–1645, 2009.
* [12] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, “Going deeper with convolutions,” in CVPR, 2015, pp. 1–9.
* [13] Huiyu Wang, Aniruddha Kembhavi, Ali Farhadi, Alan L. Yuille, and Mohammad Rastegari, “Elastic: Improving CNNs with dynamic scaling policies,” in CVPR, 2019, pp. 2258–2267.
* [14] Ugo Marchand and Geoffroy Peeters, “Scale and shift invariant time/frequency representation using auditory statistics: Application to rhythm description,” in IEEE MLSP, 2016, pp. 1–6.
* [15] Zhenyao Zhu, Jesse H. Engel, and Awni Hannun, “Learning multiscale features directly from waveforms,” arXiv:1603.09509, 2016.
* [16] Angjoo Kanazawa, Abhishek Sharma, and David Jacobs, “Locally scale-invariant convolutional neural networks,” in NIPS Deep Learning and Representation Learning Workshop, December 2014.
* [17] Felix Burkhardt, Astrid Paeschke, Miriam Rolfes, Walter F. Sendlmeier, and Benjamin Weiss, “A database of German emotional speech,” in Eurospeech, 2005.
* [18] Steven R. Livingstone and Frank A. Russo, “The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English,” PLoS ONE, vol. 13, no. 5, p. e0196391, 2018.
* [19] Kate Dupuis and M. Kathleen Pichora-Fuller, Toronto Emotional Speech Set (TESS), University of Toronto, Psychology Department, 2010.
* [20] Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, and Shrikanth S. Narayanan, “IEMOCAP: Interactive emotional dyadic motion capture database,” Language Resources and Evaluation, vol. 42, no. 4, p. 335, 2008.
* [21] Yuni Zeng, Hua Mao, Dezhong Peng, and Zhang Yi, “Spectrogram based multi-task audio classification,” Multimedia Tools and Applications, vol. 78, no. 3, pp. 3705–3722, 2019.
* [22] Pankaj Shegokar and Pradip Sircar, “Continuous wavelet transform based speech emotion recognition,” in IEEE ICSPCS, 2016, pp. 1–8.
* [23] Björn Schuller, Ronald Müller, Manfred Lang, and Gerhard Rigoll, “Speaker independent emotion recognition by early fusion of acoustic and linguistic features within ensembles,” in Eurospeech, 2005.
* [24] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS, 2012, pp. 1097–1105.
2024-09-04T02:54:57.967746
2020-03-06T19:02:55
2003.03394
{ "authors": "Ikram Ullah, Umar Hayat and Miguel D. Bustamante", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26090", "submitter": "Miguel D. Bustamante", "url": "https://arxiv.org/abs/2003.03394" }
arxiv-papers
###### Abstract We propose an image encryption scheme based on quasi-resonant Rossby/drift wave triads (related to elliptic surfaces) and Mordell elliptic curves (MECs). At a first stage, we construct quasi-resonant triads using auxiliary parameters of elliptic surfaces and define a total order on them in order to generate pseudo-random numbers. At a second stage, we employ an MEC to construct a dynamic substitution box (S-box) for the plain image. The generated pseudo-random numbers and S-box are used to provide diffusion and confusion, respectively, in the tested image. We test the proposed scheme against well-known attacks by encrypting all gray images taken from the USC-SIPI image database. Our experimental results indicate the high security of the newly developed scheme. Finally, via extensive comparisons we show that the new scheme outperforms other popular schemes. ###### keywords: quasi-resonant Rossby/drift wave triads; Mordell elliptic curve; pseudo-random numbers; substitution box

Entropy 2020, 22, 454; doi.org/10.3390/e22040454; Received: 04 March 2020; Accepted: 14 April 2020

Image Encryption Using Elliptic Curves and Rossby/Drift Wave Triads

Ikram Ullah 1, Umar Hayat 1,* and Miguel D. Bustamante 2,*

Correspondence:<EMAIL_ADDRESS>(M.D.B.);<EMAIL_ADDRESS>(U.H.)

## 1 Introduction

The exchange of confidential images via the internet is usual in today's life, but the internet is an open network that is insecure, and unauthorized persons can steal useful or sensitive information. Therefore it is essential to be able to share images in a secure way; this goal is achieved by using cryptography. Traditional cryptographic techniques such as the data encryption standard (DES) and the advanced encryption standard (AES) are not suitable for image transmission because image pixels are usually highly correlated Mahmud (2020); Zhang (2014). By contrast, DES and AES are ideal techniques for text encryption El-Latif (2013), so researchers are developing dedicated techniques to meet the demand for reliable image delivery.

A number of image encryption schemes have been developed using different approaches Yang (2015); Zhong (2019); Li (2017); Hua (2018); Xie (2017); Azam (2017); Luo (2018); Li (2015); Hua (2019); Wu (2019); Yousaf (2020). Hua et al. Hua (2019) developed a highly secure image encryption algorithm, where pixels are shuffled via the principle of the Josephus problem and diffusion is obtained by a filtering technology. Wu et al. Wu (2019) proposed a novel image encryption scheme by combining a random fractional discrete cosine transform (RFrDCT) and the chaos-based Game of Life (GoL). In their scheme, the desired level of confusion and diffusion is achieved by GoL and an XOR operation, respectively. “Confusion” entails hiding the relation between the input image, the secret keys and the corresponding cipher image, while “diffusion” is an alteration of the value of each pixel in an input image Mahmud (2020). One of the dominant trends in encryption techniques is chaos-based encryption Ismaila (2020); Tang (2019); Abdelfatah (2019); Yu (2020); Zhu (2018); ElKamchouchi (2020). The reason for this dominance is that chaos-based encryption schemes are highly sensitive to their initial parameters. However, certain chaotic cryptosystems exhibit a lower security level due to the use of chaotic maps with less complex behavior (see Zhou (2013)).
This problem is addressed in Hu (2019) by introducing a cosine-transform-based chaotic system (CTBCS) for encrypting images with higher security. Xu et al. Xu (2014) suggested an image encryption technique based on fractional chaotic systems and verified experimentally the higher security of the underlying cryptosystem. Ahmad et al. Ahmad (2015) highlighted certain defects of the above-mentioned cryptosystem by recovering the plain image without the secret key; moreover, they proposed an enhanced scheme to thwart such attacks. Chaos-based algorithms also use pseudo-random numbers and substitution boxes (S-boxes) to create confusion and diffusion Cheng (2015); Belazi (2016). Cheng et al. Cheng (2015) proposed an image encryption algorithm based on pseudo-random numbers and the AES S-box, with the pseudo-random numbers generated using the AES S-box and chaotic tent maps. The scheme is optimized by combining the permutation and diffusion phases, but the image is encrypted in rounds, which is time-consuming. Belazi et al. Belazi (2016) suggested an image encryption algorithm using a new chaotic map and a logistic map. The new chaotic map is used to generate a sequence of pseudo-random numbers for the masking phase. Then eight dynamic S-boxes are generated, and the masked image is substituted in blocks via the aforementioned S-boxes. The substituted image is again masked by another pseudo-random sequence generated by the logistic map. Finally, the encrypted image is obtained by permuting the masked image, where the permutation is driven by a sequence generated by the map function. This algorithm passes the security analyses but is slow due to its four cryptographic phases. In Rehman (2016), an image encryption method based on chaotic maps and dynamic S-boxes is proposed. The chaotic maps are used to generate the pseudo-random sequences and S-boxes. To break the correlation, pixels of an input image are permuted by the pseudo-random sequences. In a second phase the permuted image is decomposed into blocks, which are then encrypted by the generated S-boxes to get the cipher image. Histogram analysis shows, however, that the suggested technique generates cipher images with a nonuniform distribution. Similar to chaotic maps, elliptic curves (ECs) are sensitive to input parameters, but EC-based cryptosystems are more secure than chaos-based ones Jia (2016). Toughi et al. Toughi (2017) developed a hybrid encryption algorithm using elliptic curve cryptography (ECC) and AES. The points of an EC are used to generate pseudo-random numbers, and the keys for encryption are acquired by applying AES to the pseudo-random numbers. The proposed algorithm achieves promising security, but its pseudo-random numbers are generated via the group law, which is time-consuming. In El-Latif (2013), a cyclic EC and a chaotic map are combined to design an encryption algorithm. The developed scheme overcomes the drawback of a small key space but is vulnerable to known-plaintext/chosen-plaintext attacks Liu (2014). Similarly, Hayat et al. Hayat (2019) proposed an EC-based encryption technique. The stated scheme generates pseudo-random numbers and dynamic S-boxes in two phases, where the construction of an S-box is not guaranteed for each input EC; switching between ECs until an S-box is obtained is therefore time-consuming. Furthermore, the need to generate an EC for each input image makes the scheme inefficient.
Based on the above discussion, we propose an improved image encryption algorithm based on quasi-resonant Rossby/drift wave triads Bustamante (2013); Hayat (2016) (triads, for short) and Mordell elliptic curves (MECs). The triads are utilized in the generation of pseudo-random numbers and MECs are employed to create dynamic S-boxes. The proposed scheme is novel in that it introduces pseudo-random number generation using triads, which is faster than generating pseudo-random numbers on ECs. Moreover, the scheme does not require generating triads separately for each input image of the same size. In the present scheme, MECs are used differently from Hayat (2019), in the sense that now, for each input image, the generation of a dynamic S-box is guaranteed Azam (2018). Finally, extensive performance analyses and comparisons reveal the efficiency of the proposed scheme.

This paper is organized as follows. Preliminaries are described in Section 2. In Section 3, the proposed encryption algorithm is explained in detail. Section 4 provides the experimental results as well as a comparison between the proposed method and other existing popular schemes. Lastly, conclusions are presented in Section 5.

## 2 Preliminaries

Barotropic vorticity equation: The barotropic vorticity equation (in the so-called $\beta$-plane approximation) is one of the simplest two-dimensional models of the large-scale dynamics of a shallow layer of fluid on the surface of a rotating sphere. It is described in mathematical terms by the partial differential equation

$\frac{\partial}{\partial t}(\nabla^{2}\psi-F\psi)+\Big{(}\frac{\partial\psi}{\partial x}\frac{\partial\nabla^{2}\psi}{\partial y}-\frac{\partial\psi}{\partial y}\frac{\partial\nabla^{2}\psi}{\partial x}\Big{)}+\gamma\frac{\partial\psi}{\partial x}=0,$ (1)

where $\psi(x,y,t)\in\mathbb{R}$ represents the geopotential height, $\gamma$ is the Coriolis parameter, a real constant measuring the variation of the Coriolis force with latitude ($x$ represents longitude and $y$ represents latitude), and $F$ is a non-negative real constant representing the inverse of the square of the deformation radius. We assume periodic boundary conditions: $\psi(x+2\pi,y,t)=\psi(x,y+2\pi,t)=\psi(x,y,t)$ for all $x,y,t\in\mathbb{R}$. In the literature Equation (1) is also known as the Charney–Hasegawa–Mima equation (CHM) Charney (1948); Hasegawa (1978); Connaughton (2010); Harris (2013); Galperin (2019). This equation admits harmonic solutions, known as Rossby waves, which solve both the linearized form and the whole (nonlinear) form of Equation (1). A Rossby wave solution is given explicitly by the parameterized function $\psi_{(k,l)}(x,y,t)=\Re\\{A\,{\mathrm{e}}^{i(kx+ly-\omega(k,l)t)}\\}$, where $A\in\mathbb{C}$ is an arbitrary constant, $\omega(k,l)=-\frac{\gamma\,k}{k^{2}+l^{2}+F}$ is the so-called dispersion relation, and $(k,l)\in\mathbb{Z}^{2}$ is called the wave vector. For simplicity, we take $\gamma=-1$ and $F=0$ in what follows Bustamante (2013); Hayat (2016).

Resonant triads: As Equation (1) is nonlinear, modes with different wave vectors tend to couple and exchange energy. If the nonlinearity is weak, this exchange is quite slow and is most efficient amongst groups of modes that are in _resonance_.
To the lowest order of nonlinearity in Equation (1), approximate solutions known as resonant triad solutions can be constructed via linear combinations of the form

$\psi(x,y,t)=\Re\\{A_{1}\,{\mathrm{e}}^{i(k_{1}x+l_{1}y-\omega(k_{1},l_{1})t)}+A_{2}\,{\mathrm{e}}^{i(k_{2}x+l_{2}y-\omega(k_{2},l_{2})t)}+A_{3}\,{\mathrm{e}}^{i(k_{3}x+l_{3}y-\omega(k_{3},l_{3})t)}\\}\,,$

where $A_{1},A_{2},A_{3}$ are slow functions of time (they satisfy a closed system of ODEs, not shown here), and the wave vectors $(k_{1},l_{1}),(k_{2},l_{2})$ and $(k_{3},l_{3})$ satisfy the Diophantine system of equations:

$k_{1}+k_{2}=k_{3},\,\,\,\,\,\,l_{1}+l_{2}=l_{3}\,\,\,\,\,\,{\rm and}\,\,\,\,\,\,\omega_{1}+\omega_{2}=\omega_{3},$ (2)

for $\omega_{i}=\omega(k_{i},l_{i}),i=1,2,3$. A set of three wave vectors satisfying Equations (2) is called a resonant triad. Solutions can be found analytically via a rational transformation to elliptic surfaces (see below).

Quasi-resonant triads and detuning level: If, in (2), the equation $\omega_{1}+\omega_{2}=\omega_{3}$ is replaced by the inequality $|\omega_{1}+\omega_{2}-\omega_{3}|\leq\delta^{-1}$, for a large positive number $\delta$, then the triad becomes a quasi-resonant triad and $\delta^{-1}$ is known as the detuning level of the quasi-resonant triad. It is possible to construct quasi-resonant triads via downscaling of resonant triads that have very large wave vectors Bustamante (2013). For simplicity, in what follows we simply call a quasi-resonant triad a triad and denote it by $\Delta$. Finally, to avoid over-counting of triads we will impose the condition $k_{3}>0$.

Rational transformation: In Bustamante (2013), wave vectors are explicitly expressed in terms of rational variables $X,Y$ and $D$ as follows:

$\frac{k_{1}}{k_{3}}=\frac{X}{Y^{2}+D^{2}},\,\,\,\,\,\,\frac{l_{1}}{k_{3}}=\Big{(}\frac{X}{Y}\Big{)}\Big{(}1-\frac{D}{Y^{2}+D^{2}}\Big{)},\,\,\,\,\,\,\frac{l_{3}}{k_{3}}=\frac{D-1}{Y}.$ (3)

In the case $F=0$, the rational variables $X,Y,D$ lie on an elliptic surface. The transformation is bijective and its inverse mapping is given by:

$X=\frac{k_{3}(k_{1}^{2}+l_{1}^{2})}{k_{1}(k_{3}^{2}+l_{3}^{2})},\,\,\,\,\,\,Y=\frac{k_{3}(k_{3}l_{1}-k_{1}l_{3})}{k_{1}(k_{3}^{2}+l_{3}^{2})},\,\,\,\,\,\,D=\frac{k_{3}(k_{3}k_{1}-l_{1}l_{3})}{k_{1}(k_{3}^{2}+l_{3}^{2})}.$ (4)

New parameterization: Kopp (2017) parameterized the resonant triads; in terms of parameters $u$ and $t$, Equation (1.22) of Kopp (2017) gives:

$\displaystyle\frac{k_{1}}{k_{3}}=$ $\displaystyle(t^{2}+u^{2})(t^{2}-2u+u^{2})/(1-2u),$ (5) $\displaystyle\frac{l_{3}}{k_{3}}=$ $\displaystyle\big{(}u(2u-1)+(t^{2}+u^{2})(t^{2}-2u+u^{2})\big{)}/\big{(}t(1-2u)\big{)},$ (6) $\displaystyle\frac{l_{1}}{k_{3}}=$ $\displaystyle(t^{2}+u^{2})\big{(}(2u-1)+u(t^{2}-2u+u^{2})\big{)}/\big{(}t(1-2u)\big{)}.$ (7)
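As a quick numerical sanity check (ours, not from Kopp (2017)): with $\gamma=-1$ and $F=0$ the dispersion relation reduces to $\omega(k,l)=k/(k^{2}+l^{2})$, and wave vectors built from Equations (5)–(7) satisfy the resonance conditions (2) exactly. The values $u=2$, $t=1$, $k_{3}=3$ below are an arbitrary illustrative choice.

```python
from fractions import Fraction

def omega(k, l):
    # Dispersion relation for gamma = -1, F = 0; exact with rational arithmetic.
    return k / (k * k + l * l)

u, t, k3 = Fraction(2), Fraction(1), Fraction(3)
k1 = k3 * (t**2 + u**2) * (t**2 - 2*u + u**2) / (1 - 2*u)                      # Eq. (5)
l3 = k3 * (u*(2*u - 1) + (t**2 + u**2)*(t**2 - 2*u + u**2)) / (t*(1 - 2*u))    # Eq. (6)
l1 = k3 * (t**2 + u**2) * ((2*u - 1) + u*(t**2 - 2*u + u**2)) / (t*(1 - 2*u))  # Eq. (7)
k2, l2 = k3 - k1, l3 - l1                                                      # Eq. (2)

assert omega(k1, l1) + omega(k2, l2) == omega(k3, l3)  # exact resonance
print((int(k1), int(l1)), (int(k2), int(l2)), (int(k3), int(l3)))
# (-5, -25), (8, 14), (3, -11)
```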
In $2019$, Hayat et al. Hayat (2016) found a new parameterization of $X,Y$ and $D$ in terms of auxiliary parameters $a,b$, from which $\frac{k_{1}}{k_{3}},\frac{l_{3}}{k_{3}}$ and $\frac{l_{1}}{k_{3}}$ are given by:

$\displaystyle\frac{k_{1}}{k_{3}}=$ $\displaystyle\frac{\big{(}a^{2}+b(2-3b)+1\big{)}^{3}}{(a^{2}-3b^{2}-2b+1)\big{(}2(11-3a^{2})b^{2}+(a^{2}+1)^{2}-16ab+9b^{4}\big{)}},$ (8) $\displaystyle\frac{l_{3}}{k_{3}}=$ $\displaystyle\frac{6(a^{2}+a-1)b^{2}-(a+1)^{2}(a^{2}+1)+4ab-9b^{4}}{(a^{2}-3b^{2}-1)(a^{2}-3b^{2}-2b+1)},$ (9) $\displaystyle\frac{l_{1}}{k_{3}}=$ $\displaystyle\frac{\big{(}a^{2}+b(2-3b)+1\big{)}}{\footnotesize\begin{matrix}(a^{2}-3b^{2}-1)(a^{2}-3b^{2}-2b+1)\big{(}2(11-2a^{2})b^{2}+(a^{2}+1)^{2}-16ab+9b^{4}\big{)}&\\\ \times[a^{6}+2a^{5}+a^{4}(-9b^{2}-6b+3)-4a^{3}(3b^{2}+2b-1)+3a^{2}(3b^{2}+2b-1)^{2}&\\\ +2a(9b^{4}+12b^{3}+14b^{2}-4b+1)-(3b^{2}+1)^{2}(3b^{2}+6b-1)]\end{matrix}}.$ (10)

Elliptic curve (EC): Let $\mathbb{F}_{p}$ be a finite field for a prime $p$. An EC $E_{p}$ over $\mathbb{F}_{p}$ is defined by

$y^{2}\equiv x^{3}+bx+c\pmod{p},$ (11)

where $b,c\in\mathbb{F}_{p}$. The integers $b,c$ and $p$ are called the parameters of the EC. The number of all $(x,y)\in\mathbb{F}_{p}^{2}$ satisfying the congruence (11) is denoted by $\\#E_{p}$.

Mordell elliptic curve (MEC): In the special but important case $b=0$, the above EC is known as an MEC and is represented by

$y^{2}\equiv x^{3}+c\pmod{p}.$ (12)

For $p\equiv 2\pmod{3}$, there are exactly $p+1$ points on such a curve (including the point at infinity); see Wash for further details. If the points on $E_{p}$ are ordered according to some total order $\prec$, then $E_{p}$ is said to be an ordered EC. Recall that a total order is a binary relation that is reflexive, antisymmetric and transitive, and under which any two elements are comparable. Azam et al. Azam1 (2019) introduced a total order known as the natural ordering on MECs, given by

$(x_{1},y_{1})\prec(x_{2},y_{2})\Leftrightarrow\begin{cases}{\rm either\ }x_{1}<x_{2},{\rm\ or}\\\\[5.0pt] x_{1}=x_{2}{\rm\ and\ }y_{1}<y_{2},\end{cases}$

and generated efficient S-boxes using the aforesaid ordering. We will use the natural ordering to generate S-boxes. Thus from here on $E_{p}$ stands for a naturally ordered MEC unless specified otherwise.

## 3 The Proposed Encryption Scheme

The proposed encryption scheme is based on pseudo-random numbers and S-boxes. The pseudo-random numbers are generated using quasi-resonant triads. To get an appropriate level of diffusion we need to order the $\Delta$s properly. For this purpose we define a binary relation $\lesssim$ as follows.

### 3.1 Ordering on Quasi-Resonant Triads

Let $\Delta,\Delta^{\prime}$ represent the triads $(k_{i},l_{i}),(k_{i}^{\prime},l_{i}^{\prime}),i=1,2,3$, respectively; then

$\Delta\lesssim\Delta^{\prime}\Leftrightarrow\begin{cases}{\rm either\ }a<a^{\prime},{\rm\ or}\\\\[5.0pt] a=a^{\prime}{\rm\ and\ }b<b^{\prime},{\rm\ or}\\\\[5.0pt] a=a^{\prime},b=b^{\prime}{\rm\ and\ }k_{3}\leq k_{3}^{\prime},\end{cases}$

where $a,b$ and $a^{\prime},b^{\prime}$ are the corresponding auxiliary parameters of $\Delta$ and $\Delta^{\prime}$, respectively.

###### Lemma . If $T$ denotes the set of $\Delta$s in a box of size $L$, then $\lesssim$ is a total order on $T$.

###### Proof. The reflexivity of $\lesssim$ follows from $a=a,b=b$ and $k_{3}=k_{3}$, whence $\Delta\lesssim\Delta.$ As for antisymmetry, suppose $\Delta\lesssim\Delta^{\prime}$ and $\Delta^{\prime}\lesssim\Delta$. Then, by definition, $a\leq a^{\prime}$ and $a^{\prime}\leq a$, which imply $a=a^{\prime}$.
Thus we are left with $b\leq b^{\prime}$ and $b^{\prime}\leq b$, which imply $b=b^{\prime}$, and in turn with $k_{3}\leq k_{3}^{\prime}$ and $k_{3}^{\prime}\leq k_{3}$, which give $k_{3}=k_{3}^{\prime}$. Substituting these values into Equations (8)–(10), we get $k_{1}=k_{1}^{\prime}$, $l_{1}=l_{1}^{\prime}$ and $l_{3}=l_{3}^{\prime}$, and from Equation (2) it follows that $k_{2}=k_{2}^{\prime}$ and $l_{2}=l_{2}^{\prime}$. Consequently $\Delta=\Delta^{\prime}$ and $\lesssim$ is antisymmetric. As for transitivity, assume $\Delta\lesssim\Delta^{\prime}$ and $\Delta^{\prime}\lesssim\Delta^{\prime\prime}$. Then $a\leq a^{\prime}$ and $a^{\prime}\leq a^{\prime\prime}$, implying $a\leq a^{\prime\prime}$. If $a<a^{\prime\prime}$, then transitivity follows. If $a=a^{\prime\prime}$, then $a^{\prime}=a^{\prime\prime}$ too. Thus $b\leq b^{\prime}$ and $b^{\prime}\leq b^{\prime\prime}$, so $b\leq b^{\prime\prime}$. If $b<b^{\prime\prime}$, then transitivity follows. If $b=b^{\prime\prime}$, then $b^{\prime}=b^{\prime\prime}$ too. Thus $k_{3}\leq k_{3}^{\prime}$ and $k_{3}^{\prime}\leq k_{3}^{\prime\prime}$, implying $k_{3}\leq k_{3}^{\prime\prime}$, and hence $\Delta\lesssim\Delta^{\prime\prime}$. ∎

Let $\stackrel{{\scriptstyle*}}{{T}}$ stand for the set of $\Delta$s ordered with respect to the order $\lesssim$. The main steps of the proposed scheme are as follows.

### 3.2 Encryption

A. Public parameters: In order to exchange the useful information, the sender and receiver should agree on the following public parameters:

1. (1) Three sets: choose three sets $\mathcal{A}_{i}=[A_{i},B_{i}],i=1,2,3$ of consecutive numbers with unknown step sizes, where the end points $A_{i},B_{i},i=1,2,3$ are rational numbers.
2. (2) A total order: select a total order $\prec$ so that the triads generated by the above-mentioned sets may be arranged with respect to that order.

Suppose that $P$ represents an image of size $m\times n$ to be encrypted, and the pixels of $P$ are arranged in column-wise linear ordering. Thus, for a positive integer $i\leq mn$, $P(i)$ represents the $i$-th pixel value in linear ordering. Define $S_{\rm P}$ as the sum of all pixel values of the image $P$. The proposed scheme then chooses the secret keys as follows.

B. Secret keys: To generate confusion and diffusion in an image, the sender chooses the secret keys as follows.

1. (1) Step size: select positive integers $a_{i},b_{i}$ to construct the step sizes $\alpha_{i}=\frac{a_{i}}{b_{i}}$ of $\mathcal{A}_{i},i=1,2$. Additionally, choose a non-negative integer $a_{3}$ as a step size of $\mathcal{A}_{3}$ in such a way that $\prod_{i=1}^{3}n_{i}\geq mn$, where $\\#\mathcal{A}_{i}=n_{i}$ represents the number of elements in $\mathcal{A}_{i}$.
2. (2) Detuning level: fix some positive integer $\delta$ to find the detuning level $\delta^{-1}$ allowed for the triads.
3. (3) Bound: select a positive integer $L$ such that $|k_{i}|,|l_{i}|\leq L$ for $i=1,2,3.$ This condition is imposed in order to bound the components of the triad wave vectors. Furthermore, choose an integer $t$ to find $r=\lfloor S_{\rm P}/t\rceil$, where $\lfloor\cdot\rceil$ gives the nearest integer when $S_{\rm P}$ is divided by $t$. Such a $t$ makes the S-boxes key-dependent, and the integer $r$ is used to diffuse the components of the triads.
4.
(4) A prime: select a prime $p\geq 257$ such that $p\equiv 2\pmod{3}$ as a secret key for computing a nonzero $c\equiv S_{\rm P}+t\pmod{p}$, used to generate an S-box $\zeta_{E_{p}}(p,t,S_{\rm P})$ on $E_{p}$. The S-box construction technique is made precise in Algorithm 1, and the S-box generated for $p=1607,t=182$ and $S=0$ by Algorithm 1 is shown in Table 1. The cryptographic properties of this S-box are evaluated in Sections 4.1 and 4.2.

/* $B$ is a set of points $(x,y)$ satisfying $E_{p}$, $B(i)$ is the $i$-th point of $B$ and $y_{i}$ stands for the $y$-component of point $B(i)$. */
Input: A prime $p\equiv 2\pmod{3}$ and two integers $t$ and $S$ such that $c=S+t$ and $S+t\not\equiv 0\pmod{p}$.
Output: An S-box $\zeta_{E_{p}}(p,t,S)$.
$B:=\emptyset$; $Y:=[0,(p-1)/2]$; $i\leftarrow 0$;
for $x\in[0,p-1]$ do
  for $y\in Y$ do
    if $y^{2}\equiv x^{3}+c\pmod{p}$ then
      $i\leftarrow i+1$; $B(i):=(x,y)$;
      if $y\not=0$ then
        $i\leftarrow i+1$; $B(i):=(x,p-y)$;
      break;
  $Y=Y-\\{y\\}$;
$\zeta_{E_{p}}(p,t,S)=\\{y_{i}\in B(i):0\leq y_{i}<256\\}.$
Algorithm 1: Construction of an $8\times 8$ S-box.

Table 1: The obtained S-box $\zeta_{E_{1607}}(1607,182,0)$.

220 | 118 | 17 | 158 | 25 | 138 | 33 | 196 | 247 | 252 | 15 | 226 | 135 | 177 | 232 | 83
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
161 | 70 | 107 | 186 | 137 | 236 | 21 | 142 | 131 | 103 | 54 | 58 | 217 | 181 | 201 | 172
91 | 84 | 223 | 89 | 29 | 156 | 136 | 14 | 69 | 99 | 164 | 171 | 35 | 188 | 76 | 139
153 | 16 | 198 | 227 | 32 | 10 | 115 | 122 | 184 | 61 | 208 | 225 | 213 | 106 | 94 | 56
165 | 40 | 245 | 189 | 163 | 239 | 193 | 194 | 129 | 175 | 241 | 141 | 130 | 231 | 215 | 127
151 | 199 | 105 | 22 | 148 | 39 | 179 | 173 | 78 | 248 | 81 | 23 | 75 | 55 | 146 | 109
195 | 251 | 178 | 170 | 162 | 206 | 228 | 169 | 147 | 28 | 210 | 221 | 80 | 121 | 202 | 77
9 | 74 | 197 | 31 | 26 | 154 | 145 | 44 | 47 | 82 | 43 | 60 | 117 | 250 | 88 | 191
67 | 8 | 174 | 93 | 1 | 20 | 128 | 53 | 218 | 237 | 96 | 72 | 3 | 65 | 6 | 253
150 | 101 | 119 | 87 | 160 | 133 | 108 | 57 | 41 | 64 | 51 | 49 | 185 | 243 | 2 | 249
167 | 50 | 205 | 183 | 97 | 114 | 48 | 27 | 246 | 254 | 124 | 92 | 19 | 134 | 159 | 95
24 | 224 | 111 | 62 | 116 | 168 | 200 | 86 | 79 | 143 | 126 | 112 | 45 | 71 | 125 | 13
5 | 216 | 187 | 222 | 7 | 113 | 238 | 36 | 204 | 52 | 140 | 46 | 240 | 85 | 207 | 4
152 | 104 | 235 | 190 | 242 | 68 | 63 | 203 | 230 | 176 | 180 | 59 | 157 | 244 | 66 | 212
34 | 90 | 120 | 0 | 30 | 166 | 37 | 255 | 38 | 110 | 211 | 233 | 11 | 155 | 209 | 219
192 | 12 | 144 | 73 | 182 | 132 | 98 | 214 | 42 | 102 | 18 | 149 | 123 | 229 | 100 | 234

The positive integers $a_{1},b_{1},a_{2},b_{2},a_{3},\delta,L,S_{\rm P},t$ and $p$ are secret keys. Note that the parameters $a_{1},b_{1},a_{2},b_{2},a_{3},\delta$ and $L$ are used to generate $mn$ triads in a box of size $L$; the generation of triads is explained step by step in Algorithm 2 below. These triads, along with the keys $S_{\rm P}$ and $t$, are used to generate the sequence $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$ of pseudo-random numbers.
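Before turning to the triad generation, the following minimal Python sketch (ours; the variable names and the omission of Algorithm 1's removal of matched $y$ values from $Y$, which is only a speed-up, are our choices) mirrors Algorithm 1. For $p\equiv 2\pmod{3}$, cubing is a bijection on $\mathbb{F}_{p}$, so each $y\in[0,p-1]$ occurs exactly once as a $y$-coordinate; filtering the values below 256 therefore yields a bijective $8\times 8$ S-box whenever $p\geq 257$.

```python
def mec_sbox(p, t, S):
    """S-box from the naturally ordered MEC y^2 = x^3 + c (mod p), c = S + t."""
    c = (S + t) % p
    assert p % 3 == 2 and c != 0
    ys = []
    for x in range(p):                      # natural ordering: increasing x, then y
        rhs = (x * x * x + c) % p
        for y in range((p + 1) // 2):       # candidate roots 0 .. (p-1)/2
            if (y * y) % p == rhs:
                ys.append(y)
                if y != 0:
                    ys.append(p - y)        # companion root, listed second
                break                       # at most one root in [0, (p-1)/2]
    return [y for y in ys if y < 256]       # keep the 256 smallest gray levels

sbox = mec_sbox(1607, 182, 0)               # the parameters of Table 1
assert sorted(sbox) == list(range(256))     # bijective, hence a valid 8x8 S-box
```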
/* $T$ is a set containing the quasi-resonant triads, while $m$ and $n$ are the dimensions of an input image. */
Input: Three sets $\mathcal{A}_{i},i=1,2,3$, inverse detuning level $\delta$, bound $L$, two positive integers $m$ and $n$.
Output: Quasi-resonant triads.
$T:=\emptyset$; $c_{1}\leftarrow 0,c_{2}\leftarrow 1$;
for $a\in\mathcal{A}_{1}$ do
  for $b\in\mathcal{A}_{2}$ do
    $c_{1}\leftarrow c_{1}+1$;
    calculate and store the values of $k_{1}^{\prime}(c_{1}),l_{3}^{\prime}(c_{1})$ and $l_{1}^{\prime}(c_{1})$ for the pair $(a,b)$ using Equations (8)–(10);
for $c_{2}\in[1,c_{1}]$ do
  for $k_{3}\in\mathcal{A}_{3}$ do
    $k_{1}=\lfloor(k_{1}^{\prime}(c_{2})*k_{3})\rceil,l_{3}=\lfloor(l_{3}^{\prime}(c_{2})*k_{3})\rceil$ and $l_{1}=\lfloor(l_{1}^{\prime}(c_{2})*k_{3})\rceil$;
    $k_{2}=k_{3}-k_{1},l_{2}=l_{3}-l_{1}$ and $\omega_{i}=k_{i}/(k_{i}^{2}+l_{i}^{2}),i=1,2,3;$
    $\omega_{4}=\omega_{3}-\omega_{2}-\omega_{1};$
    if $|\omega_{4}|<\delta^{-1}$ and $0<|k_{i}|,|l_{i}|<L,i=1,2,3$ then
      $T:=T\cup\\{\Delta\\}$;
    if $\\#T=mn$ then
      break out of both loops;
Sort $T$ with respect to the ordering $\lesssim$ to get $\stackrel{{\scriptstyle*}}{{T}}$.
Algorithm 2: Generating quasi-resonant triads.

Thus $\Delta_{\rm j}$ represents the $j$-th triad in the ordered set $\stackrel{{\scriptstyle*}}{{T}}$, and $(k_{ji},l_{ji}),i=1,2,3$ are the components of $\Delta_{\rm j}$. Algorithm 3 describes the generation of $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$.

Input: An ordered set $\stackrel{{\scriptstyle*}}{{T}}$, an integer $t$ and a plain image $P$.
Output: Random number sequence $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$.
$Tr(j):=|{rk_{j1}}|+|{l_{j1}}|+|{k_{j2}}|$;
$\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})(j)=(Tr(j)+S_{\rm P})\pmod{256}$;
Algorithm 3: Generating the proposed pseudo-random sequence.

The proposed sequence $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$ is cryptographically a good source of pseudo-randomness because triads are highly sensitive to the auxiliary parameters $(a,b)$ Hayat (2016) and to the inverse detuning level $\delta$. It is shown in Bustamante (2013) that the intricate structure of the clusters formed by triads depends on the chosen $\delta$, and the size of the clusters increases as the inverse detuning level increases. Moreover, the generation of triads is fast due to the absence of modular operations.

C. Performing diffusion. To change the statistical properties of an input image, a diffusion process is performed, in which the pixel values are changed using the sequence $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$. Let $M_{\rm P}$ denote the diffused image for a plain image $P$. The proposed scheme alters the pixels of $P$ according to:

$M_{\rm P}(i)=\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})(i)+P(i)\pmod{256}.$ (13)

D. Performing confusion. A nonlinear function causes confusion in a cryptosystem, and nonlinear components are necessary for a secure data encryption scheme. The current scheme uses dynamic S-boxes to produce the confusion in an encrypted image. If $C_{\rm P}$ stands for the encrypted image of $P$, then confusion is performed as follows:

$C_{\rm P}(i)=\zeta_{E_{p}}(p,t,S_{\rm P})(M_{\rm P}(i)).$ (14)
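As an illustration, the following Python sketch (ours) strings together Algorithm 3 and Equations (13)–(14), along with the inverse steps used later for decryption (Equations (15)–(16) below). Each triad is assumed to be given as a tuple $(k_{1},l_{1},k_{2})$ of the components actually used, and sbox as the list produced by Algorithm 1.

```python
def beta_sequence(triads, r, S_P):
    # Algorithm 3: Tr(j) = |r*k_j1| + |l_j1| + |k_j2|; beta(j) = (Tr(j) + S_P) mod 256
    return [(abs(r * k1) + abs(l1) + abs(k2) + S_P) % 256 for (k1, l1, k2) in triads]

def encrypt_pixels(P, beta, sbox):
    M = [(b + p) % 256 for b, p in zip(beta, P)]     # diffusion, Eq. (13)
    return [sbox[m] for m in M]                      # confusion, Eq. (14)

def decrypt_pixels(C, beta, sbox):
    inv = [0] * 256
    for i, s in enumerate(sbox):
        inv[s] = i                                   # inverse of the bijective S-box
    M = [inv[c] for c in C]                          # undo confusion, cf. Eq. (15)
    return [(m - b) % 256 for m, b in zip(M, beta)]  # undo diffusion, cf. Eq. (16)
```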
###### Lemma . If $\\#\mathcal{A}_{i}=n_{i},i=1,2,3$ and $p$ is the prime chosen for the generation of an S-box, then the time complexity of the proposed encryption scheme is $\max\\{\mathcal{O}(n_{1}n_{2}n_{3}),\mathcal{O}(p^{2})\\}$.

###### Proof. The computation of all possible values of $k_{1}^{\prime},l_{3}^{\prime}$ and $l_{1}^{\prime}$ in Algorithm 2 takes $\mathcal{O}(n_{1}n_{2})$ time. Similarly, the time complexity of generating $\stackrel{{\scriptstyle*}}{{T}}$ is $\mathcal{O}(c_{1}n_{3})$, where $c_{1}$ runs up to $n_{1}n_{2}$. Thus the time required for $\stackrel{{\scriptstyle*}}{{T}}$, and hence for $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$, is $\mathcal{O}(n_{1}n_{2}n_{3})$. Additionally, Algorithm 1 shows that the proposed S-box can be constructed in $\mathcal{O}(p^{2})$ time. Thus the time complexity of the proposed scheme is $\max\\{\mathcal{O}(n_{1}n_{2}n_{3}),\mathcal{O}(p^{2})\\}$. ∎

Example 1. _In order to have a clear picture of the proposed cryptosystem, we explain the whole procedure using the following hypothetical $4\times 4$ image. Let $I$ represent the plain image of Lena256×256, and let $P$ be the subimage of $I$ consisting of the intersection of the first four rows and the first four columns of $I$, as shown in Table 2; the column-wise linearly ordered image $P$ is shown in Table 3._

Table 2: Plain image $P$.

162 | 162 | 162 | 163
---|---|---|---
162 | 162 | 162 | 163
162 | 162 | 162 | 163
160 | 163 | 160 | 159

Table 3: Linear ordering of image $P$.

$P(1)$ | $P(5)$ | $P(9)$ | $P(13)$
---|---|---|---
$P(2)$ | $P(6)$ | $P(10)$ | $P(14)$
$P(3)$ | $P(7)$ | $P(11)$ | $P(15)$
$P(4)$ | $P(8)$ | $P(12)$ | $P(16)$

We have $S_{\rm P}=2589$ and $c=247$; the values of the other parameters are described in Section 4.3. The corresponding $16$ triads obtained by Algorithm 2 are shown in Table 4.

Table 4: The corresponding set $\stackrel{{\scriptstyle*}}{{T}}$ for image $P$.

$\Delta_{j}$ | $k_{1}$ | $l_{1}$ | $k_{2}$ | $l_{2}$ | $k_{3}$ | $l_{3}$ | $\Delta_{j}$ | $k_{1}$ | $l_{1}$ | $k_{2}$ | $l_{2}$ | $k_{3}$ | $l_{3}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\Delta_{1}$ | $-$1128 | 1152 | 1529 | 668 | 401 | 1820 | $\Delta_{9}$ | $-$1240 | 1267 | 1681 | 735 | 441 | 2002
$\Delta_{2}$ | $-$1142 | 1167 | 1548 | 676 | 406 | 1843 | $\Delta_{10}$ | $-$1254 | 1282 | 1700 | 743 | 446 | 2025
$\Delta_{3}$ | $-$1156 | 1181 | 1567 | 685 | 411 | 1866 | $\Delta_{11}$ | $-$1268 | 1296 | 1719 | 751 | 451 | 2047
$\Delta_{4}$ | $-$1170 | 1195 | 1586 | 694 | 416 | 1889 | $\Delta_{12}$ | $-$1282 | 1310 | 1738 | 760 | 456 | 2070
$\Delta_{5}$ | $-$1184 | 1210 | 1605 | 701 | 421 | 1911 | $\Delta_{13}$ | $-$1296 | 1325 | 1757 | 768 | 461 | 2093
$\Delta_{6}$ | $-$1198 | 1224 | 1624 | 710 | 426 | 1934 | $\Delta_{14}$ | $-$1310 | 1339 | 1776 | 776 | 466 | 2115
$\Delta_{7}$ | $-$1212 | 1238 | 1643 | 719 | 431 | 1957 | $\Delta_{15}$ | $-$1325 | 1353 | 1796 | 785 | 471 | 2138
$\Delta_{8}$ | $-$1226 | 1253 | 1662 | 726 | 436 | 1979 | $\Delta_{16}$ | $-$1339 | 1368 | 1815 | 793 | 476 | 2161

From $S_{\rm P}=2589$ and $t=2$, it follows that $r=1295$, and hence by application of Algorithm 3 the terms of $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)$ are as listed in Table 5.
Moreover, the S-box $\zeta_{E_{293}}(293,2,2589)$ is constructed by Algorithm 1, giving the mapping $\zeta_{E_{293}}(293,2,2589):\\{0,1,\ldots,255\\}\rightarrow\\{0,1,\ldots,255\\}$, which maps the list $(0,\ldots,255)$ to the list $(80,213,29,113,180,2,119,174,10,103,190,120,173,99,194,126,167,42,251,78,215,84,209,93,200,130,163,32,17,117,176,62,231,110,183,56,237,75,218,127,166,73,220,13,91,202,28,129,164,118,175,69,224,50,243,100,193,137,156,89,204,12,63,230,74,219,4,131,162,134,159,123,170,90,203,70,223,87,206,59,234,145,148,58,235,57,236,65,228,15,112,181,52,241,76,217,60,233,121,172,68,225,51,242,135,158,41,252,21,142,151,26,25,40,253,96,197,136,157,9,116,177,122,171,45,248,115,178,102,191,67,226,95,198,143,150,133,160,98,195,3,94,199,30,104,189,132,161,8,64,229,144,149,140,153,14,85,208,20,6,109,184,125,168,92,201,19,53,240,31,66,227,35,82,211,108,185,139,154,33,16,86,207,128,165,5,71,222,38,255,23,0,81,212,1,141,152,111,182,138,155,49,244,22,106,187,105,188,36,54,239,46,247,43,250,97,196,27,11,24,44,249,83,210,61,232,39,254,7,72,221,77,216,47,246,107,186,48,245,55,238,124,169,34,79,214,88,205,114,179,37,18,146,147,101,192)$.

Table 5: Pseudo-random sequence for plain image $P.$

$\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(1)=188$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(5)=126$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(9)=65$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(13)=3$
---|---|---|---
$\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(2)=108$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(6)=47$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(10)=241$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(14)=180$
$\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(3)=29$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(7)=224$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(11)=162$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(15)=115$
$\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(4)=206$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(8)=144$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(12)=83$ | $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,2589)(16)=35$

Hence, by the respective application of Equation (13) and the S-box $\zeta_{E_{293}}(293,2,2589)$, the pixel values of the diffused image $M_{\rm P}$ and the encrypted image $C_{\rm P}$ are as shown in Tables 6 and 7, respectively.

Table 6: Diffused image $M_{\rm P}.$

94 | 32 | 227 | 166
---|---|---|---
14 | 209 | 147 | 87
191 | 130 | 68 | 22
110 | 51 | 243 | 194

Table 7: Encrypted image $C_{\rm P}.$

76 | 231 | 254 | 19
---|---|---|---
194 | 54 | 161 | 65
0 | 67 | 162 | 209
151 | 69 | 34 | 1

### 3.3 Decryption

In our scheme the decryption process takes place by reversing the operations of the encryption process; one needs the inverse S-box $\zeta^{-1}_{E_{p}}(p,t,S_{\rm P})$ and the pseudo-random numbers $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$. Assume that the secret keys $a_{1},b_{1},a_{2},b_{2},a_{3},\delta$, $L,S_{\rm P},t$ and $p$ are transmitted over a secure channel, so that the set $\stackrel{{\scriptstyle*}}{{T}}$ is obtained using the keys $a_{1},b_{1},a_{2},b_{2},a_{3},\delta$ and $L$, and hence the S-box $\zeta^{-1}_{E_{p}}(p,t,S_{\rm P})$ and the pseudo-random numbers $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})$ can be computed from $S_{\rm P},t$ and $p$.
Finally, the receiver gets the original image $P$ by applying the following equations:

$\displaystyle M_{\rm P}(i)=\zeta_{E_{p}}^{-1}(p,t,S_{\rm P})(C_{\rm P}(i)),$ (15) $\displaystyle\begin{split}P(i)=M_{\rm P}(i)-\beta_{\stackrel{{\scriptstyle*}}{{T}}}(t,S_{\rm P})(i)\pmod{256}.\end{split}$ (16)

## 4 Security Analysis

In this section the cryptographic strength of both the S-box construction technique and the encryption scheme is analyzed in detail.

### 4.1 Evaluation of the Designed S-Box

An S-box with good cryptographic properties ensures the quality of an encryption technique. Generally, some standard tests such as nonlinearity (NL), linear approximation probability (LAP), strict avalanche criterion (SAC), bit independence criterion (BIC) and differential approximation probability (DAP) are used to evaluate the cryptographic strength of an S-box. The NL Adams (1990) and the LAP Matsui (1994) are key features of an S-box, used to measure the resistance against linear attacks: the NL measures the level of nonlinearity and the LAP finds the maximum imbalance value of an S-box. The optimal value of the nonlinearity is $112$, and a low value of LAP corresponds to a high resistance. The minimum NL and the LAP values for the displayed S-box are $106$ and $0.1484$, respectively, indicating that the proposed S-box is resistant to linear attacks. Webster and Tavares Webster (1985) developed the concepts of the SAC and the BIC, which measure the confusion and diffusion creation potential of an S-box. In other words, the SAC measures the change in the output bits when an input bit is altered, while the BIC explores the correlation among the output bits when a change in a single input bit occurs. The average values of the SAC and the BIC for the constructed S-box are $0.4951$ and $0.4988$, respectively, which are close to the optimal value $0.5$; thus, both tests are satisfied by the suggested S-box. The DAP Biham (1991) is another important feature, used to analyze the capability of an S-box to withstand differential attacks; the lower the DAP of an S-box, the higher its security against differential attacks. Our DAP result is $0.0234$, which is good enough to resist differential cryptanalysis.

### 4.2 Performance Comparison of the S-Box Generation Algorithm

After these analyses, the S-box constructed by the current algorithm is compared with some cryptographically strong S-boxes developed by recent schemes, as shown in Table 8.

Table 8: Comparison table of the proposed S-box $\zeta_{E_{1607}}(1607,182,0)$.

S-Boxes | NL | LAP | SAC (min) | SAC (avg) | SAC (max) | BIC (min) | BIC (avg) | BIC (max) | DAP
---|---|---|---|---|---|---|---|---|---
Ours | 106 | 0.1484375 | 0.390625 | 0.49511719 | 0.609375 | 0.47265625 | 0.49888393 | 0.52539063 | 0.0234375
Ref. Hayat (2019) | 104 | 0.1484375 | 0.421900 | - | 0.6094 | 0.4629 | - | 0.5430 | 0.0469
Ref. Ye (2018) | 104 | 0.1328125 | 0.40625 | 0.49755859 | 0.625 | 0.46679688 | 0.50223214 | 0.5234375 | 0.0234375
Ref. Ozkaynak (2017) | 101 | 0.140625 | 0.421875 | 0.49633789 | 0.578125 | 0.46679688 | 0.49379185 | 0.51953125 | 0.03125
Ref. Çavuşoğlu (2017) | 104 | 0.140625 | 0.421875 | 0.50390625 | 0.59375 | 0.4765625 | 0.50585938 | 0.5390625 | 0.0234375
Ref. Belazi (2017) | 100 | 0.140625 | 0.40625 | 0.50097656 | 0.609375 | 0.44726563 | 0.50634766 | 0.53320313 | 0.03125
Ref. Özkaynak (2019) | 106 | 0.140625 | 0.390625 | 0.49414063 | 0.609375 | 0.47070313 | 0.50132533 | 0.53320313 | 0.0234375
Ref. Liu (2018) | 102 | 0.140625 | 0.421875 | 0.49804688 | 0.640625 | 0.4765625 | 0.50746373 | 0.53320313 | 0.0234375
Ref. Hayat (2018) | 104 | 0.0391 | 0.3906 | - | 0.6250 | 0.4707 | - | 0.53125 | 0.0391
Ref. Wang (2019) | 104 | 0.0547000 | 0.4018 | 0.4946 | 0.5781 | 0.4667969 | 0.4988839 | 0.5332031 | 0.0391
Ref. Alzaidi (2018) | 108 | 0.1328 | 0.40625 | 0.4985352 | 0.59375 | 0.46484375 | 0.5020229 | 0.52734375 | 0.0234375

From Table 8 it follows that the NL of $\zeta_{E_{1607}}(1607,182,0)$ is greater than that of the S-boxes in Hayat (2019); Ye (2018); Ozkaynak (2017); Çavuşoğlu (2017); Belazi (2017); Liu (2018); Hayat (2018); Wang (2019), equal to that of Özkaynak (2019), and less than that of the S-box developed in Alzaidi (2018); this indicates that $\zeta_{E_{1607}}(1607,182,0)$ is highly nonlinear in comparison to the S-boxes in Hayat (2019); Ye (2018); Ozkaynak (2017); Çavuşoğlu (2017); Belazi (2017); Liu (2018); Hayat (2018); Wang (2019). Additionally, the LAP of $\zeta_{E_{1607}}(1607,182,0)$ is comparable to that of all the S-boxes in Table 8. The SAC (avg) value of $\zeta_{E_{1607}}(1607,182,0)$ is greater than that of the S-boxes in Özkaynak (2019); Wang (2019), and the SAC (max) value is less than or equal to that of the S-boxes in Hayat (2019); Ye (2018); Belazi (2017); Özkaynak (2019); Liu (2018); Hayat (2018). Similarly, the BIC (min) value of $\zeta_{E_{1607}}(1607,182,0)$ is closer to the optimal value $0.5$ than that of Hayat (2019); Ye (2018); Ozkaynak (2017); Belazi (2017); Özkaynak (2019); Hayat (2018); Wang (2019); Alzaidi (2018), and the BIC (max) value of the new S-box is better than that of the S-boxes in Hayat (2019); Çavuşoğlu (2017); Belazi (2017); Özkaynak (2019); Liu (2018); Hayat (2018); Wang (2019); Alzaidi (2018). Thus the confusion/diffusion creation capability of $\zeta_{E_{1607}}(1607,182,0)$ is better than that of Hayat (2019); Belazi (2017); Özkaynak (2019); Liu (2018); Hayat (2018); Alzaidi (2018). The DAP value of our suggested S-box $\zeta_{E_{1607}}(1607,182,0)$ is lower than the DAP of the S-boxes presented in Hayat (2019); Ozkaynak (2017); Belazi (2017); Hayat (2018); Wang (2019) and equal to that of Ye (2018); Çavuşoğlu (2017); Özkaynak (2019); Liu (2018); Alzaidi (2018). From the above discussion it follows that the newly designed S-box shows high resistance to linear as well as differential attacks.

### 4.3 Evaluation of the Proposed Encryption Technique

In this section the current scheme is applied to all gray images of the USC-SIPI Image Database Dbase, which contains images of size $m\times m$, $m$ = 256, 512, 1024. Furthermore, several security analyses are presented, explained one by one in the associated subsections. To validate the quality of the proposed scheme, the experimental results are compared with those of some other encryption schemes. The parameters used for the experiments are $A_{1}=A_{2}=-1.0541,A_{3}=401,B_{1}=B_{2}=-0.8514$ and $B_{3}=691,3036,5071$ for $m$ = 256, 512, 1024, respectively; $a_{1}=2,b_{1}=1000,a_{2}=19,b_{2}=1000,a_{3}=5,\delta=1000,t=2,p=293,L$ = 90,000, and $S_{\rm P}$ varies for each $P$. The experiments were performed using Matlab R$2016$a on a personal computer with a $1.8$ GHz processor and $6$ GB RAM. All encrypted images of the database, along with their histograms, are available at github . Some plain images, House256×256, Stream512×512, Boat512×512 and Male1024×1024, and their cipher images are displayed in Figure 1.
Figure 1: (a)–(d) Plain images House, Stream, Boat and Male; (e)–(h) cipher images of the plain images (a)–(d), respectively.

#### 4.3.1 Statistical Attack

A cryptosystem is said to be secure if it has high resistance against statistical attacks. The strength of this resistance is measured by entropy, correlation and histogram tests. All of these tests are applied to evaluate the performance of the discussed scheme.

1. (1) Histogram. A histogram is a graphical way to display the frequency distribution of the pixel values of an image. A secure cryptosystem generates cipher images with uniform histograms. The histograms of the images encrypted using the proposed method are available at github ; the histograms for the images in Figure 1 are shown in Figure 2. The histograms of the encrypted images are almost uniform. Moreover, the histogram of an encrypted image is totally different from that of the respective plain image, so it leaks no useful information to adversaries, and the proposed algorithm can resist statistical attacks.

Figure 2: (a)–(d) Histograms of Figure 1(a)–(d); (e)–(h) histograms of Figure 1(e)–(h).

2. (2) Entropy. Entropy is a standard measure of disorder. Let $I$ be a source of information over a set of symbols $N$. Then the entropy of $I$ is defined by:

${\rm H}(I)=\sum_{i=1}^{\\#N}p(I_{i}){\rm log}_{2}\frac{1}{p(I_{i})},$ (17)

where $p(I_{i})$ is the probability of occurrence of symbol $i.$ The ideal value of ${\rm H}(I)$ is ${\rm log}_{2}{(\\#N)}$, attained when all symbols of $N$ occur in $I$ with the same probability. Thus, an image $I$ with $256$ gray levels is highly random if ${\rm H}(I)$ is close to $8$ (notice, however, that this definition of entropy does not take into account pixel correlations). The entropy results for all images encrypted by the suggested technique are shown in Figure 3, where the minimum, average and maximum values are $7.9966,7.9986$ and $7.9999$, respectively. These results are close to $8$, and hence the developed mechanism is secure against entropy attacks.

3. (3) Pixel correlation. A meaningful image has strong correlation among adjacent pixels. A good cryptosystem breaks this pixel correlation and brings it close to zero. For any two gray values $x$ and $y$, the pixel correlation can be computed as:

$C_{xy}=\frac{E\big{[}(x-E[x])(y-E[y])\big{]}}{\sqrt{K[x]K[y]}},$ (18)

where $E[x]$ and $K[x]$ denote the expectation and variance of $x$, respectively. The range of $C_{xy}$ is $-1$ to $1$; the gray values $x$ and $y$ are weakly correlated if $C_{xy}$ is close to zero. As pixels may be adjacent in the horizontal, diagonal and vertical directions, the correlation coefficients of all encrypted images along all three directions are shown in Figure 3, where the respective ranges of $C_{xy}$ are [$-0.0078$, $0.0131$], [$-0.0092$, $0.0080$] and [$-0.0100$, $0.0513$]. These results show that the presented method is capable of reducing the pixel correlation to near zero.

Figure 3: (a)–(c) The horizontal, diagonal and vertical correlations among pixels of each image in the USC-SIPI database; (d) the entropy of each image in the USC-SIPI database.
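For reference, the two statistics above can be computed in a few lines of numpy; this is our sketch, not the authors' Matlab code.

```python
import numpy as np

def entropy(img):
    """Shannon entropy, Eq. (17), for an 8-bit grayscale image array."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                              # skip empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

def adjacent_correlation(img, direction="horizontal"):
    """Correlation coefficient of adjacent pixel pairs, Eq. (18)."""
    if direction == "horizontal":
        x, y = img[:, :-1], img[:, 1:]
    elif direction == "vertical":
        x, y = img[:-1, :], img[1:, :]
    else:                                      # diagonal
        x, y = img[:-1, :-1], img[1:, 1:]
    return float(np.corrcoef(x.ravel().astype(float),
                             y.ravel().astype(float))[0, 1])
```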
In addition, $2000$ pairs of adjacent pixels of the plain image and cipher image of Lena512×512 are randomly selected. The correlation distributions of the adjacent pixels in all three directions are shown in Figure 4, which reveals a strong pixel correlation in the plain image but a weak pixel correlation in the cipher image generated by the current scheme.

Figure 4: (b)–(d) The distribution of pixels of the plain image in the horizontal, diagonal and vertical directions; (f)–(h) the distribution of pixels of the cipher image in the horizontal, diagonal and vertical directions.

#### 4.3.2 Differential Attack

In differential attacks the opponents try to get the secret keys by studying the relation between the plain image and the cipher image. Normally attackers encrypt two images differing by a small change and then compare the properties of the corresponding cipher images. If a minor change in the original image causes a significant change in the encrypted image, then the cryptosystem has a high security level. Two tests, the NPCR (number of pixels change rate) and the UACI (unified average changing intensity), are usually used to describe the security level against differential attacks. For two plain images $P$ and $P^{{}^{\prime}}$ differing at only one pixel value, let $C_{\rm P}$ and $C_{\rm P^{{}^{\prime}}}$ be the cipher images of $P$ and $P^{{}^{\prime}}$, respectively; then the NPCR and UACI are calculated as:

$\displaystyle{\rm NPCR}$ $\displaystyle=\sum_{u=1}^{m}\sum_{v=1}^{n}\frac{\tau(u,v)}{m\times n},$ (19) $\displaystyle{\rm UACI}$ $\displaystyle=\sum_{u=1}^{m}\sum_{v=1}^{n}\frac{|C_{\rm P}(u,v)-C_{\rm P^{{}^{\prime}}}(u,v)|}{255\times m\times n},$ (20)

where $\tau(u,v)=0$ if $C_{\rm P}(u,v)=C_{\rm P^{{}^{\prime}}}(u,v)$ and $\tau(u,v)=1$ otherwise. The expected values of NPCR and UACI for 8-bit images are $0.996094$ and $0.334635$, respectively Wu (2019). We applied the above two tests to each image of the database by randomly changing a pixel value of each image. The experimental results are shown in Figure 5, giving average values of NPCR and UACI of $0.9961$ and $0.3334$, respectively. It follows from the obtained results that our scheme is capable of resisting a differential attack.

Figure 5: (a–b) The NPCR and UACI results for each image in the USC-SIPI database; (c) the first $256$ pseudo-random numbers and (d) two S-boxes generated for Lena512×512 with a small change in the input key $t$.
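Both statistics reduce to elementwise comparisons; a short numpy sketch (ours) is:

```python
import numpy as np

def npcr_uaci(C1, C2):
    """NPCR, Eq. (19), and UACI, Eq. (20), for two 8-bit cipher images."""
    C1 = C1.astype(np.int32)
    C2 = C2.astype(np.int32)
    npcr = float((C1 != C2).mean())               # fraction of changed pixels
    uaci = float(np.abs(C1 - C2).mean() / 255.0)  # mean normalized intensity change
    return npcr, uaci
```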
#### 4.3.3 Key Analysis

A secure cryptosystem must also perform well against key attacks: it should exhibit key sensitivity, have a large key space, and resist known-plaintext/chosen-plaintext attacks. The proposed scheme is analyzed against key attacks as follows.

1. (1) Key sensitivity. Attackers usually use slightly different keys to encrypt a plain image and then compare the obtained cipher image with the original cipher image to recover the actual keys. Thus, high key sensitivity is essential for higher security: cipher images of a plain image generated by two slightly different keys should be entirely different. The difference between the cipher images is quantified by Equations (19) and (20). In our experiments we encrypted the whole database changing only one key while the other keys remained unchanged. The key sensitivity results are shown in Table 9, where the average values of NPCR and UACI are $0.9960$ and $0.3341$, respectively, which indicates a remarkable difference between the cipher images. Moreover, our cryptosystem is based on pseudo-random numbers and S-boxes; the sensitivity of the pseudo-random number sequences $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(2,S_{\rm P})$ and $\beta_{\stackrel{{\scriptstyle*}}{{T}}}(1,S_{\rm P})$ and of the S-boxes $\zeta_{E_{p}}(p,2,S_{\rm P})$ and $\zeta_{E_{p}}(p,1,S_{\rm P})$ for Lena512×512 is shown in Figure 5.

Table 9: Difference between two encrypted images when key $t=2$ is changed to $t=1$. NPCR: number of pixels change rate; UACI: unified average changing intensity.

Image | NPCR(%) | UACI(%) | Image | NPCR(%) | UACI(%) | Image | NPCR(%) | UACI(%)
---|---|---|---|---|---|---|---|---
Female | 99.62 | 33.39 | House | 99.62 | 33.23 | Couple | 99.56 | 33.30
Tree | 99.59 | 33.35 | Beans | 99.64 | 33.23 | Splash | 99.60 | 33.97

2. (2) Key space. In order to resist a brute-force attack, the key space should be sufficiently large. For any cryptosystem, the key space is the set of all possible keys usable for the encryption process; generally, its size should be greater than $2^{128}.$ In the present scheme the parameters $a_{1},b_{1},a_{2},b_{2},a_{3},\delta,L,S_{\rm P},t$ and $p$ are used as secret keys, and we store each of them in $28$ bits. Thus the key space of the proposed cryptosystem is $2^{280}$, which is larger than $2^{128}$ and hence capable of resisting a brute-force attack.

3. (3) Known-plaintext/chosen-plaintext attack. In a known-plaintext attack, the attacker has partial knowledge about the plain image and cipher image and tries to break the cryptosystem, while in a chosen-plaintext attack the attacker encrypts an arbitrary image to get the encryption keys. An all-white/black image is usually encrypted to test the performance of a scheme against these powerful attacks Wang (2019); Toughi (2017). We analyzed our scheme by encrypting an all-white/black image of size $256\times 256$. The results are shown in Figure 6 and Table 10, revealing that the encrypted images are significantly randomized. Thus the proposed system is capable of preventing the above-mentioned attacks.

Figure 6: (a) All-white; (b) all-black; (c)–(d) cipher images of (a)–(b); (e)–(f) histograms of (c)–(d).

Table 10: Security analysis of all-white/black images encrypted by the proposed technique.

Plain Image | Entropy | Correlation Hori. | Correlation Diag. | Correlation Ver. | NPCR (%) | UACI (%)
---|---|---|---|---|---|---
All-white | 7.9969 | 0.0027 | 0.0020 | $-$0.0090 | 99.60 | 33.45
All-black | 7.9969 | $-$0.0080 | 0.0035 | 0.0057 | 99.62 | 33.41

### 4.4 Comparison and Discussion

Apart from the security analyses, the proposed scheme is compared with some well-known image encryption techniques. The gray-scale images of Lena256×256 and Lena512×512 are encrypted using the presented method, and the experimental results are listed in Table 11.

Table 11: Comparison of the proposed encryption scheme with several existing cryptosystems for image Lenam×m, $m$ = 256, 512.

Size $m$ | Algorithm | Entropy | Correlation Hori. | Correlation Diag. | Correlation Ver. | NPCR (%) | UACI (%) | $\\#$ S-Boxes | Dynamic S-Boxes
---|---|---|---|---|---|---|---|---|---
$256$ | Ours | 7.9974 | 0.0001 | $-$0.0007 | $-$0.0001 | 99.91 | 33.27 | 1 | Yes
| Ref. Hayat (2019) | 7.9993 | 0.0012 | 0.0003 | 0.0010 | 99.60 | 33.50 | 1 | Yes
| Ref. El-Latif (2013) | 7.9973 | - | - | - | 99.50 | 33.30 | 0 | -
| Ref. Rehman (2016) | 7.9046 | 0.0164 | $-$0.0098 | 0.0324 | 98.92 | 32.79 | >1<50 | Yes
| Ref. Belazi (2016) | 7.9963 | $-$0.0048 | $-$0.0045 | $-$0.0112 | 99.62 | 33.70 | 8 | Yes
| Ref. Wu (2017) | 7.9912 | $-$0.0001 | 0.0091 | 0.0089 | 100 | 33.47 | 0 | -
| Ref. Wan (2020) | 7.9974 | 0.0020 | 0.0020 | 0.0105 | 99.59 | 33.52 | 0 | -
$512$ | Ours | 7.9993 | 0.0001 | 0.0042 | 0.0021 | 99.61 | 33.36 | 1 | Yes
| Ref. Cheng (2015) | 7.9992 | 0.0075 | 0.0016 | 0.0057 | 99.61 | 33.38 | 1 | No
| Ref. Toughi (2017) | 7.9993 | $-$0.0004 | 0.0001 | $-$0.0018 | 99.60 | 33.48 | 1 | No
| Ref. Tong (2016) | 7.9970 | $-$0.0029 | 0.0135 | 0.0126 | 99.60 | 33.48 | 0 | -
| Ref. Zhang (2014) | 7.9994 | 0.0018 | $-$0.0012 | 0.0011 | 99.62 | 33.44 | >1 | Yes
| Ref. Zhang (2014) | 7.9993 | 0.0032 | 0.0011 | $-$0.0002 | 99.60 | 33.47 | >1 | Yes

It can be seen that our scheme generates cipher images with comparable security. Furthermore, we remark that the scheme in Toughi (2017) generates pseudo-random numbers using the group law on an EC, while the proposed method generates pseudo-random numbers by constructing triads from auxiliary parameters of elliptic surfaces. The group law consists of many operations, which makes that pseudo-random number generation process slower than the one we present here. The scheme in Belazi (2016) decomposes an image into eight blocks and uses dynamic S-boxes for encryption; computing multiple S-boxes takes more time than computing only one. Similarly, the techniques in Zhang (2014); Rehman (2016) use a set of S-boxes and encrypt an image in blocks, while our newly developed scheme encrypts the whole image using only one dynamic S-box; thus, our scheme is faster than the schemes in Zhang (2014); Rehman (2016). The security system in Tong (2016) uses a chaotic system to encrypt blocks of an image; the results in Table 11 reveal that our proposed system is cryptographically stronger than the scheme in Tong (2016). The algorithms in Wu (2017); El-Latif (2013) combine chaotic systems and different ECs to encrypt images, and it follows from Table 11 that the security level of our scheme is comparable to theirs. The technique in Wan (2020) uses double chaos along with DNA coding to get good results, as shown in Table 11, but the results obtained by the new scheme are better than those of Wan (2020). Similarly, the technique in Hayat (2019) encrypts images using ECs but does not guarantee an S-box for each set of input parameters, making our scheme faster and more robust than the scheme developed in Hayat (2019). Furthermore, the following facts put our scheme in a favorable position:

* (i) Our scheme uses a dynamic S-box for each input image, while the S-box used in Toughi (2017) is a static one, which is vulnerable Rosenthal (2003) and less secure than a dynamic one Kazlauskas (2009).
* (ii) The presented scheme guarantees an S-box for each image, which is not the case in Hayat (2019).
* (iii) To get random numbers, the described scheme generates triads once for all images of the same size, while in Hayat (2019) the computation of an EC for each input image is necessary, which is time-consuming.
* (iv) The scheme in Belazi (2016) uses eight dynamic S-boxes for a plain image, while the current scheme uses only one dynamic S-box per image to achieve the desired cryptographic security.

## 5 Conclusions

An image encryption scheme based on quasi-resonant triads and MECs was introduced. The proposed technique constructs triads to generate pseudo-random numbers and computes an MEC to construct an S-box for each input image. The pseudo-random numbers and the S-box are then used for altering and scrambling the pixels of the plain image, respectively.
As for the advantages of our proposed method: firstly, triads are based on auxiliary parameters of elliptic surfaces, and thus the pseudo-random numbers and S-boxes generated by our method are highly sensitive to the plain image, which prevents adversaries from mounting a successful attack. Secondly, generating triads from auxiliary parameters of elliptic surfaces consumes less time than computing points on ECs (we find a 4x speed increase for a range of image resolutions $m\in[128,512]$), which makes the new encryption system comparatively fast. Thirdly, our algorithm generates cipher images with an appropriate security level. In summary, all of the above analyses imply that the presented scheme is able to resist the considered attacks, has high encryption efficiency, and has lower time complexity than some of the existing techniques. In the future, the current scheme will be further optimized by means of new ideas to construct the S-boxes directly from the constructed triads, so that an MEC need not be computed for each input image.

All authors contributed equally to this work. This research is funded through the HEC project NRPU-7433.

###### Acknowledgements. We thank Gene Kopp for useful comments and suggestions. The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

The following abbreviations are used in this manuscript:

MEC | Mordell elliptic curve
---|---
S-box | Substitution box
EC | Elliptic curve

## References

* Mahmud (2020) Mahmud, M.; Lee, M.; Choi, J.Y. Evolutionary-Based Image Encryption using RNA Codons Truth Table. Optics Laser Technol. 2020, 121, 105818.
* Zhang (2014) Zhang, X.; Mao, Y.; Zhao, Z. An Efficient Chaotic Image Encryption Based on Alternate Circular S-boxes. Nonlinear Dyn. 2014, 78, 359–369.
* El-Latif (2013) El-Latif, A.A.A.; Niu, X. A Hybrid Chaotic System and Cyclic Elliptic Curve for Image Encryption. AEU-Int. J. Electron. Commun. 2013, 67, 136–143.
* Yang (2015) Yang, Y.G.; Pan, Q.X.; Sun, S.J.; Xu, P. Novel Image Encryption Based on Quantum Walks. Sci. Rep. 2015, 5, 1–9.
* Zhong (2019) Zhong, H.; Chen, X.; Tian, Q. An Improved Reversible Image Transformation using K-Means Clustering and Block Patching. Information 2019, 10, 1–17.
* Li (2017) Li, C.; Lin, D.; Lü, J. Cryptanalyzing an Image-Scrambling Encryption Algorithm of Pixel Bits. IEEE MultiMedia 2017, 24, 64–71.
* Hua (2018) Hua, Z.; Yi, S.; Zhou, Y. Medical Image Encryption using High-Speed Scrambling and Pixel Adaptive Diffusion. Signal Process. 2018, 144, 134–144.
* Xie (2017) Xie, E.Y.; Li, C.; Yu, S.; Lü, J. On the Cryptanalysis of Fridrich’s Chaotic Image Encryption Scheme. Signal Process. 2017, 132, 150–154.
* Azam (2017) Azam, N.A. A Novel Fuzzy Encryption Technique Based on Multiple Right Translated AES Gray S-Boxes and Phase Embedding. Secur. Commun. Netw. 2017, 2017, 5790189.
* Luo (2018) Luo, Y.; Tang, S.; Qin, X.; Cao, L.; Jiang, F.; Liu, J. A Double-Image Encryption Scheme Based on Amplitude-Phase Encoding and Discrete Complex Random Transformation. IEEE Access 2018, 6, 77740–77753.
* Li (2015) Li, J.; Li, J.S.; Pan, Y.Y.; Li, R. Compressive Optical Image Encryption. Sci. Rep. 2015, 5, 1–10.
* Hua (2019) Hua, Z.; Xu, B.; Jin, F.; Huang, H. Image Encryption using Josephus Problem and Filtering Diffusion. IEEE Access 2019, 7, 8660–8674.
* Wu (2019) Wu, J.; Cao, X.; Liu, X.; Ma, L.; Xiong, J.
Image Encryption using the Random FrDCT and the Chaos-Based Game of Life. J. Modern Opt. 2019, 66, 764–775.
* Yousaf (2020) Yousaf, A.; Alolaiyan, H.; Ahmad, M.; Dilbar, N.; Razaq, A. Comparison of Pre and Post-Action of a Finite Abelian Group Over Certain Nonlinear Schemes. IEEE Access 2020, _8_, 39781–39792.
* Ismaila (2020) Ismail, S.M.; Said, L.A.; Radwan, A.G.; Madian, A.H.; Abu-ElYazeed, M.F. A Novel Image Encryption System Merging Fractional-Order Edge Detection and Generalized Chaotic Maps. Signal Process. 2020, 167, 107280.
* Tang (2019) Tang, Z.; Yang, Y.; Xu, S.; Yu, C.; Zhang, X. Image Encryption with Double Spiral Scans and Chaotic Maps. Secur. Commun. Netw. 2019, 1–16. doi:10.1155/2019/8694678.
* Abdelfatah (2019) Abdelfatah, R.I. Secure Image Transmission using Chaotic-Enhanced Elliptic Curve Cryptography. IEEE Access 2019, _8_, 1–16.
* Yu (2020) Yu, J.; Guo, S.; Song, X.; Xie, Y.; Wang, E. Image Parallel Encryption Technology Based on Sequence Generator and Chaotic Measurement Matrix. Entropy 2020, 22, 76.
* Zhu (2018) Zhu, S.; Zhu, C.; Wang, W. A Novel Image Compression-Encryption Scheme Based on Chaos and Compression Sensing. IEEE Access 2018, 6, 67095–67107.
* ElKamchouchi (2020) ElKamchouchi, D.H.; Mohamed, H.G.; Moussa, K.H. A Bijective Image Encryption System Based on Hybrid Chaotic Map Diffusion and DNA Confusion. Entropy 2020, 22, 180.
* Zhou (2013) Zhou, Y.; Bao, L.; Chen, C.P. Image Encryption using a New Parametric Switching Chaotic System. Signal Process. 2013, 93, 3039–3052.
* Hu (2019) Hua, Z.; Zhou, Y.; Huang, H. Cosine-Transform-Based Chaotic System for Image Encryption. Inf. Sci. 2019, 480, 403–419.
* Xu (2014) Xu, Y.; Wang, H.; Li, Y.; Pei, B. Image Encryption Based on Synchronization of Fractional Chaotic Systems. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 3735–3744.
* Ahmad (2015) Ahmad, M.; Shamsi, U.; Khan, I.R. An Enhanced Image Encryption Algorithm using Fractional Chaotic Systems. Procedia Comput. Sci. 2015, 57, 852–859.
* Cheng (2015) Cheng, P.; Yang, H.; Wei, P.; Zhang, W. A Fast Image Encryption Algorithm Based on Chaotic Map and Lookup Table. Nonlinear Dyn. 2015, 79, 2121–2131.
* Belazi (2016) Belazi, A.; El-Latif, A.A.A.; Belghith, S. A Novel Image Encryption Scheme Based on Substitution-Permutation Network and Chaos. Signal Process. 2016, 128, 155–170.
* Rehman (2016) Rehman, A.U.; Khan, J.S.; Ahmad, J.; Hwang, S.O. A New Image Encryption Scheme Based on Dynamic S-boxes and Chaotic Maps. 3D Res. 2016, 7, 1–8.
* Jia (2016) Jia, N.; Liu, S.; Ding, Q.; Wu, S.; Pan, X. A New Method of Encryption Algorithm Based on Chaos and ECC. J. Inf. Hiding Multimedia Signal Process. 2016, 7, 637–643.
* Toughi (2017) Toughi, S.; Fathi, M.H.; Sekhavat, Y.A. An Image Encryption Scheme Based on Elliptic Curve Pseudo Random and Advanced Encryption System. Signal Process. 2017, 141, 217–227.
* Liu (2014) Liu, H.; Liu, Y. Cryptanalyzing an Image Encryption Scheme Based on Hybrid Chaotic System and Cyclic Elliptic Curve. Opt. Laser Technol. 2014, 56, 15–19.
* Hayat (2019) Hayat, U.; Azam, N.A. A Novel Image Encryption Scheme Based on an Elliptic Curve. Signal Process. 2019, 155, 391–402.
* Bustamante (2013) Bustamante, M.D.; Hayat, U. Complete Classification of Discrete Resonant Rossby/Drift Wave Triads on Periodic Domains. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 2402–2419.
* Hayat (2016) Hayat, U.; Amanullah, S.; Walsh, S.; Abdullah, M.; Bustamante, M.D.
Discrete Resonant Rossby/Drift Wave Triads: Explicit Parameterisations and a Fast Direct Numerical Search Algorithm. Commun. Nonlinear Sci. Numer. Simul. 2019, 79, 104896.
* Azam (2018) Azam, N.A.; Hayat, U.; Ullah, I. An Injective S-Box Design Scheme over an Ordered Isomorphic Elliptic Curve and Its Characterization. Secur. Commun. Netw. 2018, _2018_, 3421725.
* Charney (1948) Charney, J.G. On the scale of atmospheric motions. Geophys. Publ. 1948, 17, 3–17.
* Hasegawa (1978) Hasegawa, A.; Mima, K. Pseudo-three-dimensional turbulence in magnetized nonuniform plasma. Phys. Fluids 1978, 21, 87–92.
* Connaughton (2010) Connaughton, C.P.; Nadiga, B.T.; Nazarenko, S.V.; Quinn, B.E. Modulational instability of Rossby and drift waves and generation of zonal jets. J. Fluid Mech. 2010, 654, 207–231.
* Harris (2013) Harris, J.; Connaughton, C.; Bustamante, M.D. Percolation Transition in the Kinematics of Nonlinear Resonance Broadening in Charney–Hasegawa–Mima Model of Rossby Wave Turbulence. New J. Phys. 2013, 15, 083011.
* Galperin (2019) Galperin, B.; Read, P.L. (Eds.) _Zonal Jets: Phenomenology, Genesis, and Physics_; Cambridge University Press: Cambridge, UK, 2019.
* Kopp (2017) Kopp, G.S. The Arithmetic Geometry of Resonant Rossby Wave Triads. SIAM J. Appl. Algebra Geomet. 2017, 1, 352–373.
* (41) Washington, L.C. _Elliptic Curves: Number Theory and Cryptography_, Discrete Mathematics and Its Applications, 2nd ed.; Chapman and Hall/CRC, University of Maryland College Park: College Park, MD, USA, 2003.
* Azam1 (2019) Azam, N.A.; Hayat, U.; Ullah, I. Efficient Construction of S-boxes Based on a Mordell Elliptic Curve Over a Finite Field. Front. Inf. Technol. Electron. Eng. 2019, 20, 1378–1389.
* Adams (1990) Adams, C.; Tavares, S. The Structured Design of Cryptographically Good S-boxes. J. Cryptol. 1990, 3, 27–41.
* Matsui (1994) Matsui, M. Linear cryptanalysis method of DES cipher. In Advances in Cryptology, Proceedings of the Workshop on the Theory and Application of Cryptographic Techniques (EUROCRYPT '93), Lofthus, Norway, 23–27 May 1993; Springer: Berlin/Heidelberg, Germany, 1994; pp. 386–397.
* Webster (1985) Webster, A.; Tavares, S.E. On the design of S-boxes. In Conference on the Theory and Application of Cryptographic Techniques; Springer: Berlin/Heidelberg, Germany, 1985; pp. 523–534.
* Biham (1991) Biham, E.; Shamir, A. Differential Cryptanalysis of DES-like Cryptosystems. J. Cryptol. 1991, 4, 3–72.
* Ye (2018) Ye, T.; Zhimao, L. Chaotic S-box: Six-Dimensional Fractional Lorenz–Duffing Chaotic System and O-shaped Path Scrambling. Nonlinear Dyn. 2018, 94, 2115–2126.
* Ozkaynak (2017) Özkaynak, F.; Çelik, V.; Özer, A.B. A New S-box Construction Method Based on the Fractional-Order Chaotic Chen System. Signal Image Video Process. 2017, 11, 659–664.
* Çavuşoğlu (2017) Çavuşoğlu, Ü.; Zengin, A.; Pehlivan, I.; Kaçar, S. A Novel Approach for Strong S-Box Generation Algorithm Design Based on Chaotic Scaled Zhongtang System. Nonlinear Dyn. 2017, 87, 1081–1094.
* Belazi (2017) Belazi, A.; El-Latif, A.A.A. A Simple yet Efficient S-box Method Based on Chaotic Sine Map. Optik 2017, 130, 1438–1444.
* Özkaynak (2019) Özkaynak, F. Construction of robust substitution boxes based on chaotic systems. Neural Comput. Appl. 2019, 31, 3317–3326.
* Liu (2018) Liu, L.; Zhang, Y.; Wang, X. A Novel Method for Constructing the S-box Based on Spatiotemporal Chaotic Dynamics. Appl. Sci. 2018, 8, 2650.
* Hayat (2018) Hayat, U.; Azam, N.A.; Asif, M.
A Method of Generating $8\times 8$ Substitution Boxes Based on Elliptic Curves. Wirel. Pers. Commun. 2018, 101, 439–451.
* Wang (2019) Wang, X.; Çavuşoğlu, Ü.; Kacar, S.; Akgul, A.; Pham, V.T.; Jafari, S.; Alsaadi, F.E.; Nguyen, X.Q. S-box Based Image Encryption Application using a Chaotic System without Equilibrium. Appl. Sci. 2019, 9, 781.
* Alzaidi (2018) Alzaidi, A.A.; Ahmad, M.; Ahmed, H.S.; Solami, E.A. Sine-Cosine Optimization-Based Bijective Substitution-Boxes Construction using Enhanced Dynamics of Chaotic Map. Complexity 2018, _2018_, 9389065.
* (56) USC-SIPI Image Database. Available online: http://sipi.usc.edu/database/database.php (accessed on 21/02/2020).
* (57) https://github.com/ikram702314/Results
* Wang (2019) Wang, X.; Zhao, H.; Hou, Y.; Luo, C.; Zhang, Y.; Wang, C. Chaotic Image Encryption Algorithm Based on Pseudo-Random Bit Sequence and DNA Plane. Modern Phys. Lett. B 2019, 33, 1950263.
* Wu (2017) Wu, J.; Liao, X.; Yang, B. Color Image Encryption Based on Chaotic Systems and Elliptic Curve ElGamal Scheme. Signal Process. 2017, 141, 109–124.
* Wan (2020) Wan, Y.; Gu, S.; Du, B. A New Image Encryption Algorithm Based on Composite Chaos and Hyperchaos Combined with DNA Coding. Entropy 2020, 22, 171.
* Tong (2016) Tong, X.J.; Zhang, M.; Wang, Z.; Ma, J. A Joint Color Image Encryption and Compression Scheme Based on Hyper-Chaotic System. Nonlinear Dyn. 2016, 84, 2333–2356.
* Zhang (2014) Zhang, Y.; Xiao, D. An Image Encryption Scheme Based on Rotation Matrix Bit-Level Permutation and Block Diffusion. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 74–82.
* Rosenthal (2003) Rosenthal, J. A Polynomial Description of the Rijndael Advanced Encryption Standard. J. Algebra Its Appl. 2003, 2, 223–236.
* Kazlauskas (2009) Kazlauskas, K.; Kazlauskas, J. Key-Dependent S-box Generation in AES Block Cipher System. Informatica 2009, 20, 23–34.
2024-09-04T02:54:57.980511
2020-03-06T21:19:44
2003.03444
{ "authors": "Yuval Pinter and Cassandra L. Jacobs and Max Bittker", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26091", "submitter": "Yuval Pinter", "url": "https://arxiv.org/abs/2003.03444" }
arxiv-papers
# NYTWIT: A Dataset of Novel Words in the New York Times

Yuval Pinter, Georgia Institute of Technology, Atlanta, GA, USA <EMAIL_ADDRESS>
Cassandra L. Jacobs, University of Wisconsin, Madison, WI, USA <EMAIL_ADDRESS>
Max Bittker, School for Poetic Computation, New York, NY, USA <EMAIL_ADDRESS>

###### Abstract

We present the New York Times Word Innovation Types dataset, or NYTWIT, a collection of over 2,500 novel English words published in the New York Times between November 2017 and March 2019, manually annotated for their class of novelty (such as lexical derivation, dialectal variation, blending, or compounding). We present baseline results for both uncontextual and contextual prediction of novelty class, showing that there is room for improvement even for state-of-the-art NLP systems. We hope this resource will prove useful for linguists and NLP practitioners by providing a real-world environment of novel word appearance.

## 1 Introduction

Novel words, or Out-Of-Vocabulary words (OOVs), are a pervasive problem in modern natural language processing [Brill, 1995, Young et al., 2018]. A common scenario in which this problem appears is that of a pre-trained model containing a word representation component such as an embedding table, encountering a previously-unseen word in a downstream task such as question answering or natural language inference. Multiple lines of work attempt to alleviate the downstream effect of OOV words [Müller and Schütze, 2011, Pinter et al., 2017], but most tend to focus on individual categories of OOVs: typographical errors [Sakaguchi et al., 2017], domain-specific terminology [Du et al., 2016], stylistic variability [Eisenstein, 2013, van der Goot, 2019], morphological productivity [Bhatia et al., 2016], or novel named entities [Hoffart et al., 2014]. In reality, unseen texts contain all these classes of novelty, and more.

OOVs are typically presented as a significant challenge for generalization or understanding in noisy user-generated text (e.g. Twitter) and/or domain-specific content. Nevertheless, even large corpora that are narrow in domain (edited news stories) contain linguistic innovations, including but not limited to novel morphological processes, typographical errors, and loan words.

In this paper, we present a dataset of novel words in English relative to the corpus of articles published by the New York Times (NYT), as collected automatically in real time by a Twitter bot. We name it the New York Times Word Innovation Types corpus, or NYTWIT for short. We annotated each word for one of eighteen linguistically-informed categories of novelty within the context of the NYT corpus, as well as for its date of publication and a retrieval document identifier to enable context extraction (context article excerpts are not freely available without copyright licensing from the New York Times, who have ignored all contact attempts to date). To our knowledge, this is the first resource to include novel words along with their contextual information in addition to linguistically-informed annotation, a method that enables expansion beyond dictionary-based methods [Cook and Stevenson, 2010, Dhuliawala et al., 2016, Ahmad, 2000] and decontextualized neologisms [Kulkarni and Wang, 2018]. In contrast with resources which provide examples and attestations to lexical forms, NYTWIT was constructed in a corpus-comprehensive manner where novelty guides curation and not vice versa.
In addition, we provide results for the task of classifying words into their categories based on word form and contextual information, a task which can both provide data for linguistic analysis of lexical enrichment and serve as a processing step for NLP systems, which may work better if different modules are applied to different classes of novel words. We show that both character-level models and large pre-trained sentence encoders struggle on this task, illustrating the challenges of modeling language innovation. We release the data at https://github.com/yuvalpinter/nytwit under the GNU General Public License v3.0. The project is ongoing, and this document pertains to version 1.1.

## 2 The New York Times Word Innovation Types Dataset

Our dataset is built upon two bots developed by the third author. The first stage of data collection relies on tweets from the NYT_First_Said bot (https://twitter.com/NYT_First_Said), which operates by scraping new articles as they are posted on the NYT site and tweeting out novel words following a filtering process, which we describe at a high level (the code for the bot is available at https://github.com/MaxBittker/NYT-first-said). After tokenizing on white space and punctuation, the precision-oriented script rejects capitalized words in order to avoid proper nouns (at the cost of missing sentence-initial true OOVs). langid [Lui and Baldwin, 2012] is used to reject non-English sentences, while still allowing loanwords in English sentences. Words are queried against the historical NYT search API to detect unpublished words (we note that the search index relies on imperfect, although extensive, digitization artifacts). At the time of writing, in a sample of 450 terms from our dataset, four were entries in the Oxford English Dictionary, nearly all of which belong to the domain or foreign categories. For the time range of our collected corpus, November 7, 2017 to March 28, 2019, a bandwidth limit of five words per 30 minutes was imposed, but we confirmed that this did not have a substantial effect on OOV coverage, leaving our artifacts distributionally representative for the news domain.

An associated context bot (https://twitter.com/NYT_Said_Where) replies to the tweets with links to the original articles. We used the URLs from this bot's posts as the main reference for the words' contexts. For 17 words, the article URL was retrieved manually by searching for the target article directly (one term lacks a context because neither the NYT search engine nor the API supports the letter é). As the articles are subject to edits long after publication, a small but growing portion of articles no longer contain the context; at the time of publication, these edits mostly involved the removal of typographical errors from the stories, which are ultimately filtered out by our annotation process (see below).

### 2.1 Annotation

The extracted data was independently annotated and filtered by the first two authors. Initially, all 2,587 words were assigned one of 20+ tags inspired by the word formation literature [Kiparsky, 1982, Klymenko, 2019]. Certain categories were filter categories intended to capture and exclude false positives from the final dataset: Duplicate for inflections of words already appearing in the dataset in a morphologically simpler form, e.g.
batchcode and batchcodes; Foreign and PRP for foreign words and proper names (mostly all-lowercase Twitter usernames) which were not caught by the automatic filtering; and Spaces and Typo for unintended cases of space deletion and typographical errors which were not caught by NYT editors (the overwhelming share of these words has indeed since been deleted from the NYT website). The filtered items are provided in the dataset under the label Filtered.

Agreement between the annotators at the preliminary phase was 68% over all labels, with a Cohen's Kappa of 0.65. Following category filtering, amounting to 40% of the original dataset, agreement over the remaining 1,550 words was calculated to be 65% at 0.61 Kappa. At the coarse-grained level, agreement on the four themes (lexical / morphological / syntactic / sociopragmatic) was 89% at 0.75 Kappa (a reviewer noted that these are low agreement rates and compared the task to part-of-speech annotation; we dispute the comparison, both on grounds of the novelty of the forms involved and of the mechanical, syntactic nature of the majority of POS tagging decisions). The annotators then examined each other's annotations and agreed on some consolidation of rarely-occurring original labels, as well as the introduction of new labels deemed useful post-hoc.

### 2.2 Novel Word Taxonomy

We describe the eighteen categories in the finalized dataset, organized by a thematic grouping not explicitly annotated. Counts for each category are provided [in brackets].

##### Lexical OOVs.

We deem certain categories to arise from the fact that the NYT, while being interested in many aspects of life, has not had the chance to delve into each and every one in depth over its 168 years of existence. These are the Domain label for technical terms from uncommon domains (e.g. glossopoeia) [285]; the Innovation label for terms coined with no discernible prevailing linguistic process (e.g. swanicles, a term from a work of fiction) [11]; and the Onomatopoeia label for sound-based sequences (e.g. ktktk) [23], which includes cases of verbatim vocalization such as trololo.

##### Morphological OOVs.

In this group we include categories of words composed of meaning-carrying units present in existing English words which have appeared in the NYT before, manifested in a new form. In increasing order of syntactic and semantic novelty, they are: Infl, unseen inflections of existing wordforms: same part-of-speech, different syntactic attributes (e.g. pennyloafers) [53] (we include the negating prefixes in- and un-, which, despite changing a word's meaning, retain its part-of-speech); Deriv, unseen derivations of existing words into new parts-of-speech which carry no semantic distancing beyond that implicit in the new part-of-speech itself (e.g. foamability) [215]; Affix, affixation of very distinct base words which are typically derivational in nature but include a semantic charge (e.g. extraphotographic, pizzaless) [483]; Affix_Libfix, affixation of distinct base words with particles extracted from another word in a process known as libfixation [Zwicky, 2010] or splintering [Berman, 1961], where the liberated affix still elicits the originating word but can be freely attached to a growing selection of words (e.g. dripware) [18]; Compound_Comp, a concatenation of two complete words each contributing essential semantics to the final form in a way we deem (subjectively, with help of context) to be compositional (e.g.
smellwalks, strolls focusing on olfactory input) [121] (one compound in our dataset, dramatotherapy, adds characters for cadence; another, laysoccerperson, is nonlinear); Compound_New, a concatenation of base words resulting in a new semantic concept deemed remote from the bases (e.g. nothingbuffet, a play on nothingburger) [49]; and Blend, a fusion of two or more base forms together where original characters are lost or shared, or new ones are added (e.g. chipster, a Chicano hipster) [142] (a single blend, pregret, has just one base fused with a prefix).

##### Syntactic OOVs.

This group consists solely of the Synth category of tokens which synthesize multiple syntactic words into one form, a rare formation process in English typically limited to auxiliary contractions (e.g. this'll) [6].

##### Sociopragmatic OOVs.

Words in this group exhibit an orthographic diversion from standard English, usually intended as a statement of register or status, or as a faithful representation of a certain linguistic style or sentiment. Archaic, a register of older variants of English or an ironic semblance of such (e.g. shooketh, a mock-archaic form of shake using Middle English morphology) [14]; Dialect, a geographically- or demographically-specific form of a word typically spelled differently in the NYT (e.g. skwarsh, an r-full squash) [46]; Infix, a morphological tool reserved in English for expletive emphasis [McCawley, 1978] (e.g. unfreakingbelievable) [2]; Phonaestheme, a phonological duplication phenomenon used in contemporary English nearly only as derisive echo reduplication borrowed from Yiddish [Wales and Ramsaran, 1990] (e.g. schmarket) [6]; Lengthening, a written manifestation of the expressive elongation of phonetic segments (e.g. greaaaaat) [53]; Variant, spelling alternations or intentional typos which are not intended to be read differently from the standard form of the word, used for branding and jest (e.g. kyllyng) [18]; and Spaces_Sic, the removal of whitespace to simulate breathlessness (e.g. lineafterlineafterline) [5].

#### 2.2.1 Difficult Distinctions

Naturally, some annotation cases are not clear-cut, as evidenced by the imperfect inter-annotator agreement. We found the most challenging cases to be among the morphological categories: where an affix is either semantically null (Deriv / Infl) or not (Affix) (14% and 15% of disagreements, respectively); where a sense of the nearest in-vocabulary word can signal the difference between Infl and Deriv (3.4%); and where an Affix_Libfix has been liberated enough from the underlying word such that it is now simply an Affix (does cyber- still evoke the full word cybernetics? Does crypto- evoke cryptography?); if it has not been liberated yet, it should be a Blend or a Compound. In addition, the pre-processing phase required a demarcation between Domain and Foreign which was not easy to make given the heavy foreign-word influence in certain knowledge domains such as cuisine (e.g. dinkelbrot). Words adapted into English morphology would usually lead to a Domain label (Domain vs. Compound: 4%). In many cases, we found the contexts in which the words were introduced to give sufficient disambiguation (so, e.g., cybercoach is an affix, but cyberinvasion is a compound). We invite readers to email errata to either of the first two authors, or submit a pull request on GitHub.

## 3 OOV Classification Task

The task of classifying OOVs, i.e.
assigning a novel word a label from the taxonomy we defined above, can be beneficial from both an analytical linguistic standpoint and from an NLP standpoint concerned with model performance on downstream language understanding tasks. To get a sense of the predictability of the various OOV classes in the dataset, we present several baselines for this straightforward task. The uniqueness of our dataset allows us to apply both type-level and context-dependent systems, the latter operating in the real-world scenario of encountering a word for the first time in the actual context of its introduction to the corpus.

First, our Majority class baseline assumes all OOVs are the result of affixation. For all following models we trained a ridge classifier with default regularization parameters in scikit-learn. Scores for all supervised models are reported via 10-fold cross-validation using the same folds for all systems. Due to the class imbalance, we implemented training such that rare classes were upsampled with replacement at each iteration to the same frequency as the most common class. We report accuracy (Acc) and macro F1 scores.

##### Contextless features.

We compare and contrast several input features to our classifier that only have access to the form of the OOV, without consideration of the context:

* Character n-grams. We extract bag-of-character features ranging from one to three characters for each OOV. The feature vocabulary is estimated on the training set and applied to the test set (a minimal code sketch of this baseline is given below).
* FastText. We infer fastText vectors [Bojanowski et al., 2017], applying its 3–6 character-ngram representations, from the subword model trained on English Wikipedia (wiki.en.bin file obtained May 25, 2020).
* ELMo embeddings. We use the word-level embeddings from ELMo [Peters et al., 2018], obtained via a pre-trained character-level convolutional net for each OOV presented in isolation, with no surrounding sentence context.
* BERT no-context. We apply BERT-Base [Devlin et al., 2019] to the OOVs in a null [CLS] ___ [SEP] . context. The averaged top-layer vectors from all the OOV's word pieces are passed to the classifier (using just the embedding of the final word piece produced similar results).

##### Context-aware features.

* Character RNN. We train a 2-layer forward- (backward-) character-level GRU language model on 100,000 Wikipedia documents and run it through the beginning (end) of the sentence until the OOV, then use the concatenated final hidden states from each direction as features.
* ELMo. We obtain contextualized embeddings for all words in our sentences and select the top-layer representation associated with each OOV.
* BERT. We apply BERT-Base to the entire sentence in which the OOV appears, and use the averaged top-layer embeddings at the indices of each OOV.

### 3.1 Results

The results, presented in Table 1, show that pretrained contextual models not only trail behind a contextless, un-pretrained character n-gram baseline, but even fail to improve over their own uncontextualized variants. An analysis of class-specific F1 scores across the different models exposed two general patterns in classifier performance: in all models, performance on the Affix class was in the top four, and the same for Lengthening except for Character RNN. We also observed that models that encode contextual, sentence-level properties are typically better at encoding genre phenomena (e.g. Domain was a top-four category for BERT, Character RNN, fastText, and ELMo).
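As referenced above, the character n-gram baseline can be sketched in a few lines. The words and labels below are a toy sample standing in for the annotated dataset, and "snackable" is a hypothetical unseen word; the actual experiments additionally upsample rare classes and report 10-fold cross-validation scores.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import RidgeClassifier

# Toy stand-in for the annotated (word, novelty class) pairs
words = ["pizzaless", "chipster", "greaaaaat", "foamability"]
labels = ["Affix", "Blend", "Lengthening", "Deriv"]

# Bag-of-character features for 1- to 3-grams, estimated on the training set
vec = CountVectorizer(analyzer="char", ngram_range=(1, 3))
X = vec.fit_transform(words)

clf = RidgeClassifier()  # default regularization, as described above
clf.fit(X, labels)
print(clf.predict(vec.transform(["snackable"])))  # hypothetical unseen OOV
```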
However, for some classes of models, there was a clear benefit to memorizing word forms. All count-based feature representations (e.g. bag-of-character ngrams, bag-of-wordpieces) led to better performance on orthographic properties, namely Phonaestheme, Synth, and Onomatopoeia. These results demonstrate the power that simple surface-form signals from character sequences still possess in meaningful NLP tasks. In future work, we will attempt to supplement the contextual models with auxiliary mechanisms and perform fine-tuning.

Contextless | Acc | F1 | Contextual | Acc | F1
---|---|---|---|---|---
Majority class | .312 | .026 | | |
Character n-grams | .484 | .323 | | |
FastText | .433 | .241 | Character RNN | .128 | .054
ELMo embeddings | .365 | .203 | ELMo | .324 | .135
BERT no-context | .442 | .288 | BERT | .469 | .269

Table 1: Baseline results for OOV classification ($N=1550$, $|C|=18$).

## 4 Conclusion

We presented a novel dataset of OOVs along with their contexts and linguistic novelty class annotations. We showed that contextual information in the form of other parts of the sentence provides some signal, but simple models relying on character n-gram information alone achieve high performance.

The availability of broader document contexts in which these neologisms occur enables many linguistic and technical applications. From the perspective of the study of language growth and formation, the dataset may be used to assess the morphological productivity of different affixes and roots, or the prevalence of the different word formation processes in a realistic setting; or to perform in-depth analysis on any of the specific types of innovations we identified. In addition, the in-vivo nature of the dataset provides a reference for neologisms which may or may not be later adopted into everyday use, allowing diachronic studies anchored in the time of word introduction. Analysis of the phonological, morphological, and discourse-level properties of these words may provide insight into lexical adoption dynamics.

For NLP researchers, an important component of text applications is proper normalization and segmentation of word forms. Our experiment shows that popular word form encoders, such as ELMo or BERT's WordPiece, still have a long way to go in terms of recognizing the origins of a novel form. Errors at this stage might lead to an inability to handle morphologically complex OOVs in downstream semantic applications [Pinter et al., 2020], although further study of such effects and of the utility of OOV classification in alleviating them is still necessary. Properly leveraging context for morphological decomposition of complex forms also remains an open problem.

The resource is an ongoing project; the repository includes plans for the next versions, including increasing the dataset size by including newer words from the bot, and annotating additional information such as part-of-speech tags.

## Acknowledgments

We thank Jacob Eisenstein, Kyle Gorman, Arya McCarthy, Sandeep Soni, and the anonymous reviewers for their valuable notes. Yuval Pinter is a Bloomberg Data Science PhD Fellow. Cassandra Jacobs is supported by NSF BCS Grant 1849236 awarded to Maryellen MacDonald.

## References

* [Ahmad, 2000] Khurshid Ahmad. 2000\. Neologisms, nonces and word formation. In Proceedings of the Ninth EURALEX International Congress, pages 711–730.
* [Berman, 1961] JM Berman. 1961\. Contribution on blending. Zeitschrift für Anglistik und Amerikanistik, 9:278–281.
* [Bhatia et al., 2016] Parminder Bhatia, Robert Guthrie, and Jacob Eisenstein. 2016\. Morphological priors for probabilistic neural word embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 490–500, Austin, Texas, November. Association for Computational Linguistics.
* [Bojanowski et al., 2017] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017\. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.
* [Brill, 1995] Eric Brill. 1995\. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543–565.
* [Cook and Stevenson, 2010] Paul Cook and Suzanne Stevenson. 2010\. Automatically identifying the source words of lexical blends in English. Computational Linguistics, 36(1):129–149.
* [Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019\. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
* [Dhuliawala et al., 2016] Shehzaad Dhuliawala, Diptesh Kanojia, and Pushpak Bhattacharyya. 2016\. SlangNet: A WordNet like resource for English slang. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4329–4332, Portorož, Slovenia, May. European Language Resources Association (ELRA).
* [Du et al., 2016] Jinhua Du, Andy Way, and Andrzej Zydron. 2016\. Using BabelNet to improve OOV coverage in SMT. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 9–15, Portorož, Slovenia, May. European Language Resources Association (ELRA).
* [Eisenstein, 2013] Jacob Eisenstein. 2013\. What to do about bad language on the internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 359–369, Atlanta, Georgia, June. Association for Computational Linguistics.
* [Hoffart et al., 2014] Johannes Hoffart, Yasemin Altun, and Gerhard Weikum. 2014\. Discovering emerging entities with ambiguous names. In Proceedings of the 23rd International Conference on World Wide Web, pages 385–396.
* [Kiparsky, 1982] Paul Kiparsky. 1982\. Word-formation and the lexicon. In Proceedings of the Mid-America Linguistics Conference, pages 3–29. University of Kansas.
* [Klymenko, 2019] Olga Klymenko. 2019\. Twitterverse: The birth of new words. Proceedings of the Linguistic Society of America, 4(1):11–1.
* [Kulkarni and Wang, 2018] Vivek Kulkarni and William Yang Wang. 2018\. Simple models for word formation in slang. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1424–1434, New Orleans, Louisiana, June. Association for Computational Linguistics.
* [Lui and Baldwin, 2012] Marco Lui and Timothy Baldwin. 2012\. langid.py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, pages 25–30. Association for Computational Linguistics.
* [McCawley, 1978] James D McCawley. 1978\. Where you can shove infixes. Syllables and Segments, pages 213–221.
* [Müller and Schütze, 2011] Thomas Müller and Hinrich Schütze. 2011\. Improved modeling of out-of-vocabulary words using morphological classes. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 524–528, Portland, Oregon, USA, June. Association for Computational Linguistics.
* [Peters et al., 2018] Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018\. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana, June. Association for Computational Linguistics.
* [Pinter et al., 2017] Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017\. Mimicking word embeddings using subword RNNs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 102–112, Copenhagen, Denmark, September. Association for Computational Linguistics.
* [Pinter et al., 2020] Yuval Pinter, Cassandra L. Jacobs, and Jacob Eisenstein. 2020\. Will it unblend? In Findings of EMNLP.
* [Sakaguchi et al., 2017] Keisuke Sakaguchi, Kevin Duh, Matt Post, and Benjamin Van Durme. 2017\. Robsut wrod reocginiton via semi-character recurrent neural network. In Thirty-First AAAI Conference on Artificial Intelligence.
* [van der Goot, 2019] Rob van der Goot. 2019\. An in-depth analysis of the effect of lexical normalization on the dependency parsing of social media. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 115–120, Hong Kong, China, November. Association for Computational Linguistics.
* [Wales and Ramsaran, 1990] Katie Wales and S Ramsaran. 1990\. Phonotactics and phonaesthesia: the power of folk lexicology. Studies in Pronunciation of English. A Commemorative Volume in Honour of AC Gimson, pages 339–351.
* [Young et al., 2018] Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. 2018\. Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, 13(3):55–75.
* [Zwicky, 2010] Arnold Zwicky. 2010\. Libfixes. Arnold Zwicky's Blog.
2024-09-04T02:54:58.004399
2020-03-06T22:54:46
2003.03461
{ "authors": "Weili Nie, Tero Karras, Animesh Garg, Shoubhik Debnath, Anjul Patney,\n Ankit B. Patel, Anima Anandkumar", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26092", "submitter": "Weili Nie", "url": "https://arxiv.org/abs/2003.03461" }
arxiv-papers
# Semi-Supervised StyleGAN for Disentanglement Learning

Weili Nie∗, Tero Karras, Animesh Garg, Shoubhik Debnath, Anjul Patney, Ankit B. Patel, Anima Anandkumar

###### Abstract

Disentanglement learning is crucial for obtaining disentangled representations and controllable generation. Current disentanglement methods face several inherent limitations: difficulty with high-resolution images, primarily focusing on learning disentangled representations, and non-identifiability due to the unsupervised setting. To alleviate these limitations, we design new architectures and loss functions based on StyleGAN (Karras et al., 2019) for semi-supervised high-resolution disentanglement learning. We create two complex high-resolution synthetic datasets for systematic testing. We investigate the impact of limited supervision and find that using only 0.25%$\sim$2.5% of labeled data is sufficient for good disentanglement on both synthetic and real datasets. We propose new metrics to quantify generator controllability, and observe there may exist a crucial trade-off between disentangled representation learning and controllable generation. We also consider semantic fine-grained image editing to achieve better generalization to unseen images.

## 1 Introduction

Disentanglement learning with deep generative models has attracted much attention recently (Chen et al., 2016; Higgins et al., 2017; Locatello et al., 2019). This is crucial for controllable generation, where the style codes specified to the generator need to separately control various factors of variation for faithful generation. Another goal is learning disentangled representations, where the input samples can be encoded to latent factors that are disentangled. This has been argued as a key to the success of deep learning (Achille & Soatto, 2018). Previous works have primarily focused on only one of the above two objectives. Ideally, the ultimate goal of disentanglement learning is to achieve both objectives at the same time, especially on more complex high-resolution images, and we pursue this goal in this paper.

We first list the three main limitations of current disentanglement methods. First, much effort has focused on unsupervised disentanglement methods (Chen et al., 2016; Higgins et al., 2017; Nguyen-Phuoc et al., 2019). This is because a large number of fully annotated samples is expensive to obtain. These methods suffer from non-identifiability, which means that multiple repeated runs will not reliably observe the same latent representations (Hyvärinen & Pajunen, 1999; Locatello et al., 2019). In addition, human feedback is needed to discern (i) what factors of variation the model has learnt (e.g., object shape and color), and (ii) what semantic meaning different values of the discovered factor code represent (e.g., red and blue in a color factor code). To reliably control generation for practical use, adding a small amount of labeled data may resolve the non-identifiability issue and lead to interpretable factors. Hence, we investigate the impact of limited supervision on disentanglement learning.

Second, current disentanglement methods (Locatello et al., 2019, 2020) are mainly developed and evaluated on relatively simple low-resolution images, such as dSprites (Matthey et al., 2017) and 3DShapes (Kim & Mnih, 2018), which raises concerns about their ability to scale up to more diverse, higher-resolution images.
For example, the use of 3D representations to disentangle the 3D pose may not easily apply to high-resolution images due to the computational cost. The difficulty of some deep generative models in generating realistic images also limits their application in more complex domains. Furthermore, although there exist many real image datasets of high resolution, the latent factors are typically only partially observed or unbalanced, which makes it hard to scientifically study disentanglement. To gain practically useful insights, it is critical to first test disentanglement methods on complex, high-resolution, synthetic images wherein ground-truth factors are easy to obtain.

Third, most previous works (Kim & Mnih, 2018; Chen et al., 2018) have primarily focused on learning disentangled representations by quantifying encoder disentanglement quality, in the hope that a better disentangled encoder might also lead to a better disentangled generator. However, to the best of our knowledge, there is no clear evidence that a proportional relationship between encoder and generator disentanglement quality always exists. A good analogy is that an art critic can disentangle various painting styles and skills, but may not be able to create a good painting by combining these styles and skills. Thus, results based solely on evaluating encoder disentanglement may be misleading, especially in tasks where the requirement for controllable generation is more critical. This highlights the importance of measuring generator disentanglement quality in order to properly evaluate and compare different methods.

#### Main contributions.

In this work, we investigate semi-supervised disentanglement learning based on StyleGAN (Karras et al., 2019), one of the state-of-the-art generative adversarial networks (GANs), for complex high-resolution images. In summary, our main contributions are as follows:

* We first justify the advantages of the StyleGAN architecture in disentanglement learning, by showing that StyleGAN augmented with a mutual information loss (called Info-StyleGAN) outperforms most state-of-the-art unsupervised disentanglement methods.
* We propose Semi-StyleGAN, which achieves near fully-supervised disentanglement quality with limited supervision (0.25%$\sim$2.5%) on synthetic and real data.
* We propose new metrics (termed MIG-gen and L2-gen) to evaluate generator controllability, and reveal a crucial trade-off between learning disentangled representations and controllable generation.
* We then extend Semi-StyleGAN to an image-to-image model, enabling semantic fine-grained image editing with better generalization to unseen images.
* We create two high-quality datasets with much higher resolution, better photorealism, and richer factors of variation than existing disentanglement datasets.

## 2 Background and Related Work

#### StyleGAN.

GANs are a family of generative models that have shown great success. Among various GANs, StyleGAN (Karras et al., 2019) is a state-of-the-art GAN architecture for unsupervised image generation, particularly for high-fidelity human faces of resolution up to 1024x1024. StyleGAN comprises a mapping network whose role is to map a latent vector $z$ to an intermediate space, which then controls the styles at each convolutional layer in the synthesis network with adaptive instance normalization (AdaIN) (Ulyanov et al., 2016; Huang & Belongie, 2017); a short code sketch of this operation is given below. StyleGAN also enables the separation of fine-grained and coarse-grained features.
For example, modifying the styles of low-resolution blocks affects only coarse-grained features (e.g. overall pose and presence of eyeglasses), while modifying the styles of high-resolution blocks affects only fine-grained features (e.g. color scheme and microstructure). These nice properties make it a potentially good candidate for disentanglement learning of high-resolution images.

#### Disentanglement learning.

In terms of unsupervised disentanglement learning, there exists much prior work based on either Variational Autoencoders (VAEs) (Higgins et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or GANs (Chen et al., 2016; Lin et al., 2019; Nguyen-Phuoc et al., 2019). The basic idea in disentangled VAEs is to encourage a factorization of the latent code by regularizing the total correlation (Chen et al., 2018). They have shown state-of-the-art performance on many standard disentanglement benchmarks (Matthey et al., 2017; Kim & Mnih, 2018). Many GAN-based models rely on maximizing the mutual information between the observation and the factor code, such as InfoGAN (Chen et al., 2016) and its variants (Lin et al., 2019). Other GANs learn disentanglement by designing a domain-specific generator architecture to add model-inductive bias, represented by HoloGAN (Nguyen-Phuoc et al., 2019). However, the use of 3D representations in HoloGAN may not scale up to higher-resolution images.

Another line of work in disentanglement learning is to use explicit supervision. (Kulkarni et al., 2015) applies a supervised training procedure to encourage each group of the graphics code to distinctly represent a specific factor of variation. (Bouchacourt et al., 2018) proposes the Multi-Level VAE (ML-VAE) to learn disentanglement from the supervision of group information. (Xiao et al., 2017) develops a supervised disentanglement algorithm called DNA-GAN using a swapping policy. (Narayanaswamy et al., 2017) proposes a semi-supervised VAE by employing a general graphical model structure in the encoder and decoder. However, it still remains unclear how the use of supervision impacts disentanglement learning. (Spurr et al., 2017) proposes ss-InfoGAN by adding a few labels to InfoGAN to learn semantically meaningful data representations. More recently, (Locatello et al., 2020) shows the benefits of adding limited supervision when learning disentangled representations in VAEs. We extend these results to GANs on more complex and higher-resolution images, and also quantify the impact of limited supervision on generator controllability.

#### Conditional GANs.

A class of conditional GANs conditions on class labels for better image generation quality, such as cGAN (Mirza & Osindero, 2014), AC-GAN (Odena et al., 2017), and the Projection Discriminator (Miyato & Koyama, 2018). The architectural component of Semi-StyleGAN makes it a special case of semi-supervised conditional GANs, as it conditions on partially available factor codes. However, the tasks and loss functions of Semi-StyleGAN differ greatly from those of these conditional GANs. First, these conditional GANs mainly focus on generating more realistic images, whereas Semi-StyleGAN focuses on two joint tasks: (i) disentangled representation learning and (ii) controllable generation. Second, to this end, both the mutual information loss and a new smoothness regularization are introduced in our loss functions for a better trade-off in disentanglement learning.
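As referenced in the StyleGAN paragraph above, the AdaIN operation can be sketched as follows. This is a minimal illustration, not the authors' implementation; in StyleGAN the per-channel scale and bias would come from a learned affine transform of the mapped latent.

```python
import torch

def adain(x, y_scale, y_bias, eps=1e-8):
    """Adaptive instance normalization: normalize each feature map of x
    per sample and per channel, then modulate it with style-derived
    scale and bias. x: (N, C, H, W); y_scale, y_bias: (N, C)."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True)
    x_norm = (x - mu) / (sigma + eps)
    return y_scale[:, :, None, None] * x_norm + y_bias[:, :, None, None]
```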
Another class of conditional GANs conditions on given images for image-to-image translation, where many works focus on multi-attribute editing; two representatives are StarGAN (Choi et al., 2018) and AttGAN (He et al., 2019). Although the proposed Semi-StyleGAN-fine in Section 5 shares similar objectives, there are many differences: StarGAN needs to condition on images of a specific domain, namely a set of images sharing the same attribute, which limits it to performing only discrete/binary attribute control, whereas our work is capable of continuous style manipulation. AttGAN requires an encoder-decoder structure in the generator, while ours does not. Instead, our work adds controllable fine-grained factors along with a super-resolution process, which is especially suitable for editing fine styles of high-resolution images with good generalization ability.

Figure 1: An illustration of disentanglement learning based on StyleGAN, where the mapping network in the generator conditions on the factor code and the encoder (which shares all layers in the discriminator except for the last layer) predicts its value.

## 3 Why StyleGAN for Disentanglement?

In unsupervised disentanglement learning, there exist many state-of-the-art VAE-based models, such as $\beta$-VAE (Higgins et al., 2017), FactorVAE (Kim & Mnih, 2018) and $\beta$-TCVAE (Chen et al., 2018). More recently, GAN-based models, such as InfoGAN-CR (Lin et al., 2019), have also achieved competitive performance by adding more tuning heuristics and regularization. Here, we consider Info-StyleGAN, which augments StyleGAN with a mutual information loss, and show that the structural advance of StyleGAN provides a stronger prior for disentanglement learning compared to the regularization used previously in VAEs or GANs.

As shown in Figure 1, the mapping network in the generator of Info-StyleGAN now conditions on a factor code, a vector representing each factor of variation in each dimension, by simply concatenating it with the latent code $z$. The output of the mapping network, called the conditional styles, will modulate each block in the synthesis network using AdaIN. Similar to InfoGAN, the encoder in Info-StyleGAN shares all the network layers except the last one with the discriminator and predicts the value of the factor code. Thus, we use $G/D/E$ to represent the generator, discriminator and encoder, respectively. The mutual information loss of InfoGAN can be approximated by an unsupervised code reconstruction loss (Chen et al., 2016), which is

$\mathcal{L}_{\text{unsup}}=\sum\nolimits_{c\sim\mathcal{C},z\sim p_{z}}\left\|E(G(c,z))-c\right\|_{2}$ (1)

where $\mathcal{C}$ denotes the set of all factor codes, and $p_{z}$ denotes the prior distribution of the latent code $z$. The respective loss functions for $G$ and $(D,E)$ are given by

$\mathcal{L}^{(G)}=\mathcal{L}_{\text{GAN}}+\gamma\mathcal{L}_{\text{unsup}}$ and $\mathcal{L}^{(D,E)}=-\mathcal{L}_{\text{GAN}}+\gamma\mathcal{L}_{\text{unsup}}$ (2)

where we keep the GAN loss function $\mathcal{L}_{\text{GAN}}$ the same as in (Karras et al., 2019). The hyperparameter $\gamma$ controls a trade-off between image realism and disentanglement quality; a short code sketch of the loss in Eq. (1) follows below.

### 3.1 Experimental Setup

#### Datasets and evaluation metrics.

We consider two datasets to compare Info-StyleGAN and state-of-the-art disentanglement models: dSprites (Matthey et al., 2017) and our proposed Isaac3D (see Section 4.1 for details).
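As referenced above, the unsupervised code-reconstruction loss of Eq. (1) can be sketched as a Monte Carlo estimate over a minibatch. This is a minimal sketch, not the released implementation; it assumes callable PyTorch modules `G` and `E` and a uniform prior over factor codes.

```python
import torch

def unsup_code_loss(G, E, batch_size, code_dim, z_dim):
    """L_unsup of Eq. (1): sample a factor code and a latent code,
    generate an image, and penalize the L2 error of the code
    recovered by the encoder."""
    c = torch.rand(batch_size, code_dim)   # factor code c ~ C (uniform assumed)
    z = torch.randn(batch_size, z_dim)     # latent code z ~ p_z (standard normal)
    x = G(c, z)                            # generated image
    return torch.linalg.vector_norm(E(x) - c, dim=1).mean()
```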
dSprites is a commonly used dataset in disentanglement learning, with 737,280 images, each of resolution 64x64. For experiments on dSprites, we use both the Factor score (Kim & Mnih, 2018) and the Mutual Information Gap (MIG) (Chen et al., 2018) to evaluate disentanglement. For experiments on Isaac3D, we first downscale the resolution of each image to 128x128 because VAEs have difficulties in generating higher-resolution images. We use MIG to evaluate disentanglement quality and the Frechet Inception Distance (FID) (Heusel et al., 2017) to evaluate image quality.

#### Experimental protocol.

We consider $\beta$-VAE, FactorVAE, $\beta$-TCVAE and InfoGAN-CR for comparison, where all the models are trained based on the implementation in (Locatello et al., 2019). For VAEs, we set $\beta=6$ for $\beta$-VAE, $\gamma=30$ for FactorVAE and $\beta=8$ for $\beta$-TCVAE after a grid search over different hyperparameters. For InfoGAN-CR, we use the default hyperparameters on dSprites from the original paper, and perform a grid search over different hyperparameters on Isaac3D to report the best results. For Info-StyleGAN, we keep $\gamma=10$ on dSprites and $\gamma=1$ on Isaac3D. We also keep the progressive training (Karras et al., 2017), as we show that it helps improve disentanglement in Appendix B.1. Because Info-StyleGAN and state-of-the-art disentanglement models have largely different network architectures, for a fairer comparison we also try to keep their network sizes the same. See Appendix B.2 for how we decrease the network size of Info-StyleGAN (called Info-StyleGAN*) to match those of previous models.

Methods | # Params | Factor Score $\uparrow$ | MIG $\uparrow$
---|---|---|---
$\beta$-VAE | 0.69M | 0.713 $\pm$ 0.095 | 0.132 $\pm$ 0.031
FactorVAE | 5.70M | 0.764 $\pm$ 0.098 | 0.175 $\pm$ 0.057
$\beta$-TCVAE | 0.69M | 0.731 $\pm$ 0.097 | 0.174 $\pm$ 0.046
InfoGAN-CR | 0.76M | 0.853 $\pm$ 0.046 | 0.270 $\pm$ 0.034
Info-StyleGAN* | 0.74M | 0.769 $\pm$ 0.144 | 0.274 $\pm$ 0.096
Info-StyleGAN | 47.89M | 0.840 $\pm$ 0.090 | 0.290 $\pm$ 0.098

(a) dSprites with resolution 64x64

Methods | # Params | FID $\downarrow$ | MIG $\uparrow$
---|---|---|---
$\beta$-VAE | 1.91M | 122.6 $\pm$ 2.0 | 0.231 $\pm$ 0.068
FactorVAE | 6.93M | 305.8 $\pm$ 142.1 | 0.245 $\pm$ 0.034
$\beta$-TCVAE | 1.91M | 155.4 $\pm$ 13.6 | 0.216 $\pm$ 0.074
InfoGAN-CR | 3.29M | 80.72 $\pm$ 30.79 | 0.342 $\pm$ 0.139
Info-StyleGAN* | 3.44M | 8.10 $\pm$ 2.25 | 0.404 $\pm$ 0.085
Info-StyleGAN | 49.05M | 2.19 $\pm$ 0.48 | 0.328 $\pm$ 0.057

(b) Isaac3D with resolution 128x128

Table 1: Comparison of Info-StyleGAN and state-of-the-art disentanglement models on dSprites and (downscaled) Isaac3D. Note that the scores of the VAEs are obtained based on the implementation in (Locatello et al., 2019). Info-StyleGAN* represents the smaller version of Info-StyleGAN, whose number of parameters (i.e., # Params) is similar to that of previous models.

### 3.2 Key Results

Table 1 shows that Info-StyleGAN and its variant with a smaller network size, termed Info-StyleGAN*, consistently outperform state-of-the-art VAE-based methods by a large margin on both dSprites and Isaac3D. Meanwhile, Info-StyleGAN achieves competitive or even better disentanglement performance than the strong GAN baseline.
Although unsupervised disentanglement learning is impossible without supervision or inductive bias (Locatello et al., 2019), this result reveals that the network structural improvement of StyleGAN provides a stronger prior for disentanglement learning compared to the various explicit loss regularizations in disentangled VAEs or InfoGAN-CR. Besides, we observe that previous methods have much higher FID scores on (downscaled) Isaac3D, consistent with their poor generated samples shown in Appendix B.3. We have also increased the capacity of the VAEs, but the improvement in image quality still cannot close the gap with Info-StyleGAN, as shown in Appendix B.4. These results show that previous disentanglement methods have difficulties on more diverse and complex data, such as Isaac3D, while StyleGAN does not.

## 4 Semi-StyleGAN

As pointed out by (Locatello et al., 2019), unsupervised disentanglement methods are formally non-identifiable (Hyvärinen & Pajunen, 1999), so the impact of limited supervision on both learning disentangled representations and controllable generation, which has rarely been explored, becomes crucial. In this section, we propose to add semi-supervision into Info-StyleGAN to get a semi-supervised disentanglement model – Semi-StyleGAN. Based on Semi-StyleGAN, we systematically analyze the role of limited supervision on both synthetic and real data.

A naive way of applying (semi-)supervision is to add a supervised code reconstruction term for the small amount of labeled data into Eq. (2), similar to (Kingma et al., 2014; Odena et al., 2017; Locatello et al., 2020). That is,

$\mathcal{L}_{\text{sup}}=\sum\nolimits_{(x,c)\sim\mathcal{J}}\|E(x)-c\|_{2}$ (3)

where $\mathcal{J}$ represents the set of labeled pairs of real image and factor code. When considering limited supervision, we assume the cardinality of the labeled set $\mathcal{J}$ satisfies $|\mathcal{J}|\ll|\mathcal{X}|$, with $\mathcal{X}$ being the set of all real images. Thus, the semi-supervised loss functions become

$\mathcal{L}^{(G)}=\mathcal{L}_{\text{GAN}}+\gamma_{G}\mathcal{L}_{\text{unsup}}$ and $\mathcal{L}^{(D,E)}=-\mathcal{L}_{\text{GAN}}+\gamma_{E}\mathcal{L}_{\text{unsup}}+\beta\mathcal{L}_{\text{sup}}$ (4)

where $\beta$ is the weight of the supervised term $\mathcal{L}_{\text{sup}}$, and we use different $\gamma$'s (denoted by $\gamma_{G}$ and $\gamma_{E}$) to separately represent the weight of the unsupervised term in $\mathcal{L}^{(G)}$ and $\mathcal{L}^{(D,E)}$. As we show later, $\gamma_{G}$ and $\gamma_{E}$ play an important role in controlling the trade-off between encoder and generator disentanglement. Note that the supervised term $\mathcal{L}_{\text{sup}}$ does not update $G$ directly, as shown in Eq. (4).

While semi-supervised learning for image recognition is an active research area, many of its algorithms may not be directly applied to disentanglement learning. Take consistency regularization (Sajjadi et al., 2016) as an example: commonly used data perturbations, such as image rotation and color randomization, will inevitably cause inconsistency if the considered factors of variation include object rotation or color. In contrast, encouraging smoothness in the latent space of GANs may help improve disentanglement (Karras et al., 2019). Thus, we propose to explicitly add a smoothness regularization by using the idea of MixUp (Zhang et al., 2018; Berthelot et al., 2019).
Formally, given a labeled observation-code pair $(x,c)\sim\mathcal{J}$ and a generated pair $(x^{\prime},c^{\prime})$ where $x^{\prime}=G(z,c^{\prime})$, we get a set of mixed observation-code pairs $\mathcal{M}=\left\{(\tilde{x},\tilde{c})\right\}$ by

$\displaystyle\begin{split}\lambda\sim&\text{Beta}(\xi,\xi),\;\;\lambda^{\prime}=\max(\lambda,1-\lambda)\\ \tilde{x}&=\lambda^{\prime}x+(1-\lambda^{\prime}){x}^{\prime}\\ \tilde{c}&=\lambda^{\prime}c+(1-\lambda^{\prime})c^{\prime}\end{split}$ (5)

where $\xi$ is a hyperparameter. Thus, the smoothness regularization term is

$\displaystyle\mathcal{L}_{\text{sr}}=\sum\nolimits_{(x,c)\sim\mathcal{M}}\|E(x)-c\|_{2}$ (6)

and the new semi-supervised loss functions with smoothness regularization become

$\displaystyle\begin{split}\mathcal{L}^{(G)}=&\mathcal{L}_{\text{GAN}}+\gamma_{G}\mathcal{L}_{\text{unsup}}+\alpha\mathcal{L}_{\text{sr}}\\ \mathcal{L}^{(D,E)}=&-\mathcal{L}_{\text{GAN}}+\gamma_{E}\mathcal{L}_{\text{unsup}}+\beta\mathcal{L}_{\text{sup}}+\alpha\mathcal{L}_{\text{sr}}\end{split}$ (7)

where $\alpha$ is the weight of the smoothness term $\mathcal{L}_{\text{sr}}$. Different from (Zhang et al., 2018; Berthelot et al., 2019), which mix labeled and unlabeled real data, the MixUp in (5) is performed between labeled real data and generated data. This way, it not only encourages smooth behaviors of both the generator and encoder, but also takes advantage of the abundant fake data for disentanglement.

### 4.1 New Datasets

Current disentanglement datasets, such as dSprites (Matthey et al., 2017), 3DShapes (Kim & Mnih, 2018) and MPI3D (Gondal et al., 2019), are of low resolution and mostly lack photorealism. We create two new datasets – Falcor3D and Isaac3D, with much higher resolution, better photorealism and richer factors of variation, as shown in Table 2.

#### Falcor3D.

It contains 233,280 images, each with a resolution of 1024x1024. This dataset is based on the 3D scene of a living room, where we move the camera positions and change the lighting conditions. Each image is paired with a ground-truth factor code, consisting of 7 factors of variation: lighting intensity (5), lighting $x$-dir (6), lighting $y$-dir (6), lighting $z$-dir (6), camera $x$-pos (6), camera $y$-pos (6), and camera $z$-pos (6). The number $m$ behind each factor indicates that the factor has $m$ possible values, uniformly sampled from $[0,1]$. For example, “lighting $x$-dir (6)” represents the lighting direction moving along the $x$-axis and “camera $z$-pos (6)” denotes the camera position moving along the $z$-axis. Both factors have 6 possible values.

#### Isaac3D.

It contains 737,280 images, each with a resolution of 512x512. This dataset is based on the 3D scene of a kitchen, where we move the camera positions and vary the lighting conditions. There is a robotic arm inside, grasping an object. The robotic arm has two degrees of freedom: $x$-movement (horizontal) and $y$-movement (vertical). The attached object could change its shape, scale or color. All objects in the 3D scene are properly textured for better photorealism. Similarly, each image is paired with a ground-truth factor code, consisting of 9 factors of variation: lighting intensity (4), lighting $y$-dir (6), object color (4), wall color (4), object shape (3), object scale (4), camera height (4), robot $x$-movement (8), and robot $y$-movement (5). The number $m$ behind each factor indicates that it has $m$ possible values, uniformly sampled from $[0,1]$.
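As a sanity check, the dataset sizes follow directly from the factor cardinalities listed above, since each dataset enumerates the full Cartesian product of factor values. A minimal sketch (plain Python; the even spacing of factor values over $[0,1]$ is an assumption consistent with the uniform sampling described above):

```python
from math import prod

# Factor cardinalities as listed above; every combination appears exactly once.
falcor3d = {"lighting intensity": 5, "lighting x-dir": 6, "lighting y-dir": 6,
            "lighting z-dir": 6, "camera x-pos": 6, "camera y-pos": 6,
            "camera z-pos": 6}
isaac3d = {"lighting intensity": 4, "lighting y-dir": 6, "object color": 4,
           "wall color": 4, "object shape": 3, "object scale": 4,
           "camera height": 4, "robot x-movement": 8, "robot y-movement": 5}

assert prod(falcor3d.values()) == 233280   # number of Falcor3D images
assert prod(isaac3d.values()) == 737280    # number of Isaac3D images

def factor_values(m):
    """m values of one factor, evenly covering [0, 1] (assumed spacing)."""
    return [i / (m - 1) for i in range(m)]
```

That the products match the stated image counts confirms that every factor combination appears exactly once, which Table 2 summarizes next.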
Datasets | # Images | # Factors | Resolution | 3D
---|---|---|---|---
dSprites | 737,280 | 5 | 64x64 | ✗
Noisy dSprites | 737,280 | 7 | 64x64 | ✗
Scream dSprites | 737,280 | 7 | 64x64 | ✗
SmallNORB | 48,600 | 5 | 128x128 | ✓
Cars3D | 17,568 | 3 | 64x64 | ✓
3DShapes | 480,000 | 7 | 64x64 | ✓
MPI3D | 640,800 | 7 | 64x64 | ✓
Falcor3D | 233,280 | 7 | 1024x1024 | ✓
Isaac3D | 737,280 | 9 | 512x512 | ✓

Table 2: Summary of the proposed two datasets, compared with commonly used datasets (Gondal et al., 2019). We can see that the two proposed datasets – Falcor3D and Isaac3D – have much higher resolutions than previous datasets, along with the largest numbers of factors. More importantly, the proposed datasets are of much higher photorealism, as shown in Appendix A.

(a) Isaac3D (b) Falcor3D

Figure 2: Semi-StyleGAN with the default setting $\gamma_{G}=\beta=\gamma$, $\gamma_{E}=0$, $\alpha=1$ where $\gamma\in\{1,10\}$ on (a) Isaac3D and (b) Falcor3D. We vary the portion of labeled data $\eta$ to show the impact of semi-supervision by comparing with Info-StyleGAN (i.e., the unsupervised baseline) and the fully-supervised one ($\eta=1$). Using only 0.25$\sim$2.5% of labeled data achieves near fully-supervised disentanglement.

Methods | MIG $\uparrow$ | L2 $\downarrow$ | MIG-gen $\uparrow$ | L2-gen $\downarrow$
---|---|---|---|---
Encoder-only | 0.731 $\pm$ 0.009 | 0.379 $\pm$ 0.002 | - | -
Encoder-only w/ MixUp | 0.834 $\pm$ 0.004 | 0.279 $\pm$ 0.005 | - | -
Semi-StyleGAN | 0.812 $\pm$ 0.020 | 0.301 $\pm$ 0.012 | 0.965 $\pm$ 0.014 | 0.052 $\pm$ 0.016
+ Remove smoothness consistency | 0.765 $\pm$ 0.042 | 0.347 $\pm$ 0.019 | 0.945 $\pm$ 0.011 | 0.072 $\pm$ 0.008
+ Increase the $\mathcal{L}_{\text{unsup}}$ term in $E$ ($\gamma_{E}=10$) | 0.880 $\pm$ 0.120 | 0.225 $\pm$ 0.222 | 0.888 $\pm$ 0.087 | 0.283 $\pm$ 0.247
+ Remove the $\mathcal{L}_{\text{unsup}}$ term in $G$ | 0.719 $\pm$ 0.014 | 0.490 $\pm$ 0.024 | 0.130 $\pm$ 0.054 | 1.514 $\pm$ 0.003

(a) Isaac3D ($\eta$ = 0.1%)

Methods | MIG $\uparrow$ | L2 $\downarrow$ | MIG-gen $\uparrow$ | L2-gen $\downarrow$
---|---|---|---|---
Encoder-only | 0.690 $\pm$ 0.007 | 0.271 $\pm$ 0.002 | - | -
Encoder-only w/ MixUp | 0.701 $\pm$ 0.005 | 0.265 $\pm$ 0.003 | - | -
Semi-StyleGAN | 0.704 $\pm$ 0.007 | 0.285 $\pm$ 0.002 | 0.754 $\pm$ 0.017 | 0.205 $\pm$ 0.022
+ Remove smoothness consistency | 0.674 $\pm$ 0.011 | 0.296 $\pm$ 0.017 | 0.632 $\pm$ 0.058 | 0.303 $\pm$ 0.088
+ Increase the $\mathcal{L}_{\text{unsup}}$ term in $E$ ($\gamma_{E}=10$) | 0.643 $\pm$ 0.035 | 0.343 $\pm$ 0.016 | 0.636 $\pm$ 0.065 | 0.346 $\pm$ 0.070
+ Remove the $\mathcal{L}_{\text{unsup}}$ term in $G$ | 0.680 $\pm$ 0.016 | 0.300 $\pm$ 0.010 | 0.034 $\pm$ 0.028 | 1.096 $\pm$ 0.086

(b) Falcor3D ($\eta$ = 0.5%)

Table 3: Ablation studies of Semi-StyleGAN on (a) Isaac3D and (b) Falcor3D, where the default setting is $\gamma_{G}=\beta=10$, $\gamma_{E}=0$, $\alpha=1$. “Encoder-only” means we train the encoder by minimizing the L2 score with the labeled data only, a supervised baseline for the encoder disentanglement. “Encoder-only w/ MixUp” means we train the encoder by using MixUp (Zhang et al., 2018), a semi-supervised baseline for the encoder disentanglement. We set $\eta=0.1\%$ on Isaac3D and $\eta=0.5\%$ on Falcor3D, respectively.

### 4.2 New Metrics

Many metrics have been proposed for evaluating disentanglement, such as Factor score (Kim & Mnih, 2018), MIG (Chen et al., 2018), DCI score (Ridgeway & Mozer, 2018), and SAP score (Kumar et al., 2017).
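To make concrete how such scores are typically computed, here is a minimal, generic MIG sketch under the usual recipe: discretize each latent dimension, estimate mutual information against each discrete ground-truth factor, and average the normalized gap between the two most informative latents per factor. This is a hedged illustration (NumPy and scikit-learn assumed), not the exact evaluation code used in the experiments here:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig(latents, factors, n_bins=20):
    """latents: (N, D) encoder outputs; factors: (N, K) discrete ground-truth codes."""
    # Discretize each latent dimension into equal-width bins.
    binned = np.stack([np.digitize(z, np.histogram(z, n_bins)[1][:-1])
                       for z in latents.T])                         # (D, N)
    gaps = []
    for k in range(factors.shape[1]):
        v = factors[:, k]
        mi = np.array([mutual_info_score(v, zb) for zb in binned])  # I(z_j; v_k)
        h = mutual_info_score(v, v)                                 # entropy H(v_k)
        top2 = np.sort(mi)[-2:]
        gaps.append((top2[1] - top2[0]) / h)  # normalized gap for factor k
    return float(np.mean(gaps))
```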
See the prior work (Locatello et al., 2019) for more implementation details. However, they all have some inherent limitations in quantifying semi-supervised disentanglement methods. First, these metrics are designed for unsupervised disentanglement methods, which are non-identifiable (Locatello et al., 2019). But with supervision, the model becomes identifiable, and thus we need to evaluate the semantic meaning of the learned representations as well. A simple solution here is to measure the average $L2$ distance between the ground-truth factor code and the prediction of its paired observation using the considered encoder, termed the $L2$ score. Second, these metrics only evaluate the encoder disentanglement while ignoring the generator controllability, another important characteristic of disentanglement learning. However, there may exist a trade-off between the encoder and generator disentanglement. That is, a high MIG score does not imply a good model in terms of controllable generation.

Therefore, we propose new metrics to quantify the generator controllability. Specifically, given a generator $G$ to be evaluated and an oracle encoder $E_{\text{oracle}}$ that can perfectly predict the factor code, we first sample $N$ generated observation-code pairs $({x^{\prime}}^{(n)},{c^{\prime}}^{(n)})$ where ${x^{\prime}}^{(n)}=G(z,{c^{\prime}}^{(n)})$. We then pass the generated sample ${x^{\prime}}^{(n)}$ into $E_{\text{oracle}}$ to get its factor code prediction $\hat{c}^{\prime(n)}=E_{\text{oracle}}(x^{\prime(n)})$. Accordingly, we measure the correlation between $\hat{c}^{\prime(n)}$ and ${c^{\prime}}^{(n)}$ in the same way as prior disentanglement metrics. In particular, we define an MIG-like metric, called MIG-gen, to evaluate the generator,

$\displaystyle\text{MIG-gen}=\frac{1}{NK}\sum\limits_{n=0}^{N-1}\sum\limits_{k=0}^{K-1}\frac{1}{H({c^{\prime}}^{(n)}_{k})}\left(I(\hat{c}^{\prime(n)}_{j_{k}};{c^{\prime}}^{(n)}_{k})-\max\limits_{j\neq j_{k}}I(\hat{c}^{\prime(n)}_{j};{c^{\prime}}^{(n)}_{k})\right)$

where $K$ is the length of the factor code, $H(\cdot)$ and $I(\cdot;\cdot)$ denote the entropy and mutual information, respectively, and $j_{k}=\arg\max\nolimits_{j}I(\hat{c}^{\prime(n)}_{j},{c^{\prime}}^{(n)}_{k})$. Similarly, we also introduce L2-gen to measure the semantic correctness of the generator,

$\text{L2-gen}=\frac{1}{N}\sum\limits_{n=0}^{N-1}\|E_{\text{oracle}}({x}^{\prime(n)})-{c^{\prime}}^{(n)}\|_{2}$

Intuitively, if the oracle encoder is perfect for every ground-truth observation-code pair, any mismatch between its prediction and the corresponding factor code should be attributed to the generator instead. Thus, both MIG-gen and L2-gen can effectively measure the generator controllability. To obtain an oracle encoder for each dataset, such as the proposed Falcor3D and Isaac3D, we pre-train a separate encoder network by minimizing the $L2$ score with all the ground-truth observation-code pairs.

### 4.3 Experimental Setup

#### Datasets and evaluation metrics.

To test the proposed Semi-StyleGAN on complex high-resolution images that many prior works have difficulty with, we focus on three datasets: Isaac3D with resolution 512x512, Falcor3D with resolution 512x512 and CelebA with resolution 256x256. For the proposed Isaac3D and Falcor3D, we use MIG and L2 to measure the encoder disentanglement, and MIG-gen and L2-gen to measure the generator controllability.
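For concreteness, the L2-gen evaluation loop of Section 4.2 can be sketched as follows. This is a minimal PyTorch-style sketch under assumed interfaces: `G`, `E_oracle`, `sample_z`, and `sample_code` are hypothetical handles for the trained generator, the pre-trained oracle encoder, and samplers of the latent $z$ and factor code:

```python
import torch

@torch.no_grad()
def l2_gen(G, E_oracle, sample_z, sample_code, n_samples=1000):
    """Average ||E_oracle(G(z, c')) - c'||_2 over generated pairs (hypothetical API)."""
    total = 0.0
    for _ in range(n_samples):
        c = sample_code()            # factor code c', shape (1, K)
        z = sample_z()               # latent noise, shape (1, z_dim)
        x = G(z, c)                  # image generated conditioned on c'
        c_hat = E_oracle(x)          # oracle prediction of the factor code
        total += torch.norm(c_hat - c, p=2).item()
    return total / n_samples
```

MIG-gen is computed analogously, replacing the L2 distance with the mutual-information gap defined above.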
For experiments on CelebA, we focus on the latent traversals to qualitatively measure the disentanglement quality.

#### Experimental protocol.

Before training, we first get the labeled set $\mathcal{J}$ by randomly sampling observation-code pairs from each dataset with a probability $\eta$. All the remaining observations form the unlabeled set. The value of $\eta\in[0,1]$ controls the portion of labeled data during training. In particular, Semi-StyleGAN becomes a fully-supervised method if $\eta=1$, and reduces to Info-StyleGAN if $\eta=0$. In experiments, we set $\xi=0.75$ in Eq. (5) to be the same as in (Berthelot et al., 2019). For the hyperparameters $\{\gamma_{G},\gamma_{E},\beta,\alpha\}$, we find that setting $\gamma_{G}=\beta=\gamma$, $\gamma_{E}=0$, $\alpha=1$ works well across different datasets, where we vary $\gamma\in\{1,10\}$. Thus, unless stated otherwise, we use the above setting by default in Semi-StyleGAN.

Our experiments mainly include four aspects. (i) We first vary the supervision rate $\eta$ to show the impact of limited supervision. (ii) Given the supervision rate $\eta$, we train the encoder alone with the labeled data only (called Encoder-only) and with MixUp (called Encoder-only w/ MixUp), respectively, as supervised and semi-supervised baselines for the encoder disentanglement. (iii) For ablation studies, we vary $\gamma_{G},\gamma_{E}$ to reveal the trade-off between the encoder and generator disentanglement, and vary $\alpha$ to show the impact of smoothness regularization. (iv) We show latent traversal results on both synthetic and real datasets.

### 4.4 Key Results

#### Impact of limited supervision.

Figure 2 shows the quantitative results of varying $\eta$ in Semi-StyleGAN on Isaac3D and Falcor3D, where we consider two cases of $\gamma\in\{1,10\}$. For Isaac3D, using only 0.25% of the labeled data achieves performance very close to the fully-supervised one ($\eta=1$) in terms of both the encoder and generator disentanglement. Similarly, for Falcor3D, using only 2.5% of the labeled data also achieves near fully-supervised disentanglement. It means that adding a very small amount of labeled data (0.25%$\sim$2.5%) into the training dataset could significantly benefit disentanglement learning with Semi-StyleGAN. Besides, we can see that the generator disentanglement is more sensitive to the choice of $\gamma$ than the encoder disentanglement, particularly on Falcor3D.

(a) Semi-StyleGAN on Isaac3D with 0.5% of labeled data (b) Semi-StyleGAN on Falcor3D with 1% of labeled data

Figure 3: Latent traversal of Semi-StyleGAN on Isaac3D and Falcor3D where $\gamma=10$. Images in the first column (marked by a red box) are randomly sampled real images and the remaining images in each row are their interpolations, obtained by uniformly varying the given factor from 0 to 1. See Appendix C.1 and C.2 for more results.

#### Ablation studies and comparison with baselines.

First, Table 3 shows there may exist a crucial trade-off between the encoder and generator disentanglement, governed by the interplay between the unsupervised and supervised loss terms. For example, we can see that Semi-StyleGAN gets the best generator disentanglement by slightly sacrificing the encoder disentanglement, in particular on Isaac3D. Second, by removing the smoothness consistency, we can see a large performance drop in terms of all metrics on both datasets, which demonstrates the effectiveness of the smoothness regularization in Semi-StyleGAN.
Besides, there exists a trade-off between the generator controllability and encoder disentanglement. For example, increasing the $\mathcal{L}_{\text{unsup}}$ term in $E$ ($\gamma_{E}=10$) worsens the generator controllability while improving the encoder disentanglement on Isaac3D. When removing the $\mathcal{L}_{\text{unsup}}$ term in $G$ ($\gamma_{G}=0$), we can still get decent encoder disentanglement even though the generator controllability fails completely. This trade-off also depends on the datasets, as evidenced by the different behaviors on Isaac3D and Falcor3D. Finally, Table 3 shows the results of comparing Semi-StyleGAN with the supervised and semi-supervised baselines. We can see that Semi-StyleGAN significantly outperforms the supervised baseline (i.e., Encoder-only) on Isaac3D, and also gets a better MIG score on Falcor3D. With good hyperparameters that weigh more on the encoder side, we can also achieve on-par or better encoder disentanglement than the semi-supervised baseline (i.e., Encoder-only w/ MixUp).

#### Latent traversal on synthetic and real data.

Qualitatively, we show the latent traversal results on both synthetic and real data in Figures 3 and 4. When only 0.5% or 1% of the labeled data is available, each factor in the interpolated images changes smoothly without affecting other factors. For Isaac3D and Falcor3D, all the interpolated images visually look the same as their reference real image except for the considered factor, verifying the semantic correctness of Semi-StyleGAN with very limited supervision. For CelebA, we use a higher resolution than the prior work (Chen et al., 2016; Higgins et al., 2017; Nguyen-Phuoc et al., 2019), and achieve visually better disentanglement quality together with higher image quality. It means that the insights gained on the synthetic datasets also apply to the real domain. With very limited supervision, Semi-StyleGAN can achieve good disentanglement on real data.

Figure 4: Latent traversal of Semi-StyleGAN on CelebA with resolution 256x256 by using 0.5% of the labeled data, where we use $\gamma=1$ and disentangle all 40 binary attributes. See Appendix C.3 for the results of other attributes.

## 5 An Extension for Better Generalization

Although Semi-StyleGAN performs well on both synthetic and real data, it cannot generalize to unseen data whose high-level content does not match the training data but whose fine-grained styles might. For instance, Semi-StyleGAN trained on Isaac3D cannot generate an image in which the robot arm stands on the right (instead of in the middle as in the training data). In this section, we design a new GAN architecture that extends Semi-StyleGAN to an image-to-image model, which we call Semi-StyleGAN-fine. This model achieves better generalization to unseen data.

Inspired by the observation that lower-resolution blocks in the StyleGAN generator learn coarse-grained features while its high-resolution blocks account for fine-grained styles, we change the StyleGAN generator to not contain lower-resolution blocks. As shown in Figure 5, the generator instead takes the real image as one of its inputs by downscaling it to a lower resolution $\phi$ (e.g., $\phi=32$ in Figure 5). Accordingly, it generates the high-resolution image by only modulating the (fine-grained) factor code into higher-resolution blocks. Also, the encoder predicts the value of the factor code from the block with resolution $\phi$, instead of the last output block.
The intuition is that lower-resolution blocks in the encoder also have less relationship with fine-grained styles, and thus the code prediction is better served by using its higher-resolution blocks only. This way, the generator in Semi-StyleGAN-fine performs semi-supervised controllable fine-grained image editing, while its encoder infers the fine-grained factor code that the generator has used. Finally, the loss functions remain the same as in Eq. (7).

Figure 5: An illustration of Semi-StyleGAN-fine, where we downsample the real image to 32x32 resolution and replace the lower-resolution blocks (4x4 - 32x32) in the generator by a new input block. Also, the encoder predicts the value of the (fine-grained) factor code from the 32x32 block instead.

### 5.1 Experimental Setup

We mainly focus on the latent traversals of Semi-StyleGAN-fine on Isaac3D and CelebA to qualitatively test its generalization ability. At training time, we train Semi-StyleGAN-fine on Isaac3D and CelebA, respectively. At test time, we apply novel test images (with different high-level content) as inputs to evaluate the proposed method. For Isaac3D, the novel test images are given by: (i) shifting the robot position to the right side (instead of right in the middle, as in all the training data), and (ii) attaching an unseen object to the robot arm. For CelebA, we simply download some new face images from the Internet, followed by aligning and cropping them to 256x256.

Figure 6: Generalized latent traversal results of Semi-StyleGAN-fine trained on Isaac3D with 1% of labeled data, where we set $\phi=64$ and interpolate the shown fine styles. In the test images, we shift the position of the robot arm to the right side, and also attach an unseen object (i.e., a tetrahedron) to it. Figure 7: Generalized latent traversal results of Semi-StyleGAN-fine trained on CelebA with 1% of labeled data, where we set $\phi=64$ and control the shown fine styles.

### 5.2 Key Results

Figures 6 and 7 show the results of interpolating fine-grained factors in the Isaac3D dataset and the CelebA dataset, respectively, with different test images, where we set $\eta=0.01$ and $\phi=64$. We can see that each considered fine-grained factor in both datasets changes smoothly during its interpolation without affecting other factors, implying good generalized disentanglement. Particularly, the interpolations of the Isaac3D test images all maintain the new robot position and new object shape (i.e., a tetrahedron) with relatively high image quality. See Appendix D for test images with another novel object shape. The interpolations of the CelebA test images also keep the same identities and other coarse-grained features as the given input images. It is worth noting that the good generalized disentanglement of Semi-StyleGAN-fine has been achieved by using only 1% of labeled data, and further increasing the supervision rate $\eta$ does not visually improve performance. Therefore, these results demonstrate the ability of Semi-StyleGAN-fine in semantic fine-grained image editing with limited supervision that generalizes well to unseen novel images.

## 6 Conclusions

In this paper, we designed new loss functions and architectures based on StyleGAN for semi-supervised high-resolution disentanglement learning. We first showed that Info-StyleGAN largely outperforms most state-of-the-art unsupervised disentanglement methods, which justified the advantages of the StyleGAN architecture in disentanglement learning.
We then proposed Semi-StyleGAN, which achieved near fully-supervised disentanglement with limited supervision (0.25%$\sim$2.5%) on complex high-resolution synthetic and real data. We also proposed new metrics to quantify the generator controllability. To the best of our knowledge, we were the first to reveal that there exists a trade-off between learning disentangled representations and controllable generation. Besides, we extended Semi-StyleGAN to do semantic fine-grained image editing with better generalization to unseen images. Finally, we created two high-quality synthetic datasets to serve as new disentanglement benchmarks.

In the future, we want to apply Semi-StyleGAN to even larger-scale high-resolution real datasets. We are aware of the gender and racial biases in the CelebA dataset (Kärkkäinen & Joo, 2019), and thus hope to create better datasets and find other ways to address the algorithmic bias. Besides, it would be interesting to extend Semi-StyleGAN to the weakly-supervised scenario, where the factors of variation are only partially observed.

## Acknowledgement

Thanks to the anonymous reviewers for useful comments. We also thank Zhiding Yu, Anuj Pahuja, Yaosheng Fu, Tan Minh Nguyen and many others at Nvidia for helpful discussions on this work. WN and ABP were supported by IARPA via DoI/IBC contract D16PC00003.

## References

* Achille & Soatto (2018) Achille, A. and Soatto, S. Emergence of invariance and disentanglement in deep representations. _The Journal of Machine Learning Research_, 19(1):1947–1980, 2018.
* Berthelot et al. (2019) Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C. A. Mixmatch: A holistic approach to semi-supervised learning. In _Advances in Neural Information Processing Systems_, 2019.
* Bouchacourt et al. (2018) Bouchacourt, D., Tomioka, R., and Nowozin, S. Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In _Thirty-Second AAAI Conference on Artificial Intelligence_, 2018.
* Chen et al. (2018) Chen, T. Q., Li, X., Grosse, R. B., and Duvenaud, D. K. Isolating sources of disentanglement in variational autoencoders. In _Advances in Neural Information Processing Systems_, 2018.
* Chen et al. (2016) Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In _Advances in Neural Information Processing Systems_, 2016.
* Choi et al. (2018) Choi, Y., Choi, M., Kim, M., Ha, J.-W., Kim, S., and Choo, J. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2018.
* Gondal et al. (2019) Gondal, M. W., Wüthrich, M., Miladinović, D., Locatello, F., Breidt, M., Volchkov, V., Akpo, J., Bachem, O., Schölkopf, B., and Bauer, S. On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. _arXiv preprint arXiv:1906.03292_, 2019.
* He et al. (2019) He, Z., Zuo, W., Kan, M., Shan, S., and Chen, X. Attgan: Facial attribute editing by only changing what you want. _IEEE Transactions on Image Processing_, 28(11):5464–5478, 2019.
* Heusel et al. (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _Advances in Neural Information Processing Systems_, 2017.
* Higgins et al. (2017) Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-vae: Learning basic visual concepts with a constrained variational framework. In _ICLR_, 2017.
* Huang & Belongie (2017) Huang, X. and Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In _Proceedings of the IEEE International Conference on Computer Vision_, 2017.
* Hyvärinen & Pajunen (1999) Hyvärinen, A. and Pajunen, P. Nonlinear independent component analysis: Existence and uniqueness results. _Neural Networks_, 12(3):429–439, 1999.
* Kärkkäinen & Joo (2019) Kärkkäinen, K. and Joo, J. Fairface: Face attribute dataset for balanced race, gender, and age. _arXiv preprint arXiv:1908.04913_, 2019.
* Karras et al. (2017) Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progressive growing of gans for improved quality, stability, and variation. _arXiv preprint arXiv:1710.10196_, 2017.
* Karras et al. (2019) Karras, T., Laine, S., and Aila, T. A style-based generator architecture for generative adversarial networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2019.
* Kim & Mnih (2018) Kim, H. and Mnih, A. Disentangling by factorising. In _International Conference on Machine Learning_, 2018.
* Kingma et al. (2014) Kingma, D. P., Mohamed, S., Rezende, D. J., and Welling, M. Semi-supervised learning with deep generative models. In _Advances in Neural Information Processing Systems_, 2014.
* Kulkarni et al. (2015) Kulkarni, T. D., Whitney, W. F., Kohli, P., and Tenenbaum, J. Deep convolutional inverse graphics network. In _Advances in Neural Information Processing Systems_, 2015.
* Kumar et al. (2017) Kumar, A., Sattigeri, P., and Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. _arXiv preprint arXiv:1711.00848_, 2017.
* Lin et al. (2019) Lin, Z., Thekumparampil, K. K., Fanti, G., and Oh, S. Infogan-cr: Disentangling generative adversarial networks with contrastive regularizers. _arXiv preprint arXiv:1906.06034_, 2019.
* Locatello et al. (2019) Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., and Bachem, O. Challenging common assumptions in the unsupervised learning of disentangled representations. In _International Conference on Machine Learning_, 2019.
* Locatello et al. (2020) Locatello, F., Tschannen, M., Bauer, S., Rätsch, G., Schölkopf, B., and Bachem, O. Disentangling factors of variation using few labels. In _ICLR_, 2020.
* Matthey et al. (2017) Matthey, L., Higgins, I., Hassabis, D., and Lerchner, A. dsprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017.
* Mirza & Osindero (2014) Mirza, M. and Osindero, S. Conditional generative adversarial nets. _arXiv preprint arXiv:1411.1784_, 2014.
* Miyato & Koyama (2018) Miyato, T. and Koyama, M. cgans with projection discriminator. In _ICLR_, 2018.
* Narayanaswamy et al. (2017) Narayanaswamy, S., Paige, T. B., Van de Meent, J.-W., Desmaison, A., Goodman, N., Kohli, P., Wood, F., and Torr, P. Learning disentangled representations with semi-supervised deep generative models. In _Advances in Neural Information Processing Systems_, 2017.
* Nguyen-Phuoc et al. (2019) Nguyen-Phuoc, T., Li, C., Theis, L., Richardt, C., and Yang, Y.-L. Hologan: Unsupervised learning of 3d representations from natural images. In _Proceedings of the IEEE International Conference on Computer Vision_, 2019.
* Odena et al. (2017) Odena, A., Olah, C., and Shlens, J. Conditional image synthesis with auxiliary classifier gans. In _Proceedings of the 34th International Conference on Machine Learning_, 2017.
* Ridgeway & Mozer (2018) Ridgeway, K. and Mozer, M. C. Learning deep disentangled embeddings with the f-statistic loss. In _Advances in Neural Information Processing Systems_, 2018.
* Sajjadi et al. (2016) Sajjadi, M., Javanmardi, M., and Tasdizen, T. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In _Advances in Neural Information Processing Systems_, 2016.
* Spurr et al. (2017) Spurr, A., Aksan, E., and Hilliges, O. Guiding infogan with semi-supervision. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_. Springer, 2017.
* Ulyanov et al. (2016) Ulyanov, D., Vedaldi, A., and Lempitsky, V. Instance normalization: The missing ingredient for fast stylization. _arXiv preprint arXiv:1607.08022_, 2016.
* Xiao et al. (2017) Xiao, T., Hong, J., and Ma, J. Dna-gan: Learning disentangled representations from multi-attribute images. _arXiv preprint arXiv:1711.05415_, 2017.
* Zhang et al. (2018) Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. Mixup: Beyond empirical risk minimization. In _ICLR_, 2018.

## Appendix

## Appendix A More Details of the Two New Datasets

### A.1 Examples of the Isaac3D Dataset

Figure 8: Examples of the Isaac3D dataset, where we vary each factor of variation individually to see how the image changes with the corresponding ground-truth factor code.

### A.2 Examples of the Falcor3D Dataset

Figure 9: Examples of the Falcor3D dataset, where we vary each factor of variation individually to see how the image changes with the corresponding ground-truth factor code.

## Appendix B More Results of Info-StyleGAN

### B.1 Progressive Training for Disentanglement Learning

Progressive growing has been shown to improve the image quality of GANs (Karras et al., 2017, 2019); however, its impact on disentanglement learning has remained unknown so far. Thus, we compare the MIG scores of Info-StyleGAN with progressive and non-progressive growing, respectively, on both dSprites and Isaac3D, and the results are shown in Figure 10. We can see that with progressive growing, the disentanglement quality tends to be better on both datasets. Besides, the gap in average MIG scores between different values of the hyperparameter $\gamma$ is much smaller if progressive growing is applied, which implies that Info-StyleGAN with progressive growing is less sensitive to hyperparameters. Therefore, unless stated otherwise, we use progressive growing for the GAN training in all the experiments.

Figure 10: The impact of progressive training on disentanglement learning with Info-StyleGAN, measured by MIG on dSprites and Isaac3D.
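For readers unfamiliar with progressive growing, the training schedule can be summarized as a resolution ramp of the following form. This is a generic sketch of the idea in (Karras et al., 2017), not the exact configuration used here; `kimg_per_phase` is a hypothetical knob:

```python
def resolution_at(kimg, start_res=4, final_res=128, kimg_per_phase=600):
    """Active output resolution after `kimg` thousand training images.

    Each phase fades in and then stabilizes the next resolution block,
    doubling the resolution from `start_res` up to `final_res`.
    """
    res = start_res
    while kimg >= kimg_per_phase and res < final_res:
        kimg -= kimg_per_phase
        res *= 2
    return res
```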
### B.2 How to Get Info-StyleGAN* with a Smaller Network Size

Mapping Network
---
(FC $\times$ $n_{\text{mp}}$) $f_{\text{mp}}$ $\times$ $f_{\text{mp}}$
Synthesis Network
(4$\times$4 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(4$\times$4 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(8$\times$8 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(8$\times$8 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(16$\times$16 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(16$\times$16 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(32$\times$32 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(32$\times$32 Conv) 3$\times$3$\times$$f_{\text{0}}$$\times$$f_{\text{0}}$
(64$\times$64 Conv) 3$\times$3$\times$$\frac{f_{\text{0}}}{2}$$\times$$\frac{f_{\text{0}}}{2}$
(64$\times$64 Conv) 3$\times$3$\times$$\frac{f_{\text{0}}}{2}$$\times$$\frac{f_{\text{0}}}{2}$

(a) Generator

(64$\times$64 Conv) 3$\times$3$\times$$\frac{f_{\text{0}}}{2}$$\times$$\frac{f_{\text{0}}}{2}$
(64$\times$64 Conv) 3$\times$3$\times$$\frac{f_{\text{0}}}{2}$$\times$$\frac{f_{\text{0}}}{2}$
(32$\times$32 Conv) 3$\times$3$\times$${f_{\text{0}}}$$\times$${f_{\text{0}}}$
(32$\times$32 Conv) 3$\times$3$\times$${f_{\text{0}}}$$\times$${f_{\text{0}}}$
(16$\times$16 Conv) 3$\times$3$\times$${f_{\text{0}}}$$\times$${f_{\text{0}}}$
(16$\times$16 Conv) 3$\times$3$\times$${f_{\text{0}}}$$\times$${f_{\text{0}}}$
(8$\times$8 Conv) 3$\times$3$\times$${f_{\text{0}}}$$\times$${f_{\text{0}}}$
(8$\times$8 Conv) 3$\times$3$\times$${f_{\text{0}}}$$\times$${f_{\text{0}}}$
(4$\times$4 Conv) 3$\times$3$\times$${f_{\text{0}}}$$\times$${f_{\text{0}}}$
(4$\times$4 FC) (16$f_{\text{0}}$) $\times$ 64
(4$\times$4 FC) 64$\times$(1+ code_length)

(b) Discriminator / Encoder

Table 4: Generator and discriminator (or encoder) architectures in the implementation of Info-StyleGAN for generating images of resolution $128\times 128$, where we use “FC $\times n_{\text{mp}}$” to denote that there are $n_{\text{mp}}$ dense layers in the given block, and use “8$\times$8 Conv” to denote the convolutional layer in the 8$\times$8 resolution block. Note that the last block (i.e., 64$\times$64 Conv) in the generator and the first block (i.e., 64$\times$64 Conv) in the encoder are absent if we want to generate images of resolution $64\times 64$. For the original Info-StyleGAN, we have $n_{\text{mp}}=8$, $f_{\text{mp}}=512$, $f_{\text{0}}=512$. For Info-StyleGAN* on dSprites, we set $n_{\text{mp}}=3$, $f_{\text{mp}}=64$, $f_{\text{0}}=64$ with 0.74M parameters in total. For Info-StyleGAN* on downscaled Isaac3D, we set $n_{\text{mp}}=3$, $f_{\text{mp}}=256$, $f_{\text{0}}=128$ with 3.44M parameters in total.

### B.3 Randomly Generated Samples of Baseline Models and Info-StyleGAN* (with Smaller Network Size)

(a) $\beta$-VAE (FID=120.3) (b) FactorVAE (FID=358.2) (c) $\beta$-TCVAE (FID=143.8) (d) InfoGAN-CR (FID=73.43) (e) Info-StyleGAN* (FID=8.38)

Figure 11: Randomly sampled images of baseline models and Info-StyleGAN* on (downscaled) Isaac3D of resolution 128x128. Note that for VAE-based models, we use similar network architectures to those in (Locatello et al., 2019), and Info-StyleGAN* denotes the smaller version of Info-StyleGAN, whose number of parameters is similar to that of the baseline models.
We can see that compared with Info-StyleGAN* (of the same network size), the generated images of VAE-based models (i) tend to be quite blurry and of low quality, and (ii) fail to cover all the variations in the dataset. As a strong GAN baseline, InfoGAN-CR is also significantly worse than Info-StyleGAN in terms of image quality. These results demonstrate that our proposed dataset can serve as a challenging new benchmark for disentanglement learning, in particular given its much higher resolution and larger variation of factors.

### B.4 Randomly Generated Samples of Baseline Models† (with Larger Network Size) and Info-StyleGAN

(a) $\beta$-VAE† (FID=60.71) (b) FactorVAE† (FID=60.67) (c) $\beta$-TCVAE† (FID=77.48) (d) InfoGAN-CR (FID=30.41) (e) Info-StyleGAN (FID=2.50)

Figure 12: Randomly sampled images of baseline models† and Info-StyleGAN on (downscaled) Isaac3D of resolution 128x128. Note that for VAE-based models, we increase the number of feature maps ($\times$8) in each layer of the network architectures in (Locatello et al., 2019), so that their number of parameters is similar to that of Info-StyleGAN. We also apply the same operation to InfoGAN-CR to match the network size of Info-StyleGAN. We can see that (i) the image quality gets better after increasing the network size, (ii) compared with Info-StyleGAN (of the same network size), the generated images of VAE-based models still have issues with blurriness and failure in capturing all variations, and (iii) the generated images of InfoGAN-CR are better than those of the VAEs but still worse than those of Info-StyleGAN.

### B.5 Other Experimental Settings

Our experiments are based on the StyleGAN implementation (Karras et al., 2019), where the GAN loss $L_{\text{GAN}}$, batch sizes, learning rates for both generator and discriminator, and the other hyperparameters in the Adam optimizer and weights in each resolution block are all kept the same as in (Karras et al., 2019), unless stated otherwise. Different from the original StyleGAN implementation, we do not apply the truncation trick. We also do not add noise inputs to introduce additional randomness, as we consider the case where the factor code and latent $z$ capture all the factors in the data. For all the quantitative results in the paper, we report error bars by taking the mean and standard deviation of four runs with different random seeds.

For the implementation of evaluation metrics, we use 50K randomly sampled real images and fake images to calculate the FID score. We use 5K ground-truth observation-code pairs as training samples and 2K ground-truth observation-code pairs as test samples to evaluate the Factor score. We use 10K ground-truth observation-code pairs and 10K generated observation-code pairs to calculate the MIG and MIG-gen scores, respectively. We also use 1K ground-truth observation-code pairs and 1K generated observation-code pairs to calculate the L2 and L2-gen scores, respectively.

## Appendix C More Results of Semi-StyleGAN

### C.1 Semi-StyleGAN with 0.5% of Labeled Data on Isaac3D with Resolution 512x512

Figure 13: Latent traversal of Semi-StyleGAN on Isaac3D by using 0.5% of the labeled data. Images in the first column (marked by a red box) are randomly sampled real images of resolution 512x512, and the remaining images in each row are their interpolations, obtained by uniformly varying the given factor from 0 to 1.
We can see that each factor changes smoothly during its interpolation without affecting other factors, and the interpolated images in each row visually look almost the same as their input image except for the considered varying factor. Also, the image quality does not get worse during the interpolations.

### C.2 Semi-StyleGAN with 1% of Labeled Data on Falcor3D with Resolution 512x512

Figure 14: Latent traversal of Semi-StyleGAN on Falcor3D by using 0.5% of the labeled data. Images in the first column (marked by a red box) are randomly sampled real images of resolution 512x512, and the remaining images in each row are their interpolations, obtained by uniformly varying the given factor from 0 to 1. We can see that each factor changes smoothly during its interpolation without affecting other factors, and the interpolated images in each row visually look almost the same as their input image except for the considered varying factor. Also, the image quality does not get worse during the interpolations.

### C.3 Semi-StyleGAN with 0.5% of Labeled Data on CelebA with Resolution 256x256

Figure 15: More latent traversal results of Semi-StyleGAN on CelebA with resolution 256x256 by using 0.5% of the labeled data, where we control all 40 binary attributes at the same time. We can see that Semi-StyleGAN with only 0.5% of the labeled data is capable of controlling the considered attributes. We note that the image background may also change slightly over the interpolations of some attributes. We argue that this is because the other nuisance factors (i.e., those not in the set of the 40 considered attributes), including the background, strongly confound the observed factors, which has been a common and difficult problem in high-dimensional partially observed latent variable models. We leave the investigation into how to further alleviate this confounding issue as future work.

Figure 16: Latent traversal of Semi-StyleGAN on CelebA with resolution 256x256 by using 0.5% of the labeled data, where we control all 40 binary attributes at the same time. The same observations as in Figure 15 apply.

## Appendix D More Results on Semi-StyleGAN-fine

### D.1 Semi-StyleGAN-fine with 1% of Labeled Data on Isaac3D Novel Images with Resolution 512x512

Figure 17: Generalization of Semi-StyleGAN-fine with 1% of the labeled data, where we set $\phi=64$ and interpolate three fine styles: (lighting intensity, object color, wall color). In the test image, we shift the position of the robot arm to the right side, and also attach an unseen object (i.e., an octahedron) to it. Images in the first column (marked by a red box) are real novel images of resolution $512\times 512$, and the remaining images in each row are their interpolations, obtained by uniformly varying the given factor from 0 to 1. We can see that Semi-StyleGAN-fine with only 1% of the labeled data is capable of controlling the considered fine-grained attributes without affecting the coarse-grained factors.
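For reference, a traversal row like the ones shown in these appendices can be produced by a simple loop of the following form. This is a hypothetical sketch: `G` stands for the trained generator and `n_steps` for the interpolation granularity:

```python
import torch

@torch.no_grad()
def traverse_factor(G, z, c, k, n_steps=8):
    """Vary the k-th factor of code c uniformly from 0 to 1, keeping z and the
    other factors fixed; returns one row of a traversal grid (hypothetical API)."""
    row = []
    for i in range(n_steps):
        c_i = c.clone()
        c_i[:, k] = i / (n_steps - 1)   # uniform sweep of factor k over [0, 1]
        row.append(G(z, c_i))           # generated image for the perturbed code
    return torch.cat(row, dim=0)
```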
### D.2 Semi-StyleGAN-fine with 1% of Labeled Data on CelebA Novel Images with Resolution 256x256

Figure 18: Generalized latent traversal results of Semi-StyleGAN-fine trained on CelebA with 1% of labeled data, where we set $\phi=64$ and control the shown fine styles. Images in the first column (marked by a red box) are real novel images of resolution $256\times 256$, and the remaining images in each row are their interpolations, obtained by uniformly varying the given factor from 0 to 1. We can see that Semi-StyleGAN-fine with only 1% of the labeled data is capable of controlling the considered fine-grained attributes without affecting the coarse-grained factors, in particular the personal identity.
# Beta Critical for the Schrödinger Operator with Delta Potential

Rajan Puri

###### Abstract

For the one-dimensional Schrödinger operator with a delta potential of the form $V(x)=-\beta\delta(x-a)$, where $\beta\geq 0,\ a>0$, we show that $\beta_{cr}$ is positive in the case of the Dirichlet boundary condition and zero in the cases of the Neumann and Robin boundary conditions. We prove that $\beta_{cr}$ goes to infinity when the delta potential moves towards the boundary in dimension one with the Dirichlet boundary condition. We also show that $\beta_{cr}>0$ and $\beta\in(0,\frac{1}{2})$ for the Dirichlet problem with a delta potential on the circle in dimension two.

###### Key words and phrases: Schrödinger Operator, Delta Potential, Critical Value, Beta Critical, Negative Eigenvalues.

###### 2010 Mathematics Subject Classification: 35J25, 35P15, 47A10, 47D07

Department of Mathematics and Statistics, Wake Forest University, Winston Salem, NC 27109, USA, ([email protected]).

## 1\. Introduction

This paper is a continuation of our work [4], where we studied the critical value $\beta_{cr}$ of the coupling constant for exterior elliptic problems and proved that $\beta_{cr}>0$ in the case of the Dirichlet boundary condition and $\beta_{cr}=0$ in the case of the Neumann boundary condition in dimensions $d=1$ and $2$ by studying the truncated resolvent operator. It was shown that the choice $\beta_{cr}>0$ or $\beta_{cr}=0$ depends on whether the truncated resolvent is bounded or goes to infinity when $\lambda\to 0^{-}$. In fact, $\beta_{cr}$ was expressed through the truncated resolvent operator. There has been considerable interest in the study of the coupling constant, and the problem was investigated by several researchers, such as Barry Simon in [5], Martin Klaus in [3], Cranston, Koralov, Molchanov and Vainberg in [1], and Yuriy Golovaty in [2].

It is known that the spectrum of $-\Delta-\beta V(x)$ consists of the absolutely continuous part $[0,\infty)$ and at most a finite number of negative eigenvalues.

$\sigma(-\Delta-\beta V(x))=\{\lambda_{j}\}\cup[0,\infty),\ 0\leq j\leq N,\quad\lambda_{j}\leq 0.$

We proved the dependence of $\beta_{cr}$ on the boundary condition and the dimension in our earlier paper [4]. Namely,

###### Theorem 1.1 ([4]).

Consider the following elliptic problem in $\Omega$:

$H_{0}u-\beta V(x)u-\lambda u=f,\ \ x\in\Omega,$ (1)

where $H_{0}=-\text{div}(a(x)\nabla)$, the potential $V(x)\geq 0$ is compactly supported and continuous, $\beta\geq 0$, $a(x)>0$, $a(x)\in C^{1}(\Omega)$, and $a=1$ when $|x|\gg 1$. If $d=1$ or $2$, then $\beta_{cr}>0$ in the case of the Dirichlet boundary condition, and $\beta_{cr}=0$ in the case of the Neumann boundary condition.

In [4], we also studied the dependence of $\beta_{cr}$ on the distance between the support of the potential and the boundary of the domain. In fact, it was proven that, in the case of the Dirichlet boundary condition in dimension one, $\beta_{cr}$ tends to infinity as the potential moves towards the boundary. In dimension two with the Dirichlet boundary condition, the behavior of $\beta_{cr}$ is more interesting and depends on the relation between the rate of shrinking of the support of the potential and the speed of its motion towards the boundary. We did not consider the Neumann boundary condition when $d=1$ or $2$, since $\beta_{cr}$ is always zero. In particular, we proved the following theorem in [4].

###### Theorem 1.2 ([4]).
If $d=1$, then $\beta_{cr}$ for the Schrödinger operator $-\Delta-\beta V(x)$ goes to infinity as $n\to\infty$ for the Dirichlet boundary condition. The same is true if $d=2$ and $|x(n)-x_{0}|<C/n,~{}n\to\infty$. If $d=2$ and $|x(n)-x_{0}|\to 0,~{}|x(n)-x_{0}|>C/n^{\delta},~{}n\to\infty,$ with some $\delta\in(0,1)$, then $\beta_{cr}$ remains bounded as $n\to\infty$. If $d\geq 3$, then $\beta_{cr}$ remains bounded as $n\to\infty$ for both the Dirichlet and Neumann boundary conditions.

## 2\. Schrödinger Operator with Delta Potential

In this paper, we present results on $\beta_{cr}$ for the one- and two-dimensional Schrödinger equation with a delta potential, given by

$-y^{{}^{\prime\prime}}-\beta\delta(x-a)y(x)=\lambda y(x),$ (2)

where $\beta\geq 0,\ a>0$. The delta function is an infinitely high, infinitesimally narrow spike at $x=a$. This allows solutions for both bound states $\lambda<0$ and scattering states $\lambda>0.$ The classification of the spectrum into discrete and continuous parts usually corresponds to a classification of the dynamics into localized (bound) states and states that decay locally as time increases (scattering), respectively. The lower bound, $0$, of the absolutely continuous spectrum is called the ionization threshold. This follows from the fact that the particle is no longer localized, but moves freely, when $\lambda>0$. This classification is related to the space-time behaviour of solutions of the corresponding Schrödinger equation.

We are interested in studying $\beta_{cr}$, the critical value of the coupling constant: the value of $\beta$ such that equation (2) does not have negative eigenvalues for $\beta<\beta_{cr}$ and has them if $\beta>\beta_{cr}$. We can find the solution of the Schrödinger equation (2) in regions I and II as shown in Figure 1.

Figure 1. Delta potential at $x=a$. In both region I ($0\leq x<a$) and region II ($x>a$), the potential is $V(x)=0.$

###### Theorem 2.1.

Consider the following one-dimensional Schrödinger equation on the half-axis:

$-y^{{}^{\prime\prime}}-\beta\delta(x-a)y(x)=\lambda y(x),\ \lambda=-k^{2}<0,\ x\in[0,\infty).$ (3)

Then $\beta_{cr}>0$ in the case of the Dirichlet boundary condition, and $\beta_{cr}=0$ in the cases of the Neumann and Robin boundary conditions.

###### Proof.

The Dirichlet problem is given by

$-y^{{}^{\prime\prime}}-\beta\delta(x-a)y(x)=\lambda y(x),\ y(0)=0,\ y(a)=1,\ \lambda=-k^{2}<0.$ (4)

The general solution of problem (4) is given by $y(x)=Pe^{-kx}+Qe^{kx}.$ We can determine the values of the constants $P$ and $Q$ by using the given conditions $y(0)=0,\ y(a)=1$ together with the continuity of the solution. We can divide the solution of problem (4) into two regions:

$\begin{cases}y_{1}(x)=\frac{\sinh kx}{\sinh ka},&\text{if }0\leq x\leq a\\\ y_{2}(x)=e^{k(a-x)},&\text{if }a\leq x.\end{cases}$

We integrate the Schrödinger equation with respect to $x$ over a small interval $(a-\epsilon,a+\epsilon)$:

$\int_{a-\epsilon}^{a+\epsilon}(-y^{{}^{\prime\prime}}-\beta\delta(x-a)y\ )dx=\int_{a-\epsilon}^{a+\epsilon}(\lambda y)dx.$

The integral of the second derivative is just the first derivative, and the integral on the right-hand side goes to zero as $\epsilon\to 0$, since $y$ is a continuous, single-valued function.
We get

$-y^{{}^{\prime}}|_{a-\epsilon}^{a+\epsilon}-\beta y(a)=0.$

When $\epsilon\xrightarrow{}0,$

$k+k\coth ka=\beta,$

which gives

$\frac{k}{\beta}=\frac{1}{1+\coth ka}$ and $\frac{ka}{\beta a}=\frac{1}{1+\coth ka}.$

Let $ka=A,\ \beta a=B$; then $e^{-2A}=1-\frac{2A}{B}.$ Again let $2A=z$; then we have

$e^{-z}=1-\frac{z}{B}.$ (5)

Since $e^{-z}>0$, equation (5) requires $1-\frac{z}{B}\geq 0.$ Hence we get $\frac{\beta}{2}\geq k,$ and it follows that $\frac{\beta^{2}}{4}\geq k^{2}.$ Since $\lambda=-k^{2}$, we have $\lambda\geq\frac{-\beta^{2}}{4}.$ Hence, if $\beta=0$, there is no possibility of negative eigenvalues; in fact, as shown in the proof of Theorem 2.2 below, equation (5) has a positive root only when $\beta a>1$, so $\beta_{cr}$ must be greater than zero to produce negative eigenvalues.

If we consider the Neumann boundary condition, then equation (3) becomes

$-y^{{}^{\prime\prime}}-\beta\delta(x-a)y(x)=\lambda y(x),\ y^{{}^{\prime}}(0)=0,\ y(a)=1,\ \lambda=-k^{2}<0,\ k>0.$ (6)

As above, we can divide the solution of this problem into two regions, i.e., region I with $0\leq x<a$ and region II with $a<x$:

$\begin{cases}y_{1}(x)=\frac{\cosh kx}{\cosh ka},&\text{if }0\leq x\leq a\\\ y_{2}(x)=e^{k(a-x)},&\text{if }a\leq x.\end{cases}$

We again integrate the Schrödinger equation with respect to $x$ over a small interval and take $\epsilon\xrightarrow{}0.$ Then

$k+k\tanh ka=\beta,$

which gives

$\frac{k}{\beta}=\frac{1}{1+\tanh ka}$ and $\frac{ka}{\beta a}=\frac{1}{1+\tanh ka}.$

Let $ka=A,\ \beta a=B$; then $e^{-2A}=\frac{2A}{B}-1.$ Again let $2A=z$; then $e^{-z}=\frac{z}{B}-1.$ Since $e^{-z}>0$, this requires $\frac{z}{B}-1\geq 0.$ From here, we get the relation $\frac{\beta}{2}\leq k,$ which implies $\frac{\beta^{2}}{4}\leq k^{2}.$ Since $\lambda=-k^{2}$, we have $\lambda\leq\frac{-\beta^{2}}{4}.$ Because $e^{-z}=\frac{z}{B}-1$ has a positive root for every $B>0$, a negative eigenvalue exists for any $\beta>0$; hence $\beta_{cr}=0.$

If we consider the Robin boundary condition, then equation (3) becomes

$-y^{{}^{\prime\prime}}-\beta\delta(x-a)y(x)=\lambda y(x),\ \frac{dy}{dx}+y\bigg{|}_{x=0}=0,\ y(a)=1,\ \lambda=-k^{2}<0.$ (7)

We can divide the solution of this problem into two regions as described below:

$\begin{cases}y_{1}(x)=\frac{k\cosh kx-\sinh kx}{k\cosh ka-\sinh ka},&\text{if }0\leq x\leq a\\\ y_{2}(x)=e^{k(a-x)},&\text{if }a\leq x.\end{cases}$

As above, we integrate the Schrödinger equation with respect to $x$ over a small interval $(a-\epsilon,a+\epsilon)$:

$\int_{a-\epsilon}^{a+\epsilon}(-y^{{}^{\prime\prime}}-\beta\delta(x-a)y\ )dx=\int_{a-\epsilon}^{a+\epsilon}(\lambda y)dx.$

Letting $\epsilon\xrightarrow{}0$, we obtain

$k+k\bigg{(}\frac{k-\coth ka}{k\coth ka-1}\bigg{)}=\beta,$

which gives

$\frac{k}{\beta}=\frac{1}{1+\big{(}\frac{k-\coth ka}{k\coth ka-1}\big{)}}$ and $\frac{ka}{\beta a}=\frac{1}{1+\big{(}\frac{k-\coth ka}{k\coth ka-1}\big{)}}.$

Let $ka=A,\ \beta a=B$; then

$e^{-2A}=\frac{2A^{2}-2Aa-AB+Ba}{AB+Ba}.$

It tells us that $\frac{2A^{2}-2Aa-AB+Ba}{AB+Ba}$ must be $\geq 0.$ After solving this inequality, we get the relation $\frac{\beta}{2}\leq k,$ which tells us that $\frac{\beta^{2}}{4}\leq k^{2}.$ Since $\lambda=-k^{2}$, we have $\lambda\leq\frac{-\beta^{2}}{4},$ and negative eigenvalues exist for arbitrarily small $\beta>0$; hence $\beta_{cr}=0.$ ∎

We now show that $\beta_{cr}$ goes to infinity in the case of the Dirichlet boundary condition in dimension one with a delta potential. We do not consider the Neumann and Robin boundary conditions, since $\beta_{cr}=0$ there.

###### Theorem 2.2.
Consider the Dirichlet boundary condition where the delta potential is located at $x=a_{n}$:

$-y^{{}^{\prime\prime}}-\beta\delta(x-a_{n})y(x)=\lambda y(x),\ y(0)=0,\ y(a_{n})=1,\ \lambda=-k^{2}<0,\ k>0,\ a_{n}>0.$ (8)

Then $\beta_{cr}\xrightarrow{}\infty$ as $a_{n}\xrightarrow{}0.$

###### Proof.

Equation (5) has no positive solution if $\frac{1}{B}\geq 1.$ That means there are no negative eigenvalues when $\frac{1}{a}\geq\beta.$ However, if $\frac{1}{B}<1$, then equation (5) has a positive solution, which tells us that when $\frac{1}{a}<\beta$, negative eigenvalues exist. As we defined $\beta_{cr}$ as the value of $\beta$ such that equation (4) does not have negative eigenvalues for $\beta<\beta_{cr}$ and has them if $\beta>\beta_{cr}$, we conclude that $\beta_{cr}=\frac{1}{a}$ for problem (4). Similarly, if we consider the case where the delta potential is located at $x=a_{n}$, where $a_{n}\xrightarrow{}0$ as $n\xrightarrow{}\infty$, given by (8), then $\beta_{cr}=\frac{1}{a_{n}}.$ As the potential moves towards the boundary, that is, as $a_{n}\xrightarrow{}0$, we get $\beta_{cr}\xrightarrow{}\infty.$ ∎

## 3\. Dirichlet Boundary Condition for $d=2$ with Delta Potential on the Circle

###### Theorem 3.1.

Consider the Dirichlet boundary value problem for $d=2$ with a delta potential on the circle, given by

$-\Delta y(x)-\beta\delta_{1+a}y(x)=\lambda y(x),\ y(1)=0,\ y(1+a)=1,\ \lambda=-k^{2}<0,\ k>0.$ (9)

Then $\beta_{cr}>0$ and $\beta\in(0,\frac{1}{2}).$

Figure 2. Delta potential in dimension 2.

The rotational invariance suggests that the two-dimensional Laplacian should take a particularly simple form in polar coordinates. We use polar coordinates $(r,\theta)$ and look for solutions depending only on $r$. Thus equation (9) becomes

$y^{{}^{\prime\prime}}+\frac{y^{\prime}}{r}-\beta\delta_{1+a}\ y(r)=\lambda y(r),\ y(1)=0,\ y(1+a)=1,\ \lambda=-k^{2}<0,\ k>0.$ (10)

###### Proof.

We divide the solution of problem (10) into two regions, i.e., region I with $1\leq r<1+a$ and region II with $1+a<r$:

$\begin{cases}y_{1}(r)=\frac{Y_{0}(k)J_{0}(kr)-J_{0}(k)Y_{0}(kr)}{Y_{0}(k)J_{0}(k(1+a))-J_{0}(k)Y_{0}(k(1+a))},&\text{if }1\leq r\leq 1+a\\\ y_{2}(r)=\frac{K_{0}(kr)}{K_{0}(k(1+a))},&\text{if }1+a\leq r.\end{cases}$

As above,

$-y^{{}^{\prime}}\bigg{|}_{1+a-\epsilon}^{1+a+\epsilon}-\beta y(1+a)=0,$

which gives

$-\big{(}y_{2}^{{}^{\prime}}|_{1+a+\epsilon}-y_{1}^{{}^{\prime}}|_{1+a-\epsilon}\big{)}=\beta.$

When $\epsilon\xrightarrow{}0,$

$\frac{-Y_{0}(k)J_{1}(k(1+a))+J_{0}(k)Y_{1}(k(1+a))}{Y_{0}(k)J_{0}(k(1+a))-J_{0}(k)Y_{0}(k(1+a))}-\frac{-K_{1}(k(1+a))}{K_{0}(k(1+a))}=\frac{\beta}{k},$ (11)

where $Y_{0}$ and $Y_{1}$ are Bessel functions of the second kind, $J_{0}$ and $J_{1}$ are Bessel functions of the first kind, and $K_{0}$ and $K_{1}$ are modified Bessel functions of the second kind. Define

$g(k,a)=\frac{g_{1}(k,a)}{g_{2}(k,a)}=\frac{-Y_{0}(k)J_{1}(k(1+a))+J_{0}(k)Y_{1}(k(1+a))}{Y_{0}(k)J_{0}(k(1+a))-J_{0}(k)Y_{0}(k(1+a))}.$

It appears that $g(k,a)=-1$ for all values of $a$, as shown in the figures below.

Figure 3. Graph of the function $g(k,0.5)$. Figure 4. Graph of the function $g(k,2)$. Figure 5. Graph of the function $g(k,4)$. Figure 6. Graph of the function $g(k,10)$.

From equation (11), we get

$-1+\frac{K_{1}(k(1+a))}{K_{0}(k(1+a))}=\frac{\beta}{k}.$ (12)

We will use the following fact from [6] to prove that $\beta_{cr}>0.$

###### Lemma 3.2 ([6]).
Let $p,q\geq 0.$ Then the double inequalities

$1+\frac{1}{2(x+p)}<\frac{K_{1}(x)}{K_{0}(x)}<1+\frac{1}{2(x+q)}$ (13)

hold for all $x>0$ if and only if $p\geq 1/4$ and $q=0.$

Now, from equations (12) and (13), we get

$\frac{1}{2(x+p)}<\frac{K_{1}(x)}{K_{0}(x)}-1<\frac{1}{2(x+q)}.$

When $x=k(a+1)>0,$

$\frac{1}{2(k(a+1)+p)}<\frac{K_{1}(k(a+1))}{K_{0}(k(a+1))}-1<\frac{1}{2(k(a+1)+q)},$

so

$\frac{1}{2(k(a+1)+p)}<\frac{\beta}{k}<\frac{1}{2(k(a+1)+q)},$

and therefore

$\frac{1}{2((a+1)+\frac{p}{k})}<\beta<\frac{1}{2((a+1)+\frac{q}{k})}.$

Since $a>0$, $p\geq\frac{1}{4}$, and $k>0$, the lower bound gives $\beta>0$, and hence $\beta_{cr}>0$. Moreover, with $q=0$ the upper bound gives $\beta<\frac{1}{2(a+1)}<\frac{1}{2}$, so $\beta\in(0,\frac{1}{2}).$ ∎

## References

* [1] M. Cranston, L. Koralov, S. Molchanov, and B. Vainberg. Continuous model for homopolymers. Journal of Functional Analysis, 256(8):2656–2696, 2009.
* [2] Yuriy Golovaty. On coupling constant thresholds in one dimension. arXiv preprint arXiv:1905.10766, 2019.
* [3] M. Klaus. On the bound state of Schrödinger operators in one dimension. Annals of Physics, 108(2):288–300, 1977.
* [4] Rajan Puri and Boris Vainberg. On the critical value of the coupling constant in exterior elliptic problems. Applicable Analysis, pages 1–10, 2020.
* [5] Barry Simon. The bound state of weakly coupled Schrödinger operators in one and two dimensions. Annals of Physics, 97(2):279–288, 1976.
* [6] Zhen-Hang Yang and Yu-Ming Chu. On approximating the modified Bessel function of the second kind. Journal of Inequalities and Applications, 2017(1):1–8, 2017.
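The transcendental conditions above are easy to verify numerically. The following sketch (assuming NumPy/SciPy; it is an independent sanity check, not part of the original analysis) solves equation (5) by bracketing and confirms that a root appears only when $B=\beta a>1$, i.e., $\beta_{cr}=1/a$, and also checks the bound (13) on the Bessel-function ratio:

```python
import numpy as np
from scipy.special import k0, k1
from scipy.optimize import brentq

def dirichlet_root(B):
    """Positive root z of e^{-z} = 1 - z/B from Eq. (5), or None if absent."""
    f = lambda z: np.exp(-z) - (1.0 - z / B)
    if B <= 1.0:        # f is nondecreasing from f(0)=0: no positive root
        return None
    return brentq(f, 1e-9, 10.0 * B)   # a root exists once B = beta*a > 1

assert dirichlet_root(0.9) is None      # beta < 1/a: no negative eigenvalue
assert dirichlet_root(1.1) is not None  # beta > 1/a: a negative eigenvalue appears

# Bound (13): 1 + 1/(2(x + 1/4)) < K1(x)/K0(x) < 1 + 1/(2x) for x > 0.
x = np.linspace(0.1, 20.0, 200)
ratio = k1(x) / k0(x)
assert np.all(ratio > 1 + 1 / (2 * (x + 0.25)))
assert np.all(ratio < 1 + 1 / (2 * x))
```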
# Mass shedding activities of Asteroid (3200) Phaethon enhanced by its rotation

Ryota Nakano, Department of Aerospace Engineering, Auburn University, 211 Davis Hall, Auburn, AL 36849, USA
Masatoshi Hirabayashi, Department of Aerospace Engineering, Auburn University, 211 Davis Hall, Auburn, AL 36849, USA

###### Abstract

Asteroid (3200) Phaethon, a B-type asteroid, has been active during its perihelion passages. This asteroid is considered to be a source of the Geminid meteor stream. It is reported that this asteroid is spinning at a rotation period of $3.60\ hr$ and has a top shape (an oblate body with an equatorial ridge) with a mean equatorial diameter of $6.25\ km$. Here, we report that Phaethon's rotation state may be close to or above its critical rotation period when the bulk density is $0.5\ -\ 1.5\ {g/cm^{3}}$ (a typical bulk density range for a B-type asteroid). We find that in this condition, the structure of Phaethon is sensitive to failure unless the cohesive strength exceeds ${\sim}50\ Pa\ -\ {\sim}260\ Pa$. This result implies that if there are surface processes driven by, for example, thermal waves, large-scale deformation may happen and cause mass shedding. From this interpretation, we propose the following processes that produced the Geminid meteor stream in the past and dust tails recently. Phaethon initially rotated at a spin period shorter than the current period. The magnitude of structural deformation at this stage was higher than under the present spin condition, and a large mass shedding event, i.e., the Geminid meteor stream, occurred. After this deformation process, the body became more oblate, and its spin slowed down. At this point, while the spin was still high enough for the body to have mass shedding events, the magnitude of these events became small.

comets: general — meteorites, meteors, meteoroids — minor planets, asteroids: general — minor planets, asteroids: individual (3200 Phaethon)

journal: ApJL

## 1 Introduction

Asteroid (3200) Phaethon may be a source of the Geminid meteor stream (Whipple, 1983; Gustafson, 1989; Williams & Wu, 1993) and has been observed for a long time (Cochran & Barker, 1984; Chamberlin et al., 1996; Hsieh & Jewitt, 2005; Wiegert et al., 2008). In 2009 (Jewitt & Li, 2010) and 2012 (Li & Jewitt, 2013), dust tails from Phaethon were observed during perihelion passage, revealing that this asteroid is indeed an active asteroid. However, the activities detected near perihelion leave the production of the Geminid meteor stream a mystery. The dust mass inferred from the observations is ${\sim}3\times 10^{5}\ kg$ (Li & Jewitt, 2013), which is much smaller than the estimated mass of the Geminid meteor stream, ${10}^{12}-{10}^{13}\ kg$ (Hughes & McBride, 1989; Jenniskens, 1994). Also, the estimated average mass-loss rate of $3\ {kg}/s$ is too small to replenish the Geminid meteor stream mass (Jewitt et al., 2013), if the dynamical lifetime of the Geminid meteor stream is ${\sim}10^{3}\ yr$ (Gustafson, 1989; Ryabova, 2007). While most of the proposed mechanisms were found incapable of producing dust tails, thermal fracture and cracking due to dehydration of surface materials might be reasonable processes to generate dust tails (Jewitt & Li, 2010; Jewitt, 2012; Li & Jewitt, 2013). Radar observations during the 2017 apparition revealed Phaethon's shape. Taylor et al. (2019) reported that Phaethon may have an oblate shape with an equatorial ridge, the so-called top shape. The equivalent diameter of this asteroid was estimated to be $6\ km$.
Examples of top-shaped asteroids include OSIRIS-REx's target (101955) Bennu (Lauretta et al., 2019) and Hayabusa2's target (162173) Ryugu (Watanabe et al., 2019; Sugita et al., 2019; Kitazato et al., 2019). Phaethon is currently spinning at a rotation period of $3.60\ hr$ (Taylor et al., 2019). Its radar albedo is reported to be the lowest among the cataloged near-Earth asteroids (Taylor et al., 2019), implying that its spectral type is consistent with a B-type (Taylor et al., 2019), and thus the bulk density may be as low as ${\sim}1.0\ g/{cm}^{3}$ (Scheeres et al., 2019; Watanabe et al., 2019). We hypothesize that the mass shedding activities of Phaethon may have been enhanced by fast rotation, given recent work arguing that the equatorial ridges of top-shaped asteroids evolved through rotationally driven reshaping (Walsh et al., 2008, 2012). We propose that structural failure on the surface and/or inside the body, triggered by fast rotation, plays an important role in the mass shedding mechanism. This study provides better insights into the physical activities of Phaethon to support DESTINY+, a planned flyby mission concept led by the Japan Aerospace Exploration Agency (Arai et al., 2018).

## 2 Semi-analytical model for structural failure in a top-shaped body

Phaethon was reported to be a top-shaped body. Recent work has shown that the global failure condition in uniformly rotating top shapes can be roughly determined by assuming that they are triaxial ellipsoids (Hirabayashi, 2015). For Phaethon, the shape is not well known at present, while an effort to derive it from radar observation data is being made (Taylor, personal communication). Therefore, a simplified model that uses a triaxial ellipsoid can still reasonably capture structural failure in this asteroid. We note that heterogeneity in structural failure may be critical once the detailed shape of this asteroid is considered (Hirabayashi & Scheeres, 2019). We consider that Phaethon has a mean equatorial diameter of $6.25\ km$ and an oblateness (the ratio of the semi-minor axis to the semi-major axis) of $0.889$, which is the same as Bennu's (Barnouin et al., 2019). We note that Taylor et al. (2019) did not specify the oblateness of Phaethon but implied that it would be similar to that of Bennu. For the oblateness, we do not account for the semi-intermediate axis, to simplify the discussion. We assume that Phaethon's structure is uniform because the internal condition is unknown. We consider three values of the bulk density, $0.5\ g/{cm}^{3}$, $1.0\ g/{cm}^{3}$, and $1.5\ g/{cm}^{3}$, the average of which is consistent with that of a B-type asteroid (Scheeres et al., 2019). Below, we denote the oblateness, the bulk density, and the gravitational constant as $\epsilon$, $\rho$, and $G$, respectively. Phaethon is assumed to be rotating about the maximum principal axis. We define a three-dimensional Cartesian coordinate system such that the $z$ axis is aligned with the rotation axis, and the $x$ and $y$ axes are along the maximum and intermediate moment of inertia axes, respectively, in the equatorial plane. Using these definitions, we compute how Phaethon experiences structural failure at a given rotation period, $P$. To determine the failure condition of Phaethon, we extend the technique of Hirabayashi (2015), who only considered a sphere. We analyze when the stress field in a given element reaches its yield condition.
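Before setting up the stress model, the critical rotation periods quoted in Section 3 below ($4.83$, $3.42$, and $2.79\ hr$) can be sanity-checked with the classical spherical spin limit $P_{c}\approx\sqrt{3\pi/(G\rho)}$, obtained by equating centrifugal and gravitational acceleration at the equator of a homogeneous sphere. The snippet below is only a rough sketch under that spherical assumption, not the authors' calculation; the ellipsoidal values in the paper come out a few percent longer.

```python
import math

G = 6.6738e-11  # m^3 kg^-1 s^-2, the value used in Table 1

def critical_period_sphere(rho_g_cm3):
    """Spin period at which a particle on the equator of a homogeneous
    sphere is lifted off: omega_c^2 R = G M / R^2 => P_c = sqrt(3 pi / (G rho))."""
    rho = rho_g_cm3 * 1000.0  # g/cm^3 -> kg/m^3
    return math.sqrt(3.0 * math.pi / (G * rho)) / 3600.0  # hours

for rho in (0.5, 1.0, 1.5):
    print(f"rho = {rho} g/cm^3 : P_c ~ {critical_period_sphere(rho):.2f} hr")
# prints ~4.67, ~3.30, ~2.70 hr, close to the 4.83, 3.42, 2.79 hr
# reported below for the oblate ellipsoid
```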
In this model, the material behavior is assumed to be elastic-perfectly plastic: a plastic flow begins once the stress reaches its yield condition, without material hardening or softening. To describe such material behavior, we use the following material properties: Poisson's ratio, $\nu$; Young's modulus, $E$; the angle of friction, $\phi$; and the cohesive strength, $Y$. It is worth noting that the evolution of plastic deformation is a function of the loading path. However, our purpose is not to track plastic deformation but to determine which element would first reach its plastic state in a quasi-static condition, where deformation is small enough that the time variation is negligible (Hirabayashi, 2015). Although we follow the terminology for the cohesive strength used by Hirabayashi (2015), we reintroduce it here to facilitate the following discussion. We compute the minimum cohesive strength that can prevent structural failure of a given element in an asteroid rotating at $P$. We call this strength the 'critical cohesive strength' and denote it as $Y^{*}$. On the other hand, we use 'actual cohesive strength' for an assumed strength that an asteroid may have; we denote this as $Y$. Also, the critical rotation period $P_{c}$ is the rotation period at which a small particle on the equatorial surface of Phaethon gains a centrifugal acceleration larger than the gravitational acceleration and is lifted off the surface.

### 2.1 Stress field computation

Similar to Hirabayashi et al. (2015) and Hirabayashi (2015), we apply a technique by Dobrovolskis (1982) and Holsapple (2001) to obtain the elastic stress in a triaxial ellipsoid that is uniformly spinning at a rotation period of $P$. While the details are found in Dobrovolskis (1982), we briefly introduce the formulation here. The displacement $u$ in Cartesian coordinates can be expressed in terms of twelve unknown constants $A$ through $L$:

$u_{x}=x\left[A+B\frac{x^{2}}{a^{2}}+C\frac{y^{2}}{b^{2}}+D\frac{z^{2}}{c^{2}}\right],$ (1)

$u_{y}=y\left[E+F\frac{x^{2}}{a^{2}}+G\frac{y^{2}}{b^{2}}+H\frac{z^{2}}{c^{2}}\right],$ (2)

$u_{z}=z\left[I+J\frac{x^{2}}{a^{2}}+K\frac{y^{2}}{b^{2}}+L\frac{z^{2}}{c^{2}}\right],$ (3)

where $a$, $b$, and $c$ $(a\geq b\geq c)$ are the principal semi-axes of the triaxial ellipsoid. The strain is obtained by

$\epsilon_{ij}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right).$ (4)

Applying Hooke's law, the stress tensor is also expressed in terms of $A$ through $L$:

$\sigma_{ij}=\lambda\epsilon_{kk}\delta_{ij}+2\mu\epsilon_{ij},$ (5)

where $\epsilon_{kk}=\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz}$ and $\delta_{ij}$ is the Kronecker delta. $\lambda$ and $\mu$ are the Lamé constants, obtained from

$\lambda=\frac{E\nu}{(1+\nu)(1-2\nu)},$ (6)

$\mu=\frac{E}{2(1+\nu)}.$ (7)
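The algebra behind Equations (1)-(7) is mechanical, and as a sketch (not the authors' code) it can be reproduced symbolically; the snippet below builds the strain tensor of Equation (4) and the stress of Equation (5) from the displacement ansatz, so that each $\sigma_{ij}$ comes out linear in the unknowns $A$ through $L$.

```python
import sympy as sp

# Coordinates, semi-axes, and the twelve unknown constants A..L
x, y, z, a, b, c = sp.symbols('x y z a b c', positive=True)
A, B, C, D, E, F, G, H, I, J, K, L = sp.symbols('A B C D E F G H I J K L')

# Displacement ansatz of Equations (1)-(3)
u = sp.Matrix([
    x*(A + B*x**2/a**2 + C*y**2/b**2 + D*z**2/c**2),
    y*(E + F*x**2/a**2 + G*y**2/b**2 + H*z**2/c**2),
    z*(I + J*x**2/a**2 + K*y**2/b**2 + L*z**2/c**2),
])

coords = (x, y, z)
# Small-strain tensor, Equation (4)
eps = sp.Matrix(3, 3, lambda i, j:
                sp.Rational(1, 2)*(sp.diff(u[i], coords[j]) + sp.diff(u[j], coords[i])))

lam, mu = sp.symbols('lambda mu', positive=True)  # Lame constants, Eqs. (6)-(7)
# Hooke's law, Equation (5)
sigma = lam * eps.trace() * sp.eye(3) + 2*mu*eps
print(sp.expand(sigma[0, 0]))  # sigma_xx, linear in A..L as expected
```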
To determine the twelve unknown constants, we must impose twelve linearly independent relations. The stresses $\sigma_{ij}$ in a body in equilibrium under body forces must satisfy the stress equilibrium equations

$\frac{\partial}{\partial x_{j}}\sigma_{ji}=-\rho b_{i},$ (8)

where $b_{i}$ is the body force, which in our problem is driven by gravitational and rotational effects. Equation (8) provides three of the twelve required relations. The remaining nine relations are imposed by the traction-free boundary condition

$\sigma_{ij}n_{j}=0,$ (9)

where $n_{j}$ is the unit normal to the surface, given by

$n_{x}=\frac{x}{a^{2}w},$ (10)

$n_{y}=\frac{y}{b^{2}w},$ (11)

$n_{z}=\frac{z}{c^{2}w},$ (12)

where

$w=\left(\frac{x^{2}}{a^{4}}+\frac{y^{2}}{b^{4}}+\frac{z^{2}}{c^{4}}\right)^{1/2}.$ (13)

The twelve unknown constants are hence determined by these twelve linearly independent relations.

### 2.2 Structural failure condition

Once the stress field is obtained as in the previous section, we use it to determine the structural failure condition of a given element. Here, we apply the Drucker-Prager yield criterion, a smooth approximation of the Mohr-Coulomb yield criterion (Chen & Han, 1988):

$f=\alpha I_{1}+\sqrt{J_{2}}-s\leq 0.$ (14)

$I_{1}$ and $J_{2}$ are the stress invariants

$I_{1}=\sigma_{1}+\sigma_{2}+\sigma_{3},$ (15)

$J_{2}=\frac{1}{6}\{(\sigma_{1}-\sigma_{2})^{2}+(\sigma_{2}-\sigma_{3})^{2}+(\sigma_{3}-\sigma_{1})^{2}\},$ (16)

where $\sigma_{i}$ ($i=1,2,3$) are the principal stress components, which can be obtained from the stress field derived above. $\alpha$ and $s$ are material constants given by (Chen & Han, 1988)

$\alpha=\frac{2\sin{\phi}}{\sqrt{3}(3-\sin{\phi})},$ (17)

$s=\frac{6Y\cos{\phi}}{\sqrt{3}(3-\sin{\phi})}.$ (18)

From the equality condition of Equation (14), we obtain the following expression for $Y^{*}$:

$Y^{*}=\frac{\sqrt{3}(3-\sin{\phi})}{6\cos{\phi}}\left(\alpha I_{1}+\sqrt{J_{2}}\right).$ (19)

If Equation (19) becomes negative, we set $Y^{*}$ to $0\ Pa$, meaning that no strength is necessary for the element to keep the original shape.
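Equations (14)-(19) translate directly into code. The following is a minimal sketch (not the authors' implementation) of the critical cohesive strength for a single element given its principal stresses; the stress values in the usage line are hypothetical and for illustration only.

```python
import numpy as np

def critical_cohesion(sigma1, sigma2, sigma3, phi_deg=35.0):
    """Critical cohesive strength Y* (Pa) from Equations (14)-(19),
    given the three principal stresses (Pa) at an element."""
    phi = np.radians(phi_deg)
    alpha = 2.0*np.sin(phi) / (np.sqrt(3.0)*(3.0 - np.sin(phi)))   # Eq. (17)
    I1 = sigma1 + sigma2 + sigma3                                  # Eq. (15)
    J2 = ((sigma1 - sigma2)**2 + (sigma2 - sigma3)**2
          + (sigma3 - sigma1)**2) / 6.0                            # Eq. (16)
    Ystar = (np.sqrt(3.0)*(3.0 - np.sin(phi)) / (6.0*np.cos(phi))
             * (alpha*I1 + np.sqrt(J2)))                           # Eq. (19)
    return max(Ystar, 0.0)  # negative values mean no strength is needed

# Hypothetical stress state (Pa, compression negative), illustration only:
print(critical_cohesion(-10.0, -20.0, -200.0))  # ~38 Pa
```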
## 3 Results

We investigate the critical cohesive strength based on the following assumed parameters, $\nu=0.25$, $E=10^{7}\ Pa$, and $\phi=35\degree$, to keep our discussion consistent with earlier work (e.g., Hirabayashi & Scheeres 2015). We note that Young's modulus does not influence our stress field calculation (Love, 2013), and variations in Poisson's ratio and the angle of friction do not significantly affect our results for geological materials (Lambe & Whitman, 1969; Hirabayashi & Scheeres, 2019). The current rotation period is set to $3.60\ hr$ (Taylor et al., 2019). We consider bulk densities of $0.5$, $1.0$, and $1.5\ g/{cm}^{3}$. Table 1 lists the parameters considered in the current study. We find that the failure mode varies with the bulk density. We plot the distribution of $Y^{*}$ in the $x$-$z$ plane for different bulk densities in Figure 1. Panels a, b, and c describe bulk densities of $0.5$, $1.0$, and $1.5\ g/{cm}^{3}$, respectively. In this range of bulk density, the body needs a cohesive strength to keep its original shape. For the case of $\rho=0.5\ g/{cm}^{3}$, $P_{c}$ is found to be $4.83\ hr$. The current rotation period is therefore shorter than $P_{c}$, indicating that materials should be shed and that the body is highly sensitive to structural failure. Figure 1a shows $Y^{*}>0$ everywhere except the pole region. $Y^{*}$ is higher in the interior than on the surface, and its maximum value is $259\ Pa$ at the center. This indicates that the interior is more sensitive to structural failure than the surface. If the actual cohesive strength $Y$ is smaller than $Y^{*}$, the central region structurally fails first. For the case of $\rho=1.0\ g/{cm}^{3}$, $P_{c}$ is found to be $3.42\ hr$ and is shorter than the current period. However, the interior still exhibits high $Y^{*}$ over major regions (Panel b). The maximum value of $Y^{*}$ is $179\ Pa$ and is located at the center. For the case of $\rho=1.5\ g/{cm}^{3}$, $P_{c}$ is found to be $2.79\ hr$. Unlike the other two cases, the interior has $Y^{*}=0$ in most areas; however, high $Y^{*}$ (${\sim}50\ Pa$) is still distributed beneath the surface (Panel c). All three cases show the sensitivity of Phaethon to structural failure. The body needs a cohesive strength to keep its original condition without shape deformation. However, if there is a trigger of reshaping, it is likely that the deformation process would be enhanced by rotation, as seen from the derived sensitivity. Given the observed activities, the cohesive strength of Phaethon is ${\sim}50\ Pa\ -\ {\sim}260\ Pa$, depending on the bulk density. This range is consistent with that of observed small bodies (Scheeres & Sánchez, 2018).

Table 1: Parameter settings

| Parameter | Symbol | Value | Units |
|---|---|---|---|
| Gravitational constant | $G$ | $6.6738\times 10^{-11}$ | $m^{3}\,kg^{-1}\,s^{-2}$ |
| Semi-major axis | $a$ | 3200 | $m$ |
| Semi-minor axis | $c$ | 2847 | $m$ |
| Oblateness | $\epsilon$ | 0.889 | - |
| Current rotation period | $P$ | 3.60 | $hr$ |
| Bulk density | $\rho$ | 0.5, 1.0, 1.5 | $g\,cm^{-3}$ |
| Poisson's ratio | $\nu$ | 0.25 | - |
| Elastic modulus | $E$ | $10^{7}$ | $Pa$ |
| Friction angle | $\phi$ | 35 | $deg$ |

Figure 1: Distribution of $Y^{*}$ in the $x$-$z$ plane. The rotation period is $3.60\ hr$. Panels a, b, and c describe bulk densities of $0.5$, $1.0$, and $1.5\ g/{cm}^{3}$, respectively.

## 4 Discussion

Generation of dust tails at present. During the perihelion passages in 2009 and 2012, observations of Phaethon's dust tails revealed that this asteroid is an active asteroid (Jewitt & Li, 2010; Li & Jewitt, 2013). While the detailed mechanisms are not well known, thermal waves may be one plausible explanation for the reported mass shedding activities (Li & Jewitt, 2013). At the current rotation period of $3.60\ hr$, the body is sensitive to structural failure regardless of the bulk density and thus needs cohesive strength to maintain its shape. The derived cohesive strength of Phaethon is less than ${\sim}50\ Pa\ -\ {\sim}260\ Pa$, which is consistent with that of small bodies, ranging up to a few hundred pascals (Scheeres & Sánchez, 2018). We interpret this sensitivity as a potential enhancement of mass shedding. If there is a trigger of reshaping, even at small scales, the structure of Phaethon would be perturbed, leading to rotationally driven reshaping at larger scales. Such a trigger can be thermal waves in thin surface layers (Jewitt & Li, 2010; Jewitt, 2012; Li & Jewitt, 2013). Micrometeoroid impacts or other processes may also be possible, as seen on Bennu (Lauretta et al., 2019), although thermal waves, again, are more consistent with the activities of Phaethon around its perihelion passage (Li & Jewitt, 2013).

Possible source of the Geminid meteor stream. We expect that rotationally induced structural failure makes Phaethon more oblate, i.e., $\epsilon$ becomes lower (Hirabayashi, 2015). Because the angular momentum is constant during deformation, Phaethon may have been less oblate and rotated faster at an earlier stage, before it had large deformation. Figure 2 shows the dependence of $P$ on $\epsilon$ and $P_{c}$ for different bulk densities. We find that if Phaethon was less oblate, the rotation period was shorter, and thus $Y^{*}$ should have been higher at that shorter rotation period. If Phaethon is a sphere ($\epsilon=1.0$; $a=b=c$), which is the shape condition least affected by rotation, the rotation period should become $P=3.38\ hr$.
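As a back-of-envelope check on this angular-momentum argument (a sketch only, not the authors' calculation), one can hold the mass and volume fixed, take $I_{z}=\frac{2}{5}Ma^{2}$ for an oblate spheroid spinning about its symmetry axis, and conserve $L=I_{z}\omega$. This simple estimate gives $\approx 3.3\ hr$ at $\epsilon=1.0$, close to the $3.38\ hr$ quoted above; the small difference presumably reflects the more detailed assumptions behind Figure 2, which are not fully specified here.

```python
import math

a, c = 3200.0, 2847.0   # current semi-axes (m), Table 1
P_now = 3.60            # current rotation period (hr)

# Volume-equivalent sphere radius (mass and bulk density held fixed)
R = (a * a * c) ** (1.0 / 3.0)

# Oblate spheroid about its symmetry axis: I_z = (2/5) M a^2, so
# conservation of L = I_z * omega gives P_sphere = P_now * (R/a)**2.
P_sphere = P_now * (R / a) ** 2
print(f"P at epsilon = 1.0 : {P_sphere:.2f} hr")  # ~3.33 hr
```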
Figure 3 shows the distribution of $Y^{*}$ for this case. Features similar to those in Figure 1 are evident; however, $Y^{*}$ is higher, implying that Phaethon should have been more sensitive to structural failure at a shorter rotation period. Therefore, the failure mode may have been more severe, and more material should have been shed at a shorter rotation period.

Figure 2: Rotation period as a function of the oblateness (the blue solid line). $P=3.60\ hr$ when $\epsilon=0.889$ and $P=3.38\ hr$ when $\epsilon=1.0$. The dotted lines indicate $P_{c}$ for different bulk densities: the red, green, and magenta lines show bulk densities of $0.5$, $1.0$, and $1.5\ g/{cm}^{3}$, respectively.

Figure 3: Distribution of $Y^{*}$ in the $x$-$z$ plane. The rotation period is $3.38\ hr$. Panels a, b, and c describe bulk densities of $0.5$, $1.0$, and $1.5\ g/{cm}^{3}$, respectively.

From these results, we propose a possible evolution scenario of Phaethon (Figure 4). Phaethon was originally less oblate and spinning at a shorter rotation period than the current period. This stage was before the Geminid meteor stream was generated. Possible initiation processes, such as thermal waves during Phaethon's perihelion passages, triggered reshaping, and rotational deformation enhanced this reshaping process significantly. Because the rotation period at this stage was closer to or above $P_{c}$, the reshaping process caused mass shedding at large scale, which became a source of the Geminid meteor stream. Thus, the current oblate shape may be a remnant of this large deformation event. When the oblateness evolved, the rotation of Phaethon slowed down. However, the structure was still sensitive to failure. Given similar perturbations such as thermal waves at present, rotationally driven failure can still be triggered; however, because the centrifugal effect is less significant, the magnitude of mass shedding is less intense than in the past. While we cannot strongly constrain whether the generation of the Geminid meteor stream was a single event or episodic, our study gives some hints of large-scale mass shedding processes as a source of the Geminid meteor stream. Because the generation of the Geminid meteor stream may have occurred within the last $1\ ka$ (Gustafson, 1989; Williams & Wu, 1993; Ryabova, 2007), the large-scale reshaping and mass shedding processes may have occurred on this timescale. To be completed within the last $1\ ka$, these processes should be more rapid and intense than YORP-driven evolution, which may take ${\sim}1\ Ma$ based on earlier work (Čapek & Vokrouhlický, 2004; Bottke et al., 2006). Given this short timescale, a possible explanation of the deformation mode may be internal failure, which can provide large-scale deformation, and thus mass shedding, at fast rotation (Hirabayashi, 2015). We note that the total mass of the Geminid meteor stream is about $2.5\%$ of that of Phaethon. If a mass shedding event that can produce the same magnitude as the Geminid meteor stream occurred every $1\ ka$, the lifetime of Phaethon would only be ${\sim}40\ ka$, which may be much shorter than that predicted by the orbital evolution, ${\sim}26\ Ma$ (De León et al., 2010). While there is no decisive evidence, the YORP effect may give some insights into this discrepancy.
The YORP evolution timescale of Phaethon may become longer owing to many different factors, such as the mass, shape, surface composition, and stochastic evolution (Bottke et al., 2015). Furthermore, its highly eccentric orbit, $e=0.890$, with a small perihelion distance of 0.14 AU (JPL Small-Body Database), may give strong variations in the solar radiation acting on Phaethon. Thus, it may be possible that Phaethon stochastically spun up under the YORP effect over its entire orbital age, ${\sim}26\ Ma$ (De León et al., 2010), and recently experienced the large-scale mass shedding that formed the Geminid meteor stream. Then, a large mass shedding event that produced the Geminid meteor stream may have decelerated Phaethon's spin, but the spin state after this event may still have been high enough to allow mass shedding events at a small level, similar to what was observed in 2009 and 2012 (Jewitt & Li, 2010; Jewitt, 2012; Li & Jewitt, 2013). Fully addressing the detailed timescale of the rotationally driven reshaping process is beyond the scope of this study; we leave this problem as future work. Finally, we assumed that Phaethon currently has a top shape with Bennu's oblateness, $\epsilon=0.889$, following Taylor et al. (2019). To check whether this setting is consistent with other top-shaped asteroids, we considered six top-shaped near-Earth asteroids: Ryugu (Watanabe et al., 2019), 1994 KW4 (Ostro et al., 2006), 2008 EV5 (Busch et al., 2011), 1994 CC (Brozović et al., 2011), 2001 SN263 (Becker et al., 2015), and 2000 DP107 (Naidu et al., 2015). We find the range of $\epsilon$ to be $0.873$ – $0.968$, and thus Bennu's $\epsilon$ is within this range. We conducted the same analyses for these objects and found trends similar to Bennu's. Variation in the oblateness at this magnitude does not significantly affect the results for $Y^{*}$; the change is less than 12%. Therefore, we conclude that our oblateness setting is meaningful for capturing a possible scenario of Phaethon's activities.

Figure 4: Possible evolution scenario of Phaethon. Phaethon has a highly eccentric orbit ($e=0.890$) with a small perihelion distance of 0.14 AU. At an earlier stage, Phaethon was less oblate and rotating faster, leading to mass shedding at larger scale. At both stages, perturbations such as thermal waves may be a possible trigger for mass shedding.

Potential issues. We finally address issues in our analysis approach. This study explored Phaethon's rotationally induced structural failure by modeling Phaethon as a triaxial ellipsoid. We did not account for local topographic features; therefore, our semi-analytical model does not capture local deformations, which may differ from global deformations (Hirabayashi & Scheeres, 2019; Hirabayashi et al., 2019). However, Hirabayashi (2015), who compared an analytical solution and an FEM solution, found no significant variation between the two. Therefore, we conclude that our semi-analytical model can provide solutions with reasonable accuracy. It is our future work to perform an FEM analysis using a shape model to investigate local deformations. Another issue to be addressed is that our model does not assert the detailed failure mode. While numerous studies have been undertaken, it is still not well determined how top-shaped asteroids deform. Hirabayashi (2015) showed that, depending on the internal structure and the bulk density, the failure mode may differ – either surface processing or internal deformation.
However, because our semi-analytical model can only describe a homogeneous structure, we cannot infer how Phaethon's deformation is controlled by heterogeneity. Therefore, the detailed failure mode cannot be specified. If Phaethon's heterogeneous structure is revealed, we will need to employ a different approach (e.g., Hirabayashi et al. 2015). Many unknowns remain regarding Phaethon's physical properties. We will further elaborate our approach to give constraints on the activity of Phaethon. Concurrently, further constraints are vitally important to support DESTINY+.

RN and MH acknowledge support from NASA/Solar System Workings (NNH17ZDA001N/80NSSC19K0548) and the Auburn University Intramural Grant Program.

## References

* Arai et al. (2018) Arai, T., Kobayashi, M., Ishibashi, K., & Yoshida, F. 2018, in 49th LPSC
* Barnouin et al. (2019) Barnouin, O. S., Daly, M. G., Palmer, E. E., et al. 2019, Nature Geoscience, 12, 247, doi: 10.1038/s41561-019-0330-x
* Becker et al. (2015) Becker, T. M., Howell, E. S., Nolan, M. C., et al. 2015, Icarus, 248, 499, doi: 10.1016/j.icarus.2014.10.048
* Bottke et al. (2006) Bottke, W. F., Vokrouhlický, D., Rubincam, D. P., & Nesvorný, D. 2006, Annual Review of Earth and Planetary Sciences, 34, 157, doi: 10.1146/annurev.earth.34.031405.125154
* Bottke et al. (2015) Bottke, W. F., Vokrouhlický, D., Walsh, K. J., et al. 2015, Icarus, 247, 191, doi: 10.1016/j.icarus.2014.09.046
* Brozović et al. (2011) Brozović, M., Benner, L. A., Taylor, P. A., et al. 2011, Icarus, 216, 241, doi: 10.1016/j.icarus.2011.09.002
* Busch et al. (2011) Busch, M. W., Ostro, S. J., Benner, L. A., et al. 2011, Icarus, 212, 649, doi: 10.1016/j.icarus.2011.01.013
* Čapek & Vokrouhlický (2004) Čapek, D., & Vokrouhlický, D. 2004, Icarus, 172, 526, doi: 10.1016/j.icarus.2004.07.003
* Chamberlin et al. (1996) Chamberlin, A. B., McFadden, L. A., Schulz, R., Schleicher, D. G., & Bus, S. J. 1996, Icarus, 119, 173, doi: 10.1006/icar.1996.0009
* Chen & Han (1988) Chen, W.-F., & Han, D.-J. 1988, Plasticity for Structural Engineers (Springer-Verlag; reprinted 2007, J. Ross Publishing)
* Cochran & Barker (1984) Cochran, A. L., & Barker, E. S. 1984, Icarus, 59, 296, doi: 10.1016/0019-1035(84)90029-0
* De León et al. (2010) De León, J., Campins, H., Tsiganis, K., Morbidelli, A., & Licandro, J. 2010, Astronomy and Astrophysics, 513, 1, doi: 10.1051/0004-6361/200913609
* Dobrovolskis (1982) Dobrovolskis, A. R. 1982, Icarus, 52, 136, doi: 10.1016/0019-1035(82)90174-9
* Gustafson (1989) Gustafson, B. A. S. 1989, Astronomy and Astrophysics, 225, 533
* Hirabayashi (2015) Hirabayashi, M. 2015, Monthly Notices of the Royal Astronomical Society, 454, 2249, doi: 10.1093/mnras/stv2017
* Hirabayashi et al. (2015) Hirabayashi, M., Sánchez, D. P., & Scheeres, D. J. 2015, Astrophysical Journal, 808, 63, doi: 10.1088/0004-637X/808/1/63
* Hirabayashi & Scheeres (2015) Hirabayashi, M., & Scheeres, D. J. 2015, Astrophysical Journal Letters, 798, doi: 10.1088/2041-8205/798/1/L8
* Hirabayashi & Scheeres (2019) Hirabayashi, M., & Scheeres, D. J. 2019, Icarus, 317, 354, doi: 10.1016/j.icarus.2018.08.003
* Hirabayashi et al. (2019) Hirabayashi, M., Tatsumi, E., Miyamoto, H., et al. 2019, The Astrophysical Journal Letters, 874, L10
* Holsapple (2001) Holsapple, K. A. 2001, Icarus, 154, 432, doi: 10.1006/icar.2001.6683
* Hsieh & Jewitt (2005) Hsieh, H. H., & Jewitt, D. 2005, The Astrophysical Journal, 624, 1093, doi: 10.1086/429250
* Hughes & McBride (1989) Hughes, D. W., & McBride, N.
1989, Monthly Notices of the Royal Astronomical Society, 240, 73, doi: 10.1093/mnras/240.1.73
* Jenniskens (1994) Jenniskens, P. 1994, Astronomy and Astrophysics, 287, 990
* Jewitt (2012) Jewitt, D. 2012, Astronomical Journal, 143, doi: 10.1088/0004-6256/143/3/66
* Jewitt & Li (2010) Jewitt, D., & Li, J. 2010, Astronomical Journal, 140, 1519, doi: 10.1088/0004-6256/140/5/1519
* Jewitt et al. (2013) Jewitt, D., Li, J., & Agarwal, J. 2013, Astrophysical Journal Letters, 771, 1, doi: 10.1088/2041-8205/771/2/L36
* Kitazato et al. (2019) Kitazato, K., Milliken, R. E., Iwata, T., et al. 2019, Science, 364, 272, doi: 10.1126/science.aav7432
* Lambe & Whitman (1969) Lambe, W., & Whitman, R. 1969, Soil Mechanics (Wiley)
* Lauretta et al. (2019) Lauretta, D. S., Hergenrother, C. W., Chesley, S. R., & Leonard, J. M. 2019, Science, 366, doi: 10.1126/science.aay3544.1
* Li & Jewitt (2013) Li, J., & Jewitt, D. 2013, Astronomical Journal, 145, doi: 10.1088/0004-6256/145/6/154
* Love (2013) Love, A. E. H. 2013, A Treatise on the Mathematical Theory of Elasticity (Cambridge University Press)
* Naidu et al. (2015) Naidu, S. P., Margot, J. L., Taylor, P. A., et al. 2015, Astronomical Journal, 150, 54, doi: 10.1088/0004-6256/150/2/54
* Ostro et al. (2006) Ostro, S. J., Margot, J. L., Benner, L. A. M., Giorgini, J. D., & Scheeres, D. J. 2006, Nature, 314, 1276
* Ryabova (2007) Ryabova, G. O. 2007, Monthly Notices of the Royal Astronomical Society, 375, 1371, doi: 10.1111/j.1365-2966.2007.11392.x
* Scheeres & Sánchez (2018) Scheeres, D. J., & Sánchez, P. 2018, Progress in Earth and Planetary Science, 5, doi: 10.1186/s40645-018-0182-9
* Scheeres et al. (2019) Scheeres, D. J., McMahon, J. W., French, A. S., et al. 2019, Nature Astronomy, 3, 352, doi: 10.1038/s41550-019-0721-3
* Sugita et al. (2019) Sugita, S., Honda, R., Morota, T., et al. 2019, Science, 364, doi: 10.1126/science.aaw0422
* Taylor et al. (2019) Taylor, P. A., Rivera-Valentín, E. G., Benner, L. A., et al. 2019, Planetary and Space Science, 167, 1, doi: 10.1016/j.pss.2019.01.009
* Walsh et al. (2008) Walsh, K. J., Richardson, D. C., & Michel, P. 2008, Nature, 454, 188, doi: 10.1038/nature07078
* Walsh et al. (2012) Walsh, K. J., Richardson, D. C., & Michel, P. 2012, Icarus, 220, 514, doi: 10.1016/j.icarus.2012.04.029
* Watanabe et al. (2019) Watanabe, S., Hirabayashi, M., Hirata, N., et al. 2019, Science, 272, eaav8032, doi: 10.1126/science.aav8032
* Whipple (1983) Whipple, F. 1983, International Astronomical Union Circular, 3881
* Wiegert et al. (2008) Wiegert, P. A., Houde, M., & Peng, R. 2008, Icarus, 194, 843, doi: 10.1016/j.icarus.2007.12.013
* Williams & Wu (1993) Williams, I. P., & Wu, Z. 1993, Monthly Notices of the Royal Astronomical Society
# A 1000-fold Acceleration of Hidden Markov Model Fitting using Graphical Processing Units, with application to Nonvolcanic Tremor Classification.

Marnus Stoltz1, Gene Stoltz2,3, Kazushige Obara4, Ting Wang1, David Bryant1

$1.$ Department of Mathematics and Statistics, University of Otago, New Zealand $2.$ Council for Scientific and Industrial Research of South Africa, Pretoria, South Africa $3.$ Department of Electronic Engineering, University of Johannesburg, South Africa $4.$ Earthquake Research Institute, University of Tokyo, Japan $*$ Corresponding author<EMAIL_ADDRESS>

###### Abstract

Hidden Markov models (HMMs) are general-purpose models for time-series data widely used across the sciences because of their flexibility and elegance. However, fitting HMMs can be computationally demanding and time consuming, particularly when the number of hidden states is large or the Markov chain itself is long. Here we introduce a new graphics processing unit (GPU) based algorithm designed to fit long-chain HMMs, applying our approach to an HMM for nonvolcanic tremor events developed by Wang et al. (2018). Even on a modest GPU, our implementation resulted in a 1000-fold increase in speed over the standard single-processor algorithm, allowing a full Bayesian inference of uncertainty related to model parameters. Similar improvements would be expected for HMM models with a large number of observations and moderate state spaces (<80 states with current hardware). We discuss the model, the general GPU architecture and algorithms, and report the performance of the method on a tremor dataset from the Shikoku region, Japan.

Keywords— Bayesian inference, Computational hardware, Seismology, Algorithm design.

## Introduction

Slow slip events (SSEs), a type of slow earthquake, play an important role in releasing strain energy in subduction zones, the regions where one tectonic plate moves underneath another tectonic plate and sinks. It is currently understood that SSEs occur as shear slips at the bottom tip of subduction zones, in the transition between a fixed region above and a slipping region below (Beroza and Ide, 2011). Recent evidence suggests that nonvolcanic tremors are observed in close association with SSEs; however, the causal relationship between the two phenomena is not yet well understood. Classifying nonvolcanic tremors helps to better understand this link but can be time consuming when, as is typical, it is done by hand. Recently, an automated procedure was developed by Wang et al. (2018) to classify spatio-temporal migration patterns of nonvolcanic tremors. The procedure classifies tremor source regions into distinct segments in 2-D space using a hidden Markov model. In Wang et al. (2018) the model is fitted using the Expectation-Maximisation algorithm; here we implement a Bayesian approach. However, fitting the model in either a frequentist or a Bayesian framework is extremely demanding computationally, often taking days or weeks for large datasets with moderate state spaces. Fortunately, technological advances in hardware have the potential to solve this issue. Specifically, we make use of fast and affordable graphics processing units (GPUs). In recent years HMM algorithms on GPUs have been implemented in various fields.
A non-exhaustive list includes implementations in bioinformatics (Yao et al., 2010), speech recognition (Yu et al., 2015), a registered patent in speech matching (Chong et al., 2014), and workload classification (Cuzzocrea et al., 2016), as well as HMMer (Horn et al., 2005), an open-source project for use with protein databases. These HMM implementations are application-specific, often with a large number of states, and mostly focus on increasing the throughput of the Viterbi and Baum-Welch algorithms (Zhang et al., 2009; Li et al., 2009; Liu, 2009). This leads to a range of concurrent approaches. Here we focus on the efficient implementation of the forward algorithm of an HMM given a large number of observations and a moderate number of states. The outline of the paper is as follows. In Section 2 we describe the HMM for classifying nonvolcanic tremors and discuss the likelihood algorithm in serial and parallel contexts. Thereafter we give details of the OpenCL implementation of the parallel likelihood algorithm. In Section 3 we discuss the performance of the OpenCL implementation and compare it to the standard forward algorithm. In Section 4 we report our analysis of a large tremor dataset from the Shikoku region, Japan.

## An HMM for classifying nonvolcanic tremors

Nonvolcanic tremor activity is clustered spatially, and each spatial cluster seems to recur episodically. To represent this phenomenon using an HMM, Wang et al. (2018) introduce one hidden state for each spatial cluster. The tremors themselves (including the absence of a tremor) are the observations. The frequency and spatial distribution of tremors change according to the hidden state. More formally, we suppose that the observations of nonvolcanic tremors are a sample path of a stochastic process $\{X_{i}\}_{i=0,\dots,N}$ taking values in the state space $I=\{\emptyset,\mathbb{R}^{2}\}$, generated under an HMM with $K$ numbered hidden states. For each hidden state $k=1,\dots,K$ we introduce parameters $p_{k}$, $\bm{\mu}^{(k)}$ and $\mathbf{\Sigma}^{(k)}$, where $p_{k}$ is the probability of observing a tremor and $\bm{\mu}^{(k)}$, $\mathbf{\Sigma}^{(k)}$ are the mean and variance of a bivariate normal distribution modelling where a tremor is likely to occur, if it does occur. To simplify notation we introduce, for each observation $\mathbf{x}$, a $K\times K$ diagonal matrix $\mathbf{P}(\mathbf{x})$, also called the emission matrix, with the $k$th diagonal element corresponding to the probability of observing $\mathbf{x}$ given state $k$:

$\bm{P}(\bm{x})_{kk}=\begin{cases}p_{k}\phi(\bm{x}|\bm{\mu}^{(k)},\mathbf{\Sigma}^{(k)}),&\text{if }\bm{x}\neq\emptyset\\ 1-p_{k},&\text{if }\bm{x}=\emptyset.\end{cases}$ (1)

Here $\phi(\cdot)$ is the density function of the bivariate normal distribution. Let $\mathbf{\Gamma}=(\Gamma_{ij})$ denote the $K\times K$ transition matrix of the HMM, where $\Gamma_{ij}$ is the transition probability from hidden state $i$ to hidden state $j$. Also, let $\bm{\delta}=(\delta_{1},\dots,\delta_{K})$ denote the vector of probabilities for the initial state.
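As a small illustration of Equation (1) (a sketch with made-up parameter values, not the authors' R/OpenCL code), the diagonal of the emission matrix can be computed as follows.

```python
import numpy as np
from scipy.stats import multivariate_normal

def emission_diag(x, p, mu, Sigma):
    """Diagonal of the K x K emission matrix P(x) of Equation (1).
    x is None when no tremor is observed, else a length-2 location."""
    if x is None:                          # no tremor this hour
        return 1.0 - np.asarray(p)
    return np.array([p[k] * multivariate_normal.pdf(x, mean=mu[k], cov=Sigma[k])
                     for k in range(len(p))])

# Two hypothetical hidden states, for illustration only:
p = [0.9, 0.1]
mu = [np.array([133.0, 33.5]), np.array([133.5, 33.8])]
Sigma = [0.01 * np.eye(2), 0.02 * np.eye(2)]
print(emission_diag(np.array([133.02, 33.52]), p, mu, Sigma))
print(emission_diag(None, p, mu, Sigma))
```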
Now the likelihood function for the parameters given the observed data can be written as

$\mathrm{L}\big{(}\mathbf{\Gamma},\bm{\delta},\{p_{k},\bm{\mu}^{(k)},\mathbf{\Sigma}^{(k)}\}_{k=1,\ldots,K}|\mathbf{x}_{0},\ldots,\mathbf{x}_{N}\big{)}=\bm{\delta}^{T}\mathbf{\Gamma}\mathbf{P}(\mathbf{x}_{0})\dots\mathbf{\Gamma}\mathbf{P}(\mathbf{x}_{N})\mathbf{1}.$ (2)

## GPU computing framework

GPUs have had a large impact across the statistical and computing sciences due to cost-effective parallelism (Kindratenko, 2014). However, in order to translate an algorithm from CPU to GPU, some careful consideration is needed in terms of:

1. Reducing latency (how to concurrently execute instructions on the GPU in order to optimise data throughput).
2. Managing memory (how to effectively distribute and utilise memory across processors to avoid bandwidth bottlenecks).
3. Designing algorithms that are robust to the varying GPU architectures between models and vendors, as well as to the rapidly changing landscape of computational hardware.

Frameworks like OpenCL and CUDA allow programmers to implement GPU algorithms with some level of generality. The implementation we describe here was carried out in the OpenCL framework. OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group; see https://www.khronos.org for more details on the organisation. The OpenCL framework consists of a host (CPU; terms in brackets relate to computation on GPU architecture) controlling one or more compute devices (we used a single GPU). Each compute device (GPU) is divided into compute units (streaming multiprocessors). Compute units are further divided into compute elements (microprocessors or cores). Each compute unit has access to the global memory of the compute device; this access, though, is slow. Each compute unit also has shared memory to allow efficient data exchange between compute elements. Each compute element has exclusive access to private memory (registers) for computation.

## The likelihood algorithm

### 4.1 Overview

Our implementation will work well on a range of GPU models. For our studies we used an NVIDIA GeForce GTX 1080 Ti GPU with 28 compute units (streaming multiprocessors), each with 48KB of shared memory, 128 compute elements (cores), and a register file that can contain up to 32,768 32-bit elements distributed across the compute elements (cores). For the host we used an Intel Core i7-7700K CPU at 4.20GHz. There are two main limitations for the OpenCL algorithm in terms of hardware specifications:

1. The number of registers per compute element.
2. The size of shared memory on a compute unit.

For example, given the hardware described above we have (32,768/128) = 256 registers per compute element. This implies that we can store up to roughly 200 32-bit matrix elements on a compute element (we also need some registers left over to store counters and other meta variables). Our implementation assumes that at least two matrix rows can fit into the registers of a compute element. This gives an upper limit on the number of hidden states of $K<100$. In order to efficiently distribute rows of a matrix and update matrix elements we need space for two matrices in the shared memory of the compute unit. Our configuration has 48KB of shared memory per compute unit, implying that we can fit a total of $(48\cdot 2^{10})/4=12288$ 32-bit matrix elements per compute unit. This gives a second upper limit on the number of hidden states of $K<80$.
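The two hardware limits above reduce to a few lines of arithmetic; the sketch below (illustrative only) reproduces them for the GTX 1080 Ti figures quoted in the text.

```python
# Register limit: a 32,768-entry register file shared by 128 cores
regs_per_core = 32768 // 128      # 256 registers per compute element
usable = 200                      # roughly, after reserving registers for
                                  # counters and other meta variables
print("two matrix rows need 2K <=", usable, "=> K <", usable // 2)

# Shared-memory limit: 48 KB of 4-byte words holding two K x K matrices
elements = (48 * 2**10) // 4      # 12,288 32-bit matrix elements
K_max = int((elements // 2) ** 0.5)
print("2K^2 <=", elements, "=> K <=", K_max)  # K <= 78, i.e. K < 80
```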
To handle a large number of states, alternative parallel computing strategies should be used (Horn et al., 2005; Yu et al., 2015). First we consider how the algorithm for the likelihood (2) would be implemented on a single processor unit. To avoid matrix-matrix multiplications, we would start with the stationary vector $\bm{\delta}$ on the left, and then sequentially multiply it by transition matrices and emission matrices:

Algorithm 1 The Forward algorithm on a CPU

1: procedure Compute-likelihood($\{p_{k},\bm{\mu}^{(k)},\mathbf{\Sigma}^{(k)}\}_{k=1,\ldots,K},\{\mathbf{x}_{0},\dots,\mathbf{x}_{N}\}$)
2:   $\bm{v}\leftarrow\bm{\delta}^{T}$
3:   for $k$ from $0$ to $N$ do
4:     Compute $\bm{P}(x_{k})$
5:     $\bm{v}\leftarrow\bm{v}\bm{\Gamma}$
6:     $\bm{v}\leftarrow\bm{v}\bm{P}(x_{k})$
7:   return $\bm{v}\bm{1}$

Running time is dominated by the matrix-vector multiplications in steps 5 and 6, taking $\mathcal{O}(K^{2})$ time per iteration. Hence the running time, or work, for this implementation is $\mathcal{O}(NK^{2})$.
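For reference, a minimal serial version of Algorithm 1 with the standard rescaling trick (the same device the GPU code uses in Step 3 below to avoid underflow) might look as follows; this is a sketch, not the authors' code.

```python
import numpy as np

def forward_loglik(delta, Gamma, emission_diags):
    """Serial Algorithm 1 with rescaling: returns log L of Equation (2).
    emission_diags[t] holds the diagonal of P(x_t) as a length-K array."""
    v = np.asarray(delta, dtype=float).copy()
    loglik = 0.0
    for d in emission_diags:
        v = (v @ Gamma) * d   # steps 5-6: v <- v Gamma P(x_t)
        s = v.sum()           # rescale to keep v well away from underflow
        loglik += np.log(s)
        v /= s
    return loglik             # log of delta^T Gamma P(x_0) ... Gamma P(x_N) 1
```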
Next we compare this with the parallel implementation. The overview of our implementation is as follows:

1. We compute all of the emission matrices $\bm{P}(x_{0}),\ldots,\bm{P}(x_{N})$ in parallel.
2. We then multiply the emission matrices by the transition matrix, all in parallel, storing the $N$ matrices $\bm{\Gamma}\bm{P}(x_{0}),\dots,\bm{\Gamma}\bm{P}(x_{N})$.
3. Instead of computing $\bm{\delta}^{T}\bm{\Gamma}\bm{P}(x_{0})\dots\bm{\Gamma}\bm{P}(x_{N})\bm{1}$ as a single sequence of vector-matrix multiplications, we multiply the matrices $(\bm{\Gamma}\bm{P}(x_{0})),\dots,(\bm{\Gamma}\bm{P}(x_{N}))$ together. This increases the work done: we are carrying out matrix-matrix multiplications instead of matrix-vector multiplications, but it allows us to spread the computation over multiple processors (a minimal sketch of this idea is given at the end of this section).

We now discuss steps (1) to (3) in greater detail.

### 4.2 Step 1: The emission probability evaluation on GPU

The goal in this step is to compute the emission matrices $\mathbf{P}(x_{i})$ for each observation $x_{i}$. The emission probability is defined by (1) and makes use of the parameters $p_{k},\bm{\Sigma}_{k},\bm{\mu}_{k}$ for each hidden state $k$. These parameters are initially copied to the registers of each core and remain there until all the data points have been evaluated. The compute elements work in parallel: each is allocated a data point $x_{i}$, uses the stored values to compute $\bm{P}(x_{i})$, and copies the computed diagonal matrix to global memory. Note that a compute element can request and copy the next data point at the same time as it processes the current data point. In this step there is no data sharing between compute elements, allowing for data-level parallelism. It is therefore more efficient to let the compute device compiler optimise the workload scheduling and data transfer between compute units, in order to fully utilise SIMD (single instruction, multiple data) instructions. Output from the compute elements is collected and copied to global memory to form the list of new inputs $\{\mathbf{\Gamma},\mathbf{P}(x_{0}),\dots,\mathbf{P}(x_{N})\}$ for the next kernel.

### 4.3 Step 2: The transmission-emission matrix multiplication on GPU

During the next step we compute $\bm{\Gamma}\bm{P}(x_{i})$ for all data points $x_{i}$, again in parallel. At this point we run into memory limitations. While the register file of a single compute element is large enough to store the diagonal matrix $\bm{P}(x_{i})$, it is not large enough to store the full transition matrix $\bm{\Gamma}$, nor the product matrix $\bm{\Gamma}\bm{P}(x_{i})$. The solution is to break down the multiplication of $\bm{\Gamma}$ and $\bm{P}(x_{i})$ by computing only a few rows at once. We query the register size of each compute element to determine how many rows of $\mathbf{\Gamma}$ can be copied. The rows remain in the registers until all data points have been evaluated. Thereafter the next set of rows is copied into the registers and the data points are evaluated again, until all the rows of $\mathbf{\Gamma}\mathbf{P_{r}}$ for $r=0,\dots,N$ have been computed. As $\bm{P}(x_{i})$ is diagonal, the product of rows of $\bm{\Gamma}$ with $\bm{P}(x_{i})$ is computed by simply rescaling the corresponding columns. The next diagonal matrix subset is requested while the subset for the current data point is being scaled. Again, there is no data sharing between compute elements, allowing for optimal data-level parallelism. Output from the compute elements is collected and a new list of inputs, namely $\{(\mathbf{\Gamma P_{0}}),\dots,(\mathbf{\Gamma P_{N}})\}$, is compiled for the final GPU kernel.

### 4.4 Step 3: The square matrix-chain multiplication on GPU

The third step is the most time-consuming, and also the most involved. The general idea is to avoid the long sequence of matrix-vector calculations $\bm{\delta}^{T}\mathbf{\Gamma}\mathbf{P}(x_{0})\dots\mathbf{\Gamma}\mathbf{P}(x_{N})\bm{1}$, which cannot be readily parallelized, by instead multiplying the matrices together in parallel. Our algorithm here roughly follows Masliah et al. (2016). Recall the general hierarchical structure of GPU calculations, as described above: the CPU controls the GPU; each compute device is divided into compute units (streaming multiprocessors); and compute units are further divided into compute elements (microprocessors or cores). The CPU is actually faster than the compute units for individual computations, the speed of GPUs being due to parallelism. Our algorithm takes advantage of all three levels. The sequence of matrices (known in computer science as a matrix chain) is divided into multiple segments, one for each compute unit. The compute units then carry out matrix multiplication directly, making use of multiple compute elements to share out the rows in each matrix-matrix computation. We then use the CPU to carry out the final sequence of matrix-vector computations, using the matrices returned by the compute units of the GPU. Note that in practice we compute $\log\mathrm{L}$ rather than $\mathrm{L}$, and shift the registers either up or down using the scale coefficients from the compute units to avoid underflow.
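The following NumPy sketch (illustrative only; the real kernels work on rows within compute units and carry scale coefficients, omitted here) shows why the segmentation is valid: by associativity, the per-segment matrix products can be formed independently and then combined with a short vector-matrix chain.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
K, N, n_segments = 5, 50, 4

delta = np.full(K, 1.0 / K)
Gamma = rng.dirichlet(np.ones(K), size=K)      # row-stochastic transition matrix
mats = [Gamma * rng.uniform(0.5, 1.0, size=K)  # stand-ins for Gamma P(x_t)
        for _ in range(N)]

# Step 3: split the chain into segments; each segment's product is
# independent of the others, hence computable in parallel.
bounds = np.linspace(0, N, n_segments + 1, dtype=int)
partials = [reduce(np.matmul, mats[lo:hi]) for lo, hi in zip(bounds, bounds[1:])]

v = delta.copy()
for M in partials:                             # short final chain on the host
    v = v @ M
print(v.sum())                                 # likelihood, Equation (2)

w = delta.copy()                               # agrees with the serial product
for M in mats:
    w = w @ M
print(w.sum())
```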
## Performance assessment of the OpenCL implementation

### 5.1 OpenCL algorithm vs Forward algorithm

One of the factors that influence the use of an algorithm on GPUs is whether it is actually faster than the Forward algorithm. To check this we compared computation times of the GPU algorithm with the Forward algorithm from the software library TensorFlow. First we fixed the number of HMM states to $K=25$ while increasing the number of data points over a range of orders of magnitude, $N=10^{2},\dots,10^{5}$. Thereafter we fixed the number of data points to $N=100{,}000$ and increased the number of HMM states over $K=5,10,\dots,50$. In each case model parameters were drawn from the prior distribution (discussed in the next section), and data were then simulated using the R software package of Wang et al. (2018). The results are shown in Figure 1 and Figure 2. We see that the GPU algorithm executes orders of magnitude faster than the Forward algorithm.

[Figure 1 plot: execution time (ms) versus number of data points for 25 states; series: Forward, OpenCL-GPU.]

Figure 1: We compare the computation time of the OpenCL algorithm on GPU with a Forward algorithm on CPU. Computation time is indicated on the y-axis and the number of data points on the x-axis. We see that with $10^{5}$ data points, the GPU algorithm runs $\sim 10^{3}$ times faster.

[Figure 2 plot: execution time (ms) versus matrix size for 100,000 data points; series: Forward, OpenCL-GPU.]

Figure 2: We compare the computation time of the OpenCL algorithm on GPU with a Forward algorithm on CPU. Computation time is indicated on the y-axis and the number of HMM states on the x-axis. We see that the GPU algorithm slows down as the register capacity of the compute elements is reached. However, it still outperforms the Forward algorithm by orders of magnitude.

### 5.2 Comparing execution time of matrix-chain multiplication

Here we specifically compare the computation time of step 3 in the OpenCL algorithm with matrix-chain multiplication using popular GPU BLAS (Basic Linear Algebra Subprograms) libraries. We use subroutines from the CLBlast library as well as the MAGMA BLAS library to do the matrix-chain multiplication. CLBlast is a general BLAS library in OpenCL that automatically tunes subroutines for specific hardware at compile time. MAGMA BLAS is a CUDA library available exclusively for NVIDIA GPUs. We followed the same procedure as in the previous two experiments, except that we fixed the number of HMM states to $K=50$. We show the results in Figure 3 and Figure 4. Using the MAGMA library gives roughly the same performance as the OpenCL algorithm for small matrices. We note that using these libraries in the OpenCL algorithm is not straightforward, due to small tweaks and scaling coefficients that we keep track of in addition to performing the matrix-chain multiplication. The algorithm became very slow when the HMM had more than 100 states, due to the memory limitations previously discussed.

[Figure 3 plot: execution time (ms) versus number of data points for 50 states; series: CLBlast, MAGMA, OWN.]

Figure 3: For this computational comparison (in milliseconds) with the BLAS libraries we fix the number of HMM states to $K=50$ and increase the number of data points over a range of orders of magnitude.

[Figure 4 plot: execution time (ms) versus matrix size for 100,000 data points; series: CLBlast, MAGMA, OWN.]

Figure 4: For this computational comparison (in milliseconds) with the BLAS libraries we fix the number of data points to $N=100{,}000$ and increase the number of HMM states over $K=5,10,\dots,50$.

## Bayesian analysis of nonvolcanic tremor data

### 6.1 Monte Carlo Markov Chains

Bayesian techniques have become a popular method of statistical inference across a broad range of sciences (Jóhannesson et al., 2016; Kruschke, 2010; Moore and Zuev, 2005; Stoltz et al., 2019; Turner et al., 2016; Woolrich et al., 2009).
This is due to advances in numerical techniques and the affordability of powerful computers (Andrieu et al., 2004). In a Bayesian analysis the aim is to compute the joint posterior distribution of model parameters, referred to simply as the posterior distribution. The posterior distribution summarizes the uncertainty related to model parameters. Typically, due to model complexity, the posterior distribution is an analytically intractable function. However, methods such as Monte Carlo Markov Chains (MCMCs) use random walks to estimate the posterior distribution. The basic concept behind MCMCs is that a Markov chain can be constructed whose stationary distribution is in fact the posterior distribution. An MCMC is initialized by choosing a random state, typically by drawing a sample from the prior distribution (we discuss prior distributions below). The MCMC is then simulated by accepting or rejecting proposed states based on a ratio involving the likelihood function and prior distribution of the current and proposed states. The MCMC is simulated until the stationary distribution is reached. Stationarity of an MCMC is assessed by looking at trace plots of parameters, as well as by computing the number of effectively independent samples. Samples from the stationary MCMC are then used to approximate the posterior distribution.

### 6.2 Model priors

Using ratios of the likelihood and prior distributions is an elegant way of sampling from the posterior distribution: it sidesteps some nasty calculations that computing the posterior distribution directly would require. Roughly speaking, prior distributions are a way to incorporate knowledge about model parameters before looking at the data. Prior distributions can easily be neglected, but they are in fact an important part of the model; choosing them needs careful consideration and some justification. For instance, it is known that tremors occur in sequences of bursts that cluster around the same area (Wang et al., 2018). We translate this observation into the model by specifying a prior centred around sparse transition matrices. More formally, we specify a symmetric Dirichlet prior with concentration parameter 0.01 on $\bm{\Gamma}$ (formulas for the prior densities are given in Appendix A). Furthermore, we expect that under some hidden states we are more likely to observe tremors than under others. Therefore we specify independent gamma distributions on the state probabilities $\{p_{k}\}_{k=1,\ldots,K}$, half of the state probabilities with mean 0.1 and variance 0.001 and the other half with mean 0.9 and variance 0.001. Also, we specify a uniform prior on the hidden state means $\{\bm{\mu}^{(k)}\}_{k=1,\ldots,K}$, restricted to a rectangular domain that contains all observations. We have no prior information on the shape of the hidden states; therefore we specify an uninformative Inverse-Wishart prior on the covariance matrices $\{\mathbf{\Sigma}^{(k)}\}_{k=1,\ldots,K}$, with degrees of freedom equal to the number of states $K$ and scale matrix set to a $K\times K$ identity matrix.
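The sampler used in this paper is the general-purpose t-walk of Christen et al. (2010); purely to illustrate the accept/reject mechanics described in Section 6.1 (this is a generic sketch, not that sampler), a random-walk Metropolis step looks as follows.

```python
import numpy as np

def metropolis(log_post, theta0, step, n_iter, rng):
    """Random-walk Metropolis: accept a proposal with probability
    min(1, posterior ratio), which only requires the (log) likelihood
    times prior evaluated at the current and proposed states."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy two-parameter target, purely to show the mechanics:
rng = np.random.default_rng(1)
chain = metropolis(lambda t: -0.5 * np.sum(t**2), np.zeros(2), 0.5, 5000, rng)
print(chain.mean(axis=0), chain.std(axis=0))  # ~ (0, 0) and ~ (1, 1)
```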
### 6.3 GPUeR-hmmer

In order to simulate MCMCs for the model, we incorporated the GPU likelihood algorithm, along with the prior distributions, into a general-purpose MCMC sampler (Christen et al., 2010). This R package bundle is freely available at https://github.com/genetica/HMMTremorRecurrencePatterns. Note that OpenCL 1.2 and Python 3.6 (or later versions) need to be installed separately on a system in order to support the back end of the R package. The R package also contains a simple example using simulated data from the HMM described in Section 2. Additionally, we provide instructions on how to modify the OpenCL code if an HMM with a different emission function is required. In order to assess convergence of the MCMC chains we used Tracer. For a brief tutorial on how to use Tracer to assess convergence, see https://beast.community/analysing_beast_output. Furthermore, if problems are encountered with the convergence of the simulated MCMCs, see https://beast.community/tracer_convergence for some recommendations.

### 6.4 Tremor dataset of the Shikoku region

We use a large tremor dataset from the Shikoku region, Japan, to demonstrate the sort of Bayesian analysis that can be done with GPUeR-hmmer. The Shikoku region is one of the three major regions in Japan (the other two being the Tokai region and the Kii region) in which nonvolcanic tremor occurrences have been repeatedly detected. Tremor activity spans along the strike of the Philippine Sea plate for about 600 km, and the depth ranges from 30 to 45 km on the plate interface. The original waveform data are supplied by the High Sensitivity Seismograph Network of the National Research Institute for Earth Science and Disaster Prevention in Japan. The dataset analysed by Wang et al. (2018) was extracted from the waveform data. It consists of $105{,}000$ data point measurements between $2001$ and $2012$: hourly measurements determined using the clustering and correlation methods described in Obara et al. (2010).

### 6.5 Model fitting

A full Bayesian analysis of the model would sample the number of hidden states along with the rest of the model parameters. However, sampling across parameter spaces of different dimensions is quite challenging and is an active and ongoing area of research (Lunn et al., 2009). Instead, we incorporate the choice of the number of hidden states $K$ into the model fitting process. We start with a small number of hidden states and incrementally increase it; while doing so, we assess the posterior distribution in each case. The posterior distribution of each model is estimated by running the MCMC sampler for 1,000,000 iterations. Running each chain took approximately 4 hours. In Figure 7 we summarize the posterior distributions for models fitted with the number of hidden states $K=5,10,\dots,30$. Typically, the background states (i.e., states that cover large areas) have the highest variance in the posterior distribution, whereas states covering smaller areas have considerably less variance in the posterior distribution of their parameters. We also see in Figure 7(d) that the parameters used in Wang et al. (2018) are recovered by the posterior distribution. Typically, as we increase the number of states, some states are divided into two, with new clusters appearing only rarely. Furthermore, we see for $K=30$ that some additional hidden states ($k=4,8,26$) do not fit over one particular cluster of points: they cover a large area, have a low probability of observing tremors, and have a low stationary probability (i.e., time spent in the state). Thereafter we also fitted models with $K=26,27$ hidden states (see Appendix B) and found that the additional hidden states have the same undesirable properties; we therefore use $K=25$ as our choice for the number of hidden states in the model (see MCMC summary statistics in Appendix C).
(a) $K=5$ (b) $K=10$ (c) $K=15$ (d) $K=20$ (e) $K=25$ (f) $K=30$

Figure 7: Posterior distributions of fitted models with number of hidden states $K=5,10,\dots,30$ for tremor occurrences in the Shikoku region. The ellipses in each map represent the 2D normal density of one hidden state for one sample from the posterior distribution. States are numbered in red. The colour of an ellipse indicates how likely a tremor is to occur given that the process is in that hidden state. In the bottom right corner of each map we give the mean transition matrix of the posterior distribution. Transition probabilities (array entries) and state probabilities (colour of ellipse) both use the same colormap, given in the bottom right corner. Grey dots represent the Shikoku tremor data points. Black ellipses and dots represent mean parameters.

### 6.6 Forecasting

We carry out a Bayesian forecast from the model for a 5 day period (December 11, 2012 to December 16, 2012). Note that the data for this period was excluded from the model fitting process. To forecast tremors we simulated 120 hourly data points (i.e. 5 days) from the model (with the number of hidden states fixed at $K=25$) for every 1000th MCMC sample (a total of 500 simulations) of the approximate posterior distribution. Note that we used the same realization of the MCMC that was generated in the model fitting process (see previous section). We used the HMM simulator in the R package HMMextra0s (freely available at https://rdrr.io/cran/HMMextra0s/man/HMMextra0s-package.html). We summarize the 500 forecast simulations as densities in a longitude-over-time plot and a latitude-over-time plot (Figure 8), and plot the actual data as a scatterplot with red data points. We also include the last day (December 10, 2012) of the data used for model fitting (as a scatterplot with black data points). We see that the model works well for the first two days: it captures nicely in which area the tremors occur, and the forecast density covers all the data points except for one outlier. We also see that the variance of the model predictions increases with time. This is not unexpected, since the further our forecasts are from the present, the less information our data contains about the future states of the process. It would be very difficult to make an accurate forecast more than a week ahead.

(a) (b)

Figure 8: We summarize the forecast simulations as two density plots. (a) Latitude predictions and data plotted against time (in hours). (b) Longitude predictions and data plotted against time (in hours). The red dots in both figures are the hourly Shikoku data for the forecast period December 11, 2012 to December 16, 2012 (not included in the data used for model fitting). The black dots in both figures are the hourly Shikoku data for December 10, 2012 (included in the data used for model fitting).
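For intuition, the sketch below simulates one such forecast path from a single posterior sample of the HMM with extra zeros: at each hour the hidden state evolves according to the transition matrix, a tremor is observed with the state's probability, and an observed tremor's location is drawn from the state's bivariate normal density. The actual forecasts above were produced with the HMMextra0s simulator; all parameter values here (a toy $K=3$ model) are placeholders rather than fitted values.

```python
import numpy as np

def simulate_forecast(Gamma, p, mu, Sigma, state0, n_hours, rng):
    """Simulate hourly tremor locations; NaN rows mark hours with no tremor."""
    K = Gamma.shape[0]
    locations = np.full((n_hours, 2), np.nan)
    state = state0
    for t in range(n_hours):
        state = rng.choice(K, p=Gamma[state])  # hidden-state transition
        if rng.uniform() < p[state]:           # is a tremor observed this hour?
            locations[t] = rng.multivariate_normal(mu[state], Sigma[state])
    return locations

rng = np.random.default_rng(42)
K = 3                                             # toy model, not the fitted K = 25
Gamma = np.full((K, K), 0.05) + 0.85 * np.eye(K)  # sticky, near-sparse transitions
p = np.array([0.1, 0.9, 0.1])                     # per-state tremor probabilities
mu = np.array([[133.0, 33.5], [133.5, 33.7], [134.0, 33.9]])  # lon/lat means
Sigma = np.tile(0.01 * np.eye(2), (K, 1, 1))      # per-state covariances

# One 5-day (120 hour) path; repeating this over many posterior samples gives
# forecast densities like those in Figure 8.
path = simulate_forecast(Gamma, p, mu, Sigma, state0=0, n_hours=120, rng=rng)
```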
## Discussion

In this paper we present an algorithm for evaluating HMM likelihoods that can run several orders of magnitude faster than the traditional Forward algorithm. Our algorithm performs more total work, but the high level of parallelization of the likelihood calculation translates into high data throughput. We have implemented the algorithm for an HMM that categorises nonvolcanic tremor data, and we have integrated it into an R package for Bayesian analysis using the OpenCL framework, with Python under the hood. A CUDA implementation for NVIDIA GPUs is expected to achieve higher data throughput, but this would limit the algorithm to a single vendor. OpenCL, on the other hand, allows execution of the algorithm on any OpenCL compliant device, such as Intel CPUs, AMD CPUs and GPUs, Qualcomm processors, Xilinx FPGAs (field-programmable gate arrays) and even NVIDIA GPUs. We have reported some runtime comparisons with implementations of the Forward algorithm. The efficiency gains in the likelihood computation allowed us to conduct a detailed Bayesian analysis of tremor data from the Shikoku region of Japan. Lastly, the OpenCL algorithm can easily be modified for other HMM models; in some cases only the evaluation function of the emission matrix needs to be updated.

## Acknowledgements

Marnus Stolz received a doctoral scholarship from the NZ Marsden Fund (PIs David Bryant and Steven Higgins).

## References

* Andrieu, C., A. Doucet, and C. Robert. 2004. Computational advances for and from Bayesian analysis. Statistical Science pages 118–127.
* Beroza, G. C. and S. Ide. 2011. Slow earthquakes and nonvolcanic tremor. Annual Review of Earth and Planetary Sciences 39:271–296.
* Chong, J., I. R. Lane, and S. W. Buthpitiya. 2014. Utilizing multiple processing units for rapid training of hidden Markov models. US Patent 8,886,535.
* Christen, J. A., C. Fox, et al. 2010. A general purpose sampling algorithm for continuous distributions (the t-walk). Bayesian Analysis 5:263–281.
* Cuzzocrea, A., E. Mumolo, N. Timeus, and G. Vercelli. 2016. GPU-Aware Genetic Estimation of Hidden Markov Models for Workload Classification Problems. Proceedings - International Computer Software and Applications Conference 1:674–682.
* Horn, D. R., M. Houston, and P. Hanrahan. 2005. ClawHMMER: A streaming HMMer-search implementation. Proceedings of the ACM/IEEE 2005 Supercomputing Conference, SC'05.
* Jóhannesson, G., R. R. de Austri, A. Vincent, I. Moskalenko, E. Orlando, T. Porter, A. Strong, R. Trotta, F. Feroz, P. Graff, et al. 2016. Bayesian analysis of cosmic ray propagation: evidence against homogeneous diffusion. The Astrophysical Journal 824:16.
* Kindratenko, V. 2014. Numerical computations with GPUs. Springer.
* Kruschke, J. K. 2010. What to believe: Bayesian methods for data analysis. Trends in Cognitive Sciences 14:293–300.
* Li, J., S. Chen, and Y. Li. 2009. The fast evaluation of hidden Markov models on GPU. Proceedings - 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, ICIS 2009 4:426–430.
* Liu, C. 2009. cuHMM: a CUDA implementation of hidden Markov model training and classification. The Chronicle of Higher Education pages 1–13.
* Lunn, D. J., N. Best, and J. C. Whittaker. 2009. Generic reversible jump MCMC using graphical models. Statistics and Computing 19:395.
* Masliah, I., A. Abdelfattah, A. Haidar, S. Tomov, M. Baboulin, J. Falcou, and J. Dongarra. 2016. High-performance matrix-matrix multiplications of very small matrices. Pages 659–671 _in_ Proceedings of the 22nd International Conference on Euro-Par 2016: Parallel Processing. Springer-Verlag New York, Inc.
* Moore, A. W. and D. Zuev. 2005. Internet traffic classification using Bayesian analysis techniques. Pages 50–60 _in_ Proceedings of the 2005 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems.
* Obara, K., S. Tanaka, T. Maeda, and T. Matsuzawa. 2010. Depth-dependent activity of non-volcanic tremor in southwest Japan. Geophysical Research Letters 37.
* Rannala, B. and Z. Yang. 2017. Efficient Bayesian species tree inference under the multispecies coalescent. Systematic Biology 66:823–842.
* Stoltz, M., B. Bauemer, R. Bouckart, C. Fox, G. Hiscott, and D. Bryant. 2019. Bayesian inference of species trees using diffusion models. arXiv preprint arXiv:1909.07276.
* Turner, B. M., P. B. Sederberg, and J. L. McClelland. 2016. Bayesian analysis of simulation-based models. Journal of Mathematical Psychology 72:191–199.
* Wang, T., J. Zhuang, J. Buckby, K. Obara, and H. Tsuruoka. 2018. Identifying the recurrence patterns of nonvolcanic tremors using a 2-D hidden Markov model with extra zeros. Journal of Geophysical Research: Solid Earth 123:6802–6825.
* Woolrich, M. W., S. Jbabdi, B. Patenaude, M. Chappell, S. Makni, T. Behrens, C. Beckmann, M. Jenkinson, and S. M. Smith. 2009. Bayesian analysis of neuroimaging data in FSL. Neuroimage 45:S173–S186.
* Yao, P., H. An, M. Xu, G. Liu, X. Li, Y. Wang, and W. Han. 2010. CuHMMer: A load-balanced CPU-GPU cooperative bioinformatics application. Proceedings of the 2010 International Conference on High Performance Computing and Simulation, HPCS 2010, pages 24–30.
* Yu, L., Y. Ukidave, and D. Kaeli. 2015. GPU-Accelerated HMM for Speech Recognition. Proceedings of the International Conference on Parallel Processing Workshops 2015-May:395–402.
* Zhang, D., R. Zhao, L. Han, T. Wang, and J. Qu. 2009. An implementation of Viterbi algorithm on GPU. 2009 1st International Conference on Information Science and Engineering, ICISE 2009, pages 121–124.

## APPENDIX A: Formulas for prior distributions

### Symmetric Dirichlet distribution

$f(\gamma_{1},\dots,\gamma_{K^{2}};\alpha)=\frac{\Gamma(\alpha K^{2})}{\Gamma(\alpha)^{K^{2}}}\prod_{i=1}^{K^{2}}\gamma_{i}^{\alpha-1},\qquad\text{for }\alpha>0.$

We note that probability mass is sparsely distributed among $\gamma_{1},\dots,\gamma_{K^{2}}$ when $\alpha<1$.

### Inverse-Wishart distribution

Suppose $\Psi$ is the scale matrix and $\nu$ the degrees of freedom; then

$f(\mathbf{x};\Psi,\nu)=\frac{|\Psi|^{\nu/2}}{2^{\nu K/2}\Gamma_{K}(\frac{\nu}{2})}\,|\mathbf{x}|^{-(\nu+K+1)/2}e^{-\frac{1}{2}\mathrm{tr}(\Psi\mathbf{x}^{-1})},$

where $\Gamma_{K}$ is the multivariate gamma function

$\Gamma_{K}\left(\frac{\nu}{2}\right)=\pi^{K(K-1)/4}\prod^{K}_{j=1}\Gamma\left(\frac{\nu}{2}+\frac{1-j}{2}\right).$

### Gamma distribution

$f(x;\alpha,\beta)=\frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)},\qquad\text{for }x>0,\quad\alpha,\beta>0.$

## APPENDIX B: Additional models fitted for nonvolcanic tremor data

(a) $K=26$ (b) $K=27$

Figure 9: Posterior distributions of fitted models with number of hidden states $K=26,27$ for tremor occurrences in the Shikoku region. The ellipses in each map represent the 2D normal density of one hidden state for one sample from the posterior distribution. States are numbered in red. The colour of an ellipse indicates how likely a tremor is to occur given that the process is in that hidden state. In the bottom right corner of each map we give the mean transition matrix of the posterior distribution.
Transition probabilities (array entries) and state probabilities (colour of ellipse) both use the same colormap, given in the bottom right corner. Grey dots represent the Shikoku tremor data points. Black ellipses and dots represent mean parameters.

## APPENDIX C: Tabulated posterior statistics for number of hidden states K=25

Table 1: Combined GPUeR-hmmer parameter summary after 5,000,000 MCMC iterations for the Shikoku tremor dataset

parameter | mean | variance | HPD lower | HPD upper | ACT | ESS
---|---|---|---|---|---|---
posterior | -1.0987E+03 | 7.6624E+04 | -1.6110E+03 | -4.0624E+02 | 35830.8997 | 239.5444
Gamma1 | 8.6710E-01 | 2.4186E-04 | 8.3770E-01 | 8.9248E-01 | 29571.9564 | 169.0791
Gamma2 | 1.6289E-02 | 4.7111E-05 | 3.8358E-03 | 2.8083E-02 | 14455.9226 | 345.8790
Gamma3 | 1.8235E-03 | 8.1549E-06 | 5.3391E-06 | 5.9633E-03 | 31080.9139 | 160.8704
Gamma4 | 1.8604E-03 | 9.1692E-06 | 8.0933E-06 | 6.5181E-03 | 23964.1982 | 208.6446
Gamma5 | 1.4205E-03 | 1.2756E-06 | 4.4740E-05 | 3.3164E-03 | 14213.5729 | 351.7764
Gamma6 | 3.4741E-03 | 7.6753E-06 | 2.8037E-05 | 8.2833E-03 | 21277.4832 | 234.9902
Gamma7 | 2.9676E-03 | 3.5263E-06 | 4.7983E-05 | 5.7603E-03 | 15272.0127 | 327.3963
Gamma8 | 1.9523E-03 | 5.1619E-06 | 1.8840E-05 | 4.8152E-03 | 27378.2848 | 182.6265
Gamma9 | 8.8053E-04 | 1.4839E-06 | 6.1831E-07 | 2.6187E-03 | 15509.9601 | 322.3735
Gamma10 | 4.0934E-03 | 1.6603E-05 | 8.1382E-15 | 1.2107E-02 | 57380.8895 | 87.1370
Gamma11 | 9.3274E-03 | 1.4699E-05 | 1.9398E-03 | 1.5410E-02 | 45068.2804 | 110.9428
Gamma12 | 6.1730E-04 | 3.1284E-07 | 4.1541E-07 | 1.7926E-03 | 14440.9950 | 346.2365
Gamma13 | 3.5715E-03 | 5.4052E-05 | 1.2686E-04 | 8.2704E-03 | 24168.9799 | 206.8767
Gamma14 | 9.6052E-04 | 7.1797E-07 | 8.2637E-06 | 2.5498E-03 | 18857.9341 | 265.1404
Gamma15 | 1.3255E-03 | 1.4142E-06 | 4.1034E-06 | 4.0367E-03 | 31384.2830 | 159.3154
Gamma16 | 1.5935E-03 | 1.1240E-05 | 7.3288E-06 | 4.9660E-03 | 20798.4511 | 240.4025
Gamma17 | 3.8764E-02 | 3.7291E-05 | 2.6857E-02 | 4.8477E-02 | 48946.9321 | 102.1514
Gamma18 | 1.0956E-03 | 1.9142E-06 | 7.6476E-06 | 3.7071E-03 | 45031.8581 | 111.0325
Gamma19 | 1.0625E-03 | 4.7366E-06 | 5.9749E-06 | 2.6002E-03 | 17168.5493 | 291.2302
Gamma20 | 2.6584E-03 | 5.6930E-06 | 2.0232E-05 | 6.8404E-03 | 35644.4221 | 140.2744
Gamma21 | 2.9296E-02 | 2.9234E-05 | 1.9465E-02 | 4.0877E-02 | 42258.6322 | 118.3190
Gamma22 | 1.9319E-03 | 3.1337E-06 | 8.6148E-05 | 6.0423E-03 | 16100.7869 | 310.5438
Gamma23 | 2.3521E-03 | 3.8152E-06 | 3.0464E-05 | 6.1173E-03 | 22249.4145 | 224.7250
Gamma24 | 2.8243E-03 | 7.0229E-06 | 1.4940E-04 | 7.1177E-03 | 15713.5026 | 318.1977
Gamma25 | 7.5430E-04 | 1.2261E-06 | 0.0000E+00 | 2.8100E-03 | 33773.0435 | 148.0471
Gamma26 | 7.5800E-04 | 9.2460E-07 | 2.7447E-06 | 2.1539E-03 | 19423.7468 | 257.4169
Gamma27 | 9.7455E-01 | 1.8772E-05 | 9.6572E-01 | 9.8144E-01 | 16385.9590 | 305.1393
Gamma28 | 1.1656E-03 | 5.5348E-07 | 3.8569E-06 | 2.4886E-03 | 433749.2599 | 11.5274
Gamma29 | 2.6091E-04 | 1.4724E-07 | 6.0330E-07 | 5.3802E-04 | 34952.5560 | 143.0511
Gamma30 | 1.1413E-03 | 2.5184E-07 | 3.8769E-04 | 1.6722E-03 | 12690.4915 | 393.9958
Gamma31 | 2.5300E-03 | 2.2597E-07 | 1.7208E-03 | 3.7603E-03 | 62747.9452 | 79.6839
Gamma32 | 7.2782E-04 | 6.3201E-07 | 8.2677E-05 | 1.5357E-03 | 22661.4152 | 220.6394
Gamma33 | 4.7610E-03 | 2.8677E-06 | 2.1785E-03 | 7.5344E-03 | 128377.9942 | 38.9475
Gamma34 | 1.4483E-04 | 3.3611E-07 | 0.0000E+00 | 4.4469E-04 | 18494.6559 | 270.3484
Gamma35 | 8.8254E-04 | 9.1352E-06 | 1.1430E-179 | 2.6530E-03 | 17104.6382 | 292.3184
Gamma36 | 2.2049E-04 |
5.6664E-07 | 3.2433E-07 | 8.8779E-04 | 13504.7449 | 370.2402 Gamma37 | 2.8423E-04 | 1.8785E-06 | 1.3566E-07 | 5.3904E-04 | 16705.1430 | 299.3090 Gamma38 | 1.8184E-03 | 8.9467E-07 | 1.0682E-03 | 2.9130E-03 | 12987.5478 | 384.9841 Gamma39 | 1.5827E-04 | 3.8431E-07 | 8.2793E-240 | 5.5617E-04 | 10408.8774 | 480.3592 Gamma40 | 1.5324E-04 | 5.9919E-08 | 1.7997E-06 | 3.9441E-04 | 35391.8500 | 141.2755 Gamma41 | 1.1195E-04 | 6.7680E-07 | 4.5519E-07 | 1.0529E-04 | 6175.3438 | 809.6715 Gamma42 | 1.6970E-04 | 5.8092E-08 | 2.3201E-68 | 4.8800E-04 | 19975.0180 | 250.3127 Gamma43 | 5.4145E-04 | 1.7014E-06 | 8.0357E-07 | 1.2248E-03 | 12214.7293 | 409.3419 Gamma44 | 9.4007E-05 | 1.2845E-07 | 0.0000E+00 | 3.0288E-04 | 15527.8979 | 322.0011 Gamma45 | 5.8817E-04 | 2.5773E-06 | 2.6848E-07 | 1.7851E-03 | 16773.6991 | 298.0857 Gamma46 | 2.7130E-04 | 1.6124E-06 | 0.0000E+00 | 9.7746E-04 | 12013.6336 | 416.1938 Gamma47 | 8.3608E-04 | 8.7118E-07 | 8.1343E-06 | 1.6621E-03 | 25000.7219 | 199.9942 Gamma48 | 6.1769E-03 | 1.6042E-06 | 3.7912E-03 | 8.2029E-03 | 72454.3676 | 69.0090 Gamma49 | 1.3814E-03 | 1.2691E-07 | 8.2454E-04 | 1.8834E-03 | 16414.5669 | 304.6075 Gamma50 | 2.7088E-04 | 1.8704E-07 | 1.9386E-06 | 8.3363E-04 | 20676.1070 | 241.8250 Gamma51 | 3.8265E-03 | 1.4101E-05 | 1.5164E-06 | 1.1256E-02 | 54832.1289 | 91.1874 Gamma52 | 4.1702E-02 | 2.7147E-04 | 1.4143E-02 | 7.0675E-02 | 177277.4811 | 28.2044 Gamma53 | 6.5895E-01 | 1.1852E-02 | 5.3762E-01 | 8.6165E-01 | 179266.1419 | 27.8915 Gamma54 | 5.2686E-02 | 1.3514E-04 | 3.0989E-02 | 7.5725E-02 | 20727.5986 | 241.2243 Gamma55 | 3.7342E-03 | 8.7625E-06 | 4.2153E-06 | 8.8395E-03 | 35075.5349 | 142.5495 Gamma56 | 3.7250E-03 | 1.4367E-05 | 2.1957E-05 | 1.0399E-02 | 13703.4629 | 364.8713 Gamma57 | 3.4035E-03 | 7.6310E-06 | 1.1752E-06 | 9.5530E-03 | 60398.7204 | 82.7832 Gamma58 | 1.1569E-02 | 5.7239E-05 | 1.4684E-03 | 2.6915E-02 | 126050.2525 | 39.6667 Gamma59 | 8.3384E-03 | 5.0755E-05 | 2.4430E-05 | 1.9246E-02 | 108972.2649 | 45.8832 Gamma60 | 7.5111E-03 | 1.8539E-05 | 1.3374E-03 | 1.5506E-02 | 35526.5778 | 140.7397 Gamma61 | 2.4433E-03 | 5.7828E-06 | 3.7794E-05 | 7.1885E-03 | 23869.7025 | 209.4706 Gamma62 | 2.9865E-03 | 1.1750E-05 | 0.0000E+00 | 9.4605E-03 | 24301.5009 | 205.7486 Gamma63 | 4.6511E-03 | 1.1309E-05 | 4.0228E-04 | 1.1406E-02 | 17927.0292 | 278.9085 Gamma64 | 3.4551E-03 | 1.0811E-05 | 8.0421E-06 | 9.2846E-03 | 60480.2163 | 82.6717 Gamma65 | 1.7704E-03 | 2.9336E-06 | 2.0011E-05 | 5.5824E-03 | 31567.4252 | 158.3911 Gamma66 | 1.8257E-03 | 5.5040E-06 | 4.8305E-06 | 6.7129E-03 | 21359.3236 | 234.0898 Gamma67 | 4.4132E-03 | 1.0302E-05 | 2.3097E-04 | 1.0175E-02 | 30529.5466 | 163.7758 Gamma68 | 3.3537E-03 | 1.3710E-05 | 4.6203E-06 | 1.1097E-02 | 47189.7634 | 105.9552 Gamma69 | 2.5590E-03 | 1.2021E-05 | 1.6972E-05 | 8.6447E-03 | 17168.6412 | 291.2286 Gamma70 | 3.6075E-03 | 2.1892E-05 | 1.3317E-05 | 1.4296E-02 | 51779.6476 | 96.5630 Gamma71 | 2.7341E-03 | 1.0429E-05 | 6.6654E-05 | 9.0064E-03 | 50454.1729 | 99.0998 Gamma72 | 1.6084E-01 | 8.9549E-03 | 1.7265E-04 | 2.6376E-01 | 52563.5955 | 95.1229 Gamma73 | 4.7693E-03 | 1.3051E-05 | 9.0943E-04 | 1.3995E-02 | 31721.7813 | 157.6204 Gamma74 | 3.2346E-03 | 1.1070E-05 | 2.3274E-05 | 8.6432E-03 | 32056.6747 | 155.9738 Gamma75 | 1.9117E-03 | 3.3446E-06 | 0.0000E+00 | 5.6489E-03 | 15101.7754 | 331.0869 Gamma76 | 4.0374E-03 | 1.3862E-05 | 5.6963E-06 | 1.1135E-02 | 17448.4236 | 286.5588 Gamma77 | 1.8735E-02 | 1.3793E-04 | 1.4735E-03 | 3.7747E-02 | 153773.7616 | 32.5153 Gamma78 | 5.2840E-02 | 1.4351E-03 | 8.7997E-03 | 
1.2569E-01 | 109036.5946 | 45.8562 Gamma79 | 7.7194E-01 | 4.7660E-04 | 7.3193E-01 | 8.1135E-01 | 97858.7582 | 51.0940 Gamma80 | 2.3710E-03 | 6.3079E-06 | 8.1711E-209 | 6.0531E-03 | 40191.5185 | 124.4044 Gamma81 | 9.8015E-03 | 2.1754E-05 | 2.7665E-03 | 1.9849E-02 | 14021.4458 | 356.5966 Gamma82 | 8.0723E-03 | 1.8660E-05 | 1.5913E-03 | 1.6722E-02 | 13912.5818 | 359.3869 Gamma83 | 4.2189E-03 | 1.5787E-05 | 5.4825E-05 | 1.2599E-02 | 17447.4497 | 286.5748 Gamma84 | 1.5088E-02 | 5.6666E-05 | 2.5035E-03 | 3.0542E-02 | 25580.4554 | 195.4617 Gamma85 | 9.6710E-03 | 1.6331E-05 | 2.9544E-03 | 1.8682E-02 | 31452.3565 | 158.9706 Gamma86 | 1.6917E-03 | 2.8601E-06 | 8.1084E-06 | 5.7956E-03 | 8240.6163 | 606.7507 Gamma87 | 5.7498E-03 | 2.7984E-05 | 2.2783E-04 | 1.5747E-02 | 21734.0305 | 230.0540 Gamma88 | 6.3461E-03 | 1.9530E-05 | 4.5535E-04 | 1.4662E-02 | 26108.6270 | 191.5076 Gamma89 | 5.2471E-03 | 1.3105E-05 | 2.0134E-04 | 1.0240E-02 | 13017.1322 | 384.1092 Gamma90 | 2.0111E-03 | 4.2089E-06 | 1.1327E-05 | 5.9145E-03 | 25705.1264 | 194.5137 Gamma91 | 1.0309E-03 | 1.6301E-06 | 0.0000E+00 | 3.8256E-03 | 28956.6968 | 172.6716 Gamma92 | 2.2777E-03 | 3.1269E-06 | 1.0209E-05 | 5.9816E-03 | 14923.2556 | 335.0475 Gamma93 | 2.1677E-03 | 9.3605E-06 | 0.0000E+00 | 5.6929E-03 | 23761.2254 | 210.4269 Gamma94 | 1.0397E-03 | 3.5146E-06 | 4.8823E-06 | 4.4321E-03 | 23742.1096 | 210.5963 Gamma95 | 3.3163E-03 | 5.2410E-06 | 1.4686E-04 | 7.9886E-03 | 30114.5464 | 166.0327 Gamma96 | 2.5090E-03 | 1.6163E-05 | 2.7992E-05 | 7.1467E-03 | 41015.2414 | 121.9059 Gamma97 | 5.8867E-02 | 1.3932E-03 | 2.0425E-05 | 1.0662E-01 | 31943.0005 | 156.5288 Gamma98 | 5.7913E-03 | 2.4696E-05 | 4.1050E-50 | 1.4469E-02 | 35391.9791 | 141.2749 Gamma99 | 1.4880E-03 | 2.3845E-06 | 0.0000E+00 | 4.5140E-03 | 25023.0598 | 199.8157 Gamma100 | 3.6941E-03 | 6.4817E-06 | 3.0497E-04 | 7.7939E-03 | 20250.6950 | 246.9051 Gamma101 | 1.0241E-02 | 7.2072E-05 | 1.0568E-04 | 2.7836E-02 | 20759.9054 | 240.8489 Gamma102 | 1.0804E-01 | 4.6584E-04 | 6.1602E-02 | 1.4489E-01 | 14058.9871 | 355.6444 Gamma103 | 5.6760E-03 | 2.0145E-05 | 5.4058E-06 | 1.4865E-02 | 82129.7028 | 60.8793 Gamma104 | 2.8431E-03 | 6.8754E-06 | 1.2424E-04 | 8.9155E-03 | 34299.2175 | 145.7759 Gamma105 | 7.1477E-01 | 7.0890E-04 | 6.6167E-01 | 7.6297E-01 | 11826.7772 | 422.7694 Gamma106 | 6.5209E-03 | 1.7024E-05 | 1.5508E-04 | 1.4459E-02 | 18234.4629 | 274.2060 Gamma107 | 2.8461E-02 | 2.2138E-04 | 7.8455E-03 | 5.4417E-02 | 17921.7664 | 278.9904 Gamma108 | 3.3291E-03 | 8.5009E-06 | 1.2591E-05 | 9.5390E-03 | 26847.8163 | 186.2349 Gamma109 | 1.0648E-02 | 4.2232E-05 | 3.1849E-04 | 2.3514E-02 | 37656.6887 | 132.7785 Gamma110 | 1.4558E-02 | 9.6667E-05 | 1.4384E-03 | 3.0609E-02 | 19327.2137 | 258.7026 Gamma111 | 2.6155E-03 | 1.2609E-05 | 0.0000E+00 | 1.0664E-02 | 35246.2265 | 141.8592 Gamma112 | 3.1530E-02 | 9.3212E-05 | 1.5939E-02 | 5.2513E-02 | 21814.3469 | 229.2070 Gamma113 | 3.0332E-03 | 9.1017E-06 | 2.1723E-05 | 8.8238E-03 | 21228.0396 | 235.5375 Gamma114 | 2.8459E-03 | 5.0074E-06 | 8.5689E-05 | 8.0017E-03 | 20882.2793 | 239.4375 Gamma115 | 3.2308E-03 | 1.2224E-05 | 5.1372E-05 | 9.8159E-03 | 26005.5177 | 192.2669 Gamma116 | 1.7967E-03 | 3.4193E-06 | 5.5578E-06 | 5.4687E-03 | 14293.7529 | 349.8032 Gamma117 | 5.1239E-03 | 2.2861E-05 | 7.6876E-06 | 1.4478E-02 | 14718.0883 | 339.7180 Gamma118 | 3.9297E-03 | 1.7284E-05 | 0.0000E+00 | 1.0973E-02 | 30107.3335 | 166.0725 Gamma119 | 2.4660E-03 | 6.7519E-06 | 3.6767E-05 | 8.8143E-03 | 39887.9865 | 125.3510 Gamma120 | 5.9192E-03 | 2.0708E-05 | 2.1575E-05 | 
1.4863E-02 | 41440.4025 | 120.6552 Gamma121 | 3.3092E-03 | 7.1388E-06 | 1.2865E-04 | 9.1760E-03 | 19966.6857 | 250.4171 Gamma122 | 1.4188E-02 | 1.4186E-04 | 1.0583E-04 | 3.8582E-02 | 29696.9340 | 168.3675 Gamma123 | 6.1469E-03 | 3.7944E-05 | 3.5337E-103 | 2.0600E-02 | 14266.5070 | 350.4712 Gamma124 | 5.3542E-03 | 6.3464E-05 | 2.1460E-05 | 1.3922E-02 | 16095.5744 | 310.6444 Gamma125 | 3.4203E-03 | 2.1335E-05 | 0.0000E+00 | 1.0130E-02 | 45807.5273 | 109.1524 Gamma126 | 1.4618E-02 | 1.1325E-04 | 2.8701E-04 | 3.6125E-02 | 13691.7749 | 365.1827 Gamma127 | 3.2206E-01 | 1.0197E-03 | 2.6087E-01 | 3.7722E-01 | 49086.4278 | 101.8612 Gamma128 | 7.2857E-03 | 3.1361E-05 | 1.3948E-04 | 1.8385E-02 | 42005.7489 | 119.0313 Gamma129 | 1.6891E-02 | 9.2670E-05 | 6.3445E-04 | 3.3494E-02 | 7378.6105 | 677.6344 Gamma130 | 4.6442E-03 | 1.3438E-05 | 1.1728E-04 | 1.1089E-02 | 13249.6606 | 377.3682 Gamma131 | 4.9890E-01 | 1.3868E-03 | 4.2268E-01 | 5.6160E-01 | 22054.7456 | 226.7086 Gamma132 | 8.1591E-03 | 2.6096E-05 | 2.4168E-04 | 1.6574E-02 | 11093.0986 | 450.7307 Gamma133 | 5.9883E-03 | 2.4691E-05 | 0.0000E+00 | 1.6412E-02 | 26398.9966 | 189.4011 Gamma134 | 7.2804E-03 | 2.6287E-05 | 8.8405E-05 | 1.8001E-02 | 18358.7256 | 272.3501 Gamma135 | 8.7256E-03 | 2.1526E-05 | 2.2706E-04 | 1.7155E-02 | 23918.9752 | 209.0391 Gamma136 | 2.9880E-03 | 7.3198E-06 | 2.0184E-06 | 7.9991E-03 | 19008.6479 | 263.0382 Gamma137 | 4.1896E-03 | 1.5155E-05 | 0.0000E+00 | 1.0915E-02 | 25792.6040 | 193.8540 Gamma138 | 4.0759E-03 | 1.7561E-05 | 0.0000E+00 | 1.2611E-02 | 26961.9908 | 185.4462 Gamma139 | 1.9013E-03 | 2.7532E-06 | 3.6100E-06 | 5.6579E-03 | 24439.0003 | 204.5910 Gamma140 | 3.0420E-03 | 9.0646E-06 | 0.0000E+00 | 9.7842E-03 | 13533.4384 | 369.4553 Gamma141 | 5.9292E-03 | 9.9349E-05 | 0.0000E+00 | 1.8168E-02 | 26232.0800 | 190.6063 Gamma142 | 3.5377E-03 | 9.6379E-06 | 0.0000E+00 | 1.0215E-02 | 23203.7127 | 215.4828 Gamma143 | 1.5168E-02 | 6.2250E-05 | 1.9025E-03 | 3.0656E-02 | 15722.1103 | 318.0235 Gamma144 | 5.7933E-03 | 4.4980E-05 | 1.7462E-289 | 1.6168E-02 | 22946.4046 | 217.8991 Gamma145 | 5.7463E-03 | 3.2700E-05 | 2.3855E-05 | 1.7080E-02 | 21825.2432 | 229.0925 Gamma146 | 2.6456E-03 | 1.2255E-05 | 2.5832E-06 | 9.3531E-03 | 24569.0253 | 203.5083 Gamma147 | 1.4579E-02 | 1.2048E-04 | 0.0000E+00 | 3.9936E-02 | 16737.9921 | 298.7216 Gamma148 | 8.6204E-03 | 5.7072E-05 | 2.7282E-06 | 2.0417E-02 | 38093.7914 | 131.2550 Gamma149 | 1.6937E-02 | 6.7864E-05 | 4.1349E-03 | 3.4704E-02 | 17925.6478 | 278.9299 Gamma150 | 1.0294E-02 | 4.6247E-05 | 1.4572E-03 | 2.1769E-02 | 18136.6081 | 275.6855 Gamma151 | 1.3152E-02 | 7.3651E-05 | 4.6341E-05 | 3.0154E-02 | 15556.8030 | 321.4028 Gamma152 | 3.7244E-02 | 2.8304E-04 | 2.8950E-03 | 6.5610E-02 | 33478.3899 | 149.3501 Gamma153 | 1.3758E-02 | 6.6246E-05 | 4.0890E-04 | 3.0621E-02 | 29346.9173 | 170.3756 Gamma154 | 5.9445E-03 | 2.1308E-05 | 1.4043E-04 | 1.5699E-02 | 32066.2635 | 155.9271 Gamma155 | 2.8742E-02 | 1.1982E-04 | 7.3737E-03 | 5.2091E-02 | 33898.0279 | 147.5012 Gamma156 | 2.5899E-03 | 7.0253E-06 | 7.4161E-06 | 7.8707E-03 | 18689.6816 | 267.5273 Gamma157 | 7.4292E-01 | 1.0465E-03 | 6.7896E-01 | 7.9747E-01 | 17523.3619 | 285.3334 Gamma158 | 3.7707E-03 | 1.0295E-05 | 0.0000E+00 | 8.9664E-03 | 39356.2073 | 127.0448 Gamma159 | 4.6426E-02 | 1.6495E-04 | 1.9526E-02 | 6.8351E-02 | 20248.3966 | 246.9331 Gamma160 | 9.4584E-03 | 2.2591E-05 | 2.0420E-03 | 1.9575E-02 | 15479.8442 | 323.0007 Gamma161 | 2.4250E-03 | 6.0102E-06 | 2.2032E-05 | 6.5940E-03 | 24814.9610 | 201.4914 Gamma162 | 1.1943E-02 | 
5.1395E-05 | 1.7040E-03 | 2.6546E-02 | 19386.5635 | 257.9106 Gamma163 | 2.5954E-03 | 5.8674E-06 | 1.6756E-05 | 7.9368E-03 | 48028.9088 | 104.1040 Gamma164 | 2.1189E-03 | 4.6098E-06 | 6.1450E-06 | 5.7563E-03 | 21749.6357 | 229.8889 Gamma165 | 2.2330E-03 | 1.8865E-05 | 8.4136E-06 | 6.9105E-03 | 32478.0637 | 153.9501 Gamma166 | 3.6582E-03 | 8.4473E-06 | 9.5463E-05 | 9.4273E-03 | 22048.8286 | 226.7694 Gamma167 | 3.6427E-03 | 1.1416E-05 | 1.0440E-05 | 9.4321E-03 | 28140.4471 | 177.6802 Gamma168 | 1.1421E-02 | 6.9643E-05 | 4.6150E-04 | 2.6784E-02 | 25694.6466 | 194.5931 Gamma169 | 5.1223E-03 | 4.4866E-05 | 1.0712E-04 | 1.6117E-02 | 24383.3685 | 205.0578 Gamma170 | 4.9605E-03 | 2.9550E-05 | 1.9398E-05 | 1.4858E-02 | 19684.0267 | 254.0131 Gamma171 | 2.9722E-03 | 1.0856E-05 | 0.0000E+00 | 7.2370E-03 | 19550.7361 | 255.7448 Gamma172 | 1.9379E-02 | 2.5865E-04 | 2.0458E-04 | 5.2602E-02 | 42800.9521 | 116.8198 Gamma173 | 1.3881E-02 | 7.6848E-05 | 6.7001E-04 | 3.1088E-02 | 23168.4207 | 215.8110 Gamma174 | 6.8947E-03 | 1.6571E-05 | 1.2776E-03 | 1.5726E-02 | 18513.8606 | 270.0679 Gamma175 | 2.7526E-03 | 4.9136E-06 | 1.9243E-63 | 7.1387E-03 | 36917.1373 | 135.4385 Gamma176 | 2.6324E-02 | 2.7210E-04 | 8.3009E-04 | 5.7787E-02 | 34809.0990 | 143.6406 Gamma177 | 5.5414E-01 | 2.2137E-03 | 4.7383E-01 | 6.4615E-01 | 47621.5790 | 104.9944 Gamma178 | 2.5632E-02 | 2.7222E-04 | 3.2199E-03 | 6.4055E-02 | 63701.0484 | 78.4916 Gamma179 | 1.3130E-02 | 9.7449E-05 | 8.0479E-04 | 3.2826E-02 | 19797.6125 | 252.5557 Gamma180 | 5.8898E-03 | 2.2534E-05 | 1.5316E-04 | 1.6794E-02 | 45691.9265 | 109.4285 Gamma181 | 7.4855E-03 | 2.9769E-05 | 7.2334E-04 | 1.9642E-02 | 46600.4791 | 107.2950 Gamma182 | 4.8547E-03 | 1.6843E-05 | 0.0000E+00 | 1.3644E-02 | 15670.4387 | 319.0721 Gamma183 | 8.8900E-02 | 1.2533E-03 | 2.7782E-02 | 1.5979E-01 | 55484.9668 | 90.1145 Gamma184 | 6.7210E-03 | 6.9651E-05 | 3.3967E-06 | 2.5447E-02 | 22950.5821 | 217.8594 Gamma185 | 6.8905E-03 | 2.4856E-05 | 2.6100E-05 | 1.6220E-02 | 23021.4924 | 217.1884 Gamma186 | 1.0912E-02 | 1.1490E-04 | 1.9169E-05 | 2.9730E-02 | 24510.3336 | 203.9956 Gamma187 | 7.3192E-03 | 7.6226E-05 | 8.4817E-07 | 2.4485E-02 | 59524.0225 | 83.9997 Gamma188 | 1.2526E-02 | 5.8547E-05 | 7.1780E-04 | 2.5975E-02 | 18057.2891 | 276.8965 Gamma189 | 7.0914E-03 | 4.2373E-05 | 0.0000E+00 | 1.9989E-02 | 26258.6102 | 190.4137 Gamma190 | 5.6499E-03 | 2.0119E-05 | 3.7507E-04 | 1.5291E-02 | 18594.2158 | 268.9008 Gamma191 | 5.3477E-03 | 5.5532E-05 | 1.4105E-214 | 2.0262E-02 | 27848.3877 | 179.5436 Gamma192 | 1.1914E-02 | 1.2016E-04 | 4.8203E-04 | 4.0740E-02 | 42037.8420 | 118.9405 Gamma193 | 8.6947E-02 | 1.2275E-03 | 2.4052E-02 | 1.4001E-01 | 65323.2487 | 76.5424 Gamma194 | 4.3828E-03 | 1.4208E-05 | 5.8816E-05 | 1.2065E-02 | 24807.7751 | 201.5497 Gamma195 | 6.2229E-03 | 4.6592E-05 | 1.5081E-126 | 2.0269E-02 | 19968.8234 | 250.3903 Gamma196 | 3.9935E-03 | 1.6995E-05 | 5.8948E-06 | 1.1468E-02 | 22964.2384 | 217.7298 Gamma197 | 4.8131E-02 | 1.0506E-03 | 1.8951E-03 | 1.1265E-01 | 61968.1768 | 80.6866 Gamma198 | 3.8953E-02 | 4.6221E-04 | 4.3143E-03 | 7.6449E-02 | 103496.0248 | 48.3110 Gamma199 | 6.8304E-03 | 3.0269E-05 | 0.0000E+00 | 1.6100E-02 | 23752.2087 | 210.5067 Gamma200 | 3.8137E-03 | 1.2918E-05 | 0.0000E+00 | 1.1751E-02 | 29431.9507 | 169.8834 Gamma201 | 2.2061E-03 | 2.8632E-05 | 5.1399E-06 | 4.8039E-03 | 19079.3054 | 262.0640 Gamma202 | 3.9100E-03 | 1.0463E-05 | 1.0212E-05 | 1.0611E-02 | 42334.9988 | 118.1056 Gamma203 | 1.3941E-02 | 5.6850E-05 | 4.6010E-03 | 3.2667E-02 | 73870.7415 | 67.6858 Gamma204 | 
8.8065E-03 | 4.3442E-05 | 2.1139E-04 | 1.6809E-02 | 25373.7124 | 197.0543 Gamma205 | 3.5903E-03 | 6.3083E-06 | 1.0064E-04 | 8.4176E-03 | 26858.5310 | 186.1606 Gamma206 | 1.8215E-03 | 3.9859E-06 | 0.0000E+00 | 5.5891E-03 | 21877.8619 | 228.5415 Gamma207 | 2.7125E-02 | 5.0313E-05 | 1.2766E-02 | 3.8572E-02 | 18578.7242 | 269.1250 Gamma208 | 4.5804E-03 | 1.6699E-05 | 1.4838E-05 | 1.1127E-02 | 36375.1408 | 137.4565 Gamma209 | 8.0613E-01 | 1.7607E-03 | 7.3269E-01 | 8.7467E-01 | 124530.5793 | 40.1508 Gamma210 | 8.4669E-03 | 1.9079E-05 | 2.2761E-03 | 1.5011E-02 | 20560.2395 | 243.1878 Gamma211 | 2.4497E-03 | 1.0519E-05 | 0.0000E+00 | 9.1696E-03 | 13154.7921 | 380.0896 Gamma212 | 2.7867E-02 | 8.6381E-05 | 1.1267E-02 | 4.2506E-02 | 27908.3662 | 179.1577 Gamma213 | 1.4054E-03 | 2.2605E-06 | 1.4660E-24 | 4.9417E-03 | 18466.9677 | 270.7537 Gamma214 | 1.2470E-02 | 4.7618E-05 | 1.3578E-03 | 2.4040E-02 | 57251.1230 | 87.3345 Gamma215 | 1.9740E-03 | 3.5207E-06 | 2.6479E-05 | 5.2427E-03 | 13247.3981 | 377.4326 Gamma216 | 2.5117E-03 | 2.6384E-05 | 2.9458E-05 | 6.2596E-03 | 19544.4539 | 255.8271 Gamma217 | 2.2047E-03 | 3.0671E-06 | 2.7441E-05 | 5.7185E-03 | 21443.2562 | 233.1735 Gamma218 | 2.1137E-02 | 1.0274E-04 | 3.6511E-03 | 3.9757E-02 | 32932.5392 | 151.8255 Gamma219 | 2.2999E-03 | 1.0651E-05 | 4.5831E-06 | 9.0745E-03 | 42058.9521 | 118.8808 Gamma220 | 2.8658E-03 | 2.5555E-05 | 1.5123E-05 | 9.6544E-03 | 22096.7836 | 226.2773 Gamma221 | 1.2031E-03 | 1.5336E-06 | 1.4189E-05 | 4.1287E-03 | 19430.3636 | 257.3292 Gamma222 | 3.4537E-02 | 5.9361E-04 | 1.6772E-04 | 7.8791E-02 | 189963.0833 | 26.3209 Gamma223 | 3.0790E-03 | 7.7608E-06 | 1.2559E-05 | 7.7606E-03 | 20839.1635 | 239.9329 Gamma224 | 1.6715E-03 | 1.8892E-06 | 4.5916E-05 | 4.3769E-03 | 15313.2087 | 326.5155 Gamma225 | 1.7481E-03 | 7.3093E-06 | 2.5438E-159 | 5.0969E-03 | 12502.8650 | 399.9083 Gamma226 | 3.6707E-02 | 9.9152E-04 | 3.0241E-04 | 1.1157E-01 | 47271.4019 | 105.7722 Gamma227 | 6.2283E-02 | 3.8712E-03 | 2.9392E-05 | 2.0028E-01 | 94429.1491 | 52.9498 Gamma228 | 2.7096E-02 | 2.2985E-04 | 4.3516E-03 | 5.4892E-02 | 26164.9348 | 191.0955 Gamma229 | 3.2028E-02 | 1.4356E-04 | 1.2579E-02 | 5.5498E-02 | 29452.0847 | 169.7673 Gamma230 | 1.0139E-02 | 5.1313E-05 | 4.9129E-04 | 2.3844E-02 | 65162.2063 | 76.7316 Gamma231 | 2.1222E-02 | 2.0195E-04 | 3.6462E-03 | 4.7430E-02 | 22587.0898 | 221.3654 Gamma232 | 3.0705E-02 | 1.8114E-04 | 8.2274E-03 | 5.9118E-02 | 36407.2808 | 137.3352 Gamma233 | 1.5053E-02 | 3.4100E-04 | 0.0000E+00 | 4.8016E-02 | 80339.2098 | 62.2361 Gamma234 | 3.7322E-02 | 1.6725E-04 | 1.0257E-02 | 5.9128E-02 | 12156.2454 | 411.3112 Gamma235 | 2.1579E-01 | 3.2940E-03 | 1.0663E-01 | 3.2862E-01 | 73331.2251 | 68.1838 Gamma236 | 1.2151E-02 | 9.8877E-05 | 8.8663E-05 | 3.5255E-02 | 23871.2828 | 209.4567 Gamma237 | 1.8690E-02 | 7.5997E-05 | 3.7458E-03 | 3.5588E-02 | 13149.2648 | 380.2494 Gamma238 | 6.3248E-03 | 4.0885E-05 | 2.4533E-07 | 2.1520E-02 | 31778.5257 | 157.3390 Gamma239 | 4.9375E-02 | 6.0725E-04 | 1.1591E-02 | 9.9927E-02 | 45728.0585 | 109.3421 Gamma240 | 6.7224E-03 | 4.2430E-05 | 0.0000E+00 | 1.8560E-02 | 36849.3569 | 135.6876 Gamma241 | 9.4317E-02 | 6.8467E-04 | 5.8860E-02 | 1.4398E-01 | 31663.8131 | 157.9090 Gamma242 | 1.7808E-02 | 2.4746E-04 | 3.8456E-06 | 5.3202E-02 | 45864.0363 | 109.0179 Gamma243 | 1.5343E-02 | 1.2346E-04 | 1.0887E-03 | 3.9775E-02 | 25997.7812 | 192.3241 Gamma244 | 5.1805E-02 | 2.3883E-04 | 2.8589E-02 | 8.9097E-02 | 14533.6772 | 344.0286 Gamma245 | 2.0058E-02 | 3.1468E-04 | 1.1706E-04 | 5.0474E-02 | 42293.9325 | 118.2203 
Gamma246 | 7.4123E-02 | 5.8961E-04 | 3.8454E-02 | 1.3592E-01 | 35161.7526 | 142.2000 Gamma247 | 1.9739E-02 | 2.0427E-04 | 1.2378E-04 | 4.3046E-02 | 18502.4938 | 270.2338 Gamma248 | 4.6480E-02 | 7.2386E-04 | 1.1234E-02 | 1.0265E-01 | 31479.8738 | 158.8316 Gamma249 | 1.9907E-02 | 1.7395E-04 | 1.2233E-03 | 4.3735E-02 | 38284.4223 | 130.6014 Gamma250 | 5.8817E-02 | 3.0505E-04 | 2.9232E-02 | 8.8807E-02 | 48014.8455 | 104.1345 Gamma251 | 4.8917E-02 | 9.7887E-04 | 1.9972E-03 | 1.0607E-01 | 36523.3956 | 136.8986 Gamma252 | 6.4240E-03 | 5.2194E-05 | 8.1154E-06 | 2.3883E-02 | 33883.8243 | 147.5630 Gamma253 | 8.1575E-03 | 2.8191E-05 | 2.8640E-04 | 1.7617E-02 | 15709.6296 | 318.2761 Gamma254 | 5.3155E-03 | 2.9752E-05 | 0.0000E+00 | 1.2059E-02 | 28570.1134 | 175.0081 Gamma255 | 4.4334E-03 | 1.9147E-05 | 0.0000E+00 | 1.4631E-02 | 34892.3956 | 143.2977 Gamma256 | 5.4825E-03 | 2.9670E-05 | 1.3524E-04 | 1.4669E-02 | 19045.1706 | 262.5337 Gamma257 | 6.8300E-03 | 4.7635E-05 | 0.0000E+00 | 1.8941E-02 | 29033.1484 | 172.2169 Gamma258 | 1.3275E-02 | 9.1295E-05 | 7.4201E-04 | 3.0903E-02 | 28043.2345 | 178.2961 Gamma259 | 3.8815E-03 | 1.7372E-05 | 0.0000E+00 | 1.2537E-02 | 24454.0425 | 204.4652 Gamma260 | 1.0610E-02 | 5.7112E-05 | 3.7106E-04 | 2.4722E-02 | 31783.3337 | 157.3152 Gamma261 | 6.3711E-01 | 2.6489E-03 | 5.4235E-01 | 7.2877E-01 | 65788.9660 | 76.0006 Gamma262 | 8.8268E-03 | 9.8065E-05 | 3.1832E-05 | 2.7775E-02 | 24850.7622 | 201.2011 Gamma263 | 5.9210E-03 | 1.9170E-05 | 8.2042E-05 | 1.4713E-02 | 20750.6234 | 240.9566 Gamma264 | 7.3070E-02 | 3.7953E-04 | 4.4710E-02 | 1.1973E-01 | 40442.9323 | 123.6310 Gamma265 | 4.1779E-03 | 2.8985E-05 | 2.9634E-06 | 1.1795E-02 | 27245.2398 | 183.5183 Gamma266 | 3.6660E-03 | 1.0923E-05 | 0.0000E+00 | 1.1055E-02 | 28282.6364 | 176.7869 Gamma267 | 8.0333E-02 | 5.5329E-04 | 3.7376E-02 | 1.2613E-01 | 24671.3012 | 202.6646 Gamma268 | 3.5894E-02 | 5.3182E-04 | 9.1184E-04 | 7.6760E-02 | 22406.7080 | 223.1475 Gamma269 | 5.0000E-03 | 1.6646E-05 | 8.0503E-06 | 1.2885E-02 | 20377.3673 | 245.3703 Gamma270 | 6.0987E-03 | 5.0104E-05 | 0.0000E+00 | 1.9194E-02 | 20392.5245 | 245.1879 Gamma271 | 5.9221E-03 | 4.8925E-05 | 2.2163E-06 | 1.9486E-02 | 26498.4931 | 188.6900 Gamma272 | 5.4337E-03 | 3.5837E-05 | 3.2224E-05 | 1.7387E-02 | 24207.3908 | 206.5485 Gamma273 | 5.5041E-03 | 2.6215E-05 | 0.0000E+00 | 1.5638E-02 | 25953.9721 | 192.6487 Gamma274 | 5.7329E-03 | 1.9426E-05 | 6.6190E-06 | 1.4090E-02 | 13558.9442 | 368.7603 Gamma275 | 3.9834E-03 | 1.6668E-05 | 8.5724E-05 | 1.2867E-02 | 33148.8933 | 150.8346 Gamma276 | 2.9681E-03 | 7.5712E-06 | 3.9389E-05 | 7.4899E-03 | 24184.5647 | 206.7434 Gamma277 | 3.7934E-03 | 2.5369E-05 | 7.5299E-06 | 1.0718E-02 | 21527.6928 | 232.2590 Gamma278 | 4.1185E-03 | 1.0647E-05 | 0.0000E+00 | 1.0465E-02 | 33540.4810 | 149.0736 Gamma279 | 4.2983E-03 | 1.5452E-05 | 3.3467E-06 | 1.0948E-02 | 25237.1937 | 198.1203 Gamma280 | 1.0858E-02 | 2.4034E-05 | 9.7111E-04 | 2.0302E-02 | 13321.8237 | 375.3240 Gamma281 | 5.0261E-03 | 8.7573E-06 | 1.0469E-03 | 1.1525E-02 | 13832.8982 | 361.4572 Gamma282 | 6.5803E-03 | 1.8902E-05 | 2.3867E-05 | 1.5060E-02 | 19289.4499 | 259.2091 Gamma283 | 5.0972E-03 | 2.5601E-05 | 3.0547E-05 | 1.6494E-02 | 20443.3423 | 244.5784 Gamma284 | 4.7241E-02 | 9.0826E-05 | 2.9230E-02 | 6.4696E-02 | 12236.9987 | 408.5969 Gamma285 | 5.9855E-03 | 1.7391E-05 | 1.7215E-05 | 1.3667E-02 | 16319.6658 | 306.3788 Gamma286 | 6.7989E-03 | 4.6773E-05 | 4.4576E-06 | 1.2852E-02 | 37887.2385 | 131.9706 Gamma287 | 7.0529E-01 | 6.5625E-04 | 6.5503E-01 | 7.5514E-01 | 
69707.2689 | 71.7285 Gamma288 | 4.1422E-03 | 1.0170E-05 | 1.0113E-04 | 1.0852E-02 | 19079.0956 | 262.0669 Gamma289 | 4.8746E-02 | 1.5836E-04 | 3.0056E-02 | 7.2797E-02 | 26323.2367 | 189.9462 Gamma290 | 2.5543E-03 | 2.4406E-05 | 2.8525E-05 | 9.1834E-03 | 17013.3777 | 293.8864 Gamma291 | 1.1412E-03 | 3.3345E-06 | 0.0000E+00 | 2.4611E-03 | 17679.4058 | 282.8149 Gamma292 | 2.8599E-03 | 6.6291E-06 | 1.2848E-04 | 8.7528E-03 | 22401.1303 | 223.2030 Gamma293 | 1.1679E-01 | 2.8930E-04 | 8.4514E-02 | 1.4702E-01 | 67755.4724 | 73.7948 Gamma294 | 1.9097E-03 | 3.0563E-06 | 0.0000E+00 | 4.9579E-03 | 24372.6881 | 205.1477 Gamma295 | 2.5202E-03 | 6.4498E-06 | 0.0000E+00 | 6.4033E-03 | 38714.7146 | 129.1499 Gamma296 | 1.5994E-03 | 7.4314E-06 | 0.0000E+00 | 4.0390E-03 | 23306.1716 | 214.5354 Gamma297 | 3.7095E-03 | 6.1932E-05 | 7.2249E-05 | 1.0261E-02 | 17278.2065 | 289.3819 Gamma298 | 2.3703E-03 | 6.8958E-06 | 0.0000E+00 | 6.7324E-03 | 21231.4268 | 235.5000 Gamma299 | 1.8840E-03 | 6.6609E-06 | 1.8302E-06 | 6.1451E-03 | 19788.1064 | 252.6770 Gamma300 | 1.7226E-03 | 3.0834E-06 | 7.0403E-06 | 5.5546E-03 | 44347.4723 | 112.7460 Gamma301 | 2.9327E-02 | 4.2531E-04 | 3.2592E-03 | 6.3432E-02 | 30510.9965 | 163.8753 Gamma302 | 1.2290E-01 | 6.3671E-04 | 7.8302E-02 | 1.7072E-01 | 24623.5504 | 203.0576 Gamma303 | 7.1168E-03 | 2.6278E-05 | 2.5334E-04 | 1.8564E-02 | 15769.2543 | 317.0727 Gamma304 | 5.7247E-03 | 3.1242E-05 | 5.5859E-05 | 1.8575E-02 | 27674.4285 | 180.6722 Gamma305 | 4.6415E-03 | 6.6660E-05 | 1.1610E-04 | 1.1787E-02 | 18508.2806 | 270.1494 Gamma306 | 3.0414E-03 | 7.5922E-06 | 1.2860E-05 | 8.4326E-03 | 18594.0679 | 268.9030 Gamma307 | 3.2661E-03 | 1.1904E-05 | 2.5661E-06 | 1.0671E-02 | 61553.7933 | 81.2298 Gamma308 | 3.6162E-03 | 9.8520E-06 | 5.5995E-06 | 9.4542E-03 | 29464.3761 | 169.6964 Gamma309 | 1.8790E-03 | 3.4469E-06 | 0.0000E+00 | 5.8284E-03 | 28845.1062 | 173.3396 Gamma310 | 4.3399E-03 | 2.5936E-05 | 3.4381E-06 | 1.1586E-02 | 17978.2400 | 278.1140 Gamma311 | 4.5431E-03 | 9.5163E-06 | 2.4009E-04 | 9.5250E-03 | 28937.6912 | 172.7850 Gamma312 | 1.5384E-03 | 2.1657E-06 | 1.7842E-06 | 4.4478E-03 | 25970.4808 | 192.5263 Gamma313 | 7.3209E-01 | 1.1326E-03 | 6.6362E-01 | 7.8946E-01 | 16866.8588 | 296.4393 Gamma314 | 1.8622E-03 | 3.4278E-06 | 6.9320E-07 | 5.5181E-03 | 36575.9053 | 136.7020 Gamma315 | 3.1327E-03 | 4.8260E-06 | 1.2577E-06 | 7.2462E-03 | 16861.5038 | 296.5335 Gamma316 | 2.5437E-03 | 2.2604E-05 | 3.6101E-05 | 7.5467E-03 | 15749.6754 | 317.4669 Gamma317 | 9.1956E-03 | 3.5181E-05 | 4.3308E-05 | 2.1015E-02 | 27536.5460 | 181.5769 Gamma318 | 5.6658E-03 | 2.1964E-05 | 5.5923E-05 | 1.5150E-02 | 16059.1005 | 311.3499 Gamma319 | 4.1368E-03 | 2.9159E-05 | 5.8963E-05 | 1.1556E-02 | 21005.5369 | 238.0325 Gamma320 | 5.5329E-03 | 1.8947E-05 | 1.9608E-04 | 1.5499E-02 | 16956.3464 | 294.8748 Gamma321 | 2.4158E-02 | 6.8231E-05 | 1.0261E-02 | 3.7861E-02 | 6766.1119 | 738.9768 Gamma322 | 3.6947E-03 | 1.1449E-05 | 6.7122E-138 | 9.9310E-03 | 20442.0729 | 244.5936 Gamma323 | 9.9583E-03 | 5.0129E-05 | 5.4610E-04 | 2.2615E-02 | 24769.9109 | 201.8578 Gamma324 | 2.0962E-03 | 5.0383E-06 | 0.0000E+00 | 6.3166E-03 | 18788.8331 | 266.1155 Gamma325 | 4.0082E-03 | 1.7494E-05 | 1.9199E-05 | 1.4044E-02 | 35960.8150 | 139.0402 Gamma326 | 2.4669E-03 | 6.8498E-06 | 2.6688E-06 | 7.8518E-03 | 30845.1923 | 162.0998 Gamma327 | 2.7149E-03 | 1.0769E-05 | 2.8700E-246 | 7.3696E-03 | 20326.7589 | 245.9812 Gamma328 | 4.2796E-03 | 1.2370E-05 | 9.5174E-05 | 1.0872E-02 | 32612.1957 | 153.3169 Gamma329 | 2.5964E-03 | 7.0825E-06 | 5.3002E-06 | 
7.6621E-03 | 20614.7673 | 242.5446 Gamma330 | 1.7047E-03 | 1.7693E-05 | 9.7357E-06 | 5.5125E-03 | 23923.4003 | 209.0004 Gamma331 | 1.0921E-03 | 2.7578E-06 | 5.4262E-06 | 3.8869E-03 | 23087.1659 | 216.5705 Gamma332 | 1.5254E-03 | 4.0615E-06 | 2.2535E-05 | 3.9127E-03 | 8463.8927 | 590.7447 Gamma333 | 4.5659E-03 | 9.2828E-06 | 1.0744E-04 | 1.0630E-02 | 43707.7232 | 114.3963 Gamma334 | 8.1321E-03 | 2.3897E-05 | 1.5494E-03 | 1.8872E-02 | 22710.8803 | 220.1588 Gamma335 | 1.6005E-02 | 4.9995E-05 | 5.6207E-03 | 2.8615E-02 | 64528.7687 | 77.4848 Gamma336 | 2.3665E-02 | 6.6610E-05 | 1.0211E-02 | 4.3433E-02 | 21460.8618 | 232.9823 Gamma337 | 4.5288E-02 | 9.5400E-05 | 2.6571E-02 | 6.3111E-02 | 35760.0416 | 139.8209 Gamma338 | 1.9542E-03 | 2.6936E-06 | 1.1975E-105 | 5.3739E-03 | 16936.6509 | 295.2178 Gamma339 | 7.3149E-01 | 6.6108E-04 | 6.8932E-01 | 7.8483E-01 | 41576.3816 | 120.2606 Gamma340 | 1.4309E-03 | 1.5812E-05 | 8.7102E-06 | 3.1581E-03 | 17139.3611 | 291.7262 Gamma341 | 1.5732E-03 | 4.4854E-06 | 0.0000E+00 | 4.3259E-03 | 24026.0963 | 208.1070 Gamma342 | 5.0500E-03 | 1.5580E-05 | 2.3646E-04 | 1.1701E-02 | 33677.8372 | 148.4656 Gamma343 | 1.2851E-01 | 3.7326E-04 | 9.3754E-02 | 1.6243E-01 | 50482.0442 | 99.0451 Gamma344 | 1.1096E-03 | 1.0636E-05 | 1.0640E-103 | 3.2372E-03 | 20442.1127 | 244.5931 Gamma345 | 1.1524E-03 | 1.5336E-06 | 2.3512E-05 | 3.4638E-03 | 15780.5214 | 316.8463 Gamma346 | 2.6502E-03 | 7.4835E-06 | 1.1649E-209 | 6.1499E-03 | 23170.2820 | 215.7937 Gamma347 | 2.4587E-03 | 1.0587E-05 | 2.4929E-66 | 7.9508E-03 | 26962.0248 | 185.4460 Gamma348 | 2.5180E-03 | 4.8702E-06 | 0.0000E+00 | 7.5496E-03 | 18700.3745 | 267.3743 Gamma349 | 2.8176E-03 | 9.8475E-06 | 5.2767E-05 | 8.0141E-03 | 20869.7338 | 239.5814 Gamma350 | 3.2411E-03 | 2.9135E-05 | 2.8385E-06 | 1.0693E-02 | 25938.5472 | 192.7633 Gamma351 | 2.4793E-02 | 3.3018E-04 | 9.0062E-04 | 5.9350E-02 | 33327.8158 | 150.0248 Gamma352 | 5.1001E-02 | 6.1874E-04 | 2.6070E-03 | 9.2298E-02 | 37286.2379 | 134.0977 Gamma353 | 6.1075E-03 | 6.7308E-05 | 0.0000E+00 | 2.5894E-02 | 38588.0048 | 129.5739 Gamma354 | 3.7852E-03 | 1.4613E-05 | 0.0000E+00 | 1.2001E-02 | 8916.7918 | 560.7398 Gamma355 | 7.7790E-03 | 6.9994E-05 | 2.7015E-05 | 2.5643E-02 | 31182.7687 | 160.3450 Gamma356 | 4.9178E-03 | 3.2680E-05 | 0.0000E+00 | 1.7044E-02 | 34944.3649 | 143.0846 Gamma357 | 6.1894E-03 | 5.0146E-05 | 0.0000E+00 | 1.8185E-02 | 29584.5974 | 169.0069 Gamma358 | 7.1705E-03 | 6.8995E-05 | 0.0000E+00 | 2.7270E-02 | 24876.9587 | 200.9892 Gamma359 | 4.6590E-03 | 2.1364E-05 | 6.4461E-06 | 1.3523E-02 | 28535.3499 | 175.2213 Gamma360 | 9.1733E-03 | 5.5219E-05 | 3.8856E-05 | 2.3279E-02 | 37390.2793 | 133.7246 Gamma361 | 6.2608E-03 | 7.4079E-05 | 0.0000E+00 | 2.5291E-02 | 40482.6649 | 123.5097 Gamma362 | 3.8615E-03 | 1.0624E-05 | 7.8225E-06 | 9.8337E-03 | 14735.0059 | 339.3280 Gamma363 | 7.0250E-03 | 5.3764E-05 | 0.0000E+00 | 2.0566E-02 | 24005.2184 | 208.2880 Gamma364 | 4.3183E-03 | 2.7370E-05 | 1.8785E-06 | 1.2305E-02 | 29506.0729 | 169.4566 Gamma365 | 6.0414E-01 | 3.5649E-03 | 4.6988E-01 | 7.0603E-01 | 45696.0153 | 109.4187 Gamma366 | 1.6645E-02 | 1.3086E-04 | 4.1311E-04 | 3.2876E-02 | 12823.9967 | 389.8940 Gamma367 | 3.3010E-02 | 3.7879E-04 | 6.9911E-03 | 6.6756E-02 | 28993.9751 | 172.4496 Gamma368 | 1.8315E-02 | 1.6982E-04 | 3.3724E-04 | 4.5120E-02 | 12876.7647 | 388.2963 Gamma369 | 6.8801E-03 | 4.4235E-05 | 1.1847E-04 | 2.0388E-02 | 28752.5357 | 173.8977 Gamma370 | 1.0269E-01 | 1.5614E-03 | 2.4741E-02 | 1.7153E-01 | 36482.7202 | 137.0512 Gamma371 | 1.8484E-02 | 1.1383E-04 
| 1.8662E-04 | 3.6468E-02 | 22382.4058 | 223.3897 Gamma372 | 1.0651E-02 | 1.2899E-04 | 0.0000E+00 | 3.1775E-02 | 20097.0589 | 248.7926 Gamma373 | 1.7435E-02 | 1.5667E-04 | 3.3653E-04 | 3.8187E-02 | 20262.4381 | 246.7620 Gamma374 | 8.3769E-03 | 9.3021E-05 | 0.0000E+00 | 3.0907E-02 | 36945.3840 | 135.3349 Gamma375 | 1.6333E-02 | 1.4734E-04 | 8.4483E-04 | 4.1077E-02 | 27353.5296 | 182.7918 Gamma376 | 2.5206E-03 | 5.0930E-06 | 2.9205E-05 | 7.2401E-03 | 20979.2645 | 238.3306 Gamma377 | 3.1381E-03 | 9.6389E-06 | 0.0000E+00 | 9.6390E-03 | 23175.4021 | 215.7460 Gamma378 | 3.2783E-03 | 4.6262E-06 | 1.4914E-04 | 7.0207E-03 | 18371.5781 | 272.1595 Gamma379 | 1.9466E-03 | 4.3353E-06 | 1.9932E-05 | 6.0063E-03 | 37457.2433 | 133.4855 Gamma380 | 1.5848E-03 | 3.9237E-06 | 0.0000E+00 | 4.5166E-03 | 20140.3158 | 248.2583 Gamma381 | 3.8456E-03 | 2.2601E-05 | 2.3276E-05 | 1.4362E-02 | 28454.7058 | 175.7179 Gamma382 | 2.3889E-03 | 1.7898E-05 | 5.8433E-06 | 7.2489E-03 | 20698.5296 | 241.5631 Gamma383 | 1.9750E-03 | 4.4970E-06 | 5.0407E-06 | 5.5307E-03 | 20741.2644 | 241.0653 Gamma384 | 3.1523E-03 | 6.6443E-06 | 1.1953E-04 | 7.5092E-03 | 20241.7043 | 247.0148 Gamma385 | 3.1986E-02 | 6.3682E-05 | 1.9275E-02 | 4.8261E-02 | 20170.9672 | 247.8810 Gamma386 | 1.9295E-03 | 5.7482E-06 | 9.4874E-06 | 6.3732E-03 | 21484.8529 | 232.7221 Gamma387 | 1.3705E-03 | 2.1416E-06 | 9.1667E-06 | 4.3620E-03 | 30328.2155 | 164.8630 Gamma388 | 1.6270E-03 | 2.5758E-06 | 1.9328E-07 | 4.6204E-03 | 27632.8669 | 180.9439 Gamma389 | 1.7055E-03 | 2.6181E-06 | 0.0000E+00 | 5.0687E-03 | 27168.1551 | 184.0390 Gamma390 | 1.0854E-02 | 2.1510E-05 | 3.3281E-03 | 2.0403E-02 | 29059.2892 | 172.0620 Gamma391 | 7.1509E-01 | 5.9987E-04 | 6.6839E-01 | 7.6232E-01 | 16853.6152 | 296.6723 Gamma392 | 2.8702E-03 | 1.3946E-05 | 0.0000E+00 | 1.1007E-02 | 33926.4352 | 147.3777 Gamma393 | 3.2691E-03 | 1.4818E-05 | 5.2630E-06 | 1.2622E-02 | 12896.3590 | 387.7063 Gamma394 | 3.4814E-02 | 9.0375E-05 | 1.6997E-02 | 5.2017E-02 | 22320.0818 | 224.0135 Gamma395 | 1.3375E-01 | 2.6785E-04 | 9.8756E-02 | 1.6443E-01 | 31435.8188 | 159.0542 Gamma396 | 3.6185E-03 | 1.2274E-05 | 2.1532E-04 | 1.0387E-02 | 17364.7334 | 287.9399 Gamma397 | 3.5225E-03 | 1.0651E-05 | 0.0000E+00 | 9.8298E-03 | 23028.6780 | 217.1206 Gamma398 | 1.2023E-02 | 4.8850E-05 | 5.7406E-04 | 2.4273E-02 | 43936.2306 | 113.8013 Gamma399 | 7.1335E-03 | 2.2365E-05 | 2.8489E-04 | 1.5523E-02 | 19134.3241 | 261.3105 Gamma400 | 1.0605E-02 | 3.5265E-05 | 2.2637E-03 | 1.9447E-02 | 18513.1736 | 270.0780 Gamma401 | 1.4145E-01 | 5.6709E-04 | 9.5771E-02 | 1.8805E-01 | 46561.6260 | 107.3846 Gamma402 | 1.1253E-02 | 6.0266E-05 | 8.2796E-06 | 2.6947E-02 | 31745.4089 | 157.5031 Gamma403 | 1.0800E-02 | 2.5388E-05 | 1.8994E-03 | 2.0844E-02 | 14008.5219 | 356.9256 Gamma404 | 1.4723E-03 | 2.4668E-06 | 8.5575E-07 | 4.0427E-03 | 27293.1690 | 183.1960 Gamma405 | 5.1690E-03 | 6.1864E-06 | 8.0014E-04 | 1.0367E-02 | 17573.1706 | 284.5246 Gamma406 | 2.4912E-03 | 4.0882E-06 | 1.5439E-04 | 6.6381E-03 | 17474.4905 | 286.1314 Gamma407 | 2.6837E-03 | 5.2157E-06 | 3.7603E-05 | 7.0031E-03 | 25231.3303 | 198.1663 Gamma408 | 2.7472E-03 | 7.4211E-06 | 4.1349E-05 | 8.4109E-03 | 83638.5474 | 59.7810 Gamma409 | 2.4838E-03 | 8.3758E-06 | 3.0316E-05 | 8.3449E-03 | 14372.5505 | 347.8854 Gamma410 | 9.7625E-03 | 3.1184E-05 | 8.8820E-04 | 2.0287E-02 | 23276.3846 | 214.8100 Gamma411 | 2.5801E-02 | 7.9244E-05 | 6.7707E-03 | 4.3522E-02 | 31942.0713 | 156.5334 Gamma412 | 3.0284E-03 | 6.9727E-06 | 1.9368E-05 | 7.8618E-03 | 24849.1321 | 201.2143 Gamma413 | 
4.9011E-03 | 1.0844E-05 | 9.8124E-05 | 1.0737E-02 | 12502.6231 | 399.9161 Gamma414 | 6.3975E-03 | 1.7203E-05 | 1.3161E-03 | 1.6126E-02 | 13817.0982 | 361.8705 Gamma415 | 5.2195E-03 | 1.0166E-05 | 1.4682E-04 | 1.1186E-02 | 17327.3056 | 288.5619 Gamma416 | 2.4188E-03 | 3.8366E-06 | 0.0000E+00 | 5.6751E-03 | 13496.7349 | 370.4600 Gamma417 | 7.0950E-01 | 6.3763E-04 | 6.5662E-01 | 7.5761E-01 | 46169.2866 | 108.2971 Gamma418 | 1.0090E-02 | 4.1201E-05 | 2.1459E-04 | 1.9922E-02 | 49123.0598 | 101.7852 Gamma419 | 2.8005E-03 | 7.3404E-06 | 4.7947E-05 | 9.7403E-03 | 25931.2775 | 192.8173 Gamma420 | 4.8523E-03 | 2.1230E-05 | 1.3610E-05 | 1.5738E-02 | 16063.6154 | 311.2624 Gamma421 | 2.1635E-02 | 6.2529E-05 | 6.5801E-03 | 3.7325E-02 | 16900.7761 | 295.8444 Gamma422 | 2.8975E-03 | 6.7063E-06 | 0.0000E+00 | 8.1080E-03 | 27776.0669 | 180.0111 Gamma423 | 5.6688E-03 | 2.0444E-05 | 6.2900E-05 | 1.4027E-02 | 15309.3126 | 326.5986 Gamma424 | 2.8593E-03 | 8.1163E-06 | 8.5713E-06 | 8.6236E-03 | 64005.0544 | 78.1188 Gamma425 | 1.6124E-03 | 2.2344E-06 | 3.4202E-06 | 4.5395E-03 | 25226.5574 | 198.2038 Gamma426 | 2.0740E-03 | 4.0899E-06 | 2.0857E-05 | 6.1500E-03 | 31370.0390 | 159.3878 Gamma427 | 9.9544E-03 | 2.0338E-05 | 6.3560E-04 | 1.6414E-02 | 43002.3537 | 116.2727 Gamma428 | 2.9525E-03 | 7.1782E-06 | 2.1226E-04 | 7.0641E-03 | 25471.4357 | 196.2983 Gamma429 | 1.6369E-03 | 2.7502E-06 | 9.2684E-06 | 4.9787E-03 | 24336.0265 | 205.4567 Gamma430 | 1.3922E-03 | 3.1853E-06 | 0.0000E+00 | 5.0678E-03 | 32406.8350 | 154.2884 Gamma431 | 1.4245E-03 | 6.8981E-06 | 5.7031E-06 | 4.4679E-03 | 26461.6035 | 188.9530 Gamma432 | 1.4795E-03 | 2.3451E-06 | 6.7581E-06 | 3.8377E-03 | 28039.3865 | 178.3206 Gamma433 | 1.5035E-02 | 6.3199E-05 | 3.4628E-03 | 2.6602E-02 | 43256.3488 | 115.5900 Gamma434 | 5.0448E-03 | 7.4097E-06 | 3.6836E-04 | 1.0538E-02 | 33605.9976 | 148.7830 Gamma435 | 1.7197E-03 | 9.7988E-06 | 6.9005E-06 | 4.5959E-03 | 14081.1043 | 355.0858 Gamma436 | 3.7395E-03 | 1.7747E-05 | 9.4233E-05 | 1.1876E-02 | 17872.4177 | 279.7607 Gamma437 | 4.4831E-02 | 8.2069E-05 | 2.4317E-02 | 5.9053E-02 | 55420.5429 | 90.2193 Gamma438 | 2.0099E-03 | 2.9398E-06 | 4.0242E-06 | 5.3215E-03 | 23310.4666 | 214.4959 Gamma439 | 5.7917E-02 | 5.4776E-05 | 4.4385E-02 | 7.1556E-02 | 18974.4135 | 263.5128 Gamma440 | 1.2071E-03 | 8.7368E-07 | 2.0884E-05 | 3.0730E-03 | 21556.4801 | 231.9488 Gamma441 | 7.7877E-04 | 1.2320E-06 | 3.9643E-06 | 2.7132E-03 | 34442.7165 | 145.1686 Gamma442 | 2.8372E-03 | 6.8045E-06 | 3.3803E-06 | 5.9055E-03 | 14566.5388 | 343.2524 Gamma443 | 8.3304E-01 | 1.6314E-04 | 8.0840E-01 | 8.5844E-01 | 32451.6730 | 154.0753 Gamma444 | 7.0302E-04 | 5.6680E-07 | 2.1415E-235 | 1.9467E-03 | 22644.6403 | 220.8028 Gamma445 | 1.2805E-03 | 1.7479E-06 | 0.0000E+00 | 3.6397E-03 | 22109.1297 | 226.1509 Gamma446 | 7.8547E-04 | 8.4742E-07 | 1.3043E-05 | 2.7585E-03 | 31651.8098 | 157.9689 Gamma447 | 2.0870E-03 | 4.3782E-06 | 1.0134E-05 | 6.9000E-03 | 20642.3162 | 242.2209 Gamma448 | 2.9245E-03 | 1.2113E-05 | 2.6652E-06 | 6.9217E-03 | 13529.3141 | 369.5679 Gamma449 | 1.1445E-03 | 1.0144E-06 | 1.9544E-05 | 3.1386E-03 | 20352.2298 | 245.6733 Gamma450 | 2.0047E-03 | 3.2108E-06 | 3.2630E-06 | 5.7242E-03 | 17725.7266 | 282.0759 Gamma451 | 7.1105E-03 | 4.8820E-05 | 5.8517E-06 | 2.3692E-02 | 26338.7144 | 189.8346 Gamma452 | 1.2013E-02 | 1.0252E-04 | 1.1897E-05 | 3.1603E-02 | 23952.7096 | 208.7447 Gamma453 | 4.1419E-03 | 3.2262E-05 | 8.8946E-05 | 1.6086E-02 | 48457.2786 | 103.1837 Gamma454 | 6.0563E-03 | 1.1015E-04 | 0.0000E+00 | 1.7585E-02 | 22774.3455 | 
219.5453 Gamma455 | 5.1958E-03 | 1.9394E-05 | 1.4232E-04 | 1.3593E-02 | 29107.8441 | 171.7750 Gamma456 | 9.3003E-03 | 3.1841E-05 | 3.9921E-04 | 2.1125E-02 | 24715.7050 | 202.3005 Gamma457 | 5.6489E-03 | 2.3863E-05 | 0.0000E+00 | 1.6950E-02 | 25998.2037 | 192.3210 Gamma458 | 2.2142E-03 | 5.1309E-06 | 0.0000E+00 | 7.4352E-03 | 33010.7600 | 151.4658 Gamma459 | 3.2802E-03 | 1.5509E-05 | 0.0000E+00 | 8.8765E-03 | 11562.4319 | 432.4350 Gamma460 | 2.5709E-02 | 1.0707E-04 | 7.3453E-03 | 4.4883E-02 | 19696.9727 | 253.8461 Gamma461 | 3.2744E-03 | 8.1703E-06 | 2.0800E-05 | 8.9429E-03 | 50279.2180 | 99.4447 Gamma462 | 3.7221E-03 | 1.0454E-05 | 2.1471E-06 | 1.1410E-02 | 21433.3031 | 233.2818 Gamma463 | 3.6508E-03 | 1.4389E-05 | 1.4977E-05 | 1.0892E-02 | 28069.2042 | 178.1312 Gamma464 | 4.3139E-03 | 1.7651E-05 | 5.4271E-05 | 1.2524E-02 | 21386.6295 | 233.7909 Gamma465 | 6.8082E-03 | 2.5981E-05 | 3.0086E-04 | 1.5746E-02 | 15262.2073 | 327.6066 Gamma466 | 6.6788E-02 | 2.5808E-04 | 3.8270E-02 | 9.6085E-02 | 33162.6412 | 150.7721 Gamma467 | 4.7363E-03 | 2.5146E-05 | 2.1756E-05 | 1.3580E-02 | 24894.7832 | 200.8453 Gamma468 | 2.5679E-03 | 1.2170E-05 | 0.0000E+00 | 9.7807E-03 | 45902.5130 | 108.9265 Gamma469 | 5.5808E-01 | 1.4357E-03 | 4.7133E-01 | 6.2205E-01 | 28278.1542 | 176.8149 Gamma470 | 1.6805E-01 | 8.0602E-04 | 1.2388E-01 | 2.3427E-01 | 35056.1824 | 142.6282 Gamma471 | 4.1599E-03 | 2.0401E-05 | 1.7421E-06 | 1.2528E-02 | 17364.7550 | 287.9396 Gamma472 | 7.6984E-03 | 5.7707E-05 | 1.7294E-66 | 2.1739E-02 | 33902.9657 | 147.4797 Gamma473 | 5.9583E-02 | 5.5918E-04 | 1.4012E-02 | 9.9320E-02 | 102335.1947 | 48.8590 Gamma474 | 5.1308E-03 | 3.7137E-05 | 2.9308E-05 | 1.9383E-02 | 18410.6335 | 271.5822 Gamma475 | 2.0768E-02 | 6.8060E-05 | 6.6840E-03 | 3.7132E-02 | 18835.5876 | 265.4550 Gamma476 | 4.5246E-03 | 2.9527E-05 | 1.2246E-05 | 1.5367E-02 | 28418.4006 | 175.9423 Gamma477 | 7.9723E-03 | 3.4245E-05 | 3.6387E-04 | 1.6831E-02 | 28131.9693 | 177.7337 Gamma478 | 1.8001E-03 | 3.8621E-06 | 3.6605E-06 | 5.4703E-03 | 34500.3679 | 144.9260 Gamma479 | 2.1294E-03 | 2.6664E-06 | 2.6298E-06 | 5.2907E-03 | 10274.7777 | 486.6285 Gamma480 | 1.8446E-03 | 3.9387E-06 | 2.2398E-05 | 5.2873E-03 | 45227.5387 | 110.5521 Gamma481 | 1.7509E-03 | 3.8855E-06 | 3.6611E-97 | 4.4719E-03 | 21426.9780 | 233.3507 Gamma482 | 2.7513E-03 | 1.2285E-05 | 1.4434E-05 | 6.9678E-03 | 25084.5453 | 199.3259 Gamma483 | 3.2090E-03 | 9.4208E-06 | 6.0894E-05 | 8.6159E-03 | 35241.6479 | 141.8776 Gamma484 | 1.2637E-03 | 1.5646E-06 | 0.0000E+00 | 3.2678E-03 | 26065.9621 | 191.8210 Gamma485 | 6.6660E-03 | 1.2467E-05 | 1.1263E-03 | 1.3715E-02 | 21821.2805 | 229.1341 Gamma486 | 9.9720E-04 | 1.2262E-06 | 0.0000E+00 | 3.8547E-03 | 40447.7451 | 123.6163 Gamma487 | 1.6581E-03 | 5.8962E-06 | 8.4528E-06 | 5.3982E-03 | 11862.0810 | 421.5112 Gamma488 | 3.1126E-03 | 6.7946E-06 | 6.7868E-05 | 7.3754E-03 | 23350.4316 | 214.1288 Gamma489 | 1.2443E-03 | 2.4282E-06 | 6.0437E-06 | 3.9047E-03 | 26562.3243 | 188.2365 Gamma490 | 1.1049E-02 | 2.9569E-05 | 8.7339E-04 | 1.9780E-02 | 23895.0789 | 209.2481 Gamma491 | 4.8344E-02 | 1.0015E-04 | 2.9564E-02 | 6.9302E-02 | 25991.7975 | 192.3684 Gamma492 | 2.3050E-03 | 3.3382E-06 | 9.9957E-05 | 5.4752E-03 | 17442.8134 | 286.6510 Gamma493 | 1.8940E-03 | 1.0811E-05 | 0.0000E+00 | 4.7140E-03 | 30163.3613 | 165.7640 Gamma494 | 3.2402E-02 | 5.7514E-05 | 2.1006E-02 | 4.7081E-02 | 24356.8705 | 205.2809 Gamma495 | 8.2957E-01 | 3.3422E-04 | 7.9682E-01 | 8.6667E-01 | 20813.6358 | 240.2271 Gamma496 | 1.7734E-03 | 2.7448E-06 | 1.5778E-06 | 5.3464E-03 
| 27772.7873 | 180.0323 Gamma497 | 2.8291E-03 | 7.4174E-06 | 2.0672E-238 | 7.7762E-03 | 66433.4908 | 75.2632 Gamma498 | 1.9807E-02 | 1.1193E-04 | 4.7007E-03 | 3.9870E-02 | 55844.1523 | 89.5349 Gamma499 | 3.7384E-03 | 1.0750E-05 | 6.1443E-06 | 9.4147E-03 | 27539.7138 | 181.5560 Gamma500 | 5.3652E-03 | 9.2246E-06 | 8.6062E-04 | 1.1890E-02 | 72948.9489 | 68.5411 Gamma501 | 1.3157E-01 | 2.7214E-04 | 1.0350E-01 | 1.6442E-01 | 16829.9330 | 297.0897 Gamma502 | 3.6686E-03 | 1.1225E-05 | 1.7460E-06 | 1.0535E-02 | 37864.7786 | 132.0488 Gamma503 | 8.1465E-03 | 1.5319E-05 | 2.2713E-03 | 1.5033E-02 | 16359.8070 | 305.6271 Gamma504 | 3.0005E-03 | 7.1040E-06 | 2.8331E-07 | 8.7553E-03 | 22353.0670 | 223.6830 Gamma505 | 2.3504E-03 | 4.2734E-06 | 6.3767E-06 | 6.3886E-03 | 19613.4301 | 254.9274 Gamma506 | 3.4265E-03 | 6.4921E-06 | 1.4338E-04 | 7.8648E-03 | 28198.0034 | 177.3175 Gamma507 | 2.2086E-03 | 4.2489E-06 | 6.6654E-06 | 6.9498E-03 | 28960.2604 | 172.6504 Gamma508 | 1.8329E-03 | 3.4288E-06 | 2.5296E-05 | 4.4587E-03 | 20578.9871 | 242.9663 Gamma509 | 1.7202E-03 | 3.1004E-06 | 0.0000E+00 | 5.2366E-03 | 22627.6876 | 220.9682 Gamma510 | 2.4846E-02 | 5.0564E-05 | 8.7965E-03 | 3.6573E-02 | 19652.0538 | 254.4263 Gamma511 | 3.0427E-03 | 7.0590E-06 | 4.7212E-07 | 7.9829E-03 | 24954.0345 | 200.3684 Gamma512 | 1.6581E-03 | 2.4683E-06 | 5.2152E-07 | 4.7452E-03 | 42106.3515 | 118.7469 Gamma513 | 2.7028E-03 | 7.0896E-06 | 3.5590E-205 | 8.0023E-03 | 33245.8452 | 150.3947 Gamma514 | 2.5833E-03 | 8.3656E-06 | 1.1496E-06 | 6.5102E-03 | 21024.7301 | 237.8152 Gamma515 | 1.4949E-02 | 2.1112E-05 | 7.6734E-03 | 2.3874E-02 | 14787.2077 | 338.1301 Gamma516 | 4.5785E-03 | 9.8908E-06 | 1.4013E-04 | 9.7391E-03 | 28731.1410 | 174.0272 Gamma517 | 2.1947E-02 | 1.2392E-04 | 7.0573E-03 | 4.2178E-02 | 11922.9571 | 419.3591 Gamma518 | 2.0214E-03 | 3.1687E-06 | 1.8265E-05 | 5.6039E-03 | 16504.4170 | 302.9492 Gamma519 | 1.9063E-03 | 1.5384E-05 | 0.0000E+00 | 5.5721E-03 | 17525.8421 | 285.2930 Gamma520 | 2.5680E-03 | 1.0041E-05 | 1.2352E-253 | 7.7990E-03 | 17917.6273 | 279.0548 Gamma521 | 7.4956E-01 | 6.0872E-04 | 7.0241E-01 | 7.8861E-01 | 30806.1306 | 162.3054 Gamma522 | 2.2303E-03 | 4.3630E-06 | 1.5289E-05 | 5.6639E-03 | 9786.4206 | 510.9120 Gamma523 | 3.3383E-03 | 2.8680E-05 | 1.7586E-06 | 1.4263E-02 | 31699.7803 | 157.7298 Gamma524 | 1.2529E-03 | 1.4225E-06 | 0.0000E+00 | 3.6943E-03 | 12450.4991 | 401.5903 Gamma525 | 2.8892E-03 | 3.2576E-05 | 2.7673E-06 | 1.0769E-02 | 24567.9091 | 203.5175 Gamma526 | 1.0670E-03 | 1.3093E-06 | 3.7588E-305 | 3.7336E-03 | 25525.8323 | 195.8800 Gamma527 | 6.8261E-03 | 2.3397E-05 | 2.8033E-04 | 1.6208E-02 | 51743.5942 | 96.6303 Gamma528 | 3.3637E-02 | 2.6144E-04 | 2.8814E-03 | 5.1314E-02 | 56874.1537 | 87.9134 Gamma529 | 1.2611E-02 | 3.3020E-05 | 1.6657E-03 | 2.2728E-02 | 169947.5985 | 29.4208 Gamma530 | 1.6999E-03 | 5.2823E-06 | 1.3537E-05 | 4.3540E-03 | 25726.4129 | 194.3528 Gamma531 | 3.5967E-03 | 3.5060E-06 | 8.3529E-06 | 7.0724E-03 | 17345.1166 | 288.2656 Gamma532 | 3.1536E-03 | 5.6114E-06 | 7.2577E-113 | 7.7095E-03 | 33188.7328 | 150.6535 Gamma533 | 8.3641E-03 | 1.9661E-05 | 1.6485E-03 | 1.7416E-02 | 26295.3614 | 190.1476 Gamma534 | 8.4514E-03 | 2.7209E-05 | 2.8089E-04 | 1.7232E-02 | 124216.1730 | 40.2524 Gamma535 | 2.6716E-03 | 1.3608E-05 | 4.0078E-06 | 7.4991E-03 | 81553.5622 | 61.3094 Gamma536 | 9.3193E-04 | 2.9252E-06 | 7.9932E-06 | 3.8403E-03 | 18179.8773 | 275.0294 Gamma537 | 1.1114E-03 | 1.3498E-06 | 6.6574E-06 | 3.2932E-03 | 19534.9622 | 255.9514 Gamma538 | 1.5258E-03 | 3.8594E-06 | 2.1416E-06 
| 4.9796E-03 | 15821.6917 | 316.0218 Gamma539 | 1.3697E-03 | 6.3098E-06 | 2.7123E-06 | 5.3163E-03 | 18226.7989 | 274.3213 Gamma540 | 1.8537E-03 | 8.1875E-06 | 9.3136E-07 | 8.5293E-03 | 103611.6168 | 48.2571 Gamma541 | 1.5786E-03 | 8.1487E-06 | 8.1707E-06 | 4.0892E-03 | 21485.0467 | 232.7200 Gamma542 | 1.4865E-03 | 6.0516E-06 | 0.0000E+00 | 5.9249E-03 | 27535.5807 | 181.5832 Gamma543 | 1.1454E-03 | 2.7441E-06 | 9.3162E-06 | 4.0491E-03 | 28161.6175 | 177.5466 Gamma544 | 1.0501E-03 | 3.8784E-06 | 9.7212E-238 | 4.1070E-03 | 17075.4013 | 292.8189 Gamma545 | 1.1498E-03 | 1.1439E-06 | 2.2421E-06 | 3.1272E-03 | 12510.7885 | 399.6551 Gamma546 | 8.7337E-04 | 2.8368E-06 | 1.0350E-05 | 2.3897E-03 | 12978.9781 | 385.2383 Gamma547 | 8.8294E-01 | 2.7952E-04 | 8.5074E-01 | 9.1435E-01 | 18172.1093 | 275.1469 Gamma548 | 5.6532E-03 | 2.3565E-05 | 6.5690E-05 | 1.6203E-02 | 123254.6503 | 40.5664 Gamma549 | 1.4508E-03 | 1.4327E-05 | 6.4857E-07 | 4.0106E-03 | 10878.6212 | 459.6171 Gamma550 | 1.3803E-02 | 1.8331E-04 | 2.2917E-03 | 4.4794E-02 | 34666.3023 | 144.2323 Gamma551 | 6.7310E-03 | 2.7680E-05 | 1.1266E-04 | 1.7192E-02 | 35142.0623 | 142.2796 Gamma552 | 2.4209E-01 | 8.7090E-03 | 3.8008E-02 | 3.7133E-01 | 162869.1094 | 30.6995 Gamma553 | 3.1470E-03 | 6.9356E-06 | 2.8290E-05 | 8.0054E-03 | 33272.2618 | 150.2753 Gamma554 | 3.9453E-03 | 1.3509E-05 | 1.0103E-04 | 1.2127E-02 | 24520.4763 | 203.9112 Gamma555 | 3.6845E-03 | 9.3521E-06 | 5.2495E-05 | 9.5950E-03 | 24818.6934 | 201.4610 Gamma556 | 7.3746E-03 | 2.0730E-05 | 1.2310E-03 | 1.6435E-02 | 29287.3525 | 170.7222 Gamma557 | 4.5495E-03 | 1.0316E-05 | 3.0097E-04 | 1.0594E-02 | 21781.4116 | 229.5535 Gamma558 | 9.9648E-03 | 5.5931E-05 | 9.4161E-04 | 2.7924E-02 | 32907.5859 | 151.9407 Gamma559 | 1.9224E-03 | 3.7264E-06 | 1.4318E-05 | 5.3916E-03 | 25555.6939 | 195.6511 Gamma560 | 8.1419E-03 | 2.9276E-05 | 1.1023E-03 | 1.9852E-02 | 81626.1404 | 61.2549 Gamma561 | 1.9963E-03 | 7.0339E-06 | 1.4485E-286 | 5.6362E-03 | 39974.1572 | 125.0808 Gamma562 | 1.4446E-03 | 2.2048E-06 | 1.4879E-05 | 4.5418E-03 | 44515.1380 | 112.3213 Gamma563 | 6.0201E-03 | 5.3759E-05 | 9.1406E-05 | 1.8311E-02 | 43368.0244 | 115.2923 Gamma564 | 2.6932E-03 | 6.5481E-06 | 8.6165E-05 | 7.5424E-03 | 19725.0931 | 253.4842 Gamma565 | 4.9005E-03 | 8.1615E-06 | 3.5200E-04 | 1.0600E-02 | 45810.4813 | 109.1453 Gamma566 | 6.7075E-03 | 6.6712E-05 | 3.4673E-06 | 1.4944E-02 | 50579.6758 | 98.8539 Gamma567 | 3.6168E-03 | 1.1999E-05 | 1.0541E-04 | 1.0273E-02 | 22432.4382 | 222.8915 Gamma568 | 5.4354E-03 | 2.0577E-05 | 4.0865E-07 | 1.4374E-02 | 34782.8385 | 143.7491 Gamma569 | 1.3744E-02 | 4.3454E-05 | 4.8660E-03 | 2.7866E-02 | 89800.7279 | 55.6788 Gamma570 | 2.0294E-02 | 2.9998E-04 | 2.1426E-03 | 6.0205E-02 | 108491.3389 | 46.0866 Gamma571 | 1.2219E-03 | 2.3025E-06 | 0.0000E+00 | 3.5279E-03 | 26854.9602 | 186.1853 Gamma572 | 1.0003E-02 | 3.9214E-05 | 6.9248E-04 | 2.2608E-02 | 70626.9782 | 70.7945 Gamma573 | 6.1290E-01 | 1.9880E-02 | 3.9848E-01 | 9.0596E-01 | 63459.6414 | 78.7902 Gamma574 | 7.6587E-03 | 2.6901E-05 | 1.7922E-04 | 1.6135E-02 | 37523.9088 | 133.2484 Gamma575 | 9.8089E-03 | 4.2591E-05 | 1.0069E-03 | 2.3786E-02 | 63298.2454 | 78.9911 Gamma576 | 8.3441E-03 | 5.9303E-05 | 1.0641E-04 | 2.4359E-02 | 48248.9802 | 103.6291 Gamma577 | 1.1359E-01 | 5.5179E-04 | 7.9416E-02 | 1.6391E-01 | 29470.7144 | 169.6600 Gamma578 | 2.0572E-03 | 4.9978E-06 | 1.9914E-06 | 5.1430E-03 | 19695.0448 | 253.8710 Gamma579 | 5.0992E-03 | 1.8647E-05 | 1.0823E-05 | 1.2779E-02 | 45835.5124 | 109.0857 Gamma580 | 3.0145E-03 | 1.6992E-05 | 
9.0696E-06 | 9.2287E-03 | 18390.8363 | 271.8745 Gamma581 | 6.3175E-03 | 2.1863E-05 | 2.0929E-04 | 1.5309E-02 | 37372.7549 | 133.7873 Gamma582 | 5.5664E-03 | 1.5330E-05 | 2.5135E-04 | 1.2761E-02 | 27636.1311 | 180.9226 Gamma583 | 7.1495E-03 | 2.5749E-05 | 2.6355E-05 | 1.7411E-02 | 16700.0796 | 299.3998 Gamma584 | 2.3959E-03 | 9.8418E-06 | 0.0000E+00 | 8.6192E-03 | 31609.9241 | 158.1782 Gamma585 | 1.3901E-02 | 5.7898E-05 | 1.1593E-04 | 2.6953E-02 | 28774.4422 | 173.7653 Gamma586 | 4.5053E-03 | 1.5922E-05 | 1.3999E-05 | 1.1896E-02 | 24670.6474 | 202.6700 Gamma587 | 2.3749E-03 | 4.3713E-06 | 4.6294E-06 | 6.3873E-03 | 18980.6039 | 263.4268 Gamma588 | 6.2443E-03 | 1.9280E-05 | 6.4128E-04 | 1.6022E-02 | 19087.7909 | 261.9475 Gamma589 | 2.6017E-03 | 7.3585E-06 | 0.0000E+00 | 7.6996E-03 | 29942.2713 | 166.9880 Gamma590 | 3.9186E-03 | 3.8209E-05 | 8.5562E-07 | 8.2811E-03 | 38640.8552 | 129.3967 Gamma591 | 3.3120E-02 | 1.3032E-04 | 1.4226E-02 | 5.1841E-02 | 14839.2761 | 336.9437 Gamma592 | 1.1356E-02 | 6.4249E-05 | 3.9847E-04 | 2.5152E-02 | 36637.9856 | 136.4704 Gamma593 | 2.5612E-03 | 7.2270E-06 | 1.8899E-05 | 8.2262E-03 | 43793.8830 | 114.1712 Gamma594 | 4.0829E-03 | 9.9005E-06 | 1.9282E-139 | 9.1253E-03 | 26159.3926 | 191.1359 Gamma595 | 1.6455E-02 | 1.4091E-04 | 9.8861E-04 | 4.0410E-02 | 25487.0538 | 196.1780 Gamma596 | 4.6105E-03 | 7.1124E-05 | 1.6447E-05 | 2.3031E-02 | 12820.6463 | 389.9959 Gamma597 | 1.2425E-02 | 1.0413E-04 | 2.9849E-04 | 3.5192E-02 | 29425.0675 | 169.9231 Gamma598 | 5.9868E-03 | 2.9668E-05 | 9.7568E-307 | 1.4641E-02 | 19891.5709 | 251.3628 Gamma599 | 7.1850E-01 | 1.4458E-03 | 6.4775E-01 | 7.9540E-01 | 43147.8720 | 115.8806 Gamma600 | 3.8137E-03 | 1.5407E-05 | 3.7176E-107 | 1.2743E-02 | 36807.9025 | 135.8404 Gamma601 | 4.5700E-03 | 2.5936E-05 | 1.2255E-05 | 1.4629E-02 | 61878.8181 | 80.8031 Gamma602 | 5.9703E-02 | 5.4771E-04 | 8.3209E-03 | 9.8580E-02 | 97691.4403 | 51.1816 Gamma603 | 3.7547E-03 | 1.2219E-05 | 1.3979E-06 | 1.0969E-02 | 18740.0983 | 266.8076 Gamma604 | 3.3626E-03 | 3.0958E-05 | 7.2764E-06 | 9.5617E-03 | 23786.6937 | 210.2016 Gamma605 | 2.8119E-03 | 1.2861E-05 | 1.2029E-05 | 1.0376E-02 | 39130.1276 | 127.7788 Gamma606 | 8.8095E-03 | 3.7352E-05 | 7.7500E-04 | 2.0562E-02 | 16802.9664 | 297.5665 Gamma607 | 5.9677E-03 | 2.1292E-05 | 1.5958E-05 | 1.4337E-02 | 26793.4490 | 186.6128 Gamma608 | 6.4906E-03 | 3.8496E-05 | 8.2154E-05 | 1.8456E-02 | 25738.6885 | 194.2601 Gamma609 | 3.4075E-03 | 1.1365E-05 | 1.9948E-06 | 9.8028E-03 | 19701.1370 | 253.7925 Gamma610 | 3.7020E-02 | 1.5706E-04 | 1.8155E-02 | 6.0903E-02 | 30342.1325 | 164.7874 Gamma611 | 2.7671E-03 | 7.2091E-06 | 1.7110E-06 | 8.7172E-03 | 13700.4153 | 364.9524 Gamma612 | 2.5592E-03 | 7.2477E-06 | 0.0000E+00 | 7.1332E-03 | 42692.5417 | 117.1165 Gamma613 | 5.1943E-03 | 1.5530E-05 | 1.3503E-06 | 1.2906E-02 | 51838.7161 | 96.4530 Gamma614 | 3.7065E-03 | 1.0699E-05 | 2.1408E-06 | 1.0606E-02 | 24862.7780 | 201.1038 Gamma615 | 4.0503E-03 | 9.8144E-06 | 4.5949E-05 | 1.0229E-02 | 42991.9561 | 116.3008 Gamma616 | 1.5509E-02 | 6.6455E-05 | 5.7150E-03 | 3.5507E-02 | 34656.8095 | 144.2718 Gamma617 | 5.9456E-03 | 2.8665E-05 | 1.7264E-04 | 1.7430E-02 | 31807.6331 | 157.1950 Gamma618 | 1.0786E-02 | 5.9151E-05 | 5.5395E-05 | 2.5488E-02 | 16244.7560 | 307.7916 Gamma619 | 8.3767E-03 | 2.0980E-05 | 3.4127E-04 | 1.7033E-02 | 16412.7446 | 304.6413 Gamma620 | 1.8622E-02 | 2.2345E-04 | 2.2046E-04 | 5.1229E-02 | 134100.5341 | 37.2855 Gamma621 | 3.7281E-03 | 1.7661E-05 | 0.0000E+00 | 1.0159E-02 | 30196.9492 | 165.5796 Gamma622 | 9.0645E-02 
| 2.0393E-03 | 2.7301E-02 | 1.7825E-01 | 13895.2991 | 359.8339 Gamma623 | 1.4109E-02 | 1.2263E-04 | 8.9471E-05 | 3.6890E-02 | 39533.6827 | 126.4744 Gamma624 | 5.2883E-03 | 2.2810E-05 | 0.0000E+00 | 1.3935E-02 | 33696.7899 | 148.3821 Gamma625 | 6.7281E-01 | 1.8216E-03 | 5.8844E-01 | 7.4477E-01 | 102972.9902 | 48.5564 mean00 | 3.3153E+01 | 6.2187E-06 | 3.3148E+01 | 3.3157E+01 | 92575.8051 | 54.0098 mean01 | 1.3207E+02 | 2.1123E-05 | 1.3206E+02 | 1.3207E+02 | 40296.2464 | 124.0810 mean02 | 3.3530E+01 | 1.1900E-05 | 3.3524E+01 | 3.3535E+01 | 23306.5483 | 214.5320 mean03 | 1.3223E+02 | 1.5145E-05 | 1.3223E+02 | 1.3224E+02 | 18904.7864 | 264.4833 mean04 | 3.3143E+01 | 3.1642E-05 | 3.3134E+01 | 3.3154E+01 | 312025.9022 | 16.0243 mean05 | 1.3222E+02 | 9.8978E-05 | 1.3221E+02 | 1.3224E+02 | 185982.0670 | 26.8843 mean06 | 3.3227E+01 | 4.6647E-05 | 3.3215E+01 | 3.3240E+01 | 142592.6571 | 35.0649 mean07 | 1.3218E+02 | 3.5168E-05 | 1.3217E+02 | 1.3219E+02 | 45558.0184 | 109.7502 mean08 | 3.3646E+01 | 4.3518E-05 | 3.3640E+01 | 3.3660E+01 | 36341.5325 | 137.5836 mean09 | 1.3244E+02 | 1.0831E-05 | 1.3244E+02 | 1.3245E+02 | 14098.1236 | 354.6571 mean10 | 3.3349E+01 | 2.8868E-05 | 3.3337E+01 | 3.3358E+01 | 42864.7199 | 116.6460 mean11 | 1.3212E+02 | 3.0243E-05 | 1.3211E+02 | 1.3213E+02 | 19599.8797 | 255.1036 mean12 | 3.3524E+01 | 5.5641E-05 | 3.3512E+01 | 3.3540E+01 | 55988.4161 | 89.3042 mean13 | 1.3238E+02 | 4.4698E-05 | 1.3237E+02 | 1.3239E+02 | 40444.0229 | 123.6277 mean14 | 3.3468E+01 | 4.4562E-05 | 3.3455E+01 | 3.3476E+01 | 136234.5080 | 36.7014 mean15 | 1.3274E+02 | 4.3066E-05 | 1.3274E+02 | 1.3276E+02 | 142630.1781 | 35.0557 mean16 | 3.3375E+01 | 9.4142E-05 | 3.3356E+01 | 3.3389E+01 | 113550.4454 | 44.0333 mean17 | 1.3251E+02 | 7.9729E-05 | 1.3249E+02 | 1.3252E+02 | 42178.8880 | 118.5427 mean18 | 3.3736E+01 | 6.2646E-04 | 3.3699E+01 | 3.3785E+01 | 214245.2839 | 23.3377 mean19 | 1.3315E+02 | 1.3344E-03 | 1.3309E+02 | 1.3322E+02 | 104416.6575 | 47.8851 mean20 | 3.3720E+01 | 7.8763E-05 | 3.3704E+01 | 3.3737E+01 | 69508.4940 | 71.9337 mean21 | 1.3294E+02 | 9.5506E-05 | 1.3292E+02 | 1.3296E+02 | 108031.1650 | 46.2829 mean22 | 3.3536E+01 | 5.8907E-06 | 3.3531E+01 | 3.3541E+01 | 18076.7866 | 276.5978 mean23 | 1.3269E+02 | 4.6310E-05 | 1.3268E+02 | 1.3271E+02 | 66099.5260 | 75.6435 mean24 | 3.3957E+01 | 2.6005E-05 | 3.3950E+01 | 3.3965E+01 | 4633.3471 | 1079.1335 mean25 | 1.3324E+02 | 1.9592E-05 | 1.3323E+02 | 1.3324E+02 | 13775.2710 | 362.9693 mean26 | 3.3621E+01 | 1.9563E-05 | 3.3615E+01 | 3.3627E+01 | 75168.1397 | 66.5175 mean27 | 1.3286E+02 | 2.7768E-05 | 1.3285E+02 | 1.3287E+02 | 72290.7471 | 69.1651 mean28 | 3.3967E+01 | 1.9263E-06 | 3.3964E+01 | 3.3969E+01 | 7635.2819 | 654.8547 mean29 | 1.3344E+02 | 9.1469E-06 | 1.3344E+02 | 1.3345E+02 | 21692.2676 | 230.4969 mean30 | 3.3965E+01 | 1.7468E-06 | 3.3962E+01 | 3.3968E+01 | 30107.9168 | 166.0693 mean31 | 1.3372E+02 | 2.5383E-05 | 1.3371E+02 | 1.3373E+02 | 29180.7359 | 171.3459 mean32 | 3.3807E+01 | 9.2728E-06 | 3.3800E+01 | 3.3811E+01 | 51407.7307 | 97.2616 mean33 | 1.3317E+02 | 5.2719E-05 | 1.3316E+02 | 1.3318E+02 | 64555.7808 | 77.4524 mean34 | 3.3589E+01 | 1.6618E-05 | 3.3583E+01 | 3.3596E+01 | 24340.1354 | 205.4220 mean35 | 1.3289E+02 | 1.0972E-05 | 1.3288E+02 | 1.3289E+02 | 25239.1536 | 198.1049 mean36 | 3.3957E+01 | 5.4964E-06 | 3.3953E+01 | 3.3961E+01 | 33494.4226 | 149.2786 mean37 | 1.3388E+02 | 1.4084E-05 | 1.3387E+02 | 1.3389E+02 | 19086.3787 | 261.9669 mean38 | 3.3869E+01 | 1.1937E-04 | 3.3852E+01 | 3.3888E+01 | 146107.2442 | 34.2214 mean39 | 
1.3381E+02 | 2.1818E-04 | 1.3379E+02 | 1.3383E+02 | 126098.6469 | 39.6515 mean40 | 3.3899E+01 | 6.3637E-06 | 3.3896E+01 | 3.3903E+01 | 29684.1219 | 168.4402 mean41 | 1.3334E+02 | 1.2572E-05 | 1.3334E+02 | 1.3335E+02 | 28284.2642 | 176.7767 mean42 | 3.4005E+01 | 8.7598E-05 | 3.3990E+01 | 3.4022E+01 | 34455.4313 | 145.1150 mean43 | 1.3439E+02 | 7.7585E-05 | 1.3437E+02 | 1.3441E+02 | 44015.9496 | 113.5952 mean44 | 3.4089E+01 | 1.0985E-05 | 3.4087E+01 | 3.4092E+01 | 10783.1209 | 463.6876 mean45 | 1.3389E+02 | 2.1354E-06 | 1.3389E+02 | 1.3390E+02 | 16814.1550 | 297.3685 mean46 | 3.4030E+01 | 1.4371E-05 | 3.4023E+01 | 3.4037E+01 | 24344.4476 | 205.3856 mean47 | 1.3368E+02 | 6.3650E-06 | 1.3368E+02 | 1.3369E+02 | 26869.3262 | 186.0858 mean48 | 3.3974E+01 | 1.8305E-05 | 3.3969E+01 | 3.3982E+01 | 16220.0577 | 308.2603 mean49 | 1.3418E+02 | 4.3357E-05 | 1.3417E+02 | 1.3419E+02 | 33984.2140 | 147.1271 sig0 | 1.0397E-02 | 5.1172E-06 | 7.3910E-03 | 1.4208E-02 | 101753.1033 | 49.1386 sig1 | 1.1494E-02 | 2.2762E-05 | 6.9227E-03 | 1.6940E-02 | 89742.0891 | 55.7152 sig2 | 2.2147E-02 | 7.5291E-06 | 1.7002E-02 | 2.6493E-02 | 18702.3423 | 267.3462 sig3 | 2.5312E-02 | 1.3278E-05 | 2.0509E-02 | 3.3648E-02 | 16758.0436 | 298.3642 sig4 | 6.7377E-02 | 3.4771E-05 | 5.8035E-02 | 7.7321E-02 | 90886.8280 | 55.0135 sig5 | 1.2037E-01 | 8.2671E-05 | 1.0577E-01 | 1.3751E-01 | 79241.5240 | 63.0982 sig6 | 7.6083E-02 | 1.2216E-05 | 6.9972E-02 | 8.3122E-02 | 55230.0747 | 90.5304 sig7 | 8.9054E-02 | 1.8748E-05 | 8.0247E-02 | 9.5237E-02 | 33024.4230 | 151.4031 sig8 | 2.3056E-02 | 3.7189E-06 | 1.8870E-02 | 2.6342E-02 | 23756.2145 | 210.4712 sig9 | 2.7127E-02 | 3.8963E-06 | 2.4081E-02 | 3.1347E-02 | 28984.7066 | 172.5048 sig10 | 4.4864E-02 | 1.5913E-05 | 3.8071E-02 | 5.1483E-02 | 21330.6113 | 234.4049 sig11 | 2.8569E-02 | 1.6763E-05 | 2.2633E-02 | 3.5289E-02 | 79364.6376 | 63.0004 sig12 | 7.3945E-02 | 1.4626E-05 | 6.6090E-02 | 8.0964E-02 | 16359.3461 | 305.6357 sig13 | 6.0247E-02 | 4.9425E-05 | 5.0572E-02 | 7.5667E-02 | 45307.7279 | 110.3564 sig14 | 4.7560E-02 | 6.7658E-05 | 3.7400E-02 | 5.9961E-02 | 71546.1485 | 69.8850 sig15 | 4.5253E-02 | 5.1485E-05 | 3.2712E-02 | 5.8367E-02 | 122438.7684 | 40.8367 sig16 | 8.1149E-02 | 6.4461E-05 | 7.3386E-02 | 9.7219E-02 | 35379.4699 | 141.3249 sig17 | 9.8157E-02 | 1.0560E-04 | 8.1357E-02 | 1.1856E-01 | 65447.2037 | 76.3975 sig18 | 1.7481E-01 | 4.9086E-04 | 1.4254E-01 | 2.1546E-01 | 118957.1502 | 42.0319 sig19 | 3.7826E-01 | 2.1703E-03 | 2.9221E-01 | 4.5453E-01 | 98079.9965 | 50.9788 sig20 | 6.3216E-02 | 3.0365E-05 | 5.2831E-02 | 7.1833E-02 | 63138.9261 | 79.1905 sig21 | 8.3011E-02 | 3.0218E-05 | 7.2707E-02 | 9.3286E-02 | 67121.1605 | 74.4922 sig22 | 4.2591E-02 | 5.3225E-06 | 3.8782E-02 | 4.7331E-02 | 11863.5828 | 421.4578 sig23 | 7.9679E-02 | 1.6019E-05 | 7.4067E-02 | 8.7572E-02 | 30533.1187 | 163.7566 sig24 | 4.0758E-02 | 1.3113E-05 | 3.5724E-02 | 4.5320E-02 | 18413.7036 | 271.5369 sig25 | 3.6311E-02 | 5.4108E-05 | 2.5687E-02 | 4.5761E-02 | 319437.7767 | 15.6525 sig26 | 3.3717E-02 | 1.8502E-05 | 2.7022E-02 | 4.1190E-02 | 38797.8225 | 128.8732 sig27 | 3.2957E-02 | 1.9160E-05 | 2.7969E-02 | 4.3278E-02 | 85318.1770 | 58.6042 sig28 | 1.3705E-02 | 1.3658E-06 | 1.1576E-02 | 1.5649E-02 | 11658.0919 | 428.8867 sig29 | 1.7861E-02 | 5.2670E-05 | 1.3820E-02 | 2.7240E-02 | 8839.8629 | 565.6196 sig30 | 2.3851E-02 | 4.9658E-06 | 2.1458E-02 | 2.6093E-02 | 20196.0798 | 247.5728 sig31 | 6.3047E-02 | 1.3221E-05 | 5.5082E-02 | 6.8559E-02 | 20046.4462 | 249.4208 sig32 | 3.3400E-02 | 3.7970E-06 | 2.9305E-02 | 
3.6581E-02 | 50200.1253 | 99.6013 sig33 | 8.9384E-02 | 1.3365E-05 | 8.3359E-02 | 9.6302E-02 | 40222.3843 | 124.3089 sig34 | 1.6552E-02 | 7.3247E-05 | 1.2323E-02 | 2.3667E-02 | 20320.9017 | 246.0521 sig35 | 1.9078E-02 | 9.0345E-06 | 1.5246E-02 | 2.6690E-02 | 18222.0431 | 274.3929 sig36 | 2.9753E-02 | 3.2640E-06 | 2.6535E-02 | 3.3308E-02 | 12711.3278 | 393.3499 sig37 | 3.9156E-02 | 9.5006E-06 | 3.3603E-02 | 4.3555E-02 | 15530.8432 | 321.9400 sig38 | 4.3736E-02 | 8.1981E-05 | 3.3489E-02 | 5.9352E-02 | 87473.4563 | 57.1602 sig39 | 5.6483E-02 | 7.0326E-05 | 4.4777E-02 | 7.5767E-02 | 44498.5518 | 112.3632 sig40 | 3.2181E-02 | 1.0031E-05 | 2.9040E-02 | 3.4087E-02 | 18767.5229 | 266.4177 sig41 | 5.8665E-02 | 7.4578E-06 | 5.3512E-02 | 6.2982E-02 | 8008.1789 | 624.3617 sig42 | 4.3597E-02 | 2.8268E-05 | 3.5905E-02 | 5.3344E-02 | 35259.9516 | 141.8039 sig43 | 4.2660E-02 | 1.8198E-05 | 3.4373E-02 | 5.0306E-02 | 31057.7121 | 160.9906 sig44 | 2.0933E-02 | 2.7451E-05 | 1.7862E-02 | 2.2511E-02 | 15465.7156 | 323.2957 sig45 | 2.1647E-02 | 1.2810E-06 | 1.9749E-02 | 2.4108E-02 | 18882.7570 | 264.7918 sig46 | 3.4843E-02 | 1.1288E-05 | 3.0400E-02 | 4.1280E-02 | 12880.6637 | 388.1788 sig47 | 2.9820E-02 | 4.6206E-06 | 2.5811E-02 | 3.3832E-02 | 14872.4660 | 336.1917 sig48 | 3.8268E-02 | 1.7943E-05 | 3.2641E-02 | 4.4989E-02 | 37424.5456 | 133.6022 sig49 | 7.3317E-02 | 4.4131E-05 | 6.2792E-02 | 8.3872E-02 | 37615.4550 | 132.9241 rho0 | -1.6297E-02 | 5.2185E-02 | -4.6878E-01 | 3.7495E-01 | 26512.5997 | 188.5896 rho1 | -6.2773E-01 | 9.5917E-03 | -7.8494E-01 | -4.5777E-01 | 26590.6405 | 188.0361 rho2 | 9.3617E-01 | 1.7745E-04 | 9.1451E-01 | 9.5750E-01 | 230515.6332 | 21.6905 rho3 | 3.8187E-01 | 2.3841E-03 | 2.6518E-01 | 4.5914E-01 | 44177.0986 | 113.1808 rho4 | -4.7966E-01 | 4.1922E-03 | -5.9941E-01 | -3.5422E-01 | 19050.5638 | 262.4594 rho5 | 1.1060E-01 | 1.5303E-02 | -1.2180E-01 | 2.9967E-01 | 57679.1470 | 86.6864 rho6 | -4.9259E-01 | 2.0951E-02 | -6.5592E-01 | -8.0627E-02 | 116877.5009 | 42.7798 rho7 | -9.3090E-01 | 3.7978E-04 | -9.5990E-01 | -8.9145E-01 | 105535.7505 | 47.3773 rho8 | 7.6369E-01 | 8.3119E-04 | 6.9430E-01 | 8.1311E-01 | 46400.7472 | 107.7569 rho9 | 7.9118E-01 | 1.3539E-02 | 5.9939E-01 | 9.1923E-01 | 145895.9604 | 34.2710 rho10 | 2.3060E-02 | 3.1655E-02 | -2.6114E-01 | 3.6095E-01 | 144155.6079 | 34.6847 rho11 | -6.2912E-02 | 7.3775E-03 | -1.9036E-01 | 1.2205E-01 | 27496.0159 | 181.8445 rho12 | -2.5315E-01 | 7.2828E-03 | -3.9030E-01 | -6.6908E-02 | 38868.7962 | 128.6379 rho13 | 5.4412E-01 | 5.7701E-03 | 4.2760E-01 | 7.1766E-01 | 86363.9160 | 57.8945 rho14 | 3.1846E-02 | 6.8279E-03 | -1.2925E-01 | 1.6057E-01 | 39234.2156 | 127.4398 rho15 | -4.9963E-03 | 3.9516E-03 | -1.1434E-01 | 1.0178E-01 | 42726.6201 | 117.0231 rho16 | 3.6540E-01 | 3.1031E-03 | 2.8682E-01 | 4.8095E-01 | 126902.3632 | 39.4004 rho17 | -7.3498E-01 | 5.5361E-03 | -8.5863E-01 | -6.2345E-01 | 23100.7809 | 216.4429 rho18 | -5.7640E-01 | 3.0451E-03 | -6.7629E-01 | -4.8114E-01 | 29505.4889 | 169.4600 rho19 | -9.6126E-01 | 1.9381E-04 | -9.8534E-01 | -9.3612E-01 | 42478.2570 | 117.7073 rho20 | -4.5866E-01 | 1.6254E-03 | -5.2164E-01 | -3.5626E-01 | 36729.7248 | 136.1295 rho21 | 6.6679E-01 | 6.4360E-03 | 5.3659E-01 | 8.0395E-01 | 25260.9133 | 197.9343 rho22 | -5.5560E-01 | 1.3427E-03 | -6.1193E-01 | -4.7605E-01 | 25955.2914 | 192.6389 rho23 | 2.9981E-01 | 4.8525E-03 | 1.7641E-01 | 4.3718E-01 | 14442.0845 | 346.2104 rho24 | -8.1558E-01 | 1.4636E-03 | -8.8326E-01 | -7.4729E-01 | 51720.3490 | 96.6737 p0 | 9.4841E-03 | 8.8157E-06 | 3.7773E-03 | 
1.4609E-02 | 19695.7896 | 253.8614 p1 | 1.3091E-03 | 5.0283E-07 | 6.7313E-04 | 1.9354E-03 | 19060.7841 | 262.3187 p2 | 5.2091E-01 | 3.1162E-02 | 2.0918E-01 | 7.0672E-01 | 39399.0310 | 126.9067 p3 | 6.8907E-01 | 1.1123E-03 | 6.3295E-01 | 7.5842E-01 | 122348.5804 | 40.8668 p4 | 4.8204E-01 | 1.1005E-03 | 4.0973E-01 | 5.4387E-01 | 25151.2564 | 198.7972 p5 | 6.1286E-01 | 3.2653E-03 | 5.2855E-01 | 7.4532E-01 | 45865.3068 | 109.0149 p6 | 4.5864E-01 | 1.9018E-03 | 3.7606E-01 | 5.4930E-01 | 33871.6865 | 147.6159 p7 | 6.4258E-01 | 3.2882E-02 | 3.4405E-01 | 1.0000E+00 | 257471.7527 | 19.4196 p8 | 4.9533E-01 | 2.1998E-03 | 4.2800E-01 | 5.9699E-01 | 82995.0090 | 60.2446 p9 | 9.6980E-01 | 1.4932E-03 | 8.9655E-01 | 9.9985E-01 | 38433.8119 | 130.0938 p10 | 7.1425E-01 | 3.0174E-03 | 6.2227E-01 | 7.9271E-01 | 15718.6406 | 318.0937 p11 | 7.7223E-01 | 9.7882E-04 | 7.0868E-01 | 8.3627E-01 | 45482.2677 | 109.9330 p12 | 4.1091E-01 | 2.1401E-03 | 3.2501E-01 | 4.9506E-01 | 29664.4736 | 168.5518 p13 | 7.0933E-01 | 8.8617E-04 | 6.5631E-01 | 7.6445E-01 | 47446.1868 | 105.3825 p14 | 5.4156E-01 | 5.2855E-03 | 4.2643E-01 | 6.7479E-01 | 24435.7993 | 204.6178 p15 | 8.4811E-01 | 6.0109E-04 | 8.0251E-01 | 8.9474E-01 | 20822.5178 | 240.1247 p16 | 6.0290E-01 | 8.1748E-04 | 5.4996E-01 | 6.6087E-01 | 22349.5548 | 223.7181 p17 | 5.6851E-02 | 7.7910E-05 | 3.9254E-02 | 7.0971E-02 | 14496.7123 | 344.9058 p18 | 8.3788E-01 | 2.4751E-03 | 7.5064E-01 | 9.5049E-01 | 31895.1454 | 156.7637 p19 | 3.6076E-02 | 7.1633E-05 | 1.8826E-02 | 5.1437E-02 | 48798.1506 | 102.4629 p20 | 7.9248E-01 | 1.0195E-03 | 7.4217E-01 | 8.4501E-01 | 19238.1132 | 259.9007 p21 | 2.4458E-02 | 3.2593E-04 | 1.0902E-02 | 6.8324E-02 | 37652.1742 | 132.7945 p22 | 2.1440E-01 | 7.4085E-03 | 7.4024E-02 | 3.8601E-01 | 108925.6785 | 45.9029 p23 | 4.4622E-01 | 2.4041E-03 | 3.5150E-01 | 5.2919E-01 | 20916.1565 | 239.0497 p24 | 4.5593E-01 | 1.6771E-03 | 3.7982E-01 | 5.3879E-01 | 129941.7537 | 38.4788
2024-09-04T02:54:58.066099
2020-03-07T04:21:02
2003.03513
{ "authors": "Thoan Pham Duc, Tuyen Nguyen Dang and Vangty Noulorvang", "full_text_license": null, "license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/", "provenance": "arxiv-papers-0000.json.gz:26096", "submitter": "Duc Thoan Pham", "url": "https://arxiv.org/abs/2003.03513" }
arxiv-papers
# Finiteness of meromorphic mappings from Kähler manifold into projective space

Pham Duc Thoan, Department of Mathematics, National University of Civil Engineering, 55 Giai Phong street, Hai Ba Trung, Hanoi, Vietnam.<EMAIL_ADDRESS>
Nguyen Dang Tuyen, Department of Mathematics, National University of Civil Engineering, 55 Giai Phong street, Hai Ba Trung, Hanoi, Vietnam.<EMAIL_ADDRESS>
Noulorvang Vangty, Department of Mathematics, National University of Education, 136 Xuan Thuy str., Hanoi, Vietnam.<EMAIL_ADDRESS>

###### Abstract.

The purpose of this paper is to prove finiteness theorems for meromorphic mappings of a complete connected Kähler manifold into projective space sharing few hyperplanes in subgeneral position without counting multiplicity, where all zeros with multiplicities more than a certain number are omitted. Our results are extensions and generalizations of some recent ones.

2010 Mathematics Subject Classification: Primary 32H30, 32A22; Secondary 30D35. Key words and phrases: finiteness theorems, meromorphic mapping, complete Kähler manifold.

## 1\. Introduction

Let $f$ be a non-constant meromorphic mapping of $\mathbb{C}^{m}$ into $\mathbb{P}^{n}(\mathbb{C})$ and let $H$ be a hyperplane in $\mathbb{P}^{n}(\mathbb{C})$. Denote by $\nu_{(f,H)}(z)$ the intersection multiplicity of the mapping $f$ with the hyperplane $H$ at the point $z$. For a divisor $\nu$ on $\mathbb{C}^{m}$ and for a positive integer $k$ or $k=+\infty$, we set $\nu_{\leqslant k}(z)=\begin{cases}0&{\text{ if }}\nu(z)>k,\\\ \nu(z)&{\text{ if }}\nu(z)\leqslant k.\end{cases}$ Similarly, we define $\nu_{>k}(z).$ If $\varphi$ is a meromorphic function, the zero divisor of $\varphi$ is denoted by $\nu_{\varphi}.$

Let $H_{1},H_{2},\ldots,H_{q}$ be hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ (in subgeneral position or in general position) and let $k_{1},\ldots,k_{q}$ be positive integers or $+\infty$. Assume that $f$ is a meromorphic mapping satisfying $\dim\\{z:\nu_{(f,H_{i}),\leqslant k_{i}}(z)\cdot\nu_{(f,H_{j}),\leqslant k_{j}}(z)>0\\}\leqslant m-2\ \ (1\leqslant i<j\leqslant q).$ Let $d$ be a positive integer. We denote by $\mathcal{F}(f,\\{H_{j},k_{j}\\}_{j=1}^{q},d)$ the set of all meromorphic mappings $g:\mathbb{C}^{m}\to\mathbb{P}^{n}(\mathbb{C})$ satisfying the following two conditions:

* (a) $\min(\nu_{(f,H_{j}),\leqslant k_{j}},d)=\min(\nu_{(g,H_{j}),\leqslant k_{j}},d)$ ($1\leqslant j\leqslant q$).
* (b) $f(z)=g(z)$ on $\bigcup_{j=1}^{q}\\{z:\nu_{(f,H_{j}),\leqslant k_{j}}(z)>0\\}$.

If $k_{1}=\cdots=k_{q}=+\infty$, we will simply use the notation $\mathcal{F}(f,\\{H_{j}\\}_{j=1}^{q},d)$ instead of $\mathcal{F}(f,\\{H_{j},\infty\\}_{j=1}^{q},d).$

In 1926, Nevanlinna [8] showed that two distinct nonconstant meromorphic functions $f$ and $g$ on the complex plane cannot have the same inverse images for five distinct values, and that $g$ is a linear fractional transformation of $f$ if they have the same inverse images, counted with multiplicities, for four distinct values. After that, many authors have extended and improved Nevanlinna's results to the case of meromorphic mappings into complex projective spaces, such as Fujimoto [3, 5, 6], Smiley [15], Ru-Sogome [14], Chen-Yan [1], Dethloff-Tan [2], Quang [16, 17, 18, 19], Nhung-Quynh [9], and others. These theorems are called uniqueness theorems or finiteness theorems.
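To illustrate the truncation levels introduced above with a small worked example: if $k_{j}=3$ and $f$ meets $H_{j}$ with multiplicity $5$ at a point $z_{0}$ and with multiplicity $2$ at a point $z_{1}$, then $\nu_{(f,H_{j}),\leqslant k_{j}}(z_{0})=0$ while $\nu_{(f,H_{j}),\leqslant k_{j}}(z_{1})=2$, so the zero of order $5$ is simply discarded. In particular, for $d=1$ condition (a) only requires that $f$ and $g$ have the same zero sets, ignoring multiplicities, once all intersection points of multiplicity greater than $k_{j}$ have been omitted.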
The first finiteness theorem for the case of meromorphic mappings from $\mathbb{C}^{m}$ into complex projective space $\mathbb{P}^{n}(\mathbb{C})$ sharing $2n+2$ hyperplanes was given by Quang [17] in 2012, together with its correction [20] in 2015. Recently, he [18] extended his results and obtained the following finiteness theorem, in which zeros with multiplicities greater than certain values need not be counted.

Theorem A (see [18, Theorem 1.1]). Let $f$ be a linearly nondegenerate meromorphic mapping of $\mathbb{C}^{m}$ into $\mathbb{P}^{n}(\mathbb{C})$. Let $H_{1},\ldots,H_{2n+2}$ be $2n+2$ hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ in general position and let $k_{1},\ldots,k_{2n+2}$ be positive integers or $+\infty$. Assume that

$\sum_{i=1}^{2n+2}\frac{1}{k_{i}+1}<\min\left\\{\frac{n+1}{3n^{2}+n},\frac{5n-9}{24n+12},\frac{n^{2}-1}{10n^{2}+8n}\right\\}.$

Then $\sharp\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{2n+2},1)\leqslant 2.$

Note that the condition $\displaystyle\sum_{i=1}^{2n+2}\frac{1}{k_{i}+1}<\min\left\\{\frac{n+1}{3n^{2}+n},\frac{5n-9}{24n+12},\frac{n^{2}-1}{10n^{2}+8n}\right\\}$ in Theorem A becomes $\displaystyle\sum_{i=1}^{2n+2}\frac{1}{k_{i}+1}<\frac{n+1}{3n^{2}+n}$ when $n\geq 5.$

We now consider the general case, where $f:M\to\mathbb{P}^{n}(\mathbb{C})$ is a meromorphic mapping of an $m$-dimensional complete connected Kähler manifold $M$, whose universal covering is biholomorphic to a ball $B(R_{0})=\\{z\in\mathbb{C}^{m}\ :\ ||z||<R_{0}\\}$ $(0<R_{0}\leqslant\infty)$, into $\mathbb{P}^{n}(\mathbb{C})$. Let $H_{1},\ldots,H_{q}$ be hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ and let $k_{1},\ldots,k_{q}$ be positive integers or $+\infty$. Then the family $\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},d)$ is defined similarly as above, where $d$ is a positive integer.

For $\rho\geqslant 0,$ we say that $f$ satisfies the condition $(C_{\rho})$ if there exists a nonzero bounded continuous real-valued function $h$ on $M$ such that

$\rho\Omega_{f}+dd^{c}\log h^{2}\geq\text{Ric}\,\omega,$

where $\Omega_{f}$ is the pull-back of the Fubini–Study form $\Omega$ on $\mathbb{P}^{n}(\mathbb{C})$, $\omega=\dfrac{\sqrt{-1}}{2}\sum_{i,j}h_{i\bar{j}}dz_{i}\wedge d\overline{z}_{j}$ is the Kähler form on $M$, $\text{Ric}\,\omega=dd^{c}\log(\det(h_{i\overline{j}}))$, $d=\partial+\overline{\partial}$ and $d^{c}=\dfrac{\sqrt{-1}}{4\pi}(\overline{\partial}-\partial)$.

Very recently, Quang [19] obtained a finiteness theorem for meromorphic mappings from such a Kähler manifold $M$ into $\mathbb{P}^{n}(\mathbb{C})$ sharing hyperplanes regardless of multiplicities, by giving new definitions of “functions of small integration” and “functions of bounded integration” as well as by proposing a new method to deal with the difficulties arising on the Kähler manifold. We would like to emphasize that Quang's result is also the first finiteness theorem for meromorphic mappings on Kähler manifolds, although uniqueness theorems were discovered earlier by Fujimoto [5] and later by many authors such as Ru-Sogome [14] or Nhung-Quynh [9] and others. Here is his result.

Theorem B (see [19, Theorem 1.1]). Let $M$ be an $m$-dimensional connected Kähler manifold whose universal covering is biholomorphic to $\mathbb{C}^{m}$ or the unit ball $B(1)$ of $\mathbb{C}^{m}$, and let $f$ be a linearly nondegenerate meromorphic mapping of $M$ into $\mathbb{P}^{n}(\mathbb{C})\ (n\geqslant 2)$. Let $H_{1},\ldots,H_{q}$ be $q$ hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ in general position. Assume that $f$ satisfies the condition $(C_{\rho})$.
If $\displaystyle q>n+1+\frac{3nq}{6n+1}+\rho\frac{(n^{2}+4q-3n)(6n+1)}{6n^{2}+2}$ then $\sharp\mathcal{F}(f,\\{H_{i}\\}_{i=1}^{q},1)\leqslant 2.$

Unfortunately, in this result all zeros, regardless of their multiplicities, need to be counted, and hence Theorem B cannot be an extension or a generalization of Theorem A. Our purpose in this article is to prove a result similar to Theorems A and B for the case of a meromorphic mapping from a complete connected Kähler manifold into projective space, in which all zeros with multiplicities more than a certain number are omitted. However, the key tool in the proof of Theorem A is the technique of “rearranging counting functions”, used to compare counting functions with characteristic functions, which is not valid on the Kähler manifold. In addition, the proof of Theorem B does not work in the case $k_{i}<\infty$. To overcome these difficulties, we use the technique in [22] and the methods in [19], as well as new auxiliary functions, to obtain a new finiteness theorem which generalizes and extends the theorems cited above. Namely, we will prove the following theorem.

###### Theorem 1.1.

Let $M$ be an $m$-dimensional connected Kähler manifold whose universal covering is biholomorphic to $\mathbb{C}^{m}$ or the unit ball $B(1)$ of $\mathbb{C}^{m}$, and let $f$ be a linearly nondegenerate meromorphic mapping of $M$ into $\mathbb{P}^{n}(\mathbb{C})\ (n\geqslant 2)$. Let $H_{1},\ldots,H_{q}$ be $q$ hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ in $N$-subgeneral position and let $k_{1},\ldots,k_{q}$ be positive integers or $+\infty$. Assume that $f$ satisfies the condition $(C_{\rho})$. Let $k$ be the largest integer not exceeding $\dfrac{q-2N-2}{2}$ and let $l$ be the smallest integer not less than $\dfrac{2N-2}{k+2}+2$ if $k>0$, or let $l=2N+1$ if $k=0.$ Then $\sharp\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)\leqslant 2$ if

$q>2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{4(q-n)n}{n-1}\big{)}+\max\left\\{\frac{3nq}{2\big{(}3n+1+\frac{n-1}{l}\big{)}},\frac{4q+3nq-14}{4q+3n-14},\frac{3nq^{2}}{6nq+(n-2)(q-2)+4q-6n-8}\right\\}.$

Remark 1. It is easy to see that

$\dfrac{3nq}{2\big{(}3n+1+\frac{n-1}{l}\big{)}}<\dfrac{3nq}{6n+2}<\dfrac{3nq}{6n+1},$

and

$\dfrac{3nq^{2}}{6nq+(n-2)(q-2)+4q-6n-8}<\dfrac{3nq^{2}}{6nq+q}=\dfrac{3nq}{6n+1}\quad\text{for all }n\geq 2.$

We now show that $\frac{4q+3nq-14}{4q+3n-14}<\dfrac{3nq}{6n+1}$ for all $n\geq 3.$ Indeed, it suffices to prove that $12nq^{2}-9n^{2}q-69nq-4q+84n+14>0$ for all $n\geq 3.$ Since $q\geq 2n+2$, we have $12nq^{2}-9n^{2}q-69nq-4q\geq q(15n^{2}-45n-4)>0$ for all $n\geq 4.$ For $n=3,$ we have $12nq^{2}-9n^{2}q-69nq-4q+84n+14=36q^{2}-292q+266>0$ since $q\geq 8.$

Hence, when $k_{1}=\cdots=k_{q}=+\infty$ and $N=n$, Theorem 1.1 is an extension of Theorem B. When $q=2n+2$, $M=\mathbb{C}^{m}$ and $H_{1},\ldots,H_{q}$ are in general position, with $\rho=0$, $N=n$, $k=0$ and $l=2n+1,$ we obtain the following corollary from Theorem 1.1.

###### Corollary 1.2.

Let $f$ be a linearly nondegenerate meromorphic mapping of $\mathbb{C}^{m}$ into $\mathbb{P}^{n}(\mathbb{C})$. Let $H_{1},\ldots,H_{2n+2}$ be $2n+2$ hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ in general position and let $k_{1},\ldots,k_{2n+2}$ be positive integers or $+\infty$.
Then $\sharp\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{2n+2},1)\leqslant 2$ provided

$\sum_{i=1}^{2n+2}\frac{1}{k_{i}+1}<\min\left\\{\frac{1}{2n},\frac{n^{3}+2n+3}{n(7n^{2}+5n+3)}\right\\}.$

In particular, if $n\geq 4$ then $\sharp\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{2n+2},1)\leqslant 2$ provided

$\sum_{i=1}^{2n+2}\frac{1}{k_{i}+1}<\frac{1}{2n}.$

Remark 2. Consider the quantities $A=\min\left\\{\frac{n+1}{3n^{2}+n},\frac{5n-9}{24n+12},\frac{n^{2}-1}{10n^{2}+8n}\right\\}$ in Theorem A and $B=\min\left\\{\frac{1}{2n},\frac{n^{3}+2n+3}{n(7n^{2}+5n+3)}\right\\}$ in Corollary 1.2. We have the following estimates.

$\bullet$ For $n\geq 5$, $A=\frac{n+1}{3n^{2}+n}<\frac{1}{2n}=B.$

$\bullet$ For $n=4$, $A=\frac{n^{2}-1}{10n^{2}+8n}<\frac{1}{2n}=B.$

$\bullet$ For $n=3$, $A=\frac{n^{2}-1}{10n^{2}+8n}<\frac{n^{3}+2n+3}{n(7n^{2}+5n+3)}=B.$

$\bullet$ For $n=2$, $A=\frac{5n-9}{24n+12}<\frac{n^{3}+2n+3}{n(7n^{2}+5n+3)}=B.$

In all cases, $A<B$. Therefore, Corollary 1.2 is a nice improvement of Theorem A.

In order to prove our results, we first give a new estimate for the counting function of the Cartan auxiliary function (see Lemma 2.8). Second, we improve the algebraic dependence theorem for three meromorphic mappings (see Lemma 3.3). After that, we use arguments similar to those used by Quang [19] to finish the proofs.

## 2\. Basic notions and auxiliary results from Nevanlinna theory

We will recall some basic notions in Nevanlinna theory from [13, 21].

2.1. Counting function. We set $||z||=\big{(}|z_{1}|^{2}+\dots+|z_{m}|^{2}\big{)}^{1/2}$ for $z=(z_{1},\dots,z_{m})\in\mathbb{C}^{m}$ and define

$\displaystyle B(r):=\\{z\in\mathbb{C}^{m}:||z||<r\\},\quad S(r):=\\{z\in\mathbb{C}^{m}:||z||=r\\}\ (0<r\leqslant\infty),$

where $B(\infty)=\mathbb{C}^{m}$ and $S(\infty)=\emptyset$. Define

$v_{m-1}(z):=\big{(}dd^{c}||z||^{2}\big{)}^{m-1}\quad\text{and}\quad\sigma_{m}(z):=d^{c}\log||z||^{2}\land\big{(}dd^{c}\log||z||^{2}\big{)}^{m-1}\quad\text{on }\mathbb{C}^{m}\setminus\\{0\\}.$

A divisor $E$ on a ball $B(R_{0})$ is given by a formal sum $E=\sum\mu_{\nu}X_{\nu}$, where $\\{X_{\nu}\\}$ is a locally finite family of distinct irreducible analytic hypersurfaces in $B(R_{0})$ and $\mu_{\nu}\in\mathbb{Z}$. We define the support of the divisor $E$ by setting $\mathrm{Supp}\,(E)=\cup_{\mu_{\nu}\neq 0}X_{\nu}$. Sometimes, we identify the divisor $E$ with a function $E(z)$ from $B(R_{0})$ into $\mathbb{Z}$ defined by $E(z):=\sum_{X_{\nu}\ni z}\mu_{\nu}$.

Let $M,k$ be positive integers or $+\infty$. We define the truncated divisor $E^{[M]}$ by

$\displaystyle E^{[M]}:=\sum_{\nu}\min\\{\mu_{\nu},M\\}X_{\nu},$

and the truncated counting function to level $M$ of $E$ by

$\displaystyle N^{[M]}(r,r_{0};E):=\int\limits_{r_{0}}^{r}\frac{n^{[M]}(t,E)}{t^{2m-1}}dt\quad(r_{0}<r<R_{0}),$

where

$\displaystyle n^{[M]}(t,E):=\begin{cases}\int\limits_{\mathrm{Supp}\,(E)\cap B(t)}E^{[M]}v_{m-1}&\text{ if }m\geqslant 2,\\\ \sum_{|z|\leqslant t}E^{[M]}(z)&\text{ if }m=1.\end{cases}$

We omit the character $[M]$ if $M=+\infty$. Let $\varphi$ be a non-zero meromorphic function on $B(R_{0})$. We denote by $\nu^{0}_{\varphi}$ (resp. $\nu^{\infty}_{\varphi}$) the divisor of zeros (resp. divisor of poles) of $\varphi$.
The divisor of $\varphi$ is defined by $\nu_{\varphi}=\nu^{0}_{\varphi}-\nu^{\infty}_{\varphi}.$ For a positive integer $M$ or $M=\infty$, we define the truncated divisors of $\nu_{\varphi}$ by

$\nu^{[M]}_{\varphi}(z)=\min\\{M,\nu_{\varphi}(z)\\},\quad\nu^{[M]}_{\varphi,\leqslant k}(z):=\begin{cases}\nu^{[M]}_{\varphi}(z)&\text{ if }\nu_{\varphi}(z)\leqslant k,\\\ 0&\text{ if }\nu_{\varphi}(z)>k.\end{cases}$

For convenience, we will write $N_{\varphi}(r,r_{0})$ and $N^{[M]}_{\varphi,\leqslant k}(r,r_{0})$ for $N(r,r_{0};\nu^{0}_{\varphi})$ and $N^{[M]}(r,r_{0};\nu^{0}_{\varphi,\leqslant k})$, respectively.

2.2. Characteristic function. Let $f:B(R_{0})\longrightarrow\mathbb{P}^{n}(\mathbb{C})$ be a meromorphic mapping. Fix a homogeneous coordinate system $(w_{0}:\cdots:w_{n})$ on $\mathbb{P}^{n}(\mathbb{C})$. We take a reduced representation $f=(f_{0}:\cdots:f_{n})$, which means that $f_{i}\ (0\leqslant i\leqslant n)$ are holomorphic functions and $f(z)=\big{(}f_{0}(z):\dots:f_{n}(z)\big{)}$ outside the analytic subset $\\{f_{0}=\dots=f_{n}=0\\}$ of codimension at least two. Set $\|f\|=\big{(}|f_{0}|^{2}+\dots+|f_{n}|^{2}\big{)}^{1/2}$. Let $H$ be a hyperplane in $\mathbb{P}^{n}(\mathbb{C})$ defined by $H=\\{(\omega_{0},\ldots,\omega_{n}):a_{0}\omega_{0}+\cdots+a_{n}\omega_{n}=0\\}$. We set $H(f)=a_{0}f_{0}+\cdots+a_{n}f_{n}$ and $\|H\|=\big{(}|a_{0}|^{2}+\dots+|a_{n}|^{2}\big{)}^{1/2}.$

The characteristic function of $f$ (with respect to the Fubini–Study form $\Omega$) is defined by

$\displaystyle T_{f}(r,r_{0}):=\int_{t=r_{0}}^{r}\dfrac{dt}{t^{2m-1}}\int_{B(t)}f^{*}\Omega\wedge v_{m-1},\quad\quad 0<r_{0}<r<R_{0}.$

By Jensen's formula we have

$\displaystyle T_{f}(r,r_{0})=\int_{S(r)}\log||f||\sigma_{m}-\int_{S(r_{0})}\log||f||\sigma_{m},\quad\quad 0<r_{0}<r<R_{0}.$

Throughout this paper, we assume that the numbers $r_{0}$ and $R_{0}$ are fixed with $0<r_{0}<R_{0}$. By the notation “$||\ P$”, we mean that the assertion $P$ holds for all $r\in[r_{0},R_{0})$ outside a set $E$ such that $\int_{E}dr<\infty$ in the case $R_{0}=\infty$ and $\int_{E}\dfrac{1}{R_{0}-r}dr<\infty$ in the case $R_{0}<\infty$.

2.3. Functions of small integration. We recall some definitions due to Quang [19]. Let $f^{1},\ldots,f^{k}$ be $k$ meromorphic mappings from the complete Kähler manifold $B(1)$ into $\mathbb{P}^{n}(\mathbb{C})$ which satisfy the condition $(C_{\rho})$ for a non-negative number $\rho$. For each $1\leqslant u\leqslant k$, we fix a reduced representation $f^{u}=(f_{0}^{u}:\cdots:f_{n}^{u})$ of $f^{u}$.

A non-negative plurisubharmonic function $g$ on $B(1)$ is said to be of small integration with respect to $f^{1},\ldots,f^{k}$ at level $l_{0}$ if there exist an element $\alpha=(\alpha_{1},\ldots,\alpha_{m})\in\mathbb{N}^{m}$ with $|\alpha|\leqslant l_{0}$ and a positive number $K$ such that, for all $t,p$ with $0\leqslant tl_{0}<p<1$,

$\int_{S(r)}|z^{\alpha}g|^{t}\sigma_{m}\leqslant K\left(\frac{R^{2m-1}}{R-r}\sum_{u=1}^{k}T_{f^{u}}(r,r_{0})\right)^{p}$

for all $r$ with $0<r_{0}<r<R<1,$ where $z^{\alpha}=z_{1}^{\alpha_{1}}\cdots z_{m}^{\alpha_{m}}.$

We denote by $S(l_{0};f^{1},\ldots,f^{k})$ the set of all non-negative plurisubharmonic functions on $B(1)$ which are of small integration with respect to $f^{1},\ldots,f^{k}$ at level $l_{0}.$ We see that if $g\in S(l_{0};f^{1},\ldots,f^{k})$ then $g\in S(l;f^{1},\ldots,f^{k})$ for all $l>l_{0}.$ Moreover, if $g$ is a constant function then $g\in S(0;f^{1},\ldots,f^{k})$.
By [19, Proposition 3.2], if $g_{i}\in S(l_{i};f^{1},\ldots,f^{k})$, then $g_{1}\cdots g_{s}\in S(\sum_{i=1}^{s}l_{i};f^{1},\ldots,f^{k})$.

A meromorphic function $h$ on $B(1)$ is said to be of bounded integration with bi-degree $(p,l_{0})$ for the family $\\{f^{1},\ldots,f^{k}\\}$ if there exists $g\in S(l_{0};f^{1},\ldots,f^{k})$ satisfying

$|h|\leqslant||f^{1}||^{p}\cdots||f^{k}||^{p}\cdot g,$

outside a proper analytic subset of $B(1).$ We denote by $B(p,l_{0};f^{1},\ldots,f^{k})$ the set of all meromorphic functions on $B(1)$ which are of bounded integration with bi-degree $(p,l_{0})$ for $\\{f^{1},\ldots,f^{k}\\}$. We have the following assertions:

$\bullet$ For a meromorphic function $h$, $|h|\in S(l_{0};f^{1},\ldots,f^{k})$ if and only if $h\in B(0,l_{0};f^{1},\ldots,f^{k})$.

$\bullet$ $B(p,l_{0};f^{1},\ldots,f^{k})\subset B(p,l;f^{1},\ldots,f^{k})$ for all $0\leqslant l_{0}<l.$

$\bullet$ If $h_{i}\in B(p_{i},l_{i};f^{1},\ldots,f^{k})$ then $h_{1}\cdots h_{s}\in B(\sum_{i=1}^{s}p_{i},\sum_{i=1}^{s}l_{i};f^{1},\ldots,f^{k})$.

2.4. Some Lemmas and Propositions.

###### Lemma 2.1.

[6, Lemma 3.4] If $\Phi^{\alpha}(F,G,H)=0$ and $\Phi^{\alpha}\left(\frac{1}{F},\frac{1}{G},\frac{1}{H}\right)=0$ for all $\alpha$ with $|\alpha|\leqslant 1$, then one of the following assertions holds:

(i) $F=G$, $G=H$ or $H=F$;

(ii) $\frac{F}{G},\frac{G}{H}$ and $\frac{H}{F}$ are all constants.

###### Proposition 2.2 (see [11, 12]).

Let $H_{1},\ldots,H_{q}$ $(q>2N-n+1)$ be hyperplanes in $\mathbb{P}^{n}(\mathbb{C})$ located in $N$-subgeneral position. Then there exist a function $\omega:\\{1,\ldots,q\\}\to(0,1]$, called a Nochka weight, and a real number $\tilde{\omega}\geqslant 1$, called a Nochka constant, satisfying the following conditions:

(i) If $j\in\\{1,\ldots,q\\}$, then $0<\omega_{j}\tilde{\omega}\leqslant 1.$

(ii) $q-2N+n-1=\tilde{\omega}(\sum^{q}_{j=1}\omega_{j}-n-1).$

(iii) If $R\subset\\{1,\ldots,q\\}$ with $|R|=N+1$, then $\sum_{i\in R}\omega_{i}\leqslant n+1.$

(iv) $\frac{N}{n}\leqslant\tilde{\omega}\leqslant\frac{2N-n+1}{n+1}.$

(v) Given real numbers $\lambda_{1},\ldots,\lambda_{q}$ with $\lambda_{j}\geqslant 1$ for $1\leqslant j\leqslant q$ and given any $R\subset\\{1,\ldots,q\\}$ with $|R|=N+1,$ there exists a subset $R^{1}\subset R$ such that $|R^{1}|=\text{rank}\\{H_{i}\\}_{i\in R^{1}}=n+1$ and

$\prod_{j\in R}\lambda_{j}^{\omega_{j}}\leqslant\prod_{i\in R^{1}}\lambda_{i}.$

###### Proposition 2.3 (see [21], Lemma 3.2).

Let $\\{H_{i}\\}_{i=1}^{q}\ (q\geqslant n+1)$ be a set of hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ satisfying $\cap_{i=1}^{q}H_{i}=\emptyset$ and let $f:B(R_{0})\longrightarrow\mathbb{P}^{n}(\mathbb{C})$ be a meromorphic mapping. Then there exist positive constants $\alpha$ and $\beta$ such that

$\alpha\|f\|\leqslant\max\limits_{i\in\\{1,\ldots,q\\}}|H_{i}(f)|\leqslant\beta\|f\|.$

###### Proposition 2.4 (see [4], Proposition 4.5).

Let $F_{1},\ldots,F_{n+1}$ be meromorphic functions on $B(R_{0})\subset\mathbb{C}^{m}$ such that they are linearly independent over $\mathbb{C}$.
Then there exists an admissible set $\\{\alpha_{i}=(\alpha_{i1},\ldots,\alpha_{im})\\}_{i=1}^{n+1}$ with $\alpha_{ij}\geq 0$ being integers and $|\alpha_{i}|=\sum_{j=1}^{m}|\alpha_{ij}|\leqslant i$ for $1\leqslant i\leqslant n+1$, such that the generalized Wronskian $W_{\alpha_{1},\ldots,\alpha_{n+1}}(F_{1},\ldots,F_{n+1})\not\equiv 0$, where

$W_{\alpha_{1},\ldots,\alpha_{n+1}}(F_{1},\ldots,F_{n+1})=\det\left(\mathcal{D}^{\alpha_{i}}F_{j}\right)_{1\leqslant i,j\leqslant n+1}.$

Let $L_{1},\ldots,L_{n+1}$ be linear forms of $n+1$ variables and assume that they are linearly independent. Let $F=(F_{1}:\cdots:F_{n+1}):B(R_{0})\to\mathbb{P}^{n}(\mathbb{C})$ be a meromorphic mapping and $(\alpha_{1},\ldots,\alpha_{n+1})$ be an admissible set of $F$. Then we have the following proposition.

###### Proposition 2.5 (see [13], Proposition 3.3).

In the above situation, set $l_{0}=|\alpha_{1}|+\cdots+|\alpha_{n+1}|$ and take $t,p$ with $0<tl_{0}<p<1.$ Then, for $0<r_{0}<R_{0}$ there exists a positive constant $K$ such that for $r_{0}<r<R<R_{0},$

$\int\limits_{S(r)}\left|z^{\alpha_{1}+\cdots+\alpha_{n+1}}\dfrac{W_{\alpha_{1},\ldots,\alpha_{n+1}}(F_{1},\ldots,F_{n+1})}{L_{1}(F)\cdots L_{n+1}(F)}\right|^{t}\sigma_{m}\leqslant K\left(\dfrac{R^{2m-1}}{R-r}T_{F}(R,r_{0})\right)^{p},$

where $z^{\alpha}=z_{1}^{\alpha_{1}}\cdots z_{m}^{\alpha_{m}}$ for $z=(z_{1},\ldots,z_{m})$ and $\alpha=(\alpha_{1},\ldots,\alpha_{m})$.

For convenience of presentation, for meromorphic mappings $f^{u}:B(R)\to\mathbb{P}^{n}(\mathbb{C})$ and hyperplanes $\\{H_{i}\\}_{i=1}^{q}$ of $\mathbb{P}^{n}(\mathbb{C})$, we denote by $\mathcal{S}$ the closure of

$\bigcup_{1\leqslant u\leqslant 3}I(f^{u})\cup\bigcup_{1\leqslant i<j\leqslant q}\\{z:\nu_{(f,H_{i}),\leqslant k_{i}}(z)\cdot\nu_{(f,H_{j}),\leqslant k_{j}}(z)>0\\},$

where $I(f^{u})$ denotes the indeterminacy set of $f^{u}$. We see that $\mathcal{S}$ is an analytic subset of codimension at least two of $B(R)$.

###### Lemma 2.6.

[22, Lemma 2.6] Let $f^{1},f^{2},f^{3}$ be three mappings in $\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$. Suppose that there exist $s,t,l\in\\{1,\ldots,q\\}$ such that

$P:=\det\left(\begin{array}[]{ccc}(f^{1},H_{s})&(f^{1},H_{t})&(f^{1},H_{l})\\\ (f^{2},H_{s})&(f^{2},H_{t})&(f^{2},H_{l})\\\ (f^{3},H_{s})&(f^{3},H_{t})&(f^{3},H_{l})\end{array}\right)\not\equiv 0.$

Then we have

$\displaystyle\nu_{P}(z)\geq\sum_{i=s,t,l}(\min_{1\leqslant u\leqslant 3}\\{\nu_{(f^{u},H_{i}),\leqslant k_{i}}(z)\\}-\nu^{[1]}_{(f^{1},H_{i}),\leqslant k_{i}}(z))+2\sum_{i=1}^{q}\nu^{[1]}_{(f^{1},H_{i}),\leqslant k_{i}}(z),\quad\forall z\not\in\mathcal{S}.$

###### Lemma 2.7.

[22, Lemma 2.7] Let $f$ be a linearly nondegenerate meromorphic mapping from $B(R_{0})$ into $\mathbb{P}^{n}(\mathbb{C})$ and let $H_{1},H_{2},\ldots,H_{q}$ be $q$ hyperplanes of $\mathbb{P}^{n}(\mathbb{C})$ in $N$-subgeneral position. Set $l_{0}=|\alpha_{0}|+\cdots+|\alpha_{n}|$ and take $t,p$ with $0<tl_{0}<p<1.$ Let $\omega(j)$ be Nochka weights with respect to $H_{j}$, $1\leqslant j\leqslant q$, and let $k_{j}\ (j=1,\ldots,q)$ be positive integers not less than $n$.
For each $j$, we put $\hat{\omega}(j):=\omega(j)\big{(}1-\frac{n}{k_{j}+1}\big{)}.$ Then, for $0<r_{0}<R_{0}$ there exists a positive constant $K$ such that for $r_{0}<r<R<R_{0},$

$\int\limits_{S(r)}\left|z^{\alpha_{0}+\cdots+\alpha_{n}}\frac{W_{\alpha_{0}\ldots\alpha_{n}}(f)}{(f,H_{1})^{\hat{\omega}(1)}\cdots(f,H_{q})^{\hat{\omega}(q)}}\right|^{t}\bigl{(}\|f\|^{\sum_{j=1}^{q}\hat{\omega}(j)-n-1}\bigr{)}^{t}\sigma_{m}\leqslant K\bigl{(}\frac{R^{2m-1}}{R-r}T_{f}(R,r_{0})\bigr{)}^{p}.$

In fact, Lemma 2.7 is another version of Lemma 8 in [10], in which $\omega(j)$ is replaced by $\hat{\omega}(j)$.

###### Lemma 2.8.

Let $M$, $f$ and $H_{1},H_{2},\ldots,H_{q}$ be as in Theorem 1.1. Let $P$ be a holomorphic function on $M$ and let $\beta$ be a positive real number such that $P^{\beta}\in B(\alpha,l_{0};f^{1},f^{2},f^{3})$ and

$\displaystyle\sum_{u=1}^{3}\sum_{i=1}^{q}\nu^{[n]}_{H_{i}(f^{u}),\leqslant k_{i}}\leqslant\beta\nu_{P},$

where $f^{1},f^{2},f^{3}\in\mathcal{F}(f,\\{H_{j},k_{j}\\}_{j=1}^{q},1)$. Then

$q\leqslant 2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{2}{3}l_{0}\big{)}+{\alpha}.$

###### Proof.

Let $F_{u}=(f^{u}_{0}:\cdots:f^{u}_{n})$ be a reduced representation of $f^{u}\ (1\leqslant u\leqslant 3)$. By routine arguments in Nevanlinna theory and using Proposition 2.2 (i), we have

$\displaystyle\sum\limits_{i=1}^{q}\omega_{i}\nu_{H_{i}(f^{u})}(z)-\nu_{W_{\alpha_{u,0}\cdots\alpha_{u,n}}(F_{u})}(z)\leqslant\sum\limits_{i=1}^{q}\omega_{i}\min\\{n,\nu_{H_{i}(f^{u})}(z)\\}=\sum\limits_{i=1}^{q}\omega_{i}\min\\{n,\nu_{H_{i}(f^{u}),\leqslant k_{i}}(z)\\}+\sum\limits_{i=1}^{q}\omega_{i}\min\\{n,\nu_{H_{i}(f^{u}),>k_{i}}(z)\\}\leqslant\sum\limits_{i=1}^{q}\frac{1}{\tilde{\omega}}\nu^{[n]}_{H_{i}(f^{u}),\leqslant k_{i}}(z)+\sum\limits_{i=1}^{q}\omega_{i}\dfrac{n}{k_{i}+1}\nu_{H_{i}(f^{u})}(z).$

Hence, it is easy to see from the assumption that

(2.9) $\displaystyle\sum_{i=1}^{q}{\hat{\omega}_{i}}(\nu_{H_{i}(f^{1})}+\nu_{H_{i}(f^{2})}+\nu_{H_{i}(f^{3})})-(\nu_{W_{\alpha_{1}}(F_{1})}+\nu_{W_{\alpha_{2}}(F_{2})}+\nu_{W_{\alpha_{3}}(F_{3})})\leqslant\frac{\beta}{\tilde{\omega}}\nu_{P},$

where $\hat{\omega}_{i}:=\omega_{i}\big{(}1-\dfrac{n}{k_{i}+1}\big{)}$ for all $1\leqslant i\leqslant q$.

Since the universal covering of $M$ is biholomorphic to $B(R_{0}),0<R_{0}\leqslant\infty$, by using the universal covering if necessary, we may assume that $M=B(R_{0})\subset\mathbb{C}^{m}$. We consider the following cases.

$\bullet$ First case: $R_{0}=\infty$ or $\limsup_{r\to R_{0}}\dfrac{T_{f^{1}}(r,r_{0})+T_{f^{2}}(r,r_{0})+T_{f^{3}}(r,r_{0})}{\log(1/(R_{0}-r))}=\infty$.
Integrating both sides of inequality (2.9), we get

(2.10) $\displaystyle\beta N_{P}(r)\geqslant{\tilde{\omega}}\sum_{u=1}^{3}\Big{(}\sum_{i=1}^{q}{\omega_{i}}N_{H_{i}(f^{u})}(r,r_{0})-N_{W_{\alpha}(F_{u})}(r,r_{0})\Big{)}-\sum_{u=1}^{3}\sum_{i=1}^{q}\frac{\tilde{\omega}\omega_{i}n}{k_{i}+1}T_{f^{u}}(r,r_{0})+O(1).$

Applying Lemma 2.7 to $\omega_{i}\ (1\leqslant i\leqslant q),$ we have

$\int\limits_{S(r)}\left|z^{\alpha_{0}+\cdots+\alpha_{n}}\frac{W_{\alpha_{0}\ldots\alpha_{n}}(F_{u})}{H_{1}(f^{u})^{{\omega}_{1}}\cdots H_{q}(f^{u})^{{\omega}_{q}}}\right|^{t_{u}}\left(\|f^{u}\|^{\sum_{i=1}^{q}{\omega}_{i}-n-1}\right)^{t_{u}}\sigma_{m}\leqslant K\bigl{(}\frac{R^{2m-1}}{R-r}T_{f^{u}}(R,r_{0})\bigr{)}^{p_{u}}.$

By the concavity of the logarithmic function, we obtain

$\displaystyle\int\limits_{S(r)}\log|z^{\alpha_{0}+\cdots+\alpha_{n}}|\sigma_{m}+\Big{(}\sum_{i=1}^{q}{\omega}_{i}-n-1\Big{)}\int\limits_{S(r)}\log||f^{u}||\sigma_{m}+\int\limits_{S(r)}\log|W_{\alpha_{0}\ldots\alpha_{n}}(F_{u})|\sigma_{m}-\sum_{i=1}^{q}\omega_{i}\int\limits_{S(r)}\log|H_{i}(f^{u})|\sigma_{m}\leqslant\frac{p_{u}K}{t_{u}}\big{(}\log^{+}\frac{1}{R_{0}-r}+\log^{+}T_{f^{u}}(r,r_{0})\big{)}.$

By the definition of the characteristic function and the counting function, we get the following estimate:

$\displaystyle||\ \Big{(}\sum_{i=1}^{q}{\omega}_{i}-n-1\Big{)}T_{f^{u}}(r,r_{0})\leqslant\sum_{i=1}^{q}\omega_{i}N_{H_{i}(f^{u})}(r,r_{0})-N_{W_{\alpha_{0}\ldots\alpha_{n}}(F_{u})}(r,r_{0})+K_{1}\big{(}\log^{+}\frac{1}{R_{0}-r}+\log^{+}T_{f^{u}}(r,r_{0})\big{)}.$

Using Proposition 2.2 (ii), we get

$\displaystyle||\ (q-2N+n-1)T_{f^{u}}(r,r_{0})\leqslant{\tilde{\omega}}\left(\sum_{i=1}^{q}{\omega_{i}}N_{H_{i}(f^{u})}(r,r_{0})-N_{W_{\alpha_{0}\ldots\alpha_{n}}(F_{u})}(r,r_{0})\right)+{\tilde{\omega}}{K_{1}}\big{(}\log^{+}\frac{1}{R_{0}-r}+\log^{+}T_{f^{u}}(r,r_{0})\big{)}.$

Combining these inequalities with (2.10) and noticing that $\tilde{\omega}\omega_{i}\leqslant 1$, we get

(2.11) $\displaystyle||\ \beta N_{P}(r)\geqslant(q-2N+n-1)T(r,r_{0})-\sum_{i=1}^{q}\frac{n}{k_{i}+1}T(r,r_{0})+O(1),$

where $T(r,r_{0}):=T_{f^{1}}(r,r_{0})+T_{f^{2}}(r,r_{0})+T_{f^{3}}(r,r_{0}).$

By the assumption that $P^{\beta}\in B(\alpha,l_{0};f^{1},f^{2},f^{3})$, there exists $g\in S(l_{0};f^{1},f^{2},f^{3})$ satisfying

$|P|^{\beta}\leqslant||f^{1}||^{\alpha}\cdot||f^{2}||^{\alpha}\cdot||f^{3}||^{\alpha}\cdot g,$

outside a proper analytic subset of $B(1).$ Hence, by Jensen's formula and the definition of the characteristic function, we have the following estimate:

(2.12) $\displaystyle||\ \beta N_{P}(r)=\int_{S(r)}\log|P|^{\beta}\sigma_{m}+O(1)\leqslant\int_{S(r)}\Big{(}{\alpha}\sum_{u=1}^{3}\log||f^{u}||+\log g\Big{)}\sigma_{m}+O(1)={\alpha}T(r,r_{0})+o(T(r,r_{0})).$

Combining (2.11) with (2.12), we obtain

$\displaystyle(q-2N+n-1)T(r,r_{0})-\sum_{i=1}^{q}\frac{n}{k_{i}+1}T(r,r_{0})\leqslant{\alpha}T(r,r_{0})+o(T(r,r_{0}))$

for every $r$ outside a Borel set of finite measure. Letting $r\rightarrow\infty$, we deduce that

$q-2N+n-1-\sum_{i=1}^{q}\frac{n}{k_{i}+1}\leqslant\rho\big{(}n(2N-n+1)+\frac{2}{3}l_{0}\big{)}+{\alpha}$

with $\rho=0.$

$\bullet$ Second case: $R_{0}<\infty$ and $\limsup_{r\to R_{0}}\dfrac{T_{f^{1}}(r,r_{0})+T_{f^{2}}(r,r_{0})+T_{f^{3}}(r,r_{0})}{\log(1/(R_{0}-r))}<\infty$. It suffices to prove the lemma in the case where $B(R_{0})=B(1)$ (one may reduce to this case by the rescaling $z\mapsto R_{0}z$).
Suppose that

$q>2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{2}{3}l_{0}\big{)}+{\alpha}.$

Then, we have

$q>2N-n+1+\sum_{i=1}^{q}{\tilde{\omega}}{\omega_{i}}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{2}{3}l_{0}\big{)}+\alpha.$

It follows from Proposition 2.2 (ii) and (iv) that

$\displaystyle\sum_{i=1}^{q}{{\omega_{i}}}\big{(}1-\frac{n}{k_{i}+1}\big{)}-(n+1)-\dfrac{\alpha}{\tilde{\omega}}>\rho\big{(}\frac{n(2N-n+1)}{\tilde{\omega}}+\frac{2}{3}\frac{l_{0}}{\tilde{\omega}}\big{)}\geqslant\rho\big{(}n(n+1)+\frac{2}{3}\frac{l_{0}}{\tilde{\omega}}\big{)}.$

Put

$t=\dfrac{\frac{2\rho}{3}}{\displaystyle\sum_{i=1}^{q}{\hat{\omega}_{i}}-(n+1)-\dfrac{\alpha}{\tilde{\omega}}}.$

It implies that

(2.13) $\displaystyle\big{(}\frac{3n(n+1)}{2}+\frac{l_{0}}{\tilde{\omega}}\big{)}t<1.$

Put $\psi_{u}=z^{\alpha_{u,0}+\cdots+\alpha_{u,n}}\dfrac{W_{\alpha_{u,0}\cdots\alpha_{u,n}}(F_{u})}{H_{1}^{\hat{\omega}_{1}}(f^{u})\cdots H_{q}^{\hat{\omega}_{q}}(f^{u})}\ \ (1\leqslant u\leqslant 3)$. It follows from (2.9) that $\psi_{1}^{t}\psi_{2}^{t}\psi_{3}^{t}P^{\frac{t\beta}{\tilde{\omega}}}$ is holomorphic. Hence $a=\log|\psi_{1}^{t}\psi_{2}^{t}\psi_{3}^{t}P^{\frac{t\beta}{\tilde{\omega}}}|$ is plurisubharmonic on $B(1)$.

We now write the given Kähler metric form as

${\omega}=\frac{\sqrt{-1}}{2\pi}\sum\limits_{i,j}h_{i\bar{j}}dz_{i}\wedge d\bar{z}_{j}.$

From the assumption that $f^{1}$, $f^{2}$ and $f^{3}$ satisfy the condition $(C_{\rho})$, there are continuous plurisubharmonic functions $a^{\prime}_{u}$ on $B(1)$ such that

$e^{a^{\prime}_{u}}\det(h_{i\bar{j}})^{\frac{1}{2}}\leqslant\|f^{u}\|^{\rho},\quad u=1,2,3.$

Put $a_{u}=\frac{2}{3}a^{\prime}_{u}$, $u=1,2,3$, and we get

$e^{a_{u}}\det(h_{i\bar{j}})^{\frac{1}{3}}\leqslant\|f^{u}\|^{\frac{2\rho}{3}}.$

Therefore, by the definition of $t$, we get

$\displaystyle e^{a+a_{1}+a_{2}+a_{3}}\det(h_{i\bar{j}})\leqslant e^{a}\|f^{1}\|^{\frac{2\rho}{3}}\|f^{2}\|^{\frac{2\rho}{3}}\|f^{3}\|^{\frac{2\rho}{3}}=|\psi_{1}|^{t}|\psi_{2}|^{t}|\psi_{3}|^{t}|P|^{\frac{t\beta}{\tilde{\omega}}}\|f^{1}\|^{\frac{2\rho}{3}}\|f^{2}\|^{\frac{2\rho}{3}}\|f^{3}\|^{\frac{2\rho}{3}}\leqslant|\psi_{1}|^{t}|\psi_{2}|^{t}|\psi_{3}|^{t}\big{(}\|f^{1}\|\|f^{2}\|\|f^{3}\|\big{)}^{\frac{t\alpha}{\tilde{\omega}}}\|f^{1}\|^{\frac{2\rho}{3}}\|f^{2}\|^{\frac{2\rho}{3}}\|f^{3}\|^{\frac{2\rho}{3}}\cdot|g|^{\frac{t}{\tilde{\omega}}}=|\psi_{1}|^{t}|\psi_{2}|^{t}|\psi_{3}|^{t}\big{(}\|f^{1}\|\|f^{2}\|\|f^{3}\|\big{)}^{t(\frac{\alpha}{\tilde{\omega}}+\frac{2\rho}{3t})}\cdot|g|^{\frac{t}{\tilde{\omega}}}=|\psi_{1}|^{t}|\psi_{2}|^{t}|\psi_{3}|^{t}\big{(}\|f^{1}\|\|f^{2}\|\|f^{3}\|\big{)}^{t(\sum_{i=1}^{q}\hat{\omega}_{i}-n-1)}\cdot|g|^{\frac{t}{\tilde{\omega}}}.$

Note that the volume form on $B(1)$ is given by $dV:=c_{m}\det(h_{i\bar{j}})v_{m};$ therefore,

$\int\limits_{B(1)}e^{a+a_{1}+a_{2}+a_{3}}dV\leqslant C\int\limits_{B(1)}\prod_{u=1}^{3}\big{(}|\psi_{u}|\|f^{u}\|^{\sum_{i=1}^{q}\hat{\omega}_{i}-n-1}\big{)}^{t}\cdot|g|^{\frac{t}{\tilde{\omega}}}v_{m},$

with some positive constant $C.$ Setting $x=\dfrac{l_{0}/\tilde{\omega}}{3n(n+1)/2+l_{0}/\tilde{\omega}}$ and $y=\dfrac{n(n+1)/2}{3n(n+1)/2+l_{0}/\tilde{\omega}}$, we have $x+3y=1$.
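Both identities used here can be checked directly. First, by the choice of $t$ one has $\frac{2\rho}{3}=t\big{(}\sum_{i=1}^{q}\hat{\omega}_{i}-n-1-\frac{\alpha}{\tilde{\omega}}\big{)}$, so that $t\big{(}\frac{\alpha}{\tilde{\omega}}+\frac{2\rho}{3t}\big{)}=t\big{(}\sum_{i=1}^{q}\hat{\omega}_{i}-n-1\big{)}$, which is exactly the exponent identity used in the chain of inequalities above. Second,

$x+3y=\dfrac{l_{0}/\tilde{\omega}+3n(n+1)/2}{3n(n+1)/2+l_{0}/\tilde{\omega}}=1.$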
Thus, by the Hölder inequality and by noticing that $v_{m}=(dd^{c}\|z\|^{2})^{m}=2m\|z\|^{2m-1}\sigma_{m}\wedge d\|z\|,$ we obtain

$\displaystyle\int\limits_{B(1)}e^{a+a_{1}+a_{2}+a_{3}}dV\leqslant C\prod_{u=1}^{3}\left(\int\limits_{B(1)}\big{(}|\psi_{u}|\|f^{u}\|^{\sum_{i=1}^{q}\hat{\omega}_{i}-n-1}\big{)}^{\frac{t}{y}}v_{m}\right)^{y}\left(\int\limits_{B(1)}|z^{\beta}g|^{\frac{t}{x\tilde{\omega}}}v_{m}\right)^{x}\leqslant C\prod_{u=1}^{3}\bigl{(}2m\int\limits_{0}\limits^{1}r^{2m-1}\bigl{(}\int\limits_{S(r)}\big{(}|\psi_{u}|\|f^{u}\|^{\sum_{i=1}^{q}\hat{\omega}_{i}-n-1}\big{)}^{\frac{t}{y}}\sigma_{m}\bigr{)}dr\bigr{)}^{y}\times\bigl{(}2m\int\limits_{0}\limits^{1}r^{2m-1}\bigl{(}\int\limits_{S(r)}|z^{\beta}g|^{\frac{t}{x\tilde{\omega}}}\sigma_{m}\bigr{)}dr\bigr{)}^{x}.$

We see from (2.13) that

$\dfrac{l_{0}t}{\tilde{\omega}x}=\big{(}\dfrac{3n(n+1)}{2}+\dfrac{l_{0}}{\tilde{\omega}}\big{)}t<1$

and

$\sum\limits_{s=0}^{n}|\alpha_{u,s}|\dfrac{t}{y}\leqslant\dfrac{n(n+1)}{2}\dfrac{t}{y}=\big{(}\dfrac{3n(n+1)}{2}+\dfrac{l_{0}}{\tilde{\omega}}\big{)}t<1.$

Then, we can choose a positive number $p$ such that $\dfrac{l_{0}t}{\tilde{\omega}x}<p<1$ and $\sum\limits_{s=0}^{n}|\alpha_{u,s}|\dfrac{t}{y}<p<1.$ Applying Lemma 2.7 to $\hat{\omega}_{i}$, and using the property of $g$, we get

$\int\limits_{S(r)}\big{(}|\psi_{u}|\|f^{u}\|^{\sum_{i=1}^{q}\hat{\omega}_{i}-n-1}\big{)}^{\frac{t}{y}}\sigma_{m}\leqslant K_{1}\left(\frac{R^{2m-1}}{R-r}T_{f^{u}}(R,r_{0})\right)^{p}$

and

$\int\limits_{S(r)}|z^{\beta}g|^{\frac{t}{\tilde{\omega}x}}\sigma_{m}\leqslant K\left(\frac{R^{2m-1}}{R-r}T_{g}(R,r_{0})\right)^{p}$

outside a subset $E\subset[0,1]$ such that $\displaystyle\int\limits_{E}\dfrac{1}{1-r}dr<+\infty.$ Choosing $R=r+\dfrac{1-r}{eT_{f^{u}}(r,r_{0})},$ we have $T_{f^{u}}(R,r_{0})\leqslant 2T_{f^{u}}(r,r_{0}).$ Hence, the above inequality implies that

$\int\limits_{S(r)}\big{(}|\psi_{u}|\|f^{u}\|^{\sum_{i=1}^{q}\hat{\omega}_{i}-n-1}\big{)}^{\frac{t}{y}}\sigma_{m}\leqslant\frac{K_{2}}{(1-r)^{p}}(T_{f^{u}}(r,r_{0}))^{2p}\leqslant\frac{K_{2}}{(1-r)^{p}}\Big{(}\log\frac{1}{1-r}\Big{)}^{2p},$

since $\limsup\limits_{r\to R_{0}}\dfrac{T_{f^{1}}(r,r_{0})+T_{f^{2}}(r,r_{0})+T_{f^{3}}(r,r_{0})}{\log(1/(R_{0}-r))}<\infty.$ It implies that

$\int\limits_{0}\limits^{1}r^{2m-1}\left(\int\limits_{S(r)}\big{(}|\psi_{u}|\|f^{u}\|^{\sum_{i=1}^{q}\hat{\omega}_{i}-n-1}\big{)}^{\frac{t}{y}}\sigma_{m}\right)dr\leqslant\int\limits_{0}\limits^{1}r^{2m-1}\frac{K_{2}}{(1-r)^{p}}\left(\log\frac{1}{1-r}\right)^{2p}dr<\infty.$

Similarly,

$\int\limits_{0}\limits^{1}r^{2m-1}\left(\int\limits_{S(r)}|z^{\beta}g|^{\frac{t}{\tilde{\omega}x}}\sigma_{m}\right)dr\leqslant\int\limits_{0}\limits^{1}r^{2m-1}\frac{K_{2}}{(1-r)^{p}}\left(\log\frac{1}{1-r}\right)^{2p}dr<\infty.$

Hence, we conclude that

$\int\limits_{B(1)}e^{a+a_{1}+a_{2}+a_{3}}dV<\infty,$

which contradicts the results of Yau [23] and Karp [7]. The proof of Lemma 2.8 is complete. ∎

## 3\. Proof of Theorem 1.1

###### Lemma 3.1 (see [22], Lemma 3.1).

If $q>2N+1+\sum_{v=1}^{q}\frac{n}{k_{v}+1}+\rho n(2N-n+1)$, then every $g\in\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$ is linearly nondegenerate.

###### Lemma 3.2 (see [10], Lemma 12).

Let $q,N$ be two integers satisfying $q\geq 2N+2$, $N\geq 2$ and $q$ even. Let $\\{a_{1},a_{2},\ldots,a_{q}\\}$ be a family of vectors in a $3$-dimensional vector space such that $\text{rank}\\{a_{j}\\}_{j\in R}=2$ for any subset ${R}\subset Q=\\{1,\ldots,q\\}$ with cardinality $|R|=N+1$.
Then there exists a partition $\bigcup_{j=1}^{q/2}I_{j}$ of $\\{1,\ldots,q\\}$ satisfying $|I_{j}|=2$ and $\text{rank}\\{a_{i}\\}_{i\in I_{j}}=2$ for all $j=1,\ldots,q/2.$

We need the following result, which slightly improves [22, Theorem 1.3].

###### Lemma 3.3.

Let $k$ be the largest integer not exceeding $\dfrac{q-2N-2}{2}$. If $n\geqslant 2$ then $f^{1}\wedge f^{2}\wedge f^{3}\equiv 0$ for every $f^{1},f^{2},f^{3}\in\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$ provided

$q>2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho n(2N-n+1)+\frac{3nq}{2\big{(}q+(n-1)\frac{l+1}{l}\big{)}},$

where $l$ is the smallest integer not less than $\dfrac{2N+2+2k}{k+2}$ if $k>0$, or $l=2N+1$ if $k=0.$

###### Proof.

We consider $\mathcal{M}^{3}$, where $\mathcal{M}$ is the field of meromorphic functions on $M$, as a vector space over the field $\mathcal{M}$, and denote $Q=\\{1,\ldots,q\\}$. For each $i\in Q$, we set

$V_{i}=\left((f^{1},H_{i}),(f^{2},H_{i}),(f^{3},H_{i})\right)\in\mathcal{M}^{3}.$

By Lemma 3.1, $f^{1},f^{2},f^{3}$ are linearly nondegenerate. Suppose that $f^{1}\wedge f^{2}\wedge f^{3}\not\equiv 0$. Since the family of hyperplanes $\\{H_{1},H_{2},\ldots,H_{q}\\}$ is in $N$-subgeneral position, for each subset $R\subset Q$ with cardinality $|R|=N+1$, there exist three indices $l,t,s\in R$ such that the vectors $V_{l},V_{t}$ and $V_{s}$ are linearly independent. This means that

$P_{I}:=\det\left(\begin{array}[]{ccc}(f^{1},H_{l})&(f^{1},H_{t})&(f^{1},H_{s})\\\ (f^{2},H_{l})&(f^{2},H_{t})&(f^{2},H_{s})\\\ (f^{3},H_{l})&(f^{3},H_{t})&(f^{3},H_{s})\end{array}\right)\not\equiv 0,$

where $I:=\\{l,t,s\\}.$ We separate into the following cases.

$\bullet$ Case 1: $q$ is even. By the assumption, we have $q=2N+2+2k$ $(k\geq 0)$. Applying Lemma 3.2, we can find a partition $\\{J_{1},\ldots,J_{q/2}\\}$ of $Q$ satisfying $|J_{j}|=2$ and $\text{rank}\\{V_{v}\\}_{v\in J_{j}}=2$ for all $j=1,2,\ldots,q/2.$

Take a fixed subset $S_{j}=\\{j_{1},\ldots,j_{k+2}\\}\subset\\{1,\ldots,q\\}$. We claim that there exists a partition $J^{j}_{1},\ldots,J^{j}_{N+1+k}$ together with $k+2$ indices $r^{j}_{1},\ldots,r^{j}_{k+2}\in\\{1,\ldots,N+1+k\\}$ satisfying $\text{rank}\\{V_{v},V_{j_{i}}\\}_{v\in J^{j}_{r^{j}_{i}}}=3$ for all $1\leqslant i\leqslant k+2$.

Indeed, consider the $N$ sets $J_{1},\ldots,J_{N}$ and the index $j_{1}$. Assume that $\text{rank}\\{V_{j_{1}},V_{t_{2}},\ldots,V_{t_{u}}\\}=1$, where $u$ is maximal. By the assumption, we have $1\leqslant u\leqslant N-1.$ It follows that there exist $N-u$ pairs, for instance $\\{V_{v}\\}_{v\in J_{1}},\ldots,\\{V_{v}\\}_{v\in J_{N-u}}$, which do not contain $V_{j_{1}}$ or $V_{t_{i}}$ with $2\leqslant i\leqslant u$. Obviously, $N-u\geq 1$. Without loss of generality, we can assume that $V_{j_{1}}\in\\{V_{v}\\}_{v\in J_{N}}$. If $u=N-1$ then obviously $\text{rank}\\{V_{v},V_{j_{1}}\\}_{v\in J_{1}}=3$, since $\sharp(\\{V_{j_{1}},V_{t_{2}},\ldots,V_{t_{N-1}}\\}\cup\\{V_{v}\\}_{v\in J_{1}})=N+1.$ If $u\leqslant N-2$, there are at least two pairs of vectors which do not contain $V_{j_{1}}$ or $V_{t_{i}}$ with $2\leqslant i\leqslant u$. Assume that $V_{j_{1}}\in\text{span}\\{V_{v}\\}_{v\in J_{r_{1}}}$ for some $r_{1}\in\\{1,\ldots,N-u\\}$; then there exists at least one pair, for instance $\\{V_{v}\\}_{v\in J_{j_{0}}}$ with $j_{0}\in\\{1,\ldots,N-u\\}$, such that $\text{rank}\\{V_{v}\\}_{v\in(J_{r_{1}}\cup J_{j_{0}})}=3$. Indeed, otherwise $\text{rank}\\{V_{v}\\}_{v\in(\cup_{i=1}^{N-u}J_{i})\cup\\{j_{1},t_{2},\ldots,t_{u}\\}}=\text{rank}\\{V_{v}\\}_{v\in J_{r_{1}}}=2$.
This is impossible since $\\{V_{v}\\}_{v\in(\cup_{i=1}^{N-u}J_{i})\cup\\{j_{1},t_{2},\ldots,t_{u}\\}}$ contains at least $N+2$ vectors. From the sets $\\{V_{v}\\}_{v\in J_{r_{1}}}$ and $\\{V_{v}\\}_{v\in J_{j_{0}}}$, we can rebuild two linearly independent pairs $\\{V_{i_{1}},V_{i_{2}}\\}$ and $\\{V_{i_{3}},V_{i_{4}}\\}$ such that $\text{rank}\\{V_{i_{1}},V_{i_{2}},V_{j_{1}}\\}=3$, where $\\{i_{1},i_{2},i_{3},i_{4}\\}=J_{r_{1}}\cup J_{j_{0}}.$ We relabel $J_{r_{1}}=\\{i_{1},i_{2}\\}$ and $J_{j_{0}}=\\{i_{3},i_{4}\\}$. Therefore, we obtain a partition, still denoted by $J_{1},\ldots,J_{N+1+k}$, such that there exists an index ${r^{j}_{1}}\in\\{1,\ldots,N\\}$ satisfying $\text{rank}\\{V_{v},V_{j_{1}}\\}_{v\in J_{r^{j}_{1}}}=3$.

Next, we consider the $N$ sets $J_{1},\ldots,J_{r^{j}_{1}-1},J_{r^{j}_{1}+1},\ldots,J_{N+1}$ and the index $j_{2}$. Repeating the above argument, we get a partition, still denoted by $J_{1},\ldots,J_{q/2}$, such that there exists an index ${r^{j}_{2}}\in\\{1,\ldots,{r^{j}_{1}-1},{r^{j}_{1}+1},\ldots,N+1\\}$ satisfying $\text{rank}\\{V_{v},V_{j_{2}}\\}_{v\in J_{r^{j}_{2}}}=3$. Of course, this partition still satisfies $\text{rank}\\{V_{v},V_{j_{1}}\\}_{v\in J_{r^{j}_{1}}}=3$. Continuing this process, after $k+2$ steps we obtain a new partition, denoted by $J^{j}_{1},\ldots,J^{j}_{N+1+k}$, such that there exist $k+2$ indices $r^{j}_{1},\ldots,r^{j}_{k+2}\in\\{1,\ldots,N+1+k\\}$ satisfying $\text{rank}\\{V_{v},V_{j_{i}}\\}_{v\in J^{j}_{r^{j}_{i}}}=3$ for all $1\leqslant i\leqslant k+2$. The claim is proved.

Put $I^{j}_{r^{j}_{i}}=J^{j}_{r^{j}_{i}}\cup\\{j_{i}\\}$; then $P_{{I^{j}_{r^{j}_{i}}}}\not\equiv 0$ for all $1\leqslant i\leqslant k+2$. For each remaining index $i\in\\{1,\ldots,N+1+k\\}\setminus\\{r^{j}_{1},\ldots,r^{j}_{k+2}\\}$, we choose a vector $V_{s_{i}}$ such that $\text{rank}\\{V_{v}\\}_{v\in J^{j}_{i}\cup\\{s_{i}\\}}=3.$ Put $I^{j}_{i}=J^{j}_{i}\cup\\{s_{i}\\}$; then $P_{{I^{j}_{i}}}\not\equiv 0$ for all $i.$

$\bullet$ If $k=0$ then $l=2N+1$ and $q=2N+2$. Put $S_{1}=\\{1\\},S_{2}=\\{2\\},\ldots,S_{l-1}=\\{2N\\},S_{l}=\\{2N+1,2N+2\\}.$

$\bullet$ If $k>0$ then $q=(k+2)(l-1)+t$ with $0<t\leqslant k+2.$ Put $S_{1}=\\{1,\ldots,k+2\\},S_{2}=\\{(k+2)+1,\ldots,2(k+2)\\},\ldots,S_{l-1}=\\{(k+2)(l-2)+1,\ldots,(k+2)(l-1)\\},S_{l}=\\{(k+2)(l-1)+1,\ldots,2N+2+2k\\}.$

Applying the claim to each set $S_{j}$ $(1\leqslant j\leqslant l)$, we get a partition $J^{j}_{1},\ldots,J^{j}_{N+1+k}$ with $s_{j}=\sharp S_{j}$ indices $r^{j}_{1},\ldots,r^{j}_{s_{j}}\in\\{1,\ldots,N+1+k\\}$ satisfying $\text{rank}\\{V_{v},V_{u}\\}_{v\in J^{j}_{r^{j}_{i}},u\in S_{j}}=3$ for all $1\leqslant i\leqslant s_{j}$. We put

$P_{Q}=\prod_{j=1}^{l}\prod_{i=1}^{N+1+k}P_{I^{j}_{i}},$

where $I^{j}_{i}$ is defined as above. Since $\min\\{a,b,c\\}-1\geq\min\\{a,n\\}+\min\\{b,n\\}+\min\\{c,n\\}-2n-1$ for any positive integers $a,b,c$, we have

$\displaystyle\min_{1\leqslant u\leqslant 3}\\{\nu_{(f^{u},H_{v}),\leqslant k_{v}}(z)\\}-\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z)\geq\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z)-(2n+1)\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z),$

for all $z\in\mathrm{Supp}\,\nu_{(f^{k},H_{v}),\leqslant k_{v}}$.
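The elementary inequality invoked above can be verified directly: if, say, $a=\min\\{a,b,c\\}$, then $\min\\{b,n\\}\leqslant n$, $\min\\{c,n\\}\leqslant n$ and $\min\\{a,n\\}\leqslant a$, whence

$\min\\{a,n\\}+\min\\{b,n\\}+\min\\{c,n\\}-2n-1\leqslant a-1=\min\\{a,b,c\\}-1.$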
Putting $\nu_{v}(z)=\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z)-(2n+1)\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z)\ (1\leqslant k\leqslant 3,\ v\in Q),$ from Lemma 2.6, we have $\displaystyle\nu_{P_{{I}^{j}_{i}}}(z)\geq\sum_{v\in{I}^{j}_{i}}\nu_{v}(z)+2\sum_{v=1}^{q}\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z)$ and $\displaystyle\nu_{P_{{I}^{j}_{i}}}(z)\geq\sum_{v\in{J}^{j}_{i}}\nu_{v}(z)+2\sum_{v=1}^{q}\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z).$ Note that for $k=0$ then $l(q-2N-1)-(2N+1)=0$. For $k>0$ then $2N+1\leqslant\frac{q}{k+2}(2k+1)\leqslant l(2k+1)=l(q-2N-1)$. Therefore, we always have $l(q-2N-1)-(2N+1)\geqslant 0.$ It implies that $l(q-2n-1)-(2n+1)\geqslant 0$ since $N\geq n.$ Then, for all $z\not\in\mathcal{S}$, we obtain $\displaystyle\nu_{P_{Q}}(z)$ $\displaystyle\geq l\sum_{v=1}^{q}\nu_{v}(z)+\sum_{v=1}^{q}\nu_{v}(z)+lq\sum_{v=1}^{q}\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z)$ $\displaystyle=(l+1)\sum_{v=1}^{q}(\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z)-(2n+1)\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z))+lq\sum_{v=1}^{q}\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z)$ $\displaystyle=(l+1)\sum_{v=1}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z)+\big{(}l(q-2n-1)-(2n+1)\big{)}\sum_{v=1}^{q}\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z)$ $\displaystyle\geq\left(l+1+\frac{l(q-2n-1)-(2n+1)}{3n}\right)\sum_{v=1}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z)$ $\displaystyle\geq\frac{l(q+n-1)+n-1}{3n}\sum_{v=1}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z).$ We put $P:=P_{Q}$. The above inequality implies that $\displaystyle\sum_{v=1}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z)\leqslant\frac{3n}{l(q+n-1)+n-1}\nu_{P}(z),\forall z\not\in\mathcal{S}.$ Define $\beta:=\dfrac{3n}{l(q+n-1)+n-1}$ and $\gamma:=\dfrac{lq}{2}$. $\bullet$ Case 2: $q\mod 2=1$. By the assumption, we have $q-1=2N+2+2k.$ We consider any subset ${R}=\\{j_{1},\ldots,j_{q-1}\\}$ of $\\{1,\ldots,q\\}$. By the same argument as in Case 1 for $R$, we get $\displaystyle\nu_{P_{R}}(z)$ $\displaystyle\geq(l+1)\sum_{v=1}^{q-1}\nu_{j_{v}}(z)+l(q-1)\sum_{v=1}^{q}\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z),\forall z\not\in\mathcal{S}.$ We now define $P:=\prod_{|R|=q-1}P_{R},$ so we obtain $\displaystyle\nu_{P}(z)$ $\displaystyle=\sum_{|R|=q-1}\nu_{P_{R}}$ $\displaystyle\geq(q-1)(l+1)\sum_{v=1}^{q}\nu_{v}(z)+ql(q-1)\sum_{v=1}^{q}\nu^{[1]}_{(f^{k},H_{v}),\leqslant k_{v}}(z)$ $\displaystyle\geq(q-1)\frac{l(q+n-1)+n-1}{3n}\sum_{v=1}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z).$ Hence, we have $\displaystyle\sum_{v=1}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{v}),\leqslant k_{v}}(z)\leqslant\frac{3n}{(l(q+n-1)+n-1)(q-1)}\nu_{P}(z),\forall z\not\in\mathcal{S}.$ Define $\beta:=\dfrac{3n}{\big{(}l(q+n-1)+n-1\big{)}(q-1)}$ and $\gamma:=\dfrac{(q-1)lq}{2}.$ Then, from all the above cases, we always get $\alpha:=\beta\gamma=\dfrac{3nlq}{2(l(q+n-1)+n-1)}=\frac{3nq}{2\big{(}q+(n-1)\frac{l+1}{l}\big{)}},$ and $\displaystyle\sum_{u=1}^{3}\sum_{v=1}^{q}\nu_{(f^{u},H_{v}),\leqslant k_{v}}^{[n]}(z)\leqslant\beta\nu_{P}(z),\forall z\not\in\mathcal{S}.$ It is easy to see that $|P|^{\beta}\leqslant C(\|f^{1}\|\|f^{2}\|\|f^{3}\|)^{\beta\gamma}=C(\|f^{1}\|\|f^{2}\|\|f^{3}\|)^{\alpha}$, where $C$ is some positive constant. This means that $P^{\beta}\in B(\alpha,0;f^{1},f^{2},f^{3})$. 
Applying Lemma 2.8, we obtain $\displaystyle q$ $\displaystyle\leqslant 2N-n+1+\sum_{j=1}^{q}\frac{n}{k_{j}+1}+\rho n(2N-n+1)+\alpha$ $\displaystyle=2N-n+1+\sum_{j=1}^{q}\frac{n}{k_{j}+1}+\rho n(2N-n+1)+\frac{3nq}{2\big{(}q+(n-1)\frac{l+1}{l}\big{)}},$ which contradicts the assumption. Therefore, $f^{1}\wedge f^{2}\wedge f^{3}\equiv 0$ on $M$. The proof of Lemma 3.3 is complete. ∎

Based on the proofs of Quang [18, Lemmas 3.3, 3.4, 3.5, 3.6] or [19, Lemmas 4.4, 4.5, 4.6, 4.8], we obtain the following lemmas, which are necessary for the proof of our theorem. First, for three mappings $f^{1},f^{2},f^{3}\in\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$, we define

$\bullet\ F^{ij}_{k}=\frac{(f^{k},H_{i})}{(f^{k},H_{j})},\ \ 1\leqslant k\leqslant 3,\ 1\leqslant i,j\leqslant q,$

$\bullet\ V_{i}=((f^{1},H_{i}),(f^{2},H_{i}),(f^{3},H_{i}))\in\mathcal{M}^{3},$

$\bullet\ \nu_{i}:\text{ the divisor whose support is the closure of the set }$ $\\{z:\nu_{(f^{u},H_{i}),\leqslant k_{i}}(z)\geqslant\nu_{(f^{v},H_{i}),\leqslant k_{i}}(z)=\nu_{(f^{t},H_{i}),\leqslant k_{i}}(z)\text{ for a permutation }(u,v,t)\text{ of }(1,2,3)\\}.$

We write $V_{i}\cong V_{j}$ if $V_{i}\wedge V_{j}\equiv 0$; otherwise we write $V_{i}\not\cong V_{j}$. For $V_{i}\not\cong V_{j}$, we write $V_{i}\sim V_{j}$ if there exist $1\leqslant u<v\leqslant 3$ such that $F_{u}^{ij}=F_{v}^{ij}$; otherwise we write $V_{i}\not\sim V_{j}.$

###### Lemma 3.4.

[18, Lemma 3.3] or [19, Lemma 4.4] With the assumption of Theorem 1.1, let $h$ and $g$ be two elements of the family $\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$. If there exist a constant $\lambda$ and two indices $i,j$ such that $\frac{(h,H_{i})}{(h,H_{j})}=\lambda\frac{(g,H_{i})}{(g,H_{j})},$ then $\lambda=1.$

###### Lemma 3.5.

[18, Lemma 3.4] or [19, Lemma 4.5] Let $f^{1},f^{2},f^{3}$ be three elements of $\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$. Suppose that $f^{1}\wedge f^{2}\wedge f^{3}\equiv 0$ and $V_{i}\sim V_{j}$ for some distinct indices $i$ and $j$. Then $f^{1},f^{2},f^{3}$ are not distinct.

###### Lemma 3.6.

[18, Lemma 3.5] or [19, Lemma 4.6] With the assumption of Theorem 1.1, let $f^{1},f^{2},f^{3}$ be three maps in $\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$. Suppose that $f^{1},f^{2},f^{3}$ are distinct and there are two indices $i,j\in\\{1,2,\ldots,q\\}\ (i\not=j)$ such that $V_{i}\not\cong V_{j}$ and $\Phi^{\alpha}_{ij}:=\Phi^{\alpha}(F_{1}^{ij},F_{2}^{ij},F_{3}^{ij})\equiv 0$ for every $\alpha=(\alpha_{1},\ldots,\alpha_{m})\in\mathbb{Z}^{m}_{+}$ with $|\alpha|=1.$ Then for every $t\in\\{1,\ldots,q\\}\setminus\\{i\\}$, the following assertions hold:

(i) $\Phi^{\alpha}_{it}\equiv 0$ for all $|\alpha|\leqslant 1,$

(ii) if $V_{i}\not\cong V_{t}$, then $F^{ti}_{1},F^{ti}_{2},F^{ti}_{3}$ are distinct and there exists a meromorphic function $h_{it}\in B(0,1;f^{1},f^{2},f^{3})$ such that $\displaystyle\nu_{h_{it}}\geq-\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}-\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}+\sum_{j\not=i,t}\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}.$

###### Lemma 3.7.

[18, Lemma 3.6] or [19, Lemma 4.8] With the assumption of Theorem 1.1, let $f^{1},f^{2},f^{3}$ be three maps in $\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$. Assume that there exist $i,j\in\\{1,2,\ldots,q\\}\ (i\not=j)$ and $\alpha\in\mathbb{Z}^{m}_{+}$ with $|\alpha|=1$ such that $\Phi^{\alpha}_{ij}\not\equiv 0$.
Then there exists a holomorphic function $g_{ij}\in B(1,1;f^{1},f^{2},f^{3})$ such that $\displaystyle\nu_{g_{ij}}$ $\displaystyle\geq\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}+\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{j}),\leqslant k_{j}}+2\sum_{t=1,t\not=i,j}\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}-(2n+1)\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}$ $\displaystyle-(n+1)\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}+\nu_{j}.$

We now prove Theorem 1.1. Suppose that there exist three distinct meromorphic mappings $f^{1},f^{2},f^{3}$ belonging to $\mathcal{F}(f,\\{H_{i},k_{i}\\}_{i=1}^{q},1)$. By Lemma 3.3, we get $f^{1}\wedge f^{2}\wedge f^{3}\equiv 0.$ We may assume that $\underbrace{V_{1}\cong\cdots\cong V_{l_{1}}}_{\text{group }1}\not\cong\underbrace{V_{l_{1}+1}\cong\cdots\cong V_{l_{2}}}_{\text{group }2}\not\cong\underbrace{V_{l_{2}+1}\cong\cdots\cong V_{l_{3}}}_{\text{group }3}\not\cong\cdots\not\cong\underbrace{V_{l_{s-1}+1}\cong\cdots\cong V_{l_{s}}}_{\text{group }s},$ where $l_{s}=q.$

Denote by $P$ the set of all $i\in\\{1,\ldots,q\\}$ for which there exists $j\in\\{1,\ldots,q\\}\setminus\\{i\\}$ such that $V_{i}\not\cong V_{j}$ and $\Phi^{\alpha}_{ij}\equiv 0$ for all $\alpha\in\mathbb{Z}^{m}_{+}$ with $|\alpha|\leqslant 1.$ We separate into three cases.

$\bullet$ Case 1: $\sharp P\geq 2.$ It follows that $P$ contains two elements $i,j.$ We get $\Phi^{\alpha}_{ij}=\Phi^{\alpha}_{ji}\equiv 0$ for all $\alpha\in\mathbb{Z}^{m}_{+}$ with $|\alpha|\leqslant 1.$ By Lemma 2.1, there exist two functions, for instance $F_{1}^{ij}$ and $F_{2}^{ij}$, and a constant $\lambda$ such that $F_{1}^{ij}=\lambda F_{2}^{ij}.$ Applying Lemma 3.4, we have $F_{1}^{ij}=F_{2}^{ij}$. Hence, by Lemma 3.6 (ii), we see that $V_{i}\cong V_{j}$, i.e., $V_{i}$ and $V_{j}$ belong to the same group in the partition.
We may assume that $i=1$ and $j=2.$ Since, by our assumption, $f^{1},f^{2},f^{3}$ are distinct, the number of elements of each group in the partition is less than $N+1.$ Thus, we get $V_{1}\cong V_{2}\not\cong V_{t}$ for all $t\in\\{N+1,\ldots,q\\}.$ By Lemma 3.6 (ii), we obtain $\displaystyle\nu_{h_{1t}}\geq-\nu^{[1]}_{(f,H_{1}),\leqslant k_{1}}-\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}+\sum_{s\not=1,t}\nu^{[1]}_{(f,H_{s}),\leqslant k_{s}},$ and $\displaystyle\nu_{h_{2t}}\geq-\nu^{[1]}_{(f,H_{2}),\leqslant k_{2}}-\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}+\sum_{s\not=2,t}\nu^{[1]}_{(f,H_{s}),\leqslant k_{s}}.$ By summing up both sides of the above two inequalities, we have $\displaystyle\nu_{h_{1t}}+\nu_{h_{2t}}\geq-2\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}+\sum_{s\not=1,2,t}\nu^{[1]}_{(f,H_{s}),\leqslant k_{s}}.$ Summing up both sides of the above inequalities over all $t\in\\{N+1,\ldots,q\\},$ we obtain $\displaystyle\sum_{t=N+1}^{q}(\nu_{h_{1t}}+\nu_{h_{2t}})$ $\displaystyle\geq(q-N)\sum_{t=3}^{N}\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}+(q-N-3)\sum_{t=N+1}^{q}\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}$ $\displaystyle\geq(q-N-3)\sum_{t=3}^{q}\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}\geq\frac{q-N-3}{3n}\sum_{u=1}^{3}\sum_{t=3}^{q}\nu^{[n]}_{(f^{u},H_{t}),\leqslant k_{t}}.$ Hence, we get $\displaystyle\sum_{u=1}^{3}\sum_{t=3}^{q}\nu^{[n]}_{(f^{u},H_{t}),\leqslant k_{t}}\leqslant\frac{3n}{q-N-3}\nu_{\prod_{t=N+1}^{q}(h_{1t}h_{2t})}.$ Since $({\prod_{t=N+1}^{q}}(h_{1t}h_{2t}))^{\frac{3n}{q-N-3}}\in B(0,2(q-N)\frac{3n}{q-N-3};f^{1},f^{2},f^{3})$, applying Lemma 2.8, we obtain $q-2\leqslant 2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+4(q-N)\frac{n}{q-N-3}\big{)}.$ From the definition of $l$ and the condition on $q$, it is easy to see that $l\geq 3.$ Moreover, $2\leqslant\frac{3nq}{2\big{(}q+n-1+\frac{n-1}{3}\big{)}}\leqslant\frac{3nq}{2\big{(}q+n-1+\frac{n-1}{l}\big{)}},$ and $4(q-N)\frac{n}{q-N-3}\leqslant\frac{4(q-n)n}{n-1}.$ These inequalities imply that $q\leqslant 2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{4(q-n)n}{n-1}\big{)}+\frac{3nq}{2\big{(}q+n-1+\frac{n-1}{l}\big{)}},$ which is a contradiction.

$\bullet$ Case 2: $\sharp P=1$. We may assume that $P=\\{1\\}.$ It is easy to see that $V_{1}\not\cong V_{i}$ for all $i=2,\ldots,q$.
By Lemma 3.6 (ii), we obtain $\displaystyle\nu_{h_{1i}}\geq-\nu^{[1]}_{(f,H_{1}),\leqslant k_{1}}-\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}+\sum_{s\not=1,i}\nu^{[1]}_{(f,H_{s}),\leqslant k_{s}}.$ Summing up both sides of the above inequalities over all $i=2,\ldots,q,$ we have

(3.8) $\displaystyle\sum_{i=2}^{q}\nu_{h_{1i}}\geq(q-3)\sum_{i=2}^{q}\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}-(q-1)\nu^{[1]}_{(f,H_{1}),\leqslant k_{1}}.$

Obviously, $i\not\in P$ for all $i=2,\ldots,q.$ Now put $\sigma(i)=\begin{cases}i+N,&\text{ if }i+N\leqslant q\\\ i+N-q+1,&\text{ if }i+N>q;\end{cases}$ then $i$ and $\sigma(i)$ belong to distinct groups, i.e., $V_{i}\not\cong V_{\sigma(i)}$ for all $i=2,\ldots,q$, and hence $\Phi^{\alpha}_{i\sigma(i)}\not\equiv 0$ for some $\alpha\in\mathbb{Z}^{m}_{+}$ with $|\alpha|\leqslant 1.$ By Lemma 3.7, we get $\displaystyle\nu_{g_{i\sigma(i)}}$ $\displaystyle\geq\sum_{u=1}^{3}\sum_{t=i,\sigma(i)}\nu^{[n]}_{(f^{u},H_{t}),\leqslant k_{t}}-(2n+1)\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}-(n+1)\nu^{[1]}_{(f,H_{\sigma(i)}),\leqslant k_{\sigma(i)}}$ $\displaystyle+2\sum_{t=1,t\not=i,\sigma(i)}\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}.$ Summing up both sides of this inequality over all $i\in\\{2,\ldots,q\\}$ and using (3.8), we obtain $\displaystyle\sum_{i=2}^{q}\nu_{g_{i\sigma(i)}}$ $\displaystyle\geq 2\sum_{i=2}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}+(2q-3n-8)\sum_{i=2}^{q}\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}+2(q-1)\nu^{[1]}_{(f,H_{1}),\leqslant k_{1}}$ $\displaystyle\geq 2\sum_{i=2}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}+\frac{4q-3n-14}{3}\sum_{u=1}^{3}\sum_{i=2}^{q}\nu^{[1]}_{(f^{u},H_{i}),\leqslant k_{i}}-2\sum_{i=2}^{q}\nu_{h_{1i}}$ $\displaystyle\geq\frac{4q+3n-14}{3n}\sum_{i=2}^{q}\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}-2\sum_{i=2}^{q}\nu_{h_{1i}}.$ It implies that $\sum_{u=1}^{3}\sum_{i=2}^{q}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}\leqslant\frac{3n}{4q+3n-14}\nu_{\prod_{i=2}^{q}(g_{i\sigma(i)}h^{2}_{1i})}.$ Obviously, $\prod_{i=2}^{q}(g_{i\sigma(i)}h^{2}_{1i})\in B(q-1,3(q-1);f^{1},f^{2},f^{3})$. Applying Lemma 2.8, we obtain $q-1\leqslant 2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{6n(q-1)}{4q+3n-14}\big{)}+\frac{3n(q-1)}{4q+3n-14}.$ Since $q\geq 2n+2$, a simple calculation gives $\frac{6n(q-1)}{4q+3n-14}\leqslant\frac{6n(q-1)}{11n-6}<\frac{4(q-n)n}{n-1}.$ It implies that $q\leqslant 2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{4(q-n)n}{n-1}\big{)}+\frac{4q+3nq-14}{4q+3n-14},$ which is a contradiction.

$\bullet$ Case 3: $\sharp P=0$.
By Lemma 3.7, for all $i\not=j$, we get $\displaystyle\nu_{g_{ij}}$ $\displaystyle\geq\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}+\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{j}),\leqslant k_{j}}+2\sum_{t=1,t\not=i,j}\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}-(2n+1)\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}$ $\displaystyle-(n+1)\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}+\nu_{j}.$ Put $\gamma(i)=\begin{cases}i+N&\text{ if }i\leqslant q-N\\\ i+N-q&\text{ if }i>q-N.\end{cases}$ By summing up both sides of the above inequality over all pairs $(i,\gamma(i)),$ we obtain

(3.9) $\displaystyle\sum_{i=1}^{q}\nu_{g_{i\gamma(i)}}\geq 2\sum_{u=1}^{3}\sum_{i=1}^{q}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}+(2q-3n-6)\sum_{t=1}^{q}\nu^{[1]}_{(f,H_{t}),\leqslant k_{t}}+\sum_{t=1}^{q}\nu_{t}.$

By Lemma 3.5, we can see that $V_{j}\not\sim V_{l}$ for all $j\not=l.$ Thus, we have $P^{i\gamma(i)}_{st}:=(f^{s},H_{i})(f^{t},H_{\gamma(i)})-(f^{t},H_{i})(f^{s},H_{\gamma(i)})\not\equiv 0,\ s\not=t,\ 1\leqslant i\leqslant q.$

We claim that for every $j\not\in\\{i,\gamma(i)\\}$ and every $z\in f^{-1}(H_{j})$, we have $\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}(z)\geq 4\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}(z)-\nu_{j}(z).$ Indeed, for $z\in f^{-1}(H_{j})\cap\mathrm{Supp}\,{\nu_{j}},$ we have $4\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}(z)-\nu_{j}(z)\leqslant 4-1=3\leqslant\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}(z).$ For $z\in f^{-1}(H_{j})\setminus\mathrm{Supp}\,\nu_{j}$, we may assume that $\nu_{(f^{1},H_{j}),\leqslant k_{j}}(z)<\nu_{(f^{2},H_{j}),\leqslant k_{j}}(z)\leqslant\nu_{(f^{3},H_{j}),\leqslant k_{j}}(z).$ Since $f^{1}\wedge f^{2}\wedge f^{3}\equiv 0,$ we have $\det(V_{i},V_{\gamma(i)},V_{j})\equiv 0,$ and hence $(f^{1},H_{j})P^{i\gamma(i)}_{23}=(f^{2},H_{j})P^{i\gamma(i)}_{13}-(f^{3},H_{j})P^{i\gamma(i)}_{12}.$ It implies that $\nu_{P^{i\gamma(i)}_{23}}(z)\geq 2$ and so $\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}(z)\geq 4=4\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}(z)-\nu_{j}(z).$ The claim is proved.
On the other hand, with $j=i$ or $j=\gamma(i)$, for every $z\in f^{-1}(H_{j})$, we see that $\displaystyle\nu_{P^{i\gamma(i)}_{st}}(z)$ $\displaystyle\geq\min\\{\nu_{(f^{s},H_{j}),\leqslant k_{j}}(z),\nu_{(f^{t},H_{j}),\leqslant k_{j}}(z)\\}$ $\displaystyle\geq\nu^{[n]}_{(f^{s},H_{j}),\leqslant k_{j}}(z)+\nu^{[n]}_{(f^{t},H_{j}),\leqslant k_{j}}(z)-n\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}(z).$ Hence, $\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}(z)\geq 2\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{j}),\leqslant k_{j}}(z)-3n\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}(z).$ Combining this inequality with the above claim, we obtain $\displaystyle\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}(z)$ $\displaystyle\geq\sum_{j=i,\gamma(i)}\big{(}2\sum_{u=1}^{3}\nu^{[n]}_{(f^{u},H_{j}),\leqslant k_{j}}(z)-3n\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}(z)\big{)}$ $\displaystyle+\sum_{j=1,j\not=i,\gamma(i)}^{q}(4\nu^{[1]}_{(f,H_{j}),\leqslant k_{j}}(z)-\nu_{j}(z)).$ On the other hand, it is easy to see that $\prod_{1\leqslant s<t\leqslant 3}P^{i\gamma(i)}_{st}\in B(2,0;f^{1},f^{2},f^{3}).$ Summing up both sides of the above inequality over all $i,$ we obtain $\displaystyle\sum_{i=1}^{q}\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}\geq 4\sum_{u=1}^{3}\sum_{i=1}^{q}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}+(4q-6n-8)\sum_{i=1}^{q}\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}-(q-2)\sum_{i=1}^{q}\nu_{i}.$ Thus, $\sum_{i=1}^{q}\nu_{i}+\frac{1}{q-2}\sum_{i=1}^{q}\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}\geq\frac{4}{q-2}\sum_{u=1}^{3}\sum_{i=1}^{q}\nu^{[n]}_{(f^{u},H_{i}),\leqslant k_{i}}+\frac{4q-6n-8}{q-2}\sum_{i=1}^{q}\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}.$ Using this inequality and (3.9), and noting that $2q-3n-6\geqslant n-2$ since $q\geqslant 2n+2$, we have $\displaystyle\sum_{i=1}^{q}\nu_{g_{i\gamma(i)}}$ $\displaystyle+\frac{1}{q-2}\sum_{i=1}^{q}\sum_{1\leqslant s<t\leqslant 3}\nu_{P^{i\gamma(i)}_{st}}$ $\displaystyle\geq\big{(}2+\frac{4}{q-2}\big{)}\sum_{u=1}^{3}\sum_{t=1}^{q}\nu^{[n]}_{(f^{u},H_{t}),\leqslant k_{t}}+\big{(}n-2+\frac{4q-6n-8}{q-2}\big{)}\sum_{i=1}^{q}\nu^{[1]}_{(f,H_{i}),\leqslant k_{i}}$ $\displaystyle\geq\big{(}2+\frac{4}{q-2}+\frac{n-2}{3n}+\frac{4q-6n-8}{3n(q-2)}\big{)}\sum_{u=1}^{3}\sum_{t=1}^{q}\nu^{[n]}_{(f^{u},H_{t}),\leqslant k_{t}}.$ It implies that $\sum_{u=1}^{3}\sum_{t=1}^{q}\nu^{[n]}_{(f^{u},H_{t}),\leqslant k_{t}}\leqslant\frac{3n}{6nq+(n-2)(q-2)+4q-6n-8}\nu_{\prod_{i=1}^{q}(g^{q-2}_{i\gamma(i)}P^{i\gamma(i)}_{12}P^{i\gamma(i)}_{13}P^{i\gamma(i)}_{23})}.$ Observe that $\prod_{i=1}^{q}g^{q-2}_{i\gamma(i)}P^{i\gamma(i)}_{12}P^{i\gamma(i)}_{13}P^{i\gamma(i)}_{23}\in B(q^{2},q(q-2);f^{1},f^{2},f^{3})$; hence, applying Lemma 2.8, we obtain $\displaystyle q$ $\displaystyle\leqslant 2N-n+1+\sum_{i=1}^{q}\frac{n}{k_{i}+1}+\rho\big{(}n(2N-n+1)+\frac{2nq(q-2)}{6nq+(n-2)(q-2)+4q-6n-8}\big{)}$ $\displaystyle+\frac{3nq^{2}}{6nq+(n-2)(q-2)+4q-6n-8},$ which is impossible since $\frac{2nq(q-2)}{6nq+(n-2)(q-2)+4q-6n-8}<\frac{2nq(q-2)}{6nq}=\frac{q-2}{3}\leqslant\frac{4(q-n)n}{n-1}.$ The proof of Theorem 1.1 is complete. $\square$

Acknowledgement: This work was done while the first author was staying at the Vietnam Institute for Advanced Study in Mathematics (VIASM). He would like to thank VIASM for the support.

## References

* [1] Z. Chen and Q. Yan, Uniqueness theorem of meromorphic mappings into $\mathbb{P}^{N}(\mathbb{C})$ sharing $2N+3$ hyperplanes regardless of multiplicities, Internat. J. Math. 20 (2009), 717-726.
* [2] G. Dethloff and T. V. Tan, Uniqueness theorems for meromorphic mappings with few hyperplanes, Bull. Sci. Math. 133 (2009), 501-514.
* [3] H. Fujimoto, The uniqueness problem of meromorphic maps into the complex projective space, Nagoya Math. J. 58 (1975), 1-23.
* [4] H. Fujimoto, Non-integrated defect relation for meromorphic maps of complete Kähler manifolds into $\mathbb{P}^{N_{1}}(\mathbb{C})\times\cdots\times\mathbb{P}^{N_{k}}(\mathbb{C}),$ Japanese J. Math. 11 (1985), 233-264.
* [5] H. Fujimoto, A unicity theorem for meromorphic maps of a complete Kähler manifold into $\mathbb{P}^{N}(\mathbb{C}),$ Tohoku Math. J. 38 (1986), 327-341.
* [6] H. Fujimoto, Uniqueness problem with truncated multiplicities in value distribution theory, Nagoya Math. J. 152 (1998), 131-152.
* [7] L. Karp, Subharmonic functions on real and complex manifolds, Math. Z. 179 (1982), 535-554.
* [8] R. Nevanlinna, Einige Eindeutigkeitssätze in der Theorie der meromorphen Funktionen, Acta Math. 48 (1926), 367-391.
* [9] N. T. Nhung and L. N. Quynh, Unicity of meromorphic mappings from complete Kähler manifolds into projective space, Houston J. Math. 44, No. 3 (2018), 769-785.
* [10] N. T. Nhung and P. D. Thoan, On degeneracy of three meromorphic mappings from complete Kähler manifolds into projective spaces, Comput. Methods Funct. Theory 19 (2019), 353-382.
* [11] E. I. Nochka, On the theory of meromorphic functions, Sov. Math. Dokl. 27 (1983), 377-381.
* [12] J. Noguchi, A note on entire pseudo-holomorphic curves and the proof of Cartan-Nochka’s theorem, Kodai Math. J. 28 (2005), 336-346.
* [13] M. Ru and S. Sogome, Non-integrated defect relation for meromorphic maps of complete Kähler manifold intersecting hypersurface in $\mathbb{P}^{n}(\mathbb{C})$, Trans. Amer. Math. Soc. 364 (2012), 1145-1162.
* [14] M. Ru and S. Sogome, A uniqueness theorem for meromorphic maps of a complete Kähler manifold into $\mathbb{P}^{n}(\mathbb{C})$ sharing hypersurfaces, Proc. Amer. Math. Soc. 141, No. 12 (2013), 4229-4239.
* [15] L. Smiley, Geometric conditions for unicity of holomorphic curves, Contemp. Math. 25 (1983), 149-154.
* [16] S. D. Quang, Unicity of meromorphic mappings sharing few hyperplanes, Ann. Polon. Math. 102 (3) (2011), 255-270.
* [17] S. D. Quang, A finiteness theorem for meromorphic mappings sharing few hyperplanes, Kodai Math. J. 35 (2012), 463-484.
* [18] S. D. Quang, Degeneracy and finiteness theorems for meromorphic mappings in several complex variables, Chin. Ann. Math. Series B 40, No. 2 (2019), 251-272.
* [19] S. D. Quang, Meromorphic mappings of a complete connected Kähler manifold into a projective space sharing hyperplanes, arXiv:1909.01849v1 [math.CV], 4 Sep 2019.
* [20] S. D. Quang and L. N. Quynh, Algebraic dependences of meromorphic mappings sharing few hyperplanes counting truncated multiplicities, Kodai Math. J. 38 (2015), 97-118.
* [21] D. D. Thai and S. D. Quang, Non-integrated defect of meromorphic maps on Kähler manifold, Math. Zeitschrift 292 (2019), 211-229.
* [22] P. D. Thoan and N. T. Nhung, Algebraic dependence for three meromorphic mappings from complete Kähler manifolds into projective spaces, Bull. Iran. Math. Soc. (2019), https://doi.org/10.1007/s41980-019-00301-8.
* [23] S. T. Yau, Some function-theoretic properties of complete Riemannian manifolds and their applications to geometry, Indiana Univ. Math. J. 25 (1976), 659-670.
# Torus bifurcations of large-scale swarms having range dependent communication delay

Ira B. Schwartz <EMAIL_ADDRESS> U.S. Naval Research Laboratory, Code 6792, Plasma Physics Division, Washington, DC 20375, USA Victoria Edwards U.S. Naval Research Laboratory, Code 5514, Navy Center for Applied Research in Artificial Intelligence, Washington, DC 20375, USA Sayomi Kamimoto Department of Mathematics, George Mason University, Fairfax, Virginia 22030, USA Klimka Kasraie Aerospace, Transportation and Advanced Systems Laboratory of the Georgia Tech Research Institute, Atlanta, GA 30332 M. Ani Hsieh Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA 19104, USA Ioana Triandaf1 Jason Hindes1

###### Abstract

Dynamical emergent patterns of swarms are now fairly well established in nature, and include flocking and rotational states. Recently, there has been great interest in engineering and physics to create artificial self-propelled agents that communicate over a network and operate with simple rules, with the goal of creating emergent self-organizing swarm patterns. In this paper, we show that when communicating networks have range dependent delays, rotational states, which are typically periodic, undergo a bifurcation and create swarm dynamics on a torus. The observed bifurcation introduces additional frequencies into the dynamics, which may lead to quasi-periodic behavior of the swarm.

Swarming behavior occurs when a large number of self-propelled agents interact using simple rules. Natural swarms of biological systems have been observed at a range of length scales forming complex emergent patterns. Engineers have drawn inspiration from these natural systems, resulting in the translation of swarm theory to communicating robotic systems. Example applications of artificial swarms include exploration and mapping, search and rescue, and distributed sensing and estimation. Through continued development, an additional parameter, the delay in communication between artificial agents, has become important to consider. Specifically, it was previously discovered, both theoretically and experimentally, that communication delay creates new rotational patterns that are not observed without delay. Here we extend the understanding of communication delays to reveal the effects of range dependent delay, where the communication delay between agents depends on their separation. Our results show that including range dependent delay introduces new rotational states. We show how these new states emerge, discuss their stability, and discuss how they may be realized in large scale robotic systems. By improving our theoretical understanding of swarm behavior modeled in simulation, we can better anticipate what will happen experimentally. Additionally, it is possible to leverage the predicted autonomous behaviors to induce desired swarm behaviors.

## I Introduction

Swarming behavior, which we define as the emergence of spatio-temporal group behaviors from simple local interactions between pairs of agents, is widespread and observed over a range of application domains.
Examples can be found in biological systems over a range of length scales, from aggregates of bacterial cells and dynamics of skin cells in wound healing Budrene and Berg (1995); Polezhaev _et al._ (2006); Lee _et al._ (2013) to dynamic patterns of fish, birds, bats, and even humans Tunstrøm _et al._ (2013); Helbing and Molnar (1995); Giuggioli, McKetterick, and Holderied (2015a); Lee (2006). These systems are particularly interesting because they allow simple individual agents to achieve complex tasks in ways that are scalable, extensible, and robust to failures of individual agents. In addition, these swarming behaviors are able to form and persist in spite of complicating factors such as delayed actuation, latent communication, a localized number of neighbors each agent is able to interact with, heterogeneity in agent dynamics, and environmental noise. These factors have been the focus of previous theoretical research describing the bifurcating spatio-temporal patterns in swarms, as seen for example in Refs. Topaz and Bertozzi (2004); Szwaykowska, Romero, and Schwartz (2015); Mier-y-Teran Romero, Forgoston, and Schwartz (2011); Hindes, Szwaykowska, and Schwartz (2016). Likewise, applications of swarms have been experimentally realized in areas such as mapping Ramachandran, Elamvazhuthi, and Berman (2018), leader-following Morgan and Schwartz (2005); Wiech, Eremeyev, and Giorgio (2018), and density control Li _et al._ (2017). To guarantee swarming behavior experimentally, control is typically employed Tanner, Jadbabaie, and Pappas (2007); Gazi (2005); Jadbabaie, Jie Lin, and Morse (2003); Viragh _et al._ (2014); Desai, Ostrowski, and Kumar (2001), relying on strict assumptions to prove convergence to a desired state. However, by relaxing certain assumptions, a number of studies show that even with simple interaction protocols, swarms of agents are able to converge to organized, coherent behaviors in a self-emergent manner, i.e., autonomously without control. Different mathematical approaches have yielded a wide selection of both agent-based Helbing and Molnar (1995); Lee (2006); Vicsek _et al._ (2006); Tunstrøm _et al._ (2013) and continuum models that predict swarming dynamics Edelstein-Keshet, Grunbaum, and Watmough (1998); Topaz and Bertozzi (2004); Polezhaev _et al._ (2006). In almost all models, since the agents have just a few simple rules, there exists only a relatively small number of controllable parameters. The parameter set usually consists of a self-propulsion force, a potential function governing attracting and repelling forces between agents, and a communication radius governing the local neighborhood within which the agents can sense and interact with each other. In both robotic and biological swarms, an additional parameter appears as a delay between the time information is perceived and the actuation (reaction) time of an agent. Such delays have now been measured in swarms of bats, birds, fish, and crowds of people Giuggioli, McKetterick, and Holderied (2015b); Nagy _et al._ (2010); Fehrenbach _et al._ (2014). The measured delays are longer than the typical relaxation times of the agents, and may be space and time dependent. Robotic swarms experience communication delays whose effects are similar to those of the delays experienced in natural swarms.
Incorporating stationary delays along with a minimal set of parameters in swarm models results in multi-stability of rotational patterns in space Mier-y-Teran Romero, Forgoston, and Schwartz (2012); Szwaykowska _et al._ (2016); Edwards _et al._ (2020); Hindes and Schwartz (2018); Szwaykowska, Schwartz, and Carr (2018). In particular, for delays that are equal and fixed, one observes three basic swarming states or modes: Flocking, which is a translating center of mass; a Ring state, where the agents are splayed out on a ring in phase about a stationary center of mass; and a Rotating state, where the center of mass itself rotates. Synthetic robotic swarms have communication delays that naturally occur over wireless networks as a result of low bandwidth Komareji, Shang, and Bouffanais (2018), resulting in delayed communication, and multi-hop communication Oliveira, Almeida, and Lima (2015). In cases where the delays are fixed and equal, and the communication occurs on a homogeneous network, it is known that delays create new rotational patterns, as has been verified both theoretically and experimentally Szwaykowska _et al._ (2016); Edwards _et al._ (2020). However, in situations with robots, even simple communication models are based on the distance between agents ying Ani Hsieh _et al._ (2004); Hsieh _et al._ (2007). Following from these models, if one assumes that the delays are range dependent, the problem becomes one of studying state dependent delays, where delays depend implicitly on the relative positions between agents. When placing swarms in realistic complex environments, delays are not necessarily a continuous function of range; rather, the probability of delay increases stochastically as agents move farther from one another beyond a certain radius Fink, Ribeiro, and Kumar (2012, 2013). That is, the rate of communication becomes spatially dependent, whereby nearby agents see a signal with a fast rate of communication, but, due to shading and fading of signals, communication rates become slow and complex outside a given radius. Underwater communication is an excellent swarm example, where, outside a significant radius, communication delays are one to two orders of magnitude greater than local ones Arrichiello _et al._ (2009). The swarm model that follows takes a globally coupled swarm, and explicitly relaxes the fixed delay assumption, by including range dependent delay based on a fixed communication radius. We show that when range dependent delays are included, new frequencies are introduced and generate bifurcations to a torus. The result is a milling type of swarm that depends on just a few parameters. The results here are important for robotic swarming, where one of the goals is to produce desired patterns autonomously, without external controls. The pattern formations predicted here show how delayed information, whether coming from communication, actuation, or both, impacts the stability of swarm states, such as ring and/or rotating states. By revealing those parameter regions where patterns are destabilized, we provide a comprehensive characterization of the autonomously accessible swarm states in the presence of range-dependent delay.

## II The swarm model

Consider a swarm of delay-coupled agents in $\mathbb{R}^{2}$. Each agent is indexed by $i\in\\{1,\ldots,N\\}$. We use a simple but general model for swarming motion.
Each agent has a self-propulsion force that strives to maintain motion at a preferred speed and a coupling force that governs its interaction with other agents in the swarm. The interaction force is defined as the negative gradient of a pairwise interaction potential $U(\cdot,\cdot)$. All agents follow the same rules of motion; however, mechanical differences between agents may lead to heterogeneous dynamics; this effect is captured by assigning different acceleration factors (denoted $\kappa_{i}$) to the agents. In this paper, we assume $\kappa_{i}=1$ for all $i$. For the effect of heterogeneity on the swarm bifurcations, see Szwaykowska, Romero, and Schwartz (2015). Agent-to-agent interactions occur along a graph $\mathcal{{G}}=\\{\mathcal{{V}},\mathcal{{E}}\\}$, where $\mathcal{{V}}$ is the set of vertices $v_{i}$ in the graph and $\mathcal{{E}}$ is the set of edges $e_{ij}$. The vertices correspond to individual swarm agents, and edges represent communication links; that is, agents $i$ and $j$ communicate with each other if and only if $e_{ij}\in\mathcal{{E}}$. All communication links are assumed to be bi-directional, and all communications occur with a time delay $\tau$. That is, range dependence is not included. Let ${\bm{{r}}}_{i}\in\mathbb{R}^{2}$ denote the position of agent $i$ and let $\mathcal{{N}}_{i}=\\{v_{j}\in\mathcal{{V}}\mathrel{\mathop{\mathchar 58\relax}}e_{ij}\in\mathcal{{E}}\\}$ denote the set of neighbors of agent $i$. The motion of agent $i$ is governed by the following equation:

$\ddot{\mathbf{r}}_{i}=\kappa_{i}(1-\mathinner{\\!\left\lVert\dot{\mathbf{r}}_{i}\right\rVert}^{2})\dot{\mathbf{r}}_{i}-\kappa_{i}\sum_{j\in\mathcal{{N}}_{i}}\nabla_{x}U({\mathbf{r}}_{i}(t),{\mathbf{r}}_{j}^{\tau}(t)),$ (1)

where superscript $\tau$ is used to denote time delay, so that ${\mathbf{r}}_{j}^{\tau}(t)={\mathbf{r}}_{j}(t-\tau)$, $\mathinner{\\!\left\lVert\cdot\right\rVert}$ denotes the Euclidean norm, and $\nabla_{x}$ denotes the gradient with respect to the first argument of $U$. The first term in Eq. 1 governs self-propulsion, where the speed has been normalized to unity. That is, without coupling the agents always asymptote to unit speed. To analyze the dynamics of a large scale swarm, we use a harmonic interaction potential with short-range repulsion:

$U(\mathbf{r}_{i},\mathbf{r}_{j}^{\tau})=c_{r}e^{-\frac{\mathinner{\\!\left\lVert\mathbf{r}_{i}-\mathbf{r}_{j}\right\rVert}}{l_{r}}}+\frac{a}{2N}\mathinner{\\!\left\lVert\mathbf{r}_{i}-\mathbf{r}_{j}^{\tau}\right\rVert}^{2}.$ (2)

In Eq. 1, it is assumed that the communication delay, $\tau$, is independent of the distance, or range, between any pair of agents. (Notice that the exponent of the repulsion term is independent of the delay since the repulsion force is local.) With the addition of delays in the network, it was shown for homogeneous communication networks that, in addition to the usual dynamical translating and milling (or ring) states, new rotational states emerge for sufficiently large $\tau$ Szwaykowska _et al._ (2016). In particular, for a given attractive coupling strength, there is a delay that destabilizes the periodic ring state into a rotating state, in which the agents coalesce into a small group and move around a fixed center of rotation; this behavior is quite different from the ring state, where agents are spread out in a splay phase state. The rotating state is only observed with delay introduced in the communication network, and it appears through a Hopf bifurcation.
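To make the model concrete, here is a minimal numerical sketch of Eqs. 1-2 for the globally coupled, fixed-delay case with $\kappa_{i}=1$ and $c_{r}=0$. It uses explicit Euler stepping with a circular buffer for the delayed positions; the step size and random initial conditions are illustrative assumptions, not values taken from the paper, and a production code would use a more careful delay-equation integrator.

```python
import numpy as np

# Sketch: Euler integration of the delay-coupled swarm, Eqs. 1-2, with
# all-to-all coupling, kappa_i = 1 and c_r = 0 (no repulsion).
N, a, tau = 150, 2.0, 1.75          # illustrative parameters
dt, steps = 0.01, 20000
lag = int(round(tau / dt))          # delay measured in Euler steps

rng = np.random.default_rng(0)
r = rng.uniform(-1.0, 1.0, (N, 2))  # agent positions
v = rng.uniform(-1.0, 1.0, (N, 2))  # agent velocities
hist = np.tile(r, (lag + 1, 1, 1))  # constant position history on [-tau, 0]

for step in range(steps):
    slot = step % (lag + 1)
    r_tau = hist[slot]              # positions a time tau in the past
    # harmonic attraction of Eq. 2: -(a/N) sum_{j != i} (r_i - r_j(t - tau))
    force = -(a / N) * ((N - 1) * r - (r_tau.sum(axis=0) - r_tau))
    speed2 = np.sum(v * v, axis=1, keepdims=True)
    v += dt * ((1.0 - speed2) * v + force)  # self-propulsion plus coupling
    r += dt * v
    hist[slot] = r.copy()           # this slot is next read a time tau later
```

Swapping the force line for the range dependent coupling of Eqs. 3-5, sketched below, turns this into the full range dependent model.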
However, in real-world robotic swarms, communication delays are not uniform between all pairs of agents; delays may be stochastic or even state-dependent. For example, if agents are communicating over a multi-hop network, the delay will increase with the number of hops required to send a message from one agent to the other, and in general will scale with the separation between them. In order to handle range dependent delays, we will make an approximation that depends on a communication range radius.

### II.1 Approximating range dependent delayed coupling

For the coupling term, we are interested in introducing an approximation to range based coupling delay. Since all communicating agents send signals with some delay, we compute relative distances defined as

$\displaystyle D^{\tau}_{i,j}$ $\displaystyle\equiv||{\mathbf{r}}_{i}-{\mathbf{r}}_{j}^{\tau}||.$ (3)

We define a Heaviside function, $H(x)$, that is zero when $x\leq 0$ and 1 otherwise, and we employ global coupling based on a spring potential. For our range dependent metric, we let $\epsilon\geq 0$ denote the range radius. Suppose that when the separation between two agents is small, that is, less than $\epsilon$, sensing between the two agents is almost immediate. In practice, the time needed for sensing depends on several factors, such as actuation times, and so, in practice, distances are computed with delay. Therefore, we model the coupling term for the $i^{th}$ agent as

$\displaystyle C_{i}({\mathbf{r}}_{i},{\mathbf{r}}_{j},{\mathbf{r}}_{j}^{\tau},\epsilon)=-\frac{a}{N}(\nabla_{x}U({\mathbf{r}}_{i}(t),{\mathbf{r}}_{j}^{\tau}(t)))H(D^{\tau}_{i,j}-\epsilon)$ $\displaystyle-\frac{a}{N}(\nabla_{x}U({\mathbf{r}}_{i}(t),{\mathbf{r}}_{j}(t)))(1-H(D^{\tau}_{i,j}-\epsilon)),$ (4)

where the first coupling term is delayed, applying when the distance is outside a ball of radius $\epsilon$, while the second term has no delay, applying when the distance is within the $\epsilon$ ball. The resulting swarm model with the range dependent coupling of Eq. 4 is now

$\ddot{\mathbf{r}}_{i}=\kappa_{i}(1-\mathinner{\\!\left\lVert\dot{\mathbf{r}}_{i}\right\rVert}^{2})\dot{\mathbf{r}}_{i}-\kappa_{i}\sum_{j\in\mathcal{{N}}_{i}}C_{i}({\mathbf{r}}_{i},{\mathbf{r}}_{j},{\mathbf{r}}_{j}^{\tau},\epsilon).$ (5)

If the delayed distance is within an $\epsilon$ ball, then we evaluate the coupling without delay; otherwise the coupling is delayed. Thus the coupling function takes into account when delay is active or not between pairs of communicating agents, and depends on the range radius, $\epsilon$. The Heaviside function on the right hand side of Eq. 5 renders the derivatives of the differential delay equation discontinuous, and as such poses a numerical integration problem. To mollify the lack of smoothness, we approximate $H(x)$ by letting $H(x)\approx\frac{1}{\pi}\arctan(kx)+\frac{1}{2}$, where $k\gg 1$ is a constant; the approximation converges to the Heaviside function as $k\rightarrow\infty$. Using only the delayed distance to compute the range dependent coupling assumes that no measurement is instantaneous. Treating the ideal situation, in which delay plays no role in sensing, would raise issues that we do not consider here.

### II.2 Numerical simulations of full swarms

Examples of simulations using the swarm model with the range dependent coupling are shown below. Here the number of agents is $N=150$, and the coupling strength is $a=2.0$.
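For reference, a minimal sketch of the smoothed range dependent coupling of Eqs. 3-4, of the kind used inside such simulations, is given here. The function and parameter names are our own, and the smoothing constant k is an illustrative choice.

```python
import numpy as np

def smooth_heaviside(x, k=100.0):
    # arctan regularization of H(x); the hard switch is recovered as k -> infinity
    return np.arctan(k * x) / np.pi + 0.5

def range_dependent_force(r, r_tau, a, eps, k=100.0):
    """Sketch of the coupling of Eqs. 3-4: pair (i, j) uses the delayed
    position r_j(t - tau) when the delayed separation D_ij exceeds eps,
    and the current position r_j(t) otherwise, blended smoothly.
    r, r_tau: (N, 2) arrays of current and delayed positions."""
    N = r.shape[0]
    diff_tau = r[:, None, :] - r_tau[None, :, :]      # r_i - r_j(t - tau)
    diff_now = r[:, None, :] - r[None, :, :]          # r_i - r_j(t)
    d_tau = np.linalg.norm(diff_tau, axis=2)          # D_ij of Eq. 3
    w = smooth_heaviside(d_tau - eps, k)[:, :, None]  # ~1 outside the eps ball
    pair = w * diff_tau + (1.0 - w) * diff_now        # harmonic part of Eq. 4
    idx = np.arange(N)
    pair[idx, idx, :] = 0.0                           # no self-interaction
    return -(a / N) * pair.sum(axis=1)
```

Substituting this for the fixed-delay force in the Euler loop of Sec. II gives the full model of Eq. 5 used to produce the figures below.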
For the remainder of the analysis, we set $c_{r}=0$, and note that the attractors persist when the repulsive amplitude is sufficiently small Szwaykowska _et al._ (2016). (See supplementary material for a video of the dynamics with small repulsion.)

Figure 1: Three snapshots of the swarm state in space for $\epsilon=0.01,a=2.0,\tau=1.75$. Sample times $t_{0},t_{1}=t_{0}+20,t_{2}=t_{0}+40$.

Note that even when $\epsilon$ is very small, as shown in Fig. 1, we observe a mix of clustered states which are a combination of pure ring and rotation states. The agents tend to cluster into local groups, and the clusters move in clockwise and counter-clockwise directions as in the ring state. Here, however, the phase differences between agents are non-uniform. A single randomly chosen agent, as shown in Fig. 2, is periodic with a sharp frequency of rotation, and the relative positions of all individual agents are phase locked. The center of mass of the positions over all agents, $\bm{{R}}\equiv\frac{1}{N}\sum_{i}\bm{{r}}_{i}$, undergoes small amplitude oscillations about a fixed point (not shown).

Figure 2: Swarm ring state for $\epsilon=0.01,a=2.0,\tau=1.75$. (a) Time series of the x-component of a single agent. (b) The power spectrum showing a sharp frequency. (c) A phase portrait of the orbit of a single agent. The red point denotes the center of mass.

As the radius $\epsilon$ increases, the periodic mixed state loses stability, giving rise to more complicated behavior, as seen in Fig. 3. New frequencies are introduced, causing the ring state to appear as a quasi-periodic attractor. Moreover, the center of mass has its own non-trivial dynamics, which includes the effects of the new frequencies. Examining the Poincaré map of the attractors shows that the instability gives rise to dynamics which we conjecture to be motion on a torus. Letting $(M_{x},M_{y})$ denote the time averaged center of mass over all agents, we compute the sequence $\\{x(t_{i}),i=1,\ldots,M\\}$ of points at which $y(t_{i})=0$ and $x(t_{i})>M_{x}$. The result is shown in the two panels of Fig. 4. Panel (a) shows the complicated toroidal motion of the center of mass of Fig. 3(c) after transients are removed. For a single frequency, the Poincaré map of the center of mass would be a single fixed point. The addition of new frequencies is revealed in the Poincaré map as complicated motion on a torus. For larger values of $\epsilon$, the motion on the torus converges to a periodic attractor, as in panel (b).

Figure 3: Swarm instability for $\epsilon=0.25,a=2.0,\tau=1.75$. (a) Time series of the x-component of a single agent. (b) The power spectrum showing a slight broadening and the birth of a new frequency. (c) A phase portrait of the orbit of a single agent.

Figure 4: Poincaré map of Eqs. 4-5 for (a) $\epsilon=0.25$, (b) $\epsilon=0.5$. Other parameters are fixed: $a=2.0,\tau=1.75$. See text for details.

## III Mean-Field Equation of Range Dependent Delay Coupled Swarm

In order to shed some light on the origin of the bifurcation to dynamics on a torus, we examine the full swarm model from a mean-field perspective. The mean field is much lower dimensional, and a full bifurcation analysis may be done. We consider the case of all-to-all communication. Let $\bm{R}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{r}_{i}$ and $\mathbf{r}_{i}=\bm{R}+\delta\mathbf{r}_{i},$ where $\delta\mathbf{r}_{i}$ is a fluctuation term satisfying the identity

$\sum_{i=1}^{N}\delta\mathbf{r}_{i}=0.$ (6)

Then we can write Eq.
5 as $\displaystyle\ddot{\bm{R}}+\delta\ddot{\mathbf{r}}_{i}$ $\displaystyle=(1-|\dot{\bm{R}}+\delta\dot{\mathbf{r}}_{i}|^{2})(\dot{\bm{R}}+\delta\dot{\mathbf{r}}_{i})$ $\displaystyle-\frac{a}{N}\sum_{j=1,j\neq i}^{N}((\bm{R}+\delta\mathbf{r}_{i})-(\bm{R}^{\tau}+\delta\mathbf{r}_{j}^{\tau}))C_{1,i}$ $\displaystyle-\frac{a}{N}\sum_{j=1,j\neq i}^{N}((\bm{R}+\delta\mathbf{r}_{i})-(\bm{R}+\delta\mathbf{r}_{j}))C_{2,i},$ (7)

where $\displaystyle C_{1,i}$ $\displaystyle=H(\lVert{\mathbf{r}_{i}-\mathbf{r}_{j}^{\tau}\rVert}-\epsilon)$ $\displaystyle=H(\lVert{(\bm{R}+\delta\mathbf{r}_{i})-(\bm{R}^{\tau}+\delta\mathbf{r}_{j}^{\tau})\rVert}-\epsilon)$ $\displaystyle=H(\lVert{\bm{R}-\bm{R}^{\tau}+\delta\mathbf{r}_{i}-\delta\mathbf{r}_{j}^{\tau}\rVert}-\epsilon)$ and $C_{2,i}=1-C_{1,i}.$

We use the following to reduce the equations of motion to the mean field: from Eq. 6, we note $\displaystyle\sum_{i=1}^{N}\delta\mathbf{r}_{i}^{\tau}$ $\displaystyle=\sum_{j=1,j\neq i}^{N}\delta\mathbf{r}_{j}^{\tau}+\delta\mathbf{r}_{i}^{\tau}=0\iff$ $\displaystyle-\sum_{j=1,j\neq i}^{N}\delta\mathbf{r}_{j}^{\tau}=\delta\mathbf{r}_{i}^{\tau}.$ (8)

We further assume that all perturbations from the mean, $\delta\mathbf{r_{i}}$, are negligible. (This is always true if the coupling amplitude is sufficiently large.) In addition, we use the fact that $\displaystyle{\frac{a(N-1)}{N}}$ tends to $a$ as $N\to\infty$. Therefore, we obtain the mean-field approximation for the center of mass in the range dependent delay case:

$\ddot{\bm{R}}=(1-\lvert\dot{\bm{R}}\rvert^{2})\cdot\dot{\bm{R}}-a(\bm{R}-\bm{R}^{\tau})\cdot H(\lVert{\bm{R}-\bm{R}^{\tau}\rVert}-\epsilon)$ (9)

## IV Numerical Analysis of the mean field equation

### IV.1 Examples of rotational attractors

As in the case of the full multi-agent system, we see the existence of periodic behavior for $\tau$ sufficiently below an instability threshold, as shown in the time series of Fig. 5. As we increase $\tau$, we expect the periodic orbit to lose stability, resulting in a new attractor. In particular, one notices the emergence of a new frequency in addition to the existing dominant one, as shown in Fig. 6. The additional frequency usually implies a bifurcation to dynamics on a torus, or on a higher dimensional torus.

Figure 5: Periodic motion of the mean field Eq. 9 for $\epsilon=0.01,a=0.64,\tau=1.6$. (a) Time series of the x-component of the mean field. (b) Power spectra of the time series.

We now investigate this transition by monitoring the Floquet exponents corresponding to the periodic orbit. For a general differential delay equation given by $\dot{\bm{x}}(t)=\bm{F}(\bm{x}(t),\bm{x}(t-\tau))$, if $\bm{\phi}(t)=\bm{\phi}(t+T)$ for all $t\geq 0$, then stability is determined by examining the linearized equation along $\bm{\phi}(t)$:

$\displaystyle\dot{\bm{X}}(t)$ $\displaystyle=\frac{\partial\bm{F}}{\partial\bm{x}(t)}(\bm{\phi}(t),\bm{\phi}(t-\tau))\bm{X}(t)$ $\displaystyle+\frac{\partial\bm{F}}{\partial\bm{x}(t-\tau)}(\bm{\phi}(t),\bm{\phi}(t-\tau))\bm{X}(t-\tau).$ (10)

The stability of the periodic solution is determined by the spectrum of the time integration operator $U(T,0)$, which integrates Eq. 10 around $\phi(t)$ from time $t=0$ to $t=T$. This operator is called the monodromy operator, and its (infinitely many) eigenvalues, which are independent of the initial state, are called the Floquet multipliers Hale (1977).
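One simple way to approximate these multipliers numerically, sketched below, is to discretize the delayed state on a mesh over $[-\tau,0]$ and compose explicit Euler steps of the linearized equation (10) over one period, giving a finite-dimensional approximation of $U(T,0)$. This is a generic semi-discretization-style sketch under our own naming, not necessarily the scheme used to produce the results below; the Jacobians $A(t)$ and $B(t)$ along the periodic orbit are assumed to be supplied by the user.

```python
import numpy as np

def floquet_multipliers(A, B, T, tau, n, m):
    """Approximate Floquet multipliers of X'(t) = A(t) X(t) + B(t) X(t - tau),
    the linearization in Eq. 10, where A(t), B(t) return n-by-n Jacobians
    evaluated along the periodic orbit. The history on [-tau, 0] is sampled
    on m mesh intervals, and U(T, 0) is built by composing Euler steps."""
    h = tau / m                        # step size tied to the delay mesh
    steps = int(round(T / h))          # Euler steps per period (T/h ~ integer)
    dim = n * (m + 1)                  # state: X(t), X(t-h), ..., X(t-tau)
    U = np.eye(dim)
    for j in range(steps):
        t = j * h
        S = np.zeros((dim, dim))
        S[:n, :n] = np.eye(n) + h * A(t)   # X(t+h) = X(t) + h A X(t) + ...
        S[:n, -n:] = h * B(t)              # ... + h B X(t - tau)
        S[n:, :-n] = np.eye(n * m)         # shift the stored history by one
        U = S @ U
    mult = np.linalg.eigvals(U)
    return mult[np.argsort(-np.abs(mult))]  # sorted by modulus, largest first
```

For an autonomous system, the trivial multiplier at unity, discussed next, provides a convenient accuracy check on the mesh size $m$.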
For autonomous systems, there always exists a trivial Floquet multiplier at 1, corresponding to a perturbation along the periodic solution Hartung _et al._ (2006); Hale and Lunel (1993). The periodic solution is stable provided all multipliers (except the trivial one) have modulus smaller than 1; it is unstable if there exists a multiplier with modulus larger than 1. Bifurcations occur whenever Floquet multipliers move into or out of the unit circle. Generically, three types of bifurcations occur in a one parameter continuation of periodic solutions: a turning point, a period doubling, and a torus bifurcation, where a branch of quasi-periodic solutions originates and where a complex pair of multipliers crosses the unit circle Hale (1977).

Figure 6: Quasi-periodic motion of the mean field Eq. 9. (a) Time series of the x-component of the mean field. The solid (red) line denotes the period length of the dominant spectral peak; the dashed line denotes the period length of the secondary peak. (b) Power spectra of the time series.

We have tracked a set of stable periodic orbits for various radii $\epsilon$, and located the change in stability by computing the Floquet multipliers. The results plotted in Fig. 7 show that for a range of radii $\epsilon$, there exists a bifurcation to a torus at some delay. Notice that as $\epsilon$ increases, the size of the orbits increases, which qualitatively agrees with our full agent based simulations.

Figure 7: Bifurcation plot showing the norm of the periodic orbits as a function of delay $\tau$. Parameter $a=0.68$. Red (blue) markers denote unstable (stable) orbits. Cyan symbols denote the change in stability, where a complex pair of multipliers crosses the unit circle.

Since there exists a range of delays which destabilize periodic swarm dynamics for each $\epsilon$, we summarize the onset of torus bifurcations by plotting the locus of points at which stability changes as a function of coupling amplitude and delay. The results are plotted in Fig. 8. Figure 8 is revealing in that it shows a functional relationship of the bifurcation onset that is similar over a range of $\epsilon$. For larger values of $\epsilon$, it is clear that lower values of delay and coupling are required to generate bifurcations. This holds true over two orders of magnitude in $\epsilon$. For a fixed value of $\epsilon$, we also see a monotonic relationship between delay and coupling strength, so that smaller delays suffice to destabilize periodic motion at larger coupling strengths.

Figure 8: Plotted is the locus of points at which torus bifurcations emerge as a function of coupling amplitude $a$ and delay $\tau$, for various range radii $\epsilon$, for the mean field Eq. 9.

## V Conclusions

We considered a new model of a swarm with a delay coupled communication network, where the delay is range dependent. That is, given a range radius, the delay is active if two agents are separated by more than the radius, and zero otherwise. The implication is that small delays do not matter if the agents are close to each other. For general swarms without delay, the usual states consist of flocking (translation) or a ring/rotational state (milling), with agents spread in phase. With the addition of a fixed delay, a rotational state bifurcates in which all agents are in phase and rotate together Hindes _et al._ (2020).
Range dependence introduces a new bifurcating rotational state, observed as a mixed state combining the dynamics of both the ring and rotating states. The radius parameter $\epsilon$ was used to quantify the bifurcation of the rotational mixed state. For small $\epsilon$, the dynamics of the full swarm shows clustered counter-rotating behavior that is periodic. This agrees with the mean field description for small radius values as well. As the radius increases, the mixed periodic state generates new frequencies in the full model, which are manifested as torus bifurcations in the mean field. The mean field analysis was done by tracking Floquet multipliers that cross the unit circle as complex pairs. Frequency analysis explicitly shows the additional frequencies in the mean field. Finally, we tracked the locus of coupling amplitudes and delays for various values of $\epsilon$, locating the parameters at which torus bifurcations occur. The results reveal that as $\epsilon$ increases, torus bifurcations onset at lower values of coupling amplitude and delay. The implication is that behavior more complicated than periodic motion has a greater probability of being observed in both theory and experiment if range dependence of the delay is included.

## VI Supplementary Material

The videos show the attractor of a swarm consisting of N=300 agents. Fixed parameters for the three videos are $a=2.0,\tau=1.75$. The baseline parameters for zero radius (delay on at all separations) are $\epsilon=0.0,c_{r}=0.05$, and $l_{r}=0.05$, shown in Video1_eps_0p0.mp4. The parameters corresponding to Fig. 2 are $\epsilon=0.01,c_{r}=0.01$, and $l_{r}=0.05$, shown in Video2_eps_0p01.mp4. The video shows that the attractor persists when repulsive forces are local and weak. Similar behavior is observed when N=150, which is used in Fig. 1 without repulsion, i.e., $c_{r}=0$. The parameters corresponding to Fig. 3 are $\epsilon=0.25,c_{r}=0.05$, and $l_{r}=0.05$, shown in Video3_eps_0p25.mp4.

###### Acknowledgements.

IBS, JH, IT and KK gratefully acknowledge ONR for their support under N0001412WX20083, N0001420WX00034, and the NRL Base Research Program N0001420WX00410. VE is supported under the NRL Karles Fellowship Program, JON 55-N2Q4-09. SK was supported through the GMU Provost PhD award as part of the Industrial Immersion Program. MAH is supported by ONR No. N00014-18-1-2580 and ARL DCIST CRA W911NF-17-2-0181. Data sharing is not applicable to this article as no new data were created in this study.

## References

* Budrene and Berg (1995) E. O. Budrene and H. C. Berg, “Dynamics of formation of symmetrical patterns by chemotactic bacteria,” Nature 376, 49–53 (1995).
* Polezhaev _et al._ (2006) A. A. Polezhaev, R. A. Pashkov, A. I. Lobanov, and I. B. Petrov, “Spatial patterns formed by chemotactic bacteria Escherichia coli,” The International Journal of Developmental Biology 50, 309–314 (2006).
* Lee _et al._ (2013) R. M. Lee, D. H. Kelley, K. N. Nordstrom, N. T. Ouellette, and W. Losert, “Quantifying stretching and rearrangement in epithelial sheet migration,” New Journal of Physics 15 (2013), 10.1088/1367-2630/15/2/025036.
* Tunstrøm _et al._ (2013) K. Tunstrøm, Y. Katz, C. C. Ioannou, C. Huepe, M. J. Lutz, and I. D. Couzin, “Collective states, multistability and transitional behavior in schooling fish,” PLoS computational biology 9, e1002915 (2013).
* Helbing and Molnar (1995) D. Helbing and P.
2024-09-04T02:54:58.098869
2020-03-07T17:28:54
2003.03614
{ "authors": "Batuhan Kaplan, \\.Ibrahim Kahraman, Ali G\\\"or\\c{c}in, Hakan Ali\n \\c{C}{\\i}rpan, Ali R{\\i}za Ekti", "full_text_license": null, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "arxiv-papers-0000.json.gz:26098", "submitter": "Batuhan Kaplan", "url": "https://arxiv.org/abs/2003.03614" }
arxiv-papers
# Measurement based FHSS–type Drone Controller Detection at 2.4GHz: An STFT Approach ††thanks: This paper has been accepted for presentation at the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). Batuhan Kaplan12, İbrahim Kahraman13, Ali Görçin14, Hakan Ali Çırpan2, Ali Rıza Ekti15 1 Informatics and Information Security Research Center (BİLGEM), TÜBİTAK, Kocaeli, Turkey 2 Department of Electronics and Communication Engineering, İstanbul Technical University, İstanbul, Turkey 3 Department of Electrical and Electronics Engineering, Boğaziçi University, İstanbul, Turkey 4 Faculty of Electronics and Communications Engineering, Yıldız Technical University, İstanbul, Turkey 5 Department of Electrical and Electronics Engineering, Balıkesir University, Balıkesir, Turkey Emails: batuhan.kaplan<EMAIL_ADDRESS> [email protected],hakan.cirpan<EMAIL_ADDRESS>

###### Abstract

The applications of unmanned aerial vehicles (UAVs) are increasing rapidly in everyday life; thus, detecting a UAV and/or its pilot is a crucial task. Many UAVs adopt frequency hopping spread spectrum (FHSS) technology to communicate efficiently and securely with their radio controllers (RCs), where the signal follows a hopping pattern to prevent harmful interference. In order to realistically distinguish frequency hopping (FH) RC signals, one should consider the real–world radio propagation environment, since many UAVs communicate with their RCs over long distances, where the signal experiences both slow and fast fading. Therefore, in this study, unlike prior work, we consider a system that works under real–world conditions by capturing over–the–air signals in hilly terrain suburban environments in the presence of foliage. We adopt the short–time Fourier transform (STFT) approach to capture the hopping sequence of each signal. Furthermore, the time guards associated with each hopping sequence are calculated using the autocorrelation function (ACF) of the STFT, which allows each UAV RC signal to be differentiated accurately. In order to validate the performance of the proposed method, the normalized mean square error (MSE) with respect to different signal–to–noise ratio (SNR), window size, and Tx–Rx separation values is given.

###### Index Terms:

short–time Fourier transform, frequency hopping, UAV remote controller detection

## I Introduction

Unmanned aerial vehicles have become a prevalent part of daily life with their applications in many fields such as mapping and surveying, transportation, surveillance, law enforcement, aerial imaging, and agriculture [1]. Besides the aforementioned uses of UAVs, one should keep in mind that UAVs can also be used dangerously to create unwanted incidents, especially when they are diverted into sensitive airspace near airports, where their presence may cause accidents that can result in fatal crashes [2]. Moreover, UAVs can be utilized for collecting information about people, organisations, and companies without their consent. Therefore, identification of UAV systems and their communication is of great importance, especially for preventing unwanted situations. In this context, it is known that most of the communication between UAVs and their wireless RCs utilizes the spread spectrum technology of FHSS in the industrial, scientific, and medical (ISM) band at 2.4GHz [3]. Therefore, a method to detect and classify these kinds of signals in this band would lead to the identification of the communication between the UAV and the controller.
In the literature, the most well-known FHSS signal detection methods adopt time–frequency analysis and wavelet analysis [4, 5, 6, 7]. It has been shown that the Wigner–Ville distribution, the wavelet transform, and array signal processing methods can be used to identify FHSS signals, but at the cost of heavy computational complexity, which makes them difficult to implement in real time. Since an STFT-based detection method does not need any prior information about the received signal, in contrast to these computationally complex algorithms, the STFT becomes the designated optimum detector in the absence of information regarding the received signal [8]. However, one should note that in order to achieve good results with the STFT, the SNR needs to be sufficiently high. Thus, in this study, we propose an STFT-based blind signal detection method for FHSS UAV RC signals using over–the–air signal measurements, which account for the signal imperfections present in real–world conditions (e.g., fading, multipath, and so on) by considering hilly terrain suburban environments in the presence of foliage. Furthermore, the literature generally utilizes simulated data instead of over–the–air signals, and these simulations assume that there are no time guards between hops. This assumption makes differentiation of FH signals easier; however, many hopping signals use time guards, and these time guards differ between signal sources. The proposed approach is also able to detect the time guards between hopping sequences using the ACF [9] of the STFT, which allows each drone RC signal to be differentiated accurately. The paper is organized as follows. Section II details the system model. Section III presents the proposed method. The measurement setup is explained in Section IV. In Section V, measurement results are given and discussed. Finally, Section VI concludes the paper.

## II Signal Model

Drone controller signals are typically FH signals with temporal statistical characteristics, and they can be written as [6],

$x(t)=s(t)\sum_{m=0}^{M-1}e^{j2\pi f_{c_{m}}t_{m}+\theta_{m}}$ (1)

where $\theta_{m}$ and $f_{c_{m}}$ are the carrier phase and carrier frequency of the $m^{th}$ hop, respectively. Also, $s(t)$ denotes the complex baseband equivalent of the information bearer for $t\in[0,T]$, $M$ stands for the total number of hops of a signal, and $t_{m}$ is the duration of the $m^{th}$ hop, which may or may not be uniformly distributed. The received controller signal, which is the complex baseband equivalent of the received passband signal, can be expressed as,

$r(t)=\sum_{n=0}^{N-1}y_{n}(t)+n(t)+I(t)$ (2)

where $y_{n}(t)$ is the $n^{th}$ FH signal source, $n(t)$ denotes the complex additive white Gaussian noise (AWGN) in which the I and Q components are i.i.d. with $\mathcal{N}(0,\,\sigma^{2})$, and $I(t)$ stands for the interference signal.

### II-A Short–Time Fourier Transform

The STFT is utilized to analyze the FH signals, as it provides a way to observe the frequency content of this type of non–stationary signal over time. The mathematical expression of the STFT of a time-domain signal $z(t)$ can be written as [8],

$STFT\Big{\\{}z(t)\Big{\\}}=\int_{-\infty}^{\infty}z(\tau)w(\tau-t)e^{-j2\pi f\tau}d\tau$ (3)

where $w(t)$ is the window function.
The STFT matrix $S=[s_{1}[f],s_{2}[f],...,s_{K}[f]]$ is such that its $i^{th}$ element is a column vector determined by the discrete Fourier transform of $r[n]w[n-iR]$,

$s_{i}[f]=\sum_{n=0}^{N-1}r[n]w[n-iR]e^{-j2\pi fn}$ (4)

where $r[n]$ is the sampled version of $r(t)$ (sampled so as to satisfy the anti–aliasing requirement) and $R$ denotes the shift length. One should keep in mind that adjusting the time and frequency resolution is a crucial point in STFT analysis [10] due to the trade-off between them. The number of time points can be calculated as [11]

$m=\left\lfloor\frac{N_{x}-L}{M-L}\right\rfloor$ (5)

where $N_{x}$ is the length of the signal, $L$ denotes the number of overlapping samples in the Fourier transform, $M$ represents the window size, and $\lfloor\cdot\rfloor$ stands for the floor operator.

## III Proposed Method

The received controller signal, $r(t)$, is analyzed by the STFT method. As depicted in Fig. 1, the flow graph briefly explains how the system works. After the signal is received in the first stage, the optimal window length is decided in the second stage of the flowchart to obtain the optimum resolution in (5), based on maximizing the number of elements of the matrix $S$. In the same step, the STFT is calculated in dBm units according to the power spectral density (PSD).

Figure 1: The flowchart of the proposed detection method.

A binarization process is conducted in the following steps of the flowchart; the STFT matrix is converted to a binarized matrix, $Z(k,l)$, with the utilization of a threshold $\mu$. (In the flowchart, PSD refers to each point in the STFT matrix.) Based on the dynamically calculated threshold value, the presence or absence of a signal is decided. Therefore, the problem can be stated as the identification of the presence of an unknown FH signal. When the signal is present, $Z(k,l)$ is set to 1, and the binarized matrix $Z(k,l)$ is given as,

$Z(k,l)=\begin{cases}1,&S(k,l)\geq\mu\\\ 0,&S(k,l)<\mu\end{cases}$ (6)

To determine the threshold value, the elements of the STFT matrix are concatenated and sorted so that the power levels of each point of the STFT matrix are listed in ascending order. Then, assuming that the majority of the received samples consist of noise, we take the mean value of the top $20\%$ of the sorted values of the STFT matrix to determine a lower bound for the computation of the threshold. Thus, the threshold is calculated as

$\mu=\frac{S_{max}+\sigma_{20\%}}{2}$ (7)

where $S_{max}$ is the maximum value of the STFT matrix and $\sigma_{20\%}$ denotes the mean of the top $20\%$ of samples. Fig. 2 shows $S_{max}$, $\sigma_{20\%}$, and the threshold value. Please note that even in very low SNR regimes, the measurement results indicate the feasibility of this threshold selection process.

Figure 2: Estimated threshold representation with $S_{max}$ and $\sigma_{20\%}$ values over a real recorded signal.

Due to the wireless impairments on the received signal, some portions of the $Z$ matrix might be missing, as shown in Fig. 3(a). In order to represent the signal in a more plausible way, we adopt the widely used morphological dilation and erosion processes from the domain of image processing [12, 13] to recover the received signal properly. A signal with impairments and the output of the dilation and erosion processes are shown in Fig. 3.

(a) Recorded FH drone controller signal with channel impairments. (b) FH drone controller signal after correction. Figure 3: Result of the dilation process.
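To make the front end of the flowchart concrete, the following is a minimal sketch (Python with scipy; all signal parameters here are illustrative, not the measurement settings of Section IV) that generates a synthetic FH baseband signal, computes the STFT matrix of (4), applies the threshold of (7), binarizes per (6), and applies the dilation and erosion step:

```python
# Sketch of the detection front end: synthetic FH signal -> STFT ->
# dynamic threshold -> binarization -> morphological cleanup.
# Sample rate, hop frequencies, and noise level are illustrative only.
import numpy as np
from scipy.signal import stft
from scipy.ndimage import binary_dilation, binary_erosion

fs = 1e6                                 # assumed sample rate
t = np.arange(int(5e-3 * fs)) / fs
hops = [100e3, -200e3, 50e3]             # baseband hop frequencies (assumed)
dwell = len(t) // len(hops)
x = np.concatenate([np.exp(2j * np.pi * f * t[:dwell]) for f in hops])
r = x + 0.1 * (np.random.randn(x.size) + 1j * np.random.randn(x.size))

# STFT matrix S in dB, with window size M and overlap L as in Eq. (5)
M, L = 256, 128
f, tt, Zxx = stft(r, fs=fs, nperseg=M, noverlap=L, return_onesided=False)
S = 10 * np.log10(np.abs(Zxx) ** 2 + 1e-12)

# Threshold of Eq. (7): mean of the top 20% sorted values and the maximum
top20 = np.sort(S, axis=None)[-int(0.2 * S.size):]
mu = (S.max() + top20.mean()) / 2.0

Z = S >= mu                              # binarization of Eq. (6)
Z = binary_erosion(binary_dilation(Z))   # fill gaps, remove speckle
```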
After the recovery process, it becomes possible to accurately extract signal parameters such as start time, stop time, center frequency, and the difference between start time and stop time (dwell time). The parameter estimation algorithm is given in Algorithm 1. This process is applied to the $Z$ matrix, and the parameters of each signal in the spectrum are extracted. The algorithm simply detects the signals in $Z$ and extracts the duration of each independent transmission, which can be part of the same or different signal sources.

Input: $Z(k,l)$
Output: start time, stop time, center frequency, dwell time, bandwidth

    for $i\leftarrow 1$ to row do
        for $j\leftarrow 1$ to column do
            $count\leftarrow 0$; $bandwidth\leftarrow 0$
            if $Z(i,j)==1$ then
                start time $\leftarrow j$; $f_{start}\leftarrow i$
                while $Z(i,j)==1$ do (expand in time)
                    $count\leftarrow count+1$; $j\leftarrow j+1$
                while $Z(i,j-1)==1$ do (expand in frequency)
                    $bandwidth\leftarrow bandwidth+1$; $i\leftarrow i+1$
                stop time $\leftarrow j-1$; $f_{stop}\leftarrow i-1$
                center frequency $\leftarrow\frac{f_{start}+f_{stop}}{2}$
                dwell time $\leftarrow count$
                assign $0$ to the rectangle found above

Algorithm 1: Parameter Extraction Algorithm

In the last step of the flowchart, each hop is checked to decide whether it belongs to an FH signal or not. Due to the signal structure, which will be explained in Section IV, time guards are inserted between hops, and this makes classification complicated. Time guards may lead to mismatches between hops because there can be other signals between them, and these mismatches should be corrected. In this paper, the ACF is utilized for this purpose, since the highest correlation peaks occur when the signal matches itself perfectly; this is the case for drone controller FH signals because their time and frequency characteristics are fixed. The ACF is applied to the $Z$ matrix, and the result of the ACF gives the highest peak at $T_{1}$, which is the fundamental period (6.8ms) for this particular drone controller FH signal. The rest of the peaks give information regarding hops which are generated from the same source. Thus, we define a set T $=\\{T_{1},T_{2},...T_{n}\\}$ which represents the locations of all the peak values, where $T_{1}=\sup\hskip 2.84526pt\text{T}$. In other words, we put all the local extrema in the interval $(0,T_{1})$ into the set T, as shown in Fig. 4, meaning that we obtain all the required T values (guard and dwell times) needed to track and classify hops. Finally, the signal classification block in the flowchart is executed by the following procedure: any two hops, $hop\\_a$ and $hop\\_b$, are emitted from the same signal source if there exists $T_{i}\in\text{T}$ such that

$\text{start time($hop\\_a$)}-\text{start time($hop\\_b$)}\equiv T_{i}\hskip 2.84526pt(\text{mod }T_{1})$ (8)

where the start times are the parameters estimated by Algorithm 1. Thus, hop separation is also possible with the proposed method. Please note that even though the method is applied to a particular set of FH signals, it can be utilized to detect any kind of FH signal with an observable time- and frequency-domain hopping pattern.

Figure 4: Time difference estimation between hops utilizing the autocorrelation function.
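A sketch of this classification step follows (Python). As a simplifying assumption, the ACF is applied here to the time occupancy of $Z$ (its column sums) rather than to the full matrix, and the matching tolerance in Eq. (8) is illustrative:

```python
# Sketch of Fig. 4 and Eq. (8): estimate T1 from the ACF peaks, collect the
# peak lags into the set T, and group hops by start-time difference mod T1.
import numpy as np
from scipy.signal import find_peaks

def acf_peak_set(Z, frame_dt):
    """Return T1 and the set T of ACF peak lags (seconds).
    frame_dt is the time between STFT columns, (M - L) / fs."""
    occ = Z.sum(axis=0).astype(float)   # per-frame occupancy of Z
    occ -= occ.mean()
    acf = np.correlate(occ, occ, mode="full")[occ.size - 1:]
    peaks, _ = find_peaks(acf)          # lag-0 maximum is excluded
    lags = peaks * frame_dt
    T1 = lags[np.argmax(acf[peaks])]    # dominant (fundamental) period
    return T1, lags[lags <= T1]

def same_source(start_a, start_b, T_set, T1, tol=1e-4):
    """Eq. (8): hops a and b share a source if start_a - start_b is
    congruent to some T_i (mod T1), within a tolerance (assumed)."""
    diff = (start_a - start_b) % T1
    d = np.abs(diff - (np.asarray(T_set) % T1))
    return bool(np.any(np.minimum(d, T1 - d) < tol))
```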
## IV Measurement Setup

The experimental setup for FH signal parameter estimation is realized at the Scientific and Technological Research Council of Turkey (TÜBİTAK) Informatics and Information Security Research Center (BİLGEM). Data collection is conducted at distances from 5m to 135m, in 10m increments. Fig. 5 shows the fixed location of our receiver and the locations of the drone controller (transmitter).

Figure 5: The aerial map of data collection locations.

### IV-A Hardware Setup

The testbed used in the data acquisition procedure consists of a signal source (drone controller) and a spectrum analyzer to record the signals.

Figure 6: The hopping pattern of the drone controller signal: Futaba T8J RC as signal source.

A Futaba T8J RC is used during the experiment as the FH signal source. The Futaba T8J RC operates in the 2.4 gigahertz (GHz) ISM band over the lower half of the band (up to 2.45 GHz). When analyzing the signal, it is discovered that the RC transmitter behaves differently from standard FH communication systems (e.g., Bluetooth). An illustration of the periodic hopping sequence for the Futaba T8J RC is shown in Fig. 6. In the figure, $\tau_{dwell}$ represents the dwell time; the hop spacings satisfy

$\Delta t_{1}<\Delta t_{2}<\Delta t_{3}$ (9)

and the fundamental period is

$3\tau_{dwell}+\Delta t_{1}+\Delta t_{2}+\Delta t_{3}=T_{1}=0.0068\,sec$ (10)

TABLE I: FH signal characteristics

| Parameter | Value |
| --- | --- |
| Dwell Time | 1.44ms |
| Center Frequency Set | 2.4GHz–2.45GHz interval |
| Hopping Sequence | f1 f1 f2 f3 f3 f4 … |

The parameters of the Futaba T8J RC FH signal source are listed in Table I. On the receiver side, a Rohde&Schwarz FSW 26 signal and spectrum analyzer (SSA) is utilized to record the FH signals. The SSA supports the frequency range from 2 hertz (Hz) to 26.5GHz, and the device provides real–time spectral analysis up to 160 megahertz (MHz) of bandwidth. The signals are recorded over the 2.4GHz ISM spectrum band with an omnidirectional antenna.

### IV-B Experimental Procedures

In the real–world data collection, it is assumed that the transmitter operates in the 2.4GHz–2.48GHz ISM spectrum band. The center frequency of the SSA is set to 2.44GHz, and the bandwidth of interest is adjusted to 80MHz for the purpose of full coverage. Also, the SSA is connected to an external computer via an Ethernet cable for convenient data storage. The sampling rate depends on the real–time analysis bandwidth and is selected as 80MS/s. Each measurement is captured as I/Q samples; 20M I/Q samples are collected over 250ms. However, considering processing limits, the collected data is divided into segments of 4M samples each, and every segment is considered when calculating the performance of the proposed method. Finally, the captured I/Q data is fed into a computer running MATLAB R2015b software.

## V Measurement Results

Over–the–air data collection is realized, and the performance of the time–frequency analysis method is evaluated. Please note that all the captured data includes real–world propagation effects such as multipath fading, interference, and carrier frequency offset (CFO). Fig. 7 shows how the over–the–air recorded FH signal behaves during the observation time. Please also note that some of the estimated parameters of the real signal, measured at a 25m distance with 5dB SNR, can be found in Table II.

Figure 7: Recorded 2.4GHz spectrum comprised of drone controller FH signals.

In order to validate the performance of the system, the normalized MSE was considered.
The normalized MSE can be calculated as [14]

$\text{NMSE}=\frac{1}{N}\sum_{i=1}^{N}\Big{(}\frac{\hat{t}_{i}-t}{t}\Big{)}^{2}$ (11)

where $\hat{t}_{i}$ is the estimated hopping time of the $i^{th}$ hop and $t$ denotes the true value of the hopping time. $N$ represents the total number of hops that must be found. When calculating the error, $N$ is taken to be $22$ within the observation time, and zero values are assigned to the extracted parameters of any undetected hops.

TABLE II: Estimated parameters of FH signal

| Start Time (ms) | Stop Time (ms) | Dwell Time (ms) | Center Frequency (GHz) |
| --- | --- | --- | --- |
| 1.7930 | 3.2403 | 1.4472 | 2.4271 |
| 5.1742 | 6.6086 | 1.4344 | 2.4414 |
| 6.7623 | 8.1967 | 1.4344 | 2.4414 |
| 8.6066 | 10.0410 | 1.4344 | 2.4211 |
| 11.9749 | 13.4221 | 1.4472 | 2.4039 |

Figure 8: Estimated hopping time errors vs. distance in terms of normalized MSE.

The error curve of the estimated hopping time from measured data is plotted in Fig. 8. Here, error values are determined for three different SNR values over a range of distances. It is clearly seen that as the distance increases, the estimation error increases. Also, poor SNR conditions adversely affect performance. Moreover, even under the same SNR condition, a hop of the FH signal may be missed in the signal received from a farther point. As discussed before, another important issue is the selection of the window size ($M$). In this regard, the effect of the window size is also studied, and it can be seen that the window size directly affects the accuracy of the estimation. As the window size decreases, we obtain higher resolution in time but lower resolution in frequency; increasing the window size has the opposite effect. Because both the frequency and time information are main features for classifying the hopping signals, there should be some optimum window size providing the best performance. It is empirically shown in Fig. 9 that the optimum window size for the STFT is $M=2048$. Since previous works utilize the STFT only to decide whether a signal is hopping between different frequencies, apply the ACF only in the time domain to signals with no guard times, and rely on simulations instead of measurements, a direct performance comparison of the proposed method with those simulations is avoided in this work.

Figure 9: Estimated hopping time errors vs. distance for different window sizes and for SNR $=$ 0 dB.

## VI Concluding Remarks and Future Directions

In this work, an FH drone controller signal detection algorithm is proposed, and the performance of the algorithm is evaluated using measurements of a UAV RC in a real–world wireless environment. The algorithm can successfully estimate the time guards of each hopping sequence by using the ACF of the STFT, which also leads to accurate identification of whether an FH signal is present or not. The performance of the proposed method is also quantified by the normalized MSE for over–the–air signals recorded at different distances and SNRs. Measurement results show that reasonable input parameters improve the performance of frequency hopping signal parameter estimation. In future studies, we will consider adopting the recently emerging deep learning algorithms to distinguish multiple standards-based wireless hopping signals.

## VII Acknowledgement

This publication was made possible by NPRP12S-0225-190152 from the Qatar National Research Fund (a member of The Qatar Foundation). The statements made herein are solely the responsibility of the author[s].
## References

* [1] DHL, “DHL launches first commercial drone ’parcelcopter’ delivery service,” Available: https://www.theguardian.com/technology/2014/sep/25/german-dhl-launches-first-commercial-drone-delivery-service, Accessed: Oct. 29, 2019.
* [2] FAA, “Federal Aviation Administration UAS Sightings Report,” Available: https://www.faa.gov/uas/resources/public_records/uas_sightings_report/, Accessed: Oct. 29, 2019.
* [3] P. Popovski, H. Yomo, and R. Prasad, “Strategies for adaptive frequency hopping in the unlicensed bands,” _IEEE Wireless Communications_ , vol. 13, no. 6, pp. 60–67, Dec 2006.
* [4] A. Kanaa and A. Z. Sha’ameri, “A robust parameter estimation of FHSS signals using time–frequency analysis in a non-cooperative environment,” _Physical Communication_ , vol. 26, pp. 9–20, 2018.
* [5] B. Boashash, _Time-frequency signal analysis and processing: a comprehensive reference_. Academic Press, 2015.
* [6] S. Wei, M. Zhang, G. Wang, X. Sun, L. Zhang, and D. Chen, “Robust multi-frame joint frequency hopping radar waveform parameters estimation under low signal-noise-ratio,” _IEEE Access_ , 2019.
* [7] X. Zhang, X. Wang, and X.-m. Du, “Blind parameter estimation of frequency-hopping signals based on wavelet transform,” _Journal of Circuits and Systems_ , vol. 4, 2009.
* [8] X. Ouyang and M. G. Amin, “Short-time Fourier transform receiver for nonstationary interference excision in direct sequence spread spectrum communications,” _IEEE Trans. Signal Process._ , vol. 49, no. 4, pp. 851–863, 2001.
* [9] C.-D. Chung and A. Polydoros, “Parameter estimation of random FH signals using autocorrelation techniques,” _IEEE Trans. Commun._ , vol. 43, no. 2/3/4, pp. 1097–1106, 1995.
* [10] N.-K. Kim and S.-J. Oh, “Comparison of methods for parameter estimation of frequency hopping signals,” in _International Conference on Information and Communication Technology Convergence (ICTC)_ , 2017, pp. 567–569.
* [11] J. Smith, _Spectral Audio Signal Processing_. W3K, 2011, online book. [Online]. Available: https://ccrma.stanford.edu/~jos/sasp/
* [12] R. M. Haralick, S. R. Sternberg, and X. Zhuang, “Image analysis using mathematical morphology,” _IEEE Trans. Pattern Anal. Mach. Intell._ , no. 4, pp. 532–550, 1987.
* [13] L. Luo _et al._ , “Detection of an unknown frequency hopping signal based on image features,” in _2nd International Congress on Image and Signal Processing_ , 2009, pp. 1–4.
* [14] Y. Ma and Y. Yan, “Blind detection and parameter estimation of single frequency-hopping signal in complex electromagnetic environment,” in _2016 Sixth International Conference on Instrumentation & Measurement, Computer, Communication and Control (IMCCC)_, 2016, pp. 370–374.
2024-09-04T02:54:58.109592
2020-03-07T19:07:31
2003.03638
{ "authors": "James D. Brunner and Nicholas Chia", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26099", "submitter": "James Brunner", "url": "https://arxiv.org/abs/2003.03638" }
arxiv-papers
Minimizing the number of optimizations for efficient community dynamic flux balance analysis. James D. Brunner1†*, Nicholas Chia1† 1Department of Surgery, Center for Individualized Medicine Microbiome Program, Mayo Clinic, Rochester, MN, USA †Current Address: Mayo Clinic, 200 First St. SW, Rochester, MN, USA *<EMAIL_ADDRESS>

## Abstract

Dynamic flux balance analysis uses a quasi-steady state assumption to calculate an organism’s metabolic activity at each time-step of a dynamic simulation, using the well-known technique of flux balance analysis. For microbial communities, this calculation is especially costly and involves solving a linear constrained optimization problem for each member of the community at each time step. However, this is unnecessary and inefficient, as prior solutions can be used to inform future time steps. Here, we show that a basis for the space of internal fluxes can be chosen for each microbe in a community, and that this basis can be used to simulate forward by solving a relatively inexpensive system of linear equations at most time steps. We can use this solution as long as the resulting metabolic activity remains within the optimization problem’s constraints (i.e., the solution to the linear system of equations remains feasible for the linear program). As the solution becomes infeasible, it first becomes a feasible but degenerate solution to the optimization problem, and we can solve a different but related optimization problem to choose an appropriate basis to continue forward simulation. We demonstrate the efficiency and robustness of our method by comparing with currently used methods on a four-species community, and show that our method requires at least $91\%$ fewer optimizations. For reproducibility, we prototyped the method using Python. Source code is available at `https://github.com/jdbrunner/surfin_fba`.

## Author summary.

The standard methods in the field for dynamic flux balance analysis (FBA) carry a prohibitively high computational cost because they require solving a linear optimization problem at each time-step. We have developed a novel method for producing solutions to this dynamical system which greatly reduces the number of optimization problems that must be solved. We prove mathematically that we can solve the optimization problem once and simulate the system forward as an ordinary differential equation (ODE) for some time interval, and solutions to this ODE provide solutions to the optimization problem. Eventually, the system reaches an easily checkable condition which implies that another optimization problem must be solved. We compare our method against typically used methods for dynamic FBA to validate that it provides equivalent solutions while requiring fewer linear-program solutions.

## Introduction.

### Microbial communities and human health.

The makeup of microbial communities is often complex, dynamic, and hard to predict. However, microbial community structure has a profound effect on human health and disease [1, 2, 3, 4, 5, 6, 7]. These two facts have led to significant interest in mathematical models which can predict relative abundances among microbes in a community. Various dynamical models have been proposed to explain and predict microbial community population dynamics [8, 9, 10, 11, 12].
Among these are models which propose that interactions between species are mediated by the metabolites that each species produces and consumes [13, 14], and there is significant evidence that these models perform better than models which depend on direct interaction between species [15, 16]. Recently, advances in genetic sequencing have allowed the creation of genome-scale models (GEMs) that reflect the internal network of cellular metabolism and can therefore be used to predict metabolite use and production [17, 18, 19]. This technique can be extended to microbial community modeling by combining GEMs of different species. There has been significant interest in using GEMs to predict relative populations of stable microbial communities [20, 21, 22, 23, 24, 25, 26]. Community metabolic modeling can not only predict relative populations, but also predict and explain the community metabolite yield, which can have a profound effect on health [4]. Furthermore, model repositories such as the online bacterial bioinformatics resource _PATRIC_ [27] or the _BiGG model database_ [28] make it possible to build community models using information from individual species investigations.

GEMs can be used to predict microbial growth rates as well as metabolite consumption and production rates using a process called _flux balance analysis_ (FBA). Because these predictions appear in the form of rates of change, they can be used to define a metabolite mediated dynamical model, simply by taking as a vector field the rates of change predicted by FBA. We can therefore combine the techniques of metabolite mediated dynamic modeling and community metabolic modeling to produce dynamic predictions of microbial community population size and metabolite yield. This strategy is called _dynamic FBA_ [29, 30, 31], and has recently been used to model microbial communities [32, 33, 34].

Dynamic FBA, when implemented naïvely, requires a linear optimization problem to be repeatedly solved, and carries a high computational cost for even small communities. Furthermore, _in silico_ experiments may need to be repeated many times over various environmental conditions or using various parameter choices in order to make robust conclusions or to accurately fit model parameters. As a result, implementations of dynamic FBA which depend on optimization at every time-step carry a prohibitively high computational cost when used to simulate larger microbial communities. The implementation of dynamic FBA in the popular COBRA toolbox software package [17] is done in this way, and essentially all of the more efficient available tools for simulating dynamic FBA fundamentally use an ODE solver approach with optimization at each time-step [31, 35, 36, 24, 37, 38].

Dynamic FBA can be improved by taking advantage of the linear structure of the optimization problem, which provides a choice of basis for an optimal solution that may be reused at future time-steps [39, 40]. However, the optimizations required by this strategy involve solutions with non-unique bases. This means that a basis chosen at random may not provide an optimal solution to the linear program at future time-steps, because it provides a solution that is non-optimal or infeasible. In order to implement dynamic FBA without optimizing at each time step, we use an optimal basic set for the FBA linear optimization problem to create a system of linear equations whose solutions at future time-steps coincide with the solutions to the FBA optimization problem.
To solve the problem of non-uniqueness among bases, we prove that there exists a choice of basis that allows forward simulation for a given optimal flux solution, and we provide a method to choose this basis. Note that this method does not choose among a set of non-unique optimal flux solutions, but instead chooses a basis for a single given optimum. To choose among multiple optimal flux solutions, biological, rather than mathematical, considerations should be used.

In this manuscript, we detail how dynamic FBA can be simulated forward without re-optimization for some time interval, and give a method for doing so. We propose conditions on an optimal basic set for the FBA linear optimization problem which allow for forward simulation, and we prove that such a choice exists. We then detail how to choose this basis set, and finally give examples of simulations which demonstrate the power of our method. For reproducibility, we make a prototype implementation of our method in the Python language available at `https://github.com/jdbrunner/surfin_fba`.

## Background

### Flux balance analysis.

With the advent of genetic sequencing and the resulting genome-scale reconstruction of metabolic pathways, methods have been developed to analyze and draw insight from such large-scale models [18]. To enable computation of relevant model outcomes, constraint-based reconstruction and analysis (COBRA) is used to model steady-state fluxes $v_{i}$ through a microorganism’s internal metabolic reactions under physically relevant constraints [18]. One of the most basic COBRA methods, called _flux balance analysis_ (FBA), optimizes some combination of reaction fluxes $\sum\gamma_{i}v_{i}$ which correspond to increased cellular biomass, subject to the constraint that the cell’s internal metabolism is at equilibrium:

$\Gamma\bm{v}=0$ (1)

where $\Gamma$ is the _stoichiometric matrix_ , a matrix describing the stoichiometry of the metabolic model. This optimization is chosen because it reflects the optimization carried out by nature through evolution [18]. The vector $\bm{\gamma}=(\gamma_{1},\gamma_{2},...,\gamma_{d})$ is an encoding of cellular objectives, reflecting the belief that the cell will be optimized to carry out these objectives. The constraint Eq. 1 means that any optimal set of fluxes found by FBA corresponds to a steady state of the classical model of chemical reaction networks [41]. This reflects the assumption that the cell will approach an internal chemical equilibrium. The optimization is done over a polytope of feasible solutions defined by the inequalities $v_{i,min}\leq v_{i}\leq v_{i,max}$, or possibly more complicated linear constraints. See Fig. 1 for a geometric representation of an example of the type of linear optimization problem that is carried out. By convention, forward and reverse reactions are not separated, and so negative flux is allowed. Linear optimization problems like FBA often give rise to an infinite set of optimal flux vectors $\bm{v}=(v_{1},v_{2},...,v_{d})$. Geometrically, this set will correspond to some face of the polytope of feasible solutions. To draw conclusions despite this limitation, many methods have been developed to either characterize the set of optimal solutions, as with flux variability analysis (FVA), or enforce more constraints on the network to reduce the size of this set, as with loopless FVA [18].

### Dynamic FBA.

FBA provides a rate of increase of biomass which can be interpreted as a growth rate for a cell.
Furthermore, a subset of the reactions of a GEM represent metabolite exchange between the cell and its environment. By interpreting constraints on nutrient exchange reactions within the metabolic network as functions of the available external metabolites, and fluxes of exchange reactions as metabolite exchange rates between the cell and its environment, the coupled system can be modeled. The simplest way to do this is to use an Euler method, as in [30]. In addition to Euler’s method, more sophisticated ODE solvers may be used in the so-called “direct” method of simply recomputing the FBA optimization at every time-step. This can provide better solution accuracy and potentially larger time-steps, but may also require more than one FBA optimization at each time-step. For instance, the Runge-Kutta fourth order method [42] requires four FBA solutions at each time step. Direct methods are implemented in the COBRA toolbox [17] and are the central algorithm in many modern tools, including those of Zhuang et al. [31, 35], Harcombe et al. [36], Zomorrodi et al. [24], Louca and Doebeli [37], and Popp and Centler [38]. Notably, any direct method requires at least one complete recalculation of the network fluxes _at each time-step_.

However, re-solving the system at each time step is not necessary, as the solution of the optimization problem at some initial time can actually be used to compute future optimal solutions. Höffner et al. [40] used this observation to introduce a variable step-size method for dynamic FBA. In that method, a basic index set is chosen by adding biological constraints to the optimization problem hierarchically until a unique optimal flux vector is found. The challenge of such an approach is in choosing the basis for the optimal solution, as the optimal basis is not guaranteed to be unique even for a unique optimal flux solution. In fact, due to the nature of the method of Höffner et al. and of our method, any optimization past the initial solution that must be carried out is guaranteed to have a solution with a non-unique basis. Furthermore, many choices of optimal basis will not provide a solution for future time-steps, so choosing among these bases must be done intelligently. Unfortunately, Höffner et al. [40] do not provide a method for choosing among non-unique bases for a single linear program solution. Our method seeks to solve this problem by choosing, from among the possibilities provided by an FBA solution, the basis which is most likely to remain optimal as the simulation proceeds forward. We therefore prioritize reducing the number of times the linear program must be solved, choosing our basis based on the mathematical properties of the system which give the best chance of providing a solution at future time-steps.

Additionally, a method described as the “dynamic optimization approach” was introduced in Mahadevan et al. [29]; however, this method is computationally expensive. In particular, the method given in [29] involves optimizing over the entire simulated time-course, and so is formulated as a non-linear program which only needs to be solved once. While this method requires only one optimization, this optimization is itself prohibitively difficult, due to the dimensionality of the problem growing with the fineness of the time-discretization.

### The dynamic FBA model for communities.
We can write a metabolite mediated model for the population dynamics of a community of organisms $\bm{x}=(x_{1},...,x_{p})$ on a medium composed of nutrients $\bm{y}=(y_{1},...,y_{m})$:

$\displaystyle\dot{x}_{i}$ $\displaystyle=g_{i}(\bm{\psi}_{i}(\bm{y}))x_{i}$ (2)
$\displaystyle\dot{y}_{j}$ $\displaystyle=-\sum_{i=1}^{p}\psi_{ij}(\bm{y})x_{i}$ (3)

where $\bm{\psi}_{i}$ is a vector of the fluxes of nutrient exchange reactions for organism $x_{i}$ as determined by FBA. Using FBA to determine $\bm{\psi}_{i}$ is therefore a quasi-steady state assumption on the internal metabolism of the organisms $x_{i}$ [43, 44, 45]. Recall that the basic assumption of flux balance analysis is that, given a matrix $\Gamma_{i}$ giving the stoichiometry of the network of reactions in a cell of organism $x_{i}$, the growth rate $g_{i}(\bm{y})$ is the maximum determined by solving the following linear program [18]:

$\left\\{\begin{array}[]{r}\max(\bm{v}_{i}\cdot\bm{\gamma}_{i})\\\ \Gamma_{i}\bm{v}_{i}=0\\\ \bm{c}^{1}_{i}\leq\bm{v}_{i}\leq\bm{c}^{2}_{i}(\bm{y})\end{array}\right\\}$ (4)

where $\bm{c}^{1}_{i}$ is some vector of lower flux bounds, while $\bm{c}^{2}_{i}(\bm{y})$ is some vector-valued function of the available metabolites which represents upper flux bounds. The key observation allowing dynamic FBA is that the optimal solution to this problem also determines $\bm{\psi}_{i}$, simply by taking $\psi_{ij}$ to be the value of the flux $v_{ij}$ of the appropriate metabolite exchange reaction. For clarity, we will relabel the elements of $\bm{v}_{i}$ so that $\psi_{ik}=v_{ij}$ if $v_{ij}$ is the $k^{th}$ exchange flux, and $\phi_{ik}=v_{ij}$ if $v_{ij}$ is the $k^{th}$ internal flux. The objective vector $\bm{\gamma}_{i}$ indicates which reactions within the cell contribute directly to cellular biomass, and so is non-zero only in elements corresponding to internal fluxes. We can therefore rewrite this vector to include only elements corresponding to internal fluxes, so that the objective of the optimization is to maximize $\bm{\gamma}_{i}\cdot\bm{\phi}_{i}$. The stoichiometry of metabolite exchange reactions is represented by standard basis vectors [18]. Therefore, we can partition $\Gamma_{i}$ as

$\Gamma_{i}=\begin{bmatrix}I&-\Gamma_{i}^{*}\\\ 0&\Gamma_{i}^{\dagger}\end{bmatrix}$ (5)

where $I$ is the identity matrix of appropriate size, and $\Gamma_{i}^{*}$ and $\Gamma_{i}^{\dagger}$ contain the stoichiometry of the internal reactions [18, 46, 47]. Making this change in notation allows us to see that the optimization problem of flux balance analysis is essentially internal to the cell, with external reactions providing constraints. We can see from Eq. 5 that $\ker(\Gamma_{i})$ is isomorphic to $\ker(\Gamma^{\dagger}_{i})$, and so we can maximize over this kernel. Then, the exchange reaction fluxes are determined by the internal fluxes according to the linear mapping $\bm{\psi}_{i}=\Gamma^{*}_{i}\bm{\phi}_{i}$. The maximization of FBA thus becomes a maximization problem over the internal fluxes. (In fact, we could project onto the kernel of the matrix $\Gamma^{\dagger}_{i}$ and so reduce the dimensionality of the problem; however, in practice this projection is not numerically stable.) We rewrite Eq. 4 using Eq. 5 and combine with Eqs. 2 and 3 to form the differential algebraic system
$\displaystyle\frac{dx_{i}}{dt}=x_{i}(\bm{\gamma}_{i}\cdot\bm{\phi}_{i})$ (6)
$\displaystyle\frac{d\bm{y}}{dt}=-\sum_{i}x_{i}\Gamma^{*}_{i}\bm{\phi}_{i}$ (7)
$\displaystyle\left\\{\begin{array}[]{r}\max(\bm{\phi}_{i}\cdot\bm{\gamma}_{i})\\\ \Gamma^{\dagger}_{i}\bm{\phi}_{i}=0\\\ \bm{c}^{1}_{i}\leq\begin{bmatrix}\Gamma^{*}_{i}\\\ I\end{bmatrix}\bm{\phi}_{i}\leq\bm{c}^{2}_{i}(\bm{y})\end{array}\right\\}$ (11)

where each $\bm{\phi}_{i}$ is determined by the optimization Eq. 11, with each optimization carried out separately. Note that this is a metabolite mediated model of community growth as defined in [15]. That is, the coupling of the growth of the separate microbes is due to the shared pool of metabolites $\bm{y}$. Each separate optimization which determines $\bm{\phi}_{i}$ at a single time-step depends on $\bm{y}$, and each $\bm{\phi}_{i}$ determines some change in $\bm{y}$. Furthermore, each optimization is carried out in a manner that depends only on the status of the metabolite pool and is independent of the optimizations of the other organisms. There is therefore no shared “community objective”. Instead, each organism optimizes according to only its own internal objective. We write, for full generality, upper and lower dynamic bounds on internal and exchange reactions, and assume that each function $c_{ij}(\bm{y})\in C^{\infty}$. We let

$A_{i}=\begin{bmatrix}(\Gamma_{i}^{*})^{T},-(\Gamma_{i}^{*})^{T},I,-I\end{bmatrix}^{T}$ (12)

so that we can rewrite the optimization problem Eq. 11 as

$\left\\{\begin{array}[]{r}\max(\bm{\phi}_{i}\cdot\bm{\gamma}_{i})\\\ A_{i}\bm{\phi}_{i}\leq\bm{c}_{i}(\bm{y},t)\\\ \Gamma^{\dagger}_{i}\bm{\phi}_{i}=\bm{0}\end{array}\right\\}$ (13)

for ease of notation. We now hope to select a basic index set $\mathcal{I}_{i}$ for Eq. 13 for each organism $x_{i}$ so that each $\bm{\phi}_{i}(t)$ is a solution to the resulting linear system of equations.

## Methods.

### Linear optimization preliminaries.

In this manuscript, we will rewrite the FBA optimization problem in the form

$\left\\{\begin{array}[]{c}\max(\bm{\phi}\cdot\bm{\gamma})\\\ A\bm{\phi}\leq\bm{c}\\\ \Gamma^{\dagger}\bm{\phi}=0\end{array}\right\\}$ (14)

where the matrices $A$ and $\Gamma^{\dagger}$ are derived from the stoichiometric matrix and flux constraints. Such a problem is often referred to as a _linear program_ (LP). We now recall some well-known results from the study of linear programming (see, for example, [48, 40]). First, we note that Eq. 14 can be rewritten in the so-called _standard form_ with the addition of _slack variables_ $\bm{s}=(s_{1},...,s_{n})$, which represent the distance each of the $n$ constraints is from its bound, as follows:

$\left\\{\begin{array}[]{c}\max(\bm{\tilde{\phi}}\cdot\bm{\tilde{\gamma}})\\\ \begin{bmatrix}\tilde{A}&I\end{bmatrix}\begin{bmatrix}\bm{\tilde{\phi}}\\\ \bm{s}\end{bmatrix}=\bm{c}\\\ \tilde{\phi}_{i}\geq 0,s_{i}\geq 0\end{array}\right\\}.$ (15)

Standard form requires that we rewrite $\phi_{i}=\phi_{i}^{+}-\phi_{i}^{-}$ and then define $\bm{\tilde{\phi}}=(\phi_{1}^{+},\phi_{2}^{+},...,\phi_{d}^{+},\phi_{1}^{-},\phi_{2}^{-},...,\phi_{d}^{-})$, so that we require non-negativity of each variable, with the matrix $\tilde{A}=\left[A\;B\right]$, $B=-A$. We rewrite the problem in this form to make use of established results, and for ease of notation will write $\bm{\phi}$ instead of $\bm{\tilde{\phi}}$ when it is clear which form of the problem we are discussing.
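As a concrete illustration of this conversion, the following minimal sketch (Python with numpy; the matrices are small illustrative stand-ins, chosen to match Example 1 below rather than a real GEM) builds the standard-form system of Eq. 15 and verifies a candidate solution:

```python
# Sketch of the standard-form conversion of Eq. (15): split phi into
# positive and negative parts, append slack variables, and check a solution.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 2.0]])          # inequality constraints A phi <= c
c = np.array([10.0, 10.0, 30.0])

n, d = A.shape
A_tilde = np.hstack([A, -A])         # columns for phi+ and phi-
M = np.hstack([A_tilde, np.eye(n)])  # the matrix [A_tilde  I] of Eq. (15)

# A standard-form solution w = (phi+, phi-, s) must satisfy M w = c, w >= 0.
w = np.zeros(2 * d + n)
w[:d] = [10.0, 10.0]                 # e.g., phi = (10, 10), phi- = 0
w[2 * d:] = c - A @ w[:d]            # slack variables fill the remaining gap
assert np.allclose(M @ w, c) and (w >= -1e-12).all()
```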
We will make use of the well-known result that there exists an _optimal basis_ or _basic set_ for a bounded linear program [49]. To state this result, we first define the notation $B_{\mathcal{J}}$ to be the matrix whose columns are the columns of $[\tilde{A}\;I]$ corresponding to some index set $\\{k_{1},k_{2},...,k_{n}\\}=\mathcal{J}$, and if $B_{\mathcal{J}}$ is invertible we define the notation $\bm{w}_{\mathcal{J}}(\bm{a})$ so that

$(\bm{w}_{\mathcal{J}}(\bm{a}))_{l}=\left\\{\begin{array}[]{cc}(B^{-1}_{\mathcal{J}}\bm{a})_{j}&l=k_{j}\in\mathcal{J}\\\ 0&l\not\in\mathcal{J}\end{array}\right.$ (16)

for any $\bm{a}\in\mathbb{R}^{n}$. We may now define an _optimal basis_ and _optimal basic set_.

###### Definition 1.

A _basic optimal solution_ to a linear program is an optimal solution along with some index set $\\{k_{1},k_{2},...,k_{n}\\}=\mathcal{I}$ such that $\bm{w}=\bm{w}_{\mathcal{I}}(\bm{c})$, where $\bm{c}$ is the vector of constraints as in Eq. 15. The variables $\\{\bm{w}_{i}|i\in\mathcal{I}\\}$ are referred to as _basic variables_ , and the index set $\mathcal{I}$ is referred to as the _basic index set_.

Finally, if there exists a bounded, optimal solution to Eq. 15, then there exists a basic optimal solution and corresponding basic index set. For a given basic optimal solution vector $\bm{w}$, there may be more than one basic index set $\mathcal{I}$ such that $\bm{w}=\bm{w}_{\mathcal{I}}(\bm{c})$. Such a solution is called _degenerate_. Clearly, a necessary condition for such non-uniqueness is that there exists some $k\in\mathcal{I}$ such that $w_{k}=0$. This is also a sufficient condition as long as there is some column of $[\tilde{A}\,I]$ which is not in the column space of $B_{\mathcal{I}\setminus\\{k\\}}$.

### Forward simulation without re-solving.

Consider again Eq. 13, the linear program that must be solved at each time point of the dynamical system for each microbial population. Information from prior solutions can inform future time-steps as long as the region of feasible solutions has not qualitatively changed. Thus, we may only need to solve the optimization problem a few times over the course of a simulation. The key observation making this possible is that the simplex method of solving a linear program provides an optimal basis for the solution. We may often re-use this basis within some time interval, and therefore find optimal solutions without re-solving the linear program. In order to do this, we need to find a form of the solution which may be evolved in time. Thus, we turn the system of linear inequalities given in the linear program into a system of linear equations. Then, if this system has a unique solution, we have reduced the task to solving a system of equations rather than optimizing over a system of inequalities. We can find such a system of equations by solving the linear program once, and using this solution to create a system of equations whose solution provides the optimal flux $\bm{\phi}_{i}$, as described above. We then use this same system to simulate forward, without the need to re-solve the linear program, until the solution of the system of equations is no longer a feasible solution to the linear program. First, the linear program Eq. 13 is transformed into standard form (Eq. 15). Then, a basic optimal solution is found with corresponding basic index set $\mathcal{I}_{i}$. The dynamical system Eqs. 6, 7 and 13 can then be evolved in time using Eq. 16.
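To make the basis-reuse step concrete, the following minimal sketch (Python with numpy; the basis matrix and bound function are taken from Example 1 below, and the printed loop is purely illustrative) evaluates Eq. 16 for a fixed basic index set and checks feasibility at a sequence of times:

```python
# Sketch of forward evolution with a fixed basis: once a basic index set is
# chosen, the basic variables of w_I(c(t)) solve B_I w = c(t), replacing the
# LP solve; we only re-optimize once some entry of w becomes negative.
import numpy as np

B_I = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 1.0],
                [1.0, 2.0, 0.0]])    # columns {1, 2, 4} of [A_tilde I]

def c(t):
    return np.array([10.0, 10.0, 30.0 - t])    # time-varying bounds

for t in np.linspace(0.0, 25.0, 6):
    w_basic = np.linalg.solve(B_I, c(t))        # Eq. (16), basic variables
    feasible = (w_basic >= 0).all()
    print(f"t={t:5.1f}  w_I={np.round(w_basic, 2)}  feasible={feasible}")
# Feasibility is lost for t > 20, where a re-optimization is needed.
```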
This evolution is accurate until some $w_{ij}$ becomes negative (meaning that the solution is no longer a feasible solution to the linear program). At this point, a new basis must be chosen. That is, until $\bm{w}_{\mathcal{I}_{i}}(\bm{c}(t))$ becomes infeasible, we let $(\phi_{j_{1}}(\bm{c}_{i}(t)),...,\phi_{j_{m}}(\bm{c}_{i}(t)),s_{1}(\bm{c}_{i}(t)),...,s_{n}(\bm{c}_{i}(t)))=\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$ and replace Eqs. 6, 7 and 13 with

$\displaystyle\frac{dx_{i}}{dt}$ $\displaystyle=x_{i}(\bm{\gamma}_{i}\cdot\bm{\phi}_{i}(\bm{c}_{i}(t)))$ (17)
$\displaystyle\frac{d\bm{y}}{dt}$ $\displaystyle=-\sum_{i}x_{i}\Gamma^{*}_{i}\bm{\phi}_{i}(\bm{c}_{i}(t))$ (18)

One major difficulty in this technique is that a unique $\bm{w}_{i}$ does not guarantee a unique basis set $\mathcal{I}_{i}$. If we have some $(w_{\mathcal{I}_{i}})_{j}=0$ for $j\in\mathcal{I}_{i}$, then there exists some alternate set $\hat{\mathcal{I}}_{i}$ such that $\bm{{w}}_{\hat{\mathcal{I}}_{i}}=\bm{{w}}_{\mathcal{I}_{i}}$. Such a solution $\bm{{w}}_{\mathcal{I}_{i}}$ is called _degenerate_. In a static implementation of a linear program, the choice of basis of a degenerate solution is not important, as one is interested in the optimal vector and optimal value. However, as we will demonstrate with Example 1, the choice of basis of a degenerate solution is important in a dynamic problem. In fact, if the system given in Eqs. 17 and 18 is evolved forward until $\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$ becomes infeasible, the time at which the system becomes infeasible is the time at which we have some $(w_{\mathcal{I}_{i}})_{j}=0$ for $j\in\mathcal{I}_{i}$. Thus, we need to re-solve Eq. 13 whenever $\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$ becomes degenerate, which will be the final time-point at which $\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$ is feasible.

###### Example 1.

Consider the dynamic linear program

$\left\\{\begin{array}[]{c}\max((1,1)\cdot\bm{v})\\\ \begin{bmatrix}1&0\\\ 0&1\\\ 1&2\end{bmatrix}\bm{v}\leq\begin{bmatrix}10\\\ 10\\\ 30-t\end{bmatrix}\\\ v_{i}\geq 0\end{array}\right\\}$ (19)

In standard form at $t=0$, this linear program becomes

$\left\\{\begin{array}[]{c}\max((1,1)\cdot\bm{v})\\\ \begin{bmatrix}1&0&1&0&0\\\ 0&1&0&1&0\\\ 1&2&0&0&1\end{bmatrix}\begin{bmatrix}\bm{v}\\\ \bm{s}\end{bmatrix}=\begin{bmatrix}10\\\ 10\\\ 30\end{bmatrix}\\\ v_{i},s_{i}\geq 0\end{array}\right\\}$ (20)

which has the unique solution $\bm{w}=(10,10,0,0,0)$. There are three choices of basic index sets: $\mathcal{I}_{1}=\\{1,2,3\\}$, $\mathcal{I}_{2}=\\{1,2,4\\}$, and $\mathcal{I}_{3}=\\{1,2,5\\}$. The resulting bases are

$\ B_{\mathcal{I}_{1}}=\begin{bmatrix}1&0&1\\\ 0&1&0\\\ 1&2&0\end{bmatrix}\quad B_{\mathcal{I}_{2}}=\begin{bmatrix}1&0&0\\\ 0&1&1\\\ 1&2&0\end{bmatrix}\quad B_{\mathcal{I}_{3}}=\begin{bmatrix}1&0&0\\\ 0&1&0\\\ 1&2&1\end{bmatrix}$

Computing Eq. 16 at $t>0$ for each, we have that $B_{\mathcal{I}_{1}}$ yields $\bm{w}_{\mathcal{I}_{1}}(\bm{c}(t))=(10-t,10,t,0,0)$, $B_{\mathcal{I}_{2}}$ yields $\bm{w}_{\mathcal{I}_{2}}(\bm{c}(t))=(10,10-\nicefrac{{t}}{{2}},0,\nicefrac{{t}}{{2}},0)$, and $B_{\mathcal{I}_{3}}$ yields $\bm{w}_{\mathcal{I}_{3}}(\bm{c}(t))=(10,10,0,0,-t)$, shown in Fig. 1 for $t>0$. Thus, only $\bm{w}_{\mathcal{I}_{2}}(\bm{c}(t))$ solves the dynamic problem, because $\bm{w}_{\mathcal{I}_{1}}(\bm{c}(t))$ is not optimal and $\bm{w}_{\mathcal{I}_{3}}(\bm{c}(t))$ is not feasible for $t>0$.
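This comparison of bases can be checked numerically; the following sketch (Python with numpy; indices are zero-based, so the basic index sets appear shifted by one) evaluates $\bm{w}_{\mathcal{I}}(\bm{c}(t))$ for the three candidate bases at a small $t>0$:

```python
# Numerical check of Example 1: for each candidate basis, compute w_I(c(t))
# at t = 1 and test feasibility (w >= 0) and the objective value.
import numpy as np

M = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 0.0, 1.0]])    # [A  I] from Eq. (20)
gamma = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # objective on (v, s)

def w_I(I, t):
    c = np.array([10.0, 10.0, 30.0 - t])
    w = np.zeros(5)
    w[list(I)] = np.linalg.solve(M[:, list(I)], c)
    return w

for I in [(0, 1, 2), (0, 1, 3), (0, 1, 4)]:   # I1, I2, I3, zero-indexed
    w = w_I(I, t=1.0)
    print(I, np.round(w, 2), "feasible:", (w >= 0).all(),
          "objective:", gamma @ w)
# I1 remains feasible but is suboptimal (objective 19 < 19.5); I3 is
# infeasible; only I2 is both feasible and optimal for t > 0.
```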
We may follow $\bm{w}_{\mathcal{I}_{2}}$ and be assured of remaining at an optimal solution to the linear program until $t=20$; at $t=20+\varepsilon$ we have $\bm{w}_{\mathcal{I}_{2}}=(10,-\varepsilon/2,0,10+\varepsilon/2,0)$, which is not a feasible solution to the linear program. At time $t=20$, a re-optimization is required to choose a new basis.

Notice that the correct choice of basis fundamentally depends on the time-varying bound function $\bm{c}(t)=(10,10,30-t)$. To see this, consider other possible time-varying bounds $\bm{c}(t)$ which have $\bm{c}(0)=(10,10,30)$. For example, if $\bm{c}(t)=(10-t,10-t,30)$, then only $B_{\mathcal{I}_{3}}$ would give the correct $\bm{w}(\bm{c}(t))$ for $t>0$.

Fig 1: Geometric representation of Example 1 for $t_{3}>t_{2}>t_{1}>0$, showing the three options for bases which are equivalent at $t=0$. Note that the best choice depends on the function $\bm{c}(t)=(10,10,30-t)$ and cannot be chosen using the static problem alone. The feasible region of the optimization problem is shown in gray.

### A basis for the flux vector.

We now provide a method to choose a basis $\mathcal{I}_{i}$ for each organism $x_{i}$ in the case of a degenerate solution. Consider an optimal solution $\bm{w}_{i}$ to the linear program Eq. 15. To simulate forward according to Eqs. 17 and 18, we need for each organism $x_{i}$ a basic index set $\mathcal{I}_{i}$ such that

$\left\\{\begin{array}[]{c}\bm{\dot{w}}_{i}=\bm{w}_{\mathcal{I}_{i}}\left(\frac{d}{dt}\bm{c}_{i}\right)\\\ \begin{bmatrix}\tilde{A}&I\end{bmatrix}\bm{\dot{w}}_{i}=\frac{d}{dt}\bm{c}_{i}\\\ (\bm{w}_{\mathcal{I}_{i}})_{j}=0\Rightarrow\dot{w}_{ij}\geq 0\end{array}\right\\}$ (21)

so that the solution remains feasible, and furthermore that $\bm{\dot{w}}_{i}$ is optimal over the possible choices of basic index sets for $\bm{w}_{i}$. This is obviously a necessary condition for forward simulation within some non-empty time interval, and can be made sufficient (although no longer necessary) by making the inequality $(\bm{w}_{\mathcal{I}_{i}})_{j}=0\Rightarrow\dot{w}_{ij}\geq 0$ strict. We use the relaxed condition for more practical applicability.

In order to develop a method based on the above observation (i.e., Eq. 21), we must know that Eq. 15 has such a solution. We therefore require the following lemma, which is proved in Appendix A:

###### Lemma 1.

For a linear program with the form given in Eq. 15 with a basic optimal solution $\bm{w}$, there exists a basic index set $\mathcal{I}$ such that Eq. 21 holds and $\bm{\dot{w}}$ is optimal over the possible choice of basic index sets for $\bm{w}$.

If Eq. 15 has only a non-degenerate solution, the unique basis will satisfy this requirement. The challenge remains to choose from among the possible bases of a degenerate solution. To do this, we form a second linear program analogous to Eq. 21 in the following way. We first find all constraints $\bm{a}_{ij}$ (i.e., rows of $A_{i}$ or $\Gamma^{\dagger}_{i}$) such that $\bm{a}_{ij}\cdot\bm{\phi}_{i}=c_{ij}(t)$, calling this set $\mathcal{S}_{i}$. Note that this set contains all the rows of $\Gamma^{\dagger}_{i}$, for which we regard $c_{ij}(t)=0$ for all $t>0$. Note that if the solution given is a basic optimal solution, the rank of the matrix whose rows are $\bm{a}_{ij}$ for $\bm{a}_{ij}\in\mathcal{S}_{i}$ is $d$, where again $d$ is the number of internal fluxes. This is true because we include constraints of the type $a<\phi_{ij}<b$ as rows of $A_{i}$.
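As an aside, the sketch below shows how the active set $\mathcal{S}_{i}$ might be assembled numerically before forming the second linear program; the array names `A`, `c`, `Gamma_dagger`, `phi` and the tolerance are illustrative assumptions, not the published implementation.

```python
# A sketch of assembling the active constraint set S_i: rows of A_i whose
# inequality holds with equality at phi (within a tolerance), together with
# all rows of Gamma_dagger_i, which are equalities by construction.
import numpy as np

def active_constraints(A, c, Gamma_dagger, phi, tol=1e-9):
    active = np.abs(A @ phi - c) <= tol       # indices j with a_j . phi = c_j
    S = np.vstack([A[active], Gamma_dagger])  # stacked rows of S_i
    return S, np.flatnonzero(active)          # rows and which dc_j/dt apply
```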
Then, we solve the linear program

$\left\\{\begin{array}[]{c}\max(\bm{\dot{\phi}}_{i}\cdot\bm{\gamma}_{i})\\\ \bm{a}_{j}\cdot\bm{\dot{\phi}}_{i}\leq\frac{dc_{ij}}{dt},\;\;\bm{a}_{j}\in\mathcal{S}_{i}\end{array}\right\\}$ (22)

We may then use any basis $B_{\mathcal{I}}^{i}$ which solves Eq. 22 as long as it has exactly $d$ non-basic slack variables. Lemma 1 tells us that such a choice exists, although it may be necessary to manually pivot non-slack variables into the basis set given by the numerical solver. (In testing the algorithm, this was necessary when using IBM ILOG CPLEX Optimization Studio to solve, but not when using the Gurobi Optimizer.) Note that we do not need the entire basis $B_{\mathcal{I}}^{i}$, but instead only need the $d\times d$ submatrix formed by rows of $A_{i}$ or $\Gamma_{i}^{\dagger}$ which correspond to non-basic slack variables in the solution to Eq. 22. These appear as rows $(\bm{a}_{i},\bm{0})$ in $B_{\mathcal{I}}^{i}$, and so this sub-matrix uniquely determines $\bm{\phi}_{i}$. We call this smaller matrix $B_{i}$, and label the set of row indices as $\mathcal{J}$.

The chosen basis $\mathcal{J}$ and corresponding constraints are used to simulate forward until that particular solution becomes infeasible. At that time, we have an optimal solution to Eq. 13 simply by continuity. We therefore do not need to re-solve Eq. 13 but instead re-form and solve Eq. 22.

### Pseudo-Code of the method.

Below, we present as pseudo-code an outline of the method. A practical implementation may need to adaptively adjust the time-step $\Delta t$ to ensure that no resource is artificially depleted below $0$.

Input: Final time $T$, initial microbial biomasses $x_{i}(0)$, initial nutrient concentrations $y_{j}(0)$, maximum inflow rates of nutrients $\alpha_{i}$, stoichiometric matrices $\Gamma_{i}$
Output: Timecourse simulation of biomass and nutrient concentrations
1 for _each microbial population $i$_ do
2  Set $\bm{w}_{i}(0)$ to be a solution to Eq. 13 which lies on a vertex of the feasible polytope;
3  Solve Eq. 22 to find an initial basis $B_{i}$
4 end for
5 while _$t<T$_ do
6  Integrate Eqs. 17 and 18 from $t$ to $t+\Delta t$ with $\bm{\phi}_{i}=B_{i}^{-1}\bm{c}_{\mathcal{J}}(\bm{y}(t),t)$;
7  if _$B_{i}^{-1}\bm{c}_{\mathcal{J}}(\bm{y}(t+\Delta t),t+\Delta t)$ is not a feasible solution_ then
8   reset $x_{i}=x_{i}(t)$, $y_{j}=y_{j}(t)$;
9   Solve Eq. 22 to find a new basis $B_{i}$, with additional constraints representing the bounds violated by $B_{i}^{-1}\bm{c}_{\mathcal{J}}(\bm{y}(t),t)$.
10 end if
11 end while
Algorithm 1 Dynamic FBA algorithm following Lemma 1.

Note that for numerical stability and speed, we may store the matrices $Q_{i},R_{i}$ such that $Q_{i}R_{i}=B_{i}$ is the QR-factorization of $B_{i}$, rather than either storing $B^{-1}_{i}$ or solving completely during each time step of numerical integration.

## Results.

### Number of optimizations.

We can compare the efficiency of Algorithm 1 with modern dynamic FBA methods by counting the number of times a large linear program must be carried out over the course of a simulation. At their core, state-of-the-art dynamic FBA tools such as _d-OptCom_ [24] and _COMETS_ [36] employ the direct method of calling an ODE-solving method with the linear program set as the right-hand-side. In the case of Euler’s method, the resulting ODE can be integrated by hand between time-steps. This last strategy is often referred to as the “static optimization approach” [40].
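For contrast with Algorithm 1, the following is a minimal sketch of this static optimization approach, in which one linear program is solved per Euler step. The variable names and the simplified constraint set ($A\bm{\phi}\leq\bm{c}(\bm{y})$ with $\bm{\phi}\geq 0$, which `linprog` imposes by default) are illustrative assumptions.

```python
# A sketch of the "static optimization approach": Euler time-stepping with
# one linear program solved at every step (contrast with Algorithm 1).
import numpy as np
from scipy.optimize import linprog

def static_optimization_step(x, y, gamma, Gamma_star, A, uptake, dt):
    # Solve max gamma . phi subject to A phi <= c(y); linprog minimizes,
    # so negate the objective. Default bounds enforce phi >= 0.
    res = linprog(-gamma, A_ub=A, b_ub=uptake(y), method="highs")
    phi = res.x
    x_new = x + dt * x * (gamma @ phi)       # biomass update, cf. Eq. 17
    y_new = y - dt * x * (Gamma_star @ phi)  # nutrient update, cf. Eq. 18
    return x_new, y_new
```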
We compared simulation of various combinations of the organisms _Escherichia coli str. K-12 substr. MG1655_ (model iJR904), _Saccharomyces cerevisiae S288C_ (model iND750), _Pseudomonas putida KT2440_ (model iJN746) and _Mycobacterium tuberculosis H37Rv_ (model iEK1008), using models from the BiGG database [28] (see S2 File. for details). We counted the optimizations required for our method, as well as for direct methods using the numerical ODE solvers _vode_, _zvode_, _lsoda_, _dopri5_, and _dop853_ from the SciPy library. All of these numerical ODE solvers use adaptive step sizes for accuracy and stability, and so represent optimized choices of time-steps. Additionally, we compared the method of Höffner et al. as implemented in the MatLab package _DFBAlab_ [39]. For our method and the direct method, we allowed exchange of every metabolite detailed in S1 File. with initial metabolite concentrations given by that same file, and with initial biomass of $0.3$ for each species. The file `sim_comm.py` in the supplementary repository S3 Software. contains the complete simulation set-up.

To compare with the method of Höffner et al. [40], we use the newly available Python package from the research group of Dr. David Tourigny titled _dynamic-fba_ [50] for single organisms. This package allows simulation without secondary optimizations, as ours does, and so is more similar to our prototype tool for comparison. Unfortunately, this package was only able to simulate single organisms at the time of publishing. For microbial communities, we can compare with the MatLab package DFBAlab [39], which requires all dynamic variables to be optimized in a secondary optimization. For simulations with DFBAlab, we use only the low-concentration metabolites D-glucose, oxygen, and cob(I)alamin from the M9 medium detailed in S1 File. as dynamically varying metabolites. It is worth noting that these are the most favorable conditions we could find for the method of Höffner et al. [40, 39] which are still biologically equivalent to our other simulations.

Model Combination | Algorithm 1 | Höffner | vode | zvode | lsoda | dopri5 | dop853
---|---|---|---|---|---|---|---
iJR904 | 7 | 1 | 62 | 62 | 116 | 3313 | 6228
iND750 | 4 | 1 | 91 | 91 | 85 | 3508 | 6514
iJN746 | 4 | 13 | 166 | 167 | 376 | 1176 | 2249
iEK1008 | 4 | 4 | 120 | 120 | 208 | 2768 | 5148
iJR904 + iND750 | 4 | 24 | 240 | 211 | 346 | 5586 | 10469
iJR904 + iJN746 | 30 | 479 | 420 | 420 | 744 | 2695 | 5579
iJR904 + iEK1008 | 20 | 136 | 216 | 216 | 454 | 3385 | 6411
iND750 + iEK1008 | 8 | 32 | 311 | 311 | 509 | 5284 | 9888
iJR904 + iND750 + iEK1008 | 18 | 32* | 451 | 451 | 1282 | 6225 | 11961
iJR904 + iND750 + iJN746 + iEK1008 | 56 | 672 | 1122 | 1122 | 2242 | 6837 | 13529

Table 1: Number of optimizations required to simulate to time $t=5$ with no cell death or metabolite flow, using M9 minimal medium. *Simulation failed at $t=3.034277$.

Fig 2: Time-points of re-optimizations required in simulations using the proposed method, the method of Höffner et al. [40] and various direct methods, shown in blue. Shown in orange are times at which the direct method solver encountered an infeasible linear program due to numerical error.

### Error estimation.

Our method provides much less theoretical error in dynamic FBA solutions than traditional methods. In fact, Algorithm 1 implies that a simulation of a microbial community can be divided into time intervals on which the algorithm is exact.
Of course, this assumes that the linear ODE solved in these intervals is solved exactly rather than numerically. Precisely, there exists some sequence $t_{0}=0<t_{1}<\cdots<t_{n-1}<t_{n}=T$ such that if we know the optimal flux vectors $\bm{w}_{i}(t_{l})$ at time $t_{l}$, then Lemma 1 implies the existence of a set of invertible matrices $B_{i}^{l}$ such that solutions to Eqs. 17 and 18 are solutions to Eqs. 6, 7 and 13 for $t\in[t_{l},t_{l+1}]$. Therefore, if we are able to identify the $t_{l}$ exactly, then Algorithm 1 provides exact solutions to the dynamic FBA problem Eqs. 6, 7 and 13.

Of course, numerical limitations imply that we will not re-optimize precisely at each $t_{l}$, and so we must investigate the impact of this error. However, once re-optimization is done, the method is again exact. The result is that we have no local truncation error, beyond the error of the underlying numerical integration, for any time step taken between the re-optimization after $t_{l}$ and the interval endpoint $t_{l+1}$. In comparison, direct methods suffer from some integration error at every time step. This error depends on the integration strategy used; for example, the Euler’s-method-based static optimization approach carries first order local truncation error at each time step. This can easily lead to ODE overshoot and infeasible linear programs at future time-steps.

Assume that $t_{l-1}$ is known exactly, and $N$ is such that $t^{1}=t_{l-1}+(N-1)\Delta t\leq t_{l}<t_{l-1}+N\Delta t=t^{2}$, so that there is some possible error in the interval $[t^{1},t^{2}]$. We can estimate the accumulated error in this time interval using a power series expansion. Let $\bm{x}(t),\bm{y}(t)$ be solutions to Eqs. 6, 7 and 13 and $\bm{\tilde{x}},\bm{\tilde{y}}$ be solutions given by Algorithm 1 for $t\in[t^{1},t^{2})$. Furthermore, let $B_{i}^{l-1}$ be the invertible matrices derived by solving Eq. 13 at $t_{l-1}$ and $B_{i}^{l}$ those derived by solving at $t_{l}$. Then, $\bm{x}(t^{1})=\bm{\tilde{x}}(t^{1})$ and $\bm{y}(t^{1})=\bm{\tilde{y}}(t^{1})$. For each $x_{i}$ we expand, assuming some regularity of the functions $\bm{c}(\bm{y})$,

$x_{i}(t^{2})-\tilde{x}_{i}(t^{2})=(\Delta t)x_{i}(t^{1})\left(\bm{\gamma}_{i}\cdot\left((B_{i}^{l})^{-1}-(B_{i}^{l-1})^{-1}\right)\bm{\hat{c}}_{i}(\bm{y}(t^{1}))\right)+o(\Delta t)$ (23)

and see that this method gives first order local error in time steps that require a re-optimization. The local error, while first order, only appears at time steps in which a re-optimization occurred, and so global error will scale with the number of necessary re-optimizations. This is in contrast with the classical use of Euler’s method, which gives first order local error at every time-step, or any other direct ODE method, whose error is dependent on the solver used.

We may compare the solutions provided by direct methods with those provided by the method presented in Algorithm 1 and by the method of Höffner et al. [40]. The root-sum-square ($l_{2}$) differences in results are shown in Table 2. As we argue above, direct methods are less accurate in theory than the algorithm presented in Algorithm 1. Furthermore, direct simulations routinely failed to simulate to time $t=5$ without encountering an infeasible linear program. This infeasibility is the result of numerical error accumulating throughout the simulation. The comparisons in Table 2 can be summarized by three distinct characteristics. First, in the case of _S.cerevisiae_, the direct methods agree well with the newly presented method.
Secondly, in the case of _E.coli_ and _M.tuberculosis_, error seems to begin accumulating immediately. Finally, in the case of _P.putida_, the simulations agree well up to some time-point at which the direct method fails and either quits entirely (as in the case of the _dopri5_ solver, which returns small error) or continues at a constant value. We note that discrepancies in dynamic FBA simulation may not always be due to numerical error, but instead due to non-uniqueness in optimal flux solutions. Our method provides a strategy for choosing between non-unique representations (in the form of a basis) of a single optimal flux solution. The method of Höffner et al. [40] provides a lexicographic strategy for choosing between non-unique optimal flux solutions based on biological, rather than mathematical, considerations. We note that for complete reproducibility, our method should be integrated with some biologically based strategy for choosing between non-unique optima.

Organism | vode | zvode | lsoda | dopri5 | dop853 | Höffner et al.
---|---|---|---|---|---|---
E.coli | 5.09933 | 5.09933 | 4.61467 | 5.09928 | 5.09928 | 4.68578
M.tuberculosis | 1.45401 | 1.45401 | 1.45417 | 1.45415 | 1.45415 | 2.48691
S.cerevisiae | 0.00426 | 0.00426 | 0.00430 | 0.00429 | 0.00429 | 3.06105
P.putida | 15.29177 | 15.29177 | 0.07080 | 15.23826 | 15.26221 | 4.78751

Table 2: $l_{2}$ difference in solutions to single-organism simulations between direct methods and the method presented in Algorithm 1.

Fig 3: Simulations of _E.coli_, _S.cerevisiae_, _M.tuberculosis_ and _P.putida_ using Algorithm 1, direct solvers, and the method of Höffner et al. In simulations of _E.coli_ and _M.tuberculosis_, there is discrepancy early in the simulation. In contrast, simulations of _P.putida_ agree up to the point that an ODE solver fails.

## Examples & applications.

There has been a recent surge in interest in modeling microbial communities using genome-scale metabolic models, much of which has focused on equilibrium methods [22, 21, 4, 51, 26]. In order to capture transient behavior and dynamic responses to stimuli, dynamic FBA has also been applied to microbial communities [24, 52, 34]. However, community dynamic FBA invariably leads to a large dynamical system with a high-dimensional parameter space, often with little to no knowledge of parameter values. Any parameter fitting therefore requires repeated numerical simulation of the system. Existing tools to do this are built around a direct simulation approach, requiring many linear program solutions. By drastically reducing the number of optimizations required for numerical simulation, our approach offers the promise of efficient numerical simulation of dynamic FBA which will make parameter fitting more tractable, and may even allow conclusions without well-fit parameters.

Below, we demonstrate that the problem of parameter fitting is an important one by showing that experimental outcomes in even small communities are sensitive to changes in kinetic parameters. Precisely, the kinetic parameters governing the uptake rate of nutrients (i.e., the parameters of the functions $\bm{c}^{2}_{i}$ in Eq. 4) have a profound effect on species competition. Next, we show how repeated simulation with randomly sampled parameters can provide some insight into community structure even without a well-fit set of nutrient uptake parameters. These examples demonstrate the importance of efficient dynamic FBA to microbial community modeling.

### Prediction dependence on nutrient uptake.
The set of unknown functions $\bm{c}_{i}^{2}(\bm{y})$ in Eq. 4 presents a profound problem for dynamic FBA simulation. If the behavior of the system is sensitive to the functions chosen and the parameters of those functions, a single simulation will be of little use in drawing biological conclusions. In order to demonstrate that such a sensitivity exists, we repeatedly simulated the same simple community with different randomly drawn parameters. While a more realistic choice of function may be saturating or sigmoidal (as with Hill or Michaelis-Menten kinetics), for the following experiment we take these functions to be linear:

$c_{ij}^{2}(\bm{y})=\kappa_{ij}y_{j},$ (24)

meaning that the maximum uptake rate of nutrient $y_{j}$ by organism $x_{i}$ is proportional to the concentration of $y_{j}$. This choice minimizes the number of parameters that must be chosen for our analysis of parameter sensitivity, and is in line with an assumption of simple mass action kinetics [53, 54].

The choice of $\kappa_{ij}$ may have a profound effect on the outcome of a community simulation, as it represents how well an organism can sequester a resource when this will optimize the organism’s growth. In order to study this effect in a small community, we sampled a three-species community model with $\kappa_{ij}\in(0,1)$ chosen uniformly at random. We used models for _E.coli_, _S.cerevisiae_ and _M.tuberculosis_ downloaded from the BiGG model database [28]. We simulated with no dilution of metabolites or microbes, and no replenishment of nutrients. In every simulation, some critical metabolite was eventually depleted and the organisms stopped growing. We recorded the simulated final biomass of each organism from each simulation, and the results are shown in Fig. 4.

Fig 4: (Top) Histogram of the final simulated biomass of each of _E.coli_, _S.cerevisiae_ and _M.tuberculosis_ from 95 simulations, each with different metabolite uptake rates $\kappa_{ij}$. (Bottom) Pair-wise comparison of the final simulated biomass densities using a kernel density estimation. In red is the result of uniform uptake rates $\kappa_{ij}=1$ for all $i,j$.

### Community growth effects.

As we saw in the previous section, community growth outcomes depend on the choice of nutrient uptake rates $\kappa_{ij}$. Using Algorithm 1, we can perform Monte-Carlo sampling in order to understand the possible effects on some microorganism of growing in some community. To do this, we randomly sample the set of uptake rates $\kappa_{ij}$ and run simulations of various communities for the chosen uptake rates. Then, the correlation across communities of the final simulated biomass of some organism can be interpreted as the effect of the community on the growth of that organism. A correlation less than $1$ between growth of an organism in different communities indicates that the community is having some effect. To see the direction of this effect, we can fit a simple linear regression model (best fit line) to the final simulated biomasses. Then, the slope of this line tells us whether the organism benefits or is harmed by being in one community over another.

We again simulated _E.coli_, _S.cerevisiae_ and _M.tuberculosis_, using models downloaded from the BiGG model database [28]. Simulations were run with the M9 medium described in S1 File., with no replenishment of resources. Each organism grew to a larger final simulated biomass when alone compared to when in a trio with the other two, which is unsurprising given the finite resources.
This difference was the least pronounced for _S.cerevisiae_, suggesting that this organism is the least negatively affected by the competition. However, this can be seen as only a preliminary observation without better estimates of uptake parameters. Best-fit lines are shown in Fig. 5. Efficient dynamic FBA allows repeated simulation with randomly sampled parameters, which gives an indication of likely behavior even without accurate parameter fitting.

Fig 5: Final simulated biomass of _E.coli_, _S.cerevisiae_ and _M.tuberculosis_ when grown alone or in pairs, for randomly sampled model parameters. Best fit lines indicate the average effect of the community on an organism’s growth.

## Conclusion

Understanding, predicting, and manipulating the make-up of microbial communities requires understanding a complex dynamic process. Genome-scale metabolic models provide an approximation to this process through the quasi-steady state assumption, which leads to dynamic flux balance analysis. However, this system is large and hard to simulate numerically, let alone analyze for qualitative behaviors. As a first step towards a thorough analysis of communities of organisms modeled with dynamic FBA, an efficient method of numerical simulation would provide an essential tool. However, modern tools for simulating dynamic FBA rely on repeatedly solving an optimization problem at every time step [31, 35, 36, 24, 37, 38]. Dynamic FBA simulation can be improved by considering the structure of these linear programs so that many fewer optimizations are required. As of now, the algorithm of Höffner et al. [40] is the only published method which takes advantage of this observation. However, that method does not account for the degeneracy of solutions to the relevant linear programs, meaning that it can choose a solution that cannot be carried forward in time. We present a method that chooses a basis for forward simulation. In contrast to the method of Höffner et al., we choose this basis in such a way as to increase the likelihood that this forward simulation is actually possible.

Efficient dynamic FBA will allow better parameter fitting to time-longitudinal data. Furthermore, it allows for a search of parameter space which can help predict likely model outcomes or learn maps from parameter values to model outcomes.

## Supporting information.

#### S1 File.

M9 medium File. `m9med.csv` defines an M9 minimal medium as adapted from Monk et al. [55].

#### S2 File.

List of Models Used. `modelsUsed.csv` provides name, ID, and URL for the four models used in analysis of the method.

#### S3 Software.

`https://github.com/jdbrunner/surfin_fba`. Code in the Python language for the algorithm described, available at the above URL. This code requires the popular COBRAPy package for metabolic models.

## Acknowledgments

This work was supported by funding from the DeWitt and Curtiss Family Foundation, National Cancer Institute grant R01 CA179243, and the Center for Individualized Medicine, Mayo Clinic.

## References

* 1. Braundmeier AG, Lenz KM, Inman KS, Chia N, Jeraldo P, Walther-António MRS, et al. Individualized medicine and the microbiome in reproductive tract. Frontiers in Physiology. 2015;6:97. doi:10.3389/fphys.2015.00097.
* 2. Calcinotto A, Brevi A, Chesi M, Ferrarese R, Perez LG, Grioni M, et al. Microbiota-driven interleukin-17-producing cells and eosinophils synergize to accelerate multiple myeloma progression. Nature Communications. 2018;9(1):4832.
* 3. Flemer B, Lynch DB, Brown JM, Jeffery IB, Ryan FJ, Claesson MJ, et al.
Tumour-associated and non-tumour-associated microbiota in colorectal cancer. Gut. 2017;66(4):633–643.
* 4. Hale VL, Jeraldo P, Chen J, Mundy M, Yao J, Priya S, et al. Distinct microbes, metabolites, and ecologies define the microbiome in deficient and proficient mismatch repair colorectal cancers. Genome Medicine. 2018;10(1):78. doi:10.1186/s13073-018-0586-6.
* 5. Ng KM, Ferreyra JA, Higginbottom SK, Lynch JB, Kashyap PC, Gopinath S, et al. Microbiota-liberated host sugars facilitate post-antibiotic expansion of enteric pathogens. Nature. 2013;502:96.
* 6. Round JL, Mazmanian SK. The gut microbiota shapes intestinal immune responses during health and disease. Nature Reviews Immunology. 2009;9:313.
* 7. Walsh DM, Mert I, Chen J, Hou X, Weroha SJ, Chia N, et al. The Role of Microbiota in Human Reproductive Tract Cancers. In: American Journal of Physical Anthropology. vol. 168. Wiley; 2019. p. 260–261.
* 8. Fisher CK, Mehta P. Identifying keystone species in the human gut microbiome from metagenomic timeseries using sparse linear regression. PloS one. 2014;9(7):e102451.
* 9. Friedman J, Higgins LM, Gore J. Community structure follows simple assembly rules in microbial microcosms. Nature Ecology & Evolution. 2017;1:0109.
* 10. Goyal A, Maslov S. Diversity, Stability, and Reproducibility in Stochastically Assembled Microbial Ecosystems. Phys Rev Lett. 2018;120:158102. doi:10.1103/PhysRevLett.120.158102.
* 11. Stein RR, Bucci V, Toussaint NC, Buffie CG, Rätsch G, Pamer EG, et al. Ecological modeling from time-series inference: insight into dynamics and stability of intestinal microbiota. PLoS computational biology. 2013;9(12):e1003388.
* 12. Sung J, Kim S, Cabatbat JJT, Jang S, Jin YS, Jung GY, et al. Global metabolic interaction network of the human gut microbiota for context-specific community-scale analysis. Nature Communications. 2017;8:15393. doi:10.1038/ncomms15393.
* 13. Niehaus L, Boland I, Liu M, Chen K, Fu D, Henckel C, et al. Microbial coexistence through chemical-mediated interactions. bioRxiv. 2018;doi:10.1101/358481.
* 14. Posfai A, Taillefumier T, Wingreen NS. Metabolic Trade-Offs Promote Diversity in a Model Ecosystem. Phys Rev Lett. 2017;118:028103. doi:10.1103/PhysRevLett.118.028103.
* 15. Brunner JD, Chia N. Metabolite-mediated modelling of microbial community dynamics captures emergent behaviour more effectively than species–species modelling. Journal of the Royal Society Interface. 2019;16(159):20190423.
* 16. Momeni B, Xie L, Shou W. Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions. Elife. 2017;6:e25051.
* 17. Heirendt L, Arreckx S, Pfau T, Mendoza S, Richelle A, Heinken A, et al. Creation and analysis of biochemical constraint-based models: the COBRA toolbox v3.0. arXiv preprint arXiv:1710.04038. 2017.
* 18. Lewis NE, Nagarajan H, Palsson BO. Constraining the metabolic genotype–phenotype relationship using a phylogeny of in silico methods. Nature Reviews Microbiology. 2012;10:291.
* 19. Lloyd CJ, Ebrahim A, Yang L, King ZA, Catoiu E, O’Brien EJ, et al. COBRAme: A computational framework for genome-scale models of metabolism and gene expression. PLOS Computational Biology. 2018;14(7):1–14. doi:10.1371/journal.pcbi.1006302.
* 20. Chan SHJ, Simons MN, Maranas CD. SteadyCom: Predicting microbial abundances while ensuring community stability. PLOS Computational Biology. 2017;13(5):1–25. doi:10.1371/journal.pcbi.1005539.
* 21.
Diener C, Resendis-Antonio O. Micom: metagenome-scale modeling to infer metabolic interactions in the microbiota. bioRxiv. 2018;doi:10.1101/361907. * 22. Gottstein W, Olivier BG, Bruggeman FJ, Teusink B. Constraint-based stoichiometric modelling from single organisms to microbial communities. Journal of the Royal Society Interface. 2016;13(124):20160627. * 23. Mendes-Soares H, Mundy M, Soares LM, Chia N. MMinte: an application for predicting metabolic interactions among the microbial species in a community. BMC Bioinformatics. 2016;17(1):343. doi:10.1186/s12859-016-1230-3. * 24. Zomorrodi AR, Islam MM, Maranas CD. d-OptCom: Dynamic Multi-level and Multi-objective Metabolic Modeling of Microbial Communities. ACS Synthetic Biology. 2014;3(4):247–257. doi:10.1021/sb4001307. * 25. Borer B, Ataman M, Hatzimanikatis V, Or D. Modeling metabolic networks of individual bacterial agents in heterogeneous and dynamic soil habitats (IndiMeSH). PLoS computational biology. 2019;15(6). * 26. Koch S, Kohrs F, Lahmann P, Bissinger T, Wendschuh S, Benndorf D, et al. RedCom: A strategy for reduced metabolic modeling of complex microbial communities and its application for analyzing experimental datasets from anaerobic digestion. PLoS computational biology. 2019;15(2):e1006759. * 27. Wattam AR, Davis JJ, Assaf R, Boisvert S, Brettin T, Bun C, et al. Improvements to PATRIC, the all-bacterial bioinformatics database and analysis resource center. Nucleic acids research. 2017;45(D1):D535–D542. * 28. King ZA, Lu J, Dräger A, Miller P, Federowicz S, Lerman JA, et al. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models. Nucleic acids research. 2016;44(D1):D515–D522. * 29. Mahadevan R, Edwards JS, Doyle FJ. Dynamic Flux Balance Analysis of Diauxic Growth in Escherichia coli. Biophysical Journal. 2002;83(3):1331 – 1340. doi:https://doi.org/10.1016/S0006-3495(02)73903-9. * 30. Varma A, Palsson BO. Stoichiometric flux balance models quantitatively predict growth and metabolic by-product secretion in wild-type Escherichia coli W3110. Applied and Environmental Microbiology. 1994;60(10):3724–3731. * 31. Zhuang K, Izallalen M, Mouser P, Richter H, Risso C, Mahadevan R, et al. Genome-scale dynamic modeling of the competition between Rhodoferax and Geobacter in anoxic subsurface environments. The ISME journal. 2011;5(2):305. * 32. Henson MA, Hanly TJ. Dynamic flux balance analysis for synthetic microbial communities. IET systems biology. 2014;8(5):214–229. * 33. Song HS, Cannon WR, Beliaev AS, Konopka A. Mathematical modeling of microbial community dynamics: a methodological review. Processes. 2014;2(4):711–752. * 34. Succurro A, Segrè D, Ebenhöh O. Emergent subpopulation behavior uncovered with a community dynamic metabolic model of Escherichia coli diauxic growth. Msystems. 2019;4(1). * 35. Zhuang K, Ma E, Lovley DR, Mahadevan R. The design of long-term effective uranium bioremediation strategy using a community metabolic model. Biotechnology and bioengineering. 2012;109(10):2475–2483. * 36. Harcombe WR, Riehl WJ, Dukovski I, Granger BR, Betts A, Lang AH, et al. Metabolic Resource Allocation in Individual Microbes Determines Ecosystem Interactions and Spatial Dynamics. Cell Reports. 2014;7(4):1104 – 1115. doi:https://doi.org/10.1016/j.celrep.2014.03.070. * 37. Louca S, Doebeli M. Calibration and analysis of genome-based models for microbial ecology. Elife. 2015;4:e08208. * 38. Popp D, Centler F. $\mu$bialSim: constraint-based dynamic simulation of complex microbiomes. BioRxiv. 2019; p. 
716126. * 39. Gomez JA, Höffner K, Barton PI. DFBAlab: a fast and reliable MATLAB code for dynamic flux balance analysis. BMC bioinformatics. 2014;15(1):409. * 40. Höffner K, Harwood SM, Barton PI. A reliable simulator for dynamic flux balance analysis. Biotechnology and Bioengineering. 2012;110(3):792–802. doi:10.1002/bit.24748. * 41. Feinberg M, Horn F. Dynamics of open chemical systems and the algebraic structure of the underlying reaction network. Chemical Engineering Science. 1973;29:775–787. * 42. Bradie B. A Friendly Introduction to Numerical Analysis. Pearson Education Inc.; 2006. * 43. Baroukh C, Muñoz-Tamayo R, Steyer JP, Bernard O. DRUM: a new framework for metabolic modeling under non-balanced growth. Application to the carbon metabolism of unicellular microalgae. PloS one. 2014;9(8). * 44. Øyås O, Stelling J. Genome-scale metabolic networks in time and space. Current Opinion in Systems Biology. 2018;8:51–58. * 45. Zazueta CL, Bernard O, Gouzé JL. Reduction of Metabolic Networks keeping Core Dynamics. Discrete Applied Mathematics. 2018;157(10):2483–2493. * 46. Kondo A, Ishii J, Hara KY, Hasunuma T, Matsuda F. Development of microbial cell factories for bio-refinery through synthetic bioengineering. Journal of biotechnology. 2013;163(2):204–216. * 47. Bordbar A, Monk JM, King ZA, Palsson BO. Constraint-based models predict metabolic and associated cellular functions. Nature Reviews Genetics. 2014;15(2):107–120. * 48. Bertsimas D, Tsitsiklis JN. Introduction to linear optimization. vol. 6. Athena Scientific Belmont, MA; 1997. * 49. Tardella F. The fundamental theorem of linear programming: extensions and applications. Optimization. 2011;60(1-2):283–301. * 50. Tourigny DS, Muriel JC, Beber ME. dfba: Software for efficient simulation of dynamic flux-balance analysis models in Python; 2020. https://gitlab.com/davidtourigny/dynamic-fba. * 51. Islam MM, Fernando SC, Saha R. Metabolic modeling elucidates the transactions in the rumen microbiome and the shifts upon virome interactions. Frontiers in microbiology. 2019;10:2412. * 52. Xu X, Zarecki R, Medina S, Ofaim S, Liu X, Chen C, et al. Modeling microbial communities from atrazine contaminated soils promotes the development of biostimulation solutions. The ISME journal. 2019;13(2):494–508. * 53. Horn F, Jackson R. General mass action kinetics. Archive for Rational Mechanics and Analysis. 1972;47. * 54. Feinberg M. Lectures on Chemical Reaction Networks; 1979. http://www.crnt.osu.edu/LecturesOnReactionNetworks. * 55. Monk JM, Charusanti P, Aziz RK, Lerman JA, Premyodhin N, Orth JD, et al. Genome-scale metabolic reconstructions of multiple Escherichia coli strains highlight strain-specific adaptations to nutritional environments. Proceedings of the National Academy of Sciences. 2013;110(50):20338–20343. ## Appendix A Existence of desired optimal basis. ###### Lemma 1. For a linear program with the form given in Eq. 15 with a basic optimal solution $\bm{w}$, there exists a basic index set $\mathcal{I}$ such that Eq. 21 holds and $\bm{\dot{w}}$ is optimal over the possible choice of basic index sets for $\bm{w}$. ###### Proof. For convenience, we now restate Eq. 15: $\left\\{\begin{array}[]{c}\max(\bm{\tilde{\phi}}\cdot\bm{\tilde{\gamma}})\\\ \begin{bmatrix}\tilde{A}&I\end{bmatrix}\begin{bmatrix}\bm{\tilde{\phi}}\\\ \bm{s}\end{bmatrix}=\bm{c}\\\ \tilde{\phi}_{i}\geq 0,s_{i}\geq 0\end{array}\right\\}$ where we write $(\bm{\tilde{\phi}},\bm{s})=\bm{w}$. 
We note that there is a finite number of basic index sets for $\bm{w}$, and so we need only show that there exists $\mathcal{I}$ such that Eq. 21 holds. Then, the existence of an optimal such $\mathcal{I}$ follows trivially. If $\bm{w}$ is not degenerate, then the unique choice of basic index set $\mathcal{I}$ satisfies Eq. 21. To see this, simply note that if $\bm{w}$ is non-degenerate, then for every $i\in\mathcal{I}$, $w_{i}>0$. Thus, Eq. 21 only includes non-negativity constraints on $\dot{w}_{i}$ if $i\not\in\mathcal{I}$, and for any $i\not\in\mathcal{I}$, $\dot{w}_{i}=0$. Thus, the non-negativity constraints are enforced. The equality constraints are enforced by the definition of $\bm{w}_{\mathcal{I}}(\bm{a})$ given in Eq. 16, which implies that $[\tilde{A}\;I]\bm{w}_{\mathcal{I}}(\bm{a})=\bm{a}$ for any vector $\bm{a}\in\mathbb{R}^{n}$. In the case of a degenerate solution $\bm{w}$, we use the following procedure to choose a set of basic variables. Let $\mathcal{J}\subset\\{1,...,n\\}$ be the indices of the $n_{1}$ slack variables such that $s_{j}=0$ if $j\in\mathcal{J}$ (recalling that each $s_{i}$ is a component of the vector $\bm{w}$). Then, let $\tilde{A}_{\mathcal{J}}$ be the matrix with rows $m_{j}$ of $\tilde{A}$ for $j\in\mathcal{J}$. Next, let $\mathcal{J}^{*}$ be the indices of the $n_{2}$ non-slack variables such that $\phi_{j}=0$ and $I_{\mathcal{J}^{*}}$ the corresponding rows of the identity matrix $I$. Notice that we now have that $M\bm{\tilde{\phi}}=\begin{bmatrix}\tilde{A}_{\mathcal{J}}\\\ -I_{\mathcal{J}^{*}}\end{bmatrix}\bm{\tilde{\phi}}=\begin{bmatrix}\bm{c}_{\mathcal{J}}\\\ \bm{0}\end{bmatrix}.$ (25) and that if $w_{j}=0$ then either $j\in\mathcal{J}^{*}$ or $w_{j}=s_{k}$ where $k\in\mathcal{J}$ so that $\bm{m}_{k}\cdot\bm{\tilde{\phi}}=c_{k}$ (i.e. $s_{k}$ is a slack variable and $s_{k}=0$). Notice that because Eq. 15 has a bounded solution, then we can assume without loss of generality that if $M\in\mathbb{R}^{q\times r}$, then $\mathit{rank}(M)=r$ (i.e. $M$ is full rank) because $\bm{w}$ must satisfy at least $r$ linearly independent constraints. If this is not the case, then the problem can be projected onto a lower dimensional subspace. Consider the linear program $\left\\{\begin{array}[]{c}\max(\bm{y}\cdot\bm{\gamma})\\\ \begin{bmatrix}M&I\end{bmatrix}\begin{bmatrix}\bm{y}_{\bm{\tilde{\phi}}}\\\ \bm{y}_{\bm{s}}\end{bmatrix}=\begin{bmatrix}\frac{d}{dt}\bm{c}_{\mathcal{J}}\\\ \bm{0}\end{bmatrix}\\\ y_{j}\geq 0\end{array}\right\\}.$ (26) Assume that there is some basic optimal solution to Eq. 26 with a basic index set $\hat{\mathcal{I}}$ such that exactly $r$ slack variables are non-basic, where again $r=|\bm{\phi}|$ is the rank of the matrix $M$. This implies that there are $r$ linearly independent rows of $M$ (which we index by $\mathcal{J}^{{\dagger}}$) which form an invertible matrix $\tilde{M}$ such that $\tilde{M}\bm{y}_{\bm{\tilde{\phi}}}=\begin{bmatrix}\frac{d}{dt}\bm{c}_{\mathcal{J}^{{\dagger}}}\\\ \bm{0}\end{bmatrix}$ (27) and we can then determine $\bm{y}_{\bm{s}}$ by $\bm{y}_{\bm{s}}=\begin{bmatrix}\frac{d}{dt}\bm{c}_{\mathcal{J}}\\\ \bm{0}\end{bmatrix}-M\bm{y}_{\bm{\tilde{\phi}}}$ (28) and note that each $(\bm{y}_{\bm{s}})_{i}\geq 0$. We now rewrite $\bm{\dot{w}}=(\bm{\dot{w}}_{\bm{\tilde{\phi}}},\bm{\dot{w}}_{\bm{s}})$ from Eq. 21 and define $\bm{\dot{w}}_{\bm{\tilde{\phi}}}=\bm{y}_{\bm{\tilde{\phi}}}$ and $\bm{\dot{w}}_{\bm{s}}=\frac{d}{dt}\bm{c}-M\bm{\dot{w}}_{\bm{\tilde{\phi}}}$ (29) and conclude that this satisfies the constraints of Eq. 21. 
Next, we take $\bm{\tilde{\phi}}$ to be the unique solution to

$\tilde{M}\bm{\tilde{\phi}}=\begin{bmatrix}\bm{c}_{\mathcal{J}^{{\dagger}}}\\\ \bm{0}\end{bmatrix}$ (30)

and $\bm{s}=\bm{c}-\tilde{A}\bm{\tilde{\phi}}$. Finally, we take $\mathcal{I}=(\hat{\mathcal{I}}\setminus\mathcal{J}^{*})\cup\mathcal{J}^{c}$ and note that this basis set enforces exactly the same $r$ linearly independent constraints as $\tilde{M}$. (In practice, we may simply use $\tilde{M}$ to find $\tilde{\bm{\phi}}$.)

We now prove that there is some basic optimal solution to Eq. 26 with a basic index set $\hat{\mathcal{I}}$ such that exactly $r$ slack variables are non-basic, where $r$ is the rank of the matrix $M$. First we note that for any basic optimal solution, if there are $r^{*}>r$ slack variables which are non-basic, then there are $r^{*}$ rows of $B_{\hat{\mathcal{I}}}$ which are non-zero only in the columns of ${M}$. Therefore, $B_{\hat{\mathcal{I}}}$ is not invertible. We can conclude that the number of non-basic slack variables is at most $r$.

Next, suppose $\bm{\dot{w}}^{*}$ is a basic optimal solution with basis $\mathcal{I}^{*}$ such that there are $r^{*}<r$ slack variables which are non-basic. We would like to assume that there are at least $r$ slack variables $s_{k}^{*}$ corresponding to $r$ linearly independent constraints such that $s_{k}^{*}=0$. Recall that $\tilde{A}$ was formed with repeated (negated) columns in order to write the problem in standard form (the non-negativity bounds of Eq. 15 are artificial). Therefore, we can find some vector $\bm{x}$ in the kernel of the matrix formed by the rows of $\tilde{A}$ corresponding to zero slacks which also has $\bm{x}\cdot\bm{\gamma}=0$. We can therefore find a vector $\bm{y}$ in the kernel of

$\begin{bmatrix}\tilde{A}_{\mathcal{J}}&I&0\\\ -I_{\mathcal{J}^{*}}&0&I\end{bmatrix}$

which has $y_{k}=0$ if $s_{k}=0$ and $y_{j}\neq 0$ if $s_{j}\neq 0$ and $s_{j}$ corresponds to a constraint that is not a linear combination of the constraints corresponding to the $s_{k}=0$. There is at least one such constraint as long as the $0$ slack variables correspond to constraints with span less than dimension $r$, and so we can take $\bm{\dot{w}}+\lambda\bm{y}$ for some $\lambda$ and thereby increase the number of non-zero slack variables. We can therefore assume without loss of generality that there are at least $r$ slack variables $s_{k}^{*}$ corresponding to $r$ linearly independent constraints such that $s_{k}^{*}=0$, as was desired.

We can finally choose some linearly independent set of $r$ constraints which correspond to $0$ slack variables, and call the matrix whose rows are these constraint vectors $M^{*}$. Now, because there are $r^{*}<r$ non-slack basic variables, there is some non-slack, non-basic variable $v_{j}$ such that the column $m_{j}^{*}$ of $M^{*}$ (and ${m}_{j}$ of ${M}$) is linearly independent from the columns corresponding to the $r^{*}$ non-slack basic variables. We can conclude that if

$B_{\mathcal{I}^{*}}\bm{\lambda}={m}_{j}$ (31)

then there is some $\lambda_{k}\neq 0$ where $k$ corresponds to the index of a slack variable with $s_{k}=0$. We can remove $k$ from the basic index set and add $j$ without changing $\bm{\dot{w}}^{*}$, therefore preserving optimality and feasibility. We have then increased the number of non-basic slack variables, and we can repeat if necessary to form $\hat{\mathcal{I}}$ with exactly $r$ non-basic slack variables. ∎
2024-09-04T02:54:58.132395
2020-03-07T23:33:34
2003.03685
{ "authors": "Jakob Runge", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26100", "submitter": "Jakob Runge", "url": "https://arxiv.org/abs/2003.03685" }
arxiv-papers
# Discovering contemporaneous and lagged causal relations in autocorrelated nonlinear time series datasets Jakob Runge German Aerospace Center Institute of Data Science 07745 Jena, Germany and Technische Universität Berlin 10623 Berlin, Germany ###### Abstract The paper introduces a novel conditional independence (CI) based method for linear and nonlinear, lagged and contemporaneous causal discovery from observational time series in the causally sufficient case. Existing CI-based methods such as the PC algorithm and also common methods from other frameworks suffer from low recall and partially inflated false positives for strong autocorrelation which is an ubiquitous challenge in time series. The novel method, PCMCI+, extends PCMCI [Runge et al., 2019b] to include discovery of contemporaneous links. PCMCI+ improves the reliability of CI tests by optimizing the choice of conditioning sets and even benefits from autocorrelation. The method is order-independent and consistent in the oracle case. A broad range of numerical experiments demonstrates that PCMCI+ has higher adjacency detection power and especially more contemporaneous orientation recall compared to other methods while better controlling false positives. Optimized conditioning sets also lead to much shorter runtimes than the PC algorithm. PCMCI+ can be of considerable use in many real world application scenarios where often time resolutions are too coarse to resolve time delays and strong autocorrelation is present. ## 1 INTRODUCTION A number of frameworks address the problem of causal discovery from observational data utilizing different assumptions. Next to Bayesian score- based methods [Chickering, 2002], classical Granger causality (GC) [Granger, 1969], and the more recent restricted structural causal models (SCM) framework [Peters et al., 2017, Spirtes and Zhang, 2016], conditional independence (CI) based network learning algorithms [Spirtes et al., 2000] form a main pillar. A main representative of the CI framework in the causally sufficient case (no unobserved common drivers) is the PC algorithm [Spirtes and Glymour, 1991]. Its advantages lie, firstly, in the flexibility of utilizing a wide and growing class of CI tests, from linear partial correlation (ParCorr) and non- parametric residual-based approaches [Ramsey, 2014, Runge et al., 2019b] to Kernel measures [Zhang et al., 2011], tests based on conditional mutual information [Runge, 2018b], and neural networks [Sen et al., 2017]. Secondly, the PC algorithm utilizes sparsity making it applicable also to large numbers of variables while score- and SCM-based methods are more difficult to adapt to nonlinear high-dimensional causal discovery. Causal discovery in the time series case is partially less and partially more challenging [Runge et al., 2019a]. Obviously, time-order greatly helps in identifying causal directions for lagged links (causes precede effects). This forms the basis of GC which, however, cannot deal with contemporaneous links and suffers from the curse of dimensionality [Runge et al., 2019b]. SCM-based methods such as LiNGAM [Hyvärinen et al., 2010] and also CI-based methods [Runge et al., 2019b, Entner and Hoyer, 2010, Malinsky and Spirtes, 2018] have been adapted to the time series case. In [Moneta et al., 2011] GC is augmented by the PC algorithm. However, properties such as non-stationarity and especially autocorrelation can make causal discovery much less reliable. 
Here I show that autocorrelation, an ubiquitous property of time series (e.g., temperature data), is especially detrimental and propose a novel CI-based method, PCMCI+, that extends the PCMCI method from [Runge et al., 2019b] to also include discovery of contemporaneous links, which requires substantial changes. PCMCI+ is based on two central ideas that deviate from the PC algorithm and the time-series adaptations of FCI in [Entner and Hoyer, 2010, Malinsky and Spirtes, 2018]: First, an edge removal phase is conducted separately for lagged and contemporaneous conditioning sets and the lagged phase uses much fewer CI tests. Secondly, and more importantly, PCMCI+ optimizes the choice of conditioning sets for the individual CI tests to make them better calibrated under autocorrelation and increase detection power by utilizing the momentary conditional independence idea [Runge et al., 2019b]. The paper is structured as follows. Section 2 briefly introduces the problem and Sect. 3 describes the method and states theoretical results. Numerical experiments in Sect. 4 show that PCMCI+ benefits from strong autocorrelation and yields much more adjacency detection power and especially more orientation recall for contemporaneous links while better controlling false positives at much shorter runtimes than the PC algorithm. A Supplementary Material (SM) contains proofs and further numerical experiments. ## 2 TIME SERIES CAUSAL DISCOVERY ### 2.1 PRELIMINARIES We are interested in discovering time series graphs (e.g., [Runge, 2018a]) that can represent the temporal dependency structure underlying complex dynamical systems. Consider an underlying discrete-time structural causal process $\mathbf{X}_{t}=(X^{1}_{t},\ldots,X^{N}_{t})$ with $\displaystyle X^{j}_{t}$ $\displaystyle=f_{j}\left(\mathcal{P}(X^{j}_{t}),\,\eta^{j}_{t}\right)$ (1) where $f_{j}$ are arbitrary measurable functions with non-trivial dependencies on their arguments and $\eta^{j}_{t}$ represents mutually ($i\neq j$) and serially ($t^{\prime}\neq t$) independent dynamical noise. The nodes in a time series graph $\mathcal{G}$ (example in Fig. 1) represent the variables $X^{j}_{t}$ at different lag-times and the set of variables that $X^{j}_{t}$ depends on defines the causal parents $\mathcal{P}(X^{j}_{t})\subset\mathbf{X}^{-}_{t+1}=(\mathbf{X}_{t},\mathbf{X}_{t-1},\ldots){\setminus}\\{X^{j}_{t}\\}$. We denote _lagged parents_ by $\mathcal{P}^{-}_{t}(X^{j}_{t})=\mathcal{P}(X^{j}_{t})\cap\mathbf{X}^{-}_{t}$. A lagged ($\tau>0$) or contemporaneous ($\tau=0$) causal link $X^{i}_{t-\tau}\to X^{j}_{t}$ exists if $X^{i}_{t-\tau}\in\mathcal{P}(X^{j}_{t})$. Throughout this work the graph $\mathcal{G}$ is assumed _acyclic_ and the causal links _stationary_ meaning that if $X^{i}_{t-\tau}\to X^{j}_{t}$ for some $t$, then $X^{i}_{t^{\prime}-\tau}\to X^{j}_{t^{\prime}}$ for all $t^{\prime}\neq t$. Then we can always fix one variable at $t$ and take $\tau\geq 0$. Note that the stationarity assumption may be relaxed. The graph is actually infinite in time, but in practice only considered up to some maximum time lag $\tau_{\max}$. We define the set of adjacencies $\mathcal{A}(X^{j}_{t})$ of a variable $X^{j}_{t}$ to include all $X^{i}_{t-\tau}$ for $\tau\geq 0$ that have a (lagged or contemporaneous) link with $X^{j}_{t}$ in $\mathcal{G}$. We define contemporaneous adjacencies as $\mathcal{A}_{t}(X^{j}_{t})=\mathcal{A}(X^{j}_{t})\cap\mathbf{X}_{t}$. 
A sequence of $m$ contemporaneous links is called a _directed contemporaneous path_ if for all $k\in\\{1,\ldots,m\\}$ the link $X^{i+k-1}_{t}\to X^{i+k}_{t}$ occurs. We call $X^{i}_{t}$ a _contemporaneous ancestor_ of $X^{j}_{t}$ if there is a directed contemporaneous path from $X^{i}_{t}$ to $X^{j}_{t}$ and we denote the set of all contemporaneous ancestors as $\mathcal{C}_{t}(X^{j}_{t})$ (which excludes $X^{j}_{t}$ itself). We denote separation in the graph by $\bowtie$, see [Runge, 2018a] for further notation details. ### 2.2 PC ALGORITHM The PC algorithm is the most wide-spread CI-based causal discovery algorithm for the causally sufficient case and utilizes the Markov and Faithfulness assumptions as formally defined in Sect. S1. Adapted to time series (analogously to the methods for the latent case in [Entner and Hoyer, 2010, Malinsky and Spirtes, 2018]), it consists of three phases: First, a skeleton of adjacencies is learned based on iteratively testing which pairs of variables (at different time lags) are conditionally independent at some significance level $\alpha_{\rm PC}$ (Alg. 2 with the PC option). For lagged links, time-order automatically provides orientations, while for contemporaneous links a collider phase (Alg. S2) and rule phase (Alg. S3) determine the orientation of links. CI-based discovery algorithms can identify the contemporaneous graph structure only up to a Markov equivalence class represented as a completed partially directed acyclic graph (CPDAG). We denote links for which more than one orientation occurs in the Markov equivalence class by $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. Here we consider a modification of PC that removes an undesired dependence on the order of variables, called PC-stable [Colombo and Maathuis, 2014]. These modifications also include either the _majority_ or _conservative_ [Ramsey et al., 2006] rule for handling ambiguous triples where separating sets are inconsistent, and conflicting links where different triples in the collider or orientation phase lead to conflicting link orientations. With the _conservative_ rule the PC algorithm is consistent already under the weaker Adjacency Faithfulness condition [Ramsey et al., 2006]. Another approach for the time series case (considered in the numerical experiments) is to combine vector-autoregressive modeling to identify lagged links with the PC algorithm for the contemporaneous causal structure [Moneta et al., 2011]. ### 2.3 AUTOCORRELATION Figure 1: The curse and blessing of autocorrelation. Linear example of model (3) with ground truth links shown for the PCMCI+ case (right panel). All autodependency coefficients are $0.95$ (except $0.475$ for $X^{5,6}$) and all cross-coupling coefficients are $0.4$ ($\pm$ indicated by red/blue links). The graphs show true and false link detection rates as the link width (if $>$ 0.06) for true (color indicating ParCorr) and incorrect links (grey) for the PC algorithm, Alg. 1, and the variants PCMCI+ and PCMCI${}^{+}_{0}$ as explained in the text (detection rates based on $500$ realizations run at $\alpha_{\rm PC}=0.01$ for $T=500$). To illustrate the challenge of autocorrelation, in Fig. 1 we consider a linear example with lagged and contemporaneous ground truth links shown for the PCMCI+ case (right panel). The PC algorithm (Alg. 2 with ParCorr CI test) starts by testing all unconditional independencies ($p=0$). 
Here the coupled pairs $(X^{5},X^{6})$ as well as $(X^{7},X^{8})$ are independent of the other variables and removed from each others adjacency sets, which shows how PC exploits sparsity and reduces the estimation dimension compared to fitting a full model on the whole past as in the GC framework. Due to the strong autocorrelation the remaining variables, on the other hand, are almost all adjacent to each other at multiple time lags in this iteration. In the next iteration, CI for all remaining links is tested conditional on all one- dimensional ($p=1$) conditioning sets. Here the PC algorithm removes the true lagged link $X^{1}_{t-1}\to X^{0}_{t}$ (black dots) due to the incorrect CI result $X^{1}_{t-1}\perp\\!\\!\\!\perp X^{0}_{t}|X^{1}_{t-2}$ (condition marked by blue box). Later this then leads to the false positive $X^{1}_{t-2}\to X^{0}_{t}$ (grey link) since $X^{1}_{t-1}$ is not conditioned on. In a similar way the true link $X^{1}_{t-2}\to X^{3}_{t}$ is missed leading to the false positive $X^{0}_{t-1}\to X^{3}_{t}$. Further, the true contemporaneous link $X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ (similarly $X^{3}_{t}{\circ\\!{\\--}\\!\circ}X^{4}_{t}$) is removed when conditioning on $\mathcal{S}=(X^{4}_{t-1},X^{3}_{t-1})$ (blue boxes), which leads to the false positive autodependencies at lag $2$ for $X^{2}_{t},X^{4}_{t}$, while the false autodependency $X^{3}_{t-2}\to X^{3}_{t}$ is due to missing $X^{1}_{t-2}\to X^{3}_{t}$. This illustrates the pattern of a cascade of false negative errors (missing links) leading to false positives in later stages of the PC algorithm. What determines the removal of a true link in the finite sample case? Detection power depends on sample size, the significance level $\alpha_{\rm PC}$, the CI test dimension ($p+2$), and effect size, e.g., the absolute ParCorr (population) value, here denoted $I(X^{i}_{t-\tau};X^{j}_{t}|\mathcal{S})$ for some conditioning set $\mathcal{S}$. Within each $p$-iteration the sample size, $\alpha_{\rm PC}$, and the dimension are the same and a link will be removed if $I(X^{i}_{t-\tau};X^{j}_{t}|\mathcal{S})$ falls below the $\alpha_{\rm PC}$-threshold for _any_ considered $\mathcal{S}$. Hence, the overall minimum effect size $\min_{\mathcal{S}}[I(X^{i}_{t-\tau};X^{j}_{t}|\mathcal{S})]$ determines whether a link is removed. The PC algorithm will iterate through _all_ subsets of adjacencies such that this minimum can become very small. Low effect size can be understood as a low (causal) signal-to-noise ratio: Here $I(X^{1}_{t-1};X^{0}_{t}|X^{1}_{t-2})$ is small since the signal $X^{1}_{t-1}$ is reduced by conditioning on its autodependency $X^{1}_{t-2}$ and the ‘noise’ in $X^{0}_{t}$ is large due to its strong autocorrelation. But autocorrelation can also be a blessing. The contemporaneously coupled pair $(X^{7},X^{8})$ illustrates a case where autocorrelation helps to identify the orientation of the link. Without autocorrelation the output of PC would be an unoriented link to indicate the Markov equivalence class. On the other hand, the detection rate here is rather weak since, as above, the signal (link from $X^{8}_{t}$) is small compared to the noise (autocorrelation in $X^{7}$). This illustrates the curse and blessing of autocorrelation. In summary, the PC algorithm often results in false negatives (low recall) and these then lead to false positives. 
Another reason for false positives are ill-calibrated tests: To correctly model the null distribution, each individual CI test would need to account for autocorrelation, which is difficult in a complex multivariate and potentially nonlinear setting [Runge, 2018a]. In the experiments we will see that the PC algorithm features inflated false positives. As a side comment, the pair $(X^{5},X^{6})$ depicts a feedback cycle. These often occur in real data and the example shows that time series graphs allow to resolve time-delayed feedbacks while an aggregated _summary graph_ would contain a cyclic dependency and summary graph-based methods assuming acyclic graphs would not work. The orientation of the contemporaneous link $X^{6}_{t}\to X^{5}_{t}$ is achieved via rule R1 in the orientation phase of PC (Alg. S3). ## 3 PCMCI+ Figure 2: Schematic of PCMCI+. Note that for ease of visualization repeating edges due to stationarity are only partially shown. ### 3.1 ALGORITHM The goal of PCMCI+ is to optimize the choice of conditioning sets in CI tests in order to increase detection power and at the same time maintain well- calibrated tests. The approach is based on two central ideas, (1) separating the skeleton edge removal phase into a lagged and contemporaneous conditioning phase with much fewer CI tests and (2) utilizing the momentary conditional independence (MCI) test [Runge et al., 2019b] idea in the contemporaneous conditioning phase. Below, I explain the reasoning behind. Figure 2 illustrates the steps. First, the goal of PC’s skeleton phase is to remove all those adjacencies that are due to indirect paths and common causes by conditioning on subsets $\mathcal{S}$ of the variables’ neighboring adjacencies in each iteration. Consider a variable $X^{j}_{t}$. If we test lagged adjacencies from nodes $X^{i}_{t-\tau}\in\mathbf{X}^{-}_{t}$ conditional on the whole past, i.e., $\mathcal{S}=\mathbf{X}^{-}_{t}\setminus\\{X^{i}_{t-\tau}\\}$, the only indirect adjacencies remaining are due to paths through contemporaneous parents of $X^{j}_{t}$. This is in contrast to conditioning sets on contemporaneous adjacencies which can also open up paths $X^{j}_{t}\to X^{k}_{t}\leftarrow X^{i}_{t-\tau}$ if $X^{k}_{t}$ is conditioned on. One reason why the PC algorithm tests _all_ combinations of subsets $\mathcal{S}$ is to avoid opening up such collider paths. Therefore, one approach would be to start by $\mathcal{S}=\mathbf{X}^{-}_{t}\setminus\\{X^{i}_{t-\tau}\\}$ and then iterate through contemporaneous conditions. A similar idea lies behind the combination of GC and the PC algorithm in [Moneta et al., 2011]. However, conditioning on large-dimensional conditioning sets strongly affects detection power [Runge et al., 2019b]. To avoid this, the lagged conditioning phase of PCMCI+ (Alg. 1, see Fig. 2 left panels) tests all pairs $(X^{i}_{t-\tau},X^{j}_{t})$ for $\tau>0$ conditional on only the _strongest_ $p$ adjacencies of $X^{j}_{t}$ in each $p$-iteration without going through all $p$-dimensional subsets of adjacencies. This choice $(i)$ improves the causal signal-to-noise ratio and recall since for a given test $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}$ the ‘noise’ in $X^{j}_{t}$ due to other lagged adjacencies is conditioned out, $(ii)$ leads to fewer CI tests further improving recall, and $(iii)$ speeds up the skeleton phase. We denote the lagged adjacency set resulting from Alg. 1 as $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$. Lemma 1 in Sect. 
3.2 states that the only remaining indirect adjacencies in $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ are then due to paths passing through contemporaneous parents of $X^{j}_{t}$. In the schematic in Fig. 2 this is the link $Y_{t-1}\to X_{t}$. Secondly, in Alg. 2 (Fig. 2 center panels) the graph $\mathcal{G}$ is initialized with all contemporaneous adjacencies plus all lagged adjacencies from $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$. Algorithm 2 tests all (unordered lagged and ordered contemporaneous) adjacent pairs $(X^{i}_{t-\tau},X^{j}_{t})$ and iterates through contemporaneous conditions $\mathcal{S}\subseteq\mathcal{A}_{t}(X^{j}_{t})$ with the MCI test $\displaystyle X^{i}_{t-\tau}$ $\displaystyle{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau}).$ (2) In the schematic in Fig. 2 the condition on $\mathcal{S}=Y_{t}$, as part of the full conditioning set $\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$, removes the link $X_{t}{\circ\\!{\\--}\\!\circ}Z_{t}$. The condition on $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ blocks paths through lagged parents; the advantage of the additional conditioning on $\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ is discussed in the following. We denote the variant without the condition on $\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ as PCMCI${}^{+}_{0}$. Both versions are followed by the collider orientation phase (Alg. S2) and rule orientation phase (Alg. S3), which are deferred to the SM since they are equivalent to those of the PC algorithm, with the modification that the additional CI tests in the collider phase for the conservative or majority rule are also based on the test (2) (Fig. 2 right panel). We now discuss PCMCI${}^{+}_{0}$ and PCMCI+ on the example in Fig. 1. Algorithm 1 tests $X^{1}_{t-1}\to X^{0}_{t}$ conditional on $\mathcal{S}=\\{X^{0}_{t-1}\\}$ for $p=1$ and $\mathcal{S}=\\{X^{0}_{t-1},X^{1}_{t-2}\\}$ for $p=2$ as the two strongest adjacencies (as determined by the test statistic value, see pseudo-code). In both of these tests the effect size $I$ (causal signal-to-noise ratio) is much larger than for the condition on $\mathcal{S}=\\{X^{1}_{t-2}\\}$, which led to the removal of $X^{1}_{t-1}\to X^{0}_{t}$ in the PC algorithm. In Sect. 3.2 we elaborate more rigorously on effect size. In the example $\widehat{\mathcal{B}}^{-}_{t}(X^{2}_{t})$ is indicated as blue boxes in the second panel and contains lagged parents as well as adjacencies due to paths passing through contemporaneous parents of $X^{2}_{t}$. One false positive, likely due to an ill-calibrated test caused by autocorrelation, is marked by a star. Based on these lagged adjacencies, Alg. 2 with the PCMCI${}^{+}_{0}$ option then recovers all lagged links (3rd panel), but it still misses the contemporaneous adjacencies $X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ and $X^{3}_{t}{\circ\\!{\\--}\\!\circ}X^{4}_{t}$, and we also see strong lagged false positives from $X^{3}$ to $X^{2}$ and $X^{4}$. What happened here? The problem now lies in the tests on contemporaneous links: The CI test for PCMCI${}^{+}_{0}$ in the $p=0$ loop, like the original PC algorithm, will test _ordered_ contemporaneous pairs.
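Before returning to the example, the time shift in the conditioning sets of the MCI test (2) can be made concrete. The following is a minimal sketch (a hypothetical helper in Python, not tigramite code), where variables are represented as (index, lag) pairs with lags relative to a reference time $t$:

def mci_condition_set(i, tau, j, S, B_hat):
    # Z = S, B^-_t(X^j_t) without X^i_{t-tau}, and B^-_{t-tau}(X^i_{t-tau});
    # B_hat[k] holds the lagged conditions of variable k as (index, lag)
    # pairs with lags relative to the variable's own time.
    Z = set(S)
    Z |= {cond for cond in B_hat[j] if cond != (i, -tau)}
    # The conditions of X^i_{t-tau} are shifted by -tau to refer to time t:
    Z |= {(k, lag - tau) for (k, lag) in B_hat[i]}
    return Z

# Example: B^-_t(X^0_t) = {X^0_{t-1}, X^1_{t-1}}, B^-_t(X^1_t) = {X^1_{t-1}};
# testing X^1_{t-1} against X^0_t (tau = 1) with empty S:
B_hat = {0: {(0, -1), (1, -1)}, 1: {(1, -1)}}
print(mci_condition_set(1, 1, 0, set(), B_hat))
# -> {(0, -1), (1, -2)}, i.e., condition on X^0_{t-1} and X^1_{t-2}
# (set ordering may vary)

The key point is that the lagged conditions of $X^{i}_{t-\tau}$ are shifted by $-\tau$ so that all conditions refer to the same reference time $t$.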
Returning to the example: $X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ is first tested conditional on $\widehat{\mathcal{B}}^{-}_{t}(X^{3}_{t})$ and, if the link is not removed, $X^{3}_{t}{\circ\\!{\\--}\\!\circ}X^{2}_{t}$ conditional on $\widehat{\mathcal{B}}^{-}_{t}(X^{2}_{t})$. Here $X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ is removed conditional on $\widehat{\mathcal{B}}^{-}_{t}(X^{3}_{t})$ (indicated by blue boxes in the panel) because $I(X^{2}_{t};X^{3}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{3}_{t}))$ falls below the significance threshold. The second central idea of PCMCI+ is to improve the effect size of CI tests for contemporaneous links by conditioning on the lagged adjacencies $\widehat{\mathcal{B}}^{-}_{t}$ of _both_ variables in the CI test (2) (see blue and red boxes in Fig. 1 right panel). At least for the initial phase $p=0$ one can prove that for non-empty $\widehat{\mathcal{B}}^{-}_{t}$ the effect size of the PCMCI+ CI test is always strictly larger than that of the PCMCI${}^{+}_{0}$ test (Thm. 4). I conjecture that this similarly holds for PCMCI+ vs. the PC algorithm. Higher effect size leads to higher recall, and PCMCI+ now recovers all lagged as well as contemporaneous links and also correctly removes the lagged false positives that PCMCI${}^{+}_{0}$ obtains. Also the contemporaneously coupled pair $(X^{7},X^{8})$ is now much better detected since the MCI effect size $I(X^{7}_{t};X^{8}_{t}|X^{7}_{t-1})$ is larger than $I(X^{7}_{t};X^{8}_{t})$, which is one of the two effect sizes tested by PCMCI${}^{+}_{0}$ and the PC algorithm here. Another advantage, discussed in [Runge et al., 2019b], is that the PCMCI+ CI tests are better calibrated, in contrast to the PCMCI${}^{+}_{0}$ and PC algorithm tests, since conditioning on the lagged parents of both variables removes autocorrelation effects. Note that for lagged links the effect size of PCMCI+ is generally smaller than that of PCMCI${}^{+}_{0}$ since the extra condition on $\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ can only reduce effect size (see [Runge et al., 2012]). This is the cost of avoiding inflated false positives. In summary, the central PCMCI+ idea is to increase effect size in individual CI tests to achieve higher detection power and at the same time maintain well-controlled false positives also for high autocorrelation. Correct adjacency information then leads to better orientation recall in Algs. S2 and S3. The other advantage of PCMCI+ compared to the PC algorithm is a much faster and, as numerical examples show, also much less variable runtime. The full algorithm is detailed in pseudo-code Algorithms 1, 2, S2, and S3 with differences to PC and PCMCI${}^{+}_{0}$ indicated. Note that pairs $(X^{i}_{t-\tau},X^{j}_{t})$ in lines 5 and 6 of Alg. 2 are ordered for $\tau=0$ and unordered for $\tau>0$. One can construct (rather conservative) $p$-values for the skeleton adjacencies $(X^{i}_{t-\tau},X^{j}_{t})$ by taking the maximum $p$-value over all CI tests conducted in Alg. 2. A link strength can be defined corresponding to the test statistic value of the maximum $p$-value. Based on the PC stable variant, PCMCI+ is fully order-independent. Shown here is the majority-rule implementation of the collider phase; the version without handling of ambiguous triples and the version for the conservative rule are detailed in Alg. S2. Note that the tests in the collider phase also use the CI tests (2). Like other CI-based methods, PCMCI+ has the free parameters $\alpha_{\rm PC}$, $\tau_{\max}$, and the choice of the CI test.
$\alpha_{\rm PC}$ can be chosen based on cross-validation or an information criterion (implemented in tigramite). $\tau_{\max}$ should be larger than or equal to the maximum true time lag of any parent and can in practice also be chosen based on model selection. However, the numerical experiments indicate that, in contrast to GC, a too large $\tau_{\max}$ does not degrade performance much, and $\tau_{\max}$ can also be chosen based on the lagged dependence functions, see [Runge et al., 2019b]. PCMCI+ can flexibly be combined with different CI tests for nonlinear causal discovery and for different variable types (discrete or continuous, univariate or multivariate). The computational complexity of PCMCI+ strongly depends on the network structure. The sparser the causal dependencies, the faster the convergence. Compared to the original PC algorithm with worst-case exponential complexity, the complexity is much reduced since Alg. 1 only has polynomial complexity [Runge et al., 2019b] and Alg. 2 only iterates through contemporaneous conditioning sets; hence, the worst-case exponential complexity only applies to $N$ and not to $N\tau_{\max}$.

Algorithm 1 (PCMCI+ / PCMCI${}^{+}_{0}$ lagged skeleton phase)
1: Time series dataset $\mathbf{X}=(X^{1},\,\ldots,X^{N})$, max. time lag $\tau_{\max}$, significance threshold $\alpha_{\rm PC}$, CI test ${\rm CI}(X,\,Y,\,\mathbf{Z})$ returning $p$-value and test statistic value $I$
2: for all $X^{j}_{t}$ in $\mathbf{X}_{t}$ do
3: Initialize $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){=}\mathbf{X}^{-}_{t}{=}(\mathbf{X}_{t-1},\dots,\mathbf{X}_{t-\tau_{\max}})$ and $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\infty~{}~{}\forall~{}X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$
4: Let $p=0$
5: while any $X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ satisfies $|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$ do
6: for all $X^{i}_{t-\tau}$ in $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ satisfying $|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$ do
7: $\mathcal{S}=$ first $p$ variables in $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\}$
8: $(\text{$p$-value},\,I)\leftarrow$ CI($X^{i}_{t-\tau},\,X^{j}_{t},\,\mathcal{S}$)
9: $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\min(|I|,I^{\min}(X^{i}_{t-\tau},X^{j}_{t}))$
10: if $p$-value $>\alpha_{\rm PC}$ then mark $X^{i}_{t-\tau}$ for removal
11: Remove non-significant entries and sort $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ by $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})$ from largest to smallest
12: Let $p=p+1$
13: return $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$

Algorithm 2 (PCMCI+ / PCMCI${}^{+}_{0}$ contemporaneous skeleton phase / PC full skeleton phase)
1: Time series dataset $\mathbf{X}=(X^{1},\,\ldots,X^{N})$, max. time lag $\tau_{\max}$, significance threshold $\alpha_{\rm PC}$, ${\rm CI}(X,\,Y,\,\mathbf{Z})$, PCMCI+ / PCMCI${}^{+}_{0}$: $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$
2: PCMCI+ / PCMCI${}^{+}_{0}$: Form time series graph $\mathcal{G}$ with lagged links from $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$ and fully connect all contemporaneous variables, i.e., add $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ for all $X^{i}_{t}\neq X^{j}_{t}\in\mathbf{X}_{t}$
3: PC: Form fully connected time series graph $\mathcal{G}$ with lagged and contemporaneous links
4: PCMCI+ / PCMCI${}^{+}_{0}$: Initialize contemporaneous adjacencies $\widehat{\mathcal{A}}(X^{j}_{t}):=\widehat{\mathcal{A}}_{t}(X^{j}_{t})=\\{X^{i}_{t}{\neq}X^{j}_{t}\in\mathbf{X}_{t}:X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}~{}\text{in $\mathcal{G}$}\\}$
5: PC: Initialize full adjacencies $\widehat{\mathcal{A}}(X^{j}_{t})$ for all (lagged and contemporaneous) links in $\mathcal{G}$
6: Initialize $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\infty$ for all links in $\mathcal{G}$
7: Let $p=0$
8: while any adjacent pairs $(X^{i}_{t-\tau},X^{j}_{t})$ for $\tau\geq 0$ in $\mathcal{G}$ satisfy $|\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$ do
9: Select new adjacent pair $(X^{i}_{t-\tau},X^{j}_{t})$ for $\tau\geq 0$ satisfying $|\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$
10: while $(X^{i}_{t-\tau},X^{j}_{t})$ are adjacent in $\mathcal{G}$ and not all $\mathcal{S}\subseteq\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ with $|\mathcal{S}|=p$ have been considered do
11: Choose new $\mathcal{S}{\subseteq}\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ with $|\mathcal{S}|{=}p$
12: PCMCI+: Set $\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau}))$
13: PCMCI${}^{+}_{0}$: Set $\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\})$
14: PC: Set $\mathbf{Z}{=}\mathcal{S}$
15: $(\text{$p$-value},\,I)\leftarrow$ CI($X^{i}_{t{-}\tau},X^{j}_{t},\mathbf{Z})$
16: $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\min(|I|,I^{\min}(X^{i}_{t-\tau},X^{j}_{t}))$
17: if $p$-value $>\alpha_{\rm PC}$ then
18: Delete link $X^{i}_{t-\tau}\to X^{j}_{t}$ for $\tau>0$ (or $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ for $\tau=0$) from $\mathcal{G}$
19: Store (unordered) ${\rm sepset}(X^{i}_{t-\tau},X^{j}_{t})=\mathcal{S}$
20: Let $p=p+1$
21: Re-compute $\widehat{\mathcal{A}}(X^{j}_{t})$ from $\mathcal{G}$ and sort by $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})$ from largest to smallest
22: return $\mathcal{G}$, sepset

### 3.2 THEORETICAL RESULTS

This section states asymptotic consistency, finite sample order-independence, and further results regarding effect size and false positive control. The consistency of network learning algorithms is separated into _soundness_, i.e., the returned graph has correct adjacencies, and _completeness_, i.e., the returned graph is also maximally informative (links are oriented as much as possible). We start with the following assumptions.
###### Assumptions 1 (Asymptotic case). Throughout this paper we assume Causal Sufficiency, the Causal Markov Condition, the Adjacency Faithfulness Condition, and consistent CI tests (oracle).
In the present time series context we also assume stationarity and time-order and that the maximum time lag $\tau_{\max}\geq\tau^{\mathcal{P}}_{\max}$, where $\tau^{\mathcal{P}}_{\max}$ is the maximum time lag of any parent in the SCM (1). Furthermore, we rule out _selection variables_ and _measurement error_. Definitions of these assumptions, adapted from [Spirtes et al., 2000] to the time series context, are in Sect. S1, and all proofs are in Sect. S2. We start with the following lemma.
###### Lemma 1. Under Assumptions 1, Alg. 1 returns a set that always contains the parents of $X^{j}_{t}$ and, _at most_, the lagged parents of all contemporaneous ancestors of $X^{j}_{t}$, i.e., $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})=\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$. $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ contains _all_ lagged parents of all contemporaneous ancestors if the weaker Adjacency Faithfulness assumption is replaced by standard Faithfulness.
This establishes that the conditions $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ estimated in the first phase of PCMCI+ will suffice to block all lagged confounding paths that do not go through contemporaneous links. This makes it possible to prove the soundness of Alg. 2, even though Alg. 2 is a variant of the PC algorithm that only iterates through contemporaneous conditioning sets.
###### Theorem 1 (Soundness of PCMCI+). Algorithm 2 returns the correct adjacencies under Assumptions 1, i.e., $\widehat{\mathcal{G}^{*}}=\mathcal{G}^{*}$, where $\mathcal{G}^{*}$ denotes the skeleton of the time series graph.
To prove the completeness of PCMCI+, we start with the following observation.
###### Lemma 2. Due to time-order and the stationarity assumption, the considered triples in the collider phase (Alg. S2) and rule orientation phase (Alg. S3) can be restricted as follows: In the collider orientation phase, only unshielded triples $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau>0$) or $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau=0$) in $\mathcal{G}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not adjacent are relevant. For orientation rule R1 triples $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not adjacent, for orientation rule R2 triples $X^{i}_{t}\to X^{k}_{t}\to X^{j}_{t}$ with $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$, and for orientation rule R3 pairs of triples $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}\to X^{j}_{t}$ and $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{l}_{t}\to X^{j}_{t}$ where $(X^{k}_{t},X^{l}_{t})$ are not adjacent and $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ are relevant.
These restrictions imply that only contemporaneous parts of separating sets are relevant for the collider phase.
###### Theorem 2 (PCMCI+ is complete). PCMCI+ (Algorithms 1, 2, S2, and S3) when used with the conservative rule for orienting colliders in Alg. S2 returns the correct CPDAG under Assumptions 1. Under standard Faithfulness also PCMCI+ when used with the majority rule or the standard orientation rule is complete.
Also the proof of order independence follows straightforwardly from the proof in [Colombo and Maathuis, 2014]. Of course, order independence does not apply to time-order.
###### Theorem 3 (Order independence). Under Assumptions 1, PCMCI+ with the conservative or majority rule in Alg. S2 is independent of the order of variables $(X^{1},\ldots,X^{N})$.
Next, we consider effect size. The toy example showed that a major problem of PCMCI${}^{+}_{0}$ (and also PC) is a lack of detection power for contemporaneous links. A main factor of statistical detection power is effect size, i.e., the population value of the test statistic considered (e.g., absolute partial correlation). In the following, I will base my argument on an information-theoretic framework and consider the conditional mutual information as a general test statistic, denoted $I$. In Alg. 2, PCMCI${}^{+}_{0}$ will test a contemporaneous dependency $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ first with the test statistic $I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}))$, and, if that test was positive, secondly with $I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))$. If either of these tests finds (conditional) independence, the adjacency is removed. Therefore, the minimum test statistic value determines the relevant effect size. On the other hand, PCMCI+ treats both cases symmetrically since the test statistic is always $I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))$.
###### Theorem 4 (Effect size of MCI tests for $p=0$). Under Assumptions 1 the PCMCI+ oracle case CI tests in Alg. 2 for $p=0$ for contemporaneous true links $X^{i}_{t}\to X^{j}_{t}\in\mathcal{G}$ have an effect size that is always greater than that of the PCMCI${}^{+}_{0}$ CI tests, i.e., $I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))>\min(I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})),\,I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})))$ if both $X^{i}_{t}$ and $X^{j}_{t}$ have parents that are not shared with the other.
I conjecture that this result holds similarly for $p>0$ and also that PCMCI+ has greater effect sizes than the PC algorithm since the latter iterates over _all_ subsets of adjacencies and, hence, the minimum is taken generally over an even larger set, leading to even smaller effect sizes. For lagged links the effect size of the PCMCI+ tests is always smaller than (or equal to) that of the PCMCI${}^{+}_{0}$ tests (see [Runge et al., 2012]). Last, we discuss false positive control. While the effect size result regards detection power, in the following I give a mathematical intuition for why the MCI tests are better calibrated than the PC algorithm CI tests and control false positives below the expected significance level. Lemma 1 implies that even though Alg. 1 does not aim to estimate the contemporaneous parents, it still yields a set of conditions that shields $X^{j}_{t}$ from the ‘infinite’ past $\mathbf{X}^{-}_{t}$, either by blocking the parents of $X^{j}_{t}$ or by blocking indirect contemporaneous paths through contemporaneous ancestors of $X^{j}_{t}$. Blocking paths from the infinite past, I conjecture, is key to achieving well-calibrated CI tests in Alg. 2. The authors in [Runge et al., 2019b] showed that under certain model assumptions the MCI tests reduce to CI tests among the noise terms $\eta$ from model (1), which are assumed to be i.i.d.; this helps to achieve well-calibrated CI tests. In the numerical experiments below we can see that the PC algorithm has inflated false positives for high autocorrelation, while PCMCI+ controls false positives well, but a formal proof of correct false positive control for this challenging nonlinear, high-dimensional setting is beyond the scope of this paper.
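As a practical note before the numerical experiments, the following minimal usage sketch shows how PCMCI+ can be run; it assumes the tigramite interface referenced in Sect. S4 and the conclusions (class and method names PCMCI, ParCorr, and run_pcmciplus as in tigramite 4.x, which may differ in other versions):

import numpy as np
import tigramite.data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr

# Toy data: X1 autocorrelated; X0_t = 0.9 X0_{t-1} + 0.5 X1_{t-1} + noise
rng = np.random.default_rng(0)
T = 1000
data = rng.standard_normal((T, 2))
for t in range(1, T):
    data[t, 1] += 0.9 * data[t - 1, 1]
    data[t, 0] += 0.9 * data[t - 1, 0] + 0.5 * data[t - 1, 1]

pcmci = PCMCI(dataframe=pp.DataFrame(data), cond_ind_test=ParCorr())
results = pcmci.run_pcmciplus(tau_min=0, tau_max=5, pc_alpha=0.01)
# results['graph'] has shape (N, N, tau_max + 1) with string entries such as
# '-->' (oriented), 'o-o' (unoriented contemporaneous), 'x-x' (conflict)
print(results['graph'])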
## 4 NUMERICAL EXPERIMENTS

We consider a number of typical challenges [Runge et al., 2019a], namely contemporaneous and time-lagged causal dependencies, strong autocorrelation, large numbers of variables and considered time lags, different noise distributions, and nonlinearity, in the following additive variant of model (1): $\displaystyle X_{t}^{j}$ $\displaystyle=a_{j}X^{j}_{t-1}+\textstyle{\sum_{i}}c_{i}f_{i}(X^{i}_{t-\tau_{i}})+\eta^{j}_{t}$ (3) for $j\in\\{1,\ldots,N\\}$. Autocorrelations $a_{j}$ are uniformly drawn from $[\max(0,a-0.3),\,a]$ for $a$ as indicated in Fig. 3, and $\eta^{j}$ is _i.i.d._ and follows a zero-mean Gaussian $\mathcal{N}$ or Weibull $\mathcal{W}$ (scale parameter $2$) distribution (depending on setup) with standard deviation drawn from $[0.5,\,2]$. In addition to autodependency links, for each model $L=\lfloor 1.5\cdot N\rfloor$ (except for $N=2$ with $L=1$) cross-links are chosen whose functional dependencies are linear or $f_{i}(x)=f^{(2)}(x)=(1+5xe^{-x^{2}/20})x$ (depending on setup), with $f^{(2)}$ designed to yield more stationary dynamics. Coefficients $c_{i}$ are drawn uniformly from $\pm[0.1,0.5]$. 30% of the links are contemporaneous ($\tau_{i}=0$), and the remaining $\tau_{i}$ are drawn from $[1,\,5]$. Only stationary models are considered. We have an average cross-in-degree of $d=1.5$ for all network sizes (plus an auto-dependency), implying that models become sparser for larger $N$. We consider several model setups: linear Gaussian, linear mixed noise (among the $N$ variables: 50% Gaussian, 50% Weibull), and nonlinear mixed noise (50% linear, 50% $f^{(2)}(x)$; 66% Gaussian, 34% Weibull). For the linear model setups we consider the PC algorithm and PCMCI+ in the majority-rule variant with ParCorr and compare these with GCresPC [Moneta et al., 2011], a combination of GC with PC applied to residuals, and an autoregressive model version of LiNGAM [Hyvärinen et al., 2010], a representative of the SCM framework (implementation details in Sect. S4). For the LiNGAM implementation I could not find a way to set a significance level and used the LASSO option, which prunes ‘non-active’ links to zero. Both GCresPC and LiNGAM assume linear dependencies, and LiNGAM additionally assumes non-Gaussianity. For the nonlinear setup the PC algorithm and PCMCI+ are implemented with the GPDC test [Runge et al., 2019b], which is based on Gaussian process regression and a distance correlation test on the residuals and is suitable for a large class of nonlinear dependencies with additive noise. Performance is evaluated as follows: True (TPR) and false positive rates (FPR, shown to evaluate false positive control, not applicable to LiNGAM) for adjacencies are distinguished between lagged cross-links ($i\neq j$), contemporaneous, and autodependency links. Due to time order, lagged links (and autodependencies) are automatically oriented. Contemporaneous orientation precision is measured as the fraction of correctly oriented links (${\circ\\!{\\--}\\!\circ}$ or $\to$) among all estimated adjacencies, and recall as the fraction of correct orientations among all true contemporaneous links. Further shown is the fraction of conflicting links among all detected contemporaneous adjacencies (not applicable to LiNGAM). All metrics (and their std. errors) are computed across all estimated graphs from $500$ realizations of model (3) at time series length $T$. The average runtimes were evaluated on an Intel Xeon Platinum 8260.
In Fig. 3, results for the linear Gaussian setup with default model parameters $N=5,\,T=500,\,a=0.95$ and method parameters $\tau_{\max}=5$ and $\alpha=0.01$ (not applicable to LiNGAM) are shown. Each of the four panels shows results for varying one of $a,\,N,\,T,\,\tau_{\max}$. The insets show ANOVA statistics $r\pm\bar{\Delta}r$ [per unit], where $r$ is the performance metric at the leftmost parameter on the $x$-axis ($a,\,N,\,T,\,\tau_{\max}$, respectively) and $\bar{\Delta}r$ denotes the average change per parameter unit. In the adjacency subplots the statistics refer to lagged links.

Figure 3: Numerical experiments with linear Gaussian setup for varying (A) autocorrelation strength $a$, (B) number of variables $N$, (C) sample size $T$, and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters are indicated in the top right. Errorbars show std. errors or the 90% range (for runtime). The insets show ANOVA statistics.

Figure 3A demonstrates that the TPR of PCMCI+ and GCresPC for contemporaneous links is stable even under high autocorrelation, while PC and LiNGAM show strong declines. Since LiNGAM has no $\alpha_{\rm PC}$ for FPR control, we focus on its relative changes rather than absolute performance. Lagged TPR decreases strongly for PC, while the other methods are more robust. FPR is well-controlled for PCMCI+, while PC and slightly also GCresPC show inflated lagged FPR for high autocorrelation. LiNGAM features a strong increase of lagged FPR. These adjacency results translate into higher contemporaneous orientation recall for PCMCI+, which increases with autocorrelation, while it decreases for all other methods. GCresPC has consistently low recall since it does not use lagged links in the orientation phase. Except for GCresPC, all methods have increasing precision, with PCMCI+ and PC outperforming LiNGAM. PCMCI+ shows almost no conflicts, while PC’s conflicts increase with autocorrelation until low power reduces them again. Finally, runtimes are almost constant for GCresPC and LiNGAM, while they increase for PCMCI+ and much more strongly for PC. Figure 3B shows that PCMCI+ and GCresPC have the highest TPR for increasing number of variables $N$, especially for contemporaneous links. FPR is well controlled only for PCMCI+, while PC has false positives for small $N$, where model connectivity is denser and false negatives are more likely, in turn leading to false positives. For high $N$, PC has false positives only regarding autodependencies, while inflated FPR appears for GCresPC. PCMCI+ has more than twice the contemporaneous recall of the other methods and is almost unaffected by higher $N$. Orientation precision is decreasing for all methods (except PC), with a stronger decrease for PCMCI+. Runtime is increasing at a much smaller rate for PCMCI+ compared to PC, which also has a very high runtime variability across the different model realizations. LiNGAM and especially GCresPC are fastest. Regarding TPR, PCMCI+, GCresPC, and LiNGAM benefit similarly from increasing sample size, and PC less so (Fig. 3C). FPR is still not controlled for PC even at large sample sizes, and lagged FPR increases for GCresPC. PCMCI+ shows the highest increases in contemporaneous recall and precision. Runtime increases are moderate compared to PC, and conflicts decrease. Last, Fig. 3D shows that all methods are relatively robust to large maximum time lags $\tau_{\max}$ (beyond the true max. time lag $5$) for the considered sample size $T=500$. Contemporaneous FPR and runtime increase for PC.
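To make the simulation setup concrete, the following is a minimal sketch of one random draw from the linear Gaussian variant of model (3). Parameter ranges follow the description above, while the acyclic low-to-high index ordering of contemporaneous links and the omitted stationarity check are simplifying assumptions of this sketch:

import numpy as np

rng = np.random.default_rng(0)
N, T, a, tau_max_true = 5, 500, 0.95, 5

auto = rng.uniform(max(0., a - 0.3), a, N)   # autodependencies a_j
sigma = rng.uniform(0.5, 2., N)              # noise standard deviations

# L = floor(1.5 N) cross-links (i, j, tau, c); 30% contemporaneous.
# Simplification: contemporaneous links only run from lower to higher
# index, which guarantees acyclicity of the contemporaneous structure.
links = []
for _ in range(int(1.5 * N)):
    i, j = rng.choice(N, size=2, replace=False)
    tau = 0 if rng.random() < 0.3 else int(rng.integers(1, 6))
    if tau == 0 and i > j:
        i, j = j, i
    links.append((i, j, tau, rng.choice([-1, 1]) * rng.uniform(0.1, 0.5)))

incoming = [[(i, tau, c) for (i, j, tau, c) in links if j == k] for k in range(N)]
X = np.zeros((T, N))
for t in range(tau_max_true, T):
    for j in range(N):   # index order respects the contemporaneous ordering
        X[t, j] = auto[j] * X[t - 1, j] + sigma[j] * rng.standard_normal()
        for (i, tau, c) in incoming[j]:
            X[t, j] += c * X[t - tau, i]
# The paper additionally discards non-stationary draws and repeats this
# 500 times per configuration; both steps are omitted in this sketch.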
In the SM further results are shown. For too large $N\tau_{\max}$ (relative to $T$), GCresPC and LiNGAM (despite LASSO regularization) sharply drop in performance. For the linear mixed noise setup (Fig. S2) results are almost unchanged for all methods except for LiNGAM, for which recall and precision rise, as expected. Recall is then higher than that of PCMCI+ for low autocorrelation, but still much lower for high autocorrelation and large $N$ or $\tau_{\max}$, at similar precision. In the nonlinear mixed noise setup (Fig. S3), the difference between PC and PCMCI+ is similar. We observe slight FPR inflation for high autocorrelation. GPDC seems not to work well in high-dimensional, highly autocorrelated settings. Runtimes with GPDC are orders of magnitude longer than with ParCorr, especially for PC. Further figures in the SM show many combinations of $a,\,N,\,T,\,\tau_{\max}$ and $\alpha_{\rm PC}$ for the model setups and demonstrate that the above findings are robust.

## 5 CONCLUSIONS

PCMCI+ improves the reliability of CI tests by optimizing the choice of conditioning sets and yields much higher recall, well-controlled false positives, and faster runtime than the original PC algorithm for highly autocorrelated time series, while maintaining similar performance for low autocorrelation. The algorithm exploits sparsity well in high-dimensional settings and can flexibly be combined with different CI tests for nonlinear causal discovery, and for different variable types (discrete or continuous, univariate or multivariate). Autocorrelation is actually key to increasing contemporaneous orientation recall since it creates triples $X^{i}_{t-1}\to X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ that can often be oriented, while an isolated link $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ stays undirected in the Markov equivalence class, a drawback of CI-based methods. If the data is at least non-Gaussian, an SCM method like LiNGAM can exploit this property and recover directionality in such cases. Still, we saw that LiNGAM suffers from strong autocorrelation. PCMCI+ is available as part of the _tigramite_ Python package at https://github.com/jakobrunge/tigramite. A next step will be to extend the present ideas to an algorithm accounting for latent confounders and to explore combinations between SCM-based methods and PCMCI+. The numerical results will be contributed to the causality benchmark platform `www.causeme.net` [Runge et al., 2019a] to facilitate a further expanded method evaluation.

#### Acknowledgments

DKRZ provided computational resources (grant no. 1083). I thank Andreas Gerhardus for helpful comments.

## References

* [Chickering, 2002] Chickering, D. M. (2002). Learning Equivalence Classes of Bayesian-Network Structures. J. Mach. Learn. Res., 2:445–498.
* [Colombo and Maathuis, 2014] Colombo, D. and Maathuis, M. H. (2014). Order-Independent Constraint-Based Causal Structure Learning. J. Mach. Learn. Res., 15:3921–3962.
* [Entner and Hoyer, 2010] Entner, D. and Hoyer, P. O. (2010). On causal discovery from time series data using FCI. In Proc. Fifth Eur. Work. Probabilistic Graph. Model., pages 121–128.
* [Granger, 1969] Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3):424–438.
* [Hyvärinen et al., 2010] Hyvärinen, A., Zhang, K., Shimizu, S., and Hoyer, P. O. (2010). Estimation of a structural vector autoregression model using non-gaussianity. J. Mach. Learn. Res., 11:1709–1731.
* [Malinsky and Spirtes, 2018] Malinsky, D. and Spirtes, P. (2018).
Causal structure learning from multivariate time series in settings with unmeasured confounding. In Proc. of 2018 ACM SIGKDD Work. on Causal Discovery, pages 23–47. * [Moneta et al., 2011] Moneta, A., Chlaß, N., Entner, D., and Hoyer, P. (2011). Causal search in structural vector autoregressive models. In NIPS Mini-Symp. on Causality in Time Series, pages 95–114. * [Peters et al., 2017] Peters, J., Janzing, D., and Schölkopf, B. (2017). Elements of causal inference: foundations and learning algorithms. MIT Press, Cambridge, MA. * [Ramsey et al., 2006] Ramsey, J., Spirtes, P., and Zhang, J. (2006). Adjacency-faithfulness and conservative causal inference. In Proc. 22nd Conf. on Uncertainty in Art. Int., pages 401–408. * [Ramsey, 2014] Ramsey, J. D. (2014). A Scalable Conditional Independence Test for Nonlinear, Non-Gaussian Data. https://arxiv.org/abs/1401.5031. * [Runge, 2018a] Runge, J. (2018a). Causal network reconstruction from time series: From theoretical assumptions to practical estimation. Chaos: An Interdiscip. J. Nonlinear Sci., 28(7):075310. * [Runge, 2018b] Runge, J. (2018b). Conditional independence testing based on a nearest-neighbor estimator of conditional mutual information. In Storkey, A. & Perez-Cruz, F., editor, Proc. 21st Int. Conf. Artif. Intell. Stat. Playa Blanca, Lanzarote, Canary Islands: PMLR. * [Runge et al., 2019a] Runge, J., Bathiany, S., Bollt, E., Camps-Valls, G., Coumou, D., Deyle, E., Glymour, C., Kretschmer, M., Mahecha, M. D., Muñoz-Marí, J., van Nes, E. H., Peters, J., Quax, R., Reichstein, M., Scheffer, M., Schölkopf, B., Spirtes, P., Sugihara, G., Sun, J., Zhang, K., and Zscheischler, J. (2019a). Inferring causation from time series in earth system sciences. Nature Comm., 10(1):2553. * [Runge et al., 2012] Runge, J., Heitzig, J., Marwan, N., and Kurths, J. (2012). Quantifying causal coupling strength: A lag-specific measure for multivariate time series related to transfer entropy. Phys. Rev. E, 86(6):061121. * [Runge et al., 2019b] Runge, J., Nowack, P., Kretschmer, M., Flaxman, S., and Sejdinovic, D. (2019b). Detecting and quantifying causal associations in large nonlinear time series datasets. Science Advances, eaau4996(5). * [Sen et al., 2017] Sen, R., Suresh, A. T., Shanmugam, K., Dimakis, A. G., and Shakkottai, S. (2017). Model-Powered Conditional Independence Test. In Proc. 30th Conf. Adv. Neural Inf. Process. Syst., pages 2955–2965. * [Spirtes and Glymour, 1991] Spirtes, P. and Glymour, C. (1991). An Algorithm for Fast Recovery of Sparse Causal Graphs. Soc. Sci. Comput. Rev., 9(1):62–72. * [Spirtes et al., 2000] Spirtes, P., Glymour, C., and Scheines, R. (2000). Causation, Prediction, and Search. MIT Press, Boston, MA. * [Spirtes and Zhang, 2016] Spirtes, P. and Zhang, K. (2016). Causal discovery and inference: concepts and recent methodological advances. Appl. Informatics, 3(1):3. * [Zhang et al., 2011] Zhang, K., Peters, J., Janzing, D., and Schölkopf, B. (2011). Kernel-based Conditional Independence Test and Application in Causal Discovery. In Proc. 27th Conf. Uncertain. Artif. Intell., pages 804–813. ## Appendix S1 Definitions The following definitions are adaptations of the standard assumptions of causal discovery to the time series case. Here we consider the causally sufficient case and assume that all variables $\mathbf{X}=(X^{1},\ldots,X^{N})$ of the underlying SCM (1) are observed. 
Additionally, we assume that the maximum PCMCI+ time lag $\tau_{\max}\geq\tau^{\mathcal{P}}_{\max}$, where $\tau^{\mathcal{P}}_{\max}$ is the maximum time lag of any parent in the SCM (1).
###### Definition S1 (Causal Markov Condition). The joint distribution of a process $\mathbf{X}$ whose causal structure can be represented in a time series graph $\mathcal{G}$ fulfills the Causal Markov Condition iff for all $X^{j}_{t}\in\mathbf{X}_{t}$ every non-descendant of $X^{j}_{t}$ in $\mathcal{G}$ is independent of $X^{j}_{t}$ given the parents $\mathcal{P}(X^{j}_{t})$.
In particular, $\mathbf{X}_{t}^{-}{\setminus}\mathcal{P}(X^{j}_{t})\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t})$ since all variables in $\mathbf{X}_{t}^{-}$ are non-descendants of $X^{j}_{t}$ by time-order. Note that for the SCM (1) with independent noise terms the Causal Markov Condition is automatically fulfilled.
###### Definition S2 (Adjacency and standard Faithfulness Conditions). The joint distribution of a process $\mathbf{X}$ whose causal structure can be represented in a time series graph $\mathcal{G}$ fulfills the Adjacency Faithfulness Condition iff for all disjoint $X^{i}_{t-\tau},X^{j}_{t},\mathcal{S}\in\mathbf{X}^{-}_{t+1}$ with $\tau>0$ $\displaystyle X^{i}_{t-\tau}$ $\displaystyle\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t-\tau}\to X^{j}_{t}\notin\mathcal{G}$ $\displaystyle X^{i}_{t-\tau}$ $\displaystyle\to X^{j}_{t}\in\mathcal{G}~{}\Rightarrow~{}X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}~{}~{}\text{(contrapositive)}$ and with $\tau=0$ $\displaystyle X^{i}_{t}$ $\displaystyle\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}\notin\mathcal{G}$ $\displaystyle X^{i}_{t}$ $\displaystyle{\circ\\!{\\--}\\!\circ}X^{j}_{t}\in\mathcal{G}~{}\Rightarrow~{}X^{i}_{t}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}~{}~{}\text{(contrapositive)}\,.$ Furthermore, the variables fulfill the (standard) Faithfulness Condition iff for $\tau\geq 0$ $\displaystyle X^{i}_{t-\tau}$ $\displaystyle\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t-\tau}\bowtie X^{j}_{t}~{}|~{}\mathcal{S}$ $\displaystyle X^{i}_{t-\tau}$ $\displaystyle\cancel{\bowtie}X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}~{}~{}\text{(contrapositive)}\,.$

## Appendix S2 Proofs

### S2.1 Proof of Lemma 1

We first consider the following Lemma:
###### Lemma S1. Algorithm 1 returns a superset of lagged parents under Assumptions 1, i.e., $\mathcal{P}^{-}_{t}(X^{j}_{t})\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$.
###### Proof. We need to show that for arbitrary $(X^{i}_{t-\tau},X^{j}_{t})$ with $\tau>0$ we have $X^{i}_{t-\tau}\notin\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})~{}\Rightarrow~{}X^{i}_{t-\tau}\notin\mathcal{P}^{-}_{t}(X^{j}_{t})$. Algorithm 1 removes $X^{i}_{t-\tau}$ from $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ iff $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}$ for some $\mathcal{S}\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\}$ in the iterative CI tests. Then Adjacency Faithfulness directly implies that $X^{i}_{t-\tau}$ is not adjacent to $X^{j}_{t}$ and in particular $X^{i}_{t-\tau}\notin\mathcal{P}^{-}_{t}(X^{j}_{t})$. ∎
With this step we can prove Lemma 1.
###### Proof.
The lemma states that under Assumptions 1 with Adjacency Faithfulness replaced by standard Faithfulness Alg. 1 for all $X^{j}_{t}\in\mathbf{X}_{t}$ returns $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})=\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$ where $\mathcal{C}_{t}(X^{j}_{t})$ denotes the contemporaneous ancestors of $X^{j}_{t}$. We need to show that for arbitrary $X^{i}_{t-\tau},X^{j}_{t}\in\mathbf{X}^{-}_{t+1}$ with $\tau>0$: (1) $X^{i}_{t-\tau}\notin\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})~{}\Rightarrow~{}X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$ and (2) $X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})~{}\Rightarrow~{}X^{i}_{t-\tau}\in\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$. Ad 1) Algorithm 1 removes $X^{i}_{t-\tau}$ from $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ iff $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}$ for some $\mathcal{S}\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ in the iterative CI tests. Then standard Faithfulness implies that $X^{i}_{t-\tau}\bowtie X^{j}_{t}~{}|~{}\mathcal{S}$ and in particular $X^{i}_{t-\tau}\notin\mathcal{P}^{-}_{t}(X^{j}_{t})$, as proven already in Lemma S1 under the weaker Adjacency Faithfulness Condition. To show that $X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$ we note that $\mathcal{S}\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ does not include any contemporaneous conditions and, hence, all contemporaneous directed paths from contemporaneous ancestors of $X^{j}_{t}$ are open and also paths from parents of those ancestors are open. If $X^{i}_{t-\tau}\in\bigcup_{X^{i}_{t}\in\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$, by the contraposition of standard Faithfulness we should observe $X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}$. Then the fact that on the contrary we observe $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}$ implies that $X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$. Ad 2) Now we have $X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ which implies that $X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ in the last iteration step of Alg. 1. By (1), $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ is a superset of $\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$. Define the lagged extra conditions as $W^{-}_{t}=\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t}),X^{i}_{t-\tau}\\}$. Since $W^{-}_{t}$ is lagged, it is a non-descendant of $X^{j}_{t}$ or any $X^{k}_{t}\in\mathcal{C}_{t}(X^{j}_{t})$. We now proceed by a proof by contradiction. Suppose to the contrary that $X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$. The Causal Markov Condition applies to both $X^{i}_{t-\tau}$ and $W^{-}_{t}$ and implies that $(X^{i}_{t-\tau},W^{-}_{t})\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$. 
From the weak union property of conditional independence we get $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t}),W^{-}_{t}$ which is equivalent to $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\}$, contrary to the assumption, hence $X^{i}_{t-\tau}\in\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$. ∎ ### S2.2 Proof of Theorem 1 ###### Proof. The theorem states that under Assumptions 1 $\widehat{\mathcal{G}^{*}}=\mathcal{G}^{*}$, where the $\mathcal{G}^{*}$ denotes the skeleton of the time series graph. We denote the two types of skeleton links $\to$ and ${\circ\\!{\\--}\\!\circ}$ here generically as ${\star\\!{\\--}\\!\star}$ and can assume $\tau_{\max}\geq\tau\geq 0$. We need to show that for arbitrary $X^{i}_{t-\tau},X^{j}_{t}\in\mathbf{X}^{-}_{t+1}$: (1) $X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\widehat{\mathcal{G}^{*}}~{}\Rightarrow~{}X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\mathcal{G}^{*}$ and (2) $X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\mathcal{G}^{*}~{}\Rightarrow~{}X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\widehat{\mathcal{G}^{*}}$. Ad (1): Algorithm 2 deletes a link $X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}$ from $\widehat{\mathcal{G}^{*}}$ iff $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ for some $\mathcal{S}\subseteq\widehat{\mathcal{A}}_{t}(X^{j}_{t})$ in the iterative CI tests with $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ estimated in Alg. 1. $\widehat{\mathcal{A}}_{t}(X^{j}_{t})$ denotes the contemporaneous adjacencies. Then Adjacency Faithfulness directly implies that $X^{i}_{t-\tau}$ is not adjacent to $X^{j}_{t}$: $X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\mathcal{G}^{*}$. Ad (2): By Lemma 1 we know that $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ is a superset of the lagged parents of $X^{j}_{t}$. Denote the lagged, extra conditions occurring in the CI tests of Alg. 2 as $W^{-}_{t}=(\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau}))\setminus\mathcal{P}(X^{j}_{t})$. $W^{-}_{t}$ does not contain parents of $X^{j}_{t}$ and by the assumption also $X^{i}_{t-\tau}$ is not a parent of $X^{j}_{t}$. We further assume that for $\tau=0$ $X^{i}_{t}$ is also not a descendant of $X^{j}_{t}$ since that case is covered if we exchange $X^{i}_{t}$ and $X^{j}_{t}$. Then the Causal Markov Condition implies $(X^{i}_{t-\tau},W^{-}_{t})\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t})$. By the weak union property of conditional independence this leads to $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t}),W^{-}_{t}$ which is equivalent to $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau})$. Now Alg. 2 iteratively tests $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau})$ for all $\mathcal{S}\subseteq\widehat{\mathcal{A}_{t}}(X^{j}_{t})$. 
By the first part of this proof, the estimated contemporaneous adjacencies are always a superset of the true contemporaneous adjacencies, i.e., $\mathcal{A}_{t}(X^{j}_{t})\subseteq\widehat{\mathcal{A}_{t}}(X^{j}_{t})$, and by Lemma 1 $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ is a superset of the lagged parents. Hence, at some iteration step, $\mathcal{S}=\mathcal{P}_{t}(X^{j}_{t})$, and Alg. 2 will find $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau})$ and remove $X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}$ from $\widehat{\mathcal{G}^{*}}$. ∎
For empty conditioning sets $\mathcal{S}$ ($p=0$), Alg. 2 is equivalent to the MCI algorithm [Runge et al., 2019b] with the slight change that the latter is initialized with a fully connected (lagged) graph, which has no effect asymptotically. In [Runge et al., 2019b] the authors prove the consistency of PCMCI assuming no contemporaneous causal links under the standard Faithfulness Condition. The proof above implies that PCMCI is already consistent under the weaker Adjacency Faithfulness Condition.

### S2.3 Proof of Lemma 2

###### Proof. Time order and stationarity can be used to constrain the four cases as follows. Let us first consider a generic triple $X^{i}_{t_{i}}{\star\\!{\\--}\\!\star}X^{k}_{t_{k}}{\star\\!{\\--}\\!\star}X^{j}_{t_{j}}$. By stationarity we can fix $t=t_{j}$. We only need to consider cases with $t_{i},t_{k}\leq t$. If $t_{k}>t_{j}$, the triple is oriented already by time order, and the case $t_{i}>t_{j}$ is symmetric. The possible triples in the collider phase of the original PC algorithm are $X^{i}_{t_{i}}{\star\\!{\\--}\\!\star}X^{k}_{t_{k}}{\star\\!{\\--}\\!\star}X^{j}_{t}$ where $(X^{i}_{t_{i}},X^{j}_{t})$ are not adjacent. For $t_{k}<t$ the time-order constraint automatically orients $X^{k}_{t_{k}}\to X^{j}_{t}$, and hence $X^{k}_{t_{k}}$ is a parent of $X^{j}_{t}$ and must always be in the separating set that makes $X^{i}_{t_{i}}$ and $X^{j}_{t}$ independent. Hence we only need to consider $t_{k}=t$ and can set $\tau=t-t_{i}$ ($\tau_{\max}\geq\tau\geq 0$), leaving the two cases of unshielded triples $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau>0$) or $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau=0$) in $\mathcal{G}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not adjacent. Since $X^{k}_{t}$ is contemporaneous to $X^{j}_{t}$, this restriction implies that only contemporaneous parts of separating sets are relevant for the collider orientation phase. For rule R1 in the orientation phase the original PC algorithm considers the remaining triples with $X^{i}_{t-\tau}\to X^{k}_{t}$ that were not oriented by the collider phase (or by time order). This leaves $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ where $\tau_{\max}\geq\tau\geq 0$. For rule R2 the original PC algorithm considers $X^{i}_{t_{i}}\to X^{k}_{t_{k}}\to X^{j}_{t}$ with $X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. The latter type of link leads to $t_{i}=t$, and time order restricts the triples to $X^{i}_{t}\to X^{k}_{t}\to X^{j}_{t}$ with $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$.
For rule R3 the original PC algorithm considers $X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{k}_{t_{k}}\to X^{j}_{t}$ and $X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{l}_{t_{l}}\to X^{j}_{t}$ where $(X^{k}_{t_{k}},X^{l}_{t_{l}})$ are not adjacent and $X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. The latter constraint leads to $t_{i}=t$, and $X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{k}_{t_{k}}$ and $X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{l}_{t_{l}}$ imply $t_{k}=t_{l}=t$. Hence we only need to check triples $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}\to X^{j}_{t}$ and $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{l}_{t}\to X^{j}_{t}$ where $(X^{k}_{t},X^{l}_{t})$ are not adjacent and $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. ∎

### S2.4 Proof of Theorem 2

###### Proof. We first consider the case under Assumptions 1 with Adjacency Faithfulness and PCMCI+ in conjunction with the conservative collider orientation rule in Alg. S2. We need to show that all separating sets estimated in Alg. S2 during the conservative orientation rule are correct. From the soundness (Theorem 1) and correctness of the separating sets follows the correctness of the collider orientation phase and the rule orientation phase, which implies the completeness. By Lemma 2 we only need to prove that in Alg. S2 for unshielded triples $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau>0$) or $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau=0$) the separating sets among subsets of contemporaneous neighbors of $X^{j}_{t}$ and, if $\tau=0$, of $X^{i}_{t}$, are correct. Algorithm S2 tests $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ for all $\mathcal{S}{\subseteq}\widehat{\mathcal{A}}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ and for all $\mathcal{S}{\subseteq}\widehat{\mathcal{A}}_{t}(X^{i}_{t}){\setminus}\\{X^{j}_{t}\\}$ (if $\tau=0$). Since PCMCI+ is sound, all adjacency information is correct, and since all CI tests are assumed correct, all information on separating sets is correct. Furthermore, with the conservative rule those triples where only Adjacency Faithfulness, but not standard Faithfulness, holds will be correctly marked as ambiguous triples. Under standard Faithfulness the completeness requires proving that PCMCI+ without the conservative orientation rule yields correct separating set information. By Lemma 2 also here we need to consider only separating sets among subsets of contemporaneous neighbors of $X^{j}_{t}$. Algorithm 2 tests $X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ for all $\mathcal{S}{\subseteq}\widehat{\mathcal{A}}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$. And again, since PCMCI+ is sound, all adjacency information is correct, and since all CI tests are assumed correct, all information on separating sets is correct, from which the completeness for this case follows. ∎

### S2.5 Proof of Theorem 3

###### Proof. Order independence follows straightforwardly from sticking to the PC algorithm version in [Colombo and Maathuis, 2014]. In particular, Alg. 1 and Alg. 2 are order-independent since they are based on PC stable, where adjacencies are removed only after each loop over conditions of cardinality $p$. Furthermore, the collider phase (Alg.
S2) and rule orientation phase (Alg. S3) are order-independent by marking triples with inconsistent separating sets as ambiguous and consistently marking conflicting link orientations by ${x\\!{\\--}\\!x}$. ∎

### S2.6 Proof of Theorem 4

###### Proof. The theorem states that under Assumptions 1 the effect size for the PCMCI+ oracle case CI tests in Alg. 2 for $p=0$ for contemporaneous true links $X^{i}_{t}\to X^{j}_{t}\in\mathcal{G}$ is greater than that of PCMCI${}^{+}_{0}$: $I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))>\min(I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})),\,I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})))$ if both $X^{i}_{t}$ and $X^{j}_{t}$ have parents that are not shared with the other. We will use an information-theoretic framework here and consider the conditional mutual information. To prove this statement, we denote by $\mathcal{B}_{i}=\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})\setminus\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ the lagged conditions of $X^{i}_{t}$ that are not already contained in those of $X^{j}_{t}$ and, correspondingly, $\mathcal{B}_{j}=\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})$. Since both $X^{i}_{t}$ and $X^{j}_{t}$ have parents that are not shared with the other and we assume the oracle case, both these sets are non-empty. Further, we denote the common lagged conditions as $\mathcal{B}_{ij}=\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\cap\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})$ and make use of the following conditional independencies, which hold by the Markov assumption: (1) $\mathcal{B}_{i}\perp\\!\\!\\!\perp X^{j}_{t}|\mathcal{B}_{j},\mathcal{B}_{ij},X^{i}_{t}$ and (2) $\mathcal{B}_{j}\perp\\!\\!\\!\perp X^{i}_{t}|\mathcal{B}_{i},\mathcal{B}_{ij}$. We first prove that, given a contemporaneous true link $X^{i}_{t}\to X^{j}_{t}\in\mathcal{G}$, $I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})>I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})$ by using the following two ways to apply the chain rule of conditional mutual information: $\displaystyle I(X^{i}_{t},\mathcal{B}_{i};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij})=$ $\displaystyle=I(X^{i}_{t},\mathcal{B}_{i};\mathcal{B}_{j}|\mathcal{B}_{ij})+I(X^{i}_{t},\mathcal{B}_{i};X^{j}_{t}|\mathcal{B}_{ij}\mathcal{B}_{j})$ $\displaystyle=I(\mathcal{B}_{i};\mathcal{B}_{j}|\mathcal{B}_{ij})+\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i})}_{=0~{}~{}\text{(Markov)}}$ $\displaystyle\phantom{=}+I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})+\underbrace{I(\mathcal{B}_{i};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j},X^{i}_{t})}_{=0~{}~{}\text{(Markov)}}$ (S1) and $\displaystyle I(X^{i}_{t},\mathcal{B}_{i};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij})=$ $\displaystyle=I(\mathcal{B}_{i};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij})+I(X^{i}_{t};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij}\mathcal{B}_{i})$ $\displaystyle=I(\mathcal{B}_{i};\mathcal{B}_{j}|\mathcal{B}_{ij})+\underbrace{I(\mathcal{B}_{i};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})}_{>0~{}~{}\text{since $X^{i}_{t}\to X^{j}_{t}$}}$ $\displaystyle\phantom{=}+I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})+\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i},X^{j}_{t})}_{>0~{}~{}\text{since $X^{i}_{t}\to X^{j}_{t}$}}$ (S2) where (S1) and (S2) denote two different applications of the chain rule.
From this is follows that $I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})>I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})$. Hence, it remains to prove that $I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j},\mathcal{B}_{i})>I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})$, which we also do by the chain rule: $\displaystyle I(X^{i}_{t};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i})=$ $\displaystyle=I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})+\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i},X^{j}_{t})}_{>0~{}~{}\text{since $X^{i}_{t}\to X^{j}_{t}$}}$ (S3) $\displaystyle=\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i})}_{=0~{}~{}\text{(Markov)}}+I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i},\mathcal{B}_{j})$ (S4) ∎ ## Appendix S3 Further pseudo code Algorithms S2 and S3 detail the pseudo-code for the PCMCI+ / PCMCI${}^{+}_{0}$ / PC collider phase with different collider rules and the orientation phase. Algorithm S2 (Detailed PCMCI+ / PCMCI${}^{+}_{0}$ / PC collider phase with different collider rules) 1:$\mathcal{G}$ and sepset from Alg. 2, rule $=\\{$’none’, ’conservative’, ’majority’$\\}$, time series dataset $\mathbf{X}=(X^{1},\,\ldots,X^{N})$, significance threshold $\alpha_{\rm PC}$, ${\rm CI}(X,\,Y,\,\mathbf{Z})$, PCMCI+ / PCMCI${}^{+}_{0}$: $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$ 2:for all unshielded triples $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ ($\tau>0$) or $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ ($\tau=0$) in $\mathcal{G}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not adjacent do 3: if rule $=$ ’none’ then 4: if $X^{k}_{t}$ is not in sepset$(X^{i}_{t-\tau},X^{j}_{t})$ then 5: Orient $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ ($\tau>0$) or $X^{i}_{t-\tau}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ ($\tau=0$) as $X^{i}_{t-\tau}\to X^{k}_{t}\leftarrow X^{j}_{t}$ 6: else 7: PCMCI+ / PCMCI${}^{+}_{0}$: Define contemporaneous adjacencies $\widehat{\mathcal{A}}(X^{j}_{t})=\widehat{\mathcal{A}}_{t}(X^{j}_{t})=\\{X^{i}_{t}{\neq}X^{j}_{t}\in\mathbf{X}_{t}:X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}~{}\text{in $\mathcal{G}$}\\}$ 8: PC: Define full adjacencies $\widehat{\mathcal{A}}(X^{j}_{t})$ for all (lagged and contemporaneous) links in $\mathcal{G}$ 9: for all for all $\mathcal{S}{\subseteq}\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ and for all $\mathcal{S}{\subseteq}\widehat{\mathcal{A}}(X^{i}_{t}){\setminus}\\{X^{j}_{t}\\}$ (if $\tau=0$) do 10: Evaluate CI($X^{i}_{t{-}\tau},X^{j}_{t},\mathbf{Z})$ with 11: PCMCI+: $\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau}))$ 12: PCMCI${}^{+}_{0}$: $\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\})$ 13: PC: $\mathbf{Z}{=}\mathcal{S}$ 14: Store all subsets $\mathcal{S}$ with $p$-value $>\alpha_{\rm PC}$ as separating subsets 15: if no separating subsets are found then 16: Mark triple as ambiguous 17: else 18: Compute fraction $n_{k}$ of separating subsets that contain $X^{k}_{t}$ 19: if rule $=$ ’conservative’ then 20: Orient triple as collider if $n_{k}{=}0$, leave unoriented if $n_{k}{=}1$, and mark as ambiguous if $0{<}n_{k}{<}1$ 21: else if rule $=$ ’majority’ then 22: Orient triple as collider if $n_{k}{<}0.5$, leave 
Algorithm S3 (Detailed PCMCI+ / PCMCI${}^{+}_{0}$ / PC rule orientation phase)

1: $\mathcal{G}$, ambiguous triples, conflicting links
2: while any unambiguous triples suitable for rules R1–R3 remain do
3:  Apply rule R1 (orient unshielded triples that are not colliders):
4:  for all unambiguous triples $X^{i}_{t-\tau}\to X^{k}_{t}\,\circ\text{--}\circ\,X^{j}_{t}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not adjacent do
5:   Orient as $X^{i}_{t-\tau}\to X^{k}_{t}\to X^{j}_{t}$
6:   Mark links with conflicting orientations as $x\text{--}x$
7:  Apply rule R2 (avoid cycles):
8:  for all unambiguous triples $X^{i}_{t}\to X^{k}_{t}\to X^{j}_{t}$ with $X^{i}_{t}\,\circ\text{--}\circ\,X^{j}_{t}$ do
9:   Orient as $X^{i}_{t}\to X^{j}_{t}$
10:   Mark links with conflicting orientations as $x\text{--}x$
11:  Apply rule R3 (orient unshielded triples that are not colliders and avoid cycles):
12:  for all pairs of unambiguous triples $X^{i}_{t}\,\circ\text{--}\circ\,X^{k}_{t}\to X^{j}_{t}$ and $X^{i}_{t}\,\circ\text{--}\circ\,X^{l}_{t}\to X^{j}_{t}$ where $(X^{k}_{t},X^{l}_{t})$ are not adjacent and $X^{i}_{t}\,\circ\text{--}\circ\,X^{j}_{t}$ do
13:   Orient as $X^{i}_{t}\to X^{j}_{t}$
14:   Mark links with conflicting orientations as $x\text{--}x$
15: return $\mathcal{G}$, conflicting links

## Appendix S4 Implementation details

In the linear and nonlinear numerical experiments PCMCI+ is compared with the PC algorithm, both implemented with the appropriate CI test (ParCorr for the linear case, GPDC for the nonlinear case). For the linear numerical experiments we additionally consider representatives from two further frameworks: GCresPC, a combination of GC with PC applied to residuals, and an autoregressive model version of LiNGAM [Hyvärinen et al., 2010], a representative of the SCM framework. Their implementations are as follows.

### S4.1 LiNGAM

For LiNGAM the code was taken from https://github.com/cdt15/lingam, which provides a class VARLiNGAM. The method was called as follows:

Input: data, tau_max
model = lingam.VARLiNGAM(lags=tau_max, criterion=None, prune=True)
model.fit(data)
val_matrix = model.adjacency_matrices_.transpose(2, 1, 0)
graph = (val_matrix != 0.).astype('int')
Output: graph

The causal graph `graph` encodes the causal relations in an array of shape `(N, N, tau_max + 1)`. The option `criterion=None` simply disables the optional automatic selection of `lags`, which is here set to the same `tau_max` for all methods. I could not find a way to obtain p-values in the VARLiNGAM implementation, but with the parameter setting `prune=True` the resulting adjacency matrices are regularized with an adaptive LASSO approach using the BIC criterion to find the optimal regularization hyper-parameter (`sklearn.LassoLarsIC(criterion='bic')`). Non-zero adjacencies were then evaluated as causal links. Note that all other methods can be intercompared at different $\alpha_{\rm PC}$ levels, while for comparison against LiNGAM we focus on its relative changes rather than absolute performance.
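To make the pruning idea concrete, the following minimal sketch (our assumption about the idea, not the VARLiNGAM internals; a full adaptive LASSO would additionally reweight regressors by initial coefficient estimates) selects lagged links with BIC-tuned LASSO:

import numpy as np
from sklearn.linear_model import LassoLarsIC

def prune_var_links(data, tau_max):
    # data: array of shape (T, N); returns a binary lagged-link array where
    # graph[i, j, tau-1] = 1 encodes a retained link X^i_{t-tau} -> X^j_t
    T, N = data.shape
    # design matrix of lagged values, columns ordered by (lag, variable)
    X = np.hstack([data[tau_max - tau:T - tau] for tau in range(1, tau_max + 1)])
    graph = np.zeros((N, N, tau_max), dtype=int)
    for j in range(N):
        coefs = LassoLarsIC(criterion='bic').fit(X, data[tau_max:, j]).coef_
        for tau in range(1, tau_max + 1):
            for i in range(N):
                if coefs[(tau - 1) * N + i] != 0.0:
                    graph[i, j, tau - 1] = 1
    return graph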
### S4.2 GCresPC

There was no code available for the method proposed in [Moneta et al., 2011]. The present implementation first fits a VAR model up to $\tau_{\max}$ and applies the PC algorithm to the residuals. To remove spurious lagged links (due to contemporaneous paths), the PC algorithm was additionally run on significant lagged and contemporaneous links, but the orientation phase was restricted to contemporaneous links, as proposed in [Moneta et al., 2011]. The following Python pseudo-code utilizes functionality from the tigramite package, numpy, and statsmodels:

Input: data, tau_max, alpha

import numpy as np
from statsmodels.tsa.api import VAR
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr
from tigramite.data_processing import DataFrame

graph = np.zeros((N, N, tau_max + 1))

# 1. Estimate lagged adjacencies (to be updated in step 3.)
tsamodel = VAR(data)
results = tsamodel.fit(maxlags=tau_max, trend='nc')
pvalues = results.pvalues
values = results.coefs
residuals = results.resid
lagged_parents = significant lagged links at alpha   # pseudo-code

# 2. Run PC algorithm on residuals (with tau_max=0)
pcmci = PCMCI(dataframe=DataFrame(residuals), cond_ind_test=ParCorr())
pcmcires = pcmci.run_pcalg(pc_alpha=alpha, tau_min=0, tau_max=0)
# Update contemporaneous graph
graph[:, :, 0] = pcmcires['graph'][:, :, 0]

# 3. Run PC algorithm on significant lagged and contemporaneous adjacencies
#    to remove spurious lagged links due to contemporaneous parents
selected_links = lagged_parents + significant contemporaneous adjacencies   # pseudo-code
pcmci = PCMCI(dataframe=DataFrame(data), cond_ind_test=ParCorr())
pcmcires = pcmci.run_pcalg(selected_links=selected_links, pc_alpha=alpha,
                           tau_min=0, tau_max=tau_max)
# Update lagged part of graph
graph[:, :, 1:] = pcmcires['graph'][:, :, 1:]

Output: graph

Note that the contemporaneous graph structure in `graph` comes only from applying the PC algorithm to the residuals and, hence, does not utilize triples containing lagged adjacencies. Step 3 is necessary to remove spurious lagged links due to contemporaneous parents. The output of GCresPC depends on $\alpha_{\rm PC}$, as for PCMCI+ and the PC algorithm.

## Appendix S5 Further numerical experiments

In addition to repeating the overview figure for the linear Gaussian model setup from the main text in Fig. S1, in Fig. S2 we show the linear mixed noise setup, and in Fig. S3 the nonlinear mixed noise setup. The remaining pages contain results of further numerical experiments that evaluate different $a,\,N,\,T,\,\tau_{\max}$ and $\alpha_{\rm PC}$ for the linear model setups. All results and more will be contributed to the causality benchmark platform `www.causeme.net` [Runge et al., 2019a] to facilitate a further expanded method evaluation.

Figure S1: Numerical experiments with linear Gaussian setup for varying (A) autocorrelation strength $a$, (B) number of variables $N$, (C) sample size $T$, and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters are indicated in the top right. Error bars show standard errors or the 90% range (for runtime). The insets show ANOVA statistics.

Figure S2: Numerical experiments with linear mixed noise setup for varying (A) autocorrelation strength $a$, (B) number of variables $N$, (C) sample size $T$, and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters are indicated in the top right. Error bars show standard errors or the 90% range (for runtime). The insets show ANOVA statistics.

Figure S3: Numerical experiments with nonlinear mixed noise setup for varying (A) autocorrelation strength $a$, (B) number of variables $N$, (C) sample size $T$, and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters are indicated in the top right. Error bars show standard errors or the 90% range (for runtime). The insets show ANOVA statistics.
Figure S4: Numerical experiments with linear Gaussian setup for varying autocorrelation $a$ and $T=200$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for $N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S5: Numerical experiments with linear Gaussian setup for varying autocorrelation $a$ and $T=500$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for $N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S6: Numerical experiments with linear Gaussian setup for varying autocorrelation $a$ and $T=1000$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for $N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S7: Numerical experiments with linear Gaussian setup for varying number of variables $N$ and $T=200$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S8: Numerical experiments with linear Gaussian setup for varying number of variables $N$ and $T=500$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S9: Numerical experiments with linear Gaussian setup for varying number of variables $N$ and $T=1000$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S10: Numerical experiments with linear Gaussian setup for varying sample size $T$ for $N=5$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S11: Numerical experiments with linear Gaussian setup for varying sample size $T$ for $N=10$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S12: Numerical experiments with linear Gaussian setup for varying sample size $T$ for $N=20$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S13: Numerical experiments with linear Gaussian setup for varying maximum time lag $\tau_{\max}$ and $T=200$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.
Figure S14: Numerical experiments with linear Gaussian setup for varying maximum time lag $\tau_{\max}$ and $T=500$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S15: Numerical experiments with linear Gaussian setup for varying maximum time lag $\tau_{\max}$ and $T=1000$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S16: Numerical experiments with linear mixed noise setup for varying autocorrelation $a$ and $T=200$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for $N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S17: Numerical experiments with linear mixed noise setup for varying autocorrelation $a$ and $T=500$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for $N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S18: Numerical experiments with linear mixed noise setup for varying autocorrelation $a$ and $T=1000$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for $N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S19: Numerical experiments with linear mixed noise setup for varying number of variables $N$ and $T=200$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S20: Numerical experiments with linear mixed noise setup for varying number of variables $N$ and $T=500$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S21: Numerical experiments with linear mixed noise setup for varying number of variables $N$ and $T=1000$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S22: Numerical experiments with linear mixed noise setup for varying sample size $T$ for $N=5$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S23: Numerical experiments with linear mixed noise setup for varying sample size $T$ for $N=10$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.
Figure S24: Numerical experiments with linear mixed noise setup for varying sample size $T$ for $N=20$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S25: Numerical experiments with linear mixed noise setup for varying maximum time lag $\tau_{\max}$ and $T=200$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S26: Numerical experiments with linear mixed noise setup for varying maximum time lag $\tau_{\max}$ and $T=500$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.

Figure S27: Numerical experiments with linear mixed noise setup for varying maximum time lag $\tau_{\max}$ and $T=1000$. The left (right) column shows results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for increasing autocorrelations $a$ (top to bottom). All model and method parameters are indicated in the upper right of each panel.
# Bandgap Control in Two-Dimensional Semiconductors via Coherent Doping of Plasmonic Hot Electrons

Yu-Hui Chen† — School of Physics, Beijing Institute of Technology, Beijing 10081, China
Ronnie R. Tamming† — MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6012, New Zealand
Kai Chen — MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6012, New Zealand
Zhepeng Zhang — Department of Materials Science and Engineering, College of Engineering, Center for Nanochemistry (CNC), College of Chemistry and Molecular Engineering, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
Yanfeng Zhang — Department of Materials Science and Engineering, College of Engineering, Center for Nanochemistry (CNC), College of Chemistry and Molecular Engineering, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
Justin M. Hodgkiss — MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6012, New Zealand
Richard J. Blaikie — MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, Department of Physics, University of Otago, PO Box 56, Dunedin 9016, New Zealand
Boyang Ding (<EMAIL_ADDRESS>) — MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, Department of Physics, University of Otago, PO Box 56, Dunedin 9016, New Zealand
Min Qiu (<EMAIL_ADDRESS>) — Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China

† These authors contributed equally.

###### Abstract
Bandgap control is of central importance for semiconductor technologies. The traditional means of control is to dope the lattice chemically, electrically or optically with charge carriers. Here, we demonstrate for the first time a widely tunable bandgap (renormalisation up to 650 meV at room temperature) in two-dimensional (2D) semiconductors by coherently doping the lattice with plasmonic hot electrons. In particular, we integrate tungsten-disulfide (WS2) monolayers into a self-assembled plasmonic crystal, which enables coherent coupling between semiconductor excitons and plasmon resonances. Accompanying this process, the plasmon-induced hot electrons can repeatedly fill the WS2 conduction band, leading to population inversion and a significant reconstruction of band structures and exciton relaxations. Our findings provide an innovative and effective measure to engineer the optical responses of 2D semiconductors, allowing great flexibility in the design and optimisation of photonic and optoelectronic devices.
Two-dimensional (2D) semiconductors, such as transition metal dichalcogenides (TMDCs) (Mak et al., 2010; Splendiani et al., 2010), have a direct bandgap at their monolayer limit, exhibiting tremendous potential for the development of next-generation nanoscale devices. As in their bulk counterparts, bandgap control plays a vital role in 2D semiconductor technologies, since it enables the creation of desirable optoelectronic properties that are required in numerous applications, ranging from lasers (Ye et al., 2015) to modulators (Mak and Shan, 2016), photodetectors (Lopez-Sanchez et al., 2013) and photocatalysis (Voiry et al., 2013). The traditional means of control is to dope the lattice chemically (Kim et al., 2015), electrically (Chernikov et al., 2015a) or optically (Chernikov et al., 2015b) with charge carriers, the practicality of which is, however, limited by many factors, e.g. irreversible bandgap modification, contact-type control and the requirement of an ultrastrong pump. Here we report that one can flexibly and effectively modify the electronic band structures of 2D semiconductors by establishing coherent strong coupling between the semiconductor excitons and a plasmonic resonator (Ebbesen et al., 1998; Liu and Lalanne, 2008). In particular, plasmonic resonators are metallic nanostructures that support collective oscillations of electrons, known as plasmons. The excitation of plasmons can produce hot electrons, i.e. highly energetic electrons with non-equilibrium thermal distributions (Clavero, 2014; Brongersma et al., 2015), which, in the strong coupling regime, can repeatedly dope the lattice along with the coherent plasmon-exciton energy exchange. As a result, the bandgap of 2D semiconductors is significantly renormalised, and the renormalisation can be easily altered by changing the detuning between plasmons and excitons.

The schematic of our sample in Fig. 1a shows a WS2 monolayer (ML) deposited onto a plasmonic crystal (PC) (Ding et al., 2013, 2019), which comprises a periodic array of silver-capped silica nanospheres coated with an ultrathin Al2O3 spacer. This metal-insulator-semiconductor configuration constitutes the PC-WS2 hybrid system, supporting plasmon lattice modes propagating on the PC-WS2 interface. Here the top WS2 MLs belong to the family of atomically thin TMDCs, which have been extensively studied (Ye et al., 2014; Sie et al., 2017; Ruppert et al., 2017; Cunningham et al., 2017; Steinhoff et al., 2017) for their unusual exciton-dominated optical responses, such as high absorption and emission efficiency. These properties make the PC-WS2 systems a suitable platform to study plasmon-exciton interactions (Ding et al., 2019). The PC geometries were chosen to excite plasmon lattice modes (Ebbesen et al., 1998; Liu and Lalanne, 2008; Ding et al., 2013, 2019) that can match the frequency of exciton A in WS2 MLs at certain incident angles $\theta$. The plasmon modes show a red-shifting dispersion at higher $\theta$ (yellow curve in Fig. 1b), matching the frequency of exciton A ($E=2.061$ eV) at $\theta=22^{\circ}$. In this case, plasmon modes can coherently couple with excitons, leading to the formation of plasmon-exciton polaritons, i.e. half-light half-matter quasiparticles that inherit properties from both their plasmonic and excitonic components.
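The polariton branches discussed next follow from the standard two-coupled-oscillator description of strong coupling. The following short Python sketch (our illustration using the energies quoted in the text, not necessarily the procedure used to fit Fig. 1b) computes the upper- and lower-polariton energies:

import numpy as np

def polariton_energies(E_pl, E_x=2.061, rabi=0.136):
    # E_pl: plasmon energy at a given angle; all energies in eV
    delta = E_pl - E_x                                # plasmon-exciton detuning
    mean = 0.5 * (E_pl + E_x)
    split = np.sqrt((rabi / 2.0) ** 2 + (delta / 2.0) ** 2)
    return mean + split, mean - split                 # (upper, lower) branch

# at zero detuning (theta = 22 deg) the branch separation equals the
# vacuum Rabi splitting of 136 meV:
E_up, E_lp = polariton_energies(E_pl=2.061)
print(round(E_up - E_lp, 3))  # 0.136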
As a result, the transmission maxima exhibit pronounced splitting features that follow the dispersions of the upper polariton (UP) and lower polariton (LP), indicating the establishment of strong coupling between plasmons and excitons. When the frequency of the plasmon mode is tuned in resonance with exciton A ($\theta=22^{\circ}$), the hybrid system is characterised by a vacuum Rabi splitting of $\hbar\cdot\Omega_{\text{R}}\approx 136$ meV. More detailed analysis of strong plasmon-exciton coupling in equilibrium states can be found in a previous work (Ding et al., 2019) and Fig. S1 in the Supplementary Information (SI).

Upon photoexcitation, the transient optical responses of the PC-WS2 samples can be characterised using femtosecond transient absorption (TA) spectroscopy (Fig. 2a and Methods), which enables incident-angle-resolved probes of the optical properties and dynamics of WS2 MLs that are strongly coupled with plasmon resonances (Darby et al., 2016). Fig. 2b shows the transient transmission spectra ($\Delta\text{T}/\text{T}$) with a pump fluence of $12\,\mu\text{J/cm}^{2}$ as a function of time delay and energy at the tuned state ($\theta=22^{\circ}$), which displays two split relaxation traces flanking the spectral position of exciton A ($E=2.061$ eV), corresponding to the UP and LP. This sharply contrasts with the single-trace relaxations of exciton B ($E=2.471$ eV, Fig. 2b) and of uncoupled exciton A in bare WS2 MLs (Fig. S2 in SI). When the PC is detuned from exciton A, e.g. at $\theta=30^{\circ}$ (Fig. 2c), a single relaxation trace appears, highly resembling the trace of bare exciton A. These ultrashort-timescale results again confirm the strong coupling nature of our PC-WS2 systems.

It is worth noting that the photoinduced absorption minimum associated with the tuned polaritons appears in the 1 to 10 ps range (blue area centred at $E=1.946$ eV in Fig. 2b and the corresponding $\Delta\text{T}/\text{T}$ transient with negative magnitudes in Fig. 2f), obviously delayed compared to the minimum near exciton B (Fig. 2b) and its counterpart in the detuned polaritons (Fig. 2c), which all emerge simultaneously after the arrival of the pump pulse. Similar postponed minima have been found in transient spectra of bare TMDC MLs, where they typically arise from enhanced exciton-exciton and/or exciton-electron interactions under high-power pump that can populate high-density carriers in the lattice (Ceballos et al., 2016; Ruppert et al., 2017; Cunningham et al., 2017; Sie et al., 2017) (see Section 2 in SI for detailed discussions). What is different is that, in our hybrid systems, the delayed minima appear under much lower pump intensity than that in the reference experiments for bare WS2 MLs and are only associated with the tuned polaritons. More importantly, a $\Delta\text{T}/\text{T}$ maximum lasting for $\sim 1$ ps in $E=1.6$ to $1.8$ eV arises in the tuned polariton spectra (Fig. 2b), which, in contrast, is remarkably weaker in the detuned state (Fig. 2c) and is completely absent in bare WS2 MLs (Fig. S2 in SI).
The integrated $\Delta\text{T}/\text{T}$ spectrum near zero probe delay (Fig. 2d) shows that the broad maximum has positive magnitudes, which indicates negative optical absorption or positive gain, providing clear evidence of bandgap renormalisation accompanied by population inversion (Chernikov et al., 2015b). Such phenomena are typically induced by the population of high-density carriers in the 2D semiconductor lattice (Meckbach et al., 2018), which leads to a non-equilibrium occupation of electron and/or hole states that can induce the formation of new quasiparticle bandgaps. This process can be described by (Peyghambarian et al., 1993):

$\Delta E_{\text{g}}=-\underset{q\neq 0}{\sum}V_{\text{s}}(q)\,[f_{\text{e}}(q)+f_{\text{h}}(q)]-\underset{q\neq 0}{\sum}[V_{\text{s}}(q)-V(q)]$ (1)

where $V_{\text{s}}(q)$ and $V(q)$ represent the Fourier transforms of the screened and unscreened Coulomb potentials, while $f_{\text{e}}(q)$ and $f_{\text{h}}(q)$ are the occupation probabilities of electrons and holes with momentum $q$. The onset of the new bandgap can be extracted from the low-energy end of the broad maximum. This means that in our experiments the renormalised bandgap starts at $E_{\text{g}}\approx 1.60$ eV, lying $\sim 400$ meV below the LP and $\sim 650$ meV below the initial bandgap of WS2 MLs (given that the binding energy of exciton A is $\sim 200$ meV; Cunningham et al., 2017). This is, to the best of our knowledge, the largest bandgap renormalisation in 2D semiconductors under such a low pump intensity (12 $\mu$J/cm2) to date, which meanwhile results in the inversion of the carrier population near the newly formed band edge (Chernikov et al., 2015b; Meckbach et al., 2018), presenting as optical gain, i.e. the broad maximum in Fig. 2b and 2d.
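The quoted energy scales are mutually consistent, as the following short arithmetic check shows (our sketch; the LP position is approximated as lying half the Rabi splitting below exciton A, an assumption valid at zero detuning):

E_X = 2.061       # exciton A energy (eV)
E_bind = 0.200    # exciton A binding energy (eV)
rabi = 0.136      # vacuum Rabi splitting (eV)
E_gap0 = E_X + E_bind          # initial quasiparticle gap, ~2.26 eV
E_LP = E_X - rabi / 2          # lower polariton, ~1.99 eV (assumed zero detuning)
E_g_new = 1.60                 # onset of the renormalised gap (eV)
print(E_LP - E_g_new)          # ~0.39 eV, i.e. ~400 meV below the LP
print(E_gap0 - E_g_new)        # ~0.66 eV, i.e. ~650 meV renormalisation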
These unusual spectral and transient features are broadly understood as signatures of high-density carriers, which, in our tuned PC-WS2 systems, have surprisingly been achieved at room temperature and extremely low pump intensity (12 $\mu$J/cm2). This sharply contrasts with similar observations (Chernikov et al., 2015b) in bare WS2 single/bi-layers under ultrastrong photoexcitation (840 $\mu$J/cm2 at 70 K or 3400 $\mu$J/cm2 at room temperature). In their study, the population of high-density carriers is the result of a Mott transition, induced by enhanced exciton-exciton interactions under high-power pump that reduce the exciton binding energy and finally break excitons into an unbound electron-hole plasma (Chernikov et al., 2015b; Steinhoff et al., 2017). In our experiments, the pump power is too low to develop a Mott transition, suggesting that there must be other sources that can provide large numbers of additional carriers.

To understand the origin of these carriers, we turn to one unique property of plasmon-exciton polaritons, i.e. the generation of hot electrons inherited from the polaritons' plasmonic root. In particular, hot electrons are electrons with non-equilibrium thermal distributions, generated by plasmon dephasing from wave-like states through non-radiative decay (Brongersma et al., 2015), which can electrically dope adjacent semiconductors (Fang et al., 2012), modifying their photovoltaic and photocatalytic performance (Clavero, 2014). When plasmons are coupled to exciton-like resonances in semiconductors, the hot electron density in the lattice can be highly enhanced through direct electron tunneling (García De Arquer et al., 2013) or dipole-dipole interaction (Cushing et al., 2012). Therefore it is very likely that the high-density carriers in the tuned PC-WS2 systems are hot electrons introduced during the strong coupling process (see Section 3 in SI for detailed discussions).

The analyses of the relaxation dynamics of the tuned and detuned polaritons support the hot electron model. We note that both the UP and LP in Fig. 2f demonstrate slower decays than those of the detuned states in Fig. 2g (Table S2 in SI for fitting parameters). This observation coincides with a previous study (Boulesbaa et al., 2016), clearly indicating the involvement of plasmonic hot electrons in the strong plasmon-exciton coupling process. Specifically, as the system sits in the strong coupling regime, after photoexcitation, excitons and plasmons coherently exchange energy at the Rabi frequency ($\sim 136$ meV) (Vasa et al., 2013), while the plasmon-to-exciton process is accompanied by hot electron population of the lattice. Such charge population runs at an ultrashort period of $\sim 30$ fs ($T_{\text{R}}=2\pi/\Omega_{\text{R}}$), which is too short to be caught by our equipment, and is also much shorter than the exciton formation ($<1$ ps; Ceballos et al., 2016), the non-radiative decay (at scales of $10$ ps) and the radiative decay (up to a few hundred ps) in WS2 MLs (Ruppert et al., 2017; Sie et al., 2017).
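The $\sim 30$ fs figure follows directly from the Rabi splitting; a one-line worked check:

h = 4.135667e-15        # Planck constant (eV s)
T_R = h / 0.136         # = 2*pi*hbar / (hbar*Omega_R) with hbar*Omega_R = 136 meV
print(T_R)              # ~3.0e-14 s, i.e. ~30 fs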
This means that during exciton relaxation there is frequent tunneling/generation of hot electrons that can repeatedly fill the unoccupied states in the conduction band of WS2 monolayers, which slows down exciton bleaching via Pauli blocking and leads to the extended lifetimes (Section 4 in SI for more details). Given that there is little evidence for other possible carrier sources, e.g. polariton condensates (Byrnes et al., 2014), we conclude that coherent doping of plasmonic hot electrons is the origin of the spectral and transient features that require a high-density population. In particular, the hot electron population repeatedly takes place throughout the whole relaxation process, while the Al2O3 spacer can form a Schottky-like barrier that prevents charges from returning to the metal (Cushing et al., 2012; García De Arquer et al., 2013). As a result, hot electrons can accumulate in the lattice before they decay (within 1 ps; Brongersma et al., 2015), which simultaneously competes with rapid exciton relaxations, transiently converting the intrinsic WS2 monolayers to "n-doped" ones. This leads to the giant bandgap renormalisation with population inversion that peaks at a few hundred femtoseconds (Fig. S10 in SI), and also induces the delayed absorption minima in Fig. 2b and 2f (Section 5 in SI).

To confirm our observations, we performed measurements under $\sim 10$ times higher pump fluence (100 $\mu$J/cm2) (Fig. 3a). Apart from the pronounced broad maxima at low energies, we can see a large spectral shift as well as a remarkably delayed occurrence of the UP and LP maxima, revealing that the accumulation of hot electrons competes with the relaxation dynamics, which significantly enhances the system's nonlinear responses on ultrashort timescales (detailed discussions in Section 6 of SI). Similar to the low-power case, the transient variation of the broad maximum (Fig. 3c) under intense photoexcitation takes $\sim 1.5$ ps from initial excitation to fading. Fig. 3d shows the evolution of the population inversion, where the magnitude and width of the maximum are highly dependent on pump intensity. Under 100 $\mu$J/cm2 pump fluence, the full-width at half-maximum can reach $\sim 200$ meV with highly enhanced magnitudes compared to the maximum under 5 $\mu$J/cm2 pump, also contrasting with the unchanged flat spectral features in bare WS2 MLs. Even in this case, however, the pump fluence is still significantly lower than that in Ref. Chernikov et al. (2015b).

As discussed above, strong plasmon-exciton coupling dramatically modifies the electronic band structures of WS2 monolayers; these modifications are induced, to a large degree, by plasmonic hot electron doping via strong coupling. This effect is extremely hard to observe in traditional exciton-polaritons (Byrnes et al., 2014) and is thus a non-trivial factor that has to be considered when studying light-matter interactions using plasmonic resonators; on the other hand, it provides a new and effective measure to engineer the bandgap of 2D semiconductors.

## Acknowledgments

The authors acknowledge the New Idea Research Funding 2018 (Dodd-Walls Centre for Photonic and Quantum Technologies), the Marsden Fast-Start Fund by the Royal Society of New Zealand through contract MFP-UOO1827 and the Smart Ideas Fund by the Ministry of Business, Innovation and Employment, New Zealand through contract UOOX1802. In addition, this work was supported in part by the National Key Research and Development Program of China (no. 2017YFA0205700) and the National Natural Science Foundation of China (nos. 61425023, 61235007, 61575177 and 51861135201). The authors also acknowledge the visiting Fellowship awarded by the New Zealand Centre at Peking University. We thank Dr. M. Yan and Dr. F. Hong for their help with thin-film deposition, AFM, and SEM measurements.

## Author Contributions

B.D. and Y.-H.C. conceived the project; Z.Z. and B.D. prepared the samples; R.T., K.C., Y.-H.C. and B.D. carried out the optical and other characterization; Y.-H.C. and B.D. performed the simulation; Y.Z., M.Q., R.J.B., and B.D. supervised the projects; Y.-H.C. and B.D. prepared the manuscript; all authors discussed and analyzed the results.

## References

* Mak et al. (2010) K. F. Mak, C. Lee, J. Hone, J. Shan, and T. F. Heinz, Phys. Rev. Lett. 105, 136805 (2010), arXiv:1004.0546.
* Splendiani et al. (2010) A. Splendiani, L. Sun, Y. Zhang, T. Li, J. Kim, C. Y. Chim, G. Galli, and F. Wang, Nano Lett. 10, 1271 (2010), arXiv:1308.1834.
* Ye et al. (2015) Y. Ye, Z. J. Wong, X. Lu, X. Ni, H. Zhu, X. Chen, Y. Wang, and X. Zhang, Nat. Photonics 9, 733 (2015), arXiv:1503.06141.
* Mak and Shan (2016) K. F. Mak and J. Shan, Nat. Photonics 10, 216 (2016).
* Lopez-Sanchez et al. (2013) O. Lopez-Sanchez, D. Lembke, M. Kayci, A. Radenovic, and A. Kis, Nat. Nanotechnol. 8, 497 (2013).
* Voiry et al. (2013) D. Voiry, H. Yamaguchi, J. Li, R. Silva, D. C. Alves, T. Fujita, M. Chen, T. Asefa, V. B. Shenoy, G. Eda, and M. Chhowalla, Nat. Mater. 12, 850 (2013), arXiv:1212.1513.
* Kim et al. (2015) J. Kim, S. S. Baik, S. H. Ryu, Y. Sohn, S. Park, B.-G. Park, J. Denlinger, Y. Yi, H. J. Choi, and K. S. Kim, Science 349, 723 (2015).
* Chernikov et al. (2015a) A. Chernikov, A. M. Van Der Zande, H. M. Hill, A. F. Rigosi, A. Velauthapillai, J. Hone, and T. F. Heinz, Phys. Rev. Lett. 115, 1 (2015a).
* Chernikov et al. (2015b) A. Chernikov, C. Ruppert, H. M. Hill, A. F. Rigosi, and T. F. Heinz, Nat. Photonics 9, 466 (2015b).
* Ebbesen et al. (1998) T. W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio, and P. A. Wolff, Nature 391, 667 (1998).
* Liu and Lalanne (2008) H. Liu and P. Lalanne, Nature 452, 728 (2008).
* Clavero (2014) C. Clavero, Nat. Photonics 8, 95 (2014).
* Brongersma et al. (2015) M. L. Brongersma, N. J. Halas, and P. Nordlander, Nat. Nanotechnol. 10, 25 (2015).
* Ding et al. (2013) B. Ding, C. Hrelescu, N. Arnold, G. Isic, and T. A. Klar, Nano Lett. 13, 378 (2013).
* Ding et al. (2019) B. Ding, Z. Zhang, Y.-H. Chen, Y. Zhang, R. J. Blaikie, and M. Qiu, ACS Nano 13, 1333 (2019).
* Ye et al. (2014) Z. Ye, T. Cao, K. O'Brien, H. Zhu, X. Yin, Y. Wang, S. G. Louie, and X. Zhang, Nature 513, 214 (2014), arXiv:1403.5568.
* Sie et al. (2017) E. J. Sie, A. Steinhoff, C. Gies, C. H. Lui, Q. Ma, M. Rösner, G. Schönhoff, F. Jahnke, T. O. Wehling, Y.-H. Lee, J. Kong, P. Jarillo-Herrero, and N. Gedik, Nano Lett. 17, 4210 (2017).
* Ruppert et al. (2017) C. Ruppert, A. Chernikov, H. M. Hill, A. F. Rigosi, and T. F. Heinz, Nano Lett. 17, 644 (2017).
* Cunningham et al. (2017) P. D. Cunningham, A. T. Hanbicki, K. M. McCreary, and B. T. Jonker, ACS Nano 11, 12601 (2017).
* Steinhoff et al. (2017) A. Steinhoff, M. Florian, M. Rösner, G. Schönhoff, T. O. Wehling, and F. Jahnke, Nat. Commun. 8, 1166 (2017), arXiv:1705.05202.
* Darby et al. (2016) B. L. Darby, B. Auguié, M. Meyer, A. E. Pantoja, and E. C. Le Ru, Nat. Photonics 10, 40 (2016), arXiv:1509.07216.
* Ceballos et al. (2016) F. Ceballos, Q. N. Cui, M. Z. Bellus, and H. Zhao, Nanoscale 8, 11681 (2016), arXiv:1607.04856.
* Meckbach et al. (2018) L. Meckbach, T. Stroucken, and S. W. Koch, Appl. Phys. Lett. 112 (2018), 10.1063/1.5017069.
* Peyghambarian et al. (1993) N. Peyghambarian, S. W. Koch, and A. Mysyrowicz, Introduction to Semiconductor Optics (Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1993).
* Fang et al. (2012) Z. Fang, Y. Wang, Z. Liu, A. Schlather, P. M. Ajayan, F. H. Koppens, P. Nordlander, and N. J. Halas, ACS Nano 6, 10222 (2012).
* García De Arquer et al. (2013) F. P. García De Arquer, A. Mihi, D. Kufer, and G. Konstantatos, ACS Nano 7, 3581 (2013).
* Cushing et al. (2012) S. K. Cushing, J. Li, F. Meng, T. R. Senty, S. Suri, M. Zhi, M. Li, A. D. Bristow, and N. Wu, J. Am. Chem. Soc. 134, 15033 (2012).
* Boulesbaa et al. (2016) A. Boulesbaa, V. E. Babicheva, K. Wang, I. I. Kravchenko, M. W. Lin, M. Mahjouri-Samani, C. B. Jacobs, A. A. Puretzky, K. Xiao, I. Ivanov, C. M. Rouleau, and D. B. Geohegan, ACS Photonics 3, 2389 (2016).
* Vasa et al. (2013) P. Vasa, W. Wang, R. Pomraenke, M. Lammers, M. Maiuri, C. Manzoni, G. Cerullo, and C. Lienau, Nat. Photonics 7, 128 (2013).
* Byrnes et al. (2014) T. Byrnes, N. Y. Kim, and Y. Yamamoto, Nat. Phys. 10, 803 (2014).

Figure 1: Structures of a PC-WS2 sample and steady-state optical properties. a, schematic of polariton formation in a WS2 ML supported on a self-assembled plasmonic crystal. The Al2O3 spacer is not depicted for simplicity. Right insets: side- and top-view scanning electron microscope (SEM) images. b, angle-resolved transmission spectra under p-polarised illumination and their projection (top x-y plane), in which the spectral positions of exciton A (X${}_{\text{A}}$) and B (X${}_{\text{B}}$), the calculated dispersions of the plasmon lattice modes (yellow curve), and the upper and lower branches of the polaritons (orange curves) are indicated. The tuned angle ($\theta=22^{\circ}$) is marked with a black dashed line. Refer to Section 1 in the SI for a detailed discussion of the strong plasmon-exciton coupling and its dispersion.
Figure 2: Transient optical responses. a, schematic of angle-resolved ultrafast pump-probe spectroscopy; b, d and f refer to normalised differential transmission spectra ($\Delta\text{T}/\text{T}$) at the tuned angle ($\theta=22^{\circ}$), while c, e and g refer to $\Delta\text{T}/\text{T}$ at the detuned angle ($\theta=30^{\circ}$); b and c are intensity plots of $\Delta\text{T}/\text{T}$ as a function of time delay and probe photon energy, using the same colour bar (which is also used by Fig. 3a); d and e are $\Delta\text{T}/\text{T}$ spectra averaged within the time span from 0.1 to 0.7 ps after the pump; f and g are $\Delta\text{T}/\text{T}$ transients at specific energies (labelled with different colours), in which scatter symbols and solid curves represent measured and fitted data, respectively. Dashed frames in panels b, d and e mark the spectral region of the broad maxima (see main text). All measurements were carried out using 400 nm ($E=3.1$ eV) pump pulses with 100 fs duration and a pump fluence of 12 $\mu$J/cm2 at room temperature. The instrument response function is shown as the grey area in panel g.

Figure 3: Bandgap renormalisation and evolution of population inversion. a, intensity plot of $\Delta\text{T}/\text{T}$ spectra of PC-WS2 under 100 $\mu$J/cm2 pump fluence at $\theta=22^{\circ}$, where orange (blue) colour represents the maximum (minimum) value. b, delay-time-dependent spectra ($\Delta\text{T}/\text{T}$) at the energies of the UP, LP and exciton B, extracted from panel a. Solid curves are plotted only for visual guidance. c, $\Delta\text{T}/\text{T}$ spectra at different delay times, extracted from the white dashed frame in panel a; the red dashed vertical line indicates the onset of the renormalised bandgap. d, comparison of $\Delta\text{T}/\text{T}$ spectra at a delay of 0.96 ps between PC-WS2 (left) and WS2 MLs (right) under gradually increasing pump fluence.
# ESBM: An Entity Summarization BenchMark

Qingxia Liu1, Gong Cheng1, Kalpa Gunaratna2, Yuzhong Qu1

1 National Key Laboratory for Novel Software Technology, Nanjing University, China; <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
2 Samsung Research America, Mountain View CA, USA; <EMAIL_ADDRESS>

###### Abstract
Entity summarization is the problem of computing an optimal compact summary for an entity by selecting a size-constrained subset of triples from RDF data. Entity summarization supports a multiplicity of applications and has led to fruitful research. However, there is a lack of evaluation efforts that cover the broad spectrum of existing systems. One reason is a lack of benchmarks for evaluation. Some benchmarks are no longer available, while others are small and have limitations. In this paper, we create an Entity Summarization BenchMark (ESBM) which overcomes the limitations of existing benchmarks and meets standard desiderata for a benchmark. Using this largest available benchmark for evaluating general-purpose entity summarizers, we perform the most extensive experiment to date where 9 existing systems are compared. Considering that all of these systems are unsupervised, we also implement and evaluate a supervised learning based system for reference.

###### Keywords: Entity summarization · Triple ranking · Benchmarking

## 1 Introduction

RDF data describes entities with triples representing property values. In an RDF dataset, the description of an entity comprises all the RDF triples where the entity appears as the subject or the object. An example entity description is shown in Fig. 1. Entity descriptions can be large. An entity may be described in dozens or hundreds of triples, exceeding the capacity of a typical user interface. A user served with all of those triples may suffer information overload and find it difficult to quickly identify the small set of triples that are truly needed. To solve the problem, an established research topic is _entity summarization_ [15], which aims to compute an optimal compact summary for the entity by selecting a size-constrained subset of triples. An example entity summary under the size constraint of 5 triples is shown in the bottom right corner of Fig. 1.

Figure 1: Description of entity Tim Berners-Lee and a summary thereof.

Entity summarization supports a multiplicity of applications [6, 21]. Entity summaries constitute entity cards displayed in search engines [9], provide background knowledge for enriching documents [26], and facilitate research activities with humans in the loop [3, 4]. This far-reaching applicability has led to fruitful research, as reviewed in our recent survey paper [15]. Many entity summarizers have been developed, most of which generate summaries for general purposes.

Research Challenges. However, two challenges face the research community. First, there is a _lack of benchmarks_ for evaluating entity summarizers. As shown in Table 1, some benchmarks are no longer available. Others are available [22, 7, 8] but they are small and have limitations. Specifically, [22] has a task-specific nature, and [7, 8] exclude classes and/or literals. These benchmarks could not support a comprehensive evaluation of general-purpose entity summarizers. Second, there is a _lack of evaluation efforts_ that cover the broad spectrum of existing systems to compare their performance and assist practitioners in choosing solutions appropriate to their applications.

Contributions.
We address the challenges with two contributions. First, we create an Entity Summarization BenchMark (ESBM) which overcomes the limitations of existing benchmarks and meets the desiderata for a successful benchmark [18]. ESBM has been published on GitHub with extended documentation and a permanent identifier on w3id.org (https://w3id.org/esbm) under the ODC-By license. As the largest available benchmark for evaluating general-purpose entity summarizers, ESBM contains 175 heterogeneous entities sampled from two datasets, for which 30 human experts create 2,100 general-purpose ground-truth summaries under two size constraints. Second, using ESBM, we evaluate 9 existing general-purpose entity summarizers. It represents the most extensive evaluation effort to date. Considering that existing systems are unsupervised, we also implement and evaluate a supervised learning based entity summarizer for reference.

In this paper, for the first time we comprehensively describe the creation and use of ESBM. We report ESBM v1.2—the latest version, while early versions have successfully supported the entity summarization shared task at the EYRE 2018 workshop (https://sites.google.com/view/eyre18/sharedtasks) and the EYRE 2019 workshop (https://sites.google.com/view/eyre19/sharedtasks). We will also educate on the use of ESBM at an ESWC 2020 tutorial on entity summarization (https://sites.google.com/view/entity-summarization-tutorials/eswc2020).

The remainder of the paper is organized as follows. Section 2 reviews related work and limitations of existing benchmarks. Section 3 describes the creation of ESBM, which is analyzed in Section 4. Section 5 presents our evaluation. In Section 6 we discuss limitations of our study and perspectives for future work.

Table 1: Existing benchmarks for evaluating entity summarization.

| Benchmark | Dataset | Number of entities | Availability |
|---|---|---|---|
| WhoKnows?Movies! [22] | Freebase | 60 | Available (http://yovisto.com/labs/iswc2012) |
| Langer et al. [13] | DBpedia | 14 | Unavailable |
| FRanCo [1] | DBpedia | 265 | Unavailable |
| Benchmark for evaluating RELIN [2] | DBpedia | 149 | Unavailable |
| Benchmark for evaluating DIVERSUM [20] | IMDb | 20 | Unavailable |
| Benchmark for evaluating FACES [7] | DBpedia | 50 | Available (http://wiki.knoesis.org/index.php/FACES) |
| Benchmark for evaluating FACES-E [8] | DBpedia | 80 | Available (http://wiki.knoesis.org/index.php/FACES) |

## 2 Related Work

We review methods and evaluation efforts for entity summarization.

Methods for Entity Summarization. In a recent survey [15] we have categorized the broad spectrum of research on entity summarization. Below we briefly review _general-purpose_ entity summarizers, which mainly rely on generic technical features that can apply to a wide range of domains and applications. We will not address methods that are domain-specific (e.g., for movies [25] or timelines [5]), task-specific (e.g., for facilitating entity resolution [3] or entity linking [4]), or context-aware (e.g., contextualized by a document [26] or a query [9]).

RELIN [2] uses a weighted PageRank model to rank triples according to their statistical informativeness and relatedness. DIVERSUM [20] ranks triples by property frequency and generates a summary with a strong constraint that avoids selecting triples having the same property. SUMMARUM [24] and LinkSUM [23] mainly rank triples by the PageRank scores of property values that are entities. LinkSUM also considers backlinks from values.
FACES [7], and its extension FACES-E [8] which adds support for literals, cluster triples by their bag-of-words based similarity and choose top-ranked triples from as many different clusters as possible. Triples are ranked by statistical informativeness and property value frequency. CD [28] models entity summarization as a quadratic knapsack problem that maximizes the statistical informativeness of the selected triples and in the meantime minimizes the string, numerical, and logical similarity between them. In ES-LDA [17], ES-LDAext [16], and MPSUM [27], a Latent Dirichlet Allocation (LDA) model is learned where properties are treated as topics, and each property is a distribution over all the property values. Triples are ranked by the probabilities of properties and values. MPSUM further avoids selecting triples having the same property. BAFREC [12] categorizes triples into meta-level and data-level. It ranks meta-level triples by their depths in an ontology and ranks data-level triples by property and value frequency. Triples having textually similar properties are penalized to improve diversity. KAFCA [11] ranks triples by the depths of properties and values in a hierarchy constructed by performing Formal Concept Analysis (FCA). It tends to select triples containing infrequent properties but frequent values, where frequency is computed at the word level.

Limitations of Existing Benchmarks. For evaluating entity summarization, compared with task-completion-based _extrinsic evaluation_, ground-truth-based _intrinsic evaluation_ is more popular because it is easy to perform and the results are reproducible. Its idea is to create a benchmark consisting of human-made ground-truth summaries, and then compute how close a machine-generated summary is to a ground-truth summary. Table 1 lists known benchmarks, including dedicated benchmarks [22, 13, 1] and those created for evaluating a particular entity summarizer [2, 20, 7, 8]. It is not surprising that these benchmarks are not very large, since it is expensive to manually create high-quality summaries for a large set of entities. Unfortunately, some of these benchmarks are not publicly available at this moment. Three are available [22, 7, 8] but they are relatively small and have limitations. Specifically, WhoKnows?Movies! [22] is not a set of ground-truth summaries but annotates each triple with the ratio of movie questions that were correctly answered based on that triple, as an indicator of its importance. This kind of task-specific ground truth may not be suitable for evaluating general-purpose entity summarizers. The other two available benchmarks were created for evaluating FACES/-E [7, 8]. Classes and/or literals are not included because they could not be processed by FACES/-E and hence were filtered out. Such benchmarks could not comprehensively evaluate most of the existing entity summarizers [2, 20, 28, 27, 12, 11] that can handle classes and literals. These limitations of available benchmarks motivated us to create a new ground truth consisting of _general-purpose summaries_ for a _larger set of entities_ involving _more comprehensive triples_ where property values can be entities, classes, or literals.
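Although the reviewed systems differ in detail, several share a frequency-based ranking core with a property-diversity constraint. The following minimal Python sketch (our illustration only, not the implementation of any system above) shows this common pattern:

from collections import Counter

def property_frequency(all_triples):
    # frequency of each property across the whole dataset
    return Counter(p for _, p, _ in all_triples)

def summarize(entity_triples, property_freq, k=5):
    # entity_triples: list of (s, p, o) tuples describing one entity
    ranked = sorted(entity_triples, key=lambda t: -property_freq[t[1]])
    summary, used_props = [], set()
    for s, p, o in ranked:            # prefer distinct properties (diversity)
        if len(summary) == k:
            break
        if p not in used_props:
            summary.append((s, p, o))
            used_props.add(p)
    for t in ranked:                  # fill up if fewer than k distinct properties
        if len(summary) == k:
            break
        if t not in summary:
            summary.append(t)
    return summary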
## 3 Creating ESBM

To overcome the above-mentioned limitations of existing benchmarks, we created a new benchmark called ESBM. To date, it is the largest available benchmark for evaluating general-purpose entity summarizers. In this section, we will first specify our design goals. Then we describe the selection of entity descriptions and the creation of ground-truth summaries. We partition the data to support cross-validation for parameter fitting. Finally, we summarize how our design goals are achieved and how ESBM meets standard desiderata for a benchmark.

### 3.1 Design Goals

The creation of ESBM has two main design goals. First, a successful benchmark should meet seven desiderata [18]: accessibility, affordability, clarity, relevance, solvability, portability, and scalability, which we will detail in Section 3.5. Our design of ESBM aims to satisfy these basic requirements. Second, in Section 2 we discussed the limitations of available benchmarks, including task specificness, small size, and triple incomprehensiveness. Besides, all the existing benchmarks use a single dataset and hence may weaken the generalizability of evaluation results. We aim to overcome these limitations when creating ESBM. In Section 3.5 we will summarize how our design goals are achieved.

### 3.2 Entity Descriptions

To choose entity descriptions to summarize, we sample entities from selected datasets and filter their triples. The process is detailed below.

Datasets. We sample entities from two datasets of different kinds: an encyclopedic dataset and a domain-specific dataset. For the encyclopedic dataset we choose DBpedia [14], which has been used in other benchmarks [13, 1, 2, 7, 8]. We use the English version of DBpedia 2015-10 (http://wiki.dbpedia.org/dbpedia-dataset-version-2015-10), the latest version when we started to create ESBM. For the domain-specific dataset we choose LinkedMDB [10], which is a popular movie database. The movie domain is also the focus of some existing benchmarks [22, 20], possibly because this domain is familiar to the lay audience so that it would be easy to find qualified human experts to create ground-truth summaries. We use the latest available version of LinkedMDB (http://www.cs.toronto.edu/~oktie/linkedmdb/linkedmdb-latest-dump.zip).

Entities. For DBpedia we sample entities from five large classes: Agent, Event, Location, Species, and Work. They collectively contain 3,501,366 entities (60%) in the dataset. For LinkedMDB we sample from Film and Person, which contain 159,957 entities (24%) in the dataset. Entities from different classes are described by very different properties, as we will see in Section 4.3, and hence help to assess the generalizability of an entity summarizer. According to the human efforts we could afford, from each class we randomly sample 25 entities; a sketch of this step follows below. The total number of selected entities is 175. Each selected entity should be described in at least 20 triples so that summarization would not be a trivial task. This requirement follows common practice in the literature [1, 2, 20, 7], where a minimum constraint in the range of 10–20 was posed.

Figure 2: Composition of entity descriptions (the left bar in each group), top-5 ground-truth summaries (the middle bar), and top-10 ground-truth summaries (the right bar), grouped by class in DBpedia (D) and LinkedMDB (L). (a) Average number of triples describing an entity. (b) Average number of distinct properties describing an entity.
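A minimal sketch of the sampling step (our assumption about the procedure, not the released ESBM code) is:

import random

def sample_entities(entities_by_class, triple_count, per_class=25, min_triples=20):
    # entities_by_class: {class: [entity IRIs]}; triple_count: {entity: n}
    # assumes each class has at least per_class eligible entities
    sampled = {}
    for cls, entities in entities_by_class.items():
        eligible = [e for e in entities if triple_count[e] >= min_triples]
        sampled[cls] = random.sample(eligible, per_class)
    return sampled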
Triples. For DBpedia, entity descriptions comprise triples in the following dump files: _instance types_, _instance types transitive_, _YAGO types_, _mappingbased literals_, _mappingbased objects_, _labels_, _images_, _homepages_, _persondata_, _geo coordinates mappingbased_, and _article categories_. We do not import dump files that provide metadata about Wikipedia articles, such as _page links_ and _page length_. We do not import _short abstracts_ and _long abstracts_ as they provide handcrafted textual entity summaries; it would be inappropriate to include them in a benchmark for evaluating entity summarization. For LinkedMDB we import all the triples in the dump file except sameAs links, which do not express facts about entities but are of a more technical nature. Finally, as shown in Fig. 2a (the left bar in each group), the mean number of triples in an entity description is in the range of 25.88–52.44 depending on the class, and the overall mean value is 37.62.

### 3.3 Ground-Truth Summaries

We invite 30 researchers and students to create ground-truth summaries for entity descriptions. All the participants are familiar with RDF.

Task Assignment. Each participant is assigned 35 entities consisting of 5 entities randomly selected from each of the 7 classes in ESBM. The assignment is controlled to ensure that each entity in ESBM is processed by 6 participants. A participant creates two summaries for each entity description by selecting different numbers of triples: a _top-5 summary_ containing 5 triples, and a _top-10 summary_ containing 10 triples. Therefore, we will be able to evaluate entity summarizers under different size constraints. The choice of these two numbers follows previous work [2, 7, 8]. Participants work independently, and they may create different summaries for an entity. It is not feasible to ask participants to reach an agreement. It is also not reasonable to merge different summaries into a single version. So we keep different summaries and will use all of them in the evaluation. The total number of ground-truth summaries is $175\cdot 6\cdot 2=2,100$.

Figure 3: User interface for creating ground-truth entity summaries.

Procedure. Participants are instructed to create _general-purpose summaries_ that are not specifically created for any particular task. They read and select triples using the Web-based user interface shown in Fig. 3. All the triples in an entity description are listed in random order, but those having a common property are placed together for convenient reading and comparison. For IRIs, their human-readable labels (rdfs:label) are shown if available. To help participants understand a property value that is an unfamiliar entity, a click on it will open a pop-up showing a short textual description extracted from the first paragraph of its Wikipedia/IMDb page. Any triple can be selected into the top-5 summary, the top-10 summary, or both. The top-5 summary is not required to be a subset of the top-10 summary.

### 3.4 Training, Validation, and Test Sets

Some entity summarizers need to tune hyperparameters or fit models. To make their evaluation results comparable with each other, we specify a split of our data into training, validation, and test sets, as sketched below. We provide a partition of the 175 entities in ESBM into 5 equally sized subsets $P_{0},\ldots,P_{4}$ to support 5-fold cross-validation. Entities of each class are partitioned evenly among the subsets. For $0\leq i\leq 4$, the $i$-th fold uses $P_{i},P_{i+1\text{ mod }5},P_{i+2\text{ mod }5}$ as the training set (e.g., for model fitting), uses $P_{i+3\text{ mod }5}$ for validation (e.g., tuning hyperparameters), and retains $P_{i+4\text{ mod }5}$ as the test set. Evaluation results are averaged over the 5 folds.
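The fold construction directly encodes the protocol just described; a minimal Python sketch:

def folds(partitions):
    # partitions: list [P_0, ..., P_4] of entity subsets (lists)
    for i in range(5):
        train = partitions[i] + partitions[(i + 1) % 5] + partitions[(i + 2) % 5]
        valid = partitions[(i + 3) % 5]
        test = partitions[(i + 4) % 5]
        yield train, valid, test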
It contains 175 entities, which makes it 2–3 times as large as available benchmarks [22, 7, 8]. In ESBM, property values are not filtered as in [7, 8] but can be any entity, class, or literal. Different from the task-specific nature of [22], ESBM provides general-purpose ground-truth summaries for evaluating general-purpose entity summarizers. Besides, ESBM meets the seven desiderata proposed in [18] as follows.

* • Accessibility. ESBM is publicly available and has a permanent identifier on w3id.org.
* • Affordability. ESBM comes with an open-source program and example code for evaluation. The cost of using ESBM is minimized.
* • Clarity. ESBM is documented clearly and concisely.
* • Relevance. ESBM samples entities from two real datasets that have been widely used. The summarization tasks are natural and representative.
* • Solvability. An entity description in ESBM has at least 20 triples and a mean number of 37.62 triples, from which 5 or 10 triples are to be selected. The summarization tasks are neither trivial nor too difficult.
* • Portability. ESBM can be used to evaluate any general-purpose entity summarizer that can process RDF data.
* • Scalability. ESBM samples 175 entities from 7 classes. It is reasonably large and diverse for evaluating mature entity summarizers, yet not so large as to preclude evaluating research prototypes.

However, ESBM has its own limitations, which we will discuss in Section 6.

## 4 Analyzing ESBM

In this section, we will first characterize ESBM by providing some basic statistics and analyzing the triple composition and heterogeneity of entity descriptions. Then we compute inter-rater agreement to show how much consensus exists in the ground-truth summaries given by different participants.

### 4.1 Basic Statistics

The 175 entity descriptions in ESBM collectively contain 6,584 triples, of which 37.44% are selected into at least one top-5 summary and 58.15% appear in at least one top-10 summary, showing a wide selection by the participants. However, many of them are selected by only a single participant; just 20.46% and 40.23% are selected by more than one participant into top-5 and top-10 summaries, respectively. We will further analyze inter-rater agreement in Section 4.4.

We calculate the overlap between the top-5 and the top-10 summaries created by the same participant for the same entity. The mean overlap is in the range of 4.80–4.99 triples depending on the class, and the overall mean value is 4.91, showing that the top-5 summary is usually a subset of the top-10 summary.

### 4.2 Triple Composition

In Fig. 2 we present the composition of entity descriptions (the left bar in each group) and their ground-truth summaries (the middle bar for top-5 and the right bar for top-10) in ESBM, in terms of the average number of triples describing an entity (Fig. 2a) and in terms of the average number of distinct properties describing an entity (Fig. 2b). Properties are divided into literal-valued, class-valued, and entity-valued. Triples are divided accordingly.

In Fig. 2a, both class-valued and entity-valued triples occupy a considerable proportion of the entity descriptions in DBpedia. Entity-valued triples predominate in LinkedMDB. Literal-valued triples account for a small proportion in both datasets. However, they constitute 30% of top-5 ground-truth summaries and 25% of top-10 summaries. Entity summarizers that cannot process literals [24, 23, 7, 17] have to ignore these notable proportions, which can significantly hurt their performance.
In Fig. 2b, in terms of distinct properties, entity-valued and literal-valued triples have comparable numbers in entity descriptions, since many entity-valued properties are multi-valued. Specifically, an entity is described by 13.24 distinct properties on average, including 5.31 literal-valued (40%) and 6.93 entity-valued (52%). Multi-valued properties appear in every entity description and they constitute 35% of the triples. However, in top-5 ground-truth summaries, the average number of distinct properties is 4.70, very close to 5, indicating that the participants are not inclined to select multiple values of a property. Entity summarizers that prefer diverse properties [20, 7, 8, 28, 27, 12] may exhibit good performance.

Figure 4: Jaccard similarity between property sets describing different classes.

Table 2: Popular properties in ground-truth summaries.

| Class | In top-5 summaries | In top-10 summaries |
|---|---|---|
| Agent | type, birthDate | type, subject, birthDate |
| Event | type, date | type, subject, date, label |
| Location | type, country | type, country, subject |
| Species | type, family | family, type, order, class, genus, subject, kingdom |
| Work | type | type, subject, genre |
| Film | director, type | director, actor, type, writer, producer, date, language |
| Person | type, actor | type, actor, label, page |

### 4.3 Entity Heterogeneity

Entities from different classes are described by different sets of properties. For each class we identify the set of properties describing at least one entity from the class. The Jaccard similarity between the property sets for each pair of classes is very low, as shown in Fig. 4. Such heterogeneous entity descriptions help to assess the generalizability of an entity summarizer.

Table 2 shows popular properties that appear in at least 50% of the ground-truth summaries for each class. Some universal properties like rdf:type and dct:subject are popular for most classes. We also see class-specific properties, e.g., dbo:birthDate for Agent and dbo:family for Species. However, the results suggest that it would be unrealistic to generate good summaries by manually selecting properties for each class. For example, among the 13.24 distinct properties describing an entity, only 1–2 are popular in top-5 ground-truth summaries. The importance of properties is generally contextualized by concrete entities.

### 4.4 Inter-Rater Agreement

Recall that each entity in ESBM has six top-5 ground-truth summaries and six top-10 summaries created by different participants. We calculate the average overlap between these summaries in terms of the number of common triples they contain. As shown in Table 3, the results are generally comparable with those reported for other benchmarks in the literature. There is a moderate degree of agreement between the participants.

Table 3: Inter-rater agreement.

| | ESBM | [2] | [7] | [8] |
|---|---|---|---|---|
| Overlap between top-5 summaries | 1.99 (39.8%) | 2.91 (58.2%) | 1.92 (38.4%) | 2.12 (42.4%) |
| Overlap between top-10 summaries | 5.42 (54.2%) | 7.86 (78.6%) | 4.64 (46.4%) | 5.44 (54.4%) |
| Ground-truth summaries per entity | 6 | 4.43 | $\geq$ 7 | $\geq$ 4 |

## 5 Evaluating with ESBM

We used ESBM to perform the most extensive evaluation of general-purpose entity summarizers to date. In this section, we will first describe evaluation criteria.
Then we introduce the entity summarizers that we evaluate. Finally we present evaluation results.

### 5.1 Evaluation Criteria

Let $S_{m}$ be a machine-generated entity summary and let $S_{h}$ be a human-made ground-truth summary. To compare $S_{m}$ with $S_{h}$ and assess the quality of $S_{m}$ based on how close $S_{m}$ is to $S_{h}$, it is natural to compute precision (P), recall (R), and F1, whose values are in the range of 0–1:

$\text{P}=\frac{|S_{m}\cap S_{h}|}{|S_{m}|}\,,\quad\text{R}=\frac{|S_{m}\cap S_{h}|}{|S_{h}|}\,,\quad\text{F1}=\frac{2\cdot\text{P}\cdot\text{R}}{\text{P}+\text{R}}\,.$ (1)

In the experiments we configure entity summarizers to output at most $k$ triples and we set $k=|S_{h}|$, i.e., $k=5$ and $k=10$ are our two settings, corresponding to the sizes of ground-truth summaries. We trivially have P$=$R$=$F1 if $|S_{m}|=|S_{h}|$. However, some entity summarizers may output fewer than $k$ triples. For example, DIVERSUM [20] disallows an entity summary to contain triples having the same property. It is possible that an entity description contains fewer than $k$ distinct properties, and hence DIVERSUM has to output fewer than $k$ triples. In this case, P$\neq$R and one should rely on F1.

In the evaluation, for each entity in ESBM, we compare a machine-generated summary with each of the 6 ground-truth summaries by calculating F1, and aggregate the resulting values. As the aggregation function we report the average, to show an overall match with all the different ground truths; on the website we also give results under the maximum, to show the best match with a single ground truth. Finally we report the mean F1 over all the entities.

### 5.2 Participating Entity Summarizers

We not only evaluate existing entity summarizers but also compare them with two special entity summarizers we create: an oracle entity summarizer, which is used to show the best possible performance on ESBM, and a new supervised learning based entity summarizer.

Existing Entity Summarizers. We evaluate 9 out of the 12 general-purpose entity summarizers reviewed in Section 2. We re-implement RELIN [2], DIVERSUM [20], LinkSUM [23], FACES [7], FACES-E [8], and CD [28], while MPSUM [27], BAFREC [12], and KAFCA [11] are open source. We exclude SUMMARUM [24], ES-LDA [17], and ES-LDAext [16] because LinkSUM represents an extension of SUMMARUM, and MPSUM represents an extension of ES-LDA and ES-LDAext.

We follow the original implementation and suggested configuration of existing entity summarizers as far as possible. However, for RELIN, we replace its Google-based relatedness measure with a string metric [19] because Google's search API is no longer free. We also use this metric to replace the unavailable UMBC SimService used in FACES-E. For DIVERSUM, we ignore its witness count measure since it does not apply to ESBM. For LinkSUM, we obtain backlinks between entities in LinkedMDB via their corresponding entities in DBpedia. RELIN, CD, and LinkSUM compute a weighted combination of two scoring components; we tune these hyperparameters in the range of 0–1 in 0.01 increments. Since these summarizers are unsupervised, we use both the training set and the validation set described in Section 3.4 for tuning hyperparameters.

Oracle Entity Summarizer. We implement an entity summarizer denoted by ORACLE to approximate the best possible performance on ESBM and form a reference point used for comparisons. ORACLE simply outputs the $k$ triples that are selected by the most participants into ground-truth summaries.
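To make the evaluation protocol and the ORACLE baseline concrete, here is a minimal sketch in Python; it is our own illustration, not the official ESBM evaluation code, and function names are ours. Summaries are represented as sets of (subject, property, value) triples.

```python
from collections import Counter

def f1(machine, ground_truth):
    """F1 of Eq. (1) between two summaries given as sets of triples."""
    common = len(machine & ground_truth)
    if common == 0:
        return 0.0
    p = common / len(machine)          # precision
    r = common / len(ground_truth)     # recall
    return 2 * p * r / (p + r)

def average_f1(machine, ground_truths):
    """Average F1 against all ground-truth summaries of one entity
    (in ESBM, six per entity and size constraint)."""
    return sum(f1(machine, gt) for gt in ground_truths) / len(ground_truths)

def oracle_summary(description, ground_truths, k):
    """ORACLE: output the k triples selected by the most participants."""
    votes = Counter(t for gt in ground_truths for t in gt)
    ranked = sorted(description, key=lambda t: -votes[t])
    return set(ranked[:k])
```

Evaluating a summarizer on ESBM then amounts to averaging average_f1 over all the test entities of a dataset.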
Supervised Learning Based Entity Summarizer. Existing general-purpose entity summarizers are unsupervised. We implement a supervised learning based entity summarizer with features that are used by existing entity summarizers. A triple with property $p$ and value $v$ describing entity $e$ is represented by the following features:

* • $\mathtt{gf}_{\mathbb{T}}$: the number of triples in the dataset where $p$ appears [23, 12],
* • $\mathtt{lf}$: the number of triples in the description of $e$ where $p$ appears [20, 23],
* • $\mathtt{vf}_{\mathbb{T}}$: the number of triples in the dataset where $v$ appears [7, 8, 12], and
* • $\mathtt{si}$: the self-information of the triple [2, 7, 8, 28].

We also add three binary features:

* • $\mathtt{isC}$: whether $v$ is a class,
* • $\mathtt{isE}$: whether $v$ is an entity, and
* • $\mathtt{isL}$: whether $v$ is a literal.

(A sketch of this feature extraction is given at the end of this subsection.)

Based on the training and validation sets described in Section 3.4, we implement and tune 6 pointwise learning-to-rank models provided by Weka: SMOreg, LinearRegression, MultilayerPerceptron, AdditiveRegression, REPTree, and RandomForest. Each model outputs the $k$ top-ranked triples as a summary.
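The following minimal sketch shows how such a feature vector could be computed. It is our own illustration: triples are plain (subject, property, value) tuples, the literal test is a crude stand-in, and the self-information estimate is one plausible choice rather than the exact formulas of the cited works.

```python
import math

def is_literal(v):
    # Crude stand-in: assume literals are carried as plain Python strings
    # and IRIs/blank nodes as some distinct resource type.
    return isinstance(v, str)

def features(triple, entity_desc, dataset_triples, classes):
    """Seven features of one candidate triple describing entity e.
    entity_desc is Desc(e); dataset_triples is the whole dataset;
    classes is the set of resources known to be classes."""
    _, p, v = triple
    n = len(dataset_triples)
    gf = sum(1 for _, p2, _ in dataset_triples if p2 == p)         # global freq. of p
    lf = sum(1 for _, p2, _ in entity_desc if p2 == p)             # local freq. of p
    vf = sum(1 for s2, _, v2 in dataset_triples if v in (s2, v2))  # freq. of v
    # Self-information: -log of an estimated probability of the triple
    # (an assumption; see [2, 7, 8, 28] for the original definitions).
    si = -math.log((gf / n) * (vf / n))
    is_c = float(v in classes)
    is_l = float(is_literal(v))
    is_e = float(not is_c and not is_l)
    return [gf, lf, vf, si, is_c, is_e, is_l]
```

### 5.3 Evaluation Results

We first report the overall evaluation results to show which entity summarizer generally performs better. Then we break down the results into different entity types (i.e., classes) for detailed comparison. Finally we present and analyze the performance of our supervised learning based entity summarizer.

Table 4: Average F1 over all the entities in a dataset. For the nine existing entity summarizers, significant improvements and losses over each other are indicated by $\blacktriangle$ and $\blacktriangledown$ ($p<0.05$), respectively. Insignificant differences are indicated by $\circ$.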
| DBpedia | LinkedMDB ---|---|--- | $k=5$ | $k=10$ | $k=5$ | $k=10$ RELIN | 0.242 ${}^{\text{-}\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.455 ${}^{\text{-}\blacktriangledown\circ\circ\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.203 ${}^{\text{-}\circ\circ\blacktriangledown\circ\blacktriangle\blacktriangledown\circ\blacktriangledown}$ | 0.258 ${}^{\text{-}\blacktriangledown\circ\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ DIVERSUM | 0.249 ${}^{\circ\text{-}\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.507 ${}^{\blacktriangle\text{-}\blacktriangle\circ\circ\circ\circ\circ\circ}$ | 0.207 ${}^{\circ\text{-}\circ\blacktriangledown\circ\blacktriangle\blacktriangledown\circ\blacktriangledown}$ | 0.358 ${}^{\blacktriangle\text{-}\blacktriangle\circ\circ\blacktriangle\blacktriangledown\circ\blacktriangledown}$ FACES | 0.270 ${}^{\circ\circ\text{-}\circ\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.428 ${}^{\circ\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.169 ${}^{\circ\circ\text{-}\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.263 ${}^{\circ\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ FACES-E | 0.280 ${}^{\blacktriangle\circ\circ\text{-}\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.488 ${}^{\circ\circ\blacktriangle\text{-}\circ\circ\circ\circ\circ}$ | 0.313 ${}^{\blacktriangle\blacktriangle\blacktriangle\text{-}\blacktriangle\blacktriangle\blacktriangledown\blacktriangle\circ}$ | 0.393 ${}^{\blacktriangle\circ\blacktriangle\text{-}\blacktriangle\blacktriangle\circ\circ\circ}$ CD | 0.283 ${}^{\blacktriangle\blacktriangle\circ\circ\text{-}\circ\blacktriangledown\circ\circ}$ | 0.513 ${}^{\blacktriangle\circ\blacktriangle\circ\text{-}\circ\circ\circ\circ}$ | 0.217 ${}^{\circ\circ\blacktriangle\blacktriangledown\text{-}\blacktriangle\blacktriangledown\circ\blacktriangledown}$ | 0.331 ${}^{\blacktriangle\circ\blacktriangle\blacktriangledown\text{-}\blacktriangle\blacktriangledown\blacktriangledown\blacktriangledown}$ LinkSUM | 0.287 ${}^{\blacktriangle\blacktriangle\circ\circ\circ\text{-}\blacktriangledown\circ\circ}$ | 0.486 ${}^{\circ\circ\blacktriangle\circ\circ\text{-}\circ\circ\circ}$ | 0.140 ${}^{\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.279 ${}^{\circ\blacktriangledown\circ\blacktriangledown\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\blacktriangledown}$ BAFREC | 0.335 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\text{-}\circ\circ}$ | 0.503 ${}^{\blacktriangle\circ\blacktriangle\circ\circ\circ\text{-}\circ\circ}$ | 0.360 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\text{-}\blacktriangle\blacktriangle}$ | 0.402 ${}^{\blacktriangle\blacktriangle\blacktriangle\circ\blacktriangle\blacktriangle\text{-}\circ\circ}$ KAFCA | 0.314 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\circ\circ\circ\text{-}\circ}$ | 0.509 
${}^{\blacktriangle\circ\blacktriangle\circ\circ\circ\circ\text{-}\circ}$ | 0.244 ${}^{\circ\circ\blacktriangle\blacktriangledown\circ\blacktriangle\blacktriangledown\text{-}\circ}$ | 0.397 ${}^{\blacktriangle\circ\blacktriangle\circ\blacktriangle\blacktriangle\circ\text{-}\circ}$
MPSUM | 0.314 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\circ\circ\circ\circ\text{-}}$ | 0.512 ${}^{\blacktriangle\circ\blacktriangle\circ\circ\circ\circ\circ\text{-}}$ | 0.272 ${}^{\blacktriangle\blacktriangle\blacktriangle\circ\blacktriangle\blacktriangle\blacktriangledown\circ\text{-}}$ | 0.423 ${}^{\blacktriangle\blacktriangle\blacktriangle\circ\blacktriangle\blacktriangle\circ\circ\text{-}}$
ORACLE | 0.595 | 0.713 | 0.619 | 0.678
SMOreg | 0.279 | 0.543 | 0.403 | 0.472
LinearRegression | 0.319 | 0.556 | 0.401 | 0.471
MultilayerPerceptron | 0.340 | 0.560 | 0.390 | 0.477
AdditiveRegression | 0.345 | 0.558 | 0.415 | 0.510
REPTree | 0.392 | 0.570 | 0.455 | 0.538
RandomForest | 0.399 | 0.576 | 0.449 | 0.506

Overall Results of Existing Entity Summarizers. Table 4 presents the results of all the participating entity summarizers on two datasets under two size constraints. We compare the nine existing summarizers using one-way ANOVA post-hoc LSD and we show whether the difference between each pair of them is statistically significant at the 0.05 level. Among existing summarizers, BAFREC achieves the highest F1 under $k=5$. It significantly outperforms six existing summarizers on DBpedia and outperforms all eight on LinkedMDB. It is also among the best under $k=10$. MPSUM follows BAFREC under $k=5$ but performs slightly better under $k=10$. Other top-tier results belong to KAFCA on DBpedia and FACES-E on LinkedMDB.

The F1 scores of ORACLE are in the range of 0.595–0.713. It is impossible for ORACLE or any other summarizer to reach $\text{F1}=1$, because for each entity in ESBM there are six ground-truth summaries which are often different and hence cannot simultaneously match a machine-generated summary. However, the gap between the results of ORACLE and the best results of existing summarizers is still as large as 0.20–0.26, suggesting that there is much room for improvement.

Results on Different Entity Types. We break down the results of existing entity summarizers into 7 entity types (i.e., classes). When $k=5$ in Fig. 5, there is no single winner across classes, but BAFREC and MPSUM are among the top three on 6 classes, showing relatively good generalizability over different entity types. Some entity summarizers have limited generalizability and perform poorly on certain classes. For example, RELIN and CD mainly rely on the self-information of a triple; for Location entities, latitudes and longitudes are often unique in DBpedia, but such triples with large self-information rarely appear in ground-truth summaries. Besides, most summarizers generate low-quality summaries for Agent, Film, and Person entities. This is not surprising since these entities are described in more triples and/or by more properties according to Fig. 2; their summarization is inherently more difficult. When $k=10$ in Fig. 6, MPSUM is still among the top three on 6 classes. KAFCA also shows relatively good generalizability, being among the top three on 5 classes.

Figure 5: Average F1 over all the entities in each class under $k=5$.

Figure 6: Average F1 over all the entities in each class under $k=10$.

Results of Supervised Learning.
As shown in Table 4, among the six supervised learning based methods, RandomForest and REPTree achieve the highest F1 on DBpedia and LinkedMDB, respectively. Four methods (MultilayerPerceptron, AdditiveRegression, REPTree, and RandomForest) outperform all the existing entity summarizers on both datasets under both size constraints, and two methods (SMOreg and LinearRegression) only fail to outperform in one setting. The results demonstrate the powerfulness of supervised learning for entity summarization. Further, recall that these methods only use standard models and rely on features that are used by existing entity summarizers. It would be reasonable to predict that better results can be achieved with specialized models and more advanced features. However, creating a large number of ground- truth summaries for training is expensive, and the generalizability of supervised methods for entity summarization still needs further exploration. Moreover, we are interested in how much the seven features contribute to the good performance of supervised learning. Table 5 shows the results of RandomForest after removing each individual feature. Considering statistical significance at the 0.05 level, two features $\mathtt{gf}_{\mathbb{T}}$ and $\mathtt{lf}$ show effectiveness on both datasets under both size constraints, and two features $\mathtt{vf}_{\mathbb{T}}$ and $\mathtt{si}$ are only effective on LinkedMDB. The usefulness of the three binary features $\mathtt{isC}$, $\mathtt{isE}$, and $\mathtt{isL}$ is not statistically significant. Table 5: F1 of RandomForest after removing each individual feature, its difference from using all features ($\Delta\%$), and the significance level for the difference ($p$). DBpedia | LinkedMDB ---|--- $k=5$ | $k=10$ | $k=5$ | $k=10$ | F1 | $\Delta\%$ | $p$ | | F1 | $\Delta\%$ | $p$ | | F1 | $\Delta\%$ | $p$ | | F1 | $\Delta\%$ | $p$ All | 0.399 | — | — | All | 0.576 | — | — | All | 0.449 | — | — | All | 0.506 | — | — -$\mathtt{gf}_{\mathbb{T}}$ | 0.346 | $-$5.360 | 0.000 | -$\mathtt{lf}$ | 0.546 | $-$0.030 | 0.000 | -$\mathtt{gf}_{\mathbb{T}}$ | 0.383 | $-$0.066 | 0.000 | -$\mathtt{lf}$ | 0.473 | $-$0.033 | 0.008 -$\mathtt{lf}$ | 0.366 | $-$3.307 | 0.000 | -$\mathtt{gf}_{\mathbb{T}}$ | 0.551 | $-$0.025 | 0.000 | -$\mathtt{lf}$ | 0.413 | $-$0.036 | 0.025 | -$\mathtt{vf}_{\mathbb{T}}$ | 0.477 | $-$0.029 | 0.010 -$\mathtt{isC}$ | 0.392 | $-$0.720 | 0.261 | -$\mathtt{vf}_{\mathbb{T}}$ | 0.569 | $-$0.007 | 0.198 | -$\mathtt{vf}_{\mathbb{T}}$ | 0.414 | $-$0.035 | 0.022 | -$\mathtt{gf}_{\mathbb{T}}$ | 0.479 | $-$0.027 | 0.007 -$\mathtt{isE}$ | 0.397 | $-$0.267 | 0.720 | -$\mathtt{isE}$ | 0.570 | $-$0.006 | 0.262 | -$\mathtt{si}$ | 0.442 | $-$0.007 | 0.574 | -$\mathtt{si}$ | 0.486 | $-$0.020 | 0.009 -$\mathtt{si}$ | 0.400 | $+$0.027 | 0.973 | -$\mathtt{isC}$ | 0.571 | $-$0.005 | 0.303 | -$\mathtt{isE}$ | 0.455 | $+$0.005 | 0.651 | -$\mathtt{isL}$ | 0.491 | $-$0.015 | 0.079 -$\mathtt{isL}$ | 0.401 | $+$0.160 | 0.816 | -$\mathtt{si}$ | 0.572 | $-$0.004 | 0.402 | -$\mathtt{isL}$ | 0.456 | $+$0.007 | 0.504 | -$\mathtt{isE}$ | 0.492 | $-$0.014 | 0.148 -$\mathtt{vf}_{\mathbb{T}}$ | 0.407 | $+$0.720 | 0.346 | -$\mathtt{isL}$ | 0.578 | $+$0.002 | 0.683 | -$\mathtt{isC}$ | 0.463 | $+$0.013 | 0.281 | -$\mathtt{isC}$ | 0.514 | $+$0.008 | 0.396 Conclusion. Among existing entity summarizers, BAFREC generally shows the best performance on ESBM while MPSUM seems more robust. 
However, none of them are comparable with our straightforward implementation of supervised learning, which in turn is still far away from the best possible performance represented by ORACLE. Therefore, entity summarization on ESBM is a non-trivial task. We invite researchers to experiment with new ideas on ESBM. ## 6 Discussion and Future work We identify the following limitations of our work to be addressed in future work. Evaluation Criteria. We compute F1 score in the evaluation, which is based on common triples but ignores semantic overlap between triples. A triple $t$ in a machine-generated summary $S$ may partially cover the information provided by some triple $t^{\prime}$ in the ground-truth summary. It may be reasonable to not completely penalize $S$ for missing $t^{\prime}$ but give some reward for the presence of $t$. However, it is difficult to quantify the extent of penalization for all possible cases, particularly when multiple triples semantically overlap with each other. In future work, we will explore more proper evaluation criteria. Representativeness of Ground Truth. The ground-truth summaries in ESBM are not supposed to represent the view of the entire user population. They are intrinsically biased towards their creators. Besides, these ground-truth summaries are created for general purposes. Accordingly, we use them to evaluate general-purpose entity summarizers. However, for a specific task, these summaries may not show optimality, and the participating systems may not represent the state of the art. Still, we believe it is valuable to evaluate general-purpose systems not only because of their wide range of applications but also because their original technical features have been reused by task- specific systems. In future work, we will extend ESBM to a larger scale, and will consider benchmarking task-specific entity summarization. Form of Ground Truth. ESBM provides ground-truth summaries, whereas some other benchmarks offer ground-truth scores of triples [22, 13, 1]. Scoring-based ground truth may more comprehensively evaluate an entity summarizer than our set-based ground truth because it not only considers the triples in a machine- generated summary but also assesses the rest of the triples. However, on the other hand, a set of top-scored triples may not equal an optimal summary because they may cover limited aspects of an entity and show redundancy. Therefore, both methods have their advantages and disadvantages. In future work, we will conduct scoring-based evaluation to compare with the current results. ## Acknowledgments This work was supported in part by the NSFC under Grant 61772264 and in part by the Qing Lan Program of Jiangsu Province. ## References * [1] Bobic, T., Waitelonis, J., Sack, H.: FRanCo - A ground truth corpus for fact ranking evaluation. In: SumPre 2015 & HSWI 2015 (2015) * [2] Cheng, G., Tran, T., Qu, Y.: RELIN: relatedness and informativeness-based centrality for entity summarization. In: ISWC 2011, Part I. pp. 114–129 (2011). https://doi.org/10.1007/978-3-642-25073-6_8 * [3] Cheng, G., Xu, D., Qu, Y.: C3D+P: A summarization method for interactive entity resolution. J. Web Sem. 35, 203–213 (2015). https://doi.org/10.1016/j.websem.2015.05.004 * [4] Cheng, G., Xu, D., Qu, Y.: Summarizing entity descriptions for effective and efficient human-centered entity linking. In: WWW 2015. pp. 184–194 (2015). 
https://doi.org/10.1145/2736277.2741094 * [5] Gottschalk, S., Demidova, E.: EventKG - the hub of event knowledge on the web \- and biographical timeline generation. Semantic Web 10(6), 1039–1070 (2019). https://doi.org/10.3233/SW-190355 * [6] Gunaratna, K.: Semantics-based Summarization of Entities in Knowledge Graphs. Ph.D. thesis, Wright State University (2017) * [7] Gunaratna, K., Thirunarayan, K., Sheth, A.P.: FACES: diversity-aware entity summarization using incremental hierarchical conceptual clustering. In: AAAI 2015. pp. 116–122 (2015) * [8] Gunaratna, K., Thirunarayan, K., Sheth, A.P., Cheng, G.: Gleaning types for literals in RDF triples with application to entity summarization. In: ESWC 2016. pp. 85–100 (2016). https://doi.org/10.1007/978-3-319-34129-3_6 * [9] Hasibi, F., Balog, K., Bratsberg, S.E.: Dynamic factual summaries for entity cards. In: SIGIR 2017. pp. 773–782 (2017). https://doi.org/10.1145/3077136.3080810 * [10] Hassanzadeh, O., Consens, M.P.: Linked movie data base. In: LDOW 2009 (2009) * [11] Kim, E.K., Choi, K.S.: Entity summarization based on formal concept analysis. In: EYRE 2018 (2018) * [12] Kroll, H., Nagel, D., Balke, W.T.: BAFREC: Balancing frequency and rarity for entity characterization in linked open data. In: EYRE 2018 (2018) * [13] Langer, P., Schulze, P., George, S., Kohnen, M., Metzke, T., Abedjan, Z., Kasneci, G.: Assigning global relevance scores to DBpedia facts. In: ICDE Workshops 2014. pp. 248–253 (2014). https://doi.org/10.1109/ICDEW.2014.6818334 * [14] Lehmann, J., Isele, R., Jakob, M., Jentzsch, A., Kontokostas, D., Mendes, P.N., Hellmann, S., Morsey, M., van Kleef, P., Auer, S., Bizer, C.: DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web 6(2), 167–195 (2015). https://doi.org/10.3233/SW-140134 * [15] Liu, Q., Cheng, G., Gunaratna, K., Qu, Y.: Entity summarization: State of the art and future challenges. CoRR abs/1910.08252 (2019), http://arxiv.org/abs/1910.08252 * [16] Pouriyeh, S.A., Allahyari, M., Kochut, K., Cheng, G., Arabnia, H.R.: Combining word embedding and knowledge-based topic modeling for entity summarization. In: ICSC 2018. pp. 252–255 (2018). https://doi.org/10.1109/ICSC.2018.00044 * [17] Pouriyeh, S.A., Allahyari, M., Kochut, K., Cheng, G., Arabnia, H.R.: ES-LDA: entity summarization using knowledge-based topic modeling. In: IJCNLP 2017, Volume 1. pp. 316–325 (2017) * [18] Sim, S.E., Easterbrook, S.M., Holt, R.C.: Using benchmarking to advance research: A challenge to software engineering. In: ICSE 2003. pp. 74–83 (2003). https://doi.org/10.1109/ICSE.2003.1201189 * [19] Stoilos, G., Stamou, G.B., Kollias, S.D.: A string metric for ontology alignment. In: ISWC 2005. pp. 624–637 (2005). https://doi.org/10.1007/11574620_45 * [20] Sydow, M., Pikula, M., Schenkel, R.: The notion of diversity in graphical entity summarisation on semantic knowledge graphs. J. Intell. Inf. Syst. 41(2), 109–149 (2013). https://doi.org/10.1007/s10844-013-0239-6 * [21] Thalhammer, A.: Linked Data Entity Summarization. Ph.D. thesis, Karlsruher Institut für Technologie (2017) * [22] Thalhammer, A., Knuth, M., Sack, H.: Evaluating entity summarization using a game-based ground truth. In: ISWC 2012, Part II. pp. 350–361 (2012). https://doi.org/10.1007/978-3-642-35173-0_24 * [23] Thalhammer, A., Lasierra, N., Rettinger, A.: LinkSUM: Using link analysis to summarize entity data. In: ICWE 2016. pp. 244–261 (2016). 
https://doi.org/10.1007/978-3-319-38791-8_14 * [24] Thalhammer, A., Rettinger, A.: Browsing DBpedia entities with summaries. In: ESWC 2014 Satellite Events. pp. 511–515 (2014). https://doi.org/10.1007/978-3-319-11955-7_76 * [25] Thalhammer, A., Toma, I., Roa-Valverde, A.J., Fensel, D.: Leveraging usage data for linked data movie entity summarization. In: USEWOD 2012 (2012) * [26] Tonon, A., Catasta, M., Prokofyev, R., Demartini, G., Aberer, K., Cudré-Mauroux, P.: Contextualized ranking of entity types based on knowledge graphs. J. Web Sem. 37-38, 170–183 (2016). https://doi.org/10.1016/j.websem.2015.12.005 * [27] Wei, D., Gao, S., Liu, Y., Liu, Z., Huang, L.: MPSUM: Entity summarization with predicate-based matching. In: EYRE 2018 (2018) * [28] Xu, D., Zheng, L., Qu, Y.: CD at ENSEC 2016: Generating characteristic and diverse entity summaries. In: SumPre 2016 (2016)
# DeepLENS: Deep Learning for Entity Summarization

Qingxia Liu, Gong Cheng, Yuzhong Qu

National Key Laboratory for Novel Software Technology, Nanjing University, China. Emails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>

###### Abstract

Entity summarization has been a prominent task over knowledge graphs. While existing methods are mainly unsupervised, we present DeepLENS, a simple yet effective deep learning model where we exploit textual semantics for encoding triples and we score each candidate triple based on its interdependence on other triples. DeepLENS significantly outperformed existing methods on a public benchmark.

## 1 Introduction

Entity summarization is the task of computing a compact summary for an entity by selecting an optimal size-constrained subset of entity-property-value triples from a knowledge graph such as an RDF graph [7]. It has found a wide variety of applications, for example, to generate a compact entity card from Google's Knowledge Graph, where an entity may be described in dozens or hundreds of triples. Generating entity summaries for general purposes has attracted much research attention, but existing methods are mainly unsupervised [2, 9, 3, 4, 13, 10, 6, 5, 11]. One research question that naturally arises is _whether deep learning can solve this task much better_.

To the best of our knowledge, ESA [12] is the only supervised method in the literature for this task. ESA encodes triples using graph embedding (TransE), and employs BiLSTM with a supervised attention mechanism. Although it outperformed unsupervised methods, the improvement reported in [12] was rather marginal, around $+7\%$ compared with unsupervised FACES-E [4] on the ESBM benchmark [8]. This inspired us to explore more effective deep learning models for the task of general-purpose entity summarization.

In this short paper, we present DeepLENS (https://github.com/nju-websoft/DeepLENS), a novel Deep Learning based approach to ENtity Summarization. DeepLENS uses a simple yet effective model which addresses the following two limitations of ESA, and thus achieved significantly better results in the experiments.

1. Different from ESA, which encodes a triple using graph embedding, we use word embedding because we consider textual semantics more useful than graph structure for the entity summarization task.
2. Whereas ESA encodes a set of triples as a sequence and its performance is sensitive to the chosen order, our aggregation-based representation satisfies permutation invariance and hence is more suitable for entity summarization.

In the remainder of the paper, Section 2 details DeepLENS, Section 3 presents experiment results, and Section 4 concludes the paper.

## 2 Approach

#### 2.0.1 Problem Statement

An RDF graph $T$ is a set of triples. The _description_ of entity $e$ in $T$, denoted by $\mathtt{Desc}(e)\subseteq T$, comprises triples where $e$ is the subject or object. Each triple $t\in\mathtt{Desc}(e)$ describes a property $\mathtt{prop}(t)$, which is the predicate of $t$, and gives a value $\mathtt{val}(t)$, which is the object or subject of $t$ other than $e$. For a size constraint $k$, a _summary_ of $e$ is a subset of triples $S\subseteq\mathtt{Desc}(e)$ with $|S|\leq k$. We aim to generate an optimal summary for general purposes.

#### 2.0.2 Overview of DeepLENS

Our approach DeepLENS generates an optimal summary by selecting the $k$ most salient triples. As a supervised approach, it learns salience from labeled entity summaries. However, two issues remain unsolved.
First, a knowledge graph like an RDF graph is a mixture of graph structure and textual content. The effectiveness of a learning-based approach to entity summarization relies on a _proper representation of entity descriptions of such mixed nature_. Second, the salience of a triple is not absolute but dependent on the context, i.e., the set of other triples in the entity description. It is essential to _represent their interdependence_.

DeepLENS addresses these issues with the scoring model presented in Fig. 1. It has three modules, which we will detail below: triple encoding, entity description encoding, and triple scoring. Finally, the model scores each candidate triple $t\in\mathtt{Desc}(e)$ in the context of $\mathtt{Desc}(e)$.

Figure 1: Model of DeepLENS.

#### 2.0.3 Triple Encoding

For entity $e$, a triple $t\in\mathtt{Desc}(e)$ provides a property-value pair $\langle\mathtt{prop}(t),\mathtt{val}(t)\rangle$ of $e$. Previous research [12] leverages graph embedding to encode the structural features of $\mathtt{prop}(t)$ and $\mathtt{val}(t)$. By contrast, for the task of entity summarization we consider textual semantics more important than graph structure, and we _solely exploit textual semantics_ for encoding $t$.

Specifically, for an RDF resource $r$, we obtain its _textual form_ as follows. For an IRI or a blank node, we retrieve its rdfs:label if it is available; otherwise we have to use its local name. For a literal, we take its lexical form. We represent each word in the textual form by a pre-trained word embedding vector, and we average these vectors over all the words to represent $r$, denoted by $\text{Embedding}(r)$. For a triple $t\in\mathtt{Desc}(e)$, we generate and concatenate such vector representations for $\mathtt{prop}(t)$ and $\mathtt{val}(t)$ to form $\boldsymbol{t}$, the _initial representation_ of $t$. Then $\boldsymbol{t}$ is fed into a multi-layer perceptron (MLP) to generate $\boldsymbol{h}$, the _final representation_ of $t$:

$\boldsymbol{t}=\left[\text{Embedding}(\mathtt{prop}(t));~{}\text{Embedding}(\mathtt{val}(t))\right]\,,\quad\boldsymbol{h}=\text{MLP}_{\text{C}}(\boldsymbol{t})\,.$ (1)

#### 2.0.4 Entity Description Encoding

To score a candidate triple in the context of the other triples in the entity description, previous research [12] captures the interdependence between triples in $\mathtt{Desc}(e)$ using BiLSTM to pass information. Triples are fed into the BiLSTM as a sequence. However, $\mathtt{Desc}(e)$ is a set and the triples lack a natural order, so the performance of this model is unfavourably sensitive to the order of the input triples. Indeed, as we will show in the experiments, different orders can lead to considerably different performance.

To generate a representation for $\mathtt{Desc}(e)$ that is _permutation invariant_, we perform aggregation. Specifically, let $\boldsymbol{t_{1}},\ldots,\boldsymbol{t_{n}}$ be the initial representations of triples in $\mathtt{Desc}(e)$ computed by Eq. (1). We feed an MLP with each $\boldsymbol{t_{i}}$ for $1\leq i\leq n$ and generate their final representations $\boldsymbol{g_{1}},\ldots,\boldsymbol{g_{n}}$, which in turn are weighted using an attention mechanism against $\boldsymbol{h}$ computed by Eq. (1), the final representation of the candidate triple $t$ to be scored.
We calculate the sum of these weighted representations of triples to represent $\mathtt{Desc}(e)$, denoted by $\boldsymbol{d}$:

$\boldsymbol{g_{i}}=\text{MLP}_{\text{D}}(\boldsymbol{t_{i}})\,,\quad a_{i}=\frac{\exp(\cos(\boldsymbol{h},\boldsymbol{g_{i}}))}{\sum_{j}\exp(\cos(\boldsymbol{h},\boldsymbol{g_{j}}))}\,,\quad\boldsymbol{d}=\sum_{i=1}^{n}{a_{i}\boldsymbol{g_{i}}}\,.$ (2)

The result of the summation is not sensitive to the order of triples in $\mathtt{Desc}(e)$.

#### 2.0.5 Triple Scoring

For each candidate triple $t\in\mathtt{Desc}(e)$ to be scored, we concatenate its final representation $\boldsymbol{h}$ and the representation $\boldsymbol{d}$ for $\mathtt{Desc}(e)$. We feed the result into an MLP to compute the context-based salience score of $t$:

$\mathtt{score}(t|\mathtt{Desc}(e))=\text{MLP}_{\text{S}}(\left[\boldsymbol{h};~{}\boldsymbol{d}\right])\,.$ (3)

Parameters of the entire model are jointly trained based on the mean squared error loss, supervised by labeled entity summaries.

## 3 Experiments

### 3.1 Datasets

We used ESBM v1.2 (https://w3id.org/esbm), the largest available benchmark for evaluating general-purpose entity summarization. For each of 125 entities in DBpedia and 50 entities in LinkedMDB, this benchmark provided 6 ground-truth summaries created by different human experts under $k=5$, and another 6 ground-truth summaries under $k=10$. We used the train-valid-test split specified in the benchmark to perform five-fold cross-validation.

### 3.2 Participating Methods

We compared DeepLENS with 10 baseline methods.

Unsupervised Methods. We compared with 9 unsupervised methods that had been tested on ESBM: RELIN [2], DIVERSUM [9], FACES [3], FACES-E [4], CD [13], LinkSUM [10], BAFREC [6], KAFCA [5], and MPSUM [11]. We directly presented their results reported on the ESBM website.

Supervised Methods. We compared with ESA [12], the only supervised method in the literature to our knowledge. We reused its open-source implementation and configuration (https://github.com/WeiDongjunGabriel/ESA). We fed it with triples sorted in alphabetical order. For our approach DeepLENS, we used 300-dimensional fastText [1] word embedding vectors trained on Wikipedia to generate initial representations of triples. The numbers of hidden units in $\text{MLP}_{\text{C}}$, $\text{MLP}_{\text{D}}$, and $\text{MLP}_{\text{S}}$ were [64, 64], [64, 64], and [64, 64, 64], respectively. All hidden layers used ReLU as the activation function. The final output layer of $\text{MLP}_{\text{S}}$ consisted of one linear unit. We trained the model using the Adam optimizer with learning rate 0.01. For both ESA and DeepLENS, we performed early stopping on the validation set to choose the number of training epochs from 1–50.

Oracle Method. ORACLE approximated the best possible performance on ESBM and formed a reference point used for comparisons. It outputted the $k$ triples that most frequently appeared in ground-truth summaries.

### 3.3 Results

Following ESBM, we compared machine-generated summaries with ground-truth summaries by calculating F1 score, and reported the mean F1 achieved by each method over all the test entities in a dataset.

Table 1: Average F1 over all the test entities. Significant and insignificant differences ($p<0.01$) between DeepLENS and each baseline are indicated by $\blacktriangle$ and $\circ$, respectively.
| DBpedia | LinkedMDB ---|---|--- | $k=5$ | $k=10$ | $k=5$ | $k=10$ RELIN [2] | 0.242 | 0.455 | 0.203 | 0.258 DIVERSUM [9] | 0.249 | 0.507 | 0.207 | 0.358 FACES [3] | 0.270 | 0.428 | 0.169 | 0.263 FACES-E [4] | 0.280 | 0.488 | 0.313 | 0.393 CD [13] | 0.283 | 0.513 | 0.217 | 0.331 LinkSUM [10] | 0.287 | 0.486 | 0.140 | 0.279 BAFREC [6] | 0.335 | 0.503 | 0.360 | 0.402 KAFCA [5] | 0.314 | 0.509 | 0.244 | 0.397 MPSUM [11] | 0.314 | 0.512 | 0.272 | 0.423 ESA [12] | 0.331 | 0.532 | 0.350 | 0.416 DeepLENS | 0.402 ▲▲▲▲▲▲▲▲▲▲ | 0.574 ▲▲▲▲▲▲▲▲▲▲ | 0.474 ▲▲▲▲▲▲▲▲▲▲ | 0.493 ▲▲▲▲▲▲▲▲▲▲ ORACLE | 0.595 | 0.713 | 0.619 | 0.678 Comparison with Baselines. As shown in Table 1, supervised methods were generally better than unsupervised methods. Our DeepLENS outperformed all the baselines including ESA. Moreover, two-tailed t-test showed that all the differences were statistically significant ($p<0.01$) in all the settings. DeepLENS achieved new state-of-the-art results on the ESBM benchmark. However, the notable gaps between DeepLENS and ORACLE suggested room for improvement and were to be closed by future research. Table 2: Average F1 over all the test entities achieved by different variants of ESA. | DBpedia | LinkedMDB ---|---|--- | $k=5$ | $k=10$ | $k=5$ | $k=10$ ESA | 0.331 | 0.532 | 0.350 | 0.416 ESA-text | 0.379 | 0.558 | 0.390 | 0.418 ESA-rnd | 0.116$\pm$0.008 | 0.222$\pm$0.007 | 0.113$\pm$0.015 | 0.219$\pm$0.011 Ablation Study. Compared with ESA, we attributed the better performance of DeepLENS to two improvements in our implementation: the exploitation of textual semantics, and the permutation invariant representation of triple set. They were demonstrated by the following ablation study of ESA. First, we compared two variants of ESA by encoding triples in different ways. For triple $t$, the original version of ESA encoded the structural features of $\mathtt{prop}(t)$ and $\mathtt{val}(t)$ using TransE. We implemented ESA- text, a variant that encoded both $\mathtt{prop}(t)$ and $\mathtt{val}(t)$ using fastText as in our approach. As shown in Table 2, ESA-text slightly outperformed ESA, showing the usefulness of textual semantics compared with graph structure used by ESA. Second, we compared two variants of ESA by feeding with triples in different orders. The default version of ESA was fed with triples sorted in alphabetical order for both training and testing. We implemented ESA-rnd, a variant that was fed with triples in alphabetical order for training but in random order for testing. We tested ESA-rnd 20 times and reported its mean F1 with standard deviation. In Table 2, the notable falls from ESA to ESA-rnd showed the unfavourable sensitivity of BiLSTM used by ESA to the order of input triples. ## 4 Conclusion We presented DeepLENS, a simple yet effective deep learning model for general- purpose entity summarization. It has achieved new state-of-the-art results on the ESBM benchmark, significantly outperforming existing methods. Thus, entity summarization becomes another research field where a combination of deep learning and knowledge graph is likely to shine. However, in DeepLENS we only exploit textual semantics. In future work, we will incorporate ontological semantics into our model. We will also revisit the usefulness of structural semantics. ## Acknowledgments This work was supported by the National Key R&D Program of China under Grant 2018YFB1005100 and by the Qing Lan Program of Jiangsu Province. 
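As a concrete illustration of the model in Eqs. (1)–(3), the following NumPy sketch scores one candidate triple given trained weights. It is our own illustration, not the reference implementation linked above; names such as mlp_c are ours, and triples are simplified to (property text, value text) pairs.

```python
import numpy as np

def embed(text, word_vecs, dim=300):
    """Textual form -> mean of its (e.g., fastText) word vectors."""
    vecs = [word_vecs[w] for w in text.split() if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def mlp(x, layers):
    """Plain ReLU MLP; `layers` is a list of (W, b) weight pairs."""
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = layers[-1]
    return W @ x + b

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def score(t, desc, word_vecs, mlp_c, mlp_d, mlp_s):
    """Context-based salience of triple t = (prop_text, val_text)
    within desc = Desc(e), following Eqs. (1)-(3)."""
    enc = lambda tr: np.concatenate([embed(tr[0], word_vecs),
                                     embed(tr[1], word_vecs)])
    h = mlp(enc(t), mlp_c)                      # Eq. (1): final repr. of t
    g = [mlp(enc(ti), mlp_d) for ti in desc]
    a = np.exp([cosine(h, gi) for gi in g])     # attention weights from h
    a /= a.sum()
    d = sum(ai * gi for ai, gi in zip(a, g))    # Eq. (2): repr. of Desc(e)
    return mlp(np.concatenate([h, d]), mlp_s)   # Eq. (3): salience score
```

A summary is then obtained by ranking all triples in Desc(e) by this score and keeping the top k.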
## References * [1] Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching word vectors with subword information. TACL 5, 135–146 (2017) * [2] Cheng, G., Tran, T., Qu, Y.: RELIN: relatedness and informativeness-based centrality for entity summarization. In: ISWC 2011, Part I. pp. 114–129 (2011) * [3] Gunaratna, K., Thirunarayan, K., Sheth, A.P.: FACES: diversity-aware entity summarization using incremental hierarchical conceptual clustering. In: AAAI 2015. pp. 116–122 (2015) * [4] Gunaratna, K., Thirunarayan, K., Sheth, A.P., Cheng, G.: Gleaning types for literals in RDF triples with application to entity summarization. In: ESWC 2016. pp. 85–100 (2016) * [5] Kim, E.K., Choi, K.S.: Entity summarization based on formal concept analysis. In: EYRE 2018 (2018) * [6] Kroll, H., Nagel, D., Balke, W.T.: BAFREC: Balancing frequency and rarity for entity characterization in linked open data. In: EYRE 2018 (2018) * [7] Liu, Q., Cheng, G., Gunaratna, K., Qu, Y.: Entity summarization: State of the art and future challenges. CoRR abs/1910.08252 (2019) * [8] Liu, Q., Cheng, G., Gunaratna, K., Qu, Y.: ESBM: An entity summarization benchmark. In: ESWC 2020 (2020) * [9] Sydow, M., Pikula, M., Schenkel, R.: The notion of diversity in graphical entity summarisation on semantic knowledge graphs. J. Intell. Inf. Syst. 41(2), 109–149 (2013) * [10] Thalhammer, A., Lasierra, N., Rettinger, A.: LinkSUM: Using link analysis to summarize entity data. In: ICWE 2016. pp. 244–261 (2016) * [11] Wei, D., Gao, S., Liu, Y., Liu, Z., Huang, L.: MPSUM: Entity summarization with predicate-based matching. In: EYRE 2018 (2018) * [12] Wei, D., Liu, Y., Zhu, F., Zang, L., Zhou, W., Han, J., Hu, S.: ESA: Entity summarization with attention. In: EYRE 2019. pp. 40–44 (2019) * [13] Xu, D., Zheng, L., Qu, Y.: CD at ENSEC 2016: Generating characteristic and diverse entity summaries. In: SumPre 2016 (2016)
# Classification of Doubly Distributive Skew Hyperfields and Stringent Hypergroups

Nathan Bowler and Ting Su

Department of Mathematics, Universität Hamburg, Germany. Emails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>

###### Abstract

A hypergroup is stringent if $a\boxplus b$ is a singleton whenever $a\neq-b$. A hyperfield is stringent if the underlying additive hypergroup is. Every doubly distributive skew hyperfield is stringent, but not vice versa. We present a classification of stringent hypergroups, from which a classification of doubly distributive skew hyperfields follows. It follows from our classification that every such hyperfield is a quotient of a skew field.

###### Key words and phrases: hypergroup, hyperring, hyperfield, double distributivity

## 1. Introduction

The notion of hyperfield was first introduced by Krasner in [Kra57, Kra83]. It is an algebraic structure similar to a field except that its addition $\boxplus$ is multivalued. In [Vir10], Viro provided an excellent introduction to and motivation for hyperfields and introduced several good examples of hyperfields, including the tropical hyperfield $\mathbb{T}_{+}$, the tropical real hyperfield $\mathbb{TR}$ and the ultratriangle hyperfield $\mathbb{T}\triangle$. Viro has also illustrated the utility of $\mathbb{T}_{+}$ for the foundations of tropical geometry in several interesting papers (cf. [Vir10, Vir11]).

In [BB16], Baker and Bowler presented an algebraic framework which simultaneously generalizes the notions of linear subspaces, matroids, oriented matroids, and valuated matroids, and called the resulting objects matroids over hyperfields. A matroid over a field $F$ corresponds to a subspace of some $F^{n}$. A $\mathbb{K}$-matroid is just a matroid. An $\mathbb{S}$-matroid is an oriented matroid. And a $\mathbb{T}\triangle$-matroid is a valuated matroid, as defined in [DW92]. Baker and Bowler also provided two natural notions of matroids over a hyperfield $F$, weak $F$-matroids and strong $F$-matroids, and showed that the two notions coincide when $F$ has a property called double distributivity. A hyperfield $F$ is doubly distributive if $(a\boxplus b)(c\boxplus d)=ac\boxplus ad\boxplus bc\boxplus bd$ for any $a,b,c,d\in F$. Fields, $\mathbb{K}$, $\mathbb{S}$ and $\mathbb{T}\triangle$ are all doubly distributive. So too are the other two hyperfields mentioned above, $\mathbb{T}_{+}$ and $\mathbb{TR}$. It is these results in tropical geometry and matroid theory that motivate our interest in doubly distributive hyperfields.

More generally, we are also interested in doubly distributive hyperrings, which were also analysed by Baker and Bowler. In fact, rather than just hyperfields, they worked with a more general kind of algebraic object known as tracts (cf. [BB19]). The other important example of tracts, besides hyperfields, is given by partial fields, which have also been the subject of much fruitful study. Baker and Bowler defined a special class of tracts called partial hyperfields, objects based on hyperrings which generalize both hyperfields and partial fields in a natural way. The property of double distributivity also extends to hyperrings and thus to partial hyperfields.

We will classify the doubly distributive skew hyperfields in Section 5. The classification itself will be described in Section 4, but has the following important consequence:

###### Definition 1.1.
A valuation $\nu$ of a skew hyperfield $F$ is a map from $F$ to $G\cup\\{-\infty\\}$, where $(G,<)$ is a linearly ordered group, satisfying

1. (1) $\nu(x)=-\infty$ if and only if $x=0$.
2. (2) $\nu(xy)=\nu(x)\cdot\nu(y)$.
3. (3) $\nu(x)>\nu(y)$ implies $x\boxplus y=\\{x\\}$.

###### Theorem 1.2.

For every doubly distributive skew hyperfield $F$, there is always a valuation $\nu$ of $F$ such that $\nu^{-1}(1_{G})$ is either the Krasner hyperfield, or the sign hyperfield, or a skew field.

This compact description is from the paper [BP19]. In particular, since any nontrivial ordered group is infinite, it follows from our results that the only finite doubly distributive hyperfields are the Krasner hyperfield, the sign hyperfield and the finite fields. This classification has a number of applications. For example, we use it in Section 7 to show that any doubly distributive skew hyperfield is a quotient of a skew field. Bowler and Pendavingh used it in [BP19] to show that any doubly distributive skew hyperfield is perfect and to provide vector axioms for matroids over such skew hyperfields.

Our classification uses a property of the underlying hypergroup which we call stringency. A hyperfield $F$ is stringent if $a\boxplus b$ is a singleton whenever $a\neq-b$.

###### Proposition 1.3.

Every doubly distributive skew hyperfield is stringent.

###### Proof.

Let $F$ be a doubly distributive skew hyperfield. Let $a,b\in F^{\times}$ be such that $a\neq-b$, and let $x,y\in F^{\times}$ be such that $x,y\in a\boxplus b$. By double distributivity, we have

$(a\boxplus b)(x^{-1}\boxplus-y^{-1})=(a\boxplus b)\cdot x^{-1}\boxplus(a\boxplus b)\cdot(-y^{-1})\supseteq x\cdot x^{-1}\boxplus y\cdot(-y^{-1})=1\boxplus-1\ni 0.$

Since $a\neq-b$, we have $0\notin a\boxplus b$, and a skew hyperfield has no zero divisors, so we must have $0\in x^{-1}\boxplus-y^{-1}$. By the uniqueness of hyperinverses this gives $x^{-1}=y^{-1}$, and so $x=y$. So $a\boxplus b$ is a singleton if $a\neq-b$. ∎

However, not every stringent skew hyperfield is doubly distributive. The following is a counterexample.

###### Example 1.4.

Let $F:=\mathbb{Z}\cup\\{-\infty\\}$ be the stringent hyperfield with multiplication given by $a\odot b=a+b$ and multiplicative identity $0$. Hyperaddition is given by

$a\boxplus b=\begin{cases}\\{\max(a,b)\\}&\text{ if $a\neq b$,}\\\ \\{c\,|\,c<a\\}&\text{ if $a=b$,}\end{cases}$

so that the additive identity is $-\infty$. Here we use the standard total order on $\mathbb{Z}$ and set $-\infty<x$ for all $x\in\mathbb{Z}$. $F$ is not doubly distributive because

$\displaystyle(0\boxplus 0)\odot(0\boxplus 0)$ $\displaystyle=\\{z\,|\,z<0\\}\odot\\{z\,|\,z<0\\}=\\{z\,|\,z<-1\\},$ $\displaystyle 0\boxplus 0\boxplus 0\boxplus 0$ $\displaystyle=\\{z\,|\,z<0\\}\boxplus\\{z\,|\,z<0\\}=\\{z\,|\,z<0\\}.$

We use our classification of stringent skew hyperfields to derive a classification of stringent skew hyperrings in Section 6. However, this does not give a classification of doubly distributive skew hyperrings, since not every doubly distributive skew hyperring is stringent (see Example 6.2). In fact, we classify all stringent hypergroups, and our classification of doubly distributive skew hyperfields follows from this.

###### Definition 1.5.

Let $(G,<)$ be a totally ordered set, let $(F_{g}\,|\,g\in G)$ be a family of hypergroups with a common identity element $0$ in each $F_{g}$ but otherwise disjoint, and let $\psi$ be the surjective function from $\bigcup_{g\in G}F_{g}^{\times}$ to $G$ sending $f$ in $F_{g}^{\times}$ to $g$. We denote the hyperaddition of $F_{g}$ by $\boxplus_{g}$. For any $g\in G$ we denote by $g\downarrow$ the set of $h\in G$ with $h<g$.
Then the wedge sum $F=\bigvee_{g\in G}{F_{g}}$ is the hypergroup with ground set $\bigcup_{g\in G}F_{g}$ and hyperaddition given by

$x\boxplus 0=0\boxplus x=\\{x\\},$

$x\boxplus y=\begin{cases}\\{x\\}&\text{if $\psi(x)>\psi(y)$,}\\\ \\{y\\}&\text{if $\psi(x)<\psi(y)$,}\\\ x\boxplus_{\psi(x)}y&\text{if $\psi(x)=\psi(y)$ and $0\not\in x\boxplus_{\psi(x)}y$,}\\\ (x\boxplus_{\psi(x)}y)\cup\psi^{-1}(\psi(x)\downarrow)&\text{if $\psi(x)=\psi(y)$ and $0\in x\boxplus_{\psi(x)}y$.}\end{cases}$

We can also define $\bigvee_{g\in G}F_{g}$ up to isomorphism if the $F_{g}$'s don't have the same identity or aren't otherwise disjoint, by replacing the $F_{g}$'s with suitably chosen isomorphic copies. (A small computational sketch of this construction is given in Section 2.1 below.)

We will show in Section 3 that this construction always yields a hypergroup, and we classify the stringent hypergroups as follows:

###### Theorem 1.6.

Every stringent hypergroup is a wedge sum $\bigvee_{g\in G}{F_{g}}$ where each $F_{g}$ is either a copy of the Krasner hypergroup, or a copy of the sign hypergroup, or a group.

This classification of hypergroups is used to derive the classification of doubly distributive skew hyperfields discussed above.

### 1.1. Structure of the paper

After the classification of stringent hypergroups in Section 3, we show in Section 4 that every stringent skew hyperfield arises from a short exact sequence of groups, where the first group in the sequence is the multiplicative group of either the Krasner hyperfield or the sign hyperfield or a skew field, and the last group in the sequence is a totally ordered group. The underlying additive hypergroup is a wedge sum of isomorphic copies of hypergroups. Then we present the classification of doubly distributive skew hyperfields in Section 5, following from the classification of stringent skew hyperfields. We show the surprising result that every stringent skew hyperring is either a skew ring or a stringent skew hyperfield in Section 6. We use our classification to show that every stringent skew hyperfield is a quotient of a skew field by some normal subgroup in Section 7. In Appendix A we present a proof that a construction really gives a skew field, and in Appendix B we discuss the semirings associated to doubly distributive hyperfields.

### Acknowledgements

We thank Matthew Baker and Laura Anderson (the second author's PhD advisor) for introducing the two authors to each other. We thank Laura Anderson and Tom Zaslavsky, who gave us important comments on early versions of the work. Thanks also to Pascal Gollin for asking whether our classification might hold for all stringent hypergroups.

## 2. Background

###### Notation 2.1.

Throughout, $G$ and $H$ denote groups. For a hypergroup (or skew hyperring) $S$, $S^{\times}$ denotes $S-\\{0\\}$. For a function $f$ from a hypergroup (or skew hyperring) $A$ to a hypergroup (or skew hyperring) $B$, $\operatorname{supp}(f)$ denotes the support of $f$ (the elements of $A$ where the function value is not zero).

### 2.1. Hypergroups, hyperrings and hyperfields

###### Definition 2.2.

A hyperoperation on a set $S$ is a map $\boxplus$ from $S\times S$ to the collection of non-empty subsets of $S$. If $A$, $B$ are non-empty subsets of $S$, we define $A\boxplus B:=\bigcup_{a\in A,b\in B}a\boxplus b$ and we say that $\boxplus$ is associative if $a\boxplus(b\boxplus c)=(a\boxplus b)\boxplus c$ for all $a,b,c\in S$. All hyperoperations in this paper will be associative.
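As a concrete instance of a hyperoperation, here is the promised computational sketch of the wedge-sum hyperaddition of Definition 1.5. The encoding is our own (a dict psi for $\psi$, a callable add_g for the component hyperadditions, and a sentinel value for the common identity); instantiating it with two-element groups over a finite window of $(\mathbb{Z},<)$ recovers the hypergroup of Example 1.4.

```python
def wedge_add(x, y, psi, add_g, zero="-inf"):
    """Hyperaddition of the wedge sum of Definition 1.5. psi maps the
    nonzero elements to the ordered set G; add_g(g)(a, b) returns the
    hyperaddition within the component F_g; all components share `zero`."""
    if x == zero:
        return {y}
    if y == zero:
        return {x}
    gx, gy = psi[x], psi[y]
    if gx > gy:
        return {x}
    if gx < gy:
        return {y}
    s = set(add_g(gx)(x, y))
    if zero in s:
        s |= {z for z, g in psi.items() if g < gx}  # adjoin all lower levels
    return s

# Example 1.4 as a wedge over (Z, <) of two-element groups F_g = {zero, g}
# (the integer 0 here is an ordinary element, distinct from the identity):
psi = {g: g for g in range(-5, 5)}      # a finite window of Z for the demo
add = lambda g: lambda a, b: {"-inf"}   # within F_g: g + g = identity
print(wedge_add(3, 2, psi, add))        # {3}: the higher level wins
print(wedge_add(3, 3, psi, add))        # {'-inf', -5, ..., 2}: all c < 3
```

###### Definition 2.3.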
###### Definition 2.3.

[Vir10] A hypergroup is a tuple $(G,\boxplus,0)$ where $\boxplus$ is an associative hyperoperation on $G$ such that:

1. (1) $0\boxplus x=x\boxplus 0=\{x\}$ for all $x\in G$.
2. (2) For every $x\in G$ there is a unique element $x^{\prime}$ of $G$ such that $0\in x\boxplus x^{\prime}$ and there is a unique element $x^{\prime\prime}$ of $G$ such that $0\in x^{\prime\prime}\boxplus x$. Furthermore, $x^{\prime}=x^{\prime\prime}$. This element is denoted by $-x$ and called the hyperinverse of $x$.
3. (3) (Invertibility of sums) $x\in y\boxplus z$ if and only if $-x\in-z\boxplus-y$.

A hypergroup is said to be commutative if

4. (4) $x\in y\boxplus z$ if and only if $x\in z\boxplus y$.

###### Theorem 2.4.

[Vir10] In Definition 2.3, the axiom (3) can be replaced by (Reversibility property) $x\in y\boxplus z$ implies $y\in x\boxplus-z$ and $z\in-y\boxplus x$.

The Reversibility property was introduced by Marshall in [Mar06].

###### Definition 2.5.

A skew hyperring is a tuple $(R,\odot,\boxplus,1,0)$ such that:

1. (1) $(R,\odot,1)$ is a monoid.
2. (2) $(R,\boxplus,0)$ is a commutative hypergroup.
3. (3) (Absorption rule) $x\odot 0=0\odot x=0$ for all $x\in R$.
4. (4) (Distributive Law) $a\odot(x\boxplus y)=(a\odot x)\boxplus(a\odot y)$ and $(x\boxplus y)\odot a=(x\odot a)\boxplus(y\odot a)$ for all $a,x,y\in R$.

A hyperring is a skew hyperring with commutative multiplication. A skew hyperring $F$ is called a skew hyperfield if $0\neq 1$ and every non-zero element of $F$ has a multiplicative inverse. A hyperfield is then a skew hyperfield with commutative multiplication.

###### Definition 2.6.

Let $F$ and $G$ be skew hyperrings. We may define a skew hyperring $F\times G$ with $(x_{1},y_{1})\boxplus(x_{2},y_{2})$ defined as $(x_{1}\boxplus_{F}x_{2})\times(y_{1}\boxplus_{G}y_{2})$ and multiplication defined pointwise. Its additive identity is $(0_{F},0_{G})$ and its multiplicative identity is $(1_{F},1_{G})$. We call $F\times G$ the product of $F$ and $G$.

For $x,y\in F$, we will sometimes write $xy$ instead of $x\odot y$ if there is no risk of confusion.

###### Example 2.7.

In [Vir10], Viro provided a good introduction to hyperfields. Several of the following hyperfields were first introduced there.

1. (1) If $F$ is a field, then $F$ is a hyperfield with $a\odot b=a\cdot b$ and $a\boxplus b=\{a+b\}$, for any $a,b\in F$.
2. (2) The Krasner hyperfield $\mathbb{K}:=\{0,1\}$ has the usual multiplication rule and hyperaddition is defined by $0\boxplus x=\{x\}$ for $x\in\mathbb{K}$ and $1\boxplus 1=\{0,1\}$.
3. (3) The sign hyperfield $\mathbb{S}:=\{0,1,-1\}$ has the usual multiplication rule and hyperaddition is defined by $0\boxplus x=\{x\},x\boxplus x=\{x\}$ for $x\in\mathbb{S}$, and $1\boxplus-1=\{0,1,-1\}$.
4. (4) The triangle hyperfield $\triangle:=\mathbb{R}_{\geq 0}$ has the usual multiplication rule and hyperaddition is defined by $x\boxplus y=\{z\,|\,|x-y|\leq z\leq x+y\}$.
5. (5) The tropical hyperfield $\mathbb{T}_{+}:=\mathbb{R}\cup\{-\infty\}$ has multiplication defined by $x\odot y=x+y$ (with $-\infty$ as an absorbing element), for $x,y\in\mathbb{T}_{+}$. Hyperaddition is defined by $x\boxplus y=\begin{cases}\{\max(x,y)\}&\text{ if $x\neq y$,}\\ \{z\,|\,z\leq x\}&\text{ if $x=y$.}\end{cases}$ Here we use the standard total order on $\mathbb{R}$ and set $-\infty<x$ for all $x\in\mathbb{R}$. The additive identity is $-\infty$ and the multiplicative identity is $0$.
6. (6) The tropical phase hyperfield $\Phi:=S^{1}\cup\{0\}$ has the usual multiplication rule and hyperaddition is defined by $0\boxplus x=\{x\}$, $x\boxplus-x=S^{1}\cup\{0\}$ and $x\boxplus y=\{\frac{ax+by}{|ax+by|}\,|\,a,b\in\mathbb{R}_{\geq 0},a+b\neq 0\}$ for $x,y\in S^{1}$ with $y\neq-x$. (This is called the phase hyperfield in Viro's paper, but more recent papers have often worked with the phase hyperfield (7) described next. The confusion on this point is exacerbated by the fact that Viro incorrectly claims that his phase hyperfield is the same as the quotient hyperfield of the complex numbers by the positive real numbers, but this construction actually gives the hyperfield (7).)
7. (7) The phase hyperfield $\mathbb{P}:=S^{1}\cup\{0\}$ has the usual multiplication rule and hyperaddition is defined by $0\boxplus x=\{x\}$, $x\boxplus-x=\{x,-x,0\}$ and $x\boxplus y=\{\frac{ax+by}{|ax+by|}\,|\,a,b\in\mathbb{R}_{>0}\}$ for $x,y\in S^{1}$ with $y\neq-x$.
8. (8) The tropical real hyperfield $\mathbb{TR}:=\mathbb{R}$ has the usual multiplication rule and hyperaddition is defined by $x\boxplus y=\begin{cases}\{x\}&\text{ if $|x|>|y|$,}\\ \{y\}&\text{ if $|x|<|y|$,}\\ \{x\}&\text{ if $x=y$,}\\ \{z\,|\,|z|\leq|x|\}&\text{ if $x=-y$.}\end{cases}$
9. (9) The tropical complex hyperfield $\mathbb{TC}:=\mathbb{C}$ has the usual multiplication rule and hyperaddition is defined by $x\boxplus y=\begin{cases}\{x\}&\text{if $|x|>|y|$,}\\ \{y\}&\text{if $|x|<|y|$,}\\ \{|x|\dfrac{ax+by}{|ax+by|}\,|\,a,b\in\mathbb{R}_{\geq 0},a+b\neq 0\}&\text{if $|x|=|y|$ and $x\neq-y$,}\\ \{z\,|\,|z|\leq|x|\}&\text{if $x=-y$.}\end{cases}$
10. (10) The ultratriangle hyperfield $\mathbb{T}\triangle:=\mathbb{R}_{\geq 0}$ (denoted by $\mathbb{Y}_{\times}$ in [Vir10] and $\mathbb{T}$ in [BB19]) has the usual multiplication rule and hyperaddition is defined by $x\boxplus y=\begin{cases}\{\max(x,y)\}&\text{ if $x\neq y$,}\\ \{z\,|\,z\leq x\}&\text{ if $x=y$.}\end{cases}$

###### Definition 2.8.

[Vir10, BB19] A skew hyperring $R$ is said to be doubly distributive if for any $a$, $b$, $c$ and $d$ in $R$, we have $(a\boxplus b)(c\boxplus d)=ac\boxplus ad\boxplus bc\boxplus bd.$

###### Example 2.9.

Fields, $\mathbb{K}$, $\mathbb{S}$, $\mathbb{T}_{+}$, $\mathbb{TR}$, $\mathbb{T}\triangle$ are all doubly distributive, but $\triangle$, $\mathbb{P}$, $\Phi$ and $\mathbb{TC}$ are not doubly distributive.

###### Definition 2.10.

A hypergroup $G$ is said to be stringent if for any $a,b\in G$ the set $a\boxplus b$ is a singleton whenever $a\neq-b$. A skew hyperring is said to be stringent if its underlying additive hypergroup is stringent.

### 2.2. Homomorphism

###### Definition 2.11.

[BB16, Pen18] A hypergroup homomorphism is a map $f:G\rightarrow H$ such that $f(0)=0$ and $f(x\boxplus y)\subseteq f(x)\boxplus f(y)$ for all $x,y\in G$. A skew hyperring homomorphism is a map $f:R\rightarrow S$ which is a homomorphism of additive commutative hypergroups as well as a homomorphism of multiplicative monoids (i.e., $f(1)=1$ and $f(x\odot y)=f(x)\odot f(y)$ for $x,y\in R$). A skew hyperfield homomorphism is a homomorphism of the underlying skew hyperrings. A hypergroup (resp. skew hyperring, skew hyperfield) isomorphism is a bijection $f:G\rightarrow H$ which is a hypergroup (resp. skew hyperring, skew hyperfield) homomorphism and whose inverse is also a hypergroup (resp. skew hyperring, skew hyperfield) homomorphism.
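As a small illustrative example of these notions (ours): the map $f:\mathbb{S}\rightarrow\mathbb{K}$ with $f(0)=0$ and $f(1)=f(-1)=1$ is a hyperfield homomorphism, since $f(1\boxplus-1)=\{0,1\}\subseteq 1\boxplus 1$ and $f(1\boxplus 1)=\{1\}\subseteq 1\boxplus 1=\{0,1\}$, and $f$ clearly respects multiplication; it is not an isomorphism, since it is not injective.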
###### Example 2.12.

The map $\exp:\mathbb{T}_{+}\rightarrow\mathbb{T}\triangle$ is a hyperfield isomorphism.

## 3\. Classification of stringent hypergroups

Our aim in this section is to prove Theorem 1.6, the Classification Theorem for stringent hypergroups. We will work with the definition of wedge sums given as Definition 1.5. First we will show that $F:=\bigvee_{g\in G}F_{g}$ is indeed a hypergroup.

###### Lemma 3.1.

$F$ is again a hypergroup. If every hypergroup in $(F_{g}\,|\,g\in G)$ is stringent, then so is $F$. If every hypergroup in $(F_{g}\,|\,g\in G)$ is commutative, then so is $F$.

###### Proof.

For associativity, suppose we have $x_{1},x_{2},x_{3}\in F$. If any of them is 0, then associativity is clear, so suppose that each $x_{i}$ is in $F^{\times}$. If one of the elements $\psi(x_{i})$ of $G$, say $\psi(x_{i_{0}})$, is bigger than the others, then $x_{1}\boxplus(x_{2}\boxplus x_{3})=\{x_{i_{0}}\}=(x_{1}\boxplus x_{2})\boxplus x_{3}$. If one of the $\psi(x_{i})$ is smaller than the others, then both $x_{1}\boxplus(x_{2}\boxplus x_{3})$ and $(x_{1}\boxplus x_{2})\boxplus x_{3}$ evaluate to the sum of the other two $x_{j}$. So we may suppose that all $\psi(x_{i})$ are equal, taking the common value $g$. If $0\not\in x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3}$, then both $x_{1}\boxplus(x_{2}\boxplus x_{3})$ and $(x_{1}\boxplus x_{2})\boxplus x_{3}$ evaluate to $x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3}$, whereas if $0\in x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3}$, then both evaluate to $(x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3})\cup\psi^{-1}(g\downarrow)$. The hyperinverse of 0 is 0 and the hyperinverse of any other $x$ is its hyperinverse in $F_{\psi(x)}$, and $0$ is the additive identity. For invertibility of sums, suppose we have $x,y,z\in F$. We would like to show that $x\in y\boxplus z$ if and only if $-x\in-z\boxplus-y$. It suffices to prove one direction, say that if $x\in y\boxplus z$, then $-x\in-z\boxplus-y$. If $\psi(y)<\psi(z)$, then $x\in y\boxplus z=\{z\}$ and $\psi(-y)<\psi(-z)$. So $-z\boxplus-y=\{-z\}=\{-x\}$. Similarly, if $\psi(y)>\psi(z)$, then $-z\boxplus-y=\{-x\}$. If $\psi(x)=\psi(y)=\psi(z)$, then the statement holds by the reversibility of the hypergroup $(F_{\psi(y)},\boxplus_{\psi(y)},0)$. Otherwise we have $\psi(x)<\psi(y)=\psi(z)$, and so $y=-z$. Then $\psi(-x)<\psi(-y)=\psi(-z)$ and so $-x\in-z\boxplus-y$. Next, we show that $F$ is stringent if every hypergroup $F_{g}$ in $(F_{g}\,|\,g\in G)$ is. By the definition of $F$, we just need to show that for any $x,y\in F$ with $\psi(x)=\psi(y)$ and $0\not\in x\boxplus_{\psi(x)}y$, the sum $x\boxplus y$ is a singleton. As $F_{\psi(x)}$ is stringent and $0\not\in x\boxplus_{\psi(x)}y$, the set $x\boxplus_{\psi(x)}y$ is a singleton. So $x\boxplus y=x\boxplus_{\psi(x)}y$ is also a singleton. Finally, it is clear that $\boxplus$ is commutative if each $\boxplus_{g}$ is commutative. ∎
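As an illustration of the wedge sum construction (anticipating the Classification Theorem): the underlying hypergroup of the tropical hyperfield $\mathbb{T}_{+}$ from Example 2.7(5) is $\bigvee_{g\in\mathbb{R}}F_{g}$, where each $F_{g}=\{-\infty,g\}$ is a copy of the Krasner hypergroup. Indeed, for $x\neq y$ the wedge sum gives $x\boxplus y=\{\max(x,y)\}$, and for $x=y$ it gives $(x\boxplus_{x}x)\cup\psi^{-1}(x\downarrow)=\{-\infty,x\}\cup\{z\,|\,z<x\}=\{z\,|\,z\leq x\},$ exactly as in Example 2.7(5).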
Now we begin the proof of the Classification Theorem. We first introduce a useful lemma. Note that this lemma automatically holds for stringent commutative hypergroups, so readers only interested in that case may skip the proof.

###### Lemma 3.2.

Let $F$ be a stringent hypergroup. If $y\in x\boxplus y$, then $y\in y\boxplus x$.

###### Proof.

We will divide the proof into four cases.

_Case 1:_ If $x=y$, this is immediate.

_Case 2:_ If $x=-y$, then by reversibility we get $y\in x\boxplus y\Rightarrow y\in-y\boxplus y\Rightarrow y\in y\boxplus y\Rightarrow y\in y\boxplus-y\Rightarrow y\in y\boxplus x.$

_Case 3:_ If $y=-y$, then by reversibility and Case 2 we get $y\in x\boxplus y\Rightarrow x\in y\boxplus-y\Rightarrow x\in-y\boxplus y\Rightarrow y\in y\boxplus x.$

_Case 4:_ Now we suppose $x\notin\{y,-y\}$ and $y\neq-y$. Let $z\in F^{\times}$ be such that $y\boxplus x=\{z\}$ and let $t\in F^{\times}$ be such that $-y\boxplus-y=\{t\}$. Then by associativity we get $z\boxplus y\boxplus t=(y\boxplus x)\boxplus y\boxplus(-y\boxplus-y)=y\boxplus(x\boxplus y)\boxplus-y\boxplus-y=y\boxplus y\boxplus-y\boxplus-y\ni 0.$ So we get $0\in z\boxplus y\boxplus t$, thus $-z\in y\boxplus t$. As $t\in-y\boxplus-y$, we have $-y\in y\boxplus t$. So $-z,-y\in y\boxplus t$. Then by stringency we get either $z=y$ or $t=-y$. If $z=y$, then we are done. Now assume $t=-y$. Thus $-z\in y\boxplus t=y\boxplus-y$, and so $-y\in-y\boxplus z$. As $y\boxplus x=\{z\}$, we have $x\in-y\boxplus z$. So $-y,x\in-y\boxplus z$. Then by stringency we get either $x=-y$ or $z=y$. By Case 2, the statement holds. ∎

Now we define a relation on $F^{\times}$ which roughly corresponds to the ordering of $G$.

###### Definition 3.3.

We define a relation $<_{F}$ on $F^{\times}$ by $x<_{F}y$ if $x\boxplus y=y\boxplus x=\{y\}$ but $x\neq y$.

###### Lemma 3.4.

$<_{F}$ is a strict partial order on $F^{\times}$.

###### Proof.

Irreflexivity is built into the definition, so it remains to check transitivity. Suppose that $x<_{F}y<_{F}z$. Then $x\boxplus z=x\boxplus y\boxplus z=y\boxplus z=\{z\}$. Similarly, $z\boxplus x=\{z\}$. We cannot have $x=z$, since then $\{y\}=y\boxplus x=y\boxplus z=\{z\}$, so $y=z$, which is a contradiction. ∎

###### Lemma 3.5.

If $x<_{F}y$, then

1. (1) $\pm x<_{F}\pm y$.
2. (2) for any $z\in F^{\times}$ we have either $x<_{F}z$ or $z<_{F}y$.

###### Proof.

1. (1) It suffices to prove that $-x<_{F}y$, by invertibility of sums. Since $x<_{F}y$, we have $x\neq-y$, because $0\in-y\boxplus y$. As $x\boxplus y=\{y\}$, we get $y\in-x\boxplus y$. By stringency, $-x\boxplus y=\{y\}$. Similarly, $y\boxplus-x=\{y\}$. So $-x<_{F}y$.
2. (2) By (1), we have $\pm x<_{F}\pm y$. Suppose that $z\not<_{F}y$. If $z\in\{y,-y\}$, then we have $x<_{F}z$. Otherwise, $y\not\in z\boxplus y$ and $y\notin y\boxplus z$ by Lemma 3.2. Then $0\not\in z\boxplus y\boxplus-y$ and $0\not\in-y\boxplus y\boxplus z$. So by stringency, we have $z\boxplus y\boxplus-y=\{z\}$ and $-y\boxplus y\boxplus z=\{z\}$. However, $x\in y\boxplus-y$ and $x\in-y\boxplus y$, since $x<_{F}y$. So $z\boxplus x=\{z\}$ and $x\boxplus z=\{z\}$. Now if $z\neq x$ this implies that $x<_{F}z$, but if $z=x$ then we have $z<_{F}y$. ∎

Now we define a relation $\sim_{F}$ on $F^{\times}$ by $x\sim_{F}y$ if and only if both $x\not<_{F}y$ and $y\not<_{F}x$.

###### Lemma 3.6.

$\sim_{F}$ is an equivalence relation.

###### Proof.

$\sim_{F}$ is clearly reflexive and symmetric. For transitivity, suppose that $x\sim_{F}y$ and $y\sim_{F}z$. If $x<_{F}z$ then either $x<_{F}y$, contradicting $x\sim_{F}y$, or else $y<_{F}z$, contradicting $y\sim_{F}z$, so this is impossible. Similarly we have $z\not<_{F}x$. So $x\sim_{F}z$. ∎
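To see these relations in a concrete case: in the tropical real hyperfield $\mathbb{TR}$ of Example 2.7(8), a short computation shows that $x<_{\mathbb{TR}}y$ exactly when $|x|<|y|$, so $x\sim_{\mathbb{TR}}y$ exactly when $|x|=|y|$. The $\sim_{\mathbb{TR}}$-equivalence classes are thus the sets $\{x,-x\}$ with $x\neq 0$, ordered like $\mathbb{R}_{>0}$, and each class together with $0$ forms a copy of the sign hypergroup, in line with Theorem 1.6.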
The following results are straightforward, and we collect them together.

###### Lemma 3.7.

1. (1) If $x\sim_{F}y<_{F}z$ or $x<_{F}y\sim_{F}z$, then $x<_{F}z$.
2. (2) The relation $<_{F}$ lifts to a relation (denoted by $<_{F}^{\prime}$) on the set $G$ of $\sim_{F}$-equivalence classes, and $(G,<_{F}^{\prime})$ is a totally ordered set.
3. (3) For every $x\in F^{\times}$, $-x\sim_{F}x$.
4. (4) Let $x,y,z\in F^{\times}$ with $x\neq-y$, $y\neq-z$ and $z\neq-x$. If $0\in x\boxplus y\boxplus z$, then $x\sim_{F}y\sim_{F}z$.

###### Proof.

(1) is trivial, and (2) follows from (1). (3) As $0\in x\boxplus-x$, we have $x\not<_{F}-x$ and $-x\not<_{F}x$. So $-x\sim_{F}x$. (4) If not, then without loss of generality we have $x<_{F}y$, and so $-z\in x\boxplus y=\{y\}$, giving $y=-z$, contradicting our assumptions. ∎

###### Lemma 3.8.

Let $(x_{i}\,|\,i\in I)$ be a finite family of elements of $F$, and $z\in F$ with $x_{i}<_{F}z$ for all $i\in I$. Then for any $y\in\boxplus_{i\in I}x_{i}$ we have $y<_{F}z$.

###### Proof.

It suffices to prove this when $I$ has just two elements, say $x_{1}$ and $x_{2}$, since the general result then follows by induction. Suppose $x_{1},x_{2}<_{F}z$ and $y\in x_{1}\boxplus x_{2}$; then we have $y\boxplus z\subseteq x_{1}\boxplus x_{2}\boxplus z=x_{1}\boxplus z=\{z\}$ and $z\boxplus y\subseteq z\boxplus x_{1}\boxplus x_{2}=z\boxplus x_{2}=\{z\}.$ So $y\boxplus z=\{z\}$ and $z\boxplus y=\{z\}$. If $z\in x_{1}\boxplus x_{2}$ then $-x_{1}\in x_{2}\boxplus-z=\{-z\}$, contradicting $x_{1}<_{F}z$. So $z\not\in x_{1}\boxplus x_{2}$, and so $z\neq y$. So $y<_{F}z$. ∎

It follows from the above results that the sum $x\boxplus y$ is given by $\{x\}$ if $x>_{F}y$, by $\{y\}$ if $x<_{F}y$, by $\{z\}$ for some $z$ in the $\sim_{F}$-equivalence class of $x$ and $y$ if $x\sim_{F}y$ but $x\neq-y$, and by some subset of that class together with $\{t\,|\,t<_{F}x\}\cup\{0\}$ if $x=-y$. This looks very similar to the hyperaddition given in Definition 1.5. We now want to consider the structure of the equivalence classes. Let $g$ be an equivalence class in $G$ and let $F_{g}$ be the set $g\cup\{0\}$. We can define a multivalued binary operation $\boxplus_{g}$ on $F_{g}$ by $x\boxplus_{g}y=(x\boxplus y)\cap F_{g}$.

###### Lemma 3.9.

For any element $g$ in $G$, $F_{g}$ is again a hypergroup, with hyperaddition given by $\boxplus_{g}$.

###### Proof.

For every $x\in F_{g}$, we have $0\boxplus_{g}x=\{x\}\cap F_{g}=\{x\}$. If $0\in x\boxplus_{g}y$, then $0\in x\boxplus y$, and so $y=-x$. Similarly, if $0\in y\boxplus_{g}x$, then $y=-x$. For invertibility of sums, let $x,y,z\in F_{g}$ with $x\in y\boxplus_{g}z$. Then we have $x\in y\boxplus z$. By invertibility of sums in $F$, $-x\in-z\boxplus-y$. So $-x\in-z\boxplus_{g}-y$. For associativity, suppose we have $x,y,z\in F_{g}$. We would like to show that $(x\boxplus_{g}y)\boxplus_{g}z=x\boxplus_{g}(y\boxplus_{g}z).$ Let $t\in F_{g}$. Let us first show that $t\in x\boxplus_{g}(y\boxplus_{g}z)$ if and only if $t\in x\boxplus(y\boxplus z)$. It is clear that $x\boxplus_{g}(y\boxplus_{g}z)\subseteq x\boxplus(y\boxplus z)$. So it suffices to prove the other direction. We suppose that $t\in x\boxplus(y\boxplus z)$. Then there exists $k\in F$ such that $k\in y\boxplus z$ and $t\in x\boxplus k$. If $k\in F_{g}$, then we are done. If not, we have $y=-z$ and $k<_{F}y$. So we also have $k<_{F}x$, and so $t=x\in x\boxplus_{g}0\subseteq x\boxplus_{g}(y\boxplus_{g}z)$. Similarly, we can also get that $t\in(x\boxplus_{g}y)\boxplus_{g}z$ if and only if $t\in(x\boxplus y)\boxplus z$. By associativity of $F$, $(x\boxplus y)\boxplus z=x\boxplus(y\boxplus z)$.
So $(x\boxplus_{g}y)\boxplus_{g}z=x\boxplus_{g}(y\boxplus_{g}z).$ ∎

###### Lemma 3.10.

For any element $g$ in $G$, $F_{g}$ is either isomorphic to $\mathbb{K}$ or isomorphic to $\mathbb{S}$ or is a group.

###### Proof.

For any $y$ and any $x$ with $x\in y\boxplus-y$, we have $y\in x\boxplus y$ and so $x<_{F}y$ unless $x\in\{-y,0,y\}$. So for any $y\in F_{g}$ we have $y\boxplus_{g}-y\subseteq\{-y,0,y\}$. Now suppose that there is some $y\in F_{g}$ with $y\boxplus_{g}-y\neq\{0\}$. Then $y$ is nonzero and $y,-y\in-y\boxplus_{g}y$. Suppose for a contradiction that there is some $z\in F_{g}\setminus\{-y,0,y\}$, and let $t$ be the unique element of $-y\boxplus z$. Then by Lemma 3.5, $t\notin\{y,-y\}$, since $z\not<_{F}-y$. So $y\boxplus t=\{z\}$. Thus $y\in y\boxplus_{g}0\subseteq(-y\boxplus_{g}y)\boxplus_{g}(t\boxplus_{g}-t)=-y\boxplus_{g}(y\boxplus_{g}t)\boxplus_{g}-t=-y\boxplus_{g}z\boxplus_{g}-t=t\boxplus_{g}-t,$ and so $y\in\{-t,0,t\}$, which is the desired contradiction. So if there is any $y$ with $y\boxplus_{g}-y\neq\{0\}$, then $F_{g}=\{-y,0,y\}$. It is now not hard to check that in this case if $y=-y$ then $F_{g}\cong\mathbb{K}$, and if $y\neq-y$ then $F_{g}\cong\mathbb{S}$. On the other hand, if there is no such $y$ then the hyperaddition on $F_{g}$ is single-valued, and so $F_{g}$ is a group. ∎

We can finally prove the Classification Theorem.

###### Proof of Theorem 1.6.

Let $H$ be $F^{\times}$, let $G$ be given as above and let $\psi$ be the map sending an element $h$ of $H$ to its equivalence class in $G$. For any $x$ and $y$ in $H$, if $\psi(x)>_{F}^{\prime}\psi(y)$ then $x>_{F}y$ and so $x\boxplus y=\{x\}$. Similarly if $\psi(x)<_{F}^{\prime}\psi(y)$ then $x\boxplus y=\{y\}$. If $\psi(x)=\psi(y)$ then $x\boxplus_{\psi(x)}y=(x\boxplus y)\cap F_{\psi(x)}$. So by the remarks following Lemma 3.8 we have that the hyperaddition of $F$ agrees with that of $\bigvee_{g\in G}F_{g}$ in this case as well. ∎

## 4\. Classification of stringent skew hyperfields

In this section, we will present the classification of stringent skew hyperfields. We will first introduce a construction of skew hyperfields arising from short exact sequences.

###### Definition 4.1.

Let $F$ be a skew hyperfield and let $G$ be a totally ordered group. Suppose that we have a short exact sequence of groups $1\to F^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,.$ Since $\varphi$ is injective, by replacing $H$ with an isomorphic copy if necessary we may (and shall) suppose that $\varphi$ is the identity. As usual, we define $x^{h}$ to be $h^{-1}\cdot x\cdot h$ for $x,h\in H$. We extend this operation by setting $0_{F}^{h}:=0_{F}$. We say that the short exact sequence has stable sums if for each $h\in H$ the operation $x\mapsto x^{h}$ is an automorphism of $F$ (as a skew hyperfield). Since this operation clearly preserves the multiplicative structure, this is equivalent to the condition that it is always an automorphism of the underlying additive hypergroup. Furthermore, any short exact sequence as above with $H$ abelian automatically has stable sums. Suppose now that we have a short exact sequence with stable sums as above. Then we may define a hyperfield with multiplicative group $H$ as follows. We begin by choosing some object not in $H$ to serve as the additive identity, and we denote this object by $0$. For each $g$ in $G$, let $A_{g}$ be $\psi^{-1}(g)\cup\{0\}$.
For any $h$ in $\psi^{-1}(g)$ there is a bijection $\lambda_{h}$ from $F$ to $A_{g}$ sending $0_{F}$ to $0$ and $x$ to $h\cdot x$ for $x\in F^{\times}$, and so there is a unique hypergroup structure on $A_{g}$ making $\lambda_{h}$ an isomorphism of hypergroups. Furthermore, this structure is independent of the choice of $h$, since for $h_{1},h_{2}\in\psi^{-1}(g)$ the map $\lambda_{h_{1}}^{-1}\cdot\lambda_{h_{2}}$ is just left multiplication by $h_{1}^{-1}\cdot h_{2}$, which is an automorphism of the additive hypergroup of $F$. In this way we obtain a well defined hypergroup structure on $A_{g}$, whose hyperaddition we denote by $\boxplus_{g}$.

Then the $G$-layering $F\rtimes_{H,\psi}G$ of $F$ along this short exact sequence has as ground set $H\cup\{0\}$. Multiplication is given by $x\cdot y=0$ if $x$ or $y$ is $0$ and by the multiplication of $H$ otherwise. $H\cup\{0\}$ is the underlying set of the hypergroup $\bigvee_{g\in G}A_{g}$, and we take the hyperaddition of $F\rtimes_{H,\psi}G$ to be given by that of this hypergroup. Explicitly, the hyperaddition is given by taking 0 to be the additive identity and setting $x\boxplus y=\begin{cases}\{x\}&\text{if }\psi(x)>\psi(y),\\ \{y\}&\text{if }\psi(x)<\psi(y),\\ x\boxplus_{\psi(x)}y&\text{if }\psi(x)=\psi(y)\text{ and }0\not\in x\boxplus_{\psi(x)}y,\\ (x\boxplus_{\psi(x)}y)\cup\psi^{-1}(\psi(x)\downarrow)&\text{if }\psi(x)=\psi(y)\text{ and }0\in x\boxplus_{\psi(x)}y.\end{cases}$

###### Lemma 4.2.

$F\rtimes_{H,\psi}G$ is again a skew hyperfield. If $F$ is stringent, then so is $F\rtimes_{H,\psi}G$.

###### Proof.

As shown in Lemma 3.1, $\bigvee_{g\in G}A_{g}$ is a commutative hypergroup. So it suffices to show that $\cdot$ distributes over $\boxplus$. For left distributivity, we must prove an equation of the form $x_{1}\cdot(x_{2}\boxplus x_{3})=x_{1}\cdot x_{2}\boxplus x_{1}\cdot x_{3}$. As usual, if any of the $x_{i}$ is 0, then this is trivial, so we suppose that each $x_{i}$ is in $H$. If $\psi(x_{2})>\psi(x_{3})$, then both sides are equal to $x_{1}\cdot x_{2}$. If $\psi(x_{2})<\psi(x_{3})$, then both sides are equal to $x_{1}\cdot x_{3}$. So we may assume that $\psi(x_{2})=\psi(x_{3})$, and we call their common value $g$. Then $x_{2}\boxplus_{g}x_{3}=\lambda_{x_{2}}(1\boxplus_{F}x_{2}^{-1}\cdot x_{3})$ and $(x_{1}\cdot x_{2})\boxplus_{\psi(x_{1})\cdot g}(x_{1}\cdot x_{3})=\lambda_{x_{1}\cdot x_{2}}(1\boxplus_{F}x_{2}^{-1}\cdot x_{3})$. So if $0\not\in x_{2}\boxplus_{g}x_{3}$, then also $0\not\in(x_{1}\cdot x_{2})\boxplus_{\psi(x_{1})\cdot g}(x_{1}\cdot x_{3})$, and so both sides of the equation are equal to $x_{1}\cdot(x_{2}\boxplus_{g}x_{3})$. If $0\in x_{2}\boxplus_{g}x_{3}$, then also $0\in(x_{1}\cdot x_{2})\boxplus_{\psi(x_{1})\cdot g}(x_{1}\cdot x_{3})$, and so both sides of the equation are equal to $x_{1}\cdot(x_{2}\boxplus_{g}x_{3})\cup x_{1}\cdot\psi^{-1}(g\downarrow)$. For the right distributivity, we need to consider bijections $\lambda_{h}^{\prime}\colon F\to A_{\psi(h)}$ similar to the $\lambda_{h}$. We take $\lambda_{h}^{\prime}(x)$ to be $x\cdot h$ for $x\in F^{\times}$ and to be $0$ for $x=0_{F}$. Then since $\lambda_{h}^{\prime}(x)=\lambda_{h}(x)^{h}$ for any $x$ and the short exact sequence has stable sums, the $\lambda_{h}^{\prime}$ are also hypergroup isomorphisms. So we may argue as above but with the $\lambda_{h}^{\prime}$ in place of the $\lambda_{h}$. Finally, we must show that $F\rtimes_{H,\psi}G$ is stringent if $F$ is.
By the definition of $F\rtimes_{H,\psi}G$, we just need to show that for $x,y\in F\rtimes_{H,\psi}G$ with $\psi(x)=\psi(y)$ and $0\not\in x\boxplus_{\psi(x)}y$, the sum $x\boxplus y$ is a singleton. As $F$ is stringent and $0\not\in x\boxplus_{\psi(x)}y$, the set $x\boxplus_{\psi(x)}y$ is a singleton. So $x\boxplus y=x\boxplus_{\psi(x)}y$ is also a singleton. ∎

Now let us see some interesting examples of hyperfields constructed in this way.

###### Example 4.3.

If $F$ is the Krasner hyperfield, $G$ and $H$ are both the additive group of real numbers, and $\psi$ is the identity, then $F\rtimes_{H,\psi}G$ is the tropical hyperfield.

###### Example 4.4.

The hyperfield $F:=\mathbb{Z}\cup\{-\infty\}$ of Example 1.4 arises from the short exact sequence of groups $0\to GF(2)^{\times}\xrightarrow{\varphi}\mathbb{Z}\xrightarrow{\psi}\mathbb{Z}\to 0.$

###### Example 4.5.

In [AD19], Anderson and Davis drew a diagram encoding many popular and important hyperfields and the homomorphisms between them. [The diagram shows the ten hyperfields $\mathbb{R}$, $\mathbb{C}$, $\triangle$, $\mathbb{S}$, $\Phi$, $\mathbb{P}$, $\mathbb{TR}$, $\mathbb{TC}$, $\mathbb{T}\triangle$ and $\mathbb{K}$, connected by absolute value maps $|\,\,|$ and phase maps ph.] The diagram with the solid arrows commutes. The four dashed arrows are inclusions giving sections (one-sided inverses). Here ph is the _phase map_, ph$(x)=x/|x|$ if $x\neq 0$ and ph$(0)=0$. In each of the ten hyperfields, the underlying set is a subset of the complex numbers closed under multiplication. And in each hyperfield, multiplication, the additive identity, and the multiplicative identity coincide with those of the complex numbers. Our classification gives a good relationship between the hyperfields in each column, and we can construct each hyperfield in the bottom row from the corresponding element of the row just above the bottom and the ordered group $\mathbb{R}_{>0}$.

1. (1) From the short exact sequence of groups $1\to\mathbb{S}^{\times}\rightarrow\mathbb{R}^{\times}\rightarrow\mathbb{R}_{>0}\to 1,$ we can get the tropical real hyperfield $\mathbb{TR}=\mathbb{S}\rtimes\mathbb{R}_{>0}$.
2. (2) From the short exact sequence of groups $1\to\Phi^{\times}\rightarrow\mathbb{C}^{\times}\rightarrow\mathbb{R}_{>0}\to 1,$ we can get the tropical complex hyperfield $\mathbb{TC}=\Phi\rtimes\mathbb{R}_{>0}$.
3. (3) From the short exact sequence of groups $1\to\mathbb{K}^{\times}\rightarrow\mathbb{R}_{>0}\rightarrow\mathbb{R}_{>0}\to 1,$ we can get the ultratriangle hyperfield $\mathbb{T}\triangle=\mathbb{K}\rtimes\mathbb{R}_{>0}$.

Since in each column the second element is obtained as a quotient of the first by the subgroup $\mathbb{R}_{>0}$, this operation of putting back the factor of $\mathbb{R}_{>0}$ yields a hyperfield on the same ground set as the top element of the column.

Our aim is to show that every stringent skew hyperfield is of the form $F\rtimes_{H,\psi}G$ with $F$ either the Krasner hyperfield or the sign hyperfield or a skew field. Let us start with a stringent skew hyperring. Let $R$ be a stringent skew hyperring. By Theorem 1.6, we can decompose $R$ as the wedge sum $\bigvee_{g\in G}R_{g}$, with a surjective mapping $\psi$ from $R^{\times}$ to the set $G$ defined in the last section and an ordering $<_{R}^{\prime}$ on $G$ given by $\psi(x)<_{R}^{\prime}\psi(y)$ if and only if $x\boxplus y=\{y\}$ but $x\neq y$, where the hypergroup $R_{g}$ is either isomorphic to $\mathbb{K}$ or isomorphic to $\mathbb{S}$ or is a group.
Thus by distributivity of $R$, $\psi(x)<_{R}^{\prime}\psi(y)$ if and only if $\psi(ax)<_{R}^{\prime}\psi(ay)$ if and only if $\psi(xa)<_{R}^{\prime}\psi(ya)$ for $a\in R^{\times}$. So the multiplication of $R$ lifts to a multiplication on $G$ respecting the ordering, with identity $\psi(1):=1_{G}$. By Lemma 3.7(2), we easily obtain the following lemma.

###### Lemma 4.6.

$(G,\cdot,<_{R}^{\prime})$ is a totally ordered monoid. If $R$ is a skew hyperfield, then $G$ is a totally ordered group.

Now we want to consider the structure of $R_{g}$.

###### Lemma 4.7.

$R_{1_{G}}$ is again a skew hyperring, with hyperaddition given by $\boxplus_{1_{G}}$ and multiplication given by that of $R$.

###### Proof.

By Lemma 3.9, it suffices to check distributivity. To prove left distributivity we must show that any element $t\in R_{1_{G}}$ of $x\cdot(y\boxplus z)$ is also an element of the same expression evaluated in $R_{1_{G}}$. So let $w$ be an element of $y\boxplus z$ with $x\cdot w=t$. This second equation implies that the equivalence class of $w$ is $1_{G}$, as desired. Right distributivity is similar. ∎

###### Lemma 4.8.

If $R$ is a skew hyperfield, then $R_{1_{G}}$ is either the Krasner hyperfield or the sign hyperfield or a skew field.

###### Proof.

By Lemma 3.10 and Lemma 4.7, we get that $R_{1_{G}}$ is either the Krasner hyperfield or the sign hyperfield or a skew ring. Since $\sim_{R}$ respects the multiplication, the multiplicative inverse of anything equivalent to $1_{R}$ is again equivalent to $1_{R}$, so that $R_{1_{G}}$ is a skew field if it is a skew ring. ∎

###### Lemma 4.9.

For every $g\in G$, the hypergroup of $R_{g}$ is isomorphic to the hypergroup of $R_{1_{G}}$.

###### Proof.

Let $a\in R_{g}^{\times}$. Define $f:R_{g}\rightarrow R_{1_{G}}$ by sending $0$ to $0$ and $x$ in $R_{g}^{\times}$ to $a^{-1}\cdot x$. Since $f$ has an inverse operation, namely left multiplication by $a$, this is a bijection. Now we would like to show that $f(x\boxplus_{g}y)=f(x)\boxplus_{1_{G}}f(y)$: $\displaystyle f(x\boxplus_{g}y)$ $\displaystyle=a^{-1}\cdot(x\boxplus_{g}y)$ $\displaystyle=a^{-1}\cdot\big{(}(x\boxplus y)\cap R_{g}\big{)}$ $\displaystyle=\big{(}a^{-1}\cdot(x\boxplus y)\big{)}\cap(a^{-1}\cdot R_{g})$ $\displaystyle=\big{(}(a^{-1}\cdot x)\boxplus(a^{-1}\cdot y)\big{)}\cap R_{1_{G}}$ $\displaystyle=(a^{-1}\cdot x)\boxplus_{1_{G}}(a^{-1}\cdot y)$ $\displaystyle=f(x)\boxplus_{1_{G}}f(y).$ ∎

Now using the above results, we can classify the stringent skew hyperfields as follows.

###### Theorem 4.10.

Any stringent skew hyperfield $R$ has the form $F\rtimes_{H,\psi}G$, where $F$ is either the Krasner hyperfield or the sign hyperfield or a skew field.

###### Proof.

Let $F$ be $R_{1_{G}}$, let $H$ be $R^{\times}$ and let $G$ be given as above. Let $\varphi$ be the injection of $F^{\times}$ as a subgroup of $H$ and let $\psi$ be the map sending an element $h$ of $H$ to its equivalence class in $G$. Then $1\to F^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1$ is a short exact sequence. For any $x$ and $y$ in $H$, if $\psi(x)>_{R}^{\prime}\psi(y)$, then $x>_{R}y$ and so $x\boxplus y=\{x\}$. Similarly if $\psi(x)<_{R}^{\prime}\psi(y)$, then $x\boxplus y=\{y\}$. If $\psi(x)=\psi(y)$, then $x\boxplus_{\psi(x)}y=x\cdot((1\boxplus x^{-1}\cdot y)\cap R_{1_{G}})=(x\boxplus y)\cap R_{\psi(x)}$. So by the remarks following Lemma 3.8 we have that the hyperaddition of $R$ agrees with that of $F\rtimes_{H,\psi}G$ in this case as well.
∎

Using results of Marshall's paper [Mar06], we can show that the structure is even more constrained if the multiplication of $R$ is commutative (so that $R$ is a stringent hyperfield) and $R_{1_{G}}$ is the Krasner or the sign hyperfield.

###### Proposition 4.11.

Let $R$ be a stringent skew hyperfield with $R_{1_{G}}=\mathbb{S}$ and let $a\in R^{\times}-\{1,-1\}$. Then $a^{2}\notin\{1,-1\}$.

###### Proof.

As $a\notin\{1,-1\}$, we have $a\not\sim_{R}1$. So $\psi(a)\neq 1$. Then $\psi(a^{2})=(\psi(a))^{2}\neq 1$, since $G$ is a totally ordered group. So $a^{2}\not\sim_{R}1$, that is, $a^{2}\notin\{1,-1\}$. ∎

The following are some useful lemmas from Marshall's paper (cf. [Mar06, Section 3]).

###### Definition 4.12.

[Mar06] Let $R$ be a hyperfield. A subset $P$ of $R$ is called an ordering if $P\boxplus P\subseteq P,P\odot P\subseteq P,P\cup-P=R\text{ and }P\cap-P=\{0\}.$

###### Definition 4.13.

[Mar06] A hyperfield $R$ is said to be real if $-1\notin R^{2}\boxplus R^{2}$, where $R^{2}:=\{a^{2}\,|\,a\in R\}$.

###### Lemma 4.14.

[Mar06, Lemma 3.3] Let $R$ be a hyperfield. $R$ has an ordering if and only if $R$ is real.

###### Lemma 4.15.

[Mar06, Lemma 3.2, 3.3] Let $R$ be a hyperfield with $1\neq-1$. If $R$ has an ordering $P$, then $-1\notin P$.

Based on the above lemmas, we get the following.

###### Proposition 4.16.

If $R$ is a stringent hyperfield with $R_{1_{G}}=\mathbb{S}$, then $R$ has an ordering.

###### Proof.

By Lemma 4.14, we just need to show that $R$ is real. Suppose that $-1\in R^{2}\boxplus R^{2}$. Then there exist $a,b\in R$ such that $-1\in a^{2}\boxplus b^{2}$. By Proposition 4.11, $a^{2}\neq-1$ and $b^{2}\neq-1$. Thus $a\neq 0$ and $b\neq 0$. And by reversibility, $-b^{2}\in 1\boxplus a^{2}$. As $a^{2}\neq-1$, we have $1\boxplus a^{2}\subseteq\{1,a^{2}\}$. Since $-b^{2}\neq 1$ (as $b^{2}\neq-1$), it follows that $-b^{2}=a^{2}$. Then $-1=a^{2}b^{-2}=(ab^{-1})^{2}$, a contradiction to Proposition 4.11. So $R$ is real, and therefore has an ordering. ∎

###### Theorem 4.17.

If $R$ is a stringent hyperfield with $R_{1_{G}}\in\{\mathbb{K},\mathbb{S}\}$, then $R$ arises from a short exact sequence $1\to R_{1_{G}}^{\times}\xrightarrow{\varphi}R_{1_{G}}^{\times}\times G\xrightarrow{\psi}G\to 1.$

###### Proof.

If $R_{1_{G}}=\mathbb{K}$, this is trivial. If $R_{1_{G}}=\mathbb{S}$, then by Theorem 4.10 we may suppose $R=\mathbb{S}\rtimes_{H,\psi}G=H\cup\{0\}$ with a short exact sequence of groups $1\to\mathbb{S}^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1.$ By Proposition 4.16, we know that $R$ has an ordering $P$. Let $O=P-\{0\}$. As $P\cup-P=R$ and $P\cap-P=\{0\}$, we have $R=O\,\dot{\cup}\,{-O}\,\dot{\cup}\,\{0\}$. By Lemma 4.15, $-1\notin P$. Then $-1\notin O$, and thus $1\in O$. And as $P\odot P\subseteq P$, we have $O\odot O\subseteq O$. For any $a\in O$ we have $a^{-1}\in O$: otherwise $-a^{-1}\in O$, and then $a\odot-a^{-1}=-1\in O$, which is a contradiction. So $O$ is a multiplicative group with $1\in O$ and $R=O\,\dot{\cup}\,{-O}\,\dot{\cup}\,\{0\}$, and $\psi\restriction O$ is an isomorphism from $O$ to $G$. Now we can identify $x\in H$ with $(1,\psi(x))$ if $x\in O$, and with $(-1,\psi(x))$ if $x\notin O$, giving a bijection from $H$ to $\mathbb{S}^{\times}\times G$. So $R\cong(\mathbb{S}^{\times}\times G)\cup\{0\}$. ∎

It is not clear whether this result extends to stringent skew hyperfields.
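As an illustration of the two preceding results: for the tropical real hyperfield $\mathbb{TR}=\mathbb{S}\rtimes\mathbb{R}_{>0}$ (see Example 4.5), an ordering as in Proposition 4.16 is given by $P=\mathbb{R}_{\geq 0}$, and the resulting identification of Theorem 4.17 is $x\mapsto(\operatorname{sign}(x),|x|)$, giving $\mathbb{TR}^{\times}\cong\mathbb{S}^{\times}\times\mathbb{R}_{>0}$.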
## 5\. Classification of Doubly Distributive Skew Hyperfields

In this section, we will present the classification of doubly distributive skew hyperfields.

###### Proposition 5.1.

The doubly distributive skew hyperfields are precisely those of the form $F\rtimes_{H,\psi}G$ of exactly one of the following types:

1. (1) $F$ is the Krasner hyperfield,
2. (2) $F$ is the sign hyperfield,
3. (3) $F$ is a skew field and $G$ satisfies $\{ab\,|\,a,b<1_{G}\}=\{c\,|\,c<1_{G}\}.$

The following example is a doubly distributive hyperfield of type (3) in Proposition 5.1.

###### Example 5.2.

Let $F:=\mathbb{R}$ be the hyperfield with usual multiplication and hyperaddition given by $x\boxplus y=\begin{cases}\{x\}&\text{ if }|x|>|y|,\\ \{y\}&\text{ if }|x|<|y|,\\ \{-x\}&\text{ if }x=y,\\ \{z\,|\,|z|<|x|\}&\text{ if }x=-y.\end{cases}$ This hyperfield is stringent and arises from the short exact sequence of groups $1\to GF(3)^{\times}\xrightarrow{\varphi}\mathbb{R}^{\times}\xrightarrow{\psi}\mathbb{R}_{>0}\to 1\,.$

Another natural example of a doubly distributive skew hyperfield of type (3) in Proposition 5.1 can be found in [Pen18], which is built around such a (noncommutative) hyperfield (the one called $L^{\sigma}$). Before giving the proof of Proposition 5.1, we first introduce a useful lemma.

###### Lemma 5.3.

Let $R$ be a stringent skew hyperfield. Then $R$ is doubly distributive if and only if $(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$

###### Proof.

By Definition 2.8, $R$ is doubly distributive if and only if $(a\boxplus b)(c\boxplus d)=ac\boxplus ad\boxplus bc\boxplus bd$ for any $a,b,c,d\in R$. As $R$ is stringent, $u\boxplus v$ is a singleton whenever $u\neq-v$. So if either $a\neq-b$ or $c\neq-d$, then the equation above already follows from distributivity. If both $a=-b$ and $c=-d$, then $(a\boxplus b)(c\boxplus d)=(a\boxplus-a)(c\boxplus-c)=a(1\boxplus-1)(1\boxplus-1)c$ and $ac\boxplus ad\boxplus bc\boxplus bd=ac\boxplus-ac\boxplus-ac\boxplus ac=a(1\boxplus-1\boxplus 1\boxplus-1)c.$ So $R$ is doubly distributive if and only if $(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$ ∎

Now we present the proof of Proposition 5.1.

###### Proof of Proposition 5.1.

By Proposition 1.3 and Theorem 4.10, we know that a doubly distributive skew hyperfield $R$ also has the form $F\rtimes_{H,\psi}G$, where $F$ is either the Krasner hyperfield or the sign hyperfield or a skew field, with a short exact sequence of groups $1\to F^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,.$ So it suffices to show that the hyperfields of types (1) and (2) are doubly distributive and that the hyperfields with $F$ a skew field are doubly distributive if and only if they are of type (3).

_Case 1:_ When $F=\mathbb{K}=\{1,0\}$, hyperaddition is defined by $\displaystyle x\boxplus 0$ $\displaystyle=\{x\},$ $\displaystyle x\boxplus y$ $\displaystyle=\begin{cases}\{x\}&\text{ if $\psi(x)>\psi(y)$,}\\ \{y\}&\text{ if $\psi(x)<\psi(y)$,}\\ \{z\,|\,\psi(z)\leq\psi(x)\}\cup\{0\}&\text{ if $\psi(x)=\psi(y)$, that is, $x=y$.}\end{cases}$ By Lemma 5.3, $R$ is doubly distributive if and only if $(1\boxplus 1)(1\boxplus 1)=1\boxplus 1\boxplus 1\boxplus 1.$ Here $(1\boxplus 1)(1\boxplus 1)=(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})\cdot(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})=\{z\,|\,\psi(z)\leq 1\}\cup\{0\},$ and $1\boxplus 1\boxplus 1\boxplus 1=(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})\boxplus(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})=\{z\,|\,\psi(z)\leq 1\}\cup\{0\}.$ So $R$ is doubly distributive when $F=\mathbb{K}$.
_Case 2:_ When $F=\mathbb{S}=\{1,-1,0\}$, hyperaddition is defined by $\displaystyle x\boxplus 0$ $\displaystyle=\{x\},$ $\displaystyle x\boxplus y$ $\displaystyle=\begin{cases}\{x\}&\text{ if $\psi(x)>\psi(y)$,}\\ \{y\}&\text{ if $\psi(x)<\psi(y)$,}\\ \{x\}&\text{ if $x=y$,}\\ \{z\,|\,\psi(z)\leq\psi(x)\}\cup\{0\}&\text{ if $x=-y$.}\end{cases}$ By Lemma 5.3, $R$ is doubly distributive if and only if $(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$ Here $(1\boxplus-1)(1\boxplus-1)=(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})\cdot(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})=\{z\,|\,\psi(z)\leq 1\}\cup\{0\},$ and $1\boxplus-1\boxplus 1\boxplus-1=(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})\boxplus(\{z\,|\,\psi(z)\leq 1\}\cup\{0\})=\{z\,|\,\psi(z)\leq 1\}\cup\{0\}.$ So $R$ is doubly distributive when $F=\mathbb{S}$.

_Case 3:_ When $F$ is a skew field, hyperaddition is defined by $\displaystyle x\boxplus 0$ $\displaystyle=\{x\},$ $\displaystyle x\boxplus y$ $\displaystyle=\begin{cases}\{x\}&\text{ if $\psi(x)>\psi(y)$,}\\ \{y\}&\text{ if $\psi(x)<\psi(y)$,}\\ x\boxplus_{\psi(x)}y&\text{ if $\psi(x)=\psi(y)$ and $0\notin x\boxplus_{\psi(x)}y$,}\\ \{z\,|\,\psi(z)<\psi(x)\}\cup\{0\}&\text{ if $\psi(x)=\psi(y)$ and $0\in x\boxplus_{\psi(x)}y$.}\end{cases}$ By Lemma 5.3, $R$ is doubly distributive if and only if $(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$ Here $(1\boxplus-1)(1\boxplus-1)=(\{z\,|\,\psi(z)<1\}\cup\{0\})\cdot(\{z\,|\,\psi(z)<1\}\cup\{0\})=\{xy\,|\,\psi(x),\psi(y)<1\}\cup\{0\},$ and $1\boxplus-1\boxplus 1\boxplus-1=(\{z\,|\,\psi(z)<1\}\cup\{0\})\boxplus(\{z\,|\,\psi(z)<1\}\cup\{0\})=\{z\,|\,\psi(z)<1\}\cup\{0\}.$ So $R$ is doubly distributive if and only if $\{xy\,|\,\psi(x),\psi(y)<1\}\cup\{0\}=\{z\,|\,\psi(z)<1\}\cup\{0\}.$ We claim that $\{xy\,|\,\psi(x),\psi(y)<1\}\cup\{0\}=\psi^{-1}(\psi(1)\downarrow)\cup\{0\}=\{z\,|\,\psi(z)<1\}\cup\{0\}$ holds if and only if $\{ab\,|\,a,b<1_{G}\}=\{c\,|\,c<1_{G}\}.$

$(\Rightarrow):$ Suppose $\{xy\,|\,\psi(x),\psi(y)<1\}\cup\{0\}=\{z\,|\,\psi(z)<1\}\cup\{0\}$. The inclusion $\subseteq$ of the claimed equation for $G$ is clear, so we just need to consider the other direction. Let $c\in G$ be such that $c<1_{G}$ and let $z\in\psi^{-1}(c)$. Then by our assumption there exist $x,y\in H$ such that $z=xy$ and $\psi(x),\psi(y)<1$. So $c=\psi(z)=\psi(xy)=\psi(x)\psi(y)$, and hence $c\in\{ab\,|\,a,b<1_{G}\}$.

$(\Leftarrow):$ Suppose $\{ab\,|\,a,b<1_{G}\}=\{c\,|\,c<1_{G}\}$. Again the inclusion $\subseteq$ is clear, so we just need to consider the other direction. Let $z\in H$ be such that $\psi(z)<1$ and let $c=\psi(z)$. Then by our assumption there exist $a,b\in G$ such that $c=ab$ and $a,b<1_{G}$. Let $x\in H$ be such that $\psi(x)=a<1_{G}$ and let $y=x^{-1}z$. We have $\psi(y)=\psi(x^{-1}z)=a^{-1}c=b<1_{G}$ and $z=xy$. So $z\in\{xy\,|\,\psi(x),\psi(y)<1\}$.

So $R$ is doubly distributive if and only if $\{ab\,|\,a,b<1_{G}\}=\{c\,|\,c<1_{G}\}.$ ∎
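As a cross-check of condition (3): for the hyperfield of Example 1.4 we have $F=GF(2)$ and $G=(\mathbb{Z},+)$ by Example 4.4, and $\{a+b\,|\,a,b<0\}=\{c\,|\,c\leq-2\}\neq\{c\,|\,c<0\},$ so Proposition 5.1 again shows that this hyperfield is not doubly distributive, in agreement with the direct computation in Example 1.4.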
## 6\. Reduction of stringent skew hyperrings to hyperfields

In this section, we will show that stringent skew hyperrings are very restricted.

###### Theorem 6.1.

Every stringent skew hyperring is either a skew ring or a stringent skew hyperfield.

###### Proof.

If $G$ is trivial, then $R=R_{1_{G}}$. So $R$ is either $\mathbb{K}$, or $\mathbb{S}$, or a skew ring. If $G$ is nontrivial, we would like to show that every element $x$ in $R^{\times}$ is a unit. Now let $s$ and $t$ in $R^{\times}$ be such that $x\cdot s>_{R}1$ and $t\cdot x>_{R}1$. Then by the remarks after Lemma 3.8, we have $1\in x\cdot s\boxplus-x\cdot s=x\cdot(s\boxplus-s),$ $1\in t\cdot x\boxplus-t\cdot x=(t\boxplus-t)\cdot x.$ So there exist $y\in s\boxplus-s$ and $z\in t\boxplus-t$ such that $1=x\cdot y=z\cdot x$. Thus $y=(z\cdot x)\cdot y=z\cdot(x\cdot y)=z$. So $x$ has a multiplicative inverse $y$ in $R$, and hence $x$ is a unit of $R$. So every stringent skew hyperring is either a skew ring or a stringent skew hyperfield. ∎

We cannot classify doubly distributive skew hyperrings using our classification, because not every doubly distributive skew hyperring is stringent. The following is a counterexample.

###### Example 6.2.

The hyperring $\mathbb{K}\times\mathbb{K}$ that is the square of the Krasner hyperfield is doubly distributive but not stringent.

## 7\. Every stringent skew hyperfield is a quotient of a skew field

In this section, we would like to show that every stringent skew hyperfield is a quotient of a skew field by some normal subgroup. In particular, every stringent hyperfield is a quotient of a field by some special kind of subgroup, called 'hüllenbildend'. This kind of subgroup was studied by Diller and Grenzdörffer in [DG73] when they tried to unify the treatment of various notions of convexity in projective spaces over a field $K$ by introducing for any subgroup $U\leq K^{\times}$ the notion of $U$-convexity. They showed that this notion is reasonably well behaved if and only if $U$ is as follows.

###### Definition 7.1.

[DG73] Let $K$ be a field and let $U\leq K^{\times}$. $U$ is called 'hüllenbildend' (hull-producing) if $U$ satisfies $x,y\in K,x+y-xy\in U\rightarrow x\in U\text{ or }y\in U.$ (1)

In [Dre77], Dress presented a simple complete classification of such 'hüllenbildend' subgroups $U$. We will combine our classification of stringent (skew) hyperfields into three types with Dress's classification of such subgroups.

###### Theorem 7.2.

[Dre77, Theorem 1] Let $U\leq K^{\times}$ satisfy (1) and let $S_{U}=\{x\in K\,|\,x\notin U\text{ and }x+U\subseteq U\}$. Then $S_{U}$ is the maximal ideal of a valuation ring $R=R_{U}(=\{x\in K\,|\,x\cdot S_{U}\subseteq S_{U}\})$ in $K$, $U$ is contained in $R$, $\overline{U}=\{\overline{x}\in\overline{K}_{U}=R_{U}/S_{U}\,|\,x\in U\}$ is either a domain of positivity in $\overline{K}_{U}$ (if $-1\notin U$, $2\in U$) or $\overline{U}=\{\overline{1}\}$ or $\overline{U}=\overline{K}_{U}^{\times}$ and, in any case, $U=\{x\in R_{U}\,|\,\overline{x}\in\overline{U}\}$.

We will first explain how to choose the suitable subgroup $U$ in the case of a stringent hyperfield and then give the proof in full generality for stringent skew hyperfields. From our classification in Theorem 4.10, we know that every stringent hyperfield $F$ has the form $M\rtimes_{H,\psi}G$, where $M$ is either $\mathbb{K}$, or $\mathbb{S}$, or a field. To show that $F$ is a quotient by some subgroup $U$, we will choose $U$ with $\overline{U}=\{\overline{1}\}$ if $M$ is $\mathbb{K}$, choose $U$ with $\overline{U}$ a domain of positivity if $M$ is $\mathbb{S}$, and choose $U$ with $\overline{U}=\overline{K}_{U}^{\times}$ if $M$ is a field. Now we begin the proof that every stringent skew hyperfield is a quotient of a skew field $K$ by some normal subgroup $U$. First, we recall the definition of the quotient hyperfield. The quotient hyperfield $K/U=\{[g]=gU\,|\,g\in K\}$ was introduced by Krasner in [Kra83], with multiplication given by $[g]\cdot[h]=[gh]$, for $[g],[h]\in K/U$.
Hyperaddition is given by $[g]\boxplus[0]=\{[g]\}$ and $[g]\boxplus[h]=\{[f]\in K/U\,|\,f\in gU+hU\}$, for $[g],[h]\in(K/U)^{\times}$. As the subgroup $U$ we choose will be normal, this quotient construction also works in the skew case. We may suppose that a stringent skew hyperfield $F=M\rtimes_{H,\psi}G$ arises from a short exact sequence of groups $1\to M^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,,$ where $G$ is a totally ordered group equipped with a total order $\leq$ and $M$ is either $\mathbb{K}$, or $\mathbb{S}$, or a skew field. We define an order $\leq^{\prime}$ on $G$ such that $x\leq^{\prime}y$ if and only if $y\leq x$. Then $\leq^{\prime}$ is also a total order on $G$. As in the non-skew case, we will choose $U$ with $\overline{U}=\{\overline{1}\}$ if $M$ is $\mathbb{K}$, choose $U$ with $\overline{U}$ a domain of positivity if $M$ is $\mathbb{S}$, and choose $U$ with $\overline{U}=\overline{K}_{U}^{\times}$ if $M$ is a skew field. Our difficulty now is to choose a suitable skew field $K$ for the quotient hyperfield corresponding to each $U$. We will introduce two different constructions of skew fields, as follows.

###### Example 7.3.

[FS01] (Construction 1) Let $k$ be an arbitrary field. Define $K=k((G))$ to be the ring of formal power series whose powers come from $G$, that is, the elements of $K$ are functions from $G$ to $k$ such that the support of each function is a well-ordered subset of $(G,\leq^{\prime})$. Addition is pointwise, and multiplication is the Cauchy product or convolution, that is, the natural operation when viewing the functions as power series $\sum_{a\in G}p(a)x^{a}.$ It is well known (and easy to check) that $K$ is a skew field.

We will construct a skew field $K=k((G))$ as in Example 7.3 by choosing $k$ to be an arbitrary field when $M$ is $\mathbb{K}$, and choosing $k$ to be the field $\mathbb{R}$ of real numbers (or any other ordered field) when $M$ is $\mathbb{S}$. The second construction is for a stringent skew hyperfield $F=M\rtimes_{H,\psi}G$ when $M$ is a skew field.

###### Example 7.4.

(Construction 2) We define $K=M[[G]]$ to be the set of formal sums of elements of $H$ all from different layers such that the support is well-ordered, that is, an element of $K$ is a function $p$ from $G$ to $H\cup\{0\}$ such that for any $g$ in $G$, $p(g)\in\psi^{-1}(g)\cup\{0\}=A_{g}$, and the support of each function is a well-ordered subset of $(G,\leq^{\prime})$. As $M$ is a skew field and each $\lambda_{h}$ with $h\in H$ is an isomorphism of hypergroups, $(A_{g},\boxplus_{g},0)$ is always an abelian group. We claim that $K$ is a skew field, viewing functions as power series $\sum_{a\in G}p(a)x^{a}\,,$ with addition $+_{K}$ given by $\sum_{a\in G}p(a)x^{a}+_{K}\sum_{a\in G}q(a)x^{a}=\sum_{a\in G}(p(a)\boxplus_{a}q(a))x^{a},$ and the additive identity is $\sum_{a\in G}0x^{a}$. Multiplication $\cdot_{K}$ is given by $\Big{(}\sum_{a\in G}p(a)x^{a}\Big{)}\cdot_{K}\Big{(}\sum_{a\in G}q(a)x^{a}\Big{)}=\sum_{s\in G}\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q),}{\underset{g\in\operatorname{supp}(p),}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}x^{s},$ and the multiplicative identity is $1x^{1_{G}}$. Since the proof that this really gives a skew field is a long calculation and is very similar to that for $k((G))$, we do not give it here but in Appendix A.

Now we divide the proof into three cases and show that $F\cong K/U$ in each case.
For simplicity, we denote $\min(\operatorname{supp}(p))$ (with respect to $\leq^{\prime}$) by $m_{p}$ for $p\in K^{\times}$.

1. _Case_ 1: If $M$ is $\mathbb{K}$, then let $U=\{p\in K^{\times}\,|\,m_{p}=1_{G}\}$. It is easy to check that $U$ is normal. Then the quotient hyperfield $K/U=\{[q]=qU\,|\,q\in K\}$ has $\displaystyle[q]$ $\displaystyle=\{p\in K^{\times}\,|\,m_{p}=m_{q}\},$ $\displaystyle[0]$ $\displaystyle=\{0_{K}\}.$ So we can identify $[q]$ in $(K/U)^{\times}$ with $m_{q}$ in $G$ and identify $[0]$ in $K/U$ with $0$. So we have $K/U\cong(G\cup\{0\},\boxplus,\cdot)$ with multiplication given by $\displaystyle 0\cdot g$ $\displaystyle=0,$ $\displaystyle g\cdot h$ $\displaystyle=g\cdot_{G}h,$ where $g,h\in G$. And hyperaddition is given by $\displaystyle g\boxplus 0$ $\displaystyle=\{g\},$ $\displaystyle g\boxplus h$ $\displaystyle=\begin{cases}\{g\}&\text{ if $g<^{\prime}h$, that is, $g>h$,}\\ \{h\}&\text{ if $g>^{\prime}h$, that is, $g<h$,}\\ \{f\in G\,|\,f\geq^{\prime}g\}\cup\{0\}=\{f\in G\,|\,f\leq g\}\cup\{0\}&\text{ if $g=h$,}\end{cases}$ where $g,h\in G$. Now it is clear that $K/U\cong(G\cup\{0\},\boxplus,\cdot)\cong\mathbb{K}\rtimes_{H,\psi}G=F$.
2. _Case_ 2: If $M$ is $\mathbb{S}$, $k=\mathbb{R}$ (or any other ordered field) and $K=k((G))$, then let $U=\{p\in K^{\times}\,|\,m_{p}=1_{G}\text{ and }p(1_{G})>0\}$. It is easy to check that $U$ is normal. Then the quotient hyperfield $K/U=\{[q]=qU\,|\,q\in K\}$ has $\displaystyle[q]$ $\displaystyle=\{p\in K^{\times}\,|\,m_{p}=m_{q}\text{ and }p(m_{p})>0\}\text{ if }q(m_{q})>0,$ $\displaystyle[q]$ $\displaystyle=\{p\in K^{\times}\,|\,m_{p}=m_{q}\text{ and }p(m_{p})<0\}\text{ if }q(m_{q})<0,$ $\displaystyle[0]$ $\displaystyle=\{0_{K}\}.$ We can identify $[q]$ in $(K/U)^{\times}$ with $(1,m_{q})$ if $q(m_{q})>0$, identify $[q]$ in $(K/U)^{\times}$ with $(-1,m_{q})$ if $q(m_{q})<0$, and identify $[0]$ with $0$. So we have $K/U\cong((\mathbb{S}^{\times}\times G)\cup\{0\},\boxplus,\cdot)$ with multiplication given by $\displaystyle(r,g)\cdot 0$ $\displaystyle=0,$ $\displaystyle(r_{1},g_{1})\cdot(r_{2},g_{2})$ $\displaystyle=(r_{1}\cdot_{\mathbb{S}}r_{2},g_{1}\cdot_{G}g_{2}),$ where $r,r_{1},r_{2}\in\mathbb{S}^{\times}$ and $g,g_{1},g_{2}\in G$. And hyperaddition is given by $\displaystyle(r,g)\boxplus 0$ $\displaystyle=\{(r,g)\},$ $\displaystyle(r_{1},g_{1})\boxplus(r_{2},g_{2})$ $\displaystyle=\begin{cases}\{(r_{1},g_{1})\}&\text{if $g_{1}<^{\prime}g_{2}$, that is, $g_{1}>g_{2}$,}\\ \{(r_{2},g_{2})\}&\text{if $g_{1}>^{\prime}g_{2}$, that is, $g_{1}<g_{2}$,}\\ \{(r_{1},g_{1})\}&\text{if $g_{1}=g_{2}$ and $r_{1}=r_{2}$,}\\ \{(r,f)\,|\,r\in\mathbb{S}^{\times},f\geq^{\prime}g_{1}\}\cup\{0\}=\{(r,f)\,|\,r\in\mathbb{S}^{\times},f\leq g_{1}\}\cup\{0\}&\text{if $g_{1}=g_{2}$ and $r_{1}=-r_{2}$,}\end{cases}$ where $r,r_{1},r_{2}\in\mathbb{S}^{\times}$ and $g,g_{1},g_{2}\in G$. So by Theorem 4.17, $K/U\cong((\mathbb{S}^{\times}\times G)\cup\{0\},\boxplus,\cdot)\cong\mathbb{S}\rtimes_{H,\psi}G=F$.
3. _Case_ 3: If $M$ is a skew field and $K=M[[G]]$, then let $U=\{p\in K^{\times}\,|\,m_{p}=1_{G}\text{ and }p(1_{G})=1\}$. It is easy to check that $U$ is normal. Then the quotient hyperfield $K/U=\{[q]=qU\,|\,q\in K\}$ has $\displaystyle[q]$ $\displaystyle=\{p\in K^{\times}\,|\,m_{p}=m_{q}\text{ and }p(m_{p})=q(m_{q})\},$ $\displaystyle[0]$ $\displaystyle=\{0_{K}\}.$ We can identify $[q]$ in $(K/U)^{\times}$ with $q(m_{q})$ in $H$ (clearly $\psi(q(m_{q}))=m_{q}$) and identify $[0]$ with $0_{F}$.
So we have $K/U\cong F$, with multiplication given by $\displaystyle[q]\cdot 0$ $\displaystyle=0,$ $\displaystyle[q]\cdot[h]$ $\displaystyle=\{p\in K^{\times}\,|\,m_{p}=m_{q}\cdot_{G}m_{h}\text{ and }p(m_{p})=q(m_{q})\cdot_{H}h(m_{h})\}.$ Hyperaddition is given by $[q]\boxplus 0=\{[q]\},$ $\displaystyle[q]\boxplus[h]$ $\displaystyle=$ $\displaystyle\begin{cases}\{[q]\}&\text{if $m_{q}<^{\prime}m_{h}$, that is, $m_{q}>m_{h}$,}\\ \{[h]\}&\text{if $m_{q}>^{\prime}m_{h}$, that is, $m_{q}<m_{h}$,}\\ \{[p]\,|\,p\in K^{\times},\,m_{p}=m_{q}\text{ and }p(m_{p})\in q(m_{q})\boxplus_{m_{q}}h(m_{h})\}&\text{if $m_{q}=m_{h}$ and $0\notin q(m_{q})\boxplus_{m_{q}}h(m_{h})$,}\\ \{[p]\in(K/U)^{\times}\,|\,m_{p}>^{\prime}m_{q}\}\cup\{0\}=\{[p]\in(K/U)^{\times}\,|\,m_{p}<m_{q}\}\cup\{0\}&\text{if $m_{q}=m_{h}$ and $0\in q(m_{q})\boxplus_{m_{q}}h(m_{h})$,}\end{cases}$ where $[q],[h]\in(K/U)^{\times}$. So $K/U\cong M\rtimes_{H,\psi}G=F.$

###### Theorem 7.5.

Every stringent skew hyperfield is a quotient of a skew field.

###### Corollary 7.6.

Every doubly distributive skew hyperfield is a quotient of a skew field.

It follows from the construction that the same statements with all instances of the word 'skew' removed also hold.

## Appendix A Construction 2 in Example 7.4 gives a skew field

###### Lemma A.1.

Let $F=M\rtimes_{H,\psi}G$ be a stringent skew hyperfield arising from a short exact sequence of groups $1\to M^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,,$ where $G$ is a totally ordered group and $M$ is a skew field. Define $K=M[[G]]$ as we did in Section 7. Then $K$ is a skew field.

###### Proof.

The commutativity and associativity of $(K,+_{K},\sum_{a\in G}0x^{a})$ follow from those of $(H\cup\{0\},\boxplus,0)$. So we only need to show the associativity of $(K,\cdot_{K},1x^{1_{G}})$, the existence of a multiplicative inverse for every nonzero element, and distributivity. An important principle which we will need again and again as we go along is a kind of distributivity of the multiplication of $H$ over the various additions $\boxplus_{g}$. To express it cleanly, we begin by extending $\cdot_{H}$ to $H\cup\{0\}$ by setting $x\cdot 0=0\cdot x=0$ for all $x\in H\cup\{0\}$. Suppose that we have elements $x$ and $y_{1},y_{2},\ldots,y_{n}$ of $H$ with $\psi(y_{i})=u\in G$ for all $i$, so that $\boxplus_{i=1}^{n}y_{i}$ is defined. Let $v\in G$ be such that $v=\psi(x)$. Then $z\mapsto x\cdot_{H}z$ is a bijection from $A_{u}$ to $A_{v\cdot u}$ whose composition with $\lambda_{y_{1}}$ is $\lambda_{x\cdot_{H}y_{1}}$, so it must also be an isomorphism of hypergroups. Thus $x\cdot_{H}\big{(}\underset{1\leq i\leq n}{\boxplus_{u}}y_{i}\big{)}=\underset{1\leq i\leq n}{\boxplus_{v\cdot u}}x\cdot_{H}y_{i}\,.$ A similar argument using the $\lambda^{\prime}_{h}$ defined in the proof of Lemma 4.2 shows $\big{(}\underset{1\leq i\leq n}{\boxplus_{u}}y_{i}\big{)}\cdot_{H}x=\underset{1\leq i\leq n}{\boxplus_{u\cdot v}}y_{i}\cdot_{H}x\,.$ To show the associativity of $(K,\cdot_{K},1x^{1_{G}})$, we let $p,q,w\in K$.
Then for $s\in G$, $\displaystyle\big{(}(p\cdot_{K}q)\cdot_{K}w\big{)}(s)$ $\displaystyle=\underset{g\cdot_{G}c=s}{\underset{c\in\operatorname{supp}(w)}{\boxplus_{s}}}\Big{(}\big{(}\underset{a\cdot_{G}b=g}{\underset{b\in\operatorname{supp}(q)}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{g}}}}p(a)\cdot_{H}q(b)\big{)}\cdot_{H}w(c)\Big{)}$ $\displaystyle=\underset{g\cdot_{G}c=s}{\underset{c\in\operatorname{supp}(w)}{\boxplus_{s}}}\Big{(}\underset{a\cdot_{G}b=g}{\underset{b\in\operatorname{supp}(q)}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}}p(a)\cdot_{H}q(b)\cdot_{H}w(c)\Big{)}$ $\displaystyle=\underset{a\cdot_{G}b\cdot_{G}c=s}{\underset{c\in\operatorname{supp}(w)}{{\underset{b\in\operatorname{supp}(q)}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}}}}p(a)\cdot_{H}q(b)\cdot_{H}w(c)$ $\displaystyle=\underset{a\cdot_{G}h=s}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}\Big{(}\underset{b\cdot_{G}c=h}{\underset{c\in\operatorname{supp}(w)}{\underset{b\in\operatorname{supp}(q)}{\boxplus_{s}}}}p(a)\cdot_{H}q(b)\cdot_{H}w(c)\Big{)}$ $\displaystyle=\underset{a\cdot_{G}h=s}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}\Big{(}p(a)\cdot_{H}\big{(}\underset{b\cdot_{G}c=h}{\underset{c\in\operatorname{supp}(w)}{\underset{b\in\operatorname{supp}(q)}{\boxplus_{h}}}}q(b)\cdot_{H}w(c)\big{)}\Big{)}$ $\displaystyle=\big{(}p\cdot_{K}(q\cdot_{K}w)\big{)}(s)\,.$ So $(p\cdot_{K}q)\cdot_{K}w=p\cdot_{K}(q\cdot_{K}w)$.

Next we will show that each nonzero element of $K$ has a multiplicative inverse. We do this first for those $p\in K=M[[G]]$ such that $m_{p}=1_{G}$ and $p(m_{p})=1$. Let $S$ be the set of finite products of elements of $\operatorname{supp}(p)$. Then $S$ is well-ordered with respect to $\leq^{\prime}$. Define $q\in K=M[[G]]$ such that $q(1_{G}):=1$, $q(s):=0$ for $s\notin S$ and, for $s\in S-\{1_{G}\}$, define $q(s)$ recursively by $q(s):=-\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in S-\{s\}}{\underset{g\in\operatorname{supp}(p)-\{1_{G}\}}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}.$ So $\displaystyle(p\cdot_{K}q)(1_{G})$ $\displaystyle=1,$ $\displaystyle(p\cdot_{K}q)(s)$ $\displaystyle=0$ if $s\notin S$, $\displaystyle(p\cdot_{K}q)(s)$ $\displaystyle=\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q)}{\underset{g\in\operatorname{supp}(p)}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)$ $\displaystyle=\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q)-\{s\}}{\underset{g\in\operatorname{supp}(p)-\{1_{G}\}}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}\boxplus_{s}p(1_{G})\cdot_{H}q(s)$ $\displaystyle=\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q)-\{s\}}{\underset{g\in\operatorname{supp}(p)-\{1_{G}\}}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}\boxplus_{s}q(s)$ $\displaystyle=0$ if $s\in S-\{1_{G}\}$. So $p\cdot_{K}q$ is the multiplicative identity $1x^{1_{G}}$, and therefore $q$ is a multiplicative inverse of $p$.

Next we consider elements of $K$ with only a single summand, that is, those of the form $ax^{g}$. It is clear that each such element also has a multiplicative inverse, namely $a^{-1}x^{g^{-1}}$. Now every nonzero element of $K$ can be expressed as a product $p_{1}\cdot p_{2}$, with $m_{p_{1}}=1_{G}$ and $p_{1}(m_{p_{1}})=1$ and such that $p_{2}$ has only a single summand. As seen above, each of $p_{1}$ and $p_{2}$ has a multiplicative inverse, and hence $p_{1}\cdot p_{2}$ also has one, namely $p_{2}^{-1}\cdot p_{1}^{-1}$.

For distributivity, we first would like to show that $p\cdot_{K}(q+_{K}w)=p\cdot_{K}q+_{K}p\cdot_{K}w$.
For $s\in G$,

$\displaystyle(p\cdot_{K}(q+_{K}w))(s)$ $\displaystyle=\underset{g\cdot_{G}h=s}{\boxplus_{s}}p(g)\cdot_{H}(q(h)\boxplus_{h}w(h))$ $\displaystyle=\underset{g\cdot_{G}h=s}{\boxplus_{s}}\big{(}p(g)\cdot_{H}q(h)\boxplus_{s}p(g)\cdot_{H}w(h)\big{)}$ $\displaystyle=\big{(}\underset{g\cdot_{G}h=s}{\boxplus_{s}}p(g)\cdot_{H}q(h)\big{)}\boxplus_{s}\big{(}\underset{g\cdot_{G}h=s}{\boxplus_{s}}p(g)\cdot_{H}w(h)\big{)},$ $\displaystyle=(p\cdot_{K}q+_{K}p\cdot_{K}w)(s)\,.$

So $p\cdot_{K}(q+_{K}w)=p\cdot_{K}q+_{K}p\cdot_{K}w$. A similar calculation shows that $(p+_{K}q)\cdot_{K}w=p\cdot_{K}w+_{K}q\cdot_{K}w$. So $K=k[[G]]$ is a skew field. ∎

## Appendix B The semirings associated to doubly distributive hyperfields

In [GJL17], Lemma 6.2(2) provides a way to build a semiring out of a doubly distributive hyperfield. (In [Row16, Theorem 2.5], Rowen extended the theory of constructing the semiring to every hyperfield.) In this section, we discuss these semirings. For any doubly distributive hyperfield $H$ we can define binary operations $\oplus$ and $\odot$ on $\mathcal{P}H$ by setting $A\oplus B:=\bigcup_{a\in A,b\in B}a\boxplus b$ (this is just the extension of $\boxplus$ to subsets of $H$ from Definition 2.2) and $A\odot B:=\{ab\colon a\in A,b\in B\}$. Let $\langle H\rangle$ be the substructure of $(\mathcal{P}H,\oplus,\odot)$ generated from the singletons of elements of $H$. So $\langle H\rangle$ is a semiring. We will refer to $\langle H\rangle$ as the semiring associated to $H$. Using our classification, we can easily determine all such associated semirings. Surprisingly, some of the basic examples have already been intensively studied and play an important role in the foundations of tropical geometry. In each case, we find that $\langle H\rangle$ contains only a few elements in addition to the singletons of elements of $H$. We have seen that any doubly distributive hyperfield has the form $F\rtimes_{H,\psi}G$, where $F$ is the Krasner hyperfield, the sign hyperfield or a field. We divide into cases according to the value of $F$.

### B.1. Supertropical semirings

If $F$ is the Krasner hyperfield then $\psi\colon H^{\times}\to G$ is an isomorphism, and we can take it to be the identity. Then the elements of $\langle H\rangle$ are the singletons of elements of $H$ and the sets $g^{\nu}:=\{h\in G\colon h\leq g\}\cup\{0\}$. To simplify the definition of the addition we define an operation $\nu$ on $\langle H\rangle\setminus\{\{0\}\}$ by $\nu(\{g\})=\nu(g^{\nu})=g^{\nu}$ and we transfer the total order of $G$ to the $g^{\nu}$ in the obvious way. Then addition is given by $x\oplus\{0\}=x$ for any $x$ and otherwise by

$x\oplus y=\begin{cases}x&\text{if $\nu(x)>\nu(y)$,}\\ y&\text{if $\nu(x)<\nu(y)$,}\\ \nu(x)&\text{if $\nu(x)=\nu(y)$.}\end{cases}$

Multiplication is given by $x\odot\{0\}=\{0\}$, by $\{g\}\odot\{h\}=\{g\cdot h\}$, by $\{g\}\odot h^{\nu}=(g\cdot h)^{\nu}$ and by $g^{\nu}\odot h^{\nu}=(g\cdot h)^{\nu}$. In the case that $G$ is the ordered group of real numbers, this is simply the supertropical semiring introduced by Izhakian in [Izh09]. This associated semiring has also been studied by Rowen in [Row16]. It would be reasonable to call such semirings in general supertropical semirings.
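To make these operations concrete, here is a small worked illustration (ours, not from the source), under the assumption that $G$ is the ordered group $(\mathbb{R},+)$, so that the group operation $\cdot$ is ordinary addition of reals:

$\{2\}\oplus\{3\}=\{3\},\qquad\{3\}\oplus\{3\}=3^{\nu},\qquad\{4\}\oplus 3^{\nu}=\{4\},\qquad\{2\}\odot\{3\}=\{5\},\qquad\{2\}\odot 3^{\nu}=5^{\nu}.$

The first computation holds since $\nu(\{2\})=2^{\nu}<3^{\nu}=\nu(\{3\})$; the second shows how a tie of $\nu$-values produces a ‘ghost’ element; the last two illustrate that multiplication is just the group operation, with ghosts absorbing tangible elements.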
### B.2. Symmetrised $(\max,+)$-semirings

If $F$ is the sign hyperfield then, by Theorem 4.17, without loss of generality it arises from a short exact sequence

$1\to\mathbb{S}^{\times}\to\mathbb{S}^{\times}\times G\to G\to 1\,.$

The elements of $\langle H\rangle$ then have the form $0:=\{0_{H}\}$, $\oplus g:=\{(1,g)\}$, $\ominus g:=\{(-1,g)\}$, or $g^{\circ}:=\{(i,h)\colon i\in\mathbb{S}^{\times},h\leq g\}\cup\{0_{H}\}$. There is an obvious projection map $\pi$ from $\langle H\rangle\setminus\{0\}$ to $G$. Then addition is given by $x\oplus 0=x$ for any $x$, by $x\oplus y=x$ if $\pi(x)>\pi(y)$, by $x\oplus g^{\circ}=g^{\circ}$ if $\pi(x)=g$, by $(\oplus g)\oplus(\oplus g)=\oplus g$, by $(\ominus g)\oplus(\ominus g)=\ominus g$ and by $(\oplus g)\oplus(\ominus g)=g^{\circ}$. Multiplication is given by $x\odot 0=0$ for any $x$, by $x\odot g^{\circ}=(\pi(x)\cdot g)^{\circ}$, by $(\oplus g)\odot(\oplus h)=\oplus(g\cdot h)$, by $(\ominus g)\odot(\ominus h)=\oplus(g\cdot h)$ and by $(\oplus g)\odot(\ominus h)=\ominus(g\cdot h)$. In the case that $G$ is the ordered group of real numbers, this is simply the symmetrised $(\max,+)$-semiring introduced by Akian et al. in [ACG+91]. So it would be reasonable to call such semirings in general symmetrised $(\max,+)$-semirings.
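Again as a small illustration of ours, assuming $G=(\mathbb{R},+)$:

$(\oplus 2)\oplus(\ominus 3)=\ominus 3,\qquad(\oplus 3)\oplus(\ominus 3)=3^{\circ},\qquad(\ominus 2)\odot(\ominus 3)=\oplus 5,\qquad(\oplus 2)\odot 3^{\circ}=5^{\circ}.$

The first computation holds since $\pi(\ominus 3)=3>2=\pi(\oplus 2)$; the second shows two elements of equal magnitude but opposite sign balancing out; the last two illustrate that signs multiply while magnitudes add, with balanced elements absorbing under multiplication.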
### B.3. Linearised $(\max,+)$-semirings

If $F$ is a field, then the elements of $\langle H\rangle$ are the singletons of elements of $H$ (which are in canonical bijection with $H$) and the sets $\psi^{-1}(g\downarrow)\cup\{0\}$ (which are in canonical bijection with $G$). So $\langle H\rangle$ is isomorphic to the semiring on $H\cup G$ with $x\oplus y$ for $x,y\in H$ given by the unique element of $x\boxplus y$ if this set is a singleton and by $\psi(x)$ otherwise, with $x\oplus g$ for $x\in H$ and $g\in G$ given by $x$ if $\psi(x)\geq g$ and by $g$ otherwise, and with $g\oplus h$ for $g,h\in G$ given by $\max(g,h)$. For multiplication, $x\odot y=x\cdot y$ for $x,y\in H$, $x\odot g=\psi(x)\cdot g$ for $x\in H$ and $g\in G$, and finally $g\odot h=g\cdot h$ for $g,h\in G$. By analogy to the previous construction, we could refer to such semirings as linearised $(\max,+)$-semirings. So far as we know, such semirings have not yet been seriously investigated.

## References

* [ACG+91] Marianne Akian, G. Cohen, S. Gaubert, Ramine Nikoukhah, and J.P. Quadrat. Linear systems in (max, +) algebra. In Proceedings of the 29th IEEE Conference on Decision and Control, pages 151–156, 1991.
* [AD19] Laura Anderson and James F. Davis. Hyperfield Grassmannians. Adv. Math., 341:336–366, 2019.
* [BB16] Matthew Baker and Nathan Bowler. Matroids over hyperfields. arXiv:1601.01204, 2016.
* [BB19] Matthew Baker and Nathan Bowler. Matroids over partial hyperstructures. Adv. Math., 343:821–863, 2019.
* [BP19] Nathan Bowler and Rudi Pendavingh. Perfect matroids over hyperfields. arXiv:1908.03420, 2019.
* [DG73] Justus Diller and Jochen Grenzdörffer. $G$-Hüllen metrischer Teilräume. Math. Ann., 200:151–164, 1973.
* [Dre77] A. Dress. On orderings and valuations of fields. Geometriae Dedicata, 6(3):259–266, 1977.
* [DW92] Andreas W. M. Dress and Walter Wenzel. Valuated matroids. Adv. Math., 93(2):214–250, 1992.
* [FS01] László Fuchs and Luigi Salce. Modules over non-Noetherian domains, volume 84 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2001.
* [GJL17] Jeffrey Giansiracusa, Jaiung Jun, and Oliver Lorscheid. On the relation between hyperrings and fuzzy rings. Beitr. Algebra Geom., 58(4):735–764, 2017.
* [Izh09] Zur Izhakian. Tropical arithmetic and matrix algebra. Communications in Algebra, 37(4):1445–1468, 2009.
* [Kra57] Marc Krasner. Approximation des corps valués complets de caractéristique $p\not=0$ par ceux de caractéristique $0$. In Colloque d’algèbre supérieure, tenu à Bruxelles du 19 au 22 décembre 1956, Centre Belge de Recherches Mathématiques, pages 129–206. Établissements Ceuterick, Louvain; Librairie Gauthier-Villars, Paris, 1957.
* [Kra83] Marc Krasner. A class of hyperrings and hyperfields. Internat. J. Math. Math. Sci., 6(2):307–311, 1983.
* [Mar06] M. Marshall. Real reduced multirings and multifields. J. Pure Appl. Algebra, 205(2):452–468, 2006.
* [Pen18] Rudi Pendavingh. Field extensions, derivations, and matroids over skew hyperfields. arXiv:1802.02447, 2018.
* [Row16] Louis Halle Rowen. Algebras with a negation map. arXiv:1602.00353, 2016.
* [Vir10] Oleg Viro. Hyperfields for tropical geometry I. Hyperfields and dequantization. arXiv:1006.3034v2, 2010.
* [Vir11] O. Ya. Viro. On basic concepts of tropical geometry. Tr. Mat. Inst. Steklova, 273(Sovremennye Problemy Matematiki):271–303, 2011.
2024-09-04T02:54:58.201400
2020-03-08T19:28:53
2003.03837
{ "authors": "Lucileide M. D. da Silva, Maria G. F. Coutinho, Carlos E. B. Santos,\n Mailson R. Santos, Luiz Affonso Guedes, M. Dolores Ruiz, Marcelo A. C.\n Fernandes", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26105", "submitter": "Marcelo Fernandes", "url": "https://arxiv.org/abs/2003.03837" }
arxiv-papers
# Hardware Architecture Proposal for TEDA algorithm to Data Streaming Anomaly Detection

Lucileide M. D. da Silva<EMAIL_ADDRESS>Maria G. F. Coutinho<EMAIL_ADDRESS>Carlos E. B. Santos<EMAIL_ADDRESS>Mailson R. Santos<EMAIL_ADDRESS>Luiz Affonso Guedes<EMAIL_ADDRESS>M. Dolores Ruiz<EMAIL_ADDRESS>Marcelo A. C. Fernandes (present address: John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA)<EMAIL_ADDRESS>

Laboratory of Machine Learning and Intelligent Instrumentation, Federal University of Rio Grande do Norte, Natal 59078-970, Brazil. Federal Institute of Education, Science and Technology of Rio Grande do Norte, Paraiso, Santa Cruz, RN, 59200-000, Brazil. Department of Statistics and Operations Research, University of Granada, Spain. Department of Computer Engineering and Automation, Federal University of Rio Grande do Norte, Natal, RN, 59078-970, Brazil.

###### Abstract

The amount of real-time data available today, such as time series and streaming data, continues to grow. Being able to analyze this data the moment it arrives can bring immense added value. However, it also requires a lot of computational effort and new acceleration techniques. As a possible solution to this problem, this paper proposes a hardware architecture for the Typicality and Eccentricity Data Analytic (TEDA) algorithm, implemented on Field Programmable Gate Arrays (FPGA), for use in data streaming anomaly detection. TEDA is based on a new approach to outlier detection in the data stream context. In order to validate the proposal, results on the occupation and throughput of the proposed hardware are presented. In addition, bit-accurate simulation results are also presented. The design targets a Xilinx Virtex-6 xc6vlx240t-1ff1156 FPGA.

###### keywords: FPGA, TEDA, data streaming, reconfigurable computing

## 1 Introduction

Outlier detection, or anomaly detection, consists in detecting rare events in a data set. It is a central problem in many application areas, such as time series forecasting, data mining and industrial process monitoring. Due to the increasing number of sensors in the most diverse areas and applications, there is a huge rise in the availability of time series data. Thus, outlier detection for temporal data has become a central problem [1], especially when data are captured and processed continuously, in an online way. In this case, the data are considered as data streams [2].

Some important aspects need to be considered when choosing an anomaly detection method, such as the computational effort to handle large streaming data, since the received information needs to be stored and analyzed without compromising memory and run-time. Many of the solutions presented in the literature require prior knowledge of the process and system, such as mathematical models, data distribution, and predefined parameters [3]. Anomaly detection is traditionally done through statistical analysis, using probability and making a series of initial assumptions that in most cases do not hold in practice. A disadvantage of the traditional statistical method is that it compares a single point with the average of all points rather than with samples or data pairs; in this way, the information is no longer punctual and local. Moreover, probability theory was developed from examples where processes and variables are purely random. However, real processes are not purely random and show dependency between samples.
Thus, real problems are addressed as offline processes, where the entire data set needs to be known beforehand, which is a potential weakness of the traditional method. Another problem with traditional approaches is that they often use an offline dataset. Thus, all samples must be previously available from the beginning of the algorithm execution [4], making them impossible to use in real-time and data stream applications. This type of data presents new technical challenges and opportunities in new fields of work. Detecting anomalies in real time can provide valuable information in critical scenarios, but it is a computationally demanding problem that still lacks reliable solutions capable of providing high processing capabilities.

Typicality and Eccentricity Data Analytic (TEDA) is based on a new approach to outlier detection in the data stream context [5], and it can be applied with an algorithm to detect autonomous behavior in industrial process operation, for example. TEDA analyzes the density of each sample of data read, calculated according to the distance from the sample to the other samples previously read. It is an online algorithm that learns autonomously, without the need for prior knowledge about the process or parameters. Therefore, the computational effort required is smaller, allowing the use in real-time applications [3]. TEDA can be used as an alternative statistical framework for analyzing most data, except for purely random processes. It is based on new metrics, all based on similarity/proximity of data in the data space, not on density or entropy, as in traditional methods. The metrics used with TEDA are typicality, defined in [5] as the extent to which objects are “good examples” of a concept, and eccentricity, defined as how distinct the object is from the rest of the group. A sample with high eccentricity has a low typicality and is usually an outlier [3].

Eccentricity can be very useful for anomaly detection, image processing, fault detection, particle physics, etc. It allows the analysis of individual data samples (which can also be done in real time for data streams) [6]. It is also relevant in clustering processes, since elements of a cluster are naturally opposed to the atypical [5]. Another area where anomaly detection has been increasingly used is in Industry 4.0 projects. One of the challenges of Industry 4.0 is the detection of production failures and defects [7]. New technologies aim to add value and increase process productivity, but face difficulties in performing complex and massive-scale computing due to the large amount of data generated [8]. The huge accumulation of real-time data flowing in a network, for example, can quickly overload traditional computing systems, due to the large amount of data that originates from the sensors and the requirement for intensive processing and high performance. The development of specialized hardware presents itself as a possible solution to overcome the bottlenecks, making it possible to create solutions for mass data processing that, at the same time, consider ultra-low-latency, low-power, high-throughput, security and ultra-reliable conditions, important requirements for increasing productivity and quality in Industry 4.0 processes. Considering the challenges presented, this work proposes a specialized hardware architecture of TEDA for anomaly detection. The development of the technique in hardware allows systems to be made even faster than their software counterparts, extending the possibilities of use to situations where time constraints are even more severe.
In addition, it allows use in applications involving large-scale data processing. The works [9, 10, 11, 12, 13] were developed in hardware, specifically on FPGA, for the acceleration of complex algorithms. The development of machine learning algorithms in hardware has grown significantly. This is justified by performance data with respect to system sampling times compared to software equivalents. One of the motivations for this work is the possibility of accelerating the TEDA algorithm and handling large data streams, such as streaming and real-time data. In this work, all validation and synthesis results were obtained using an FPGA Virtex 6 xc6vlx240t1ff1156. The FPGA was chosen for its high performance. Modern FPGAs can deliver performance and density comparable to Application Specific Integrated Circuits (ASICs), without the disadvantage of a long development time, while enabling reprogramming, as FPGAs have a flexible architecture.

The rest of this paper is organized as follows: this first section presented an introduction to the work, explaining the motivation behind it and the major contributions. Section 2 discusses some related works and the state of the art. Section 3 presents the theoretical foundation of the TEDA technique. Section 4 presents the implementation details of the proposed architecture. Section 5 presents the validation and synthesis results of the proposed hardware, as well as comparisons with software implementations. Finally, Section 6 presents the conclusions regarding the obtained results.

## 2 Related work

Real-time anomaly detection in data streams has potential applications in many areas, such as preventive maintenance, fault detection, fraud detection, and signal monitoring, among others. These concepts can be used in many different industries, such as information technology, finance, medicine, security, energy, e-commerce, agriculture, and social media. In the literature there are some uses of the TEDA technique for anomaly detection and even for classification. The article presented in [6] shows a proposal for a new TEDA-based anomaly detection algorithm. The proposed method, called by the author $\sigma$ gap, combines the accumulated proximity information for all samples with the comparison of specific point pairs suspected of being anomalies, using local spatial distribution information about the vicinity of the suspect point. In that work, TEDA is compared with an approach using traditional statistical methods, emphasizing that the set of initial assumptions is different. TEDA has been shown to be a generalization of traditional statistics when compared to a well-known analysis, $n\sigma$, which is a widely used principle for threshold-based anomaly detection. The same result was obtained for both approaches, although TEDA does not need the initial assumptions. In addition, for various types of proximity measures (such as Euclidean, cosine, and Mahalanobis distances), it has been shown that, due to the recursion feature, TEDA is computationally more efficient and suitable for online and real-time applications. In [14] a study is presented on the use of TEDA for fault detection in industrial processes. The work pioneers the use of this approach for real industry data.
For the experiments, TEDA was applied online to the dataset provided by the DAMADICS (Development and Application of Methods for Actuator Diagnosis in Industrial Control Systems) database, one of the most widely used benchmarks in fault detection and diagnosis applications. The experiments showed good results both in accuracy and execution time, which shows the suitability of this approach for real-time industrial applications. Finally, it was found that the TEDA algorithm is capable of dealing with the limitations and particularities of the industrial environment. The paper of [15] is intended to enable the use of TEDAClass, which consists of the TEDA algorithm for classification, in big data processing. The main feature of the proposed algorithm, called TEDAClassBDp, is the processing of data in blocks, where each block uses TEDAClass, so that all blocks operate in parallel. As with TEDAClass, the proposed algorithm does not require information from previous data, and it operates recursively, online, and in real time. The results indicated a reduction in time and computational complexity, without significantly compromising accuracy, which indicates the strong possibility of using the proposal in problems where it is necessary to process large volumes of data quickly.

The work presented in [16] proposes a new non-frequentist, density-free data analysis tool, classified by the author as a further development of TEDA and an effective alternative to the probability distribution function (pdf). The Typicality Distribution Function (TDF) can provide valuable information for extreme process analysis, fault detection and identification, where the number of observations of extreme events or faults is often disproportionately small. The proposal offers a closed, non-parametric, analytical (quadratic) description, extracted from the actual realizations of the data exactly, in contrast to the usual practice in which these distributions are assumed or approximated. In addition, for various types of proximity and similarity measures (such as Euclidean, Mahalanobis, and cosine distances), it can be calculated recursively, being thus computationally efficient and suitable for online and real-time algorithms. As a fundamental theoretical innovation, the application areas of TDF and TEDA can range from anomaly detection, clustering, classification, prediction, and control to filter regression (similar to Kalman filters). Practical applications may be even broader, so it is difficult to list them all.

The paper presented in [3] proposes the application of TEDA to fault detection in industrial processes. The effectiveness of the proposal has been demonstrated on two real industrial plants, using data streaming, and compared with traditional fault detection methods. That paper presents a practical application of the TEDA algorithm to two fault detection problems of real industrial plants. The first application uses a well-known database, DAMADICS, a database that provides actual data on the water evaporation process of an operating plant of a Polish sugar manufacturing plant. The second application was made from data analysis of a pilot plant of the authors' university laboratory, a plant equipped with real industrial instruments used for process control. The work of [4] presents a new proposal for an unsupervised fuzzy classifier, capable of aggregating the main characteristics of evolving classifiers, as well as performing fuzzy classification of real-time data streams completely online.
The proposed algorithm uses TEDA concepts, replacing traditional clusters with data clouds, granular structures without predefined shape or boundaries. For data classification, the proposed approach uses the concept of soft labeling rather than mutually exclusive classes. Experiments performed using data obtained from different operational failures of a real industrial plant showed very significant results regarding unsupervised as well as semi-supervised learning, requiring minimal human interaction. The manuscript presented in [2] brings a new algorithm for detecting anomalies based on an online sequence memory algorithm called Hierarchical Temporal Memory (HTM). The performance of the proposed algorithm was evaluated and compared with a set of real-time anomaly detection algorithms. The comparative analysis was performed as a way to evaluate anomaly detection algorithms for streaming data. All analyses were performed with the Numenta Anomaly Benchmark (NAB) [17], which is a benchmark of real streaming data. The paper published by [18] brings a study on anomaly detection in TCP/IP networks. The purpose of the paper is to detect computer network anomalies in the process of virtual machine (VM) live migration from local infrastructure to the cloud, by comparing TEDA, K-Means clustering, and static analysis. They used the tuple (Source IP, Destination IP, Source Port, and Destination Port) to create a signature process and validate errors, including those of traffic flow hidden in the legitimate network. Testing was done using the SECCRIT (SEcure Cloud Computing for CRitical Infrastructure IT - http://www.seccrit.eu) project dataset, which allows anomalies or environmental attacks to be analyzed with live migration and other background traffic conditions. The results demonstrate that the proposed method makes it possible to automatically and successfully detect anomalies from attacks, network port scans (NPS), and network scans (NS). A major difficulty is distinguishing a high-volume attack from a denial of service (DoS) attack, for example. Accuracy and false negative rates were calculated for the comparison between K-Means and the proposed solution, with TEDA achieving better rates in almost all measurements performed.

As the amount of data that needs to be processed grows exponentially, autonomous systems become increasingly important and necessary, and hardware implementations of machine learning and streaming algorithms have been studied in the literature. The work presented in [19] describes how to use run-time reconfiguration on FPGAs to improve the efficiency of streaming data transmission in a shared communication channel with real-time applications. The proposed reconfigurable architecture consists of two subsystems: the reconfiguration subsystem, which runs the modules, and the scheduling subsystem, which controls which modules are loaded into the reconfiguration subsystem. Besides, many works in the literature have studied fault and anomaly detection in hardware. In [20], an implementation of target and anomaly detection algorithms for real-time hyperspectral imaging was proposed on FPGA. The algorithms were implemented in streaming fashion, similar to this work. The results, obtained on a Kintex-7 FPGA using a fixed-point structure, were very satisfactory and demonstrated that the implementation can be used in different detection circumstances.
The work [21] presented a study of the impact of neural network architectures, compared to statistical methods, in the implementation of an electrocardiogram (ECG) anomaly detection algorithm on FPGA. The fixed-point implementation contributes to reducing the amount of needed resources. However, the design was made with High-Level Synthesis (HLS), which could not optimize the FPGA resource consumption. Regarding the TEDA algorithm, no studies in the literature aimed at exploring its hardware implementation on FPGA had been identified up to the date this paper was written, which this work proposes to accomplish in a pioneering manner.

## 3 TEDA

TEDA was introduced by [22] as a statistical framework, influenced by recursive density estimation algorithms. However, unlike algorithms that use data density as a measure of similarity, TEDA uses the concepts of typicality and eccentricity to infer whether a given sample is normal or abnormal with respect to the dataset. The methodology used in TEDA does not require prior information about the data, and can be applied to problems involving fault detection, clustering, classification, among others [22]. TEDA is a data-structure-based anomaly detection algorithm that aims to generalize and avoid the need for well-known, but very restrictive, initial conditions inherent in traditional statistics and probability theory [23]. The approach presented in TEDA has some advantages over traditional statistical anomaly detection methods. Its recursive nature allows it to handle large volumes of data, such as data streams, online and with low computational cost, enabling faster processing. TEDA's main features include [6]:

* 1. It is entirely based on the data and their distribution in the data space;
* 2. No previous assumptions are made;
* 3. Limits and parameters do not need to be pre-specified;
* 4. No sample independence is required;
* 5. An infinite number of observations is not required.

The typicality of TEDA is the similarity of a given data sample to the rest of the samples of the dataset to which it belongs. Eccentricity, on the other hand, is the opposite of typicality, indicating how much a sample is dissociated from the other samples in its set. Thus, an outlier can be defined as a sample with high eccentricity and low typicality, considering a threshold established for comparison. It is important to note that for eccentricity and typicality calculations no parameter or threshold is required. To calculate the eccentricity of each sample, TEDA uses the sum of the geometric distances between the analyzed sample $\bm{x}_{k}$ and the other samples in the set. Thus, the higher this value, the greater the eccentricity of the sample, and consequently, the lower its typicality. [6] proposed calculating the eccentricity recursively. Thus, the eccentricity $\xi$ can be expressed as

$\xi_{k}(x)=\frac{1}{k}+\frac{(\bm{\mu}_{k}^{x}-\bm{x}_{k})^{T}(\bm{\mu}^{x}_{k}-\bm{x}_{k})}{k[\sigma^{2}]^{x}_{k}},\ [\sigma^{2}]^{x}_{k}>0$ (1)

where $k$ is the discretization instant; $\bm{x}_{k}$ is an input vector of N elements at the $k$-th iteration, $\bm{x}_{k}=[x_{k}^{1}\ x_{k}^{2}\ ...\ x_{k}^{N}]$; $\bm{\mu}^{x}_{k}$ is also a vector of N elements, equal to the average of $\bm{x}_{k}$ at the $k$-th iteration, and $[\sigma^{2}]^{x}_{k}$ is the variance of $\bm{x}_{k}$ at the $k$-th iteration.
The calculation of $\bm{\mu}^{x}_{k}$ and $[\sigma^{2}]^{x}_{k}$ is also done recursively, using the following equations

$\bm{\mu}^{x}_{k}=\frac{(k-1)}{k}\bm{\mu}^{x}_{k-1}+\frac{1}{k}\bm{x}_{k},\ k\geq 1,\ \bm{\mu}^{x}_{0}=0$ (2)

and

$[\sigma^{2}]^{x}_{k}=\frac{(k-1)}{k}[\sigma^{2}]^{x}_{k-1}+\frac{1}{k}\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2},\ k\geq 1,\ [\sigma^{2}]^{x}_{0}=0.$ (3)

The typicality of a given sample $\bm{x}_{k}$, at the $k$-th iteration, can be expressed as the complement of the eccentricity [6], as follows

$\tau_{k}(x)=1-\xi_{k}(x).$ (4)

In addition, [6] also defined that the normalized eccentricity can be calculated as

$\zeta_{k}(x)=\frac{\xi_{k}(x)}{2},\ \sum^{k}_{i=1}\xi_{k}(x_{i})=2,\ k\geq 2.$ (5)

In order to separate normal state data from abnormal state data, it is necessary to define a comparison threshold. For anomaly detection, the use of the $m\sigma$ [24] threshold is widespread. However, this principle must first assume the distribution of the analyzed data, such as the Gaussian distribution [6]. Chebyshev's inequality can be used for any data distribution, stating that the probability that a data sample lies more than $m\sigma$ from the average is less than or equal to $1/m^{2}$, where $\sigma$ is the standard deviation of the data [25]. The condition that produces the same results as Chebyshev's inequality, discarding any assumptions about the data and their independence, can be expressed as [6]

$\zeta_{k}>\frac{m^{2}+1}{2k},\ m>0$ (6)

where $m$ corresponds to the comparison threshold. For a better understanding of the technique implemented in hardware in this work, Algorithm 1 details the operation of TEDA, based on the equations presented above.

Input: $\mathbf{x}_{k}$: $k$-th sample; $m$: threshold
Output: outlier: sample classification as abnormal or normal

begin
  while receive $\mathbf{x}_{k}$ do
    if $k=1$ then
      $\bm{\mu}^{x}_{k}\leftarrow\mathbf{x}_{k}$;
      $[\sigma^{2}]^{x}_{k}\leftarrow 0$;
    else
      update $\bm{\mu}^{x}_{k}$ using equation 2;
      update $[\sigma^{2}]_{k}^{x}$ using equation 3;
      update $\xi_{k}(x)$ using equation 1;
      update $\zeta_{k}(x)$ using equation 5;
      if $\zeta_{k}(x)>\frac{m^{2}+1}{2k}$ then
        $outlier\leftarrow true$;
      else
        $outlier\leftarrow false$;
    $k\leftarrow k+1$;

Algorithm 1 TEDA

As presented in Algorithm 1, only the input data samples, $\bm{x}_{k}$, and a comparison threshold, $m$, are used as inputs to the algorithm. The output for each input $\bm{x}_{k}$ is the classification of the sample as abnormal (outlier = true) or normal (outlier = false).
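Before turning to the hardware design, the following minimal Python sketch (our own reference model, not part of the proposed architecture; all names are illustrative) mirrors Algorithm 1 and the recursive updates of Equations 1-6, and may help fix the intended behavior:

```python
import numpy as np

def teda_stream(samples, m=3.0):
    """Flag each sample of a stream as outlier (True) or normal (False),
    following Algorithm 1 with the recursive updates of Equations 1-6."""
    mu, var = None, 0.0          # running mean (Eq. 2) and variance (Eq. 3)
    flags = []
    for k, x in enumerate(samples, start=1):
        x = np.asarray(x, dtype=float)
        if k == 1:
            mu, var = x.copy(), 0.0
            flags.append(False)  # first sample: no basis for comparison yet
            continue
        mu = ((k - 1) / k) * mu + x / k                         # Equation 2
        var = ((k - 1) / k) * var + np.sum((x - mu) ** 2) / k   # Equation 3
        # Equation 1 (guarded for var == 0, since Eq. 1 assumes variance > 0)
        ecc = 1 / k + (np.sum((mu - x) ** 2) / (k * var) if var > 0 else 0.0)
        zeta = ecc / 2                                          # Equation 5
        flags.append(zeta > (m ** 2 + 1) / (2 * k))             # Equation 6
    return flags

# Toy usage: a nearly constant stream with one abnormal sample.
if __name__ == "__main__":
    stream = [[1.0, 1.0]] * 50 + [[9.0, 9.0]] + [[1.0, 1.0]] * 5
    flags = teda_stream(stream)
    print("outliers at indices:", [i for i, f in enumerate(flags) if f])
```

In this sketch, the toy stream flags only index 50 (the injected abnormal sample), which matches the behavior expected from the threshold of Equation 6.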
## 4 Implementation description

In this work, a TEDA FPGA architecture was implemented at Register Transfer Level (RTL), as in the works presented in [9, 10, 11, 12, 13]. In the following sections, the characteristics of the proposal are presented, as well as details regarding processing time. A design overview can be seen in Figure 1.

### 4.1 Architecture proposal overview

As illustrated in Figure 1, the proposed implementation of TEDA has four different block structures: the MEAN module, which implements the average described in Equation 2; the VARIANCE module, responsible for calculating the variance as presented in Equation 3; the ECCENTRICITY module, which calculates the eccentricity, as presented in Equation 1; and the OUTLIER module, a block used to normalize the eccentricity as in Equation 5 and compare it with the threshold, as shown in Equation 6. The architecture was developed to pipeline the operations presented in Algorithm 1, in order to decrease the TEDA processing time. Thus, the outputs of the ECCENTRICITY and OUTLIER modules are delayed one clock cycle in relation to the VARIANCE module and two in relation to the MEAN module, while the VARIANCE module is delayed one clock cycle in relation to the MEAN module. Each of the modules is detailed in the following sections.

The implementation takes Algorithm 1 as reference. The system receives the FPGA clock and the $k$-th sample vector $\mathbf{x}_{k}$ as inputs. The $k$-th iteration number is updated from the increment of a counter, and the $m$ threshold is used as a constant, stored in the OUTLIER module. Following Algorithm 1, the MEAN module computes the average of each single element of the $\mathbf{x}_{k}$ vector. It is possible to observe that there are $N$ MEAN blocks, where $N$ is the vector size. This block is detailed in section 4.2. In the next step, the calculation of the variance is done in the VARIANCE module, which is detailed in section 4.3. The ECCENTRICITY block has as inputs the signals that leave the VARIANCE block and $k$, and is detailed in subsection 4.4. The OUTLIER block is detailed in subsection 4.5. It receives the eccentricity, $\xi_{k}(x)$, and calculates the normalized eccentricity to compare with the threshold, as presented in Equations 5 and 6.

Figure 1: General architecture overview.

### 4.2 Module I - MEAN

Each $n$-th MEAN module computes the running average of the $n$-th element of the vector $\bm{x}_{k}$ acquired at run time. The implementation is based on Equation 2 and is detailed in Figure 2. In addition to receiving the $n$-th element of vector $\bm{x}_{k}$ as an input, the MEAN block uses a counter to define the sample iteration number, $k$. The implementation uses a comparator block, identified in Figure 2 as MCOMPn, which is used to verify whether the system is in the first iteration, as in the $k=1$ branch of Algorithm 1. MMUXn is a multiplexer that acts as a conditional evaluation, using the output of the MCOMPn comparator as its select signal. The register MREGn stores the $n$-th element of $\bm{\mu}^{x}_{k}$ ($\mu^{n}_{k}$). The $\mu^{n}_{k}$ value stored in MREGn is multiplied by $\frac{k-1}{k}$ in MMULT1n and added in MSUMn to the output of MMULT2n, which has as inputs $x^{n}_{k}$ and the inverse value of $k$. Each $n$-th element of vector $\bm{x}_{k}$, $x^{n}_{k}$, requires a MEAN block.

Figure 2: MEAN module.

### 4.3 Module II - VARIANCE

The VARIANCE module is illustrated in Figure 3. It computes the variance of the $\bm{x}_{k}$ vector samples by receiving the $\bm{x}_{k}$ vector itself and its average, $\bm{\mu}^{x}_{k}$, calculated in the previous MEAN blocks. The VARIANCE module, like the MEAN module, uses a comparator, identified in Figure 3 as VCOMP1, also to verify whether the system is in the first iteration (the $k=1$ branch of Algorithm 1). VMUX1 is a multiplexer that also implements a conditional evaluation, to deliver the value $0$ at the output of register VREG1 in the first iteration. The register VREG1 stores the variance value, $[\sigma^{2}]^{x}_{k}$, from the second iteration on. The other registers in the block, the VREG2 register and the N VREG$n$ registers, are used to delay by one clock cycle the iteration number $k$ and the elements of $\bm{x}_{k}$, respectively.

Figure 3: VARIANCE module.

As demonstrated in Equation 3, the variance calculation is done recursively.
It is necessary to calculate $\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$, and to do that, N subtractors (VSUB$n$) and N multipliers (VMULT1_$n$) are used, as well as an adder (VSUM1) with N inputs. Each element of vector $\bm{\mu}^{x}_{k}$ is subtracted from its respective element in vector $\bm{x}_{k}$, and the result of this operation is multiplied by itself (squared) and then added to the other results. The $\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$ value is then multiplied (in VMULT2) by $1/k$. It is then added, in the VSUM2 adder, to the variance calculated in the previous iteration, $[\sigma^{2}]^{x}_{k-1}$, multiplied (in VMULT3) by $(k-1)/k$. From the second iteration on, this value passes through the VMUX1 multiplexer to the VREG1 register, delivering the variance value at the VARIANCE block output. The values of $\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$ and $1/k$ are also delivered at the output of the VARIANCE block to avoid redundant operations, as they will be used in the next block, the ECCENTRICITY block.

### 4.4 Module III - ECCENTRICITY

The ECCENTRICITY module is a simpler block than those previously presented. This is because it uses operations already performed in the VARIANCE block to calculate the eccentricity. The geometric distance $\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$ (equivalent to $(\bm{\mu}_{k}^{x}-\bm{x}_{k})^{T}(\bm{\mu}^{x}_{k}-\bm{x}_{k})$) is stored in register EREG3, and $1/k$ is stored in register EREG4. As the ECCENTRICITY module is the hardware design of Equation 1 (the eccentricity update step of Algorithm 1), the variance value $[\sigma^{2}]^{x}_{k}$ is multiplied by $k$ (EMULT1) and used to divide (EDIV1) the geometric distance $(\bm{\mu}_{k}^{x}-\bm{x}_{k})^{T}(\bm{\mu}^{x}_{k}-\bm{x}_{k})$. The output of this operation is added to $1/k$ in the ESUM1 adder, producing the eccentricity of the samples ($\xi_{k}(x)$) at the ECCENTRICITY block output.

Figure 4: ECCENTRICITY module.

### 4.5 Module IV - OUTLIER

Finally, in the OUTLIER block, the samples are classified as abnormal (outlier = true) or normal (outlier = false). The module design can be seen in Figure 5. To classify the samples, the OUTLIER block normalizes the eccentricity by dividing it (ODIV1) by a constant, as shown in Equation 5, and compares (OCOMP1) this normalized eccentricity with a threshold, as shown in Equation 6 and in Algorithm 1. The registers OREG1 and OREG2 are used to synchronize the iteration number $k$, since, as the modules act in pipeline, the operations carried out in the OUTLIER block (as well as in the ECCENTRICITY module) are delayed by two clock cycles in relation to the system input.

Figure 5: OUTLIER module.

### 4.6 Processing time

The proposed architecture has an initial delay, $d$, that can be expressed as

$d=3\times t_{c}$ (7)

where $t_{c}$ is the system critical path time. The execution time of the circuit implemented for the TEDA algorithm is determined by the system critical path time, $t_{c}$. So, after the initial delay, the execution time of the proposed TEDA, $t_{TEDA}$, can be expressed as

$t_{TEDA}=t_{c}$ (8)

thus, every $t_{TEDA}$ it is possible to obtain the output for an inserted sample, that is, the classification of the sample as abnormal or normal. The throughput of the implementation, $th_{TEDA}$, in samples per second (SPS), can be expressed as

$th_{TEDA}=\frac{1}{t_{TEDA}}.$ (9)
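As a quick sanity check of Equations 7-9, the snippet below (ours) plugs in the critical path time later obtained from synthesis (Section 5.2.2):

```python
t_c = 138e-9          # critical path time from synthesis, in seconds
d = 3 * t_c           # initial delay, Equation 7: 414 ns
th = 1 / t_c          # throughput, Equation 9: about 7.2 MSPS
print(f"delay = {d * 1e9:.0f} ns, throughput = {th / 1e6:.1f} MSPS")
```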
## 5 Results

This section presents the hardware validation and synthesis results for the architecture proposed in this work. All cases were validated and synthesized using floating-point representation. Validation results were used to verify the hardware functionality, while synthesis results allow the system to be analyzed with respect to parameters that are important for the design of hardware architectures, such as hardware occupation and processing time, considering factors such as throughput and speedup.

### 5.1 Validation results

To validate the hardware architecture of the TEDA algorithm, we used the DAMADICS (Development and Application of Methods for Actuator Diagnosis in Industrial Control Systems) benchmark dataset [26]. The benchmark provides a real data set of the water evaporation process in a Polish sugar factory. It is a plant with three actuators: a control valve, which controls the flow of water in the pipes; a pneumatic motor, which controls variable valve openings; and a positioner. This dataset has faults at different times of the day on specific days. There are four different fault types, as shown in Table 1.

Table 1: Fault types [26].
Fault | Description
---|---
f16 | Positioner supply pressure drop
f17 | Unexpected pressure change across the valve
f18 | Fully or partly opened bypass valves
f19 | Flow rate sensor fault

Artificial failures were introduced into the plant operation data on specific days. The dataset has a set of $19$ faults in these $3$ actuators. As a way to validate the architecture, actuator $1$ failures were simulated. Table 2 shows a detailed description of some of the faults introduced for actuator $1$.

Table 2: List of artificial failures introduced to actuator 1 [26].
Item | Fault | Sample | Date | Description
---|---|---|---|---
1 | f18 | 58800-59800 | Oct 30, 2001 | Partly opened bypass valve
2 | f16 | 57275-57550 | Nov 9, 2001 | Positioner supply pressure drop
3 | f18 | 58830-58930 | Nov 9, 2001 | Partly opened bypass valve
4 | f18 | 58520-58625 | Nov 9, 2001 | Partly opened bypass valve
5 | f18 | 54600-54700 | Nov 17, 2001 | Partly opened bypass valve
6 | f16 | 56670-56770 | Nov 17, 2001 | Positioner supply pressure drop
7 | f17 | 37780-38400 | Nov 20, 2001 | Unexpected pressure drop across the valve

Figure 6 shows the results obtained for the item 1 signal of Table 2. Figure 6(a) illustrates the behavior of the two input variables simulated in the hardware architecture ($\bm{x}_{k}=[x_{k}^{1}\ x_{k}^{2}]$). It is possible to observe that a failure happens between the instants $k$=58900 and $k$=59800. In Figure 6(b) it is possible to observe that there is a sudden change in the behavior of the eccentricity (black curve), surpassing the value of the comparison threshold with $m=3$ (red curve).

(a) Fault item 1 - input vector $\bm{x}_{k}$. (b) Fault item 1 - normalized eccentricity $\zeta_{k}(x)$ with $5/k$ $(m=3)$ threshold.
Figure 6: Detection of outliers in the dataset: behavior of fault item 1.

In Figure 7 it is possible to observe the results obtained for the item 7 signal of Table 2. As in Figure 6, Figure 7(a) illustrates the behavior of the two elements of the input $\bm{x}_{k}=[x^{1}_{k}\ x^{2}_{k}]$ in the hardware architecture, and in Figure 7(b) it is possible to observe a change of the eccentricity (black curve), surpassing the value of the comparison threshold (red curve), also for $m=3$. The failure happens between the instants $k=37700$ and $k=38400$.

(a) Fault item 7 - input vector $\bm{x}_{k}$. (b) Fault item 7 - normalized eccentricity $\zeta_{k}(x)$ with $5/k$ $(m=3)$ threshold.
Figure 7: Detection of outliers in the dataset: behavior of fault item 7.
Validation results of the hardware architecture were compared with the results obtained with a Python software implementation of the TEDA algorithm. The hardware architecture was designed with floating-point number format.

### 5.2 Synthesis results

After validating the implemented circuit, the hardware synthesis was performed to obtain the FPGA resource occupation report, as well as the critical time information used to calculate the processing time of the proposed implementation. The floating-point synthesis results were obtained for a Xilinx Virtex 6 xc6vlx240t-1ff1156 FPGA.

#### 5.2.1 Hardware occupation

Table 3 presents data related to the hardware occupation of the implemented circuit on the target FPGA. The first column shows the number of multipliers used, the second column displays the number of registers, and the third column shows the number of logical cells used as LUT ($n_{LUT}$) throughout the circuit.

Table 3: Hardware occupation.
Multipliers | Registers | $n_{LUT}$
---|---|---
$27$ ($3\%$) | $414$ ($<1\%$) | $11{,}567$ ($7\%$)

Analyzing the data presented in Table 3, it can be seen that, even using a floating-point resolution, which demands a greater amount of hardware resources than a fixed-point implementation, only a small portion of the resources of the target FPGA was occupied, with a total of only about $3\%$ of the multipliers, less than $1\%$ of the registers, and about $7\%$ of the logical cells used as LUT. With this, we found that the proposed circuit could also be applied to low-cost FPGAs, where the amount of available hardware resources is even smaller. In addition, multiple TEDA modules could be applied in parallel for anomaly detection in the same dataset, in order to further reduce the processing time.

#### 5.2.2 Processing time

Table 4 presents information about the processing time of the architecture implemented for the TEDA technique (covering the per-sample loop of Algorithm 1). The first column indicates the circuit critical time, $t_{c}$, the second column shows the initial delay, expressed by Equation 7, the third column the TEDA run-time, expressed by Equation 8, and the last column the implementation throughput in samples per second (SPS), expressed by Equation 9, which consists of the amount of samples processed and classified (as normal or outlier) by TEDA every second.

Table 4: Processing time.
Critical time | Delay | TEDA time | Throughput
---|---|---|---
$138\,\text{ns}$ | $414\,\text{ns}$ | $138\,\text{ns}$ | $7.2$ MSPS

The data presented in Table 4 are quite expressive. The circuit critical time, which also corresponds to the TEDA run-time, was only $t_{c}=138\,\text{ns}$. Thus, after the $414\,\text{ns}$ delay, it is possible to get the output of a processed and classified sample every $138\,\text{ns}$, which guarantees a throughput of $7.2$ million classified samples per second. These results indicate the feasibility of using the proposal presented in this work to manipulate large data flows in real time.

### 5.3 Platforms comparison

To date, no previous literature has been found exploring TEDA hardware implementations. Thus, this paper presents, for the first time, a proposal to implement the TEDA technique on FPGA. To verify the advantages of the hardware application proposed here over implementations on other software platforms, some comparisons of the FPGA processing time with the processing time of other software implementations were made. Table 5 presents the results of the comparisons made.
The first column indicates the platform used, the second presents the processing time required to obtain the classification of each sample, and the third column, the speedup achieved by the proposal presented in this paper.

Table 5: Software implementations comparison.
Platform | Time | Speedup
---|---|---
This work proposal on FPGA | $138\,\text{ns}$ | $-$
Python (Colab without GPU) | $435\,\text{ms}$ | $3\text{,}000\text{,}000\times$
Python (Colab with Tesla K80 GPU) | $39.2\,\text{ms}$ | $280\text{,}000\times$
Python (Local execution with 940 MX GPU) | $23.1\,\text{ms}$ | $167\text{,}000\times$

The data presented in Table 5 reaffirm the importance of this work. The FPGA hardware implementation proposed here achieves speedups of up to $3$ million times compared to a Python TEDA implementation using the Colab tool (without GPU processing). For the same Python implementation using the Colab tool with Tesla K80 GPU processing, a speedup of $280$ thousand times was obtained. In addition, when compared to a Python implementation on an Intel(R) Core(TM) i7-7500U CPU with 16 GB of RAM and a GeForce 940 MX GPU, the hardware implementation on FPGA still had an advantage of $167$ thousand times. These results confirm the advantages of using the proposal presented in this work to accelerate the TEDA technique through an FPGA implementation.

## 6 Conclusion

This work presented a proposal for a hardware implementation of the TEDA data streaming anomaly detection technique. The hardware was implemented in RTL using floating-point format. Synthesis results were obtained for a Xilinx Virtex 6 xc6vlx240t-1ff1156 FPGA. The proposed implementation used a small portion of the target FPGA resources, besides allowing the results to be obtained in a short processing time. The high speedups obtained in comparison with other software platforms reaffirm the importance of this work, which pioneers the hardware implementation of the TEDA technique on FPGA. The proposed architecture is feasible for use in practical fault detection applications in real industrial processes with severe time constraints, as well as for handling large data volumes, such as data streaming, with low processing time.

## References

* [1] M. Gupta, J. Gao, C. C. Aggarwal, J. Han, Outlier detection for temporal data: A survey, IEEE Transactions on Knowledge and Data Engineering 26 (9) (2019) 2250–2267.
* [2] S. Ahmad, A. Lavin, S. Purdy, Z. Agha, Unsupervised real-time anomaly detection for streaming data, Neurocomputing 262 (2017) 134 – 147, online Real-Time Learning Strategies for Data Streams. doi:https://doi.org/10.1016/j.neucom.2017.04.070. URL http://www.sciencedirect.com/science/article/pii/S0925231217309864
* [3] C. G. Bezerra, B. S. J. Costa, L. A. Guedes, P. P. Angelov, An evolving approach to unsupervised and real-time fault detection in industrial processes, Expert Systems with Applications 63 (2016) 134 – 144. doi:https://doi.org/10.1016/j.eswa.2016.06.035. URL http://www.sciencedirect.com/science/article/pii/S0957417416303153
* [4] B. S. J. Costa, C. G. Bezerra, L. A. Guedes, P. P. Angelov, Unsupervised classification of data streams based on typicality and eccentricity data analytics, in: 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016, pp. 58–63. doi:10.1109/FUZZ-IEEE.2016.7737668.
* [5] D. Osherson, E. E. Smith, On typicality and vagueness, Cognition 64 (2) (1997) 189 – 206. doi:https://doi.org/10.1016/S0010-0277(97)00025-5.
URL http://www.sciencedirect.com/science/article/pii/S0010027797000255 * [6] P. Angelov, Anomaly detection based on eccentricity analysis, in: 2014 IEEE Symposium on Evolving and Autonomous Learning Systems (EALS), 2014, pp. 1–8. doi:10.1109/EALS.2014.7009497. * [7] P. Napoletano, F. Piccoli, R. Schettini, Anomaly detection in nanofibrous materials by cnn-based self-similarity, Sensors 18 (1). doi:10.3390/s18010209. URL https://www.mdpi.com/1424-8220/18/1/209 * [8] I. A. T. Hashem, I. Yaqoob, N. B. Anuar, S. Mokhtar, A. Gani, S. U. Khan, The rise of “big data” on cloud computing: Review and open research issues, Information Systems 47 (2015) 98 – 115. doi:https://doi.org/10.1016/j.is.2014.07.006. URL http://www.sciencedirect.com/science/article/pii/S0306437914001288 * [9] M. G. F. Coutinho, M. F. Torquato, M. A. C. Fernandes, Deep neural network hardware implementation based on stacked sparse autoencoder, IEEE Access 7 (2019) 40674–40694. doi:10.1109/ACCESS.2019.2907261. * [10] L. M. D. Da Silva, M. F. Torquato, M. A. C. Fernandes, Parallel implementation of reinforcement learning q-learning technique for fpga, IEEE Access 7 (2019) 2782–2798. doi:10.1109/ACCESS.2018.2885950. * [11] M. F. Torquato, M. A. Fernandes, High-performance parallel implementation of genetic algorithm on fpga, Circuits, Systems, and Signal Processing 38 (9) (2019) 4014–4039. * [12] F. F. Lopes, J. C. Ferreira, M. A. Fernandes, Parallel implementation on fpga of support vector machines using stochastic gradient descent, Electronics 8 (6) (2019) 631. * [13] A. L. X. Da Costa, C. A. D. Silva, M. F. Torquato, M. A. C. Fernandes, Parallel implementation of particle swarm optimization on fpga, IEEE Transactions on Circuits and Systems II: Express Briefs 66 (11) (2019) 1875–1879. doi:10.1109/TCSII.2019.2895343. * [14] B. S. J. Costa, C. G. Bezerra, L. A. Guedes, P. P. Angelov, Online fault detection based on typicality and eccentricity data analytics, in: 2015 International Joint Conference on Neural Networks (IJCNN), 2015, pp. 1–6. doi:10.1109/IJCNN.2015.7280712. * [15] D. Kangin, P. Angelov, J. A. Iglesias, A. Sanchis, Evolving classifier tedaclass for big data, Procedia Computer Science 53 (2015) 9 – 18, iNNS Conference on Big Data 2015 Program San Francisco, CA, USA 8-10 August 2015. doi:https://doi.org/10.1016/j.procs.2015.07.274. URL http://www.sciencedirect.com/science/article/pii/S1877050915017779 * [16] P. Angelov, Typicality distribution function — a new density-based data analytics tool, in: 2015 International Joint Conference on Neural Networks (IJCNN), 2015, pp. 1–8. doi:10.1109/IJCNN.2015.7280438. * [17] A. Lavin, S. Ahmad, Evaluating real-time anomaly detection algorithms – the numenta anomaly benchmark, in: 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 2015, pp. 38–44. doi:10.1109/ICMLA.2015.141. * [18] R. S. Martins, P. Angelov, B. Sielly Jales Costa, Automatic detection of computer network traffic anomalies based on eccentricity analysis, in: 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2018, pp. 1–8. doi:10.1109/FUZZ-IEEE.2018.8491507. * [19] T. Ziermann, J. Teich, Adaptive traffic scheduling techniques for mixed real-time and streaming applications on reconfigurable hardware, in: 2010 IEEE International Symposium on Parallel Distributed Processing, Workshops and Phd Forum (IPDPSW), 2010, pp. 1–4. doi:10.1109/IPDPSW.2010.5470738. * [20] B. Yang, M. Yang, A. Plaza, L. Gao, B. 
Zhang, Dual-mode fpga implementation of target and anomaly detection algorithms for real-time hyperspectral imaging, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8 (6) (2015) 2950–2961. doi:10.1109/JSTARS.2015.2388797. * [21] M. Wess, P. D. S. Manoj, A. Jantsch, Neural network based ecg anomaly detection on fpga and trade-off analysis, in: 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 2017, pp. 1–4. doi:10.1109/ISCAS.2017.8050805. * [22] P. Angelov, Outside the box: an alternative data analytics framework, Journal of Automation Mobile Robotics and Intelligent Systems 8 (2) (2014) 29–35. * [23] A. Plamen, Outside the box: an alternative data analytics framework, Journal of Automation, Mobile Robotics and Intelligent Systems 8 (2) (2014) 29–35. * [24] A. Bernieri, G. Betta, C. Liguori, On-line fault detection and diagnosis obtained by implementing neural algorithms on a digital signal processor, IEEE Transactions on Instrumentation and Measurement 45 (5) (1996) 894–899. doi:10.1109/19.536707. * [25] J. G. Saw, M. C. Yang, T. C. Mo, Chebyshev inequality with estimated mean and variance, The American Statistician 38 (2) (1984) 130–132. * [26] E. F. R. T. Network, Damadics rtn information web site (2002). URL http://diag.mchtr.pw.edu.pl/damadics/
2024-09-04T02:54:58.219827
2020-03-08T23:23:31
2003.03867
{ "authors": "Wojciech Jamroga, Wojciech Penczek, and Teofil Sidoruk", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26106", "submitter": "Wojciech Jamroga", "url": "https://arxiv.org/abs/2003.03867" }
arxiv-papers
# Strategic Abilities of Asynchronous Agents: Semantic Side Effects and How to Tame Them

Wojciech Jamroga1,2 Wojciech Penczek1 Teofil Sidoruk1,3

1Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
2Interdisciplinary Centre on Security, Reliability and Trust, SnT, University of Luxembourg
3Faculty of Mathematics and Information Science, Warsaw University of Technology
{jamroga, penczek<EMAIL_ADDRESS>

###### Abstract

Recently, we have proposed a framework for verification of agents’ abilities in asynchronous multi-agent systems (MAS), together with an algorithm for automated reduction of models (?). The semantics was built on the modeling tradition of distributed systems. As we show here, this can sometimes lead to counterintuitive interpretation of formulas when reasoning about the outcome of strategies. First, the semantics disregards finite paths, and yields unnatural evaluation of strategies with deadlocks. Secondly, the semantic representations do not allow us to capture the asymmetry between proactive agents and the recipients of their choices. We propose how to avoid the problems by a suitable extension of the representations and change of the execution semantics for asynchronous MAS. We also prove that the model reduction scheme still works in the modified framework.

## 1 Introduction

Modal logics of strategic ability. _Alternating-time temporal logic_ $\mathbf{ATL_{\mathrm{}}^{*}}$ (?; ?; ?) is probably the most popular logic to describe interaction of agents in multi-agent systems. Formulas of $\mathbf{ATL_{\mathrm{}}^{*}}$ allow one to express statements about what agents (or groups of agents) can achieve. For example, $\langle\!\langle{taxi}\rangle\!\rangle_{{}_{\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{fatality}}$ says that the autonomous cab can drive in such a way that nobody is ever killed, and $\langle\!\langle{taxi,passg}\rangle\!\rangle_{{}_{\!\mathit{}}}\mathrm{F}\,\mathsf{{destination}}$ expresses that the cab and the passenger have a joint strategy to arrive at the destination, no matter what any other agents do. Such statements make it possible to express important functionality and safety requirements in a simple and intuitive way. Moreover, they provide input to algorithms and tools for verification of strategic abilities that have been in constant development for over 20 years (?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?). Still, there are two caveats. First, all the realistic scenarios of agent interaction that one may want to specify and verify involve imperfect information. That is, the agents in the system do not always know exactly the global state of the system, and thus they have to make their decisions based on their local view of the situation. Unfortunately, verification of agents with imperfect information is hard to very hard – more precisely, $\mathbf{{\Delta_{{2}}^{\mathbf{{P}}}}}$-complete to undecidable, depending on the syntactic and semantic variant of the logic (?; ?; ?). Also, the imperfect information semantics of $\mathbf{ATL_{\mathrm{}}^{*}}$ does not admit alternation-free fixpoint characterizations (?; ?; ?), which makes incremental synthesis of strategies impossible, or at least difficult to achieve (?; ?; ?; ?; ?). Secondly, the semantics of strategic logics is traditionally based on synchronous concurrent game models.
In other words, one implicitly assumes the existence of a global clock that triggers subsequent global events in the system; at each tick of the clock, all the agents choose their actions, and the system proceeds accordingly with a global transition. However, many real-life systems are inherently asynchronous, and do not operate on a global clock that perfectly synchronizes the atomic steps of all the components. Moreover, many systems that are synchronous at the implementation level can be more conveniently modeled as asynchronous on a more abstract level. In many scenarios, both aspects combine. For example, when modeling an anti-poaching operation (?), one may take into account the truly asynchronous nature of events happening in different national parks, but also the best level of granularity for modeling the events happening within a single nature reserve. Asynchronous semantics and partial-order reduction. We have recently proposed how to adapt the semantics of $\mathbf{ATL_{\mathrm{}}^{*}}$ to asynchronous MAS (?). We also showed that the technique of _partial order reduction (POR)_ (?; ?; ?; ?; ?; ?; ?) can be adapted to verification of strategic abilities in asynchronous MAS. In fact, the (almost 30-year-old) POR for linear time logic $\mathbf{LTL}$ can be taken off the shelf and applied to a significant part of $\mathbf{ATL^{*}_{\mathrm{ir}}}$, the variant of $\mathbf{ATL_{\mathrm{}}^{*}}$ based on strategies with imperfect information and imperfect recall. This is very important, as the practical verification of asynchronous systems is often impossible due to the state- and transition-space explosion resulting from interleaving of local transitions. POR allows for a significant, sometimes even exponential, reduction of the models. Semantic side effects. While the result is appealing, there is a sting in its tail: the $\mathbf{ATL_{\mathrm{}}^{*}}$ semantics in (?) leads to a counterintuitive interpretation of strategic properties. First, it disregards finite paths, and evaluates some intuitively losing strategies as winning (and vice versa). Secondly, it provides a flawed interpretation of the _concurrency fairness_ assumption. Thirdly, the representations and their execution semantics do not allow to capture the asymmetry between the agents that control which synchronization branch will be taken, and those influenced by their choices. We tentatively indicated some of the problems in the extended abstract (?). In this paper, we demonstrate them carefully, and propose how they can be avoided. Contribution. Our contribution is threefold. First, we discuss in detail the semantic side effects of adding strategic reasoning on top of classical models of concurrent systems (?). We identify the reasons, and demonstrate the problematic phenomena on simple examples. Secondly, we show how to avoid these pitfalls by extending the class of representations and slightly changing the execution semantics of strategies. Specifically, we add “silent” $\epsilon$-transitions in the models and on outcome paths of strategies, and allow for nondeterministic choices in the agents’ repertoires. We also identify a family of fairness-style conditions, suitable for the interaction of proactive and reactive agents. No less importantly, we prove that partial order reduction is still correct in the modified framework. Motivation. The variant of $\mathbf{ATL_{\mathrm{}}^{*}}$ for asynchronous systems in (?) was proposed mainly as a framework for formal verification.
This was backed by the results showing that it submits to partial order reduction. However, a verification framework is only useful if it allows to specify requirements in an intuitive way, so that the property we _think_ we are verifying is indeed _the one being verified_. In this paper, we show that this was not the case. We also propose how to overcome the problems without spoiling the efficient reduction scheme. The solutions are not merely technical. In fact, they lead to a better understanding of how strategic activity influences the overall behavior of the system, and how it should be integrated with the traditional models of asynchronous interaction.

## 2 Models of Multi-agent Systems

We first recall the models of asynchronous interaction in MAS, proposed in (?) and inspired by (?; ?; ?).

### 2.1 Asynchronous Multi-agent Systems

In logical approaches to MAS, one usually assumes synchronous actions of all the agents (?; ?). However, many agent systems are inherently asynchronous, or it is useful to model them without assuming precise timing relationships between the actions of different agents. As an example, consider a team of logistic robots running in a factory (?). Often no global clock is available to all the robots, and even if there is one, the precise relative timing for robots operating in different places is usually irrelevant. Such a system can be conveniently represented with a set of automata that execute asynchronously by interleaving local transitions, and synchronize their moves whenever a shared event occurs. The idea is to represent the behavior of each agent by a finite automaton where the nodes and transitions correspond, respectively, to the agent’s local states and the events in which it can take part. Then, the global behavior of the system is obtained by the interleaving of local transitions, assuming that, in order for a shared event to occur, all the corresponding agents must execute it in their automata. This motivates the following definition.

###### Definition 2.1 (Asynchronous MAS). An _asynchronous multi-agent system (AMAS)_ S consists of $n$ agents ${\mathbb{A}\mathrm{gt}}=\\{{1,\dots,n}\\}$ (we do not consider the environment component, which may be added with no technical difficulty), each associated with a tuple $A_{i}=(L_{i},\iota_{i},\mathit{Evt}_{i},R_{i},T_{i}{,\mathcal{PV}_{i},V_{i}})$ including a set of _possible local states_ $L_{i}=\\{l_{i}^{1},l_{i}^{2},\dots,l_{i}^{n_{i}}\\}$, an _initial state_ $\iota_{i}\in L_{i}$, and a set of _events_ $\mathit{Evt}_{i}=\\{\alpha_{i}^{1},\alpha_{i}^{2},\ldots,\alpha_{i}^{m_{i}}\\}$. An agent’s _repertoire of choices_ $R_{i}:L_{i}\to 2^{\mathit{Evt}_{i}}\setminus\\{\emptyset\\}$ selects the events available at each local state (in interpreted systems, this function is usually referred to as a _protocol_; we opt for a different name to avoid possible confusion, e.g., with security protocols). $T_{i}:L_{i}\times\mathit{Evt}_{i}\rightharpoonup L_{i}$ is a (partial) _local transition function_ such that $T_{i}(l_{i},\alpha)$ is defined iff $\alpha\in R_{i}(l_{i})$. That is, $T_{i}(l,\alpha)$ indicates the result of executing event $\alpha$ in local state $l$ from the perspective of agent $i$. Let $\mathit{Evt}=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}\mathit{Evt}_{i}$ be the set of all events, and $Loc=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}L_{i}$ be the set of all local states in the system.
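To fix intuitions, here is a minimal Python sketch of one AMAS agent as in Definition 2.1. The names are purely illustrative and not part of any existing tool; a full AMAS is then simply a list of such agents.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One AMAS agent, as in Definition 2.1 (illustrative encoding)."""
    states: set        # L_i
    initial: str       # iota_i
    events: set        # Evt_i
    repertoire: dict   # R_i: local state -> non-empty set of available events
    trans: dict        # T_i: (local state, event) -> local state (partial)
    valuation: dict    # V_i: local state -> set of local propositions

    def local_step(self, loc, event):
        # T_i(l, alpha) is defined iff alpha is in R_i(l)
        assert event in self.repertoire[loc], "event not available here"
        return self.trans[(loc, event)]

# A toy agent: it can 'work' privately, or 'sync' (shared with another agent).
a1 = Agent(
    states={"l0", "l1"}, initial="l0", events={"work", "sync"},
    repertoire={"l0": {"work", "sync"}, "l1": {"work"}},
    trans={("l0", "work"): "l0", ("l0", "sync"): "l1", ("l1", "work"): "l1"},
    valuation={"l0": set(), "l1": {"done"}},
)
print(a1.local_step("l0", "sync"))  # -> l1
```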
For each event $\alpha\in\mathit{Evt}$, $Agent(\alpha)=\\{{i\in{\mathbb{A}\mathrm{gt}}\mid\alpha\in\mathit{Evt}_{i}}\\}$ is the set of agents which have $\alpha$ in their repertoires; events shared by multiple agents are jointly executed by all of them. We assume that each agent $i$ in the AMAS is endowed with a disjoint set of its _local propositions $\mathcal{PV}_{i}$_, and their valuation $V_{i}:L_{i}\rightarrow 2^{\mathcal{PV}_{i}}$. The overall set of propositions $\mathcal{PV}=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}\mathcal{PV}_{i}$ collects all the local propositions. As our working example, we use the following scenario.

###### Example 2.2 (Conference in times of epidemic). Consider the AMAS in Figure 1, consisting of the Steering Committee Chair ($sc$), the General Chair ($gc$), and the Organizing Committee Chair ($oc$). Faced with the Covid-19 epidemic, $sc$ can decide to give up the conference, or send a signal to $gc$ to proceed and open the meeting. Then, $gc$ and $oc$ jointly decide whether the conference will be run on site or online. In the former case, the epidemiologic risk is obviously much higher, indicated by the atomic proposition $\mathsf{{epid}}$. The set of events, the agents’ repertoires of choices, and the valuation of atomic propositions can be easily read from the graph. For easier reading, all the private events are shown in grey. Note that event $proceed$ is shared by agents $sc$ and $gc$, and can only be executed jointly. Similarly, $onsite$ and $online$ are shared by $gc$ and $oc$. All the other events are private, and do not require synchronization.

Figure 1: Simple asynchronous MAS: agents $gc$, $oc$, and $sc$ (local automata not reproduced here). A joint strategy of agents $\\{{gc,oc}\\}$ is highlighted.

### 2.2 Interleaved Interpreted Systems

To understand the interaction between asynchronous agents, we use the standard execution semantics from concurrency models, i.e., interleaving with synchronization on shared events. To this end, we compose the network of local automata (i.e., the AMAS) into a single automaton based on the notions of _global states_ and _global transitions_, see below.

###### Definition 2.3 (Model). Let $S$ be an AMAS with $n$ agents. Its _model_ $IIS(S)$ extends $S$ with: (i) the set of global states $St\subseteq L_{1}\times\ldots\times L_{n}$, including the _initial state_ $\iota=(\iota_{1},\dots,\iota_{n})$ and all the states reachable from $\iota$ by $T$ (see below); (ii) the _global transition function_ $T:St\times\mathit{Evt}\rightharpoonup St$, defined by $T(g_{1},\alpha)=g_{2}$ iff $T_{i}(g_{1}^{i},\alpha)=g^{i}_{2}$ for all $i\in Agent(\alpha)$ and $g_{1}^{i}=g^{i}_{2}$ for all $i\in{\mathbb{A}\mathrm{gt}}\setminus Agent(\alpha)$; (iii) the _global valuation_ of propositions $V:St\rightarrow 2^{\mathcal{PV}}$, defined as $V(l_{1},\dots,l_{n})=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}V_{i}(l_{i})$. Models, sometimes called _interleaved interpreted systems_ (IIS), are used to provide an execution semantics to AMAS, and consequently provide us with semantic structures to reason about AMAS. Intuitively, the global states in $IIS(S)$ can be seen as the possible configurations of local states of all the agents.
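Definition 2.3 can be made concrete in the same illustrative encoding: the sketch below computes global transitions by letting every owner of an event move via its local transition function, while all remaining agents keep their local states. Again, the helper names are ours, not from the paper or any tool.

```python
# Global transitions of IIS(S), per Definition 2.3 (continuing the
# illustrative encoding above).

def agents_of(event, agents):
    # Agent(alpha): indices of all agents with alpha in their alphabet
    return [i for i, a in enumerate(agents) if event in a.events]

def global_step(g, event, agents):
    """T(g, alpha): every owner of alpha moves synchronously; all other
    agents stay put. Returns the successor state, or None if alpha is
    not enabled at the global state g (a tuple of local states)."""
    succ = list(g)
    for i in agents_of(event, agents):
        if event not in agents[i].repertoire[g[i]]:
            return None                        # some owner cannot join in
        succ[i] = agents[i].trans[(g[i], event)]
    return tuple(succ)

def enabled(g, agents, all_events):
    # enabled(g): events with a defined global transition at g
    return {e for e in all_events if global_step(g, e, agents) is not None}
```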
Moreover, the transitions are labeled by events that are simultaneously selected (in the current configuration) by all the agents that have the event in their repertoire. Clearly, private events (i.e., events such that $Agent(\alpha)$ is a singleton) require no synchronization.

###### Example 2.4 (Conference). The model for the asynchronous MAS of Example 2.2 is shown in Figure 2.

We say that event $\alpha\in\mathit{Evt}$ is _enabled_ at $g\in St$ if $T(g,\alpha)=g^{\prime}$ for some $g^{\prime}\in St$. The set of events enabled at $g$ is denoted by $enabled(g)$. The global transition function is assumed to be serial, i.e., at each $g\in St$ there exists at least one enabled event. Discussion. This modeling approach is standard in the theory of concurrent systems, where it dates back to the early 1980s and the idea of APA Nets (asynchronous, parallel automata nets) (?). Note that APA Nets and their models were _not_ proposed with causal interpretation in mind. In particular, they were _not_ meant to capture the interaction of purposeful agents that freely choose their strategies, but rather a set of reactive components converging to a joint behavior. Despite superficial differences, the same applies to process-algebraic approaches to concurrency, such as CSP (?), CCS (?), ACP (?), and $\pi$-calculus (?). Definition 2.1 extends this modeling tradition with the repertoire functions from synchronous models of MAS (?; ?). Agent $i$’s repertoire lists the events available to $i$, and is supposed to define the space of $i$’s strategies. As we show further, this is not enough in the case of asynchronous MAS.

## 3 Reasoning About Abilities: ATL*

_Alternating-time temporal logic_ $\mathbf{ATL_{\mathrm{}}^{*}}$ (?; ?; ?) generalizes the branching-time temporal logic $\mathbf{CTL^{*}}$ (?) by replacing the path quantifiers $\mathsf{E},\mathsf{A}$ with _strategic modalities_ $\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$, expressing that agents $A$ can enforce the temporal property $\gamma$. While the semantics of $\mathbf{ATL_{\mathrm{}}^{*}}$ is typically defined for models of synchronous systems, a variant for asynchronous MAS was proposed recently (?). We summarize the main points in this section.

### 3.1 Syntax

Let $\mathcal{PV}$ be a set of propositional variables and ${\mathbb{A}\mathrm{gt}}$ the set of all agents. The language of $\mathbf{ATL_{\mathrm{}}^{*}}$ is defined as below. $\varphi::=\mathsf{{p}}\mid\neg\varphi\mid\varphi\wedge\varphi\mid\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$, $\gamma::=\varphi\mid\neg\gamma\mid\gamma\land\gamma\mid\mathrm{X}\,\gamma\mid\gamma\,\mathrm{U}\,\gamma$, where $\mathsf{p}\in\mathcal{PV}$, $A\subseteq{\mathbb{A}\mathrm{gt}}$, $\mathrm{X}\,$ stands for “next”, and $\,\mathrm{U}\,$ for “strong until” ($\gamma_{1}\,\mathrm{U}\,\gamma_{2}$ denotes that $\gamma_{1}$ holds until $\gamma_{2}$ becomes true). The other Boolean operators and constants are defined as usual. “Release” can be defined as $\gamma_{1}\,\mathrm{R}\,\gamma_{2}\equiv\neg((\neg\gamma_{1})\,\mathrm{U}\,(\neg\gamma_{2}))$. “Eventually” and “always” can be defined as $\mathrm{F}\,\gamma\equiv\mathit{true}\,\mathrm{U}\,\gamma$ and $\mathrm{G}\,\gamma\equiv\mathit{false}\,\mathrm{R}\,\gamma$. Moreover, the $\mathbf{CTL^{*}}$ operator “for all paths” can be defined as $\mathsf{A}\gamma\equiv\langle\\!\langle{\emptyset}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$.

###### Example 3.1 (Conference).
Formula $\langle\\!\langle{sc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{open}}$ expresses that the Steering Chair can enforce that the conference is eventually opened. Moreover, formula $\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{epid}}$ says that the General Chair and the Organizing Chair have a joint strategy to avoid high epidemiological risk. ### 3.2 Strategies and Outcomes We adopt Schobbens’ taxonomy and notation for strategy types (?): $\mathrm{ir}$, $\mathrm{Ir}$, $\mathrm{iR}$, and $\mathrm{IR}$, where _I_ (resp. _i_) denotes perfect (resp. imperfect) _information_ , and _R_ (resp. _r_) denotes perfect (resp. imperfect) _recall_. In particular, an _imperfect information/imperfect recall strategy ($\mathrm{ir}$-strategy) for $i$_ is a function $\sigma_{i}\colon L_{i}\to\mathit{Evt}_{i}$ s.t. $\sigma_{i}(l)\in R_{i}(l)$ for each $l\in L_{i}$. We denote the set of such strategies by $\Sigma_{i}^{\mathrm{ir}}$. A _collective strategy_ $\sigma_{A}$ for a coalition $A=(1,\dots,m)\subseteq{\mathbb{A}\mathrm{gt}}$ is a tuple of strategies, one per agent $i\in A$. The set of $A$’s collective $\mathrm{ir}$ strategies is denoted by $\Sigma_{A}^{\mathrm{ir}}$. We will sometimes use $\sigma_{A}(g)=(\sigma_{a_{1}}(g),\dots,\sigma_{a_{m}}(g))$ to denote the tuple of $A$’s selections at state $g$. ###### Example 3.2 (Conference). A collective strategy for the General Chair and the OC Chair in the conference scenario is shown in Figure 1. An infinite sequence of global states and events $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots$ is called an (interleaved) _path_ if $g_{i}\stackrel{{\scriptstyle\alpha_{i}}}{{\longrightarrow}}g_{i+1}$ for every $i\geq 0$. $\mathit{Evt}(\pi)=\alpha_{0}\alpha_{1}\alpha_{2}\ldots$ is the sequence of events in $\pi$, and $\pi[i]=g_{i}$ is the $i$-th global state of $\pi$. $\Pi_{M}(g)$ denotes the set of all paths in model $M$ starting at $g$. Intuitively, the outcome of $\sigma_{A}$ in $g$ is the set of all the paths that can occur when the agents in $A$ follow $\sigma_{A}$ and the agents in ${\mathbb{A}\mathrm{gt}}\setminus A$ freely choose events from their repertoires. To define it formally, we first refine the concept of an enabled event, taking into account the choices of $A$ in strategy $\sigma_{A}$. ###### Definition 3.3 (Enabled events). Let $A=(1,\dots,m)$, $g\in St$, and let $\overrightarrow{\alpha}_{A}=(\alpha_{1},\dots,\alpha_{m})$ be a tuple of events such that every $\alpha_{i}\in R_{i}(g^{i})$. That is, every $\alpha_{i}$ can be selected by its respective agent $i$ at state $g$. We say that event $\beta\in\mathit{Evt}$ is _enabled by $\overrightarrow{\alpha}_{A}$ at $g\in St$_ iff * • for every $i\in Agent(\beta)\cap A$, we have $\beta=\alpha_{i}$, and * • for every $i\in Agent(\beta)\setminus A$, it holds that $\beta\in R_{i}(g^{i})$. Thus, $\beta$ is enabled by $\overrightarrow{\alpha}_{A}$ if all the agents that “own” $\beta$ can choose $\beta$ for execution, even when $\overrightarrow{\alpha}_{A}$ has been selected by the coalition $A$. We denote the set of such events by $enabled(g,\overrightarrow{\alpha}_{A})$. Clearly, $enabled(g,\overrightarrow{\alpha}_{A})\subseteq enabled(g)$. ###### Example 3.4 (Conference). Consider state $g=000$ and the choices of agents $A=\\{{gc,oc}\\}$ shown in Figure 1, i.e., $\overrightarrow{\alpha}_{A}=(proceed,online)$. The only events enabled by $\overrightarrow{\alpha}_{A}$ are $proceed$ and $giveup$. 
Event $onsite$ is not enabled because $A$ chose different events for execution; $online$ is not enabled because it requires synchronization which is impossible at $000$.

###### Definition 3.5 (Outcome paths). The _outcome_ of strategy $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ in state $g\in St$ is the set $\mathit{out}_{M}(g,\sigma_{A})\subseteq\Pi_{M}(g)$ such that $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots\in\mathit{out}_{M}(g,\sigma_{A})$ iff $g_{0}=g$, and $\forall i\geq 0\quad\alpha_{i}\in enabled(\pi[i],\sigma_{A}(\pi[i]))$.

One often wants to look only at paths that do not consistently ignore agents whose choice is always enabled. Formally, a path $\pi$ satisfies _concurrency-fairness_ (CF) if there is no event $\alpha$ and position $n$ such that $\alpha$ is enabled in all states of $\pi$ from $\pi[n]$ on, and for every $\alpha_{i}$ actually executed in $\pi[i]$, $i=n,n+1,\dots$, we have $Agent(\alpha)\cap Agent(\alpha_{i})=\emptyset$. We denote the set of all such paths starting at $g$ by $\Pi_{M}^{\textbf{CF}}(g)$.

###### Definition 3.6 (CF-outcome). The _CF-outcome_ of $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ is defined as $\mathit{out}^{\textbf{CF}}_{M}(g,\sigma_{A})=\mathit{out}_{M}(g,\sigma_{A})\cap\Pi_{M}^{\textbf{CF}}(g)$.

### 3.3 Strategic Ability for Asynchronous Systems

The semantics of $\mathbf{ATL_{\mathrm{ir}}^{*}}$ in AMAS is defined by the following clause for strategic modalities (?): $M,g\models_{{}_{\mathrm{ir}}}\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$ iff there is a strategy $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ s.t. $\mathit{out}_{M}(g,\sigma_{A})\neq\emptyset$ and, for each path $\pi\in\mathit{out}_{M}(g,\sigma_{A})$, we have $M,\pi\models_{{}_{\mathrm{ir}}}\gamma$. The clauses for Boolean and temporal operators are standard. Moreover, the _concurrency-fair semantics_ $\models_{{}_{\mathrm{ir}}}^{\textbf{CF}}$ of $\mathbf{ATL_{\mathrm{}}}$ and $\mathbf{ATL_{\mathrm{}}^{*}}$ is obtained by replacing $\mathit{out}_{M}(g,\sigma_{A})$ with $\mathit{out}_{M}^{\textbf{CF}}(g,\sigma_{A})$ in the above clause.

###### Example 3.7 (Conference). Clearly, formula $\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{epid}}$ holds in $(M_{\mathit{conf}},000)$, in both $\models_{{}_{\mathrm{ir}}}$ and $\models_{{}_{\mathrm{ir}}}^{\textbf{CF}}$ semantics. To see that, fix $\sigma_{gc}(0)=proceed$ and $\sigma_{gc}(1)=\sigma_{oc}(0)=online$ in the collective strategy of $\\{{gc,oc}\\}$. Note also that $M_{\mathit{conf}},000\models_{{}_{\mathrm{ir}}}\neg\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{closed}}$ because, after executing $proceed$ and $online$ (or $onsite$), event $rest$ may be selected forever. On the other hand, such paths are not concurrency-fair, and thus $M_{\mathit{conf}},000\models_{{}_{\mathrm{ir}}}^{\textbf{CF}}\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{closed}}$.
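To connect these notions back to the illustrative encoding of Section 2, the following sketch computes the set of Definition 3.3, i.e., the events that remain enabled once a coalition has fixed its selections. The function names are ours, not the paper's.

```python
# Events enabled by a coalition's selection, per Definition 3.3
# (continuing the illustrative encoding; `selection` maps each agent
# index i in A to the event alpha_i chosen by i at global state g).

def enabled_by(g, selection, agents, all_events):
    result = set()
    for beta in all_events:
        ok = True
        for i in agents_of(beta, agents):            # all owners of beta
            if i in selection:                       # owner in A ...
                ok = ok and selection[i] == beta     # ... must have chosen beta
            else:                                    # owner outside A ...
                ok = ok and beta in agents[i].repertoire[g[i]]
        if ok:
            result.add(beta)
    return result
```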
Discussion. Strategic play assumes a proactive attitude: the agents in $\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}$ are free to choose _any_ available strategy $\sigma_{A}$. This is conceptually consistent with the notion of agency (?). At the same time, it is somewhat at odds with the standard semantics of concurrent processes, where the components cannot stubbornly refuse to synchronize if that is the only way to proceed with a transition. This seems a minor problem, but it is worrying that a strategy can have the empty set of outcomes, and equally worrying that such strategies are treated differently from the other ones. Indeed, as we will show in the subsequent sections, the semantics proposed in (?) leads to a counterintuitive interpretation of strategic formulas.

## 4 Semantic Problems and How to Avoid Them

Figure 2: Model $M_{\mathit{conf}}$ for the conference scenario (state graph not reproduced here). We highlight the transitions enabled by the strategy in Figure 1, and the resulting reachable states.

Starting with this section, we describe some problematic phenomena that follow from the straightforward combination of strategic ability with models of concurrent systems, proposed in (?). We also show how to extend the representations and modify their execution semantics to avoid the counterintuitive interpretation of formulas.

### 4.1 Deadlock Strategies and Finite Paths

An automata network is typically required to produce no deadlock states, i.e., every global state in its composition must have at least one outgoing transition. Then, all the maximal paths are infinite, and it is natural to refer to only infinite paths in the semantics of temporal operators. In the case of AMAS, the situation is more delicate. Even if the AMAS as a whole produces no deadlocks, some strategies might, which makes the interpretation of strategic modalities cumbersome. We illustrate this on the following example.

###### Example 4.1 (Conference). Recall the 3-agent AMAS of Figure 1, together with its model $M_{\mathit{conf}}$ (Figure 2). Clearly, $M_{\mathit{conf}}$ has no deadlock states. Let us now look at the collective strategies of coalition $\\{{gc,oc}\\}$, with agent $sc$ serving as the opponent. It is easy to see that the coalition has no way to prevent the opening of the conference, i.e., it cannot prevent the system from reaching state $101$. However, the strategy depicted in Figure 1 produces only one _infinite_ path: $(000\,giveup\,002\,giveup\,\dots)$. Since the $\mathbf{ATL_{\mathrm{}}^{*}}$ semantics in Section 3 disregards finite paths, we get that $M_{\mathit{conf}},000\models\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{open}}$, which is counterintuitive.

Things can get even trickier. In particular, the outcome of a strategy can be empty – in fact, it may even happen that a coalition has only strategies with empty outcomes.

Figure 3: Casting a ballot: voter $v$ (left) and EBM $ebm$ (right); local automata not reproduced here.

###### Example 4.2 (Voting). Consider the AMAS in Figure 3 that depicts a simple voting scenario. A voter $v$ can fill in an electronic ballot with a vote for candidate $\mathsf{{a}}$ or $\mathsf{{b}}$, and then push the $send$ button. The Electronic Ballot Machine $ebm$ duly registers the choices of the voter. Note that all the _joint_ strategies of $\\{{v,ebm}\\}$ produce only finite sequences of transitions. This is because $ebm$ must choose a single event at location $0$ in a memoryless strategy, and thus $v$ and $ebm$ are bound to “miscoordinate” either at the first or at the second step.
Since finite paths are not included in the outcome sets, and the semantics in Section 3.3 rules out strategies with empty outcomes, we get that $IIS(S_{vote}),00\models\neg\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\top$, which is quite strange. Notice that removing the non-emptiness requirement from the semantic clause in Section 3.3 does not help. In that case, any joint strategy of $\\{{v,ebm}\\}$ could be used to demonstrate that $\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\bot$.

### 4.2 Solution: Adding Silent Transitions

To deal with the problem, we augment the model of the system with special “silent” transitions, labeled by $\epsilon$, that are fired whenever no “real” transition can occur. In our case, the $\epsilon$-transitions account for the possibility that some agents miscoordinate and thus block the system. Moreover, we redefine the outcome set of a strategy so that an $\epsilon$-transition is taken whenever such miscoordination occurs.

###### Definition 4.3 (Undeadlocked IIS). Let $S$ be an AMAS, and assume that no agent in S has $\epsilon$ in its alphabet of events. The _undeadlocked model of S_, denoted $M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$, extends the model $M=IIS(S)$ as follows:

* $\mathit{Evt}_{M^{\text{$\epsilon$}}}=\mathit{Evt}_{M}\cup\\{{\epsilon}\\}$, where $Agent(\epsilon)=\emptyset$;
* For each $g\in St$, we add the transition $g\stackrel{{\scriptstyle\epsilon}}{{\longrightarrow}}g$ iff there is a selection of agents’ choices $\overrightarrow{\alpha}_{A}=(\alpha_{1},\dots,\alpha_{k})$, $\alpha_{i}\in R_{i}(g^{i})$, such that $enabled_{M}(g,\overrightarrow{\alpha}_{A})=\emptyset$. In that case, we also fix $enabled_{M^{\text{$\epsilon$}}}(g,\overrightarrow{\alpha}_{A})=\\{{\epsilon}\\}$.

In other words, “silent” loops are added in the states where a combination of the agents’ actions can block the system. Paths are defined as in Section 2.2. The following is trivial.

###### Proposition 4.4. For any AMAS $S$, any state $g\in IIS^{\text{$\epsilon$}}(S)$, and any strategy $\sigma_{A}$, we have that $enabled_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A}(g))\neq\emptyset$.

###### Example 4.5 (Conference). The undeadlocked model $M_{\mathit{conf}}^{\text{$\epsilon$}}$ of the conference scenario (Example 2.2) extends the model in Figure 2 with one $\epsilon$-loop at state $101$. The loop models the situation when the agents choose $(onsite,online,proceed)$ or $(online,onsite,proceed)$. We leave it for the reader to check that, at the other states, all the combinations of choices enable at least one transition. For the strategy in Example 4.1, notice that its outcome in $M_{\mathit{conf}}^{\text{$\epsilon$}}$ contains _two_ infinite paths: not only $(000\,giveup\,002\,giveup\,002\dots)$, but also $(000\,proceed\,101\,\epsilon\,101\dots)$. Since the latter path invalidates the temporal formula $\mathrm{G}\,\neg\mathsf{{open}}$, we get that $M_{\mathit{conf}}^{\text{$\epsilon$}},000\not\models\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{open}}$, as expected.

Figure 4: Undeadlocked IIS for the voting scenario (state graph not reproduced here).

###### Example 4.6 (Voting). The undeadlocked model for the voting scenario is presented in Figure 4.
Note that formula $\neg\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\top$ does not hold anymore, because the joint strategies of $\\{{v,ebm}\\}$ have nonempty outcomes in $IIS^{\text{$\epsilon$}}(S_{vote})$. On the other hand, the formula $\langle\\!\langle{v}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$ (and even $\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$) does not hold, which is contrary to the intuition behind the modeling. We will come back to this issue in Section 7. Discussion. Adding “silent” transitions to account for the control flow when no observable event occurs is pretty standard. The crucial issue is _where_ to add them. Here, we add the $\epsilon$-transitions whenever a subset of agents might choose to miscoordinate (and stick to their choices). Again, this is in line with the notion of agency and strategic play in MAS (?; ?). In the next section, we will discuss a concept of “agent fairness” where the addition of $\epsilon$-transitions is constrained by the assumption that only a given subset of agents is fully proactive. The examples used in this section expose an important feature of agent systems. The execution semantics of concurrent processes is often defined by a state-transition graph (or, alternatively, by the tree of paths generated by the graph, i.e., the tree unfolding of the graph). For systems that involve proactive agents, this is not enough. Rather, the execution semantics should map from the possible coalitions and their available strategies to the outcome sets of those strategies. In this sense, the possible behaviors of an agent system should be understood via the _set of possible execution trees_ , rather than a single tree. This is consistent with the theoretical model of MAS in (?), based on path effectivity functions. An alternative way out of the problem is to include finite maximal paths in the outcomes of strategies. However, the interpretation of strategic modalities over finite paths is rather nonstandard (?) and may pose new problems in the asynchronous setting. Moreover, our approach allows to reuse the existing techniques and tools, which are typically built for infinite path semantics, including the verification and partial order reduction functionalities of tools like SPIN (?) and STV (?). In general, this is a design dilemma between changing the logical semantics of the formulas vs. updating the execution semantics of the representations. Here, we choose the latter approach. ## 5 Playing Against Reactive Opponents The solution proposed in Section 4.2 is based on the assumption that an agent is free to choose any event in its repertoire – even one that prevents the system from executing anything. The downside is that, for most systems, only safety goals can be achieved (i.e., properties specified by $\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\varphi$). For reachability, there is often a combination of the opponents’ choices that blocks the execution early on, and prevents the coalition from reaching their goal. In this section, we define a fairness-style condition that constrains the choices of more “reactive” opponents. We also show a construction to verify the abilities of the coalition over the resulting paths in a technically simpler way. ### 5.1 Opponent-Reactiveness Given a strategy $\sigma_{A}$, the agents in $A$ are by definition assumed to be proactive. 
Below, we propose an execution semantics for $\sigma_{A}$ which assumes that $A$ cannot be stalled forever by miscoordination on the part of the opponents.

###### Definition 5.1 (Opponent-reactiveness). A path $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots$ in $IIS^{\text{$\epsilon$}}(S)$ is _opponent-reactive for strategy $\sigma_{A}$_ iff, for every $n\geq 0$, $\alpha_{n}=\epsilon$ implies $enabled(g_{n},\sigma_{A}(g_{n}))=\\{{\epsilon}\\}$.

In other words, whenever the agents outside $A$ have a way to proceed, they must proceed. The _reactive outcome_ (or _React-outcome_) of $\sigma_{A}$ in $g$, denoted $\mathit{out}^{\textup{React}}(g,\sigma_{A})$, is the restriction of $\mathit{out}(g,\sigma_{A})$ to its opponent-reactive paths.

###### Example 5.2 (Conference). Consider the undeadlocked model $M_{\mathit{conf}}^{\text{$\epsilon$}}$ of Example 4.5. Path $(000\,proceed\,101\,\epsilon\,101\dots)$ is opponent-reactive for the strategy of agents $\\{{gc,oc}\\}$ shown in Figure 1. On the other hand, consider coalition $\\{{gc,sc}\\}$, and the following strategy of theirs: $\sigma_{gc}(0)=proceed,\sigma_{gc}(1)=onsite,\sigma_{sc}(0)=proceed$. The same path is _not_ opponent-reactive for the strategy because the only opponent ($oc$) has a response at state $101$ that enables a “real” transition ($onsite$).

###### Proposition 5.3. In $\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$, the only possible occurrence of $\epsilon$ is as an infinite sequence of $\epsilon$-transitions following a finite prefix of “real” transitions.

###### Proof. Take any $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots\in\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$ such that $\epsilon$ occurs on $\pi$, and let $i$ be the first position on $\pi$ s.t. $\alpha_{i}=\epsilon$. By Definition 5.1, we get that $enabled(g_{i},\sigma_{A}(g_{i}))=\\{{\epsilon}\\}$. Moreover, $g_{i+1}=g_{i}$, so also $enabled(g_{i+1},\sigma_{A}(g_{i+1}))=\\{{\epsilon}\\}$. Thus, $\alpha_{i+1}=\epsilon$. It follows by simple induction that $\alpha_{j}=\epsilon$ for every $j\geq i$. ∎

The _opponent-reactive semantics_ $\models_{{}_{\mathrm{ir}}}^{\textup{React}}$ of $\mathbf{ATL_{\mathrm{}}^{*}}$ is obtained by replacing $\mathit{out}_{M}(g,\sigma_{A})$ with $\mathit{out}_{M}^{\textup{React}}(g,\sigma_{A})$ in the semantic clause presented in Section 3.3.

### 5.2 Encoding Strategic Deadlock-Freeness Under Opponent-Reactiveness in AMAS

If we adopt the assumption of opponent-reactiveness for coalition $A$, there is an alternative, technically simpler way to obtain the same semantics of strategic ability as in Section 4.2. The idea is to introduce the “silent” transitions already at the level of the AMAS.

###### Definition 5.4 (Undeadlocked AMAS). The _undeadlocked variant of $S$_ is constructed from $S$ by adding an auxiliary agent $A_{\epsilon}$ with $L_{\epsilon}=\\{{q_{0}^{\epsilon}}\\}$, $\iota_{\epsilon}=q_{0}^{\epsilon}$, $\mathit{Evt}_{\epsilon}=\\{{\epsilon}\\}$, $R_{\epsilon}(q_{0}^{\epsilon})=\\{{\epsilon}\\}$, $T_{\epsilon}(q_{0}^{\epsilon},\epsilon)=q_{0}^{\epsilon}$, and $\mathcal{PV}_{\epsilon}=\emptyset$.

In other words, we add a module with a single local state and a “silent” loop labeled by $\epsilon$, as in Figure 5. We will denote the undeadlocked variant of $S$ by $S^{\text{$\epsilon$}}$. Note that $S^{\text{$\epsilon$}}$ can be seen as a special case of AMAS. Thus, the outcome sets and reactive outcomes of strategies in $IIS(S^{\text{$\epsilon$}})$ are defined exactly as before.
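In the illustrative Python encoding used earlier, the construction of Definition 5.4 amounts to appending one trivial agent to the network; a minimal sketch (reusing the `Agent` class from Section 2):

```python
# Definition 5.4 in the illustrative encoding: S^epsilon is S extended
# with one auxiliary agent carrying a single epsilon self-loop.

EPS = "epsilon"

def undeadlocked(agents):
    eps_agent = Agent(
        states={"q0"}, initial="q0", events={EPS},
        repertoire={"q0": {EPS}},
        trans={("q0", EPS): "q0"},
        valuation={"q0": set()},
    )
    return agents + [eps_agent]
```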
Figure 5: The auxiliary agent added in $S^{\text{$\epsilon$}}$: a single local state $q_{0}^{\epsilon}$ with an $\epsilon$-loop.

###### Example 5.5 (Voting). The undeadlocked AMAS $S_{vote}^{\text{$\epsilon$}}$ is obtained by augmenting the voting AMAS $S_{vote}$ of Example 4.2 with the auxiliary agent in Figure 5.

Obviously, the extra agent adds $\epsilon$-loops to the model of $S$, i.e., to $IIS(S)$. We show now that, under the assumption of opponent-reactiveness, the view of $A$’s strategic ability in the undeadlocked AMAS $S^{\text{$\epsilon$}}$ corresponds precisely to $A$’s abilities in the undeadlocked model of the original AMAS $S$, i.e., $IIS^{\text{$\epsilon$}}(S)$. This allows to deal with deadlocks and finite paths without redefining the execution semantics for AMAS, set in Definition 2.3, and thus use the existing tools such as SPIN (?) in a straightforward way.

###### Proposition 5.6. Let $A\subseteq{\mathbb{A}\mathrm{gt}}$. In $\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})$, the only possible occurrence of $\epsilon$ is as an infinite suffix of $\epsilon$-transitions.

###### Proof. Analogous to Proposition 5.3. ∎

###### Theorem 5.7. For every strategy $\sigma_{A}$ in $S$, we have that $\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})=\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A}).$

###### Proof. $\boldsymbol{\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})}$: Consider any $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots\in\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$. If there are no $\epsilon$-transitions on $\pi$, we have that $\pi\in\mathit{out}^{\textup{React}}_{IIS(S)}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})$, QED. Suppose that $\pi$ includes $\epsilon$-transitions, with $\alpha_{i}$ being the first one. Then, we have that $\alpha_{j}\neq\epsilon$ and $\alpha_{j}\in enabled_{IIS^{\text{$\epsilon$}}(S)}(g_{j},\sigma_{A}(g_{j}))$ for every $j<i$, hence also $\alpha_{j}\in enabled_{IIS(S)}(g_{j},\sigma_{A}(g_{j}))\subseteq enabled_{IIS(S^{\text{$\epsilon$}})}(g_{j},\sigma_{A}(g_{j}))$. (*) By Proposition 5.3, $g_{j}=g_{i}$ and $\alpha_{j}=\epsilon$ for every $j\geq i$. By Definition 5.1, $enabled_{IIS^{\text{$\epsilon$}}(S)}(g_{j},\sigma_{A}(g_{j}))=\\{{\epsilon}\\}$. Hence, $enabled_{IIS(S)}(g_{j},\sigma_{A}(g_{j}))=\emptyset$ and $enabled_{IIS(S^{\text{$\epsilon$}})}(g_{j},\sigma_{A}(g_{j}))=\\{{\epsilon}\\}$. (**) Thus, by (*) and (**), $\pi\in\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})$, QED. $\boldsymbol{\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})}$: Analogous, with Proposition 5.6 used instead of Proposition 5.3. ∎

Discussion. Opponent-reactiveness is to strategic properties what fairness conditions are to temporal properties of asynchronous systems. If an important property cannot be satisfied in all possible executions, it may at least hold under some reasonable assumptions about which events can be selected by whom in response to what. Clearly, the condition can be considered intuitive by some and problematic by others. The main point is that, unlike in the previous semantics, it is now made explicit, and can be adopted or rejected depending on the intuition.
Note that the semantic extensions proposed in this paper (silent transitions and nondeterministic choices for strategies) make sense both with and without opponent-reactiveness. Note that, under the reactiveness assumption, we have that $M_{\mathit{conf}}^{\text{$\epsilon$}},000\models_{{}_{\mathrm{ir}}}^{\textup{React}}\langle\\!\langle{gc,sc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{epid}}$ and $M_{\mathit{conf}}^{\text{$\epsilon$}},000\models_{{}_{\mathrm{ir}}}^{\textup{React}}\langle\\!\langle{oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{epid}}$. This seems to contradict the commonly accepted requirement of _regularity_ in games (?). However, the contradiction is only superficial, as the two formulas are evaluated _under different execution assumptions_ : for the former, we assume agent $oc$ to be reactive, whereas the latter assumes $gc$ and $sc$ to react to the strategy of $oc$. ## 6 Concurrency-Fairness Revisited In Def. 3.6, we recalled the notion of concurrency-fair outcome of (?). The idea was to remove from $out(g,\sigma_{A})$ the paths that consistently ignore agents whose events are enabled _at the level of the whole model_. Unfortunately, the definition has unwelcome side effects, too. ### 6.1 Problems with Concurrency-Fairness We first show that, contrary to intuition, Definition 3.6 automatically disregards _deadlock paths_ , i.e., paths with finitely many “real” transitions. ###### Proposition 6.1. Consider an AMAS $S$ and a path $\pi$ in $IIS^{\text{$\epsilon$}}(S)$ such that, from some point $i$ on, $\pi$ includes only $\epsilon$-transitions. Then, for every strategy $\sigma_{A}$ in $S$, we have that $\pi\notin\mathit{out}^{\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$. ###### Proof. Take $\pi$ as above, i.e., $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}\dots g_{i}\epsilon g_{i}\epsilon g_{i}\dots$. Since the transition function in $IIS^{\text{$\epsilon$}}(S)$ is serial, there must be some event $\beta\neq\epsilon$ enabled in $g_{i}$. In consequence, $\beta$ is always enabled from $i$ on, but none of its “owners” in $Agent(\beta)$ executes an event on $\pi$ after $i$. Hence, $\pi$ does not satisfy CF, and does not belong to $\mathit{out}^{\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$ for any strategy $\sigma_{A}$. ∎ Thus, the CF condition eliminates all the deadlock paths from the outcome of a strategy (for instance, the path $(000\,proceed\,101\,\epsilon\,101\dots)$ in Example 4.5). In consequence, reasoning about concurrency-fair paths suffers from the problems that we identified in Section 4.1, even for undeadlocked models. Moreover, combining the temporal and strategic fairness (i.e., CF and React) collapses the undeadlocked execution semantics altogether, see below. ###### Proposition 6.2. Reasoning about reactive _and_ fair outcomes in an undeadlocked model reduces to reasoning about the fair executions in the original model without $\epsilon$-transitions. Formally, let $\mathit{out}^{\textup{React},\textbf{CF}}_{M}(g,\sigma_{A})=\mathit{out}^{\textup{React}}_{M}(g,\sigma_{A})\cap\mathit{out}^{\textbf{CF}}_{M}(g,\sigma_{A})$. For any AMAS $S$ and any strategy $\sigma_{A}$ in $S$, we have: $\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})=\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})$. ###### Proof. 
Clearly, we have $\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$, since $\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$ can only add to $\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})$ new paths that include $\epsilon$-transitions. For the other direction, take any $\pi\in\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$, and suppose that it contains an $\epsilon$-transition. By Proposition 5.3, it must have an infinite suffix consisting only of $\epsilon$-transitions. Then, by Proposition 6.1, $\pi\notin\mathit{out}^{\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$, which leads to a contradiction. Thus, $\pi$ contains only transitions from $IIS(S)$, and hence $\pi\in\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})$, QED. ∎

### 6.2 Strategic Concurrency-Fairness

So, how should fair paths be properly defined for strategic reasoning? The answer is simple: in relation to the outcome of the strategy being executed.

###### Definition 6.3 (Strategic CF). $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots$ is a _concurrency-fair path for strategy $\sigma_{A}$ and state $g$_ iff $g_{0}=g$, and there is no event $\alpha$ s.t., for some $n$ and all $i\geq n$, we have $\alpha\in enabled(\pi[i],\sigma_{A}(\pi[i]))$ and $Agent(\alpha)\cap Agent(\alpha_{i})=\emptyset$.

That is, agents with an event always enabled _by $\sigma_{A}$_ cannot be ignored forever. The _SCF-outcome_ of $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ is defined as $\mathit{out}^{\textbf{SCF}}_{M}(g,\sigma_{A})=\\{\pi\in\mathit{out}_{M}(g,\sigma_{A})\mid\pi\text{ is concurrency-fair for }\sigma_{A},g\\}$. The following formal results show that SCF does not suffer from the problems demonstrated in Section 6.1.

###### Proposition 6.4. There is an AMAS $S$, a strategy $\sigma_{A}$ in $S$, and a deadlock path $\pi$ in $IIS^{\text{$\epsilon$}}(S)$ such that $\pi$ is concurrency-fair for $\sigma_{A}$.

###### Proof. To demonstrate the property, it suffices to take the AMAS and the strategy of $\\{{gc,oc}\\}$ depicted in Figure 1, and the path $\pi=(000\,proceed\,101\,\epsilon\,101\dots)$. ∎

###### Theorem 6.5. Opponent-reactiveness and strategic concurrency-fairness are incomparable. Formally, there exists an AMAS $S$, a state $g$ in $IIS^{\text{$\epsilon$}}(S)$, and a strategy $\sigma_{A}$ such that $\mathit{out}^{\textbf{SCF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})\not\subseteq\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$, and vice versa.

###### Proof. Consider the undeadlocked model $M_{\mathit{conf}}^{\text{$\epsilon$}}$ in Example 4.5, and the strategy discussed in Example 5.2: $\sigma_{gc}(0)=proceed$, $\sigma_{gc}(1)=onsite$, $\sigma_{sc}(0)=proceed$. Let $\pi_{1}=(000\,proceed\,101\,\epsilon\,101\,onsite\,211\,rest\,211\,handle\,211\,rest\,211\dots)$. We have $\pi_{1}\in\mathit{out}^{\textbf{SCF}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$, but $\pi_{1}\notin\mathit{out}^{\textup{React}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$. On the other hand, for path $\pi_{2}=(000\,proceed\,101\,onsite\,211\,rest\,211\,rest\,\dots)$, we have that $\pi_{2}\notin\mathit{out}^{\textbf{SCF}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$, but $\pi_{2}\in\mathit{out}^{\textup{React}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$. ∎
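For concreteness, strategic concurrency-fairness can be checked effectively on lasso-shaped paths, i.e., a finite prefix followed by an infinitely repeated cycle. The sketch below, with purely illustrative names, looks for a witness of an SCF violation on the cycle.

```python
# A bounded sketch of Definition 6.3 for "lasso" paths. Assumed helpers:
# enabled_by_strategy(state) gives the events enabled under sigma_A there,
# owners_of(event) gives the set Agent(event).

def scf_violation(cycle, enabled_by_strategy, owners_of):
    """Return an event witnessing a violation of strategic CF on the
    repeated cycle (a list of (state, executed event) pairs), or None."""
    states = [s for s, _ in cycle]
    executed = [e for _, e in cycle]
    candidates = set().union(*(enabled_by_strategy(s) for s in states))
    for alpha in candidates:
        always_enabled = all(alpha in enabled_by_strategy(s) for s in states)
        owners_ignored = all(owners_of(alpha).isdisjoint(owners_of(e))
                             for e in executed)
        if always_enabled and owners_ignored:
            return alpha   # alpha stays enabled forever, yet its owners never move
    return None
```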
Discussion. Theorem 6.5 suggests that reactiveness and fairness conditions arise from orthogonal concerns. The two concepts refer to different factors that influence which sequences of events can occur. Opponent-reactiveness constrains the choices that (a subset of) the agents can select. Concurrency-fairness and its strategic variant restrict the way in which the “scheduler” (Nature, Chance, God…) can choose from the events selected by the agents.

## 7 Strategies in Asymmetric Interaction

Now, we point out that AMAS are too restricted to model the strategic aspects of asymmetric synchronization in a natural way (e.g., a sender sending a message to a receiver).

### 7.1 Simple Choices are Not Enough

We demonstrate the problem with an example.

###### Example 7.1 (Voting). As already pointed out, we have $IIS^{\text{$\epsilon$}}(S_{vote}),00\not\models\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$ in the model of Example 4.2. This is because receiving a vote for $\mathsf{{a}}$, a vote for $\mathsf{{b}}$, and the signal to send the vote, belong to _different choices_ in the repertoire of the EBM, and the agent can only select one of them in a memoryless strategy. Moreover, formula $\langle\\!\langle{ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$ holds under the condition of opponent-reactiveness, i.e., the EBM can force a reactive voter to vote for a selected candidate. Clearly, it was not the intention behind the AMAS: the EBM is supposed to _listen_ to the choice of the voter. No matter whose strategies are considered, and who reacts to whose actions, the EBM should have no influence on what the voter votes for.

The problem arises because the repertoire functions in AMAS are based on the assumption that the agent can choose any single event in $R_{i}(l_{i})$. This does not allow for a natural specification of situations where the exact transition is determined by another agent. For the AMAS in Example 4.2, the decision to vote for candidate $\mathsf{{a}}$ or $\mathsf{{b}}$ (or to press $send$) should belong solely to the voter. Thus, setting the EBM repertoire as $R_{ebm}(0)=\\{{vote_{a},vote_{b},send}\\}$ does not produce a good model of strategic play in the scenario.

### 7.2 AMAS with Explicit Control

As a remedy, we extend the representations so that one can indicate which agent(s) control the choice between events.

###### Definition 7.2 (AMAS with explicit control). Everything is exactly as in Definition 2.1, except for the repertoires of choices, which are now functions $R_{i}:L_{i}\to 2^{2^{\mathit{Evt}_{i}}\setminus\\{\emptyset\\}}\setminus\\{\emptyset\\}$.

That is, $R_{i}(l)$ lists nonempty subsets of events $X_{1},X_{2},\dots\subseteq\mathit{Evt}_{i}$, each capturing an available choice of $i$ at the local state $l$. If the agent chooses $X_{j}=\\{{\alpha_{1},\alpha_{2},\dots}\\}$, then only an event in that set can be executed within the agent’s module; however, the agent has no firmer control over which one will be fired. Accordingly, we assume that $T_{i}(l,\alpha)$ is defined iff $\alpha\in\bigcup R_{i}(l)$ (for a set of sets $X$, we use $\bigcup X$ to denote its “flattening” $\bigcup_{x\in X}x$). Notice that the AMAS of Definition 2.1 can be seen as a special case where $R_{i}(l)$ is always a list of singletons. The definitions of IIS and undeadlocked IIS stay the same, as agents’ repertoires of choices are not actually used to generate the state-transition structure for the model of $S$.
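For a concrete view of the new repertoires, the following sketch encodes them as sets of choice sets and checks that a memoryless strategy picks one choice set per local state. The specific repertoires anticipate Example 7.4 below; the encoding itself is ours, not the paper's tooling.

```python
# Repertoires with explicit control (Definition 7.2): R_i(l) is a set of
# non-empty sets of events; a strategy picks one such choice set per state.

R_ebm = {0: [{"vote_a", "vote_b", "send"}]}          # ebm only listens
R_v = {0: [{"vote_a"}, {"vote_b"}],                  # the voter decides
       1: [{"send"}], 2: [{"send"}]}

def is_strategy(sigma, repertoire):
    # sigma_i(l) must be one of the choice sets offered by R_i(l)
    return all(sigma[l] in repertoire[l] for l in repertoire)

sigma_v = {0: {"vote_a"}, 1: {"send"}, 2: {"send"}}
print(is_strategy(sigma_v, R_v))                     # True
print(is_strategy({0: {"vote_a"}}, R_ebm))           # False: ebm cannot restrict
```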
Moreover, undeadlocked AMAS with explicit control can be obtained analogously to Definition 5.4 by adding the auxiliary “epsilon”-agent with $R_{\epsilon}(q_{0}^{\epsilon})=\\{{\\{{\epsilon}\\}}\\}$ in its sole local state. Strategies still assign choices to local states; hence, the type of agent $i$’s strategies is now $\sigma_{i}\colon L_{i}\to 2^{\mathit{Evt}_{i}}\setminus\\{\emptyset\\}$ s.t. $\sigma_{i}(l)\in R_{i}(l)$. The definition of the outcome set is updated accordingly, see below. ###### Definition 7.3 (Outcome sets for AMAS with explicit control). First, we lift the set of events enabled by $\overrightarrow{\alpha}_{A}=(\alpha_{1},\dots,\alpha_{m})$ at $g$ to match the new type of repertoires and strategies. Formally, $\beta\in enabled(g,\overrightarrow{\alpha}_{A})$ iff: (1) for every $i\in Agent(\beta)\cap A$, we have $\beta\in\alpha_{i}$, and (2) for every $i\in Agent(\beta)\setminus A$, it holds that $\beta\in\bigcup R_{i}(g^{i})$. The outcome, React-outcome, and SCF-outcome of $\sigma_{A}$ in $M,g$ are given as in Definitions 3.5, 5.1, and 6.3. ###### Example 7.4 (Voting). We improve our voting model by assuming repertoires of choices for the voter and the EBM as follows: $R_{ebm}(0)=\\{{\\{{vote_{a},vote_{b},send}\\}}\\}$, $R_{v}(0)=\\{{\\{{vote_{a}}\\},\\{{vote_{b}}\\}}\\}$, $R_{v}(1)=R_{v}(2)=\\{{\\{{send}\\}}\\}$, etc. That is, the voter’s choices are as before, but the EBM only listens to what the voter selects. Clearly, $\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$ holds in the new AMAS. Moreover, $\langle\\!\langle{ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$ does not hold anymore, even assuming opponent-reactiveness. It is easy to see that Propositions 4.4, 5.3, 5.6, and 6.4, as well as Theorems 5.7 and 6.5 still hold in AMAS with explicit control. Discussion. When reasoning about strategic play of asynchronous agents, two kinds of asymmetry come into the picture. On the one hand, the processes (agents) being modeled often synchronize in an asymmetric way. For example, the sender chooses which message to send to the receiver. On the other hand, the agents $A$ in formula $\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\varphi$ choose the strategy and thus push the other agents to respond accordingly. The variant of AMAS introduced in (?) does not allow to capture the former kind of asymmetry. In consequence, the choice between the available synchronization branches belongs solely to the agents indicated by the formula. Unfortunately, there is no natural way to model the converse situation, i.e., when the agents in $\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}$ are forced by the choices of their opponents. With the new variant of AMAS, we extend the representations so that the modeler can explicitly specify the degree of autonomy of each participating agent. Without that, the degree of autonomy is implicit and comes from the formula being evaluated. Related modeling approaches. Various forms of asymmetric synchronization are present in most process algebras. For example, $\pi$-calculus distinguishes between the action $\overline{c}\langle a\rangle$ of sending the value $a$ on channel $c$, and action $c(x)$ of listening on channel $c$ and storing whatever comes in variable $x$. CSP goes further, and allows for a similar degree of flexibility to ours through suitable combinations of deterministic choice, nondeterministic choice, and interface parallel operators. 
Other synchronization primitives are also possible, see e.g. (?) for an overview. Instead of allowing for multiple synchronization primitives, we come up with a single general primitive that can be instantiated to cover different kinds of interaction. We note in passing the similarity of our new repertoire functions in Definition 7.2 to state effectivity functions (?; ?) and especially alternating transition systems (?).

## 8 Partial Order Reduction Still Works

_Partial order reduction (POR)_ has been defined for temporal and temporal-epistemic logics without “next” (?; ?; ?; ?), and recently extended to strategic specifications (?). The idea is to take a network of automata (AMAS in our case), and use depth-first search through the space of global states to generate a reduced model that satisfies exactly the same formulas as the full model. Essentially, POR removes paths that change only the interleaving order of an “irrelevant” event with another event. Importantly, the method generates the reduced model directly from the representation, without generating the full model at all.

### 8.1 Correctness of POR in the New Semantics

POR is a powerful technique to contain state-space explosion and facilitate verification, cf. e.g. the experimental results in (?). In this paper, we extend the class of models, and modify their execution semantics. We need to show that the reduction algorithm in (?), defined for the flawed semantics of ability, is still correct after the modifications. Our main technical result in this respect is Theorem A.11, presented below. The detailed definitions, algorithms and proofs are technical (and rather tedious) adaptations of those in (?). We omit them here for lack of space, and refer the inquisitive reader to Appendix A.

Theorem A.11. Let $M=\mathit{IIS}(S^{\text{$\epsilon$}})$, $M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$ and let $A\subseteq{\mathbb{A}\mathrm{gt}}$ be a subset of agents. Moreover, let ${M{{}^{\prime}}}\subseteq M$ and $M^{\text{$\epsilon$}}{{}^{\prime}}\subseteq M^{\text{$\epsilon$}}$ be the reduced models generated by DFS with the choice of enabled events $E(g^{\prime})$ given by conditions C1, C2, C3 and the independence relation $I_{A,\mathit{PV}}$. For each $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ formula $\varphi$ over $\mathit{PV}$ that refers only to coalitions $\hat{A}\subseteq A$, we have:

1. $M,\iota\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$ iff ${M{{}^{\prime}}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$, and
2. $M^{\text{$\epsilon$}},\iota\models_{{}_{\mathrm{ir}}}\varphi$ iff $M^{\text{$\epsilon$}}{{}^{\prime}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}\varphi$.

Thus, the reduced models can be used to model-check the $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ properties of the full models. Proof idea. We aim at showing that the full model $M$ and the reduced one $M^{\prime}$ satisfy the same formulas of $\mathbf{ATL_{\mathrm{\mathrm{ir}}}^{*}}$ referring only to coalitions $\hat{A}\subseteq A$ and containing no nested strategic operators. Thanks to the restriction on the formulas, the proof can be reduced to showing that ${M{{}^{\prime}}}$ satisfies the condition $\textbf{AE}_{A}$, which states that for each strategy and for each path of the outcome of this strategy in $M$ there is an equivalent path in the outcome of the same strategy in $M^{\prime}$.
In order to show that $\textbf{AE}_{A}$ holds, we use the conditions on the selection of events $E(g^{\prime})$ to be enabled at state $g^{\prime}$ in $M^{\prime}$. The conditions include the requirement that $\epsilon$ is always selected, together with the three conditions ${\bf C1,C2,C3}$ adapted from (?; ?; ?). Intuitively, ${\bf C1}$ states that, along each path $\pi$ in $M$ which starts at $g^{\prime}$, each event that is dependent on an event in $E(g^{\prime})$ cannot be executed in $M$ unless an event in $E(g^{\prime})$ is executed first in $M$. ${\bf C2}$ says that $E(g^{\prime})$ either contains all the events, or only events that do not change the values of relevant propositions. ${\bf C3}$ guarantees that for every cycle in $M^{\prime}$ containing no $\epsilon$-transitions, there is at least one node $g^{\prime}$ in the cycle for which all the enabled events of $g^{\prime}$ are selected. First, we show that $M$ and $M^{\prime}$ are stuttering-equivalent, i.e., they have the same sets of paths modulo stuttering (that is, finite repetition of states on a path). The crucial observation here is that the reduction of $M$ under the conditions C1, C2, C3 is equivalent to the reduction of $M$ without the $\epsilon$-loops under the conditions C1, C2, C3 of (?), and then adding the $\epsilon$-loops to all the states of the reduced model. Therefore, for the paths without $\epsilon$-loops the stuttering equivalence can be shown similarly to (?, Theorem 12), while for the paths with $\epsilon$-loops we need more involved arguments in the proof. It turns out that, in addition to the fact that $M$ and $M^{\prime}$ are stuttering-equivalent, we can show that stuttering-equivalent paths of $M$ and $M^{\prime}$ have the same maximal sequence of visible events. From that, we can prove that $\textbf{AE}_{A}$ holds.

## 9 Conclusions

In this paper, we reconsider the asynchronous semantics of strategic ability for multi-agent systems, proposed in (?). We have already hinted at certain problems with the semantics in the extended abstract (?). Here, we demonstrate in detail how the straightforward combination of strategic reasoning and models of distributed systems leads to a counterintuitive interpretation of formulas. We identify three main sources of problems. First, the execution semantics does not handle reasoning about deadlock-inducing strategies well. Secondly, fairness conditions need to be redefined for strategic play. Thirdly, the class of representations lacks constructions to resolve the tension between the asymmetry imposed by strategic operators on the one hand, and the asymmetry of interaction, e.g., between communicating parties. We deal with the problems as follows. First, we change the execution semantics of strategies in asynchronous MAS by adding “silent” $\epsilon$-transitions in states where no “real” event can be executed. We also propose and study the condition of _opponent-reactiveness_, which assumes that the agents outside the coalition do not obstruct the execution of the strategy forever. Note that, while the assumption may produce a similar interpretation of formulas as in (?), it is now explicit – as opposed to (?), where it was “hardwired” in the semantics. The designer or verifier is free to adopt it or reject it, depending on their view of how the agents in the system behave and choose their actions. Secondly, we propose a new notion of _strategic concurrency-fairness_ that selects the fair executions of a strategy.
Thirdly, we allow for nondeterministic choices in agents’ repertoires. This way, we make it possible to explicitly specify that one agent has more control over the outcome of an event than the other participants of the event. The main technical result consists in proving that partial order reduction for strategic abilities (?) is still correct after the semantic modifications. Thus, the new, more intuitive semantics admits efficient verification.

Beyond $\mathbf{ATL_{\mathrm{ir}}}$. In this study, we have concentrated on the logic $\mathbf{ATL^{*}_{\mathrm{ir}}}$, i.e., the variant of $\mathbf{ATL_{\mathrm{}}^{*}}$ based on memoryless imperfect information strategies. Clearly, the concerns raised here are not entirely (and not even primarily) logical. $\mathbf{ATL^{*}_{\mathrm{ir}}}$ can be seen as a convenient way to specify the players and the winning conditions in a certain class of games (roughly speaking, $1.5$-player games with imperfect information, positional strategies, and $\mathbf{LTL}$ objectives). The semantic problems, and our solutions, apply to all such games interpreted over arenas given by asynchronous MAS. Moreover, most of the claims presented here are not specific to $\mathrm{ir}$-strategies. In fact, we conjecture that our examples of semantic side effects carry over to the other types of strategies (except for the existence of coalitions all of whose strategies have empty outcomes, which can happen neither under perfect information nor under perfect recall). Similarly, our technical results should carry over to the other strategy types (except for the correctness of POR, which does not hold for agents with perfect information). We leave the formal analysis of those cases for future work.

Other issues. An interesting question concerns the relationship between asynchronous and synchronous models. We conjecture that AMAS with explicit control can be simulated by concurrent game structures and alternating transition systems. Similarly, it should be possible to simulate CGS and ATS by AMAS with explicit control, at the expense of using a huge space of fully synchronized actions. For the model checking complexity in AMAS with explicit control, we expect the same results as in (?).

## Acknowledgements

We thank the anonymous reviewers for their insightful comments. The authors acknowledge the support of the National Centre for Research and Development, Poland (NCBR), and the Luxembourg National Research Fund (FNR), under the PolLux/FNR-CORE project STV (POLLUX-VII/1/2019). W. Penczek and T. Sidoruk acknowledge support from CNRS/PAS project PARTIES.

## References

* Alur et al. 1998 Alur, R.; Henzinger, T.; Mang, F.; Qadeer, S.; Rajamani, S.; and Tasiran, S. 1998\. MOCHA: Modularity in model checking. In Proceedings of CAV, volume 1427 of Lecture Notes in Computer Science, 521–525. Springer.
* Alur et al. 2001 Alur, R.; de Alfaro, L.; Grossu, R.; Henzinger, T.; Kang, M.; Kirsch, C.; Majumdar, R.; Mang, F.; and Wang, B.-Y. 2001\. jMocha: A model-checking tool that exploits design structure. In Proceedings of ICSE, 835–836. IEEE Computer Society Press.
* Alur, Henzinger, and Kupferman 1997 Alur, R.; Henzinger, T. A.; and Kupferman, O. 1997\. Alternating-time Temporal Logic. In Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS), 100–109. IEEE Computer Society Press.
* Alur, Henzinger, and Kupferman 1998 Alur, R.; Henzinger, T. A.; and Kupferman, O. 1998\. Alternating-time Temporal Logic. Lecture Notes in Computer Science 1536:23–60.
* Alur, Henzinger, and Kupferman 2002 Alur, R.; Henzinger, T. A.; and Kupferman, O. 2002\. Alternating-time Temporal Logic. Journal of the ACM 49:672–713. * Belardinelli et al. 2017a Belardinelli, F.; Lomuscio, A.; Murano, A.; and Rubin, S. 2017a. Verification of broadcasting multi-agent systems against an epistemic strategy logic. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, 91–97. * Belardinelli et al. 2017b Belardinelli, F.; Lomuscio, A.; Murano, A.; and Rubin, S. 2017b. Verification of multi-agent systems with imperfect information and public actions. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, São Paulo, Brazil, May 8-12, 2017, 1268–1276. * Belardinelli et al. 2018 Belardinelli, F.; Lomuscio, A.; Murano, A.; and Rubin, S. 2018\. Alternating-time temporal logic on finite traces. In Proceedings of IJCAI, 77–83. * Bergstra and Klop 1985 Bergstra, J. A., and Klop, J. W. 1985\. Algebra of communicating processes with abstraction. Theoretical Computer Science 37:77–121. * Bloem et al. 2015 Bloem, R.; Jacobs, S.; Khalimov, A.; Konnov, I.; Rubin, S.; Veith, H.; and Widder, J. 2015\. Decidability of Parameterized Verification. Synthesis Lectures on Distributed Computing Theory. Morgan & Claypool Publishers. * Bratman 1987 Bratman, M. E. 1987\. Intentions, Plans, and Practical Reason. Harvard University Press. * Bulling and Jamroga 2011 Bulling, N., and Jamroga, W. 2011\. Alternating epistemic mu-calculus. In Proceedings of IJCAI-11, 109–114. * Busard et al. 2014 Busard, S.; Pecheur, C.; Qu, H.; and Raimondi, F. 2014\. Improving the model checking of strategies under partial observability and fairness constraints. In Formal Methods and Software Engineering, volume 8829 of Lecture Notes in Computer Science. Springer. 27–42. * Busard et al. 2015 Busard, S.; Pecheur, C.; Qu, H.; and Raimondi, F. 2015\. Reasoning about memoryless strategies under partial observability and unconditional fairness constraints. Information and Computation 242:128–156. * Cermák et al. 2014 Cermák, P.; Lomuscio, A.; Mogavero, F.; and Murano, A. 2014\. MCMAS-SLK: A model checker for the verification of strategy logic specifications. In Proc. of CAV’14, volume 8559 of Lecture Notes in Computer Science, 525–532. Springer. * Cermák, Lomuscio, and Murano 2015 Cermák, P.; Lomuscio, A.; and Murano, A. 2015\. Verifying and synthesising multi-agent systems against one-goal strategy logic specifications. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA., 2038–2044. * Chen et al. 2013 Chen, T.; Forejt, V.; Kwiatkowska, M.; Parker, D.; and Simaitis, A. 2013\. PRISM-games: A model checker for stochastic multi-player games. In Proceedings of TACAS, volume 7795 of Lecture Notes in Computer Science, 185–191. Springer. * Clarke and Emerson 1981 Clarke, E., and Emerson, E. 1981\. Design and synthesis of synchronization skeletons using branching time temporal logic. In Proceedings of Logics of Programs Workshop, volume 131 of Lecture Notes in Computer Science, 52–71. * Clarke, Grumberg, and Peled 1999 Clarke, E. M.; Grumberg, O.; and Peled, D. A. 1999\. Model Checking. Cambridge, Massachusetts: The MIT Press. * Courcoubetis et al. 1992 Courcoubetis, C.; Vardi, M.; Wolper, P.; and Yannakakis, M. 1992\. Memory-efficient algorithms for the verification of temporal properties. Formal Methods in System Design 1(2/3):275–288. 
* Dima and Tiplea 2011 Dima, C., and Tiplea, F. L. 2011\. Model-checking ATL under imperfect information and perfect recall semantics is undecidable. CoRR abs/1102.4225. * Dima, Maubert, and Pinchinat 2014 Dima, C.; Maubert, B.; and Pinchinat, S. 2014\. The expressive power of epistemic $\mu$-calculus. CoRR abs/1407.5166. * Dima, Maubert, and Pinchinat 2015 Dima, C.; Maubert, B.; and Pinchinat, S. 2015\. Relating paths in transition systems: The fall of the modal mu-calculus. In Proceedings of MFCS, volume 9234 of Lecture Notes in Computer Science, 179–191. Springer. * Fagin et al. 1995 Fagin, R.; Halpern, J. Y.; Moses, Y.; and Vardi, M. Y. 1995\. Reasoning about Knowledge. MIT Press. * Fang et al. 2017 Fang, F.; Nguyen, T. H.; Pickles, R.; Lam, W. Y.; Clements, G. R.; An, B.; Singh, A.; Schwedock, B. C.; Tambe, M.; and Lemieux, A. 2017\. PAWS - A deployed game-theoretic application to combat poaching. AI Magazine 38(1):23–36. * Gerth et al. 1999 Gerth, R.; Kuiper, R.; Peled, D.; and Penczek, W. 1999\. A partial order approach to branching time logic model checking. Information and Computation 150:132–152. * Godefroid and Wolper 1994 Godefroid, P., and Wolper, P. 1994\. A partial approach to model checking. Information and Computation 110(2):305–326. * Goranko and Jamroga 2015 Goranko, V., and Jamroga, W. 2015\. State and path coalition effectivity models of concurrent multi-player games. Autonomous Agents and Multi-Agent Systems 1–40. * Guelev, Dima, and Enea 2011 Guelev, D. P.; Dima, C.; and Enea, C. 2011\. An alternating-time temporal logic with knowledge, perfect recall and past: axiomatisation and model-checking. Journal of Applied Non-Classical Logics 21(1):93–131. * Hoare 1978 Hoare, C. A. R. 1978\. Communicating sequential processes. Communications of the ACM 21(8):666–677. * Holzmann 1997 Holzmann, G. J. 1997\. The model checker SPIN. IEEE Transactions on Software Engineering 23(5):279–295. * Huang and van der Meyden 2014 Huang, X., and van der Meyden, R. 2014\. Symbolic model checking epistemic strategy logic. In Proceedings of AAAI, 1426–1432. * Jamroga et al. 2018 Jamroga, W.; Penczek, W.; Dembiński, P.; and Mazurkiewicz, A. 2018\. Towards partial order reductions for strategic ability. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018, 156–165. IFAAMAS. * Jamroga et al. 2019 Jamroga, W.; Knapik, M.; Kurpiewski, D.; and Mikulski, Ł. 2019\. Approximate verification of strategic abilities under imperfect information. Artificial Intelligence 277\. * Jamroga et al. 2020 Jamroga, W.; Penczek, W.; Sidoruk, T.; Dembiński, P.; and Mazurkiewicz, A. 2020\. Towards partial order reductions for strategic ability. Journal of Artificial Intelligence Research 68:817–850. * Jamroga, Knapik, and Kurpiewski 2017 Jamroga, W.; Knapik, M.; and Kurpiewski, D. 2017\. Fixpoint approximation of strategic abilities under imperfect information. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 1241–1249. IFAAMAS. * Jamroga, Penczek, and Sidoruk 2021 Jamroga, W.; Penczek, W.; and Sidoruk, T. 2021\. Strategic abilities of asynchronous agents: Semantic side effects. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021, 1545–1547. ACM. * Kacprzak and Penczek 2004 Kacprzak, M., and Penczek, W. 2004\. Unbounded model checking for alternating-time temporal logic. 
In Proceedings of the 3rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2004, 646–653. IEEE Computer Society. * Kurpiewski et al. 2021 Kurpiewski, D.; Pazderski, W.; Jamroga, W.; and Kim, Y. 2021\. STV+Reductions: Towards practical verification of strategic ability using model reductions. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021, 1770–1772. IFAAMAS. * Kurpiewski, Jamroga, and Knapik 2019 Kurpiewski, D.; Jamroga, W.; and Knapik, M. 2019\. STV: Model checking for strategies under imperfect information. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019, 2372–2374. IFAAMAS. * Lomuscio and Raimondi 2006 Lomuscio, A., and Raimondi, F. 2006\. MCMAS : A model checker for multi-agent systems. In Proceedings of TACAS, volume 4314 of Lecture Notes in Computer Science, 450–454. Springer. * Lomuscio, Penczek, and Qu 2010a Lomuscio, A.; Penczek, W.; and Qu, H. 2010a. Partial order reductions for model checking temporal epistemic logics over interleaved multi-agent systems. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2010, 659–666. * Lomuscio, Penczek, and Qu 2010b Lomuscio, A.; Penczek, W.; and Qu, H. 2010b. Partial order reductions for model checking temporal-epistemic logics over interleaved multi-agent systems. Fundam. Inform. 101(1-2):71–90. * Lomuscio, Qu, and Raimondi 2017 Lomuscio, A.; Qu, H.; and Raimondi, F. 2017\. MCMAS: An open-source model checker for the verification of multi-agent systems. International Journal on Software Tools for Technology Transfer 19(1):9–30. * Lomuscio, van der Meyden, and Ryan 2000 Lomuscio, A.; van der Meyden, R.; and Ryan, M. 2000\. Knowledge in multiagent systems: initial configurations and broadcast. ACM Transactions on Computational Logic 1(2):247–284. * Malvone, Murano, and Sorrentino 2017 Malvone, V.; Murano, A.; and Sorrentino, L. 2017\. Hiding actions in multi-player games. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, São Paulo, Brazil, May 8-12, 2017, 1205–1213. * Milner, Parrow, and Walker 1992 Milner, R.; Parrow, J.; and Walker, D. 1992\. A calculus of mobile processes, I. Information and Computation 100(1):1–40. * Milner 1980 Milner, R. 1980\. A Calculus of Communicating Systems, volume 92 of Lecture Notes in Computer Science. Springer. * Pauly 2001a Pauly, M. 2001a. Logic for Social Software. Ph.D. Dissertation, University of Amsterdam. * Pauly 2001b Pauly, M. 2001b. A logical framework for coalitional effectivity in dynamic procedures. Bulletin of Economic Research 53(4):305–324. * Pauly 2002 Pauly, M. 2002\. A modal logic for coalitional power in games. Journal of Logic and Computation 12(1):149–166. * Peled 1993 Peled, D. 1993\. All from one, one for all: On model checking using representatives. In Proceedings of the 5th International Conference on Computer Aided Verification, LNCS 697, 409–423. Springer-Verlag. * Peled 1994 Peled, D. 1994\. Combining partial order reductions with on-the-fly model-checking. In Proceedings of the 6th International Conference on Computer Aided Verification, LNCS 818, 377–390. Springer-Verlag. * Peled 1996 Peled, D. 1996\. Partial order reductions: Model checking using representatives. In Proceedings of the 21st International Symposium on Mathematical Foundations of Computer Science (MFCS’96), volume 1113 of LNCS, 93–112. Springer-Verlag. * Penczek et al. 
2000 Penczek, W.; Szreter, M.; Gerth, R.; and Kuiper, R. 2000\. Improving partial order reductions for universal branching time properties. Fundamenta Informaticae 43:245–267. * Pilecki, Bednarczyk, and Jamroga 2014 Pilecki, J.; Bednarczyk, M.; and Jamroga, W. 2014\. Synthesis and verification of uniform strategies for multi-agent systems. In Proceedings of CLIMA XV, volume 8624 of Lecture Notes in Computer Science, 166–182. Springer. * Priese 1983 Priese, L. 1983\. Automata and concurrency. Theoretical Computer Science 25(3):221 – 265. * Schlingloff, Stubert, and Jamroga 2016 Schlingloff, B.; Stubert, H.; and Jamroga, W. 2016\. Collaborative embedded systems - a case study. In Proceedings of the 3rd International Workshop on Emerging Ideas and Trends in Engineering of Cyber-Physical Systems (EITEC@CPSWeek), 17–22. * Schobbens 2004 Schobbens, P. Y. 2004\. Alternating-time logic with imperfect recall. Electronic Notes in Theoretical Computer Science 85(2):82–93. ## Appendix A Partial Order Reduction: Details All the results in this appendix are formulated and proved for the semantics of $\mathbf{ATL_{\mathrm{ir}}}$ over undeadlocked AMAS with explicit control. Also, we restrict the formulas to $\mathbf{ATL_{\mathrm{}}^{*}}$ without nested strategic modalities and the next step operator $\mathrm{X}\,$ (“simple $\mathbf{ATL_{\mathrm{}}^{*}}$”, or $\mathbf{sATL_{\mathrm{}}^{*}}$). As noted in (?), $\mathbf{sATL_{\mathrm{}}^{*}}$ is sufficient for most practical specifications and much more expressive than $\mathbf{LTL}$. Yet, as we prove below, it enjoys the same efficiency of partial order reduction. We begin by introducing the relevant notions of equivalence. Then, we propose conditions on reduced models that preserve the stuttering equivalence with and without the assumption of _opponent-reactiveness_ (React). We point out algorithms that generate such models, and prove their correctness. It should be stressed that the reduction scheme proposed here is general, in the sense that it preserves equivalent representatives of both fair and unfair paths in the model. In particular, we do _not_ propose a variant of POR, optimized for strategic concurrency-fair paths, analogous to reductions of (?) for CF. A variant of POR for $\mathbf{sATL_{\mathrm{\mathrm{ir}}}}$ under the SCF assumption is planned for future work. ### A.1 Properties of Submodels Given an undeadlocked AMAS $S^{\text{$\epsilon$}}$, partial order reduction attempts to generate only a subset of states and transitions that is sufficient for verification of $S^{\text{$\epsilon$}}$, i.e., a relevant _submodel_ of $\mathit{IIS}(S^{\text{$\epsilon$}})$. ###### Definition A.1 (Submodel). Let models $M,{M{{}^{\prime}}}$ extend the same AMAS $S^{\text{$\epsilon$}}$, so that $St^{\prime}\subseteq St$, $\iota\in St^{\prime}$, $T$ is an extension of $T^{\prime}$, and $V^{\prime}=V|_{St^{\prime}}$. Then, we write ${M{{}^{\prime}}}\subseteq M$ and call ${M{{}^{\prime}}}$ a _submodel_ of $M$. Note that, for each $g\in St^{\prime}$, we have $\Pi_{{M{{}^{\prime}}}}(g)\subseteq\Pi_{M}(g)$. ###### Lemma A.2. Let ${M{{}^{\prime}}}\subseteq M$, $A\in{\mathbb{A}\mathrm{gt}}$, $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$. Then, we have $\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{A})=\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})\cap\Pi_{{M{{}^{\prime}}}}(\iota)$. 
_Proof._ Note that each $\mathrm{ir}$-joint strategy in $M$ is also a well-defined $\mathrm{ir}$-joint strategy in ${M{{}^{\prime}}}$, as it is defined on the local states of each agent of an AMAS which is extended by both $M$ and ${M{{}^{\prime}}}$. The lemma follows directly from the definition of React-outcome (Def. 5.1 and 7.3), plus the fact that $\Pi_{{M{{}^{\prime}}}}(\iota)\subseteq\Pi_{M}(\iota)$. $\blacksquare$

###### Lemma A.3.

Let $M$ be a model, $\pi,\pi^{\prime}\in\Pi_{M}(\iota)$, and for some $i\in{\mathbb{A}\mathrm{gt}}:$ $\mathit{Evt}(\pi)\mid_{\mathit{Evt}_{i}}=\mathit{Evt}(\pi^{\prime})\mid_{\mathit{Evt}_{i}}$. Then, for each $\mathrm{ir}$-strategy $\sigma_{i}$, we have $\pi\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$ iff $\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$.

_Proof._ Let $\mathit{Evt}(\pi)\mid_{\mathit{Evt}_{i}}=b_{0}b_{1}\ldots$ be the sequence of the events of agent $i$ in $\pi$. For each $b_{j}$ let $\pi[b_{j}]$ denote the global state from which $b_{j}$ is executed in $\pi$. By induction we can show that for each $j\geq 0$, we have $\pi[b_{j}]^{i}=\pi^{\prime}[b_{j}]^{i}$. For $j=0$ it is easy to see that $\pi[b_{0}]^{i}=\pi^{\prime}[b_{0}]^{i}=\iota^{i}$. Assume that the thesis holds for $j=k$. The induction step follows from the fact that the local evolution $T_{i}$ is a function, so if $\pi[b_{k}]^{i}=\pi^{\prime}[b_{k}]^{i}=l$ for some $l\in L_{i}$, then $\pi[b_{k+1}]^{i}=\pi^{\prime}[b_{k+1}]^{i}=T_{i}(l,b_{k})$. Thus, by Def. 5.1 and 7.3, for each $\mathrm{ir}$-strategy $\sigma_{i}$ we have $\pi\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$ iff $\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$, which concludes the proof. $\blacksquare$

Lemma A.3 can be easily generalized to joint strategies $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$.

### A.2 Stuttering Equivalence

Let $M$ be a model, ${M{{}^{\prime}}}\subseteq M$, and $\mathit{PV}\subseteq\mathcal{PV}$ a subset of propositions. Stuttering equivalence says that two paths can be divided into corresponding finite segments, each satisfying exactly the same propositions. Stuttering path equivalence (the property is usually called _stuttering trace equivalence_ (?); we use a slightly different name to avoid confusion with Mazurkiewicz traces, also used in this paper) requires two models to always have corresponding, stuttering-equivalent paths.

###### Definition A.4 (Stuttering equivalence).

Two paths $\pi\in\Pi_{M}(\iota)$ and $\pi^{\prime}\in\Pi_{{M{{}^{\prime}}}}(\iota)$ are stuttering equivalent, denoted $\pi\equiv_{s}\pi^{\prime}$, if there exists a partition $B_{0}=(\pi[0],\dots,\pi[i_{1}-1]),\ B_{1}=(\pi[i_{1}],\dots,\pi[i_{2}-1]),\ \ldots$ of the states of $\pi$, and an analogous partition $B^{\prime}_{0},B^{\prime}_{1},\ldots$ of the states of $\pi^{\prime}$, s.t. for each $j\geq 0:$ $B_{j}$ and $B^{\prime}_{j}$ are nonempty and finite, and $V(g)\cap\mathit{PV}=V^{\prime}(g^{\prime})\cap\mathit{PV}$ for every $g\in B_{j}$ and $g^{\prime}\in B^{\prime}_{j}$. Models $M$ and ${M{{}^{\prime}}}$ are stuttering path equivalent, denoted $M\equiv_{s}{M{{}^{\prime}}}$, if for each path $\pi\in\Pi_{M}(\iota)$, there is a path $\pi^{\prime}\in\Pi_{{M{{}^{\prime}}}}(\iota)$ such that $\pi\equiv_{s}\pi^{\prime}$. (Typically, the definition also contains the symmetric condition, which in our case always holds for $M$ and its submodel ${M{{}^{\prime}}}$, as $\Pi_{{M{{}^{\prime}}}}(\iota)\subseteq\Pi_{M}(\iota)$.)

###### Theorem A.5 ((?)).
If $M\equiv_{s}{M{{}^{\prime}}}$, then we have $M,\iota\models\varphi$ iff ${M{{}^{\prime}}},\iota^{\prime}\models\varphi$, for any $\mathbf{LTL_{-X}}$ formula $\varphi$ over $\mathit{PV}$.

### A.3 Independence of Events

Intuitively, an event is invisible iff it does not change the valuations of the propositions. (This concept of invisibility is technical, and is not connected to the view of any agent in the sense of (?).) Additionally, we can designate a subset of agents $A$ whose events are visible by definition. Furthermore, two events are independent iff they are not events of the same agent and at least one of them is invisible.

###### Definition A.6 (Invisible events).

Consider a model $M$, a subset of agents $A\subseteq{\mathbb{A}\mathrm{gt}}$, and a subset of propositions $\mathit{PV}\subseteq\mathcal{PV}$. An event $\alpha\in\mathit{Evt}$ is invisible wrt. $A$ and $\mathit{PV}$ if $Agent(\alpha)\cap A=\emptyset$ and for each two global states $g,g^{\prime}\in St$ we have that $g\stackrel{{\scriptstyle\alpha}}{{\longrightarrow}}g^{\prime}$ implies $V(g)\cap\mathit{PV}=V(g^{\prime})\cap\mathit{PV}$. The set of all invisible events for $A,\mathit{PV}$ is denoted by $Invis_{A,\mathit{PV}}$, and its complement, the set of visible events, by $Vis_{A,\mathit{PV}}=\mathit{Evt}\setminus Invis_{A,\mathit{PV}}$.

###### Definition A.7 (Independent events).

The notion of _independence_ $I_{A,\mathit{PV}}\subseteq\mathit{Evt}\times\mathit{Evt}$ is defined as: $I_{A,\mathit{PV}}=\{(\alpha,\alpha^{\prime})\in\mathit{Evt}\times\mathit{Evt}\mid Agent(\alpha)\cap Agent(\alpha^{\prime})=\emptyset\}\ \setminus\ (Vis_{A,\mathit{PV}}\times Vis_{A,\mathit{PV}})$. Events $\alpha,\alpha^{\prime}\in\mathit{Evt}$ are called dependent if $(\alpha,\alpha^{\prime})\not\in I_{A,\mathit{PV}}$. If it is clear from the context, we omit the subscript $\mathit{PV}$.

### A.4 Preserving Stuttering Equivalence

Rather than generating the full model $M=\mathit{IIS}(S^{\text{$\epsilon$}})$, one can generate a reduced model ${M{{}^{\prime}}}$ satisfying the following property: $\textbf{AE}_{A}:\>\forall\sigma_{A}\!\in\!\Sigma_{A}^{\mathrm{ir}}\quad\forall\pi\!\in\!\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})$ $\qquad\exists\pi^{\prime}\!\in\!\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{A})\quad\pi\!\equiv_{s}\!\pi^{\prime}$. We define a class of algorithms that generate reduced models satisfying $\textbf{AE}_{A}$, and then prove that these models preserve $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$.

Algorithms for partial order reduction. POR is used to reduce the size of models while preserving satisfaction for a class of formulas. The standard DFS (?) or DDFS (?) is modified in such a way that, from each visited state $g$, the event $\alpha$ used to compute a successor state $g_{1}$ with $g\stackrel{{\scriptstyle\alpha}}{{\rightarrow}}g_{1}$ is selected from $E(g)\cup\{\epsilon\}$, where $E(g)\subseteq enabled(g)\setminus\{\epsilon\}$. That is, the algorithm always selects $\epsilon$, plus a subset of the enabled events at $g$. Let $A\subseteq{\mathbb{A}\mathrm{gt}}$. The conditions on the heuristic selection of $E(g)$ given below are inspired by (?; ?; ?); a Python sketch of the resulting search is given at the end of this appendix.

C1 Along each path $\pi$ in $M$ that starts at $g$, each event that is dependent on an event in $E(g)$ cannot be executed in $\pi$ without an event in $E(g)$ being executed first in $\pi$.
Formally, $\forall\pi\in\Pi_{M}(g)$ such that $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}\ldots$ with $g_{0}=g$, and $\forall b\in\mathit{Evt}$ such that $(b,c)\notin I_{A}$ for some $c\in E(g)$, if $\alpha_{i}=b$ for some $i\geq 0$, then $\alpha_{j}\in E(g)$ for some $j<i$. C2 If $E(g)\neq enabled(g)\setminus\\{{\epsilon}\\}$, then $E(g)\subseteq Invis_{A}$. C3 For every cycle in ${M{{}^{\prime}}}$ containing no $\epsilon$-transitions, there is at least one node $g$ in the cycle for which $E(g)=enabled(g)\setminus\\{{\epsilon}\\}$, i.e., for which all the successors of $g$ are expanded. ###### Theorem A.8. Let $A\subseteq{\mathbb{A}\mathrm{gt}}$, $M=\mathit{IIS}(S^{\text{$\epsilon$}})$, and ${M{{}^{\prime}}}\subseteq M$ be the reduced model generated by DFS with the choice of $E(g^{\prime})$ for $g^{\prime}\in St^{\prime}$ given by conditions C1, C2, C3 and the independence relation $I_{A}$. Then, ${M{{}^{\prime}}}$ satisfies $\textbf{AE}_{A}$. _Proof._ Let ${M{{}^{\prime}}}\subseteq M=\mathit{IIS}(S^{\text{$\epsilon$}})$ be the reduced model generated as specified. Notice that the reduction of $M$ under the conditions C1, C2, C3 above is equivalent to the reduction of $M$ without the $\epsilon$-loops under the conditions C1, C2, C3 of (?), and then adding the $\epsilon$-loops to all the states of the reduced model. Although the setting is slightly different, it can be shown similarly to (?, Theorem 12) that the conditions C1, C2, C3 guarantee that the models: (i) $M$ without $\epsilon$-loops and (ii) ${M{{}^{\prime}}}$ without $\epsilon$-loops are stuttering path equivalent. More precisely, for each path $\pi=g_{0}a_{0}g_{1}a_{1}\cdots$ with $g_{0}=\iota$ (without $\epsilon$-transitions) in $M$ there is a stuttering equivalent path $\pi^{\prime}=g^{\prime}_{0}a^{\prime}_{0}g^{\prime}_{1}a^{\prime}_{1}\cdots$ with $g^{\prime}_{0}=\iota$ (without $\epsilon$-transitions) in $M^{\prime}$ such that $\mathit{Evt}(\pi)|_{Vis_{A}}=\mathit{Evt}(\pi^{\prime})|_{Vis_{A}}$, i.e., $\pi$ and $\pi^{\prime}$ have the same maximal sequence of visible events for $A$. (*) We will now prove that this implies $M\equiv_{s}{M{{}^{\prime}}}$. Removing the $\epsilon$-loops from $M$ eliminates two kinds of paths: (a) paths with infinitely many “proper” events, and (b) paths ending with an infinite sequence of $\epsilon$-transitions. Consider a path $\pi$ of type (a) from $M$. Notice that the path $\pi_{1}$, obtained by removing the $\epsilon$-transitions from $\pi$, is stuttering-equivalent to $\pi$. Moreover, by (*), there exists a path $\pi_{2}$ in ${M{{}^{\prime}}}$ without $\epsilon$-transitions, which is stuttering-equivalent to $\pi_{1}$. By transitivity of the stuttering equivalence, we have that $\pi_{2}$ is stuttering equivalent to $\pi$. Since $\pi_{2}$ must also be a path in ${M{{}^{\prime}}}$, this concludes this part of the proof. Consider a path $\pi$ of type (b) from $M$, i.e., $\pi$ ends with an infinite sequence of $\epsilon$-transitions. Let $\pi_{1}$ be the sequence obtained from $\pi$ after removing $\epsilon$-transitions, and $\pi_{2}$ be any infinite path without $\epsilon$-transitions such that $\pi_{1}$ is its prefix. Then, it follows from (*) that there is a stuttering equivalent path $\pi_{2}^{\prime}=g^{\prime}_{0}a^{\prime}_{0}g^{\prime}_{1}a^{\prime}_{1}\cdots$ with $g^{\prime}_{0}=\iota$ in $M^{\prime}$ such that $\mathit{Evt}(\pi_{2})|_{Vis_{A}}=\mathit{Evt}(\pi_{2}^{\prime})|_{Vis_{A}}$. 
Consider the minimal finite prefix $\pi_{1}^{\prime}$ of $\pi_{2}^{\prime}$ such that $\mathit{Evt}(\pi_{1}^{\prime})|_{Vis_{A}}=\mathit{Evt}(\pi_{1})|_{Vis_{A}}$. Clearly, $\pi_{1}^{\prime}$ is a sequence in $M^{\prime}$ and can be extended with an infinite number of $\epsilon$-transitions to the path $\pi^{\prime}$ in $M^{\prime}$. It is easy to see that $\pi$ and $\pi^{\prime}$ are stuttering equivalent. So far, we have shown that our reduction under the conditions C1, C2, C3 guarantees that the models $M$ and ${M{{}^{\prime}}}$ are stuttering path equivalent, and more precisely that for each path $\pi=g_{0}a_{0}g_{1}a_{1}\cdots$ with $g_{0}=\iota$ in $M$ there is a stuttering equivalent path $\pi^{\prime}=g^{\prime}_{0}a^{\prime}_{0}g^{\prime}_{1}a^{\prime}_{1}\cdots$ with $g^{\prime}_{0}=\iota$ in $M^{\prime}$ such that $\mathit{Evt}(\pi)|_{Vis_{A}}=\mathit{Evt}(\pi^{\prime})|_{Vis_{A}}$, i.e., $\pi$ and $\pi^{\prime}$ have the same maximal sequence of visible events for $A$. To show that ${M{{}^{\prime}}}$ satisfies $\textbf{AE}_{A}$, consider an $\mathrm{ir}$-joint strategy $\sigma_{A}$ and $\pi\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})$. As demonstrated above, there is $\pi^{\prime}\in\Pi_{{M{{}^{\prime}}}}(\iota)$ such that $\pi\equiv_{s}\pi^{\prime}$ and $\mathit{Evt}(\pi)|_{Vis_{A}}=\mathit{Evt}(\pi^{\prime})|_{Vis_{A}}$. Since $\mathit{Evt}_{i}\subseteq Vis_{A}$ for each $i\in A$, the same sequence of events of each $\mathit{Evt}_{i}$ is executed in $\pi$ and $\pi^{\prime}$. Thus, by the generalization of Lemma A.3 to $\mathrm{ir}$-joint strategies we get $\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})$. So, by Lemma A.2 we have $\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M{{}^{\prime}}}(\iota,\sigma_{A})$. $\blacksquare$ Algorithms generating reduced models, in which the choice of $E(g)$ is given by similar conditions, can be found for instance in (?; ?; ?; ?; ?; ?). POR for proactive opponents. The same reduction still works without the assumption of opponent-reactiveness (React). ###### Theorem A.9. Let $M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$ be an undeadlocked IIS. Then, its reduced model $M^{\text{$\epsilon$}}{{}^{\prime}}$, generated as in Theorem A.8, satisfies $\textbf{AE}_{A}$. _Proof (Sketch)._ In this setting, there is no auxiliary agent in the AMAS, and $\epsilon$-transitions are added directly to the IIS in accordance with Definition 4.3. Hence, not every global state of $M^{\text{$\epsilon$}}$ necessarily has an $\epsilon$ loop, but only those where a miscoordinating combination of events exists. However, this does not impact the reduction itself. First, note that Lemma A.2 still holds, directly from the definition of outcome (Definition 3.5). Furthermore, because in the undeadlocked IIS $M^{\text{$\epsilon$}}$ the $\epsilon$-transitions do not belong to any agent, Lemma A.3, where sequences of some agent $i$’s events are considered, also holds. Note that the React condition only restricts the outcome sets, and not the model itself: both $M=IIS(S^{\epsilon})$ and $M^{\text{$\epsilon$}}$ contain the same two types (a) and (b) of paths with $\epsilon$-transitions as discussed in Theorem A.8. Hence, following its reasoning, it can first be shown that models $M^{\text{$\epsilon$}}$ and $M^{\text{$\epsilon$}}{{}^{\prime}}$ without their $\epsilon$-transitions are stuttering path equivalent, and that it remains the case also when both types of paths including $\epsilon$ loops are included. 
Note that the remark about ${M{{}^{\prime}}}$ being equivalent to reducing $M$ without $\epsilon$-loops and adding them to each global state obviously does not apply to $M^{\text{$\epsilon$}}$ (not every global state of $M^{\text{$\epsilon$}}$ has them in the first place). However, this observation has no bearing on the proof. As before, $\epsilon$ is always selected in addition to the subset $E(g)$, ensuring preservation of representative paths with $\epsilon$ in $M^{\text{$\epsilon$}}{{}^{\prime}}$. $\blacksquare$

Correctness of reductions satisfying $\textbf{AE}_{A}$. We show that the reduced models satisfying $\textbf{AE}_{A}$ preserve $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$.

###### Theorem A.10.

Let $A\subseteq{\mathbb{A}\mathrm{gt}}$, and let models ${M{{}^{\prime}}}\subseteq M$, $M^{\text{$\epsilon$}}{{}^{\prime}}\subseteq M^{\text{$\epsilon$}}$ satisfy $\textbf{AE}_{A}$. For each $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ formula $\varphi$ over $\mathit{PV}$, that refers only to coalitions $\hat{A}\subseteq A$, we have that:

1. $M,\iota\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$ iff ${M{{}^{\prime}}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$, and
2. $M^{\text{$\epsilon$}},\iota\models_{{}_{\mathrm{ir}}}\varphi$ iff $M^{\text{$\epsilon$}}{{}^{\prime}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}\varphi$.

_Proof._ Proof by induction on the structure of $\varphi$. We show the case $\varphi=\langle\!\langle{\hat{A}}\rangle\!\rangle\gamma$. The cases for $\neg,\land$ are straightforward. Notice that $\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{\hat{A}})\subseteq\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{\hat{A}})$, which together with the condition $\textbf{AE}_{A}$ implies that the sets $\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{\hat{A}})$ and $\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{\hat{A}})$ are stuttering path equivalent. Analogously, this is the case for $\mathit{out}_{{M{{}^{\prime}}}}(\iota,\sigma_{\hat{A}})\subseteq\mathit{out}_{M}(\iota,\sigma_{\hat{A}})$, i.e., without the React assumption. Hence, (1) and (2) follow from Theorem A.5. $\blacksquare$

Together with Theorems A.8 and A.9, we obtain the following.

###### Theorem A.11.

Let $M=\mathit{IIS}(S^{\text{$\epsilon$}})$, $M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$ and let ${M{{}^{\prime}}}\subseteq M$ and $M^{\text{$\epsilon$}}{{}^{\prime}}\subseteq M^{\text{$\epsilon$}}$ be the reduced models generated by DFS with the choice of $E(g^{\prime})$ for $g^{\prime}\in St^{\prime}$ given by conditions C1, C2, C3 and the independence relation $I_{A,\mathit{PV}}$. For each $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ formula $\varphi$ over $\mathit{PV}$, that refers only to coalitions $\hat{A}\subseteq A$, we have:

1. $M,\iota\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$ iff ${M{{}^{\prime}}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$, and
2. $M^{\text{$\epsilon$}},\iota\models_{{}_{\mathrm{ir}}}\varphi$ iff $M^{\text{$\epsilon$}}{{}^{\prime}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}\varphi$.

This concludes the proof that the adaptation of POR for $\mathbf{LTL_{-X}}$ to $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$, originally presented in (?), remains sound in the updated semantics proposed in Sections 4 and 7.
That is, the structural condition $\textbf{AE}_{A}$ is sufficient to obtain correct reductions for $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ with and without the new opponent-reactiveness assumption (Theorem A.11). Thanks to that, one can potentially reuse or adapt the existing POR algorithms and tools for $\mathbf{LTL_{-X}}$, and the actual reductions are likely to be substantial.
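To fix intuitions about the search skeleton behind Theorems A.8 and A.9, the following is a minimal Python sketch (ours, not taken from the paper or from any of the cited tools). It assumes deterministic successors, delegates condition C1 to a user-supplied heuristic `candidate_E`, and enforces C2 and C3 locally; all names are illustrative.

```python
from typing import Callable, Hashable, List, Set, Tuple

def reduced_model(
    initial: Hashable,
    enabled: Callable[[Hashable], Set[str]],      # enabled "real" events at g (epsilon excluded)
    succ: Callable[[Hashable, str], Hashable],    # deterministic successor: g --a--> g'
    candidate_E: Callable[[Hashable], Set[str]],  # heuristic subset of enabled(g), aiming at C1
    invisible: Callable[[str], bool],             # invisibility wrt. A and PV (Definition A.6)
) -> Tuple[Set[Hashable], List[Tuple[Hashable, str, Hashable]]]:
    """DFS generation of a reduced model; epsilon is always selected as a self-loop."""
    visited: Set[Hashable] = set()
    on_stack: Set[Hashable] = set()   # states on the current DFS stack, used for C3
    transitions: List[Tuple[Hashable, str, Hashable]] = []

    def dfs(g: Hashable) -> None:
        visited.add(g)
        on_stack.add(g)
        full, E = enabled(g), candidate_E(g)
        E = E or full   # never select an empty subset while real events are enabled
        # C2: a proper subset of enabled(g) may only contain invisible events.
        if E != full and not all(invisible(a) for a in E):
            E = full
        # C3 (cycle proviso): if a selected event closes a cycle on the DFS stack,
        # expand g fully, so that every epsilon-free cycle has a fully expanded state.
        if any(succ(g, a) in on_stack for a in E):
            E = full
        for a in E:
            g2 = succ(g, a)
            transitions.append((g, a, g2))
            if g2 not in visited:
                dfs(g2)   # recursive DFS; an explicit stack would be used for large models
        transitions.append((g, "epsilon", g))  # epsilon loop at every state of IIS(S^eps)
        on_stack.discard(g)

    dfs(initial)
    return visited, transitions
```

A faithful implementation would additionally establish C1 statically from the dependency structure of the AMAS, in the spirit of ample- or stubborn-set constructions, rather than trusting the callback; the sketch only shows where the three conditions enter the search.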
2024-09-04T02:54:58.237523
2020-03-09T02:21:30
2003.03891
{ "authors": "Nursefa Zengin, Baris Fidan", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26107", "submitter": "Nursefa Zengin", "url": "https://arxiv.org/abs/2003.03891" }
arxiv-papers
# Adaptive Extremum Seeking Using Recursive Least Squares

Nursefa Zengin, Baris Fidan School of Engineering, University of Waterloo, ON, Canada<EMAIL_ADDRESS>

###### Abstract

Extremum seeking (ES) optimization approach has been very popular due to its non-model based analysis and implementation. This approach has been mostly used with gradient based search algorithms. Since least squares (LS) algorithms are typically observed to be superior, in terms of convergence speed and robustness to measurement noises, over gradient algorithms, it is expected that LS based ES schemes will also provide faster convergence and robustness to sensor noises. In this paper, with this motivation, a recursive least squares (RLS) estimation based ES scheme is designed and analysed for application to scalar parameter and vector parameter static map and dynamic systems. Asymptotic convergence to the extremum is established for all the cases. Simulation studies are provided to validate the performance of the proposed scheme.

## I Introduction

Extremum seeking (ES) is a popular technique for adaptive optimization of the performance of dynamic systems by tuning certain system parameters based on measurements. The main advantage of this technique is that limited or no knowledge of the plant model is required. ES is suitable for optimizing the performance of systems with complex dynamics, without suitable measurements to validate a model, or with time-varying disturbances that are difficult to model accurately ([1]). The most common ES algorithm used in the literature is the classical band-pass filtering based one, in which the gradient of the output with respect to the input determines the direction in which the input variables are adjusted. This method was successfully applied to different application areas including biochemical reactors ([2, 3]), ABS control in automotive brakes ([4, 1, 5, 6, 7]), mobile robots ([8, 9, 10]), and mobile sensor networks ([11, 12, 13]). Among other types of ES algorithms, perturbation based ES relies on added perturbation signals to estimate the gradient of the output by correlating the perturbations. To overcome the implementation drawbacks of introducing perturbation signals, some methods that are free of perturbation signals have been developed by [14, 15, 16]. The convergence rate of conventional ES algorithms is a limiting factor in many applications. Recursive Least Squares (RLS) based estimation has significant potential in relaxing this limitation and improving robustness to measurement noises. [17, 15, 18] used certain LS based techniques in their ES algorithms to obtain better convergence results. [17] estimated the gradient of the output with respect to the input using an LS based adaptive law for a class of nonlinear dynamic systems together with a sinusoidal perturbation signal. [15] used past data of a performance map to estimate the gradient of this performance map by a first order LS fit. The proposed method used no dither signal, but utilized a time window of history data of the performance map. [18] provided general results and a framework for the design of ES schemes applied to systems with parametric uncertainties and used an LS algorithm to estimate the unknown parameters of the known system. In the absence of parameter knowledge, a series of control/optimization schemes have been proposed in the literature utilizing certain ES tools such as switching methods ([19]), signal perturbation for persistence of excitation, and band-pass filtering ([19], [20], [21], [18]).
[22] and [23] used a discrete-time ES scheme to estimate the gradient as a time-varying parameter using LS-like update laws. They removed the need for an averaging system in order to achieve the convergence of ES. The designs are simulated for static unknown maps, systems with unknown discrete-time dynamics, and sampled-data systems. In this paper, a continuous-time RLS parameter estimation based ES scheme is designed and analysed for scalar parameter and vector parameter static map and dynamic systems. Asymptotic convergence to the extremum is established for each case. Numerical simulation examples are provided to validate the performance of the proposed scheme, comparing the results with a gradient parameter estimation based one. A specific simulation example, the antilock braking system (ABS) of [1], is studied to compare the performance of RLS estimation based ES with classical gradient based ES. The contents of this paper are as follows. Section II is dedicated to the problem statement. In Section III, the existing classical perturbation based ES is reviewed. The proposed RLS estimation based adaptive ES scheme is developed for scalar parameter systems in Section IV, and for vector parameter systems in Section V. Comparative simulation examples are presented in Section VI. Finally, conclusions of the paper are given in Section VII.

## II Problem Statement

The ES problem of interest is defined for static map systems and dynamic systems separately in the following subsections.

### II-A Static Maps

Consider a concave static map system $\displaystyle y=h_{s}(u)=\bar{h}_{s}(\theta^{*},u),\quad\theta^{*}=\begin{bmatrix}\theta^{*}_{1}&\cdots&\theta^{*}_{N}\end{bmatrix}^{T},$ (1) where $\theta^{*}\in\mathbb{R}^{N}$ is a fixed unknown parameter vector, $u\in\mathbb{R}^{m}$ is the input and $y\in\mathbb{R}$ is the output of the system. Assume that the control input signal $u$ is generated by a smooth control law $u=\alpha(\theta)$ (2) parametrized by a control parameter vector $\theta\in\mathbb{R}^{N}$.

###### Assumption 1

The static map $\bar{h}_{s}(\theta^{*},u)$ is smoothly differentiable.

###### Assumption 2

$h_{s}(u)=\bar{h}_{s}(\theta^{*},u)$ has a single extremum (maximum) $y^{*}$ at $u=\alpha(\theta^{*}).$

The control objective is to maximize the steady-state value of $y$ but without requiring the knowledge of $\theta^{*}$ or the system function $h_{s}$.

### II-B Dynamic Systems

Consider a general multi-input-single-output (MISO) nonlinear system $\dot{x}=f(x,u)=\bar{f}(\theta^{*},x,u),$ (3) $y=h_{d}(x)=\bar{h}_{d}(\theta^{*},\theta)=h(\theta),$ (4) $\theta=\pi(x)$ (5) where $x\in\mathbb{R}^{n}$ is the state, $u\in\mathbb{R}^{m}$ is the input, $y\in\mathbb{R}$ is the output, all measurable, and $f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}$ and $h_{d}=h\circ\pi$ are smooth functions. Assume that the control input signal $u$ is in the form (2), where the control parameter $\theta\in\mathbb{R}^{N}$ is dependent on $x$ through a map $\pi(.):\mathbb{R}^{n}\rightarrow\mathbb{R}^{N}$. The closed-loop system can be written as follows: $\dot{x}=f(x,\alpha(\theta))=f(x,\alpha(\pi(x))).$ (6) The equilibria of (6) can be parameterized by $\theta$. The following assumptions, similar to those of [21], are made about the closed-loop system.
###### Assumption 3

There exists a smooth function $l:\mathbb{R}^{N}\rightarrow\mathbb{R}^{n}$ such that $f(x,\alpha(\theta))=0\quad\text{if and only if}\quad x=l(\theta),$ (7) for any $(x,\theta)\in\mathbb{R}^{n}\times\mathbb{R}^{N}.$ For each $\theta\in\mathbb{R}^{N}$, the equilibrium $x_{e}=l(\theta)$ of the system (6) is locally exponentially stable with decay and overshoot constants uniformly dependent on $\theta$.

###### Assumption 4

There exists $\theta^{*}\in\mathbb{R}^{N}$ such that for all admissible $x$ values, $h_{d}(x)$ has its unique maximum at $x=x^{*}=l(\theta^{*}),$ $y^{\prime}(x^{*})={\frac{\partial h}{\partial x}}\Bigr{|}_{\begin{subarray}{c}x=x^{*}\end{subarray}}=0,$ (8) and the $n\times n$ Hessian matrix $y^{\prime\prime}(x^{*})={\frac{\partial^{2}h}{\partial x^{2}}}\Bigr{|}_{\begin{subarray}{c}x=x^{*}\end{subarray}}$ is negative definite.

The control objective is to maximize the steady-state value of $y$ but without requiring the knowledge of $\theta^{*}$ or the system functions $h_{d},f$. This objective could be achieved exactly if $\theta^{*}$ were known and substituted in (2). The control parameter vector estimation can be done in different ways, leading to different ES schemes, even for the fixed control structure (2). The assumption that $h$ has a maximum is without loss of generality, considering a maximum seeking task. The minimum seeking case would be treated identically, replacing $y$ with $-y$ in the subsequent feedback design. In the next section, the existing classical perturbation based ES approach will be reviewed to give an idea about our proposed design and to later use in simulation comparisons.

## III Classical Perturbation Based Extremum Seeking for Dynamic Systems

In the classical ES approach shown in Fig. 1, a high-pass filter, a multiplier, and a low-pass filter are used to find the extremum. A general single-input nonlinear system is considered in the design of [21]. A multi-input ES approach is examined in [24].

Figure 1: Classic perturbation based ES scheme for multi input dynamic systems given by [24].

In the approach in [1, 21], the control law (2) feeding the plant (3) is tuned via the time-varying parameter $\theta=\begin{bmatrix}\theta_{1},\theta_{2},\cdots,\theta_{N}\end{bmatrix}^{T}$ that is produced by $\displaystyle\theta(t)=\hat{\theta}(t)+S(t),$ (9) where $\displaystyle S(t)$ $\displaystyle=\begin{bmatrix}a_{1}sin(\omega_{1}t)&a_{2}sin(\omega_{2}t)&\cdots&a_{N}sin(\omega_{N}t)\end{bmatrix}^{T},$ (10) and $\hat{\theta}(t)$ is generated by $\displaystyle\dot{\hat{\theta}}(t)$ $\displaystyle=k\hat{G}(t),$ (11) $\displaystyle\dot{\hat{G}}(t)$ $\displaystyle=\omega_{l}M(t)(y(t)-\eta(t))-\omega_{l}\hat{G}(t),$ $\displaystyle\dot{\eta}(t)$ $\displaystyle=\omega_{h}\left(y(t)-\eta(t)\right).$ The demodulation signal is selected as $M(t)=\begin{bmatrix}\frac{2}{a_{1}}sin(\omega_{1}t)&\frac{2}{a_{2}}sin(\omega_{2}t)&...&\frac{2}{a_{N}}sin(\omega_{N}t)\end{bmatrix}^{T}$. In the next two sections, we develop the RLS estimation based ES scheme with a forgetting factor in place of the approach of Section III. The proposed RLS estimation based adaptive ES scheme will be separately developed for two cases: for scalar parameter $(N=1)$ systems and for vector parameter $(N>1)$ systems, in Sections IV and V, respectively.

## IV RLS based ES Design for Scalar Parameter Systems

### IV-A Static Maps

Consider the static map (1) and the control law (2) for the scalar case, $N=1$, under Assumptions 1 and 2 about the closed-loop system. The proposed scheme is depicted in Fig. 2.
The RLS estimation based ES block shown in Fig. 2 consists of two parts: an RLS based adaptive parameter identifier estimating the gradient $h_{\theta}=\frac{\partial y}{\partial\theta}$, and a control law fed by this estimate.

Figure 2: RLS based ES scheme for scalar parameter static maps.

Consider the static map equation (1). The time derivative of the output $y$ is given by $\dot{y}=h_{\theta}\dot{\theta}.$ (12) The design of the RLS based estimator generating $\hat{h}_{\theta}$ is based on the relation (12), which is in the linear parametric model form $z=h_{\theta}\phi,$ (13) where $z=\dot{y},\quad\phi=\dot{\theta}.$ (14) If $\dot{y}$ is not available for measurement, then the regressor signals can be generated as $\displaystyle z=\frac{s}{s+\omega_{l}}[y],\quad\phi=\frac{1}{s+\omega_{l}}[\dot{\theta}],$ (15) i.e., $\displaystyle\dot{z}=-\omega_{l}z+\dot{y},\quad\dot{\phi}=-\phi\omega_{l}+\dot{\theta},$ (16) where $\omega_{l}>0$ is a constant design parameter. The control law generating $\theta$ is proposed to be $\dot{\theta}=k\hat{h}_{\theta},\quad k>0.$ (17) Assuming that the time variation of $h_{\theta}$ is sufficiently slow, we design an RLS estimator for the parametric model (13) as follows: $\dot{\hat{h}}_{\theta}=p\epsilon\phi,$ (18) $\dot{p}=\beta p-p^{2}\phi^{2},$ (19) $\epsilon=z-\hat{h}_{\theta}\phi,$ (20) where $\beta>0$ is the forgetting factor and $p$ is the covariance term. The overall ES scheme producing $\theta(t)$ can be summarized by (17), (18), (19), and (20).

### IV-B Dynamic Systems

The RLS estimation based ES control scheme (17)-(20) applies to the dynamic system (3)-(5) for $N=1$ with the control law (2) under Assumptions 3 and 4. The proposed ES scheme is depicted in Fig. 3.

Figure 3: RLS based ES scheme for scalar parameter dynamic systems.

### IV-C Stability Analysis

In this section, the stability proof of the proposed schemes in Sections IV-A and IV-B is presented. Note that $\theta^{*}$ is the equilibrium point, and that the estimated gradient satisfies $h_{\theta}=0$ at $\theta=\theta^{*}$. Our stability result is as follows:

###### Theorem IV.1

Consider the RLS estimation based ES scheme given in Figs. 2, 3 and defined in (17)-(20), with $z$ and $\phi$ as given in (14) or (15), and Assumptions 1-4. For any initial condition $\hat{\theta}(0)\in\mathbb{R}^{N}$ and adaptation gain $k$, $\theta(t)$ asymptotically converges to a small neighborhood of the extremum parameter $\theta^{*}$.

###### Proof:

We consider the Lyapunov function $V(\theta(t))=\frac{1}{2}\left(\theta(t)-\theta^{*}\right)^{2}=\frac{1}{2}\tilde{\theta}^{2}.$ (21) We write the time derivative of $V$ along the solutions of (17) as $\dot{V}=\dot{\theta}\left(\theta(t)-\theta^{*}\right)=\dot{\theta}\tilde{\theta}.$ (22) Substituting (17) into (22), we obtain $\dot{V}=k\hat{h}_{\theta}\tilde{\theta}.$ (23) For the maximum case, $k>0$. Negative definiteness of (23) depends on the initial condition $\theta_{0}$, which determines the signs of $\hat{h}_{\theta}$ and $\tilde{\theta}$. If $\theta(0)<\theta^{*}$, then $\hat{h}_{\theta}>0$ and $\tilde{\theta}<0$. On the other hand, if $\theta(0)>\theta^{*}$, then $\hat{h}_{\theta}<0$ and $\tilde{\theta}>0$. Hence, for both cases $\dot{V}<0$. We also need to examine the forgetting factor $\beta$ and the persistent excitation (PE) of $\phi$. If $\phi$ is PE, then the adaptive law (18)-(20) guarantees that $p\in\mathcal{L}_{\infty}$ and $\theta(t)\to\theta^{*}$ as $t\to\infty$.
When $\beta>0$, the convergence of $\theta(t)\to\theta^{*}$ is exponential ([25]). ∎

## V RLS based ES Design for Vector Parameter Systems

In this section, the proposed RLS estimation based ES scheme is extended to systems with vector parameters $(N>1)$. Similar to the classical gradient based analysis, small sinusoidal perturbation signals with different frequencies ($\omega_{1},\cdots,\omega_{N}$) are added to the control signals to provide sufficiently rich excitation.

### V-A Static Maps

Figure 4: RLS based ES scheme for vector parameter static maps.

Consider the block diagram in Fig. 4 for the static map in (1). The time derivative of (1) is given by $\dot{y}=h_{\theta}^{T}\dot{\theta},$ (24) which, similarly to (13), can be written in the linear parametric form $z=h_{\theta}^{T}\phi,$ (25) where $z$ and $\phi$ are again defined by either (14) or (15). The control law (17) is used for updating $\theta$ in the vector case as well. The design of the RLS estimator to produce $\hat{h}_{\theta}$ is based on the parametric model (25) and is given as follows ([25]): $\dot{\hat{h}}_{\theta}=P\epsilon\phi,$ (26) $\dot{P}=\beta P-P\phi\phi^{T}P,$ (27) $\epsilon=z-\hat{h}_{\theta}^{T}\phi,$ (28) where $\beta$ is the forgetting factor and $P$ is the covariance matrix of the RLS algorithm. The control law generating $\theta$ is proposed to be $\dot{\hat{\theta}}=k\hat{h}_{\theta},\quad k>0,$ (29) $\theta(t)=\hat{\theta}(t)+S(t),$ (30) where $S(t)$ is defined as in (10). Different from the scalar parameter case, perturbation signals $S(t)$ are used here. The reason for using dither signals in vector parameter systems is that dithers with different frequencies can be applied to each input channel to achieve overall PE.

### V-B Dynamic Systems

Figure 5: RLS based ES scheme for vector parameter dynamic systems.

The RLS estimation based ES scheme (26)-(29) applies to the dynamic system (3)-(5) with the control law (2) under Assumptions 3 and 4 for vector parameter systems. The block diagram of the proposed ES scheme is given in Fig. 5.

### V-C Stability Analysis

The intuition in (30) is to satisfy persistence of excitation for the $N$-dimensional $\phi$ by introducing at least one distinct dither frequency for each input, following the standard perturbation based ES control approaches mentioned in Section III. Similar to the analysis in Section IV-C, consider the Lyapunov function $V(\tilde{\theta}(t))=\frac{1}{2}\tilde{\theta}^{T}\tilde{\theta}.$ (31) We write the time derivative of $V$ along the solutions of (29) as $\dot{V}=\tilde{\theta}^{T}\dot{\tilde{\theta}}=\tilde{\theta}^{T}\dot{\theta}.$ (32) Substituting (30) into (32), we obtain $\dot{V}=\tilde{\theta}^{T}(k\hat{h}_{\theta}+\dot{S}).$ (33) The relationship between $\tilde{\theta}$ and $\hat{h}_{\theta}$ in Section IV-C applies to the vector parameter case as well. Stability again depends on $k$, the initial condition $\theta(0)$, the forgetting factor $\beta$, and the PE of $\phi$, which is guaranteed by the addition of the dither signals in (30). Hence, $P\in\mathcal{L}_{\infty}$ and $\theta(t)\to\theta^{*}$ as $t\to\infty$.

## VI Simulations

In this section, we present simulation results to show the validity of the proposed schemes. We present two examples, for the scalar parameter and vector parameter cases, together with comparison results against the classical ES method of Section III.
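Before the examples, the following minimal Python sketch shows how the scalar scheme of Section IV can be simulated; the Euler discretization, the toy map $h$, all gains, the nonzero initialization of $\hat{h}_{\theta}$, and the covariance cap are our illustrative assumptions, not values used in the examples below.

```python
# Toy concave map with maximum y* = 8.85 at theta* = 0.3 (our choice, not the paper's map).
h = lambda th: 8.85 - 10.0 * (th - 0.3) ** 2

dt, T = 1e-3, 20.0                  # Euler step and horizon (assumed)
k, beta, omega_l = 0.2, 0.98, 5.0   # gain in (17), forgetting factor, filter pole in (15)

theta, y_prev = 0.01, h(0.01)       # initial input
h_hat = 0.1                         # gradient estimate; nonzero to bootstrap the loop,
                                    # since the scalar scheme uses no dither
p = 1e3                             # covariance in (19)
z = phi = 0.0                       # filtered regressors in (16)

for _ in range(int(T / dt)):
    y = h(theta)
    dy = (y - y_prev) / dt          # numerical stand-in for \dot{y}
    y_prev = y
    theta_dot = k * h_hat           # control law (17)
    z += dt * (-omega_l * z + dy)             # (16): z = s/(s+w_l)[y]
    phi += dt * (-omega_l * phi + theta_dot)  # (16): phi = 1/(s+w_l)[\dot{theta}]
    eps = z - h_hat * phi           # estimation error (20)
    h_hat += dt * p * eps * phi     # gradient update (18)
    p += dt * (beta * p - (p * phi) ** 2)     # covariance update (19)
    p = min(p, 1e5)                 # our practical safeguard: (19) grows when phi ~ 0
    theta += dt * theta_dot

print(f"theta = {theta:.3f} (theta* = 0.3), y = {h(theta):.3f} (y* = 8.85)")
```

For the vector case of Section V, the scalar quantities are replaced by $\hat{h}_{\theta}\in\mathbb{R}^{N}$ and $P\in\mathbb{R}^{N\times N}$ as in (26)-(28), and dithers $a_{i}\sin\omega_{i}t$ are added to each input as in (30).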
### VI-A Scalar Parameter Simulation Example

Consider the following model $\displaystyle y$ $\displaystyle=10m(u),$ (34) $\displaystyle m(u)$ $\displaystyle=k_{1}\left(1-e^{-k_{2}u}\right)-k_{3}u$ $\displaystyle u$ $\displaystyle=\theta,$ where $\theta^{*}=0.3$. $\theta_{0}=0.01$ is chosen as the initial value for both schemes. The map parameters are given as $k_{1}=1.05$, $k_{2}=23$, $k_{3}=0.52$. For the RLS estimation based ES scheme, the following parameters are used: $k_{ls}=0.01$, $p_{0}=10^{3}$, and $\beta=0.98$. For the classical ES scheme, the following parameters are given: $k=0.08$, $\omega_{h}=0.6$, $\omega_{l}=0.8$, $S(t)=0.01\sin 3t$, and $M(t)=\sin 3t$. Gaussian measurement noise ($\sigma=0.05$) is applied for both the gradient and RLS algorithms. We apply the RLS estimation based ES scheme of Fig. 3. The results for this example are given in Fig. 6. It is clear that the proposed scheme reaches a neighborhood of the extremum point $\theta^{*}=0.3$ at $y^{*}=8.85$ in less than 2 seconds, while the classical ES scheme finds the extremum point much later and cannot maintain it under measurement noise.

Figure 6: Single parameter RLS estimation based ES results.

### VI-B Vector Parameter Simulation Example

Consider the following model $\displaystyle y$ $\displaystyle=y_{1}+y_{2},$ (35) $\displaystyle y_{1}$ $\displaystyle=am(u_{1}),\ m(u_{1})=(2m^{*}_{1}u^{*}_{1}u_{1})/(u^{*2}_{1}+u^{2}_{1}),$ $\displaystyle y_{2}$ $\displaystyle=am(u_{2}),\ m(u_{2})=(2m^{*}_{2}u^{*}_{2}u_{2})/(u^{*2}_{2}+u^{2}_{2}),$ $\displaystyle u$ $\displaystyle=[u_{1},\ u_{2}]=[\theta_{1},\ \theta_{2}].$ where $[\theta^{*}_{1},\ \theta^{*}_{2}]=[0.2,\ 0.3]$. For both schemes, the initial values are given as $u_{0}=[0.1,\ 0.1].$ We aim to reach $y^{*}_{1}(\theta^{*}_{1})=5$ and $y^{*}_{2}(\theta^{*}_{2})=9$. For the RLS estimation based ES scheme, the following parameters are used: $k=[0.01,\ 0.01]$, $P_{0}=10^{4}$, $\beta=0.98$, and $S(t)=[0.01\sin 7t,\ 0.01\sin 10t]$. For the classical ES scheme, the following parameters are given: $k=[0.02,\ 0.01]$, $\omega_{h}=[0.6,\ 0.6]$, $\omega_{l}=[0.8,\ 0.8]$, $S(t)=[0.01\sin t,\ 0.01\sin 2t]$, and $M(t)=[4.5\sin 5t,\ 11\sin 5t]$. Gaussian measurement noise ($\sigma=0.05$) is applied for both the gradient and RLS algorithms. Simulation results are given in Fig. 7 for both the RLS estimation based and classical ES schemes. It is clear that the RLS based scheme converges to the extremum point and finds the maximized output $y^{*}$, while the classical ES scheme has difficulty reaching the extremum point. One reason for this difficulty is that the classical ES scheme has many parameters that must be tuned carefully. For the vector case, we also emphasize the need to apply perturbation terms in order to excite the input channels separately. When no perturbation signal is applied, the inputs cannot be distinguished and converge to an average value, which only brings the output near the maximum. As in the scalar case, the RLS estimation based ES scheme outperforms the classical ES scheme in reaching the extremum under measurement noise.

### VI-C ABS Simulation Example

In this section, we also test our ES scheme on ABS using MATLAB/Simulink, and compare its performance with the gradient based ES scheme developed in [1].
Figure 7: Vector parameter RLS estimation based ES results.

The wheel characteristics are given by the following set of equations $\displaystyle m\dot{v}$ $\displaystyle=N\mu(\lambda),$ (36) $\displaystyle I\dot{\omega}$ $\displaystyle=-B\omega-NR\mu(\lambda)+\tau,$ where $v,\omega,m,N,R,I$ are the linear velocity, angular velocity, mass, weight, radius, and moment of inertia of the wheel, respectively. $B\omega$ is the bearing friction torque, $\tau$ is the braking torque, and $\mu(\lambda)$ is the friction force coefficient. $\lambda$ is the wheel slip, which is defined as $\lambda(v,\omega)=\frac{R\omega-v}{v}.$ (37) The controller design procedure is identical to the design in [1]. The parameters that are identical in both schemes are given as follows: $m=400kg$, $R=0.3m$, $I=1.7kgm^{2}$, $B=0.01kg/s$. The perturbation signal amplitude and frequency are selected as $a=0.01$ and $\omega=3$; the high-pass and low-pass filter parameters and the regulation gain are selected as $\omega_{h}=0.6$, $\omega_{l}=0.8$, and $k=6$ in the gradient based scheme equations (9), (10), and (11). $k=-0.01$ and $\beta=0.95$ are used for the RLS based scheme in the ABS case. The simulation for both the gradient and RLS schemes is performed under Gaussian noise ($\sigma=0.1$) in the longitudinal acceleration measurement $\dot{v}$. Initial conditions are selected the same in both schemes for a fair comparison. We use the approximation model (38) in simulations to see the effect of the proposed schemes. $\mu(\lambda)=2\mu_{max}\frac{\lambda^{*}\lambda}{{{\lambda}^{*2}}+\lambda^{2}},$ (38) where (38) has a maximum at $\lambda=\lambda^{*}$ with $\mu(\lambda^{*})=\mu_{max}$. For the simulation, we choose a wet road, since it is one of the safety-critical conditions. Simulation results of the ABS comparison between the gradient and RLS based schemes are given in Fig. 8. The results show that the vehicle stopping time of the RLS parameter estimation based ES in an emergency situation is shorter than that of the gradient based one. Slip ratio estimation is almost 2 s faster with RLS parameter estimation, as can be seen in Fig. 8(a). The RLS based ES scheme gives better results under measurement noise and reaches the maximum deceleration in less time.

(a) Friction force coefficient and estimated slip results for ABS. (b) Braking torque, velocity and deceleration results for ABS. Figure 8: Wet road comparison results for ABS.

## VII Conclusion

This paper focuses on designing an RLS parameter estimation based ES scheme for scalar parameter and vector parameter static map and dynamic systems. The stability conditions are stated for each case. The proposed ES scheme does not need perturbation signals for scalar parameter systems; however, it needs perturbation signals with different frequencies for vector parameter systems. The proposed scheme is applied to different simulation scenarios and compared to classical gradient estimation based ES under measurement noise. The results show the validity and effectiveness of the RLS parameter estimation based ES scheme over the gradient based one.

## References

* [1] K. B. Ariyur and M. Krstic, _Real-time Optimization by Extremum-Seeking Control_. John Wiley & Sons, 2003.
* [2] H. Wang, M. Krstic, and G. Bastin, “Optimizing bioreactors by extremum seeking,” _International Journal of Adaptive Control and Signal Processing_ , vol. 13, pp. 651–669, 1999.
* [3] G. Bastin, D. Nesic, Y. Tan, and I. Mareels, “On extremum seeking in bioprocesses with multivalued cost functions,” _Biotechnology Progress_ , vol. 25, no. 3, pp. 683–689, 2009.
* [4] S. Drakunov, U. Ozguner, P.
Dix, and B. Ashrafi, “ABS control using optimum search via sliding modes,” _IEEE Transactions on Control Systems Technology_, vol. 3, no. 1, pp. 79–85, 1995.
* [5] H. Yu and U. Ozguner, “Extremum-seeking control strategy for ABS system with time delay,” in _Proc. IEEE American Control Conference_, vol. 5, 2002, pp. 3753–3758.
* [6] E. Dincmen, B. Guvenc, and T. Acarman, “Extremum-seeking control of ABS braking in road vehicles with lateral force improvement,” _IEEE Transactions on Control Systems Technology_, vol. 22, no. 1, pp. 230–237, 2014.
* [7] E. Dincmen, “Adaptive extremum seeking scheme for ABS control,” in _13th IEEE International Workshop on Variable Structure Systems_, 2014, pp. 1–6.
* [8] C. Mayhew, R. Sanfelice, and A. Teel, “Robust source-seeking hybrid controllers for autonomous vehicles,” in _Proc. IEEE American Control Conference_, 2007, pp. 1185–1190.
* [9] C. Zhang and R. Ordonez, “Robust and adaptive design of numerical optimization-based extremum seeking control,” _Automatica_, vol. 45, no. 3, pp. 634–646, 2009.
* [10] J. Lin, S. Song, K. You, and M. Krstic, “Overshoot-free nonholonomic source seeking in 3-D,” _International Journal of Adaptive Control and Signal Processing_, vol. 31, no. 9, pp. 1285–1295, 2017.
* [11] E. Biyik and M. Arcak, “Gradient climbing in formation via extremum seeking and passivity-based coordination rules,” _Asian Journal of Control_, vol. 10, no. 2, pp. 201–211, 2008.
* [12] M. Stankovic and D. Stipanovic, “Stochastic extremum seeking with applications to mobile sensor networks,” in _Proc. IEEE American Control Conference_, 2009, pp. 5622–5627.
* [13] B. Moore and C. Canudas-de Wit, “Source seeking via collaborative measurements by a circular formation of agents,” in _Proc. IEEE American Control Conference_, 2010, pp. 6417–6422.
* [14] L. Fu and U. Ozguner, “Extremum seeking with sliding mode gradient estimation and asymptotic regulation for a class of nonlinear systems,” _Automatica_, vol. 47, no. 12, pp. 2595–2603, 2011.
* [15] B. G. B. Hunnekens, M. A. M. Haring, N. van de Wouw, and H. Nijmeijer, “A dither-free extremum-seeking control approach using 1st-order least-squares fits for gradient estimation,” in _53rd IEEE Conference on Decision and Control_, 2014, pp. 2679–2684.
* [16] D. Nesic, T. Nguyen, Y. Tan, and C. Manzie, “A non-gradient approach to global extremum seeking: An adaptation of the Shubert algorithm,” _Automatica_, vol. 49, no. 3, pp. 809–815, 2013.
* [17] M. Chioua, B. Srinivasan, M. Guay, and M. Perrier, “Performance improvement of extremum seeking control using recursive least square estimation with forgetting factor,” _IFAC-PapersOnLine_, vol. 49, no. 7, pp. 424–429, 2016.
* [18] D. Nesic, A. Mohammadi, and C. Manzie, “A framework for extremum seeking control of systems with parameter uncertainties,” _IEEE Transactions on Automatic Control_, vol. 58, no. 2, pp. 435–448, 2013.
* [19] P. Blackman, “Extremum-seeking regulators,” in _An exposition of adaptive control_. Macmillan, 1962.
* [20] M. Krstic, “Performance improvement and limitations in extremum seeking control,” _Systems & Control Letters_, vol. 39, no. 5, pp. 313–326, 2000.
* [21] M. Krstic and H. H. Wang, “Stability of extremum seeking feedback for general nonlinear dynamic systems,” _Automatica_, vol. 36, no. 4, pp. 595–601, 2000.
* [22] M. Guay, “A time-varying extremum-seeking control approach for discrete-time systems,” _Journal of Process Control_, vol. 24, no. 3, pp. 98–112, 2014.
* [23] M. Guay and D.
Dochain, “A time-varying extremum-seeking control approach,” _Automatica_, vol. 51, pp. 356–363, 2015.
* [24] A. Ghaffari, M. Krstic, and D. Nesic, “Multivariable Newton-based extremum seeking,” _Automatica_, vol. 48, no. 8, pp. 1759–1767, 2012.
* [25] P. Ioannou and B. Fidan, _Adaptive Control Tutorial_. Society for Industrial and Applied Mathematics, 2006.
2024-09-04T02:54:58.256546
2020-03-09T07:49:06
2003.03956
{ "authors": "Jihyun Bhom, Marcin Chrzaszcz", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26108", "submitter": "Jihyun Bhom", "url": "https://arxiv.org/abs/2003.03956" }
arxiv-papers
# HEPLike: an open source framework for experimental likelihood evaluation

Jihyun Bhom, Marcin Chrzaszcz
Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Kraków, Poland

###### Abstract

We present a computer framework to store and evaluate likelihoods coming from High Energy Physics experiments. Thanks to its flexibility it can be interfaced with existing fitting codes, and it allows the interpretation of experimental results to be unified among users. The code is provided with a large open database, which contains the experimental measurements. The code is of use for users who perform phenomenological studies, global fits or experimental averages.

###### keywords: experimental high energy physics, likelihoods

††journal: Computer Physics Communications

PROGRAM SUMMARY/NEW VERSION PROGRAM SUMMARY

Program Title: HEPLike
Licensing provisions: GPLv3
Programming language: C++
Journal reference of previous version: none (first version of the program)
Nature of problem: Provide a uniform way to store, share and evaluate experimental likelihoods in a statistically proper manner. The code can easily be interfaced with existing global fitting codes. In addition, a large database with the measurements is published. The program targets users whose scientific work involves phenomenological studies, global fits or measurement averages. HEPLike was created for the FlavBit project [1] and was used to perform several analyses [2,3]; here we present an updated version, which can be used in standalone mode.
Solution method: C++ code that evaluates the statistical properties of the measurements without user intervention. The large open database is provided as well. The measurements are stored in YAML files, allowing for easy readability and extension.

## References

* [1] arXiv: 1705.07933
* [2] arXiv: 1705.07935
* [3] arXiv: 1705.07917

## 1 Introduction

In High Energy Physics (HEP), experimental measurements are performed by several collaborations, which measure many different observables. The results are presented in various ways: some are as simple as a measurement with a Gaussian error, some are more complicated, such as multiple correlated measurements with asymmetric errors, and in some cases even a full likelihood function is published. To make things more complicated, multiple representations of the same measurement are sometimes published. All of this makes it hard to directly use and compare different results. It also leaves room for theorists, who use these inputs in their studies, to misinterpret the results: asymmetric errors are sometimes symmetrized, or instead of the full likelihood only the central value with an approximate asymmetric error is used.

The High Energy Physics Likelihoods (HEPLike) package is a computer program for storing and sharing the likelihoods of various measured quantities. The published code can be useful for users performing phenomenological studies with experimental results, for global fitting collaborations, and for experimental averages. Thanks to its structure it is easy to interface with existing codes. It simplifies users' work: instead of looking up the appropriate measurement and coding up their own likelihood, they can download the database of measurements and choose the one they need.
Furthermore, it shifts the burden of constructing the proper likelihood functions back to the experimentalists, who performed the measurement in the first place and are clearly the most appropriate people to handle this task. The computer code described in this paper is written in C++, making it usable by the majority of fitting programs available on the market [1, 2, 3, 4, 5, 6]. The library can be used in both $\chi^{2}$ and likelihood fits. Moreover, it contains a statistical module with useful functions for statistical analysis. Besides the computer code, a database with the likelihoods is published. The measurements are stored in YAML files, making them easy to read for both machines and humans. This database can easily be extended by adding new YAML files as new measurements become available. With the software we provide useful utilities, which allow, among other things, performing searches inside the database and creating BibTeX files containing the publications that have been used in a fit.

The paper is organized as follows: Sec. 2 presents the construction of the likelihood functions, Sec. 3 explains the detailed code implementation and data storage, and Sec. 4 describes how to install and use the HEPLike software.

## 2 Likelihood constructions

In this section we present how likelihoods in HEPLike are stored and constructed. Each measurement is stored in a separate YAML file. There are several ways in which collaborations publish their results, depending on the measurement itself:

* 1. Upper limits,
* 2. Single measurement with symmetric uncertainty,
* 3. Single measurement with asymmetric uncertainty,
* 4. Multiple measurements with symmetric uncertainty,
* 5. Multiple measurements with asymmetric uncertainty,
* 6. One dimensional likelihood function,
* 7. n-dimensional likelihood function.

In addition, there is growing interest in the community that experimental collaborations publish not only the results of an analysis but also the dataset that was used to obtain them. For this future occasion we have also implemented a way in which such data can be used directly in fits. Each of these cases is handled by a dedicated HEPLike module that evaluates the corresponding likelihood function. In this section we present the statistical treatment of the above cases and the modules that are responsible for their evaluation. Each YAML file is required to have the following information (here, as an example, we use the $R_{\mathup{{{K}}^{\scriptstyle{\ast}}}}$ measurement [7]):

BibCite: Aaij:2017vbb
BibEntry: '@article{Aaij:2017vbb,
 author = "Aaij, R. and others",
 title = "{Test of lepton universality with $B^{0} \rightarrow K^{*0}\ell^{+}\ell^{-}$ decays}",
 collaboration = "LHCb",
 journal = "JHEP",
 volume = "08",
 year = "2017",
 pages = "055",
 doi = "10.1007/JHEP08(2017)055",
 eprint = "1705.05802",
 archivePrefix = "arXiv",
 primaryClass = "hep-ex",
 reportNumber = "LHCB-PAPER-2017-013, CERN-EP-2017-100",
 SLACcitation = "%%CITATION = ARXIV:1705.05802;%%"
 }
 '
DOI: 10.1007/JHEP08(2017)055
Process: R_{Kstar^{*}}
FileName: RKstar.yaml
Name: RKstar
Source: HEPDATA
SubmissionYear: 2017
PublicationYear: 2018
Arxiv: 1705.05802
Collaborations: LHCb
Kinematics: q2>1.1 && q2<6.
HLAuthor: Gal Anonim
HLEmail: <EMAIL_ADDRESS>
HLType: HL_ProfLikelihood

The above information is used to store the details relevant for bookkeeping.
For instance, the BibCite and BibEntry entries are used to generate a BibTeX citation file with the measurements that have been used in a study. The DOI corresponds to the digital object identifier of the publication. The Decay entry defines the process that has been studied; it can also be replaced by the Process entry. The Name is a unique name of this measurement type: if the measurement gets updated with more data or by another collaboration, the Name entry in the new YAML file should be the same as in the old one. The Source entry corresponds to the source of the measurement, which can be either HEPData or the collaboration itself. The SubmissionYear (PublicationYear) refers to the year of appearance (publication) of the result. The Arxiv entry stores the arXiv number, while Collaborations stores which experimental collaboration performed the measurement. The Kinematics entry stores additional information about the measured kinematic region. HLAuthor and HLEmail encode the YAML file author and their email, in case a user needs further information about the encoded measurement. Last but not least, the HLType entry specifies which HEPLike object should be used to read the file.

Reading of this YAML content is implemented in the HL_Data class; all other classes that construct the likelihood functions inherit this capability from it. Please note that if some information is missing in the YAML file, the program will simply omit that entry. The only exception is FileName, which is mandatory. If a user wants to be notified by the program that some information is missing, the HL_debug_yaml variable has to be set to true (the default is false).

### 2.1 Upper limits

In case a measurement did not observe a significant excess of signal candidates, collaborations usually report an upper limit on the measured quantity; commonly $90\%$ or $95\%$ upper limits are quoted. Experiments use various statistical approaches to compute these limits: the $\rm CL_{s}$ method [8], Feldman–Cousins [9], or some variation of Bayesian methods [10]. Publishing only an upper limit does not provide enough information to use the result in global fits. However, nowadays experiments publish, besides the aforementioned upper limits, full p-value scans. Examples of such scans are shown in Fig. 1. The plots are usually available in digital format, which allows the information to be extracted and used in a computer program.

Figure 1: Example of p-value scans for $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mathup{{{\tau}}^{\scriptstyle{-}}}\mathup{{{\tau}}^{\scriptstyle{+}}}$ [11] (left) and $\mathup{{{D}}}\to\mathup{{{e}}}\mathup{{{\mu}}}$ [8] (right). Please note that the $\rm CL_{s}$ value can be interpreted as a p-value, as explained in [12]. The black line corresponds to the observed $\rm CL_{s}$/p-value.

In HEPLike, the HL_Limit class is responsible for handling this type of measurement. It reads a YAML file that contains the standard information about the measurement (see Sec. 2 for details). The additional information on the observed $\rm CL_{s}$/p-value is stored in the YAML file in the following way (besides this, the general information described above in Sec. 2 should also be included):

Cls:
- [0.0, 1.0]
- [1.0e-10, 0.977091694706]
- [2.0e-10, 0.954375824297]
- [3.0e-10, 0.93200355343]
- [4.0e-10, 0.910630700546]
- [5.0e-10, 0.889382721809]

The Cls entry can be replaced in the YAML file by a p-value, as they correspond to the same information. The first number in each array is the value of the tested hypothesis (for example a branching fraction), while the second is the corresponding $\rm CL_{s}$/p-value. These values are then interpreted using a $\chi^{2}$ distribution with one degree of freedom:

$pdf(x)=\frac{1}{2^{1/2}\Gamma(1/2)}x^{1/2-1}e^{-x/2},$ (1)

whose cumulative distribution function is

$cdf(x)=\frac{1}{\Gamma(1/2)}\gamma(1/2,x/2).$ (2)

In the above equations, $\Gamma(x)$ and $\gamma(k,x)$ correspond to the Gamma and incomplete gamma functions. By inverting the $cdf(x)$ one obtains the $\chi^{2}$ value:

$\chi^{2}=cdf^{-1}(1-p),$ (3)

where p corresponds to the p-value of a given hypothesis x. This $\chi^{2}$ can then be translated to the log-likelihood via Wilks' theorem [13]:

$-\log(\mathcal{L})=\frac{1}{2}\chi^{2},$ (4)

where $\mathcal{L}$ is the likelihood. The user can choose whether to obtain the $\chi^{2}$, the likelihood or the log-likelihood value of a given hypothesis.
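The inversion in Eq. (3) is available off the shelf in GSL, which is already a build dependency of HEPLike (see Sec. 4). The sketch below is our illustration of Eqs. (3) and (4) for a single scan point, independent of the HL_Limit class internals:

#include <cstdio>
#include <gsl/gsl_cdf.h>

int main() {
    // p-value of a tested hypothesis, e.g. read off a published CLs scan.
    double p = 0.05;
    // Eq. (3): chi2 = cdf^{-1}(1 - p) for one degree of freedom.
    // gsl_cdf_chisq_Qinv inverts the upper tail Q = 1 - cdf directly.
    double chi2 = gsl_cdf_chisq_Qinv(p, 1.0);
    // Eq. (4): Wilks' theorem, -log L = chi2 / 2.
    std::printf("chi2 = %.3f, -logL = %.3f\n", chi2, 0.5 * chi2);
    return 0;
}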
### 2.2 Single measurement with symmetric uncertainties

The simplest case of a published experimental result is a single value with a symmetric uncertainty. This is, for example, a typical result of a PDG or HFLAV average [14, 15]. The measurement is coded in the YAML file as:

Observables:
- [ "Br_A2BCZ", 0.1, 0.05, 0.01 ]

The first argument in the array, "Br_A2BCZ", is the observable name. The first number is the measured central value, and the 2nd and 3rd numbers are the statistical and systematic uncertainties. In case only one uncertainty is available, the 3rd number should be omitted; the software will then automatically set it to 0. We have decided to keep the plural Observables to stay uniform with the cases where more observables are measured. The module responsible for reading this YAML file is called HL_Gaussian; it calculates the $\chi^{2}$ for an $x$ hypothesis as

$\chi^{2}=\frac{(x_{obs}-x)^{2}}{\sigma_{stat}^{2}+\sigma_{syst}^{2}},$ (5)

where $x_{obs}$ corresponds to the measured central value from the YAML file, and $\sigma_{stat}$ and $\sigma_{syst}$ are the statistical and systematic uncertainties, respectively. This can again be translated to the likelihood and log-likelihood value using Eq. 4.
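A minimal usage sketch of this class, based on the constructors and accessors listed in Table 1; the YAML path and the header file name are our assumed placeholders, not files shipped with the package:

#include <cstdio>
#include "HL_Gaussian.h"  // header name assumed from the class name

int main() {
    // Hypothetical YAML file containing the Observables entry shown above.
    HL_Gaussian gauss("data/examples/Br_A2BCZ.yaml");
    gauss.Read();  // parse the YAML content

    double x = 0.12;  // tested hypothesis for the observable
    std::printf("chi2 = %.4f, logL = %.4f\n",
                gauss.GetChi2(x), gauss.GetLogLikelihood(x));
    return 0;
}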
### 2.3 Single measurement with asymmetric uncertainties

A simple extension of the Gaussian uncertainty is an asymmetric uncertainty. This type of measurement, although less frequent, appears in the literature. In this case the publication reports the central value and two uncertainties, $\sigma_{+}$ and $\sigma_{-}$, which correspond to the right-side (for values larger than the measured central value) and left-side (for values smaller than the measured central value) uncertainty. In HEPLike we have created the HL_BifurGaussian class, which reads the following entry in the YAML file:

Observables:
- [ "Br_A2BCZ", 0.1, 0.05, -0.06, 0.01, -0.02 ]

The first argument is again the name of the observable and the second one is its central value. The third and fourth arguments correspond to the statistical $\sigma_{+}$ and $\sigma_{-}$ uncertainties, while the fifth and sixth correspond to the systematic $\sigma_{+}$ and $\sigma_{-}$ uncertainties. It is important to keep the minus sign before the left-side uncertainties; the code will indicate an error in case of a missing sign. In some cases the systematic uncertainty is reported to be symmetric, in which case the last number can be omitted from the YAML entry. In the literature there exists a number of ways to interpret asymmetric uncertainties [16]. We have chosen the most commonly used one, the so-called bifurcated Gaussian:

$\displaystyle\chi^{2}=\begin{cases}\frac{(x_{obs}-x)^{2}}{\sigma_{+}^{2}},&\text{if }x\geq x_{obs}\\ \frac{(x_{obs}-x)^{2}}{\sigma_{-}^{2}},&\text{if }x<x_{obs},\\ \end{cases}$ (6)

where $\sigma_{\pm}^{2}$ is the sum of the squared statistical and systematic uncertainties for the right/left side. Once the $\chi^{2}$ is calculated, it can be translated to the log-likelihood using Eq. 4.

### 2.4 Multiple measurements with symmetric uncertainties

Nowadays the most common results are simultaneous measurements of several quantities that are correlated with each other, for instance cross section measurements in different kinematic bins, or measurements of angular coefficients in heavy meson decays. In HEPLike the class responsible for handling these cases is called HL_nDimGaussian. It reads the following information from the YAML file:

Observables:
- [ "BR1", 0.1, 0.02]
- [ "BR2", 0.2, 0.01, 0.01]
- [ "BR3", 0.4, 0.04]
Correlation:
- [ "BR1", "BR2", "BR3"]
- [ 1. , 0.2 , 0 ]
- [ 0.2, 1., 0. ]
- [ 0 , 0., 1. ]

The information in the "Observables" entry is exactly the same as in the HL_Gaussian class. Note that, as in the previous class, the systematic uncertainty is not mandatory; if it is not provided, the code will treat it as 0. The next entry in the YAML file is "Correlation", which encodes the correlation matrix. Its first row lists the names of the variables; it is important to keep the same order of variables as in the "Observables" entry. HL_nDimGaussian evaluates the $\chi^{2}$ in the following way:

$\displaystyle\chi^{2}=V^{T}{\rm Cov}^{-1}V,$ (7)

where V is a column matrix whose entries are the differences between the measured and tested values of the observables. $\rm Cov$ is a square matrix constructed from the correlation matrix (${\rm Corr}$): ${\rm Cov}_{i,j}={\rm Corr}_{i,j}\sigma_{i}\sigma_{j}$. Often a user does not want to use the full set of measured quantities but only a subset. In this case the Restrict(vector<string>) function can be used (see the sketch below): by passing the list of observables to be used in the form of a vector, the program will create a new, smaller covariance matrix, which will be used to evaluate the $\chi^{2}$. In a similar manner, the $\chi^{2}$ can be translated to the likelihood and log-likelihood value by Eq. 4.
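A hedged sketch of this subsetting, again based on Table 1; the YAML path and header name are assumed placeholders, and passing the tested point as a vector to the accessor is our assumption for the n-dimensional classes (Table 1 quotes the scalar form):

#include <cstdio>
#include <string>
#include <vector>
#include "HL_nDimGaussian.h"  // header name assumed from the class name

int main() {
    // Hypothetical YAML file with the Observables/Correlation entries above.
    HL_nDimGaussian ndim("data/examples/BR_correlated.yaml");
    ndim.Read();

    // Keep only BR1 and BR2; a reduced 2x2 covariance matrix is built.
    ndim.Restrict(std::vector<std::string>{"BR1", "BR2"});

    // Tested hypothesis for the restricted observables, in the same order.
    std::vector<double> point{0.11, 0.19};
    std::printf("chi2 = %.4f\n", ndim.GetChi2(point));  // Eq. (7)
    return 0;
}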
### 2.5 Multiple measurements with asymmetric uncertainties

A more complicated case arises when multiple correlated measurements are reported with asymmetric uncertainties. The case is similar to the one discussed in Sec. 2.3, and the same statistical comments apply. The YAML file encoding such a measurement contains the following entries:

Observables:
- [ "BR1", 0.1, +0.02, -0.01, 0.02]
- [ "BR2", 0.2, +0.01, -0.05, +0.03, -0.02]
- [ "BR3", 0.3, +0.04, -0.03, 0.05]
Correlation:
- [ "BR1", "BR2", "BR3"]
- [ 1. , 0.1 , 0.2 ]
- [ 0.1, 1., 0.1 ]
- [ 0.2 , 0.1, 1. ]

The meaning of the "Observables" entry is the same as in the previous class (cf. Sec. 2.3), and "Correlation" encodes the same information as in the HL_nDimGaussian class (cf. Sec. 2.4). The rules about the minus sign and a symmetric systematic uncertainty are the same as for HL_BifurGaussian (cf. Sec. 2.3). The difference arises when one evaluates the $\chi^{2}$: the $\rm Cov$ matrix is constructed depending on whether the $\sigma_{+}$ or $\sigma_{-}$ uncertainty is relevant:

$\displaystyle{\rm Cov}_{i,j}=\begin{cases}{\rm Corr}_{i,j}~{}\sigma^{i}_{+}\sigma^{j}_{+},&\text{if }x^{i}\geq x^{i}_{obs}\text{ and }x^{j}\geq x^{j}_{obs}\\ {\rm Corr}_{i,j}~{}\sigma^{i}_{+}\sigma^{j}_{-},&\text{if }x^{i}\geq x^{i}_{obs}\text{ and }x^{j}<x^{j}_{obs}\\ {\rm Corr}_{i,j}~{}\sigma^{i}_{-}\sigma^{j}_{+},&\text{if }x^{i}<x^{i}_{obs}\text{ and }x^{j}\geq x^{j}_{obs}\\ {\rm Corr}_{i,j}~{}\sigma^{i}_{-}\sigma^{j}_{-},&\text{if }x^{i}<x^{i}_{obs}\text{ and }x^{j}<x^{j}_{obs}\\ \end{cases}$ (8)

The obtained $\rm Cov$ matrix is then used to calculate the $\chi^{2}$ using Eq. 7. The rest follows the same procedure as described in Sec. 2.4.

### 2.6 One dimensional likelihood function

The best way a result can be published is by providing the (log-)likelihood function, and results of this type are more and more common in the literature. The simplest are one-dimensional likelihood scans, which can be presented in the form of a figure; examples are shown in Fig. 2.

Figure 2: Examples of published one-dimensional likelihoods in the Lepton Universality Violation measurements of $\mathup{{{B}}}\to\mathup{{{K}}^{\scriptstyle{\ast}}}\ell\ell$ [7] (left) and $\mathup{{{B}}}\to\mathup{{{K}}}\ell\ell$ [17] (right).

The biggest advantage of publishing results in this form is completeness: the (log-)likelihood curve contains all the information about non-Gaussian effects and incorporates the systematic uncertainties. The technical problem is how to publish such information. Usually plots are published in the pdf or png formats, which makes them hard to use. Since experiments mostly use the ROOT framework [18], the plots are often also saved in the C format, which contains the points in the form of arrays. This makes the points accessible, but it is not easy to automate retrieving the data from a C file. The best solution is provided by the HEPData portal [19], which allows the data to be downloaded in a user-preferred format. In HEPLike we have chosen the ROOT format by default, in which the data points are saved in the form of a TGraph object; this is also the way experimentalists like to store this information. In the YAML file we specify the location of the ROOT file in the following way:

ROOTData: data/HEPData-ins1599846-v1-Table_1.root
TGraphPath: "Table 1/Graph1D_y1"

ROOTData encodes the location of the ROOT file, while TGraphPath encodes the location of the TGraph object inside that file. In HEPLike, the HL_ProfLikelihood class is responsible for reading and encoding this likelihood. The value of the log-likelihood can then be translated again into the $\chi^{2}$ with Eq. 4.
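For readers who want to inspect such a published scan outside HEPLike, the sketch below reads the TGraph directly with ROOT and interpolates it at a tested point. This mimics what HL_ProfLikelihood does with the file specified above; the actual class implementation may differ:

#include <cstdio>
#include "TFile.h"
#include "TGraph.h"

int main() {
    // File and object path as specified in the YAML entry above.
    TFile* f = TFile::Open("data/HEPData-ins1599846-v1-Table_1.root");
    if (!f || f->IsZombie()) return 1;
    TGraph* g = (TGraph*)f->Get("Table 1/Graph1D_y1");
    if (!g) return 1;

    double x = 0.7;            // tested value of the observable
    double scan = g->Eval(x);  // linear interpolation of the published scan
    std::printf("scan value at x = %.2f: %.4f\n", x, scan);
    f->Close();
    return 0;
}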
### 2.7 n-dimensional likelihood function

The natural extension of a one dimensional likelihood is an n-dimensional likelihood with $n\geq 2$. Currently, experimental collaborations publish only 2-dimensional likelihood functions (cf. Fig. 3).

Figure 3: Examples of published two-dimensional likelihoods. The $\mathcal{B}(\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mu\mu)$ vs $\mathcal{B}(\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mu\mu)$ likelihood [20] (left) and the $\sigma(\mathup{{{t}}}\mathup{{\overline{{t}}}}\mathup{{{Z}}})$ vs $\sigma(\mathup{{{t}}}\mathup{{\overline{{t}}}}\mathup{{{W}}})$ likelihood [21] (right).

The natural way of encoding such information is a histogram (TH2D or TH3D), and we have chosen this way to store it. The corresponding entry in the YAML file looks as follows:

ROOTData: data/LHCb/RD/Bs2mumu_5fb/histB2mumu.root
TH2Path: "h_2DScan"

Similarly to the one dimensional likelihood (Sec. 2.6), ROOTData encodes the location of the ROOT file, while TH2Path (TH3Path) encodes the location of the TH2D (TH3D) object. In the long run the community will have to address the question of how to publish higher dimensional likelihoods, and this module (HL_nDimLikelihood) will have to be extended for such use cases.

### 2.8 Fits to experimental data

It is possible that in the future experimental collaborations will make public not only the results but also the underlying datasets. The procedure and the form in which the data should be published are not yet decided, and there is an ongoing debate whether the published data should correspond to the raw detector data, to the final selected points used in the analysis, or to something in between. Clearly, publishing raw data is problematic, as people outside the collaboration do not have the necessary knowledge of the calibration and efficiency correction procedures or the data taking conditions. The most useful way to publish a dataset is to let the experimentalists perform all the selection and the necessary efficiency corrections, and publish the final dataset that has been used in the analysis. This would allow the theory community to use the dataset directly in their fits without knowing the technicalities of the experimental data analysis. For this case we have implemented in HEPLike the HL_ExpPoints class. The data are stored in a TTree structure located in a ROOT file. The YAML file encodes this information in the form:

ROOTData: data/toy/data.root
TTreePath: t
Observables:
- [ x ]
- [ y ]
- [ z ]
Weight: w

where ROOTData points to the ROOT file and TTreePath stores the location of the TTree inside it. It is assumed that the experiments will provide all corrections in the form of event-by-event weights; the name of the weight branch inside the TTree is encoded in the Weight entry. In general, the data points are elements of an $\mathcal{R}^{n}$ vector space, whose coordinates are stored in the Observables entry. The only thing the user needs to provide to the HL_ExpPoints object is a pointer to the function to be fitted. The function should have the form double (*fun)(vector<double> par, vector<double> point), where the par vector encodes the parameters to be fitted and point corresponds to a data point. HL_ExpPoints will then evaluate the likelihood

$\mathcal{L}(\omega)=f(\textbf{x}|\omega)^{w(\textbf{x})}$ (9)

for the whole dataset. In the above, x corresponds to the n-dimensional point, $\omega$ denotes the parameters to be fitted (par), and $f$ denotes the fitting function (fun). HEPLike does not provide a minimizer or a scanner tool, as that is not the purpose of this type of software; it has to be interfaced with a proper scanner tool, for example [1]. The user can again decide whether to perform a $\chi^{2}$ or a log-likelihood fit. The biggest advantage of this format is its compatibility with the experimental analysis: experimentalists can in principle also publish the function they used to fit the data, so that theorists can reproduce the experimental result and start where the experimentalists finished.
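A hedged setup sketch for this workflow, using the function signature quoted above and the HL_ExpPoints members from Table 1 (Read, InitData, SetFun). The YAML path, the header name, the exact SetFun argument list, and the toy model are our assumptions, purely for illustration:

#include <cmath>
#include <vector>
#include "HL_ExpPoints.h"  // header name assumed from the class name

// Fitting function with the documented signature: f(x|omega) evaluated at a
// single data point; here a toy unit-width Gaussian model in each coordinate.
double fun(std::vector<double> par, std::vector<double> point) {
    double f = 1.0;
    for (size_t i = 0; i < point.size(); ++i) {
        double z = point[i] - par[i];
        f *= std::exp(-0.5 * z * z);
    }
    return f;
}

int main() {
    HL_ExpPoints pts("data/toy/data.yaml");  // hypothetical YAML path
    pts.Read();       // parse the YAML (ROOT file, TTree path, branches)
    pts.InitData();   // load the TTree points and weights into memory
    pts.SetFun(fun);  // register the model used in Eq. (9)
    // The object can now evaluate the weighted likelihood of Eq. (9) for a
    // given parameter vector inside an external scanner or minimizer.
    return 0;
}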
## 3 Code implementation

In this section we discuss the implementation of the code used to create the likelihoods described in Sec. 2. The code is built from several classes:

* 1. HL_Data: base class, from which the other classes inherit their basic functionality.
* 2. HL_Limit: class that handles upper limit measurements.
* 3. HL_Gaussian: class that handles measurements with a Gaussian uncertainty.
* 4. HL_BifurGaussian: class that handles measurements with an asymmetric uncertainty.
* 5. HL_nDimGaussian: class that handles measurements with n-dimensional Gaussian uncertainties.
* 6. HL_nDimBifurGaussian: class that handles measurements with n-dimensional asymmetric uncertainties.
* 7. HL_ProfLikelihood: class that handles measurements with a one-dimensional likelihood function.
* 8. HL_nDimLikelihood: class that handles measurements with a 2(3)-dimensional likelihood function.
* 9. HL_ExpPoints: class that allows performing fits to experimental datasets.

The functionality of these classes is presented in Tab. LABEL:tab:functions. In addition, the hierarchy of the class inheritance is shown in Fig. 4.

Table 1: Functions available in the HEPLike software.

Function | Description
---|---
HL_Data() | Constructor of the HL_Data class.
HL_Data(string) | Constructor of the HL_Data class. The argument is the path to the YAML file encoding the measurement.
HL_Limit() | Constructor of the HL_Limit class.
HL_Limit(string) | Constructor of the HL_Limit class. The argument is the path to the YAML file encoding the measurement.
HL_Gaussian() | Constructor of the HL_Gaussian class.
HL_Gaussian(string) | Constructor of the HL_Gaussian class. The argument is the path to the YAML file encoding the measurement.
HL_BifurGaussian() | Constructor of the HL_BifurGaussian class.
HL_BifurGaussian(string) | Constructor of the HL_BifurGaussian class. The argument is the path to the YAML file encoding the measurement.
HL_nDimGaussian() | Constructor of the HL_nDimGaussian class.
HL_nDimGaussian(string) | Constructor of the HL_nDimGaussian class. The argument is the path to the YAML file encoding the measurement.
HL_nDimBifurGaussian() | Constructor of the HL_nDimBifurGaussian class.
HL_nDimBifurGaussian(string) | Constructor of the HL_nDimBifurGaussian class. The argument is the path to the YAML file encoding the measurement.
HL_ProfLikelihood() | Constructor of the HL_ProfLikelihood class.
HL_ProfLikelihood(string) | Constructor of the HL_ProfLikelihood class. The argument is the path to the YAML file encoding the measurement.
HL_nDimLikelihood() | Constructor of the HL_nDimLikelihood class.
HL_nDimLikelihood(string) | Constructor of the HL_nDimLikelihood class. The argument is the path to the YAML file encoding the measurement.
HL_ExpPoints() | Constructor of the HL_ExpPoints class.
HL_ExpPoints(string) | Constructor of the HL_ExpPoints class. The argument is the path to the YAML file encoding the measurement.
read_standard() | Function that reads the general information about the measurement from the YAML file.
set_debug_yaml(bool) | Function that enables debugging of the YAML file. By default debugging is switched off; it can be switched on by passing true. Debugging prints a message for each piece of information missing from the YAML file.
Read() | Function that reads the YAML file.
GetChi2(double) | Function that returns the $\chi^{2}$ value for a given point (passed as a double). Available in all classes except HL_Data.
GetLogLikelihood(double) | Function that returns the log-likelihood value for a given point (passed as a double). Available in all classes except HL_Data.
GetLikelihood(double) | Function that returns the likelihood value for a given point (passed as a double). Available in all classes except HL_Data.
GetCLs(double) | Function that returns the $\rm CL_{s}$ or p-value for a given point (passed as a double). Member of the HL_Limit class.
Restrict(vector<string>) | Function that restricts the set of observables read from the YAML file. Member of the HL_nDimGaussian, HL_nDimBifurGaussian and HL_nDimLikelihood classes.
InitData() | Function of the HL_ExpPoints class that reads the data from the TTree object into memory.
Profile() | Function of the HL_nDimLikelihood class that creates the profile log-likelihood projections.
SetFun() | Function of the HL_ExpPoints class that sets the pointer to the function to be fitted.

Figure 4: Diagram of class inheritance of the HEPLike package.

## 4 Installation and usage

In this chapter we present the requirements and the installation procedure for the HEPLike package. The software is distributed via the GitHub site: https://github.com/mchrzasz/HEPLike. In order to compile HEPLike, the following packages (with minimal versions) need to be installed:

* 1. git
* 2. cmake, 2.8
* 3. yaml-cpp, 1.58.0
* 4. gsl, 2.1
* 5. Boost, 1.58.0
* 6. ROOT, 6.08

The compilation is done in the following way:

cd <installation dir>
git clone https://github.com/mchrzasz/HEPLike.git
cd HEPLike
mkdir build
cd build
cmake ..
make

In the above, make can be replaced with make -jN, where N is the number of threads the user wants to use for compilation. Please note that in case of a non-standard installation of some packages, one might have to provide cmake with the proper paths to the libraries. After successful compilation, the libHEPLike.a and libHEPLike.so libraries will be created in the build directory. HEPLike is provided with eight examples:

* 1. Br_example.cc: example program showing the usage of the HL_Gaussian class.
* 2. BrBifurGaussian_example.cc: example program showing the usage of the HL_BifurGaussian class.
* 3. Data_Fit_example.cc: example program showing the usage of the HL_ExpPoints class.
* 4. Limit_example.cc: example program showing the usage of the HL_Limit class.
* 5. Ndim_BifurGaussian_example.cc: example program showing the usage of the HL_nDimBifurGaussian class.
* 6. Ndim_Gaussian.cc: example program showing the usage of the HL_nDimGaussian class.
* 7. Ndim_Likelihood_example.cc: example program showing the usage of the HL_nDimLikelihood class.
* 8. ProfLikelihood_example.cc: example program showing the usage of the HL_ProfLikelihood class.
To compile them, a proper variable has to be set during the cmake stage:

cd build
cmake -DEXECUTABLE=TRUE ..
make

After the compilation, the build directory will contain the executables for these examples. The HEPLike package also comes with test procedures for each of the classes. To run the tests, the user has to execute:

ctest

or, equivalently:

make test

If HEPLike was successfully installed, the output will look as follows:

Test project /storage/github/HEPLike/build
    Start 1: HL_Test_YAML
1/7 Test #1: HL_Test_YAML ..................... Passed 0.01 sec
    Start 2: HL_Limit
2/7 Test #2: HL_Limit ......................... Passed 0.27 sec
    Start 3: HL_Br_example
3/7 Test #3: HL_Br_example .................... Passed 0.02 sec
    Start 4: HL_BrBifurGaussian_example
4/7 Test #4: HL_BrBifurGaussian_example ....... Passed 0.01 sec
    Start 5: HL_Ndim_Gaussian
5/7 Test #5: HL_Ndim_Gaussian ................. Passed 0.01 sec
    Start 6: HL_ProfLikelihood_example
6/7 Test #6: HL_ProfLikelihood_example ........ Passed 0.25 sec
    Start 7: HL_Ndim_BifurGaussian_example
7/7 Test #7: HL_Ndim_BifurGaussian_example .... Passed 0.01 sec

100% tests passed, 0 tests failed out of 7

Total Test time (real) = 0.57 sec

### 4.1 Available measurements

The YAML files that contain the stored measurements are located in a second, independent repository. The reason for this separation is that the YAML files are expected to be updated more frequently than the code itself. It is expected that users and experiments will contribute to this repository; this model ensures that the repository contains the most up-to-date measurements. The repository can be found at: https://github.com/mchrzasz/HEPLikeData, and should be downloaded or cloned:

cd <some new dir>
git clone https://github.com/mchrzasz/HEPLikeData.git

Since the repository contains only YAML files, there is no need for any compilation. The repository contains a directory data, where all the YAML files are kept; it should be linked into the HEPLike package by a symbolic link. Inside data, the measurements are grouped by experiment (e.g. LHCb, ATLAS, CMS, etc.), and inside each experiment directory they are grouped according to the type of measurement within the collaboration, for example: RD, Semileptonic, Charmless, Exotica, etc. The YAML files should be named according to the publication report number, for example: CERN-EP-2018-331.yaml. If a single publication produced several independent measurements, users may encode them in separate files and append further information at the end of the file name, for example: CERN-PH-EP-2015-314_q2_01_0.98.yaml. Currently we are publishing the measurements that we have used in other projects [22, 23, 24]. The list of YAML files, with their content, is presented in Tab. LABEL:tab:yaml.

Table 2: Measurement YAML files currently provided in the HEPLikeData repository.

File | Description
---|---
CERN-EP-2017-100.yaml | YAML file encoding the measurement of the branching fractions of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mu\mu$ and $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mu\mu$ decays [20].
PH-EP-2015-314_q2_0.1_0.98.yaml, PH-EP-2015-314_q2_11.0_12.5.yaml, PH-EP-2015-314_q2_1.1_2.5.yaml, PH-EP-2015-314_q2_15.0_19.yaml, PH-EP-2015-314_q2_2.5_4.0.yaml, PH-EP-2015-314_q2_4.0_6.0.yaml, PH-EP-2015-314_q2_6.0_8.0.yaml | YAML files encoding the measurements of the angular coefficients of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mathup{{{K}}^{\scriptstyle{\ast}}}\mu\mu$ decay in different $q^{2}$ regions [25].
CERN-EP-2016-141_q2_0.1_0.98.yaml, CERN-EP-2016-141_q2_11.0_12.5.yaml, CERN-EP-2016-141_q2_1.1_2.5.yaml, CERN-EP-2016-141_q2_15.0_19.yaml, CERN-EP-2016-141_q2_2.5_4.0.yaml, CERN-EP-2016-141_q2_4.0_6.0.yaml, CERN-EP-2016-141_q2_6.0_8.0.yaml | YAML files encoding the measurements of the branching fraction of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mathup{{{K}}^{\scriptstyle{\ast}}}\mu\mu$ decay in different $q^{2}$ regions [26].
CERN-EP-2016-215_q2_0.1_0.98.yaml, CERN-EP-2016-215_q2_1.1_2.5.yaml, CERN-EP-2016-215_q2_2.5_4.yaml, CERN-EP-2016-215_q2_4_6.yaml, CERN-EP-2016-215_q2_6_8.yaml | YAML files encoding the measurements of the branching fraction of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mathup{{{K}}}\mathup{{{\pi}}}\mu\mu$ decay in different $q^{2}$ regions [27].
CERN-PH-EP-2015-145_0.1_2.yaml, CERN-PH-EP-2015-145_11_12.5.yaml, CERN-PH-EP-2015-145_15_19.yaml, CERN-PH-EP-2015-145_1_6.yaml, CERN-PH-EP-2015-145_2_5.yaml, CERN-PH-EP-2015-145_5_8.yaml | YAML files encoding the measurements of the branching fraction of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mathup{{{\phi}}}\mu\mu$ decay in different $q^{2}$ regions [27].
CERN-EP-2019-043.yaml | YAML file encoding the measurement of $R_{K}$ [28].
CERN-EP-2017-100_q2_0.045_1.1.yaml, CERN-EP-2017-100_q2_1.1_6.yaml | YAML files encoding the measurement of $R_{\mathup{{{K}}^{\scriptstyle{\ast}}}}$ [7].
b2sgamma.yaml | YAML file encoding the HFLAV average of $\mathup{{{b}}}\to\mathup{{{s}}}\mathup{{{\gamma}}}$ [15].
RD_RDstar.yaml | YAML file encoding the HFLAV averages of $R(\mathup{{{D}}})$ and $R(\mathup{{{D}}^{\scriptstyle{\ast}}})$ [15].
HFLAV_2016_157.yaml, HFLAV_2016_160.yaml, HFLAV_2016_161.yaml, HFLAV_2016_162.yaml, HFLAV_2016_164.yaml, HFLAV_2016_165.yaml, HFLAV_2016_166.yaml, HFLAV_2016_167.yaml, HFLAV_2016_168.yaml, HFLAV_2016_169.yaml, HFLAV_2016_170.yaml, HFLAV_2016_171.yaml, HFLAV_2016_176.yaml, HFLAV_2016_177.yaml, HFLAV_2016_178.yaml, HFLAV_2016_179.yaml, HFLAV_2016_180.yaml, HFLAV_2016_181.yaml, HFLAV_2016_182.yaml, HFLAV_2016_183.yaml, HFLAV_2016_211.yaml, HFLAV_2016_212.yaml | YAML files encoding the upper limits on $\tau$ Lepton Flavour Violation decays [27].

As already mentioned, the set of measurements is constantly growing, and it is expected that the community will contribute to developing this repository. Before a newly written YAML file is merged into the repository, it should be checked that it contains all the necessary information. This can be done with the Test_YAML.cc program, used in the following way:

cd HEPLike
./build/Test_YAML <PATH_TO_YAML>

If an entry is missing, the user will be notified by a printout. The HEPLikeData repository also contains a template YAML file (data/template.yaml), which can be used to create new measurement YAML files. As already mentioned, we provide useful utilities for the encoded measurements. The first is the ability to create a BibTeX file for the measurements that have been used.
The user should store the BibTeX keys or YAML file names in a text file:

Aaij:2017vbb
b2mumu.yaml

To prepare the BibTeX file, the user should run the make_citations.py script located in the utils directory:

cd utils
python make_citations.py list.txt

After this command a new file, references.bib, will be created, containing the full BibTeX entries. It can be used directly in preparing a publication. Another useful feature of HEPLike is the ability to search the measurement database for relevant measurements. The corresponding script is also located in utils. Currently, the database can be searched using the year of publication, the arXiv number, the author of the YAML file, or the unique name of the measurement. The syntax for running a search is the following:

python lookup.py --Arxiv 1705.05802
Found files:
../data/examples/RKstar_lowq2.yaml

To see all available search options, the user can run the script with the help option: python lookup.py -h.

## 5 Summary

We have presented the computer program HEPLike, which enables the construction and evaluation of experimental likelihoods. The software is designed to handle the interpretation of a wide range of published results. It also allows direct fits to data, once these are provided by the experimental collaborations. The program can easily be interfaced with other computer programs and aims to help users who perform fits to experimental results in their scientific work. It is especially useful for large fitting collaborations, which until now had to implement the experimental measurements on their own. The measurements themselves are stored in YAML files in a separate repository. This allows for easy extension of the database without the need for compilation. Furthermore, users and experimental collaborations can share their encoded measurements with the community.

## Acknowledgments

This work is partly supported by the CERN FCC Design Study Program. The research of M. Chrzaszcz is funded by the Polish National Agency for Academic Exchange under the Bekker program. M. Chrzaszcz is also grateful to the Foundation for Polish Science (FNP) for its support. We would like to thank Mike Williams, Patrick Koppenburg, Pat Scott, Danny van Dyk and Maria Moreno Llacer for invaluable comments about our manuscript.

## References

* [1] P. Athron, et al., GAMBIT: The Global and Modular Beyond-the-Standard-Model Inference Tool, Eur. Phys. J. C77 (11) (2017) 784, [Addendum: Eur. Phys. J. C78, no. 2, 98 (2018)]. arXiv:1705.07908, doi:10.1140/epjc/s10052-017-5513-2, 10.1140/epjc/s10052-017-5321-8.
* [2] J. C. Costa, et al., Likelihood Analysis of the Sub-GUT MSSM in Light of LHC 13-TeV Data, Eur. Phys. J. C78 (2) (2018) 158. arXiv:1711.00458, doi:10.1140/epjc/s10052-018-5633-3.
* [3] P. Bechtle, K. Desch, P. Wienemann, Fittino, a program for determining MSSM parameters from collider observables using an iterative method, Comput. Phys. Commun. 174 (2006) 47–70. arXiv:hep-ph/0412012, doi:10.1016/j.cpc.2005.09.002.
* [4] F. Mahmoudi, New constraints on supersymmetric models from b -> s gamma, JHEP 12 (2007) 026. arXiv:0710.3791, doi:10.1088/1126-6708/2007/12/026.
* [5] T. Feldmann, D. Van Dyk, K. K. Vos, Revisiting $B\to\pi\pi\ell\nu$ at large dipion masses, JHEP 10 (2018) 030. arXiv:1807.01924, doi:10.1007/JHEP10(2018)030.
* [6] J. Kumar, D. London, R. Watanabe, Combined Explanations of the $b\to s\mu^{+}\mu^{-}$ and $b\to c\tau^{-}{\bar{\nu}}$ Anomalies: a General Model Analysis, Phys. Rev. D99 (1) (2019) 015007.
arXiv:1806.07403, doi:10.1103/PhysRevD.99.015007.
* [7] R. Aaij, et al., Test of lepton universality with $B^{0}\rightarrow K^{*0}\ell^{+}\ell^{-}$ decays, JHEP 08 (2017) 055. arXiv:1705.05802, doi:10.1007/JHEP08(2017)055.
* [8] R. Aaij, et al., Search for the lepton-flavour violating decay $D^{0}\to e^{\pm}\mu^{\mp}$, Phys. Lett. B754 (2016) 167–175. arXiv:1512.00322, doi:10.1016/j.physletb.2016.01.029.
* [9] G. J. Feldman, R. D. Cousins, A Unified approach to the classical statistical analysis of small signals, Phys. Rev. D57 (1998) 3873–3889. arXiv:physics/9711021, doi:10.1103/PhysRevD.57.3873.
* [10] C. Rover, C. Messenger, R. Prix, Bayesian versus frequentist upper limits, in: Proceedings, PHYSTAT 2011 Workshop on Statistical Issues Related to Discovery Claims in Search Experiments and Unfolding, CERN, Geneva, Switzerland, 17-20 January 2011, CERN, Geneva, 2011, pp. 158–163. arXiv:1103.2987, doi:10.5170/CERN-2011-006.158.
* [11] R. Aaij, et al., Search for the decays $B_{s}^{0}\to\tau^{+}\tau^{-}$ and $B^{0}\to\tau^{+}\tau^{-}$, Phys. Rev. Lett. 118 (25) (2017) 251802. arXiv:1703.02508, doi:10.1103/PhysRevLett.118.251802.
* [12] A. L. Read, Modified frequentist analysis of search results (the $CL_{s}$ method) (CERN-OPEN-2000-205). URL https://cds.cern.ch/record/451614
* [13] S. S. Wilks, The large-sample distribution of the likelihood ratio for testing composite hypotheses, Ann. Math. Statist. 9 (1) (1938) 60–62. doi:10.1214/aoms/1177732360. URL https://doi.org/10.1214/aoms/1177732360
* [14] M. Tanabashi, et al., Review of particle physics, Phys. Rev. D 98 (2018) 030001. doi:10.1103/PhysRevD.98.030001. URL https://link.aps.org/doi/10.1103/PhysRevD.98.030001
* [15] Y. Amhis, et al., Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016, Eur. Phys. J. C77 (12) (2017) 895. arXiv:1612.07233, doi:10.1140/epjc/s10052-017-5058-4.
* [16] R. Barlow, Asymmetric systematic errors, arXiv:physics/0306138.
* [17] R. Aaij, et al., Test of lepton universality using $B^{+}\rightarrow K^{+}\ell^{+}\ell^{-}$ decays, Phys. Rev. Lett. 113 (2014) 151601. arXiv:1406.6482, doi:10.1103/PhysRevLett.113.151601.
* [18] I. Antcheva, et al., ROOT: A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499–2512. arXiv:1508.07749, doi:10.1016/j.cpc.2009.08.005.
* [19] E. Maguire, L. Heinrich, G. Watt, HEPData: a repository for high energy physics data, J. Phys. Conf. Ser. 898 (10) (2017) 102006. arXiv:1704.05473, doi:10.1088/1742-6596/898/10/102006.
* [20] R. Aaij, et al., Measurement of the $B^{0}_{s}\to\mu^{+}\mu^{-}$ branching fraction and effective lifetime and search for $B^{0}\to\mu^{+}\mu^{-}$ decays, Phys. Rev. Lett. 118 (19) (2017) 191801. arXiv:1703.05747, doi:10.1103/PhysRevLett.118.191801.
* [21] M. Aaboud, et al., Measurement of the $t\bar{t}Z$ and $t\bar{t}W$ cross sections in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, arXiv:1901.03584.
* [22] F. U. Bernlochner, et al., FlavBit: A GAMBIT module for computing flavour observables and likelihoods, Eur. Phys. J. C77 (11) (2017) 786. arXiv:1705.07933, doi:10.1140/epjc/s10052-017-5157-2.
* [23] P. Athron, et al., Global fits of GUT-scale SUSY models with GAMBIT, Eur. Phys. J. C77 (12) (2017) 824. arXiv:1705.07935, doi:10.1140/epjc/s10052-017-5167-0.
* [24] P. Athron, et al., A global fit of the MSSM with GAMBIT, Eur. Phys. J. C77 (12) (2017) 879. arXiv:1705.07917, doi:10.1140/epjc/s10052-017-5196-8.
* [25] R.
Aaij, et al., Angular analysis of the $B^{0}\to K^{*0}\mu^{+}\mu^{-}$ decay using 3 fb-1 of integrated luminosity, JHEP 02 (2016) 104. arXiv:1512.04442, doi:10.1007/JHEP02(2016)104.
* [26] R. Aaij, et al., Measurements of the S-wave fraction in $B^{0}\rightarrow K^{+}\pi^{-}\mu^{+}\mu^{-}$ decays and the $B^{0}\rightarrow K^{\ast}(892)^{0}\mu^{+}\mu^{-}$ differential branching fraction, JHEP 11 (2016) 047, [Erratum: JHEP 04, 142 (2017)]. arXiv:1606.04731, doi:10.1007/JHEP11(2016)047, 10.1007/JHEP04(2017)142.
* [27] R. Aaij, et al., Differential branching fraction and angular moments analysis of the decay $B^{0}\to K^{+}\pi^{-}\mu^{+}\mu^{-}$ in the $K^{*}_{0,2}(1430)^{0}$ region, JHEP 12 (2016) 065. arXiv:1609.04736, doi:10.1007/JHEP12(2016)065.
* [28] R. Aaij, et al., Search for lepton-universality violation in $B^{+}\to K^{+}\ell^{+}\ell^{-}$ decays, arXiv:1903.09252.
2024-09-04T02:54:58.268761
2020-03-09T09:01:53
2003.03977
{ "authors": "Nikhil Iyer, V Thejas, Nipun Kwatra, Ramachandran Ramjee, Muthian\n Sivathanu", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26109", "submitter": "Nikhil Iyer", "url": "https://arxiv.org/abs/2003.03977" }
arxiv-papers
# Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule

Nikhil Iyer, Microsoft Research India <EMAIL_ADDRESS>
V Thejas*, Atlassian India <EMAIL_ADDRESS>
Nipun Kwatra, Microsoft Research India <EMAIL_ADDRESS>
Ramachandran Ramjee, Microsoft Research India <EMAIL_ADDRESS>
Muthian Sivathanu, Microsoft Research India <EMAIL_ADDRESS>
*Work done during an internship at Microsoft Research India

###### Abstract

Several papers argue that wide minima generalize better than narrow minima. In this paper, through detailed experiments, we not only corroborate the generalization properties of wide minima, but also provide empirical evidence for a new hypothesis that the density of wide minima is likely lower than the density of narrow minima. Further, motivated by this hypothesis, we design a novel explore-exploit learning rate schedule. On a variety of image and natural language datasets, compared to their original hand-tuned learning rate baselines, we show that our explore-exploit schedule can result in either up to 0.84% higher absolute accuracy using the original training budget or up to 57% reduced training time while achieving the original reported accuracy. For example, we achieve state-of-the-art (SOTA) accuracy for the IWSLT'14 (DE-EN) dataset by just modifying the learning rate schedule of a high performing model.

Keywords: deep learning, generalization, learning rate schedule, optimization

## 1 Introduction

One of the fascinating properties of deep neural networks (DNNs) is their ability to generalize well, i.e., deliver high accuracy on the unseen test dataset. It is well known that learning rate schedules play an important role in the generalization performance (Keskar et al., 2016; Wu et al., 2018; Goyal et al., 2017). In this paper, we study the question: what are the key properties of a learning rate schedule that help DNNs generalize well during training?

We start with a series of experiments training Resnet18 on Cifar-10 over 200 epochs. We vary the number of epochs trained at a high learning rate of $0.1$, called the explore epochs, from 0 to 100, and divide up the remaining epochs equally for training with learning rates of $0.01$ and $0.001$. Note that the training loss typically stagnates around 50 epochs with the $0.1$ learning rate. Despite that, we find that as the number of explore epochs increases to 100, the average test accuracy also increases. We also find that the minima found in higher test accuracy runs are wider than the minima from lower test accuracy runs, corroborating past work on wide minima and generalization (Keskar et al., 2016; Hochreiter and Schmidhuber, 1997; Jastrzebski et al., 2017; Wang et al., 2018). Moreover, what was particularly surprising was that, even when using fewer explore epochs, a few runs out of many trials still resulted in high test accuracies! Thus, we not only find that an initial exploration phase with a high learning rate is essential to the good generalization of DNNs, but also that this exploration phase needs to be run for sufficient time, even if the training loss stagnates much earlier. Further, we find that, even when the exploration phase is not given sufficient time, a few runs still see high test accuracy values.

To explain these observations, we hypothesize that, in the DNN loss landscape, the density of narrow minima is significantly higher than that of wide minima. Intuitively, a large learning rate can escape narrow minima easily (as the optimizer can jump out of them with large steps).
However, once it reaches a wide minima, it is likely to get stuck in it (if the "width" of the wide minima is large compared to the step size). With fewer explore epochs, a large learning rate might still get lucky occasionally in finding a wide minima, but it invariably finds only a narrower minima due to their higher density. As the explore duration increases, the probability of eventually landing in a wide minima also increases. Thus, a minimum duration of explore is necessary to land in a wide minimum with high probability. An observation on the rarity of wide minima has been hinted at by prior work (Wu et al., 2018; Baldassi et al., 2020) based on theoretical analysis of simple neural networks (see Section 2). In this paper, we add significant empirical evidence to these theoretical observations. We believe that all these results together constitute sufficient evidence for this observation to now be classified as a hypothesis, which we term the wide-minima density hypothesis.

The hypothesis helps explain not only our experiments but also the generalization out-performance of prior heuristic-based learning rate decay schemes such as cosine decay (Loshchilov and Hutter, 2016). Cosine decay implicitly maintains a higher learning rate during the first half of training compared to schemes like linear decay. Based on the hypothesis, the higher learning rate allows cosine decay to find wider minima with higher probability, resulting in cosine decay's better generalization compared to linear decay.

Apart from helping explain empirical observations, the hypothesis also enables a principled learning rate schedule design that explicitly accounts for the requisite explore duration. Motivated by the hypothesis, we design a novel Explore-Exploit learning rate schedule, where the initial explore phase optimizes at a high learning rate in order to arrive in the vicinity of a wide minimum. This is followed by an exploit phase which descends to the bottom of this wide minimum. We give the explore phase enough time so that the probability of landing in a wide minima is high. For the exploit phase, we experimented with multiple schemes, and found a simple, parameter-less, linear decay to zero to be effective. Thus, our proposed learning rate schedule optimizes at a constant high learning rate for a given duration, followed by a linear decay to zero. We call this learning rate schedule the Knee schedule (see the sketch below).

We extensively evaluate the Knee schedule across a wide range of models and datasets, ranging from NLP (BERT pre-training, Transformer on WMT'14 (EN-DE) and IWSLT'14 (DE-EN)) to CNNs (ImageNet on ResNet-50, Cifar-10 on ResNet18), and spanning multiple optimizers: SGD Momentum, Adam, RAdam, and LAMB. In all cases, the Knee schedule improves the test accuracy of state-of-the-art hand-tuned learning rate schedules, when trained using the original training budget. The explore duration is a hyper-parameter in the Knee schedule, but even if we set it to a fixed 50% of the total training budget, we find that it still outperforms prior schemes. We also experimented with reducing the training budget, and found that the Knee schedule can achieve the same accuracy as the baseline under significantly reduced training budgets. For the BERTLARGE pretraining, WMT'14 (EN-DE) and ImageNet experiments, we are able to train with 33%, 57% and 44% less training budget, respectively, for the same test accuracy. This corresponds to significant savings in GPU compute, e.g. savings of over 1000 V100 GPU-hours for BERTLARGE pretraining.
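Stated as code, the Knee schedule reduces to a two-phase rule. The sketch below is our own minimal rendering (not the authors' released implementation): it returns the learning rate for a given epoch, with the peak rate and the explore duration as the only knobs, using the fixed 50% explore fraction mentioned above as an example.

#include <cstdio>

// Knee schedule: a constant "explore" phase at the peak learning rate,
// followed by a linear "exploit" decay to zero.
double knee_lr(int epoch, int total_epochs, int explore_epochs, double peak_lr) {
    if (epoch < explore_epochs) return peak_lr;  // explore at a high LR
    double done = epoch - explore_epochs;
    double span = total_epochs - explore_epochs;
    return peak_lr * (1.0 - done / span);        // linear decay to zero
}

int main() {
    // Example: a 200-epoch budget with a 50% explore fraction at LR 0.1.
    for (int e = 0; e < 200; e += 25)
        std::printf("epoch %3d: lr = %.4f\n", e, knee_lr(e, 200, 100, 0.1));
    return 0;
}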
savings of over 1000 V100 GPU-hours for BERTLARGE pretraining. The main contributions of our work (code available at https://github.com/nikhil-iyer-97/wide-minima-density-hypothesis) are:

1. A hypothesis of lower density of wide minima in the DNN loss landscape, backed by extensive experiments, that explains why a high learning rate needs to be maintained for a sufficient duration to achieve good generalization.
2. The hypothesis explains the good performance of heuristic-based schemes such as cosine decay, and promotes a principled design of learning rate decay schemes.
3. Motivated by the hypothesis, we design an Explore-Exploit learning rate schedule called Knee schedule that outperforms prior heuristic-based learning rate schedules, including achieving state-of-the-art results on the IWSLT’14 (DE-EN) dataset.

## 2 Related Work

Generalization. There has been a lot of work on understanding the generalization characteristics of DNNs. Kawaguchi (2016) found that DNNs have many local minima, but that all local minima are also global minima. It has been observed by several authors that wide minima generalize better than narrow minima (Arora et al., 2018; Hochreiter and Schmidhuber, 1997; Keskar et al., 2016; Jastrzebski et al., 2017; Wang et al., 2018), but there have been other works questioning this hypothesis as well (Dinh et al., 2017; Golatkar et al., 2019; Guiroy et al., 2019; Jastrzebski et al., 2019; Yoshida and Miyato, 2017). Keskar et al. (2016) found that small-batch SGD generalizes better and lands in wider minima than large-batch SGD. However, recent work has been able to generalize quite well even with very large batch sizes (Goyal et al., 2017; McCandlish et al., 2018; Shallue et al., 2018), by scaling the learning rate linearly as a function of the batch size. Jastrzebski et al. (2019) analyze how batch size and learning rate influence the curvature of not only the SGD endpoint but also the whole trajectory. They found that small-batch or large-step SGD have similar characteristics, and yield a smaller and earlier peak of the spectral norm as well as a smaller largest eigenvalue. Chaudhari et al. (2019) and Baldassi et al. (2019) propose methods to drive the optimizer to wide minima. Wang et al. (2018) analytically show that the generalization of a model is related to the Hessian, and propose a new metric for the generalization capability of a model that is unaffected by the model reparameterization of Dinh et al. (2017). Yoshida and Miyato (2017) argue that regularizing the spectral norm of the weights of the neural network helps it generalize better. On the other hand, Arora et al. (2018) derive generalization bounds by showing that networks with low stable rank (high spectral norm) generalize better. Guiroy et al. (2019) look at generalization in gradient-based meta-learning and show experimentally that generalization and wide minima are not always correlated. Finally, Golatkar et al. (2019) show that regularization results in higher test accuracy specifically when it is applied during the initial phase of training, similar to the importance of Knee schedule’s explore phase during the initial phase of training. In a similar vein, Li et al. (2019) explain the regularization benefits of an initial higher learning rate by showing that a higher learning rate helps networks learn easier-to-fit general patterns.

Neural network loss landscapes.
The loss landscape of neural networks has been extensively studied (Draxler et al., 2018; Freeman and Bruna, 2016; Garipov et al., 2018; Sagun et al., 2017). These papers point out that the loss landscape contains both wide and narrow minima, and there may even exist a path from one minimum to another without barriers. However, there are multiple paths between these minima, and some paths indeed face barriers (e.g., see Figure 1 in Draxler et al. (2018)). Since we don’t know which path SGD and other optimizers might follow, even if wide and narrow minima are part of a single basin, SGD and other optimizers might still require higher learning rates to navigate from narrow to wide minima.

Lower density of wide minima. Wu et al. (2018) compare the sharpness of minima obtained by full-batch gradient descent (GD) with different learning rates for small neural networks on the FashionMNIST and Cifar10 datasets. They find that GD with a given learning rate finds the theoretically sharpest feasible minimum for that learning rate. Thus, in the presence of several flatter minima, GD with lower learning rates does not find them, leading to the conjecture that the density of sharper minima is perhaps larger than the density of wider minima. Baldassi et al. (2020) show analytically for simple, two-layer non-convex networks that wide minima exist and are rare, compared to narrow minima, local minima and saddle points. In this paper, we add significant evidence to these theoretical observations based on empirical results obtained on large-scale, state-of-the-art neural networks through carefully designed experiments.

## 3 Wide-Minima Density Hypothesis

Many popular learning rate schedules, such as the step decay schedules for image datasets, start training with a high learning rate and then reduce the learning rate periodically. For example, consider the case of Cifar-10 on Resnet-18, trained using a typical step learning rate schedule of $0.1$, $0.01$, and $0.001$ for 100, 50, and 50 epochs, respectively. In many such schedules, even though the training loss stagnates after several epochs of high learning rate, one still needs to continue training at the high learning rate in order to get good generalization. For example, Figure 1 shows the training loss for Cifar-10 on Resnet-18, trained with a fixed learning rate of 0.1 (orange curve), compared to a model trained via a step schedule with the learning rate reduced at epoch 50 (blue curve). As can be seen from the figure, the training loss stagnates after $\approx$ 50 epochs for the orange curve, and locally it makes sense to reduce the learning rate to decrease the loss. However, as shown in Figure 2, generalization is directly correlated with the duration of training at the high learning rate, with the highest test accuracy achieved when the high learning rate is used for 100 epochs, well past the point where the training loss stagnates. Note that the final training loss remains similar for all runs.

To understand the above phenomenon, we perform another experiment. We train Cifar-10 on Resnet-18 for 200 epochs, using a high learning rate of $0.1$ for only 30 epochs and then learning rates of $0.01$ and $0.001$ for 85 epochs each. We repeat this training 50 times with different random weight initializations. On average, as expected, this training yields a low test accuracy of $94.81$. However, in 1 of the 50 runs, we find that the test accuracy reaches $95.24$, even higher than the average accuracy of $95.1$ obtained while training at the high learning rate for 100 epochs!
Figure 1: Training loss for Cifar-10 on Resnet-18. The orange plot uses a fixed learning rate of 0.1, while in the blue plot, the learning rate is reduced from 0.1 to 0.01 at epoch 50.

Figure 2: Cifar-10 on Resnet-18 trained for 200 epochs with Momentum. A learning rate of 0.1 is used for the explore epochs. Half the remaining epochs are trained at 0.01 and the other half at 0.001. Reported results are averages over 4 runs.

Epochs at 0.1 LR | Test Accuracy Avg. (Std. Dev.) | Train Loss Avg. (Std. Dev.)
---|---|---
0 | 94.34 (0.13) | 0.0017 (8e-5)
30 | 94.81 (0.15) | 0.0017 (8e-5)
40 | 94.91 (0.14) | 0.0018 (9e-5)
60 | 95.01 (0.14) | 0.0018 (1e-4)
80 | 95.05 (0.15) | 0.0019 (1e-4)
100 | 95.10 (0.14) | 0.0021 (1e-4)

### 3.1 Hypothesis

To explain the above observations (using a high learning rate for a short duration results in low average test accuracy with rare occurrences of high test accuracy, while using the same high learning rate for a long duration achieves high average test accuracy with frequent occurrences of high test accuracy), we introduce a new hypothesis. We hypothesize that, in the DNN loss landscape, the density of narrow minima is significantly higher than that of wide minima. Intuitively, a large learning rate can escape narrow minima “valleys” easily (as the optimizer can jump out of them with large steps). However, once it reaches a wide minimum “valley”, it is likely to get stuck in it (if the “width” of the wide valley is large compared to the step size). This intuition is backed by theoretical results from Xie et al. (2020) that show that the time to escape a minimum using SGD is exponential in the inverse of the learning rate as well as the inverse of the sharpness (measured by the eigenvalue of the Hessian at the minimum). Thus, large learning rates escape narrow minima exponentially faster than wide minima.

If wide and narrow minima were uniformly distributed, SGD with a large LR would be able to quickly escape the narrow minima, land in a wide minimum and get stuck there. Yet, we see that we need to maintain a large LR for a significant duration to land in a wide minimum with high probability. On the other hand, if our hypothesis is true, i.e., wide minima are much fewer than narrow minima, the probability of landing in a wide minimum after escaping a narrow one is low, and the optimizer needs to take many steps to have a high probability of eventually landing in a wide minimum. Thus, the hypothesis is a better explanation for the observation in Figure 2, where the average accuracy continues to improve as we increase the number of high learning rate training steps. The hypothesis also explains why very few (just 1) of the 50 runs trained at the $0.1$ learning rate for just 30 epochs manage to attain high accuracy: these runs just got lucky in a probabilistic sense and landed in a wide minimum even with a shorter duration of explore.
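This escape intuition is easy to visualize in one dimension. The following toy simulation is our own illustration, not an experiment from the paper: it runs noisy gradient descent on a hand-crafted loss consisting of a wide bowl overlaid with narrow ripples. All names and constants are illustrative assumptions.

```python
# Toy 1D illustration (ours, not from the paper): noisy gradient descent on
# f(x) = x^2/200 + 0.5*sin(8x), a wide bowl overlaid with narrow ripples.
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # derivative of the toy loss: wide-bowl term plus narrow-ripple term
    return x / 100.0 + 4.0 * np.cos(8.0 * x)

def run(lr, steps=5000, x0=10.0, noise=1.0):
    x = x0
    for _ in range(steps):
        x -= lr * (grad(x) + noise * rng.standard_normal())
    return x

# A large step size hops across the ripples and drifts to the wide bowl near 0;
# a small step size remains stuck in a ripple close to the starting point.
print("final x with lr = 0.1  :", run(0.1))
print("final x with lr = 0.001:", run(0.001))
```

Under this caricature, a large step size escapes the narrow valleys quickly but, once many narrow valleys separate it from the single wide one, it still needs many steps before it lands there, matching the behavior observed above.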
Figure 3: Histogram of minima sharpness (Keskar et al., 2016) for 50 random trials of Cifar-10 on Resnet-18, with panels (a) Explore 0, (b) Explore 30, (c) Explore 60, and (d) Explore 100. Each panel shows histograms for runs with a different number of explore epochs. The distribution moves toward lower sharpness and tightens as the number of explore epochs increases.

Figure 4: Histogram of test accuracy for 50 random trials of Cifar-10 on Resnet-18, with panels (a) Explore 0, (b) Explore 30, (c) Explore 60, and (d) Explore 100. Each panel shows histograms for runs with a different number of explore epochs. The distribution moves toward higher test accuracy and sharpens as the number of explore epochs increases.

To validate this hypothesis further, we run experiments similar to the one in Figure 2. Specifically, we train the Resnet-18 model on Cifar-10 for 200 epochs using a standard step schedule with learning rates of $0.1$, $0.01$, and $0.001$. We vary the number of epochs trained using the high learning rate of 0.1, called the explore epochs, from 0 to 100 epochs, and divide up the rest of the training equally between 0.01 and 0.001. For each experimental setting, we conduct 50 random trials and plot the distributions of final test accuracy and minima sharpness as defined by the metric in Keskar et al. (2016) (see Section 3.2). If our hypothesis is true, then the more you explore, the higher the probability of landing (and getting stuck) in a wide-minimum region, which should cause the distribution to tighten and move towards wider minima (lower sharpness) as the number of explore steps increases. This is exactly what is observed in Figure 3. Also, since wide minima correlate with higher test accuracy, we should see the test accuracy distribution move towards higher accuracy and sharpen as the number of explore steps increases. This is confirmed as well in Figure 4.

Longer training with a low learning rate is not sufficient. Finally, to verify whether explore at a high learning rate is essential, we train Cifar-10 for 10,000 epochs at a fixed lower learning rate of 0.001. The training loss converged, but the final test accuracy was only 93.9, compared to an accuracy of over 95% in 200 epochs in Figure 2. Thus, even training $50\times$ longer at a low learning rate is not sufficient to achieve good generalization. Again, this observation ties in well with the theoretical results from Xie et al. (2020), where the authors show that the time to escape a minimum using SGD is exponential in the inverse of the learning rate. This result therefore adds further evidence to our density hypothesis, since even training $50\times$ longer at a low learning rate is not sufficient to land in a wide minimum.

Multi-scale. Given the importance of explore at a high learning rate, a natural question that may arise is whether explore is necessary at smaller learning rates as well. To answer this, we train the same network for a total of 200 epochs with an initial high learning rate of $0.1$ for 100 epochs, but now we vary the number of epochs trained with the learning rate of $0.01$ (we call this finer-scale explore), and train with a learning rate of $0.001$ for the remaining epochs. As can be seen from Table 1, although the final training loss remains similar, we find that finer-scale explore also plays a role similar to the initial explore in determining the final test accuracy. This indicates that our hypothesis about the density of wide/narrow regions indeed holds at multiple scales.

Table 1: Cifar-10 on Resnet-18 trained for 200 epochs. A learning rate of 0.1 is used for the first 100 epochs. We then vary the number of epochs trained with a learning rate of $0.01$ (called finer-scale explore), and train the remaining epochs with a learning rate of $0.001$. We report average values over 3 runs.
Explore Epochs (Finer-scale) | Test Accuracy | Training Loss | Sharpness
---|---|---|---
10 | 94.78 | 0.0031 | 5.48
20 | 94.91 | 0.0026 | 4.47
30 | 95.00 | 0.0023 | 4.02
40 | 95.02 | 0.0021 | 3.91
50 | 95.10 | 0.0021 | 3.54

### 3.2 Minima Sharpness

Our hypothesis predicts that higher explore helps the optimizer land in a wider minimum, which in turn helps generalization. We demonstrated this empirically in Figure 3, where we plotted the distribution of minima sharpness, as measured by the sharpness metric introduced by Keskar et al. (2016). In this section, we describe Keskar’s sharpness metric in detail. We also introduce a simple projected gradient ascent scheme to compute this metric efficiently, which scales well to large networks. Finally, we also evaluate our hypothesis with a different metric for minima sharpness, the Fisher Score, which is based on the Fisher information matrix.

#### 3.2.1 Keskar’s Sharpness Metric

Keskar’s sharpness metric is based on measuring the maximum jump in the network’s output function $F$ in a small neighborhood around the minimum. After a few simplifications, Keskar’s metric for sharpness around a point $x$ can be written as:

$S_{x,F}(\epsilon):=\frac{(\max_{y\in C_{\epsilon}(x)}F(x+y))-F(x)}{1+F(x)}\times 100,$ (1)

where $C_{\epsilon}(x)$ is an $\epsilon$ neighborhood around $x$. Keskar et al. (2016) mention that under certain conditions and for small values of $\epsilon$, $S_{x,F}$ is proportional to the largest eigenvalue of the Hessian. Please see Keskar et al. (2016) for more details. For our measurements we choose an $\epsilon$ of $1e^{-4}$.

For solving the maximization problem in Equation 1, Keskar et al. (2016) use a second-order L-BFGS-B (Byrd et al., 2003) optimization scheme. However, in our experiments we found this method to be very slow. To combat this, Keskar et al. (2016) limited their runs to 10 iterations, but we found that the results were suboptimal with so few iterations. Instead, we employed a projected gradient ascent scheme to solve Equation 1. In each optimization step, we took a small step with a learning rate of 0.001 in the gradient direction and projected the updated point to lie inside $C_{\epsilon}(x)$. Because of its first-order nature, this method is much faster. We found that even 1000 iterations were fast to compute, and the results were much better than the second-order method in all cases we evaluated.

Using Keskar’s sharpness metric, we had shown in Figure 3 that the distribution of minima sharpness moves towards lower values as the number of explore epochs increases. In Table 2, we also report the average sharpness of the minima for varying explore durations. As predicted by our hypothesis, the average sharpness decreases as the number of explore epochs increases.

Table 2: Keskar’s sharpness metric for Cifar-10 on Resnet-18 trained for 200 epochs with Momentum. A learning rate of 0.1 is used for the explore epochs. Half the remaining epochs are trained at 0.01 and the other half at 0.001. We report the average sharpness over 50 different trials.

Explore Epochs | Sharpness
---|---
0 | 10.56
30 | 5.43
60 | 3.86
100 | 3.54
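Before turning to the Fisher score, a minimal sketch of the projected gradient ascent scheme described above is given below. This is our reconstruction for illustration, not the paper’s released code: it assumes the parameters have been flattened into a single vector and that `loss_fn` is a closure evaluating the training loss at a given parameter vector; the box form of the neighborhood $C_{\epsilon}(x)$ follows Keskar et al. (2016).

```python
# Sketch (our reconstruction, not the paper's code) of projected gradient
# ascent for Keskar's sharpness metric in Equation 1. `loss_fn` is an assumed
# closure mapping a flat parameter vector to the training loss.
import torch

def keskar_sharpness(x, loss_fn, eps=1e-4, steps=1000, lr=1e-3):
    bound = eps * (x.detach().abs() + 1.0)       # box C_eps(x), following Keskar et al.
    y = torch.zeros_like(x, requires_grad=True)  # perturbation, starts at zero
    base = loss_fn(x).item()                     # F(x) at the minimum
    for _ in range(steps):
        loss = loss_fn(x + y)
        g, = torch.autograd.grad(loss, y)
        with torch.no_grad():
            y += lr * g                          # small step up the loss
            y.clamp_(-bound, bound)              # project back into C_eps(x)
    worst = loss_fn(x + y.detach()).item()       # best max of F(x+y) found
    return (worst - base) / (1.0 + base) * 100.0
```

The defaults mirror the settings reported above (step size 0.001, 1000 iterations, $\epsilon = 1e^{-4}$); being first-order, each iteration costs only one backward pass.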
#### 3.2.2 Fisher Score

The maximum eigenvalue of the Fisher Information Matrix (FIM) estimates the highest curvature at a point, and is used as another metric to measure minima sharpness (Sokol and Park, 2018). We used an unbiased estimate of the true Fisher matrix (see Kunstner et al. (2019)) with 10 unbiased samples per training data point. Table 3 shows the average Fisher scores for the Cifar-10 experiments at varying explore durations. Again, the sharpness measured by the Fisher score decreases as the number of explore epochs increases.

Table 3: Fisher Score for Cifar-10 on Resnet-18 trained for 200 epochs with Momentum. A learning rate of 0.1 is used for the explore epochs. Half the remaining epochs are trained at 0.01 and the other half at 0.001. We report the average Fisher score over 10 different trials.

Explore Epochs | FIM score
---|---
0 | 0.051
30 | 0.046
60 | 0.043
100 | 0.042

## 4 Explore-Exploit Learning Rate Schedule

Given that we need to explore at multiple scales for good generalization, how do we go about designing a good learning rate schedule? The search space of the varying learning rate steps and their respective explore durations is enormous. Fortunately, since the explore at the initial scale searches over the entire loss surface while explore at finer scales is confined to the wide-minimum region identified by the initial explore, the former is more crucial. In our experiments as well, we found that the initial portion of training is much more sensitive to exploration and needs a substantial number of explore steps, while after this initial phase, several decay schemes worked equally well. This is similar to the observations in Golatkar et al. (2019), where the authors found that regularization such as weight decay and data augmentation matters significantly only during the initial phase of training.

The above observations motivate our Explore-Exploit learning rate schedule, where the explore phase first optimizes at a high learning rate for some minimum time in order to land in the vicinity of a wide minimum. We should give the explore phase enough time (a hyper-parameter) so that the probability of landing in a wide minimum is high. After the explore phase, we know with high probability that the optimizer is in the vicinity of a wide region. We now start the exploit phase to descend to the bottom of this wide region while progressively decreasing the learning rate. Any smoothly decaying learning rate schedule can be thought of as doing micro explore-exploit at progressively reduced scales. A steady descent would allow more explore duration at all scales, while a fast descent would explore less at higher learning rates. We experimented with multiple schedules for the exploit phase, and found a simple linear decay to zero, which does not require any hyper-parameter, to be effective in all the models/datasets we tried. We call our proposed learning rate schedule, which starts at a constant high learning rate for some minimum time, followed by a linear decay to zero, the Knee schedule.

Note that any learning rate decay scheme incorporates an implicit explore during the initial part, where the learning rate stays high enough. To evaluate the benefit of an explicit explore phase, we compare Knee schedule against several decay schemes such as linear and cosine. Interestingly, the results depend on the length of training. For long-budget experiments, simple decay schemes perform comparably to Knee schedule in some experiments, since the implicit explore duration is also large, helping these schemes achieve good generalization. However, for short-budget experiments, these schemes perform significantly worse than Knee schedule, since the implicit explore duration is much shorter. See Tables 4, 5 and 6 for the comparison.
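Concretely, the Knee schedule can be expressed as a standard PyTorch `LambdaLR` multiplier on the optimizer’s base (seed) learning rate. The sketch below is one possible implementation with names of our choosing, not code from the paper’s repository; the optional linear warmup is only relevant for large-batch settings, as discussed next.

```python
# Minimal sketch of the Knee schedule as a LambdaLR multiplier: an optional
# linear warmup, a constant explore phase at the seed LR, then a linear decay
# to zero at the end of training. Helper name and structure are ours.
from torch.optim.lr_scheduler import LambdaLR

def knee_schedule(optimizer, explore_steps, total_steps, warmup_steps=0):
    def factor(step):
        if step < warmup_steps:                    # optional large-batch warmup
            return step / max(1, warmup_steps)
        if step < warmup_steps + explore_steps:    # explore: hold the seed LR
            return 1.0
        decay_steps = total_steps - warmup_steps - explore_steps
        done = step - warmup_steps - explore_steps
        return max(0.0, 1.0 - done / max(1, decay_steps))  # exploit: linear to 0
    return LambdaLR(optimizer, lr_lambda=factor)
```

Here a “step” can be an epoch or an iteration, as long as explore_steps and total_steps use the same unit; the seed learning rate is simply the base learning rate set on the optimizer.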
Warmup. Some optimizers such as Adam use an initial warmup phase to slowly increase the learning rate. However, as shown in Liu et al. (2019), learning rate warmup is needed mainly to reduce variance during the initial training stages, and can be eliminated with an optimizer such as RAdam. Learning rate warmup is also used for large-batch training (Goyal et al., 2017). Here, warmup is necessary since the learning rate is scaled to a very large value to compensate for the large batch size. This warmup is complementary and can be incorporated into Knee schedule. For example, we do this for the BERTLARGE pretraining experiment, where a large 16k batch size was used.

## 5 Evaluation

In this section, we present an extensive empirical evaluation of Knee schedule on multiple models and datasets across various optimizers, and compare Knee schedule against the original hand-tuned learning rate baselines. We first provide an overview of our main results, followed by detailed experimental results. We then run further experiments to validate our wide-minima density hypothesis, as well as a sensitivity analysis of the seed learning rate for the Knee schedule. Note that, for completeness, we present a detailed comparison of Knee schedule with many other learning rate schedules in the literature, such as linear decay, cosine decay (Loshchilov and Hutter, 2016), and one-cycle (Smith, 2018), in Appendix A.

### 5.1 Experiments

We evaluate Knee schedule on multiple models and datasets spanning both vision and NLP problems. The training of these models spanned various optimizers, including SGD Momentum, Adam (Kingma and Ba, 2014a), RAdam (Liu et al., 2019) and LAMB (You et al., 2019). For all experiments, we used an out-of-the-box policy, where we only change the learning rate schedule without modifying anything else. We evaluate on multiple image datasets – ImageNet on Resnet-50 and Cifar-10 on Resnet-18 – as well as various NLP datasets – pretraining BERTLARGE on Wikipedia+BooksCorpus and fine-tuning it on SQuAD v1.1, and WMT’14 (EN-DE) and IWSLT’14 (DE-EN) on Transformers.

### 5.2 Results Overview

In all our experiments, we find that Knee schedule shows an improvement in test accuracy over the original hand-tuned learning rate baseline as well as various other learning rate schedules in the literature. Further, we also find that Knee schedule can achieve the same accuracy as the baseline with a much reduced training budget.

Table 4: We report the top-1 accuracy for ImageNet and Cifar-10, BLEU score for IWSLT’14 and WMT’14, and F1 score for BERT on SQuAD. All values are averaged over multiple runs for each experiment. Experiment details are mentioned in the individual sections of the experiments.

Experiment | Training Budget (epochs) | Knee Schedule | Knee Schedule (Fixed 50% explore) | Baseline | One-Cycle | Cosine Decay | Linear Decay
---|---|---|---|---|---|---|---
ImageNet | 90 | 76.71 | 76.58 | 75.87 | 75.39 | 76.41 | 76.54
Cifar-10 | 200 | 95.26 | 95.26 | 95.10 | 94.09 | 95.23 | 95.18
IWSLT | 50 | 35.53 | 35.23 | 34.97 | 34.77 | 35.21 | 34.97
WMT’14 | 70 | 27.53 | 27.41 | 27.29 | 27.19 | 27.35 | 27.29
BERTLARGE | 31250 (iters) | 91.51 | 91.51 | 91.34 | - | - | 91.34

Table 5: Shorter budget training: Test accuracy for all learning rate schedules tried in this paper, but trained with a shortened budget. We report the same metrics as Table 4. Knee schedule achieves the same accuracy as the baseline schedules using a much lower budget, saving precious GPU-hours.
Experiment | Shortened Training Budget (epochs) | Knee Schedule | One-Cycle | Cosine Decay | Linear Decay | Saving (V100 GPU-hours)
---|---|---|---|---|---|---
ImageNet | 50 | 75.92 | 75.36 | 75.71 | 75.82 | 27
Cifar-10 | 150 | 95.14 | 93.84 | 95.06 | 95.02 | 0.25
IWSLT | 35 | 35.08 | 34.43 | 34.46 | 34.16 | 0.75
WMT’14 | 30 | 27.28 | 26.80 | 26.95 | 26.77 | 80
BERTLARGE | 20854 (iters) | 91.29 | - | - | 90.64 | 1002

Table 6: Epochs required by different LR schedules to reach the target accuracy. The target accuracy is chosen based on Knee schedule’s results with a reduced budget.

Experiment | Target BLEU Score | Knee schedule | Cosine Decay | Linear Decay
---|---|---|---|---
IWSLT | 35.08 | 35 | 45 | 60
WMT’14 | 27.28 | 30 | 60 | 70

Table 4 shows the test accuracies of the various experiments when trained with the original budget, while Table 5 shows the results when trained with a reduced budget. As shown, for the original budget runs, Knee schedule improves on the test accuracies in all experiments. Note that in Knee schedule, the explore duration is a hyperparameter. To avoid tuning this hyperparameter, we experimented with a fixed 50% explore duration for the full budget runs. Even the fixed 50% explore Knee schedule outperforms all the other baselines. Also noteworthy is that Knee schedule is able to achieve the same test accuracies as the baseline’s full budget runs with a much lower training budget, saving precious GPU cycles (Table 5).

While the differences in accuracy values between the various schedules might appear deceptively small in absolute terms, achieving these gains requires a large amount of compute. For example, the number of epochs needed by each scheme to reach the target BLEU score for IWSLT’14 DE-EN and WMT’14 EN-DE with the Transformer network is shown in Table 6. One can see that Knee schedule is significantly more efficient compared to, say, cosine decay, which takes 100% more training time to achieve the same accuracy for WMT’14 EN-DE. Thus, the accuracy and/or compute gains achieved by Knee schedule are significant.

A summary of our main experimental results is as follows:

1. ImageNet on Resnet-50: We show an absolute gain of 0.8% in top-1 accuracy against the competitive step schedule baseline for this model. Also, Knee schedule can achieve the same accuracy as the baseline in $\sim$45% fewer training epochs.
2. BERTLARGE pre-training on Wikipedia+BooksCorpus dataset: Compared to the baseline of You et al. (2019), we improve the F1 score on the SQuAD v1.1 fine-tuning task by 0.2% (91.51 compared to 91.34). Also, we were able to achieve similar accuracy to the baseline in 33% fewer training steps (a saving of $\sim$1002 V100 GPU-hours!).
3. WMT’14 and IWSLT machine translation on Transformers: Compared to competitive baselines, we were able to improve the BLEU scores by 0.24 and 0.56 points for the two tasks. Moreover, Knee schedule was able to achieve the same accuracy as the baselines in 57% and 30% less training time, respectively.
4. State-of-the-art (SOTA) results: We also attain state-of-the-art results on the IWSLT’14 (DE-EN) machine translation dataset by simply replacing the learning rate schedule of the current SOTA model (Shen et al., 2020) with Knee. We were able to improve the BLEU score by 0.18, reaching a new SOTA score of 37.78. Moreover, Knee can achieve the current SOTA baseline value in 30% less training time.

### 5.3 Detailed Results

We now describe each of our main experimental results in detail.
#### 5.3.1 ImageNet Image Classification on Resnet-50

We train the Resnet-50 network (He et al., 2016), which has 25 million parameters, on the ImageNet dataset (Russakovsky et al., 2015) with a batch size of 256 and a seed learning rate of 0.1. Random cropping and random horizontal flipping augmentations were applied to the training dataset. We use the SGD optimizer with a momentum of 0.9 and weight decay of $1e^{-4}$. For the baseline runs, we used the standard hand-tuned step learning rate schedule of 0.1, 0.01 and 0.001 for 30 epochs each. For Knee schedule we used a seed learning rate of 0.1 (same as the baseline). We trained with the original budget of 90 epochs as well as with a reduced budget of 50 epochs, using 30 explore epochs for both experiments (we used the open-source implementation at https://github.com/cybertronai/imagenet18_old).

Table 7 shows the training loss and test accuracies for our experiments. Knee schedule comfortably beats the test accuracy of the baseline in the full budget run (with absolute gains of 0.8% and 0.4% in top-1 and top-5 accuracy, respectively), while meeting the baseline accuracy even with a much shorter budget. The fact that the baseline schedule takes almost $80\%$ more training time than Knee schedule for the same test accuracy shows the effectiveness of our Explore-Exploit scheme. See Figure 6 in Appendix B for training curves.

Table 7: ImageNet on Resnet-50 results. We report mean (stddev) over 3 runs.

LR Schedule | Test Top 1 Acc. | Test Top 5 Acc. | Training Loss | Training Epochs
---|---|---|---|---
Baseline | 75.87 (0.035) | 92.90 (0.015) | 0.74 (1e-3) | 90
Knee | 76.71 (0.097) | 93.32 (0.031) | 0.79 (1e-3) | 90
Knee (short budget) | 75.92 (0.11) | 92.90 (0.085) | 0.90 (3e-3) | 50

#### 5.3.2 Cifar-10 Image Classification on Resnet-18

We train the Resnet-18 network (He et al., 2016), which has around 11 million parameters, on the Cifar-10 dataset (Krizhevsky et al., 2009). The SGD optimizer is used with a momentum of 0.9 and weight decay of $5e^{-4}$. Random cropping and random horizontal flipping augmentations were applied to the training dataset (we used the open-source implementation at https://github.com/kuangliu/pytorch-cifar). For the baseline, we used the hand-tuned step learning rate schedule of 0.1, 0.01 and 0.001 for 100, 50 and 50 epochs, respectively. With Knee schedule, we train the network with the original budget of 200 epochs, as well as a reduced budget of 150 epochs. We used 100 explore epochs for both runs, and a seed learning rate of 0.1 (same as the baseline).

Table 8 shows the training loss and test accuracies for the experiments. Knee schedule beats the test accuracy of the baseline in the full budget run, while meeting the baseline test accuracy in $25\%$ less budget. Refer to Figure 7 in Appendix B for detailed comparisons of training loss, test accuracy, and learning rate.

Table 8: Training loss and test accuracy for Cifar-10 on Resnet-18. We report mean (stddev) over 7 runs.

LR Schedule | Test Accuracy | Training Loss | Training Epochs
---|---|---|---
Baseline | 95.10 (0.14) | 0.002 (1e-4) | 200 epochs
Knee | 95.26 (0.11) | 0.002 (1e-4) | 200 epochs
Knee (short budget) | 95.14 (0.18) | 0.004 (3e-4) | 150 epochs
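As a usage illustration, the sketch below wires the `knee_schedule` helper sketched in Section 4 into the Cifar-10 setup described above (seed learning rate 0.1, momentum 0.9, weight decay $5e^{-4}$, 100 explore epochs out of 200). It is a simplified stand-in, not the reference training code: torchvision’s ResNet-18 and a bare data pipeline are used for brevity.

```python
# Usage sketch: Knee schedule with the Cifar-10 hyper-parameters above.
# Simplified stand-in for the reference implementation, not the paper's code.
import torch
import torchvision
import torchvision.transforms as T

train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.CIFAR10("data", train=True, download=True,
                                 transform=T.ToTensor()),
    batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,   # seed learning rate
                            momentum=0.9, weight_decay=5e-4)
# knee_schedule is the helper sketched in Section 4; stepping once per epoch,
# so "steps" are epochs here: 100 explore epochs out of 200 in total.
scheduler = knee_schedule(optimizer, explore_steps=100, total_steps=200)

for epoch in range(200):
    for images, labels in train_loader:
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```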
#### 5.3.3 BERTLARGE Pre-training

We pretrain BERTLARGE on the Wikipedia+BooksCorpus dataset with the LAMB optimizer (You et al., 2019). BERTLARGE has around 330 million parameters, and the pre-training is divided into two phases with different sequence lengths. The first phase consists of 90% of the steps with a sequence length of 128, and the second phase consists of the remaining 10% of the steps with a sequence length of 512 (Devlin et al., 2018). We used a batch size of 16384 in both phases of training (we used the open-source implementation at https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT).

We use the same training budget of 31250 steps mentioned in You et al. (2019). We also train the model on a shortened training budget of two-thirds of the original steps (20854 steps). Since large-batch training requires learning rate warmup (see Goyal et al. (2017)), we incorporate it into the Knee schedule by first doing a warmup of 10%, as suggested in You et al. (2019), followed by the explore-exploit phases. We used an explore of 50% of the total steps available for both phases of BERT training. For the baseline, we use the warmup (10%) + linear decay (90%) schedule (You et al., 2019; Devlin et al., 2018).

The pre-trained models are evaluated on the SQuAD v1.1 dataset (Rajpurkar et al., 2016) by fine-tuning on the dataset for 2 epochs. See Table 9 for the results. For the full budget run, Knee schedule improves the baseline by 0.2%, while for the reduced budget we achieved similar fine-tuning accuracy to the baseline. The baseline schedule achieves a much lower accuracy with shorter budget training, showing the efficacy of Knee schedule. BERT pre-training is extremely compute-intensive and takes around 47 hours on 64 V100 GPUs (3008 V100 GPU-hrs) on cloud VMs. The reduced budget amounts to a saving of approximately 1002 V100 GPU-hours!

Table 9: BERTLARGE results. We report the pre-training train loss, and the test F1 accuracy on SQuAD v1.1 after fine-tuning. See Figure 9 in Appendix B for training curves.

LR Schedule | F1 score on SQuAD v1.1 | Training loss | Total Training Steps
---|---|---|---
Knee | 91.51 | 1.248 | 31250
Baseline (You et al., 2019) | 91.34 | - | 31250
Baseline (short budget) | 90.64 | 1.336 | 20854
Knee (short budget) | 91.29 | 1.275 | 20854

#### 5.3.4 Machine Translation on Transformer Network with WMT’14 and IWSLT

In the second NLP task, we train the Transformer (base model) (Vaswani et al., 2017) on the IWSLT’14 (DE-EN) (Cettolo et al., 2014) and WMT’14 (EN-DE) (Bojar et al., 2014) datasets with the RAdam (Liu et al., 2019) optimizer.

##### WMT’14 (EN-DE):

We use the default implementation provided by the fairseq package (Ott et al., 2019; https://github.com/pytorch/fairseq). We train the TransformerBASE model (Vaswani et al., 2017), which has around 86 million parameters, on the WMT’14 (EN-DE) dataset and use the RAdam (Liu et al., 2019) optimizer with $\beta_{1}$ of 0.9 and $\beta_{2}$ of 0.999. Label-smoothed cross entropy was used as the objective function with an uncertainty (label smoothing) of 0.1. A dropout of 0.1, clipping norm of 25 and weight decay of $1e^{-4}$ are used. Each training batch contains approximately 30000 tokens.

The baseline schedule uses a linear decay for 70 epochs (Liu et al., 2019). With Knee schedule, we trained with the original budget of 70 epochs, as well as a reduced budget of 30 epochs. We used 50 and 25 explore epochs for the two runs, respectively, and a seed learning rate of $3e^{-4}$ for both Knee schedule and the baseline. In all cases we use the model checkpoint with the lowest loss on the validation set for computing BLEU scores on the test set. Table 10 shows the training loss and test accuracy averaged over 3 runs.
Knee schedule improves the test BLEU score of the baseline in the full budget run by 0.24 points. In the shorter budget run, Knee schedule matches the test accuracy of the baseline while taking $57\%$ less training time (a saving of 80 V100 GPU-hours!). See Figure 10 in Appendix B for training curves.

##### IWSLT’14 (DE-EN):

For IWSLT’14 (DE-EN) we use the same configuration as WMT’14 (EN-DE), except for a dropout of 0.3, following fairseq’s out-of-the-box implementation. Each training batch contains approximately 4000 tokens. The baseline schedule uses a linear decay for 50 epochs (Liu et al., 2019). With Knee schedule, we trained with the original budget of 50 epochs, as well as a reduced budget of 35 epochs. We used 40 and 30 explore epochs for the two runs, respectively, and a seed learning rate of $3e^{-4}$ for both Knee schedule and the baseline. In all cases we use the model checkpoint with the lowest loss on the validation set for computing BLEU scores on the test set.

Knee schedule improves the baseline test BLEU score by 0.56 points in the full budget run. In the shorter budget run, Knee schedule matches the test accuracy of the baseline schedule while taking $30\%$ less training time. See Figure 11 in Appendix B for training curves.

Table 10: Results for WMT’14 (EN-DE) on Transformer networks. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report mean (stdev) over 3 runs.

LR Schedule | Test BLEU Score | Train Perplexity | Validation Perplexity | Training Epochs
---|---|---|---|---
Baseline | 27.29 (0.06) | 3.87 (0.017) | 4.89 (0.02) | 70
Knee | 27.53 (0.12) | 3.89 (0.017) | 4.87 (0.006) | 70
Knee (short budget) | 27.28 (0.17) | 4.31 (0.02) | 4.92 (0.007) | 30

Table 11: Training perplexity, validation perplexity and test BLEU scores for IWSLT on Transformer networks. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.

LR Schedule | Test BLEU Score | Train Perplexity | Validation Perplexity | Training Epochs
---|---|---|---|---
Baseline | 34.97 (0.035) | 3.36 (0.001) | 4.91 (0.035) | 50
Knee | 35.53 (0.06) | 3.00 (0.044) | 4.86 (0.02) | 50
Knee (short budget) | 35.08 (0.12) | 3.58 (0.049) | 4.90 (0.063) | 35

#### 5.3.5 SQuAD-v1.1 fine-tuning on BERTBASE

We also evaluate Knee schedule on the task of fine-tuning the BERTBASE model (Devlin et al., 2018) on SQuAD v1.1 (Rajpurkar et al., 2016) with the Adam optimizer (Kingma and Ba, 2014b) (we used the implementation at https://github.com/huggingface/transformers). BERT fine-tuning is prone to overfitting because of the huge model size compared to the small fine-tuning dataset, and is typically run for only a few epochs. For the baseline, we use the linear decay schedule mentioned in Devlin et al. (2018). We use a seed learning rate of $3e^{-5}$ and train for 2 epochs. For Knee schedule, we train the network with 1 explore epoch with the same seed learning rate of $3e^{-5}$.

Table 12 shows our results over 3 runs. We achieve a mean EM score of 81.4, compared to the baseline’s 80.9, a 0.5% absolute improvement. We don’t do a short budget run for this example, as the full budget is just 2 epochs. Please refer to Figure 14 in Appendix B for the training loss, test accuracy and learning rate curves.

Table 12: SQuAD fine-tuning on BERTBASE. We report the average training loss, and average test EM, F1 scores over 3 runs.
LR Schedule | EM | F1 | Train Loss | Training Epochs
---|---|---|---|---
Baseline | 80.89 (0.15) | 88.38 (0.032) | 1.0003 (0.004) | 2
Knee schedule | 81.38 (0.02) | 88.66 (0.045) | 1.003 (0.002) | 2

#### 5.3.6 State of the Art Result

To further demonstrate the effectiveness of Knee schedule, we took a recent high-performing model, Cutoff (Shen et al., 2020; we used the code available at https://github.com/dinghanshen/Cutoff), which had reported state-of-the-art accuracy on the IWSLT’14 (DE-EN) dataset. They reported a BLEU score of 37.6 when trained with an inverse square root learning rate schedule for 100 epochs, with the first 6000 steps allocated for warmup. We simply retrained the model with our Knee schedule, and achieved a new SOTA BLEU score of 37.78 (an absolute increase of 0.18). See Table 13 for the BLEU scores and the training and validation perplexities. We also show that Knee schedule can train the model in 30% less training time (70 epochs), while achieving a slightly better BLEU score of 37.66 compared to the 100-epoch baseline. The baseline schedule, when run for 70 epochs, achieves a much worse accuracy of 37.31. For both the full budget (100 epochs) and the short budget (70 epochs) Knee runs, we choose 50% of the total training epochs as explore epochs. We also perform warmup for the same number of steps as the baseline. For all runs (Knee and baseline), we report the BLEU score obtained by averaging the last 5 checkpoints and computing the score on the test set. See Figures 12 and 13 in Appendix B for training curves.

Table 13: Training perplexity, validation perplexity and test BLEU scores for IWSLT’14 DE-EN on Cutoff. The test BLEU scores are computed by averaging the last 5 checkpoints.

LR Schedule | Test BLEU Score | Train Perplexity | Validation Perplexity | Training Epochs
---|---|---|---|---
Inv. Sqrt | 37.60 | 3.46 | 4.24 | 100
Knee | 37.78 | 3.29 | 4.13 | 100
Inv. Sqrt (short budget) | 37.31 | 3.76 | 4.29 | 70
Knee (short budget) | 37.66 | 3.48 | 4.18 | 70

### 5.4 Hypothesis Validation with Knee schedule on Language Tasks

For validating our hypothesis on the density of wide minima vs narrow minima, we did multiple experiments on vision tasks, most of which were discussed in Section 3. To summarize, in Figures 3 and 4, we showed that for Cifar-10 on Resnet-18, as the number of explore steps increases, the distributions of minima sharpness and test accuracy tighten and shift towards wider minima and better accuracy, respectively.

Table 14: IWSLT’14 (DE-EN) on the Transformer network trained with the Knee schedule. The explore duration is varied, while keeping the total training budget fixed at 50 epochs. We report averages over 3 runs.

Explore Epochs | Test BLEU score | Training Perplexity
---|---|---
5 | 34.93 | 3.29
10 | 35.02 | 3.22
15 | 35.08 | 3.11
20 | 35.10 | 3.08
25 | 35.23 | 3.02
30 | 35.28 | 2.99
40 | 35.53 | 3.00

We now perform similar experiments on the IWSLT’14 German-to-English dataset (Cettolo et al., 2014) trained on Transformer networks (Vaswani et al., 2017) to demonstrate that our hypothesis holds even on a completely different NLP dataset and network architecture. We train with the Knee schedule for a total budget of 50 epochs with an explore learning rate of $3e^{-4}$, but keep varying the number of explore epochs. As shown in Table 14, the test BLEU score increases as we increase the number of explore epochs. Further, we found that among multiple trials, a 20-epoch explore run had a high BLEU score of 35.29, suggesting that the run got lucky. Thus, these results on the IWSLT’14 (DE-EN) dataset add more evidence to the wide-minima density hypothesis.
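The protocol behind Table 14 is simply a sweep over the explore duration with everything else held fixed. A sketch of that loop is shown below; `train_and_eval_iwslt` is a hypothetical helper standing in for the fairseq training-plus-BLEU-evaluation pipeline used in these experiments.

```python
# Sketch of the Table 14 sweep: fixed 50-epoch budget on IWSLT'14 (DE-EN),
# varying only the Knee schedule's explore duration. `train_and_eval_iwslt`
# is a hypothetical helper wrapping training and BLEU evaluation.
TOTAL_EPOCHS = 50
SEED_LR = 3e-4   # explore learning rate used in these runs

results = {}
for explore in (5, 10, 15, 20, 25, 30, 40):
    results[explore] = train_and_eval_iwslt(explore_epochs=explore,
                                            total_epochs=TOTAL_EPOCHS,
                                            seed_lr=SEED_LR)
for explore, bleu in sorted(results.items()):
    print(f"explore={explore:2d} epochs -> BLEU {bleu:.2f}")
```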
### 5.5 Learning Rate Sensitivity for Knee schedule

We performed a sensitivity analysis of the starting learning rate, referred to as the seed learning rate, for Knee schedule. We trained Cifar-10 on Resnet-18 with the Knee schedule for a shortened budget of 150 epochs, starting at different seed learning rates. For each experiment, we do a simple linear search to find the best explore duration. The test accuracies and optimal explore durations for the different seed learning rate choices are shown in Table 15. As shown, the seed learning rate can impact the final accuracy, but Knee schedule is not highly sensitive to it. In fact, we can achieve the target accuracy of 95.1 with multiple seed learning rates of 0.05, 0.075, 0.0875 and 0.115, as compared to the original seed learning rate of 0.1, by tuning the number of explore epochs.

Another interesting observation is that the optimal explore duration varies inversely with the seed learning rate. Since a bigger learning rate has a higher probability of escaping narrow minima compared to a lower learning rate, it would, on average, require fewer steps to land in a wide minimum. Thus, larger learning rates can explore faster, and spend more time in the exploit phase to go deeper into the wide minimum. This observation is thus consistent with our hypothesis and further corroborates it. We also note that by tuning both the seed learning rate and explore duration, we can achieve the twin objectives of higher accuracy and shorter training time; e.g., here we are able to achieve an accuracy of 95.34 in 150 epochs (seed learning rate 0.075), compared to 95.1 achieved by the baseline schedule in 200 epochs.

Table 15: Seed learning rate sensitivity analysis. Cifar-10 on Resnet-18 trained for 150 epochs with Knee schedule. We vary the seed learning rate and explore epochs to get the best test accuracy for the particular setting. We report averages over 3 runs.

Seed LR | Test Accuracy | Optimal Explore Epochs
---|---|---
0.03 | 95.07 | 120
0.05 | 95.12 | 120
0.0625 | 95.15 | 120
0.075 | 95.34 | 100
0.0875 | 95.22 | 100
0.1 | 95.14 | 100
0.115 | 95.20 | 60
0.125 | 95.06 | 60
0.15 | 95.04 | 30

## 6 Conclusions

In this paper, we make the observation that an initial explore phase with a high learning rate is essential for good generalization of DNNs. Further, we find that a minimum explore duration is required even if the training loss stops improving much earlier. We explain this observation via our hypothesis that in the DNN loss landscape, the density of wide minima is significantly lower than that of narrow minima. Motivated by this hypothesis, we present an Explore-Exploit based learning rate schedule, called the Knee schedule. We perform an extensive evaluation of Knee schedule on multiple models and datasets. In all experiments, the Knee schedule outperforms prior hand-tuned baselines, including achieving SOTA test accuracies, when trained with the original training budget, and achieves the same test accuracy as the baseline when trained with a much shorter budget.

## 7 Acknowledgement

We would like to thank Sanjith Athlur for his help in setting up the VM cluster for large training runs and Harshay Shah for helpful discussions on minima width computation.

## References

* Arora et al. (2018) Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach.
_arXiv preprint arXiv:1802.05296_, 2018.
* Baldassi et al. (2019) Carlo Baldassi, Fabrizio Pittorino, and Riccardo Zecchina. Shaping the learning landscape in neural networks around wide flat minima. _CoRR_, abs/1905.07833, 2019. URL http://arxiv.org/abs/1905.07833.
* Baldassi et al. (2020) Carlo Baldassi, Fabrizio Pittorino, and Riccardo Zecchina. Shaping the learning landscape in neural networks around wide flat minima. _Proceedings of the National Academy of Sciences_, 117(1):161–170, 2020.
* Bojar et al. (2014) Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. Findings of the 2014 workshop on statistical machine translation. In _Proceedings of the Ninth Workshop on Statistical Machine Translation_, pages 12–58, Baltimore, Maryland, USA, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W14/W14-3302.
* Byrd et al. (2003) Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. _SIAM Journal on Scientific Computing_, 16, 2003. doi: 10.1137/0916069.
* Cettolo et al. (2014) Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th IWSLT evaluation campaign, IWSLT 2014. In _Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam_, page 57, 2014.
* Chaudhari et al. (2019) Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-SGD: Biasing gradient descent into wide valleys. _Journal of Statistical Mechanics: Theory and Experiment_, 2019(12):124018, 2019.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* Dinh et al. (2017) Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_, pages 1019–1028. JMLR.org, 2017.
* Draxler et al. (2018) Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred Hamprecht. Essentially no barriers in neural network energy landscape. In _International Conference on Machine Learning_, pages 1309–1318. PMLR, 2018.
* Freeman and Bruna (2016) C Daniel Freeman and Joan Bruna. Topology and geometry of half-rectified network optimization. _arXiv preprint arXiv:1611.01540_, 2016.
* Garipov et al. (2018) Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, and Andrew Gordon Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. _arXiv preprint arXiv:1802.10026_, 2018.
* Golatkar et al. (2019) Aditya Golatkar, Alessandro Achille, and Stefano Soatto. Time matters in regularizing deep networks: Weight decay and data augmentation affect early learning dynamics, matter little near convergence. _arXiv preprint arXiv:1905.13277_, 2019.
* Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. _arXiv preprint arXiv:1706.02677_, 2017.
* Guiroy et al. (2019) Simon Guiroy, Vikas Verma, and Christopher Pal.
Towards understanding generalization in gradient-based meta-learning. _arXiv preprint arXiv:1907.07287_, 2019.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 770–778, 2016.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. _Neural Computation_, 9(1):1–42, 1997.
* Jastrzebski et al. (2017) Stanisław Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in SGD. _arXiv preprint arXiv:1711.04623_, 2017.
* Jastrzebski et al. (2019) Stanisław Jastrzebski, Zachary Kenton, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. On the relation between the sharpest directions of DNN loss and the SGD step length. In _International Conference on Learning Representations_, 2019. URL https://openreview.net/forum?id=SkgEaj05t7.
* Kawaguchi (2016) Kenji Kawaguchi. Deep learning without poor local minima. In _Advances in Neural Information Processing Systems_, pages 586–594, 2016.
* Keskar et al. (2016) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. _arXiv preprint arXiv:1609.04836_, 2016.
* Kingma and Ba (2014a) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014a.
* Kingma and Ba (2014b) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014b.
* Krizhevsky et al. (2009) Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
* Kunstner et al. (2019) Frederik Kunstner, Lukas Balles, and Philipp Hennig. Limitations of the empirical Fisher approximation. _arXiv preprint arXiv:1905.12558_, 2019.
* Li et al. (2019) Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. _arXiv preprint arXiv:1907.04595_, 2019.
* Liu et al. (2019) Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. _arXiv preprint arXiv:1908.03265_, 2019.
* Loshchilov and Hutter (2016) Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_, 2016.
* McCandlish et al. (2018) Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training. _arXiv preprint arXiv:1812.06162_, 2018.
* Ott et al. (2019) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In _Proceedings of NAACL-HLT 2019: Demonstrations_, 2019.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. _arXiv preprint arXiv:1606.05250_, 2016.
* Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. _International Journal of Computer Vision_, 115(3):211–252, 2015.
* Sagun et al. (2017) Levent Sagun, Utku Evci, V Ugur Güney, Yann Dauphin, and Léon Bottou. Empirical analysis of the Hessian of over-parametrized neural networks. ICLR 2018 workshop contribution. _arXiv preprint arXiv:1706.04454_, 2017.
* Shallue et al. (2018) Christopher J Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E Dahl. Measuring the effects of data parallelism on neural network training. _arXiv preprint arXiv:1811.03600_, 2018.
* Shen et al. (2020) Dinghan Shen, Mingzhi Zheng, Yelong Shen, Yanru Qu, and Weizhu Chen. A simple but tough-to-beat data augmentation approach for natural language understanding and generation. _arXiv preprint arXiv:2009.13818_, 2020.
* Smith (2017) Leslie N Smith. Cyclical learning rates for training neural networks. In _2017 IEEE Winter Conference on Applications of Computer Vision (WACV)_, pages 464–472. IEEE, 2017.
* Smith (2018) Leslie N Smith. A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay. _arXiv preprint arXiv:1803.09820_, 2018.
* Sokol and Park (2018) Piotr A Sokol and Il Memming Park. Information geometry of orthogonal initializations and training. _arXiv preprint arXiv:1810.03785_, 2018.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in Neural Information Processing Systems_, pages 5998–6008, 2017.
* Wang et al. (2018) Huan Wang, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. Identifying generalization properties in neural networks. _arXiv preprint arXiv:1809.07402_, 2018.
* Wu et al. (2018) Lei Wu, Chao Ma, and E Weinan. How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective. In _Advances in Neural Information Processing Systems_, pages 8279–8288, 2018.
* Xie et al. (2020) Zeke Xie, Issei Sato, and Masashi Sugiyama. A diffusion theory for deep learning dynamics: Stochastic gradient descent exponentially favors flat minima. _arXiv e-prints_, pages arXiv–2002, 2020.
* Yoshida and Miyato (2017) Yuichi Yoshida and Takeru Miyato. Spectral norm regularization for improving the generalizability of deep learning. _arXiv preprint arXiv:1705.10941_, 2017.
* You et al. (2019) Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. In _International Conference on Learning Representations_, 2019.

## A Comparisons with Other Baseline Learning Rate Schedules

In this section we compare Knee schedule against several other learning rate schedules – one-cycle, linear decay and cosine decay.

One-Cycle: The one-cycle learning rate schedule was proposed in Smith (2018) (also see Smith (2017)). This schedule first chooses a maximum learning rate based on a learning rate range test. The learning rate range test starts from a small learning rate and keeps increasing the learning rate until the loss starts exploding (see Figure 5). Smith (2018) suggests that the maximum learning rate should be chosen to be a bit before the minimum, in a region where the loss is still decreasing. There is some subjectivity in making this choice, although some blogs and libraries (see, e.g., https://towardsdatascience.com/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6 and https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html; also see https://docs.fast.ai/callbacks.lr_finder.html and https://docs.fast.ai/callbacks.one_cycle.html) suggest using a learning rate one order lower than the one at the minimum. We go with this choice for all our runs. Once the maximum learning rate is chosen, the one-cycle schedule proceeds as follows. The learning rate starts at a specified fraction of the maximum learning rate (see div_factor in https://docs.fast.ai/callbacks.one_cycle.html; we chose the fraction to be 0.1 in our experiments) and is increased linearly to the maximum learning rate for 45 percent of the training budget, then decreased linearly for the next 45 percent. For the final 10 percent, the learning rate is reduced by a large factor (we chose a factor of 10). We used an open-source implementation (https://github.com/nachiket273/One_Cycle_Policy) for our experiments.
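For reference, the one-cycle variant we compare against can be written as a simple multiplier on the maximum learning rate, mirroring the 45/45/10 split and factors described above. This is our sketch of that configuration, not the code of any particular library.

```python
# Sketch of the one-cycle multiplier as configured above: linear ramp from
# max_lr/div_factor up to max_lr over the first 45% of training, a linear
# ramp back down over the next 45%, then a further 10x drop for the last 10%.
def one_cycle_factor(step, total_steps, div_factor=10.0):
    t = step / total_steps
    lo = 1.0 / div_factor                 # starting fraction of max_lr
    if t < 0.45:                          # ramp up
        return lo + (1.0 - lo) * (t / 0.45)
    if t < 0.90:                          # ramp down
        return 1.0 - (1.0 - lo) * ((t - 0.45) / 0.45)
    return lo / 10.0                      # final 10%: reduce by a factor of 10

# The learning rate at a given step is max_lr * one_cycle_factor(step, total).
```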
Figure 5: Learning rate range tests for selecting the maximum learning rate, with panels (a) CIFAR-10, (b) IWSLT’14 DE-EN, (c) WMT’14 EN-DE, and (d) ImageNet. A good choice is a learning rate a bit before the minimum, in a region where the loss is still decreasing.

Linear Decay: The linear decay learning rate schedule simply decays the learning rate linearly to zero, starting from a seed learning rate.

Cosine Decay: The cosine decay learning rate schedule decays the learning rate to zero following a cosine curve, starting from a seed learning rate.

### A.1 Cifar-10

Figure 5(a) shows the learning rate range test for Cifar-10 with the Resnet-18 network. The minimum occurs around a learning rate of 0.09, and we choose $9e^{-3}$ as the maximum learning rate for the One-Cycle runs. For the linear and cosine decay schedules, we start with a seed learning rate of 0.1, as used in the standard baselines. The training loss and test accuracy for the various schedules are shown in Table 16 for the full budget runs (200 epochs), and in Table 17 for the short budget runs (150 epochs).

Table 16: Cifar-10 on Resnet-18 full budget training (200 epochs): Training loss and test accuracy for more learning rate schedules. We report the mean and standard deviation over 7 runs.

LR Schedule | Test Accuracy | Train Loss
---|---|---
One-Cycle | 94.08 (0.07) | 0.0041 (6e-5)
Cosine Decay | 95.23 (0.11) | 0.0023 (9e-5)
Linear Decay | 95.18 (0.15) | 0.0018 (7e-5)
Knee schedule | 95.26 (0.11) | 0.0023 (1e-4)

Table 17: Cifar-10 on Resnet-18 short budget training (150 epochs): Training loss and test accuracy for more learning rate schedules. We report the mean and standard deviation over 7 runs.

LR Schedule | Test Accuracy | Train Loss
---|---|---
One-Cycle | 93.84 (0.082) | 0.0052 (7e-5)
Cosine Decay | 95.06 (0.16) | 0.0030 (2e-4)
Linear Decay | 95.02 (0.10) | 0.0021 (1e-4)
Knee schedule | 95.14 (0.18) | 0.0044 (3e-4)

### A.2 ImageNet

Figure 5(d) shows the learning rate range test for ImageNet with the Resnet-50 network. The minimum occurs around a learning rate of 2.16, and we choose $0.216$ as the maximum learning rate for the One-Cycle runs. For the linear and cosine decay schedules, we start with a seed learning rate of 0.1, as used in the standard baselines. The training loss and test accuracy for the various schedules are shown in Table 18 for the full budget runs (90 epochs), and in Table 19 for the short budget runs (50 epochs).
### A.1 Cifar-10

Figure 5(a) shows the learning rate range test for Cifar-10 with the Resnet-18 network. The minimum occurs around a learning rate of 0.09, and we choose $9e^{-3}$ as the maximum learning rate for the One-Cycle runs. For linear, cosine decay schedules we start with a seed learning rate of 0.1 as used in the standard baselines. The training loss and test accuracy for the various schedules are shown in Table 16 for the full budget runs (200 epochs), and in Table 17 for the short budget runs (150 epochs).

Table 16: Cifar-10 on Resnet-18 full budget training (200 epochs): Training loss and Test accuracy for more learning rate schedules. We report the mean and standard deviation over 7 runs.

LR Schedule | Test Accuracy | Train Loss
---|---|---
One-Cycle | 94.08 (0.07) | 0.0041 (6e-5)
Cosine Decay | 95.23 (0.11) | 0.0023 (9e-5)
Linear Decay | 95.18 (0.15) | 0.0018 (7e-5)
Knee schedule | 95.26 (0.11) | 0.0023 (1e-4)

Table 17: Cifar-10 on Resnet-18 short budget training (150 epochs): Training loss and Test accuracy for more learning rate schedules. We report the mean and standard deviation over 7 runs.

LR Schedule | Test Accuracy | Train Loss
---|---|---
One-Cycle | 93.84 (0.082) | 0.0052 (7e-5)
Cosine Decay | 95.06 (0.16) | 0.0030 (2e-4)
Linear Decay | 95.02 (0.10) | 0.0021 (1e-4)
Knee schedule | 95.14 (0.18) | 0.0044 (3e-4)

### A.2 ImageNet

Figure 5(d) shows the learning rate range test for ImageNet with the Resnet-50 network. The minimum occurs around a learning rate of 2.16, and we choose $0.216$ as the maximum learning rate for One-Cycle runs. For linear, cosine decay schedules we start with a seed learning rate of 0.1 as used in the standard baselines. The training loss and test accuracy for the various schedules are shown in Table 18 for the full budget runs (90 epochs), and in Table 19 for the short budget runs (50 epochs).

Table 18: ImageNet with ResNet-50 full budget training (90 epochs): Training loss, Test Top-1 and Test Top-5 for more learning rate schedules. We report the mean and standard deviation over 3 runs.

LR Schedule | Test Top-1 | Test Top-5 | Train Loss (av)
---|---|---|---
One Cycle | 75.39 (0.137) | 92.56 (0.040) | 0.96 (0.003)
Cosine Decay | 76.41 (0.212) | 93.28 (0.066) | 0.80 (0.002)
Linear decay | 76.54 (0.155) | 93.21 (0.051) | 0.75 (0.001)
Knee schedule | 76.71 (0.097) | 93.32 (0.031) | 0.79 (0.001)

Table 19: ImageNet with ResNet-50 short budget training (50 epochs): Training loss, Test Top-1 and Test Top-5 for more learning rate schedules. We report the mean and standard deviation over 3 runs.

LR Schedule | Test Top-1 | Test Top-5 | Train Loss (av)
---|---|---|---
One Cycle | 75.36 (0.096) | 92.53 (0.079) | 1.033 (0.004)
Cosine Decay | 75.71 (0.116) | 92.81 (0.033) | 0.96 (0.002)
Linear decay | 75.82 (0.080) | 92.84 (0.036) | 0.91 (0.002)
Knee schedule | 75.92 (0.11) | 92.90 (0.085) | 0.90 (0.003)

### A.3 WMT’14 EN-DE

Figure 5(c) shows the learning rate range test for WMT’14 EN-DE on the transformer networks. The minimum occurs near $1.25e^{-3}$. For the maximum learning rate, we choose $2.5e^{-4}$ for the default one-cycle policy. For linear, cosine decay schedules we start with a seed learning rate of $3e^{-4}$ as used in the standard baselines. The training, validation perplexity and BLEU scores for the various schedules are shown in Table 20 for the full budget runs (70 epochs), and in Table 21 for the short budget runs (30 epochs).

Table 20: WMT’14 (EN-DE) on Transformer networks full budget training (70 epochs): Training, validation perplexity and test BLEU scores for more learning rate schedules. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.

LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 27.19 (0.081) | 3.96 (0.014) | 4.95 (0.013)
Cosine Decay | 27.35 (0.09) | 3.87 (0.011) | 4.91 (0.008)
Linear Decay | 27.29 (0.06) | 3.87 (0.017) | 4.89 (0.02)
Knee schedule | 27.53 (0.12) | 3.89 (0.017) | 4.87 (0.006)

Table 21: WMT’14 (EN-DE) on Transformer networks short budget training (30 epochs): Training, validation perplexity and test BLEU scores for more learning rate schedules. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.

LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 26.80 (0.2) | 4.38 (0.017) | 5.02 (0.007)
Cosine Decay | 26.95 (0.23) | 4.32 (0.013) | 4.99 (0.011)
Linear Decay | 26.77 (0.12) | 4.36 (0.092) | 5.02 (0.01)
Knee schedule | 27.28 (0.17) | 4.31 (0.02) | 4.92 (0.007)

### A.4 IWSLT’14 DE-EN

Figure 5(b) shows the learning rate range test for IWSLT on the transformer networks. The minimum occurs near $2.5e^{-3}$. For the maximum learning rate, we choose $2.5e^{-4}$ for the default one-cycle policy. For linear, cosine decay schedules we start with a seed learning rate of $3e^{-4}$ as used in the standard baselines. The training, validation perplexity and BLEU scores for the various schedules are shown in Table 22 for the full budget runs (50 epochs), and in Table 23 for the short budget runs (35 epochs).

Table 22: IWSLT’14 (DE-EN) on Transformer networks full budget training (50 epochs): Training, validation perplexity and test BLEU scores for more learning rate schedules.
The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.

LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 34.77 (0.064) | 3.68 (0.009) | 4.97 (0.010)
Cosine Decay | 35.21 (0.063) | 3.08 (0.004) | 4.88 (0.014)
Linear Decay | 34.97 (0.035) | 3.36 (0.001) | 4.92 (0.035)
Knee schedule | 35.53 (0.06) | 3.00 (0.044) | 4.86 (0.02)

Table 23: IWSLT’14 (DE-EN) on Transformer networks short budget training (35 epochs): Training, validation perplexity and test BLEU scores for more learning rate schedules. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.

LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 34.43 (0.26) | 3.98 (0.028) | 5.09 (0.017)
Cosine Decay | 34.46 (0.33) | 3.86 (0.131) | 5.06 (0.106)
Linear Decay | 34.16 (0.28) | 4.11 (0.092) | 5.14 (0.066)
Knee schedule | 35.08 (0.12) | 3.58 (0.063) | 4.90 (0.049)

### A.5 SQuAD-v1.1 finetuning with BERTBASE

We choose $1e^{-5}$ as the maximum learning rate for One-Cycle runs, as the minimum occurs close to $1e^{-4}$. For linear, cosine decays we start with a seed learning rate of $3e^{-5}$ as used in standard baselines. Table 24 shows the average training loss, average test EM and F1 scores for the various schedules. We did not do a short budget training for this dataset, as the full budget is just 2 epochs.

Table 24: SQuAD-v1.1 fine-tuning on BERTBASE for more learning rate schedules. We report the average training loss, average test EM, F1 scores over 3 runs.

LR Schedule | EM (av) | F1 (av) | Train Loss (av)
---|---|---|---
One Cycle | 79.9 (0.17) | 87.8 (0.091) | 1.062 (0.003)
Cosine Decay | 81.31 (0.07) | 88.61 (0.040) | 0.999 (0.003)
Linear decay | 80.89 (0.15) | 88.38 (0.042) | 1.0003 (0.004)
Knee schedule | 81.38 (0.02) | 88.66 (0.045) | 1.003 (0.002)

## B Detailed Plots

Figure 6: ImageNet on Resnet-50 trained with Momentum. Shown are the training loss, top-1/top-5 test accuracy and learning rate as a function of epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is split into 3 parts to permit higher fidelity in the y-axis range.

Figure 7: Cifar-10 on Resnet-18 trained with Momentum. Shown are the training loss, test accuracy and learning rate as a function of epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is split into 3 parts to permit higher fidelity in the y-axis range.

Figure 8: BERTLARGE pretraining for batch size of 16k with the LAMB optimizer for the short budget runs. Shown are the training loss and learning rate as a function of steps, for the baseline scheme short budget (orange) vs the Knee schedule scheme short budget (blue). The plot is split into 2 parts to give a clear picture of the two phases of training Devlin et al. (2018). Note that even though the training loss curves look similar for the two runs, we see a significant gap in F1 score obtained when we fine-tune the model checkpoints on SQuAD-v1.1 Rajpurkar et al. (2016). See Table 9 for details.

LR Schedule | F1 - Trial 1 | F1 - Trial 2 | F1 - Trial 3 | F1 avg. | F1 max
---|---|---|---|---|---
Baseline (short budget) | 90.39 | 90.64 | 90.53 | 90.52 | 90.64
Knee schedule (short budget) | 91.22 | 91.29 | 91.18 | 91.23 | 91.29
Knee schedule (full budget) | 91.45 | 91.41 | 91.51 | 91.46 | 91.51

Figure 9: SQuAD fine-tuning on BERTLARGE.
We report F1 scores for 3 different trials as well as the maximum and average values.

Figure 10: WMT’14 (EN-DE) on TransformerBASE network trained with RAdam. Shown are the training perplexity, validation perplexity and learning rate as a function of epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is split into 3 parts to permit higher fidelity in the y-axis range.

Figure 11: IWSLT’14 (DE-EN) on TransformerBASE network trained with RAdam. Shown are the training perplexity, validation perplexity and learning rate as a function of epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is split into 3 parts to permit higher fidelity in the y-axis range.

Figure 12: IWSLT’14 (DE-EN) on the SOTA model Cutoff (Shen et al., 2020), trained with Adam. Shown are the training perplexity, validation perplexity and learning rate as a function of epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue).

Figure 13: IWSLT’14 (DE-EN) on the SOTA model Cutoff (Shen et al., 2020), trained with Adam with a reduced training budget of 70 epochs. Shown are the training perplexity, validation perplexity and learning rate as a function of epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue).

Figure 14: SQuAD-v1.1 fine-tuning on BERTBASE trained with Adam. Shown are the training loss, test EM score, and learning rate as a function of epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is split into 2 parts to permit higher fidelity in the y-axis range. It is clear that with Knee schedule the network starts to overfit after the 2nd epoch: the training loss continues to go down, but generalization suffers. We saw similar behavior with different seeds, and thus need to train with Knee schedule for only 2 epochs.
# Overcoming the Weight Transport Problem via Spike-Timing-Dependent Weight Inference

Nasir Ahmad, Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands

Luca Ambrogioni, Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands

Marcel van Gerven, Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands

Abbreviations: STDWI, Spike-Timing-Dependent Weight Inference; LIF, Leaky Integrate-and-Fire; RDD, Regression Discontinuity Design.

###### Abstract

We propose a solution to the weight transport problem, which questions the biological plausibility of the backpropagation algorithm. We derive our method based upon a theoretical analysis of the (approximate) dynamics of leaky integrate-and-fire neurons. Our results demonstrate that the use of spike timing alone outcompetes existing biologically plausible methods for synaptic weight inference in spiking neural network models. Furthermore, our proposed method is more flexible, being applicable to any spiking neuron model, is conservative in the number of parameters required for implementation, and can be deployed in an online fashion with minimal computational overhead. These features, together with its biological plausibility, make it an attractive mechanism for weight inference at single synapses.

###### keywords: Weight Transport Problem, Biologically Plausible Learning, Spiking Neural Network

## 1 Introduction

Backpropagation of error is a successful approach for training rate-based neural network models [1, 2]. However, since its inception it has been criticised for its lack of biological plausibility [3, 4]. In particular, in order to update individual synaptic connection weights within a network, information is required about distant error signals and the weights of other synaptic connections of the network – information which is not available locally to the synapse. However, backpropagation’s flexibility, its unrelenting success in application-based research, and most significantly its capacity for modelling and reproducing neural response statistics have contributed to a recent re-examination of its potential role and plausibility in neural systems [5, 6, 7, 8, 9].

A number of attempts have been made to explain mechanisms by which backpropagation’s implausibilities can be addressed. These can be divided into methods which propose alternative implementations of backpropagation, namely energy-based and dynamical systems methods which converge to backpropagation of error [10, 11, 12] (for an overview, see [6]), and methods which show that components which are considered implausible can be approximated using alternative and plausible computations [13, 14, 15, 16, 17]. We focus on the latter approaches in this study.

Figure 1: The weight transport problem in backpropagation of error. A. The computations involved in the forward pass of an example feedforward neural network model. B. The backpropagation of error method. Specifically, the derivative of the loss function can be computed with respect to each weight matrix in our example network. Observe that the derivative of the loss function with respect to a weight matrix ($W_{1}$) deep in the network depends upon the weight matrices in the higher layers ($W_{2}$). C.
Backpropagation of error requires a copy of the weights of the forward network.

One particularly difficult-to-reconcile component of backpropagation is the need to propagate error signals backwards through a network (see Fig. 1). This requires that the backward-propagating error signals between layers of neurons are weighted according to the forward synaptic connection weights, leading to a situation in which feedback weight matrices are copies of the feedforward matrices. This duplication of weights has been identified as particularly troubling in terms of a plausible biological implementation and is known as the weight transport problem [3].

Early attempts to address the weight transport problem included proposals that the feedback weights can converge to the values of the feedforward weights by applying the same weight changes to both matrices during training (see [13]). This explanation was criticised for simply shifting the problem from transporting weights to transporting weight changes in a network. More recently, feedback alignment was proposed as a method to completely sidestep the need for weight symmetry [14]. It was empirically shown that, with fixed random feedback weight matrices between the layers of a network, the feedforward weight matrices are modified by backpropagation such that they come into alignment with the feedback matrices. This approach can also be implemented with a randomly weighted direct feedback error to every layer (direct feedback alignment, [18]), a method which has also been applied in spiking neural networks [19]. Though such an error distribution process is biologically plausible, the effectiveness of the approach is limited to shallow networks and the accuracy of deep networks appears to suffer severely under such a protocol [20]. Beyond static feedback matrices, matrices with arbitrary magnitudes but alignment of the signs of weights (i.e. positive feedforward weights are mirrored with positive feedback weights and vice versa) show greatly improved performance over feedback alignment [21, 22]. However, propagating the sign of feedback weights is itself a transport problem, and performance with this method is less than optimal.

Recently, methods have been proposed by which the symmetric feedback weight matrices could be learned by biologically plausible means (using only local information). Specifically, methods have emerged which carry out a process of synaptic weight inference [15, 16]. In essence, the backward synaptic connections (which would propagate the error) attempt to infer the feedforward weight between two neurons by observation of their activity alone. This is a process in which, based upon the activity patterns of a pair of neurons, a feedback synapse can infer (and thereby copy) the strength of the feedforward synapse. Such a method was successfully applied in a rate-based neural network by Akrout et al. [15] (hereafter referred to as the Akrout method). This method makes use of inference phases during which neurons are randomly stimulated and their activation is correlated in order to infer synaptic weights. Alternative rate-based methods are available, though we do not consider them here given their non-locality [17]. A more recent proposal [16] considers a spiking neural network model and makes use of the spiking threshold of neurons to implement a quasi-experimental, causal inference method known as regression discontinuity design (we hereafter refer to this method as RDD, also see [23]).
This method similarly uses inference phases in between training epochs in order to infer the backward synaptic weight matrices. These inference methods have proven successful in inferring the feedforward synaptic weights for use in the feedback weight matrices, but they also suffer from a number of drawbacks. First, the Akrout method operates on firing rates and requires a demeaning process which is carried out in batches. This demeaning and batching process is particularly troublesome when applied to spiking networks, where the learning must therefore be carried out offline and firing rates measured by aggregating spikes at specific intervals. In the RDD method, weight inference requires a piece-wise linear fitting process in order to infer the synaptic weights. This procedure requires the storage of four times more parameters per synapse (than just the synaptic weight), a second state variable per neuron, and a high computational complexity per update. Though these components and the calculation protocols might be possible for a neuron to compute, they incur a significant computational cost.

To overcome these issues, we propose a spike-timing-dependent weight inference (STDWI) mechanism for solving the weight transport problem in spiking neural networks. Our method is motivated by analysis of the time-to-spike of various neuron models under the influence of an incident spike. In order to estimate synaptic weights in a biologically plausible and computationally efficient manner, we make use of local information alone for this computation, in particular just the spike times of the pre- and post-synaptic neurons. We show that under a number of conditions our method outperforms both the Akrout and RDD methods when applied to weight estimation in spiking neural network models. We also compare our method to an optimal Bayesian update rule for an integrate-and-fire neuron with stochastic input. Our rule proves effective as an approximation of this update rule. Furthermore, for networks in which the neurons emit action potentials at random times (i.e. without a correlation structure), our learning rule can analytically be shown to approximate a rate-based learning rule similar to the Akrout method. Finally, the update rule we propose is computationally cheap and can be applied in an online fashion.

## 2 Methods

To address the weight transport problem, it has been proposed that network weights can be inferred from activity [15, 16]. We can formulate this problem as follows: Consider two neurons, labelled “A” and “B”, embedded in a larger network structure. Amongst other connections, there exists a ‘forward’ synaptic connection from neuron A to neuron B. Therefore, the activity of neuron B is dependent upon some internal dynamics as well as the network activity as a whole, including the activity of neuron A, via incoming synaptic connections. Let us also now consider a pseudo synaptic connection from B to A, a connection meant to carry error information backward through the network (note that neither this work nor prior work treats this synapse as affecting the network activity during inference). According to the backpropagation of error algorithm, the optimal value of this synaptic connection weight should be equivalent to the weight of the forward synapse from A to B. How the forward synaptic weight can be copied to the backward synaptic connection is the problem at hand.
Here we address how to infer the forward synaptic weight value at the backward synapse given knowledge of the spike times (indexed $k$) of the neurons A and B ($t_{A}^{k}$ and $t_{B}^{k}$, respectively) and by accounting for some average impact from all other synapses. We derive a computationally simple and biologically plausible method which, by use of appropriate approximations, achieves this aim and could be employed at the synaptic level to learn feedback weights for error propagation.

### 2.1 Derivation of the weight inference method

In order to derive our proposed weight inference rule, we analyse a simplified deterministic leaky integrate-and-fire (LIF) neuron with instantaneous synaptic inputs from a single source and drift (where drift is a placeholder for the unknown impact of all other incident synapses) and then consider the impact of noise upon this model. A deterministic LIF neuron with drift $\mu$ has voltage dynamics

$\tau_{m}\frac{dv(t)}{dt}=v_{r}-v(t)+\mu\,.$ (1)

In the absence of any input spikes, this equation can be solved, for an arbitrary initial condition $v_{0}$ at time $t_{0}$, yielding

$v(t)=(v_{r}+\mu)\left(1-e^{-(t-t_{0})/\tau_{m}}\right)+v_{0}e^{-(t-t_{0})/\tau_{m}}\,.$ (2)

With this expression we can now consider two cases: one in which the neuron is not stimulated by any incoming spikes from the input neuron and, beginning at voltage $v_{0}$ at time $t_{0}$, it spikes with some time delay $\hat{T}$ (purely under the influence of drift); and one in which the neuron received an additional instantaneous voltage injection of magnitude $w$ at time $t_{0}$ (i.e. a spike arrives and stimulates the neuron) and it spikes with a different time delay, $T$ (such that the second case involves replacement of $v_{0}$ with $v_{0}+w$). At the spike threshold $\theta$, these two cases satisfy $\theta=(v_{r}+\mu)\left(1-e^{-\hat{T}/\tau_{m}}\right)+v_{0}e^{-\hat{T}/\tau_{m}}$ and $\theta=(v_{r}+\mu)\left(1-e^{-T/\tau_{m}}\right)+(v_{0}+w)e^{-T/\tau_{m}}$; subtracting these two conditions and solving for $w$, the stimulation magnitude, gives

$w={e^{T/\tau_{m}}}(v_{r}+\mu-v_{0})\left(e^{-T/\tau_{m}}-e^{-\hat{T}/\tau_{m}}\right).$ (3)

Equation (3) provides an exact solution for determining the amount of instantaneous voltage ($w$) injected into a neuron at some time $t_{0}$, given that its spike time was modified from an expected time $\hat{T}$ to the time $T$. This is under the assumption that, other than the instantaneous voltage injection and a background drift, there are no other inputs to the neuron during this time.

We wish to make use of this deterministic solution for application to noisy conditions. In particular, when the background drift is considered as due to input from many other neurons, it would inherently be noisy (unlike our derivation above). However, the current expression includes a number of terms which are highly susceptible to noise. First, the exponential term $e^{T/\tau_{m}}$ is a strictly positive function which is exponential in the time that the neuron took to spike. If we consider a case in which $T$ is noisy, this term scales our noise exponentially but never changes sign. Second, the expected time to spike, $\hat{T}$, is difficult to estimate in a noisy neuron. However, this term is crucial for our ability to accurately identify positive and negative weights and it must, therefore, be approximated.

First, we consider the exponential term $e^{T/\tau_{m}}$. Though this term might (in the noiseless case) aid in producing a highly accurate weight estimation, in the face of noise it introduces a significant error.
Furthermore, in the noiseless case (where a single estimate of the weight is exact), its biggest function is to scale the estimated weight based upon the time taken to spike. This, in essence, re-weights cases in which the neuron dynamics take excess time to reach threshold, whether due to beginning far from threshold (low $v_{0}$), having a small drift, and/or receiving the incident spike from a synapse with a small or negative weight. This is therefore a mechanism to ensure that, for cases in which our system setup results in dynamics which take some time to reach threshold, the weight scale is treated sensibly. However, in the coming derivation we intend to sample over multiple instances of such an estimation in a noisy system, such that there is an unreliable signal of ‘time to spike’. Given that this term is heavily influenced by noise, we wish to ignore it. Therefore, considering its function, our intention to sample, and its susceptibility to noise, we test in this work the removal of this term from our weight estimation and instead propose weight estimation without this scaling. We empirically find this approach successful. Thus, our approach to (approximate) weight estimation can be described as

$\tilde{w}=C(v_{r}+\mu-v_{0})\left(e^{-T/\tau_{m}}-e^{-\hat{T}/\tau_{m}}\right)$ (4)

where $\tilde{w}$ is an approximate estimation of the weight (ignoring a rescaling based upon time to spike) and we have introduced a general constant $C$ to allow linear rescaling of weight estimates.

Next, we wish to approximate $\hat{T}$ in the face of noisy samples. For this purpose, let us average our estimate of the weight over $K$ observations. In particular, let us consider a set of samples $T^{k}$, indexed by $k$, each of which corresponds to the time to spike given that the ‘output’ neuron started from some initial voltage $v_{0}^{k}$ at the moment of an incident spike. For each of these samples, there exists an “expected” time from incident spike to neuron spike, $\hat{T}^{k}$, which corresponds to when the neuron would have spiked if not for this incident spike. Taking an average of the weight estimate over these $K$ samples yields an estimated weight

$\tilde{w}^{K}=\frac{C}{K}\sum_{k=1}^{K}\left(v_{r}+\mu-v_{0}^{k}\right)\left(e^{-T^{k}/\tau_{m}}-e^{-\hat{T}^{k}/\tau_{m}}\right)$ (5)

with $K$ indicating the number of observations/samples taken. If we assume that our $K$ samples are chosen independently of the incident activity (i.e. the incident spikes are random), then the values of the initial voltage, $v_{0}^{k}$, and expected times to spike, $\hat{T}^{k}$, are both independent of the sampling process (and of $T^{k}$). Therefore, these can be independently averaged and, hence, replaced with $\langle v_{0}\rangle$ and $\langle\hat{T}\rangle$. Thus, we arrive at an expression

$\tilde{w}^{K}=\frac{D}{K}\sum_{k=1}^{K}\left(e^{-T^{k}/\tau_{m}}-e^{-\langle\hat{T}\rangle/\tau_{m}}\right)\,,$ (6)

where $D=C(v_{r}+\mu-\langle v_{0}\rangle)$ combines the various constants and scales our estimate of the weights. If we now finally consider how we ought to update our estimate of $w$ when we receive an additional, $(K+1)$-th, sample, we arrive at

$\Delta\tilde{w}=\tilde{w}^{K+1}-\tilde{w}^{K}=\frac{1}{K+1}\left(D\left(e^{-T^{K+1}/\tau_{m}}-e^{-\langle\hat{T}\rangle/\tau_{m}}\right)-\tilde{w}^{K}\right).$ (7)

Inspecting our derived update rule, the first exponential term in Eq. (7) is exponential in the time since an incident spike arrived.
Given this, it is equivalent to sampling a trace which continuously measures the (fast exponential) instantaneous firing rate of the neuron from which the incident spike is arriving. The second exponential term is exponential in $\langle\hat{T}\rangle$, the average time at which incident spikes ‘should’ arrive had the weight been zero, a measure of the incident spike rate. This term can be approximated by sampling a slow exponential measure of the average rate of the neuron from which incident spikes arrive. Finally, the constant term $D=C(v_{r}+\mu-\langle v_{0}\rangle)$ has a factor of the drift term $\mu$. In our model assumption, this drift is the background input aside from the synapse under inference and affects the baseline time to spike of our output unit. This drift therefore scales up with the output neuron’s average firing rate. With these observations, we can make appropriate replacements in order to describe a local spike-timing-dependent weight inference rule.

### 2.2 Spike-timing-dependent weight inference

We propose a spike-timing-dependent rule for the purpose of weight inference (STDWI) which can be deployed for parallel and online updates with minimal computational complexity. Our method maintains multiple online estimates of neuron firing rates through eligibility traces [24, 25] and makes use of these for synaptic weight estimation. In particular, each neuron (indexed $j$) maintains a fast trace $\epsilon_{j}^{f}(t)$ and a slow trace $\epsilon_{j}^{s}(t)$. The dynamics of the fast and slow traces are calculated for each neuron as

$\tau_{f}\frac{d\epsilon_{j}^{f}(t)}{dt}=-\epsilon_{j}^{f}(t)+S_{j}(t)\quad\text{and}\quad\tau_{s}\frac{d\epsilon_{j}^{s}(t)}{dt}=-\epsilon_{j}^{s}(t)+\frac{\tau_{f}}{\tau_{s}}S_{j}(t)\,,$ (8)

where $\tau_{f}$ and $\tau_{s}$ are the decay constants of the fast and slow traces, respectively, and $S_{j}(t)$ is the spike train of the $j$th neuron. This spike train is computed from the set of $k$ spike times of the $j$th neuron, $t^{k}_{j}$, such that $S_{j}(t)=\sum_{k}\delta(t-t^{k}_{j})$, where $\delta(\cdot)$ is the Dirac delta function. These traces are computed during simulation in a time-stepping method, with the analytic (exponential) solution to the traces computed at every timestep. Note that, due to the scaling factor $\tau_{f}/\tau_{s}$ applied to the slow trace’s spike train, the two traces have an equal area (across time) when they both start from an initial value of zero. This property ensures that both eligibility traces measure the firing rate of the neurons on the same scale.

Having defined these eligibility traces, we define our weight inference rule as

$\frac{dw_{ji}}{dt}=\alpha S_{i}(t)\left(\epsilon_{i}^{s}(t)\left(\epsilon_{j}^{f}(t)-\epsilon_{j}^{s}(t)\right)-\eta w_{ji}\right)\,,$ (9)

where this rule describes inference of the weight of the forward synapse at the backward synapse (from neuron $i$ to neuron $j$), $w_{ji}$, with $\alpha$ as the learning rate and $\eta$ as the relative level of weight decay (both constant hyper-parameters). This learning rule and the fast and slow measures of the neuron’s firing rates are inspired by the synaptic inference rule derived in Section 2.1. Note that, though this rule is given as a differential equation, since updates are gated by neuron $i$’s spike times, it is implemented as updating the synaptic weights such that $w_{ji}\leftarrow w_{ji}+dw_{ji}/dt$ at every timepoint where neuron $i$ spikes.
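To make this concrete, the following is a minimal single-synapse Python sketch of the trace dynamics (Eq. (8)) and the spike-gated update (Eq. (9)). The timestep and trace time constants are illustrative assumptions rather than values taken from our experiments; the spike increments of $1$ and $\tau_{f}/\tau_{s}$ give the equal-area property described above.

```python
import numpy as np

def run_stdwi(pre_spikes, post_spikes, dt=1e-4, tau_f=0.05, tau_s=0.5,
              alpha=5e-5, eta=1.0):
    """Single-synapse STDWI sketch. pre_spikes/post_spikes are 0/1 arrays
    giving the spike trains of neuron i (pre) and neuron j (post), both
    named relative to the backward synaptic connection."""
    decay_f, decay_s = np.exp(-dt / tau_f), np.exp(-dt / tau_s)
    eps_f_j = eps_s_j = eps_s_i = 0.0  # fast/slow eligibility traces (Eq. 8)
    w = 0.0
    w_history = np.zeros(len(pre_spikes))
    for t in range(len(pre_spikes)):
        # analytic exponential decay of each trace over one timestep
        eps_f_j *= decay_f
        eps_s_j *= decay_s
        eps_s_i *= decay_s
        # spike increments: 1 for the fast trace, tau_f/tau_s for the slow
        # trace, so that both traces have equal area per spike
        eps_f_j += post_spikes[t]
        eps_s_j += (tau_f / tau_s) * post_spikes[t]
        eps_s_i += (tau_f / tau_s) * pre_spikes[t]
        if pre_spikes[t]:  # update gated by neuron i's spikes (Eq. 9)
            w += alpha * (eps_s_i * (eps_f_j - eps_s_j) - eta * w)
        w_history[t] = w
    return w_history
```

In a full network, one such pair of traces is maintained per neuron (not per synapse), and each backward synapse applies the gated update above using only its own weight estimate and the traces of the two neurons it connects.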
Figure 2: A) Illustration of the difference between our derived method for weight inference by analysis of a deterministic LIF neuron (left) versus our proposed STDWI method (right), which uses a fast trace to measure the instantaneous firing rate (the first exponential term in Eq. (7)) and a slow trace to measure the average firing rate (the second exponential term in Eq. (7)). B) Assuming regular neuron firing conditions, our method can be interpreted as an STDP rule of the form shown inset, where $T$ is the post- minus pre-synaptic neuron spike time. Note that pre- and post-synaptic are defined relative to the backward synaptic connection.

The formulation for the weight update given in Eq. (7) and our proposed STDWI rule given in Eq. (9) have corresponding terms, see Figure 2A. Both of these formulations include updates which occur only upon our pre-synaptic neuron spikes. Note that we use the terms pre-synaptic/post-synaptic relative to the backward synaptic connection for application to the weight transport problem. In our approximation, we replace the first exponential term of Eq. (7) (an exponential measure of the time since the post-synaptic neuron’s last spike) with a fast timescale measure of the post-synaptic neuron’s firing rate (the fast trace), and we use a slow timescale measure of the post-synaptic neuron’s firing rate (the slow trace) to approximate the second exponential term (which computes a trace tracking the average post-synaptic neuron’s firing rate). Finally, we include a slow measure of the pre-synaptic neuron’s firing rate as a multiplicative factor, which is intended to capture the dependence of the weight estimate upon the pre-synaptic neuron drift. Figure 2A depicts how updates are calculated upon pre-synaptic neuron spikes for both the deterministic LIF and STDWI updates, highlighting both the similarities and the key differences between these implementations.

Note that the learning rule proposed here bears a curious relationship to traditional Spike-Timing-Dependent Plasticity (STDP) rules. In particular, the sign of the weight update is determined by the spike timings and firing rates of the pre- and post-synaptic units. In general, if we assume some fixed regular firing rate of the post-synaptic neuron, then depending upon the spike time of the pre-synaptic neuron relative to this regular firing, we obtain positive or negative weight estimates. This rule therefore appears in a mirrored form relative to the commonly cited STDP observations [26], see Figure 2B.

### 2.3 Spiking neuron model

For simulations in this study, we consider neurons with membrane leakage and conductance-based synaptic kernels whose membrane voltage dynamics can be described by

$\tau_{m}\frac{dv_{i}(t)}{dt}=(v_{r}-v_{i}(t))+\frac{g_{D}}{g_{L}}\bigg{(}\sum_{j}w_{ij}\kappa_{j}(t)-v_{i}(t)\bigg{)}\,,$ (10)

where $\tau_{m}$ is the leakage time constant, $v_{r}$ is the rest voltage, $g_{D}$ and $g_{L}$ are the dendritic and somatic leakage conductances, respectively, $w_{ij}$ is the weighting of the forward synaptic connection from the $j$th neuron to the $i$th neuron, and $\kappa_{j}$ describes a filtered form of the $j$th neuron’s spike train.
The form of the synaptic filtering kernel is taken as a double exponential with a fast rise and slow decay, such that

$\kappa_{j}(t)=\frac{1}{\tau_{2}-\tau_{1}}\sum_{k}H(t-t^{k}_{j})\left(e^{-\frac{t-t^{k}_{j}}{\tau_{2}}}-e^{-\frac{t-t^{k}_{j}}{\tau_{1}}}\right)\,,$ (11)

where $\tau_{1}$ and $\tau_{2}$ are the timescales of the fast rise and slow decay, taken to be 3ms and 10ms respectively, and $H(\cdot)$ is the Heaviside step function. When the membrane voltage, $v_{i}(t)$, reaches a threshold, $\theta$, an action potential is recorded and propagated. The membrane voltage is thereafter reset to a reset voltage $v_{\text{reset}}$. For the simulations in this study, we do not implement a refractory period explicitly. This is not expected to cause much deviation from the analysis in our low firing-rate regime.

### 2.4 Comparison against alternative weight inference methods

In Section 3.2, we compare our method (STDWI) to alternative methods (the RDD and Akrout methods) proposed for the local inference of weights in a network. The inference is carried out in networks composed of a group of spiking input neurons connected via a single forward weight matrix to a group of spiking output neurons. The spiking neuron dynamics are equivalent to those used in the work which introduced RDD as a causal weight inference method [16], see Section 2.3. The network is stimulated by selectively exciting input neurons, collecting the responses of input and output neurons, and applying the set of techniques to these neural data.

During simulation, some percentage of the input neurons is randomly sampled every 100ms, and these are excited with random background Poisson-distributed spike trains (with a fixed positive synaptic connection weight from stimulation nodes to the input neurons). Every 100ms the input neurons being stimulated are re-sampled. During this stimulation, non-selected neurons are left unstimulated with zero input. The STDWI and RDD methods are applied in a continuous form, paying no attention to the 100ms stimulation periods. The Akrout method was proposed for rate-based neural networks and makes use of a batch-wise de-meaning of firing rates. Therefore, the 100ms stimulation periods are considered as individual ‘stimuli’ for the Akrout method, and the firing rates are computed for each of these ‘stimuli’. These individual stimuli are then grouped into batches (batch size chosen by grid search) and used to update the inferred weight according to the Akrout method.

The spiking dynamics with stimulation, as described above, were simulated for 2500s. During weight inference, these 2500s of dynamics were then looped ten times in order to produce a total of 25,000s of training time. This looping was necessary due to memory and storage constraints and can be interpreted as ten epochs of training. All methods were trained with a learning rate of $5\times 10^{-5}$. This learning rate was chosen by iteratively reducing the learning rate until stable weight inference was clear for all methods. Conclusions are (and should be) drawn based upon asymptotic performance rather than speed, given that this hyperparameter was not tuned on a method-by-method basis. Free parameters were optimized by measuring sign accuracy and Pearson correlation (best average performance) in a grid search carried out with a single seed of the network simulation. The selected parameters were then applied for inference on five other network seeds, and the results collected. See Appendix B for the grid search results.
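The following is a minimal sketch of this simulation setup, combining the membrane dynamics of Eq. (10), the kernel of Eq. (11), and the 100ms stimulation protocol. The kernel time constants follow the text; the network sizes and the 20% stimulated fraction match the sparse-stimulation experiments of Section 3.2; the remaining parameter values (threshold, membrane time constant, stimulation rate, weight distribution) are illustrative assumptions, and the stimulated input-layer spikes are modelled directly as Poisson processes rather than via separately simulated stimulation nodes.

```python
import numpy as np

rng = np.random.default_rng(0)

dt      = 1e-4            # timestep (s); assumed
tau_m   = 0.02            # membrane time constant (s); assumed
v_r, theta, v_reset = 0.0, 1.0, 0.0   # rest/threshold/reset; assumed
gD_gL   = 1.0             # ratio g_D / g_L; assumed
tau1, tau2 = 0.003, 0.010 # kernel rise/decay (s), as in the text

n_in, n_out = 100, 10
W = rng.normal(0.05, 0.1, size=(n_out, n_in))  # small positive-mean weights (assumed values)

def step_kernel(r, d, spikes):
    """Advance the double-exponential kernel (Eq. 11), represented as the
    difference of a slow decay trace d and a fast rise trace r."""
    r = r * np.exp(-dt / tau1) + spikes
    d = d * np.exp(-dt / tau2) + spikes
    return r, d, (d - r) / (tau2 - tau1)

v = np.zeros(n_out)
r_tr, d_tr = np.zeros(n_in), np.zeros(n_in)
stim_rate = 20.0                     # Hz; assumed stimulation rate
stim_mask = np.zeros(n_in, dtype=bool)

for t in range(int(5.0 / dt)):       # a short 5 s toy run
    if t % int(0.1 / dt) == 0:       # re-sample stimulated subset every 100ms
        stim_mask[:] = False
        stim_mask[rng.choice(n_in, size=n_in // 5, replace=False)] = True
    in_spikes = stim_mask & (rng.random(n_in) < stim_rate * dt)
    r_tr, d_tr, kappa = step_kernel(r_tr, d_tr, in_spikes.astype(float))
    # membrane dynamics of Eq. (10), Euler-stepped
    v += ((v_r - v) + gD_gL * (W @ kappa - v)) * (dt / tau_m)
    out_spikes = v >= theta
    v[out_spikes] = v_reset          # reset upon spiking
```

The recorded input and output spike trains from such a run are what the STDWI, RDD, and Akrout methods consume during weight inference.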
## 3 Results

To validate our approach, we compare it against a Bayes-optimal method for a simple neuron model that affords an analytical solution. Furthermore, we compare it to two state-of-the-art synaptic weight inference methods for estimation of the connectivity of simulated spiking neural networks (see the models described in Section 2.3). Code to reproduce the results is available at https://github.com/nasiryahm/STDWI.

### 3.1 Comparison of STDWI to a Bayes-optimal method

To verify the validity of our proposed STDWI rule and demonstrate its flexibility, we compare it against a Bayes-optimal method for inferring synaptic inputs to a neuron with internal state modelled by a Wiener process (Figure 3). Unlike a stochastic LIF neuron model, this model has a tractable hitting-time analysis, and thereby we can form a Bayesian update rule for estimating the size of a synaptic input given a subsequent output neuron spike time. A detailed derivation of the Bayes-optimal method is provided in Appendix A.

Figure 3: Weight inference accuracy of the Bayesian and STDWI approaches applied to a pure Wiener process with jumps. Panels A and B show scatter plots of the true and inferred weights for the Bayesian and STDWI approach, respectively, at the end of the training time ($t=50s$). Panels C and D show how the Pearson correlation and sign alignment between the true and inferred weights evolve through the training process. The standard deviation of the measures across 10 random network seeds is shown as envelopes about the curves.

Since the Bayesian update rule occurs upon every pre-synaptic neuron spike and is based upon knowledge of when the last post-synaptic spike occurred (rather than knowledge of all past post-synaptic spikes), it would be improper to compare the optimal Bayesian method against our full STDWI rule (which makes use of all previous spikes in its eligibility traces). Therefore, to ensure a fair comparison, we modify our STDWI rule (Eq. 9) to use only single spikes. To do this, we replaced the slow eligibility traces, $\epsilon_{j}^{s}(t)$, with a constant (optimally set as the average of the fast traces), and replaced the fast trace, $\epsilon_{j}^{f}(t)$, with a term which is exponential in the time since the last spike alone (rather than a decaying trace of all past post-synaptic spikes). This modification is equivalent to Eq. 7 if we treat the second exponential term as a constant and use an arbitrary learning rate.

We repeatedly simulated stochastic neurons, each with a single forward synaptic input connection but with varying synaptic connection strengths across simulations. We simulated the systems for 50s and thereafter used the network activity in this time period for synaptic weight inference. We repeated this analysis over a wide range of synaptic weight strengths to attempt inference of many different synaptic strengths. Figure 3 shows various measures of the similarity between the true and inferred jump widths for this simulation when using either the Bayesian or our derived method for weight inference. Both the scatter plots and learning curves show that the STDWI method closely matches the Bayes-optimal results, supporting the theoretical soundness of our approach.

### 3.2 Comparison of STDWI to alternative weight inference methods

Figure 4: Weight inference accuracy comparison between the RDD, Akrout, and STDWI approaches for a network of LIF neurons with conductance-based synapses.
Panels A, B and C show scatter plots of the true and inferred weights for each method at the end of training for a single network. Panels D and E show the Pearson correlation and sign alignment between the inferred and true weights. Solid lines show the mean of these measures across ten randomly seeded networks, and the shaded areas show the standard deviation across these networks. Panel F shows the convergence of the inferred weights for each method. The 75% largest (by magnitude) inferred weights were collected, individually normalized to their final value, and their average plotted, with the standard deviation shown, as before, by the shaded area.

We also compared our proposed STDWI approach to two existing methods for synaptic weight inference. In particular, we compare against the RDD and Akrout methods. Details of both methods are provided in Appendix B. To simulate a neural network model which is amenable to all of these weight inference methods, we use the same neural network models and setup as described in [16]. This network is composed of LIF neurons with kernel-filtered, conductance-based synaptic inputs. We simulate two-layer network models with an input layer of 100 LIF neurons fully connected to an output layer of 10 LIF neurons. The synaptic weight matrix connecting these is drawn from a normal distribution with a small but positive mean. It is this weight matrix which must be inferred by the range of methods.

The network is stimulated by selectively exciting input neurons. Some percentage of the input neurons is randomly sampled every 100ms, and these are excited with background Poisson-distributed input spike trains (with a fixed positive synaptic connection weight from stimulation nodes to the neurons). Every 100ms the input neurons being stimulated are re-sampled. During this stimulation process, non-selected neurons are left unstimulated with zero input.

Figure 4 shows the result of weight inference with the range of methods discussed above for networks in which 20% of input neurons are simultaneously stimulated (sparse random stimulation). Scatter plots of the inferred vs true weights (see Panels 4A-C) show the strength of the STDWI method, which produces a tighter distribution of weights than its competitors. Note that the scale of the inferred synaptic weights differs from that of the true weights for all methods, owing to the approximate nature of the methods. In practice, a rescaling could be applied to improve the correspondence to the scale of the true weights, though none of our measures were sensitive to the inferred weight scale, and therefore this was disregarded.

Panels 4D and E show the evolution of the Pearson correlation and sign alignment between the true and inferred weights through training for the range of methods. As can be seen, our proposed STDWI method outperforms both the RDD and Akrout methods, though the difference in the Pearson correlation of all three methods is small. Note that RDD outperforms Akrout in terms of sign accuracy. Finally, Panel 4F shows the successful convergence of the inferred weights for all three methods. This plot shows the normalized weights (normalized through division by the final converged weight value) of the top 75% largest-magnitude network weights. These weights had a relatively unambiguous sign, hence their selection. This plot is provided to support the argument that the hyperparameter selections made were sufficient for stable inference by these methods.
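For reference, the two accuracy measures used throughout this section are straightforward to compute; a minimal sketch (the function names are ours):

```python
import numpy as np

def pearson_correlation(w_true, w_est):
    """Pearson correlation between the true and inferred weight matrices."""
    return np.corrcoef(w_true.ravel(), w_est.ravel())[0, 1]

def sign_alignment(w_true, w_est):
    """Fraction of synapses whose inferred weight has the correct sign."""
    return np.mean(np.sign(w_true.ravel()) == np.sign(w_est.ravel()))
```

Note that, since the Pearson correlation is invariant to linear rescaling and the sign alignment depends only on signs, neither measure is affected by the scale mismatch between inferred and true weights discussed above.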
### 3.3 The impact of stimulation protocol on weight inference

It is also instructive to investigate how different stimulation protocols affect weight inference. To this end, and in contrast to the sparse stimulation in the previous section, we assume that all input neurons are stimulated (dense stimulation). Furthermore, we investigate how input timing correlations affect weight inference. Since input neurons are stimulated by random Poisson spike trains, we can create correlation between individual Poisson spike trains by a thinning process (via a single interaction process model, see [27]). Figure 5 shows results for this dense stimulation regime.

Figure 5: Weight inference accuracy comparison between the RDD, Akrout, and STDWI approaches for a network of LIF neurons with conductance-based synapses when all input neurons are stimulated. Panels A, B and C show scatter plots of the true and inferred weights for each method at the end of training for a single network. Panels D and E show the Pearson correlation and sign alignment between the inferred and true weights in the uncorrelated spiking case during training. Panels F and G show the final results (post-training) under varying input spike-time correlation. Solid lines (points) show the mean of these measures across five randomly seeded networks, and the shaded areas (error bars) show the standard deviation across these networks.

Scatter plots of the true vs inferred weights (see Panels 5A-C) again show that STDWI produces a tighter distribution of weights than its competitors. This highlights the smaller impact of stimulation density upon the STDWI inference method compared with the Akrout or RDD methods. These scatter plots show inferred weights for the dense stimulation case with zero correlation in timing between the various input stimulation spike trains.

Panels 5D and E show that the STDWI method remains the most successful (as measured by the Pearson correlation and sign alignment mean) when compared with the RDD and Akrout methods under dense stimulation. However, the Akrout method benefits significantly from dense stimulation (whereas the RDD method appears to suffer somewhat). Thus, the RDD method does not systematically outperform the Akrout method as previously reported (cf. Panels 4D and 5E).

Panels 5F and G demonstrate how weight inference is affected by input timing correlations. STDWI remains largely successful; however, as the input spike-timing correlation increases, the RDD method performs favourably. This may be expected since, unlike the STDWI and Akrout methods, the RDD method compares only near-threshold events to establish synaptic weights. This filtering of the events by which inference is done may be favourable in the regime of high input spike-timing correlation, though the benefit only exists for some parameter range.

## 4 Discussion

Our results demonstrate the efficacy of STDWI for synaptic weight inference across a range of network models and stimulation protocols. We have shown that our approach successfully approximates Bayes-optimal results in a simple neuron model and outperforms existing methods for weight inference. Our results also highlight the attention that must be paid to the stimulation protocols employed, since the efficacy of the different synaptic weight inference methods has been shown to depend crucially on these. Existing methods cannot be indiscriminately applied to arbitrary neuron models.
For example, the RDD method requires a neuron model which has a second state variable mimicking the membrane voltage. This state variable should relax to the same value as the membrane voltage when the neuron is not spiking and otherwise should reflect how “driven” the neuron is when it spikes. However, such a state variable is not necessarily constructable for an arbitrary neuron model. In contrast, STDWI makes use of spike timing alone and is therefore agnostic to the neuron dynamics being simulated.

In our analyses, we reported both the Pearson correlation and the sign accuracy. STDWI systematically outperformed the alternative approaches on both measures for a range of parameters. One exception is the investigation of timing-based correlations (Figure 5F and G), in which RDD outperformed the other methods in the medium-correlation regime. This suggests that a particular regime might favour the RDD method; however, its failure in other regimes suggests that the current formulation of RDD, with its analysis about a spike threshold, may not be the most effective.

It is also important to realize that the number of variables stored per synaptic connection is greater for RDD than for either the STDWI or Akrout methods. RDD requires a fitting process using data points corresponding to events in which the neuron’s membrane voltage was near the spiking threshold. Aside from the selection of events close to threshold, the RDD method also uses four variables per synaptic connection to characterise a piece-wise linear function (with linear functions fit above and below the spiking threshold). By comparison, STDWI uses two variables for the fast and slow eligibility traces of each neuron, and the Akrout method uses two variables storing the firing rate and mean firing rate of each unit within a batch.

To derive our learning rule, we made use of a deterministic analysis of a LIF neuron and considered the spike times of a single input neuron. Our deterministic analyses later required approximations in order to remove terms which are highly affected by noise. Ideally, we would instead have carried out a stochastic process analysis for a LIF neuron. The particular stochastic process to which our leaky neuron model corresponds is known as an Ornstein-Uhlenbeck (OU) process. Unfortunately, a general analysis of the OU process that describes when we ought to expect such a neuron to spike (the hitting time) is non-trivial [28]. Nonetheless, the efficacy of our assumptions is validated by the quality of our results. Furthermore, under a rate-based analysis of our proposed STDWI rule, we can show a correspondence to the Akrout rule (see Appendix C).

A limitation of our approach is that the inference process considers the spike times of a single input neuron. Instead, a multivariate approach, which would take into account the spike times of all input neurons to infer synaptic weights, could prove even more powerful and accurate for weight inference. Indeed, multivariate analyses, often making use of such multi-neuron spiking information along with cross-correlation measures and statistical significance testing, have previously been applied in approaches which aim to infer neural circuit connectivity from neural data [29, 30, 31, 32]. These approaches, however, make use of globally available network information and are not concerned with whether this information is locally available at the synapse.
Instead, we took a simplified but powerful approach which could plausibly be implemented at the single-synapse level, providing a candidate solution to the weight transport problem.

In conclusion, we have shown that STDWI outperforms existing approaches for solving the weight transport problem. Moreover, it is more flexible, being capable of application to any spiking network data, while requiring minimal computational overhead. The benefits of data efficiency and online computation, along with its computational simplicity and accuracy, make STDWI a promising biologically plausible mechanism for gradient-based learning in spiking neural networks.

## Acknowledgements

We thank Blake Richards and Jordan Guerguiev for their correspondence and for providing us with the code they used for the RDD method.

## References

* LeCun et al. [2015] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015 May;521(7553):436–444.
* Schmidhuber [2014] Schmidhuber J. Deep Learning in Neural Networks: An Overview. ArXiv 2014 Apr;1404.7828.
* Grossberg [1987] Grossberg S. Competitive learning: From interactive activation to adaptive resonance. Cogn Sci 1987 Jan;11(1):23–63.
* Crick [1989] Crick F. The recent excitement about neural networks. Nature 1989 Jan;337(6203):129–132.
* Richards et al. [2019] Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, et al. A deep learning framework for neuroscience. Nat Neurosci 2019 Nov;22(11):1761–1770.
* Whittington and Bogacz [2019] Whittington JCR, Bogacz R. Theories of Error Back-Propagation in the Brain. Trends Cogn Sci 2019 Mar;23(3):235–250.
* Lillicrap and Santoro [2019] Lillicrap TP, Santoro A. Backpropagation through time and the brain. Curr Opin Neurobiol 2019 Apr;55:82–89.
* Yamins and DiCarlo [2016] Yamins DLK, DiCarlo JJ. Using goal-driven deep learning models to understand sensory cortex. Nat Neurosci 2016 Mar;19(3):356–365.
* Güçlü and van Gerven [2015] Güçlü U, van Gerven MAJ. Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream. J Neurosci 2015 Jul;35(27):10005–10014.
* Whittington and Bogacz [2017] Whittington JCR, Bogacz R. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity. Neural Comput 2017 May;29(5):1229–1262.
* Guerguiev et al. [2017] Guerguiev J, Lillicrap TP, Richards BA. Towards deep learning with segregated dendrites. Elife 2017 Dec;6.
* Sacramento et al. [2018] Sacramento J, Ponte Costa R, Bengio Y, Senn W. Dendritic cortical microcircuits approximate the backpropagation algorithm. In: Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R, editors. Advances in Neural Information Processing Systems 31 Curran Associates, Inc.; 2018. p. 8721–8732.
* Kolen and Pollack [1994] Kolen JF, Pollack JB. Backpropagation without weight transport. In: Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), vol. 3 IEEE; 1994. p. 1375–1380.
* Lillicrap et al. [2016] Lillicrap TP, Cownden D, Tweed DB, Akerman CJ. Random synaptic feedback weights support error backpropagation for deep learning. Nat Commun 2016 Nov;7:13276.
* Akrout et al. [2019] Akrout M, Wilson C, Humphreys P, Lillicrap T, Tweed DB. Deep learning without weight transport. In: Advances in Neural Information Processing Systems; 2019. p. 974–982.
* Guerguiev et al. [2019] Guerguiev J, Kording KP, Richards BA. Spike-based causal inference for weight alignment. ArXiv 2019 Oct;1910.01689.
* Kunin et al.
[2020] Kunin D, Nayebi A, Sagastuy-Brena J, Ganguli S, Bloom J, Yamins DLK. Two Routes to Scalable Credit Assignment without Weight Symmetry. ArXiv 2020 Feb;2003.01513. * Nøkland [2016] Nøkland A. Direct Feedback Alignment Provides Learning in Deep Neural Networks. ArXiv 2016 Sep;1609.01596. * Samadi et al. [2017] Samadi A, Lillicrap TP, Tweed DB. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights. Neural Comput 2017 Mar;29(3):578–602. * Bartunov et al. [2018] Bartunov S, Santoro A, Richards B, Marris L, Hinton GE, Lillicrap T. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In: Advances in Neural Information Processing Systems; 2018. p. 9368–9378. * Moskovitz et al. [2018] Moskovitz TH, Litwin-Kumar A, Abbott LF. Feedback alignment in deep convolutional networks. ArXiv 2018 Dec;1812.06488. * Xiao et al. [2018] Xiao W, Chen H, Liao Q, Poggio T. Biologically-plausible learning algorithms can scale to large datasets. ArXiv 2018 Nov;1811.03567. * Lansdell and Kording [2019] Lansdell BJ, Kording KP. Neural spiking for causal inference. BioRxiv 2019 Oct;p. 253351. * Izhikevich [2007] Izhikevich EM. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb Cortex 2007 Oct;17(10):2443–2452. * Gerstner et al. [2018] Gerstner W, Lehmann M, Liakoni V, Corneil D, Brea J. Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules. Front Neural Circuits 2018 Jul;12:53. * Bi and Poo [1998] Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 1998 Dec;18(24):10464–10472. * Kuhn et al. [2003] Kuhn A, Aertsen A, Rotter S. Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput 2003 Jan;15(1):67–101. * Lipton and Kaushansky [2018] Lipton A, Kaushansky V. On the First Hitting Time Density of an Ornstein-Uhlenbeck Process. ArXiv 2018 Oct;1810.02390. * Van Bussel et al. [2011] Van Bussel F, Kriener B, Timme M. Inferring synaptic connectivity from spatio-temporal spike patterns. Front Comput Neurosci 2011 Feb;5:3. * Timme and Casadiego [2014] Timme M, Casadiego J. Revealing networks from dynamics: an introduction. J Phys A: Math Theor 2014 Aug;47(34):343001. * Kobayashi et al. [2019] Kobayashi R, Kurita S, Kurth A, Kitano K, Mizuseki K, Diesmann M, et al. Reconstructing neuronal circuitry from parallel spike trains. Nat Commun 2019 Oct;10(1):4468. * Gerhard et al. [2013] Gerhard F, Kispersky T, Gutierrez GJ, Marder E, Kramer M, Eden U. Successful reconstruction of a physiological circuit with known connectivity from spiking activity alone. PLoS Comput Biol 2013 Jul;9(7):e1003138. * Morrison et al. [2008] Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biol Cybern 2008 Jun;98(6):459–478. ## Appendix A Bayesian weight estimation for a stochastic neuron model As a method of verification of our proposed STDWI rule and an exhibition of its flexibility, we compare it against an optimal Bayesian method for inferring a single synaptic input to a neuron with internal state modelled by Brownian motion with drift and diffusion (a Wiener process). 
Unlike a stochastic leaky integrate-and-fire neuron model, this model has a tractable hitting-time analysis, and thereby we can form an optimal Bayesian update rule for estimating the size of a synaptic input given a subsequent output neuron spike time. The synaptic weight inference analysis for this simple neuron model, and its similarity to our STDWI rule, is described in the following sections.

### A.1 Bayesian estimation of synaptic weights

We wish to estimate the weight of a synaptic connection given local-only information. In particular, this involves estimating the weight of a synaptic connection given the spike times of an input and output neuron (input and output relative to the forward synaptic connection) as well as the output neuron's membrane voltages. Constraining this further, let us estimate a synaptic connection weight, $w$, between two neurons given a single input spike time, $t_{\text{in}}$, and the first output spike time which follows this input spike, $t_{\text{out}}$, where $t_{\text{out}}>t_{\text{in}}$. If we carry out all analysis relative to the input spike time, $t_{\text{in}}$, we can define the key dependent factors. First, the output neuron's time to spike (the hitting time) following the input neuron spike, which we define as $T=t_{\text{out}}-t_{\text{in}}$. The initial state of the output neuron is also a determining factor in this analysis, as it defines the distance to threshold $\Delta$, which we elaborate on below. Given this setup and by Bayes' rule, we aim to compute

$p(w\mid T,\Delta)\propto p(T\mid w,\Delta)p(w)\,.$ (12)

The likelihood term $p(T\mid w,\Delta)$ can be computed through analysis of the neural dynamics. To compute it, we must account for the impact of spikes from all other input neurons. In general this is non-trivial. To simplify this analysis, we consider the case of a non-leaky integrate-and-fire neuron driven by random input.

### A.2 Stochastic neuron model

We consider a spiking neural network of neurons whose membrane voltage evolves under the effect of Brownian motion. As such, changes in the membrane voltage, $v(t)$, can be described by

$\frac{dv(t)}{dt}=I(t)\,,$ (13)

where $I(t)$ is the total input to the cell at time $t$. Notably, this change in membrane voltage is agnostic to the current voltage $v(t)$ (meaning there is no leakage effect). When this membrane voltage meets the threshold, $\theta$, an action potential is emitted and the membrane voltage is directly reset to the reset voltage $v_{\text{reset}}$. Let us consider the input, $I(t)$, as composed of input from the single synaptic connection and some background stochastic process. The synaptic connection is modelled as producing instantaneous voltage injections which occur upon the spike times of the input neuron. The amplitudes of the instantaneous voltage injections induced by input spikes are equal to the weight of the synaptic connection from input to output neuron, $w$. Aside from these synaptic inputs, we also consider some background input which is a stochastic process. Assuming that there are a large number of randomly spiking input neurons, we can approximate their impact as a random Gaussian input with some mean and variance. This describes a stochastic process, known as a Wiener process, with some drift (mean input) and a diffusion constant (variance). This approximation for a neuron's membrane voltage is valid in the limit of a large number of synaptic connections with small synaptic weight magnitudes.
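To make the model concrete, the following is a minimal Euler-Maruyama simulation sketch of this neuron (not code from the paper); the drift, diffusion, threshold, and step-size values are illustrative assumptions:

```python
import numpy as np

def simulate_hitting_time(w, t_in, mu=1.0, D=0.2, theta=1.0,
                          v0=0.0, dt=1e-3, t_max=10.0, rng=None):
    """Euler-Maruyama sketch of the non-leaky neuron: the membrane voltage
    drifts at rate mu, diffuses with variance D, and jumps by w at the
    input spike time t_in. Returns T = t_out - t_in for the first
    threshold crossing after t_in (None if no spike occurs before t_max).
    For simplicity, output spikes before t_in are ignored, matching the
    analysis which conditions on the first output spike after the input."""
    rng = np.random.default_rng() if rng is None else rng
    v, t = v0, 0.0
    injected = False
    while t < t_max:
        v += mu * dt + np.sqrt(D * dt) * rng.standard_normal()
        if not injected and t >= t_in:
            v += w            # instantaneous synaptic voltage injection
            injected = True
        if v >= theta and t > t_in:
            return t - t_in   # the hitting time T
        t += dt
    return None
```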
The modelling details above are all approximations, but they provide us with a simple description of the neural dynamics:

$dv(t)=w\delta(t-t_{\text{in}})dt+\sqrt{D}dX(t)\,,$ (14)

where $\delta(\cdot)$ is the Dirac-delta function and $X(t)$ is a Wiener process with drift $\mu$ and variance scaled by $D$.

### A.3 The hitting time of a non-leaky neuron

We can now attempt to determine the "hitting time" of this system, i.e., the time $T$ at which it makes contact with our neuron membrane voltage threshold. The hitting-time density for a Wiener process with drift (by which we are approximating our non-leaky neuron) can be calculated as

$f(T\mid\Delta)=\frac{\Delta}{\sqrt{2D\pi T^{3}}}\exp\left(-\frac{(\Delta-\mu T)^{2}}{2DT}\right)\,,$ (15)

where $\Delta=\theta-v_{0}$ is the membrane voltage distance to threshold (where $v_{0}=v(t_{\text{in}})$), $T=t_{\text{out}}-t_{\text{in}}$ is defined as above, $\mu$ is the drift of our Wiener process, and $D$ is the variance of our Wiener process. In our neuron model, $\Delta$ corresponds to the difference between some initial membrane voltage $v_{0}$ and the threshold $\theta$, whereas $\mu$ corresponds to the average input to the output neuron from all input synapses in volts.

This description assumes that the membrane voltage starts at some value $v_{0}$ and is under constant drift. However, we instead wish to assume that at the initial time, $t_{0}=t_{\text{in}}$, our input neuron fired and added some unknown voltage $w$ to the membrane voltage. Furthermore, rather than computing a probability distribution over the possible times at which the output neuron might spike, we instead know the next spike time of the output neuron, $t_{\textrm{out}}$, and wish to use it to infer the weight, $w$. We can therefore assume that, for a given pair of input and output spikes, we have a fixed hitting time, $T$, as described above. Furthermore, under our synapse description for the non-leaky neuron (where synaptic inputs cause an instantaneous change in the output neuron membrane voltage of size proportional to the synaptic weight), our initial membrane voltage, $v_{0}$, can be represented as the membrane voltage just prior to the input spike, plus the synaptic weight. That is, we take the limit of $v_{0}$ from below, i.e., $v_{0}=\lim_{t\to t_{0}^{-}}v(t)+w$. This allows us to augment our first-passage density in terms of $w$ such that

$f(T\mid w,\Delta)=\frac{(\Delta-w)}{\sqrt{2D\pi T^{3}}}\exp\left(-\frac{(\Delta-w-\mu T)^{2}}{2DT}\right)\,,$ (16)

where we now define $\Delta=\theta-\lim_{t\to t_{0}^{-}}v(t)$. With this formulation of the hitting-time density, we can compute an estimate of the weight $w$ given a particular set of input and output neuron spike times. Thereafter, we can update our estimate of the synaptic weight of interest through Eq. (12).
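Eq. (16) is straightforward to evaluate numerically. The following sketch (illustrative parameter values, not from the paper) checks that the numerical maximiser of this likelihood agrees with the closed-form maximum derived next, in Eq. (17):

```python
import numpy as np

def hitting_time_density(T, w, delta, mu=1.0, D=0.2):
    """First-passage likelihood f(T | w, Delta) of Eq. (16): a Wiener
    process with drift mu and diffusion D, started a distance
    (delta - w) below threshold once the input spike injects w."""
    gap = delta - w
    return gap / np.sqrt(2 * np.pi * D * T**3) * \
        np.exp(-(gap - mu * T)**2 / (2 * D * T))

T, mu, D, delta = 0.4, 1.0, 0.2, 1.0
ws = np.linspace(-0.5, 0.9, 2001)
w_numeric = ws[np.argmax(hitting_time_density(T, ws, delta, mu, D))]
w_closed = delta - (mu * T + np.sqrt((mu * T)**2 + 4 * D * T)) / 2
print(w_numeric, w_closed)   # the two maximisers should agree closely
```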
To make our inference of $w$ tractable, we take a Laplace approximation of Eq. (16). This produces a Gaussian with mean weight

$\hat{w}=\Delta-\frac{\mu T+\sqrt{(\mu T)^{2}+4DT}}{2}\,,$ (17)

calculated as the maximum of our likelihood $f(T\mid w,\Delta)$, and a variance

$\hat{\sigma}^{2}=1/\left((\Delta-\hat{w})^{-2}+(DT)^{-1}\right)\,.$ (18)

Since we have a Gaussian distribution for our likelihood, we can take a Gaussian conjugate prior with mean $w_{0}$ and variance $\sigma^{2}_{0}$ and obtain a closed-form solution for our posterior weight, given a single input-output spike pair, as

${w}_{p}=\frac{1}{\sigma_{0}^{-2}+\hat{\sigma}^{-2}}\left(\frac{w_{0}}{\sigma_{0}^{2}}+\frac{\hat{w}}{\hat{\sigma}^{2}}\right)\,.$ (19)

Similarly, we can compute the posterior variance as

${\sigma}_{p}^{2}=\left(\sigma_{0}^{-2}+\hat{\sigma}^{-2}\right)^{-1}\,.$ (20)

### A.4 Weight estimation under negligible drift

Let us assume that the diffusion term, $D$, is sufficiently small compared to the drift $\mu$ (such that $\mu\gg D$). This allows us to ignore the diffusion term in the numerator of Eq. (17). Having assumed this small diffusion scale, we can then describe the maximum-likelihood estimate of the weight as

$\hat{w}\approx\Delta-\mu T\,.$ (21)

Furthermore, recall that $\Delta$ is the distance to threshold when the input neuron spikes, $\Delta=\theta-v(t_{\textrm{in}})$. By dividing this distance, $\Delta$, by the drift, $\mu$, we can calculate the expected time of the output spike under drift alone, $\hat{T}$, such that

$\frac{\Delta}{\mu}=\hat{T}\implies\Delta=\mu\hat{T}\,.$ (22)

Given these assumptions, we can approximate Eq. (17) as

$\hat{w}\approx\mu\hat{T}-\mu T=\mu(\hat{T}-T)\,.$ (23)

This formulation can be understood well if we consider a non-leaky neuron under the effect of drift alone (without any stochastic input) and a single input neuron providing instantaneous voltage injections. In such a case, with knowledge of the initial membrane voltage and drift of the output neuron, we have a deterministic system which will spike at a specific time, $\hat{T}$. If we perturb this system with a spike from an input neuron (which causes a jump in the membrane voltage), we can decode the synaptic weight by simply measuring the induced shift in the output neuron's spike time. This induced change in the output spike time is linearly proportional to the synaptic weight.
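Taken together, Eqs. (17)-(20) amount to a closed-form Gaussian update per observed spike pair. A minimal sketch follows (the drift and diffusion values, and the example observations, are illustrative assumptions):

```python
import numpy as np

def bayesian_weight_update(T, delta, w0, var0, mu=1.0, D=0.2):
    """One conjugate-Gaussian weight update from a single input-output
    spike pair, following Eqs. (17)-(20). Returns (posterior mean,
    posterior variance)."""
    # Laplace approximation of the likelihood: Eqs. (17) and (18)
    w_hat = delta - (mu * T + np.sqrt((mu * T)**2 + 4 * D * T)) / 2
    var_hat = 1.0 / ((delta - w_hat)**-2 + 1.0 / (D * T))
    # Conjugate Gaussian posterior: Eqs. (19) and (20)
    var_p = 1.0 / (1.0 / var0 + 1.0 / var_hat)
    w_p = var_p * (w0 / var0 + w_hat / var_hat)
    return w_p, var_p

# Example: absorb three (T, delta) observations sequentially, each
# posterior serving as the prior for the next update.
mean, var = 0.0, 1.0  # prior
for T_obs, delta_obs in [(0.35, 1.0), (0.42, 0.9), (0.30, 1.1)]:
    mean, var = bayesian_weight_update(T_obs, delta_obs, mean, var)
print(mean, var)
```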
## Appendix B Details on baseline methods

The STDWI method is compared to existing methods for synaptic weight inference. We provide more details on these methods below.

### B.1 The Akrout method

In our simulations of LIF neurons, we compare against the Akrout method [15]. This rate-based method makes use of an inference phase in which neurons are stimulated (with mean zero) and the levels of activity of input and output neurons are then correlated to form a weight estimate. This approach was shown to be highly successful for weight inference, and thereby for training, of rate-based neural network models. However, since we simulate spiking neurons, which cannot have a negative firing rate, we instead demean the neuron firing rates and randomly stimulate the input neurons (post-synaptic from the perspective of the backward synapse). In particular, we use an update rule of the form

$\Delta w_{ji}=\eta(r_{i}-\langle r_{i}\rangle)(r_{j}-\langle r_{j}\rangle)-\eta\lambda w_{ji}\,,$ (24)

where $\Delta w_{ji}$ is the update to the backward synaptic weight, from a neuron indexed $j$ to a neuron indexed $i$, which is attempting to estimate the weight of the forward synaptic connection, $w_{ij}$. $r_{i}$ and $r_{j}$ denote the firing rates of the $i$th and $j$th neurons, and $\langle\cdot\rangle$ indicates an average of these over time. Parameters $\eta$ and $\lambda$ are the learning rate and the weight decay, respectively. The learning rate is fixed at $\eta=0.0001$, and the weight decay is determined by grid search (see below). The firing rates $r_{i}$ and $r_{j}$ are calculated by computing the firing rates within the non-overlapping 100 ms stimulation periods of the network. These stimulation periods are then grouped into batches (of size again determined by grid search) for calculation of the mean firing rates for each batch ($\langle r_{j}\rangle$ and $\langle r_{i}\rangle$, respectively), according to the weight-mirror gradient descent method described in [15].

### B.2 Regression discontinuity design

We also compare against the regression discontinuity design (RDD) method, which was proposed for application in spiking neural networks [16]. It makes use of all times at which a neuron spiked or almost spiked (i.e., its membrane voltage came within some margin of the spiking threshold but never reached it). It thereafter separately fits the almost-spiked and spiked events linearly against the membrane voltage. Notably, for the spiking events, a non-reset version of the membrane voltage is used for the linear fitting. Following the fitting process, the discontinuity of these linear fits at the spiking threshold is used as a measure of the synaptic weight. For full details of the RDD implementation, see [16].

### B.3 Grid-based optimization of free parameters

The methods compared have a number of free parameters that can be optimized. In the case of STDWI, these are the time constants of the fast ($\tau_{f}$) and slow ($\tau_{s}$) traces. In the case of RDD, these are the distance to threshold at which samples are initiated and the window duration of a sample. For the Akrout method, the weight-decay scaling and the batch size are the hyperparameters. These parameters are chosen by a grid search using a single test network's spike trains. The parameters producing the highest average sign accuracy and Pearson correlation between the inferred and true weights are then chosen for the analysis of a further four networks (each with a different random seed for input stimulation and the synaptic weight matrix).

Figure 6: Variation in the performance of the STDWI, RDD, and Akrout methods with changes in the method parameters. The best parameter sets are highlighted with a black box. These were the parameters used to analyse all other seeded networks and produce the main results.

Figure 6 shows the parameter maps for the grid searches carried out to select parameters for Figure 4. The same grid-search parameter sweeps were repeated in order to choose parameters for Figure 5.

## Appendix C Rate-based analysis of the STDWI rule

To appreciate the effect of the STDWI rule, we can consider its approximate rate-based form under the assumption of neuron spikes sampled from random Poisson processes (for a review of the rationale of such methods, see [33]). This produces an update rule based upon the firing rates of the neurons. Note that below, as in Section 2.2, we refer to pre/post-synaptic relative to a 'backward' synapse. In our case, the dependence upon the post-synaptic firing rate has two forms, which correspond to a quickly-adapting exponential average, ${\lambda}_{\textrm{j}}^{\textrm{f}}$, and a slowly-adapting exponential average, ${\lambda}_{\textrm{j}}^{\text{s}}$.
Similarly, there is a dependence upon the pre-synaptic firing rate as a slowly-adapting exponential average, ${\lambda}_{\textrm{i}}^{\text{s}}$. Taking the assumption of Poisson random spiking, we can describe our weight update in a rate-based form as

$\frac{d\hat{w}_{ji}}{dt}=\alpha S_{i}(t)\left(\lambda_{\textrm{i}}^{\textrm{s}}({\lambda}_{\textrm{j}}^{\textrm{f}}-{\lambda}_{\textrm{j}}^{\textrm{s}})-\eta\hat{w}_{ji}\right).$ (25)

We can solve this equation for its fixed point ($\frac{d\hat{w}_{ji}}{dt}=0$), producing an expression for the fixed-point weight as

$\hat{w}_{ji}^{*}=\frac{1}{\eta}{\lambda}_{\textrm{i}}^{\textrm{s}}\left({\lambda}_{\textrm{j}}^{\textrm{f}}-{\lambda}_{\textrm{j}}^{\textrm{s}}\right)$ (26)

when $S_{i}(t)$ is non-zero.

For networks with solely positive firing rates, Akrout et al. [15] proposed correlating the demeaned firing rates of pre- and post-synaptic neurons in order to estimate synaptic weights. If we here interpret the slow firing-rate measure of the input neuron activity as an approximation of its average value, then our method similarly correlates the pre-synaptic firing rate with the demeaned post-synaptic neuron firing rate. Though this rate-based analysis shows similarities to the Akrout method, our spike-timing implementation is unique in that it makes use of asymmetric causal kernels and has a demeaning process which slowly tracks the firing rates of neurons (rather than making use of batches). We attribute our performance gains to these features. Furthermore, given the spike-timing-dependent nature of the rule, weight updates can be computed in an event-driven fashion and with minimal communication between neurons (weight updates requiring communication only upon spike times).

If we compare Eqs. (26) and (23), we can also appreciate the correspondence between the STDWI rule and the Bayesian estimate. The STDWI update, instead of making use of an estimate of the drift, $\mu$, makes use of the pre-synaptic neuron firing rate as a proxy. This is appropriate given the linear relationship between drift and firing rate for a non-leaky neuron. Furthermore, rather than directly comparing the expected and true times to spike, $\hat{T}$ and $T$ respectively, the STDWI rule keeps track of a slow and a fast estimate of the post-synaptic neuron firing rate, through $\lambda_{\textrm{j}}^{\text{s}}$ and $\lambda_{\textrm{j}}^{\text{f}}$ respectively. The subtraction of these firing-rate estimates in Eq. (26) provides a measure with a similar form to the subtraction of expected and true spike times ($\hat{T}-T$). Specifically, an earlier-than-average spike time induces a positive weight estimate, and a later-than-average spike time induces a negative weight estimate.
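As a closing illustration, the bookkeeping implied by Eq. (25) is light: two exponential traces of the post-synaptic train, one of the pre-synaptic train, and an update gated by pre-synaptic spikes. The following is a minimal discrete-time sketch of that rate-based form (trace time constants, gain, decay, and the trace normalization are illustrative assumptions):

```python
def stdwi_rate_estimate(spikes_pre, spikes_post, alpha=0.05, eta=1.0,
                        tau_f=0.05, tau_s=0.5, dt=1e-3):
    """Discrete-time sketch of Eq. (25): a slow trace of the pre-synaptic
    train and fast/slow traces of the post-synaptic train drive the
    weight estimate whenever the pre-synaptic neuron spikes.
    spikes_pre/spikes_post are equal-length binary spike trains."""
    lam_i_s = lam_j_f = lam_j_s = 0.0
    w_hat = 0.0
    for s_i, s_j in zip(spikes_pre, spikes_post):
        # traces decay exponentially and jump on spikes (one common
        # normalization: jump size 1/tau so the trace estimates a rate)
        lam_i_s += -dt * lam_i_s / tau_s + s_i / tau_s
        lam_j_f += -dt * lam_j_f / tau_f + s_j / tau_f
        lam_j_s += -dt * lam_j_s / tau_s + s_j / tau_s
        if s_i:  # update gated by S_i(t), as in Eq. (25)
            w_hat += alpha * (lam_i_s * (lam_j_f - lam_j_s) - eta * w_hat)
    return w_hat
```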
# Airline Crew Pairing Optimization Framework for Large Networks with Multiple Crew Bases and Hub-and-Spoke Subnetworks

Divyam Aggarwal (corresponding author; Email Address: <EMAIL_ADDRESS>; Postal Address: Room No.-231, East Block, MIED, IIT Roorkee, Roorkee, Uttarakhand-247667, India; Phone: +91-8218612326), Department of Mechanical & Industrial Engineering (MIED), Indian Institute of Technology Roorkee, Roorkee, Uttarakhand-247667, India

Dhish Kumar Saxena, <EMAIL_ADDRESS>

Thomas Bäck, <EMAIL_ADDRESS>, Leiden Institute of Advanced Computer Science (LIACS), Leiden University, Niels Bohrweg 1, 2333 CA Leiden, the Netherlands

Michael Emmerich, <EMAIL_ADDRESS>

###### Abstract

Crew Pairing Optimization (CPO) aims at generating a set of flight sequences (crew pairings), covering all flights in an airline's flight schedule, at minimum cost, while satisfying several legality constraints. CPO is critically important for airlines' business viability, considering that the crew operating cost is second only to the fuel cost. It poses an NP-hard combinatorial optimization problem, to tackle which the state-of-the-art relies on relaxing the underlying Integer Programming Problem (IPP) into a Linear Programming Problem (LPP), solving the latter through the Column Generation (CG) technique, and integerizing the resulting LPP solution. However, with the growing scale and complexity of airlines' networks (those with a large number of flights, multiple crew bases and/or multiple hub-and-spoke subnetworks), the efficacy of the conventionally used exact CG-implementations is severely marred, and their utility has become questionable. This paper proposes an Airline Crew Pairing Optimization Framework, $AirCROP$, whose constitutive modules include the Legal Crew Pairing Generator, the Initial Feasible Solution Generator, and an Optimization Engine built on a heuristic-based CG-implementation. $AirCROP$'s novelty lies not just in the design of its constitutive modules but also in how these modules interact. In that, insights into several important questions, on which the literature is otherwise silent, have been shared. These relate to sensitivity analysis of $AirCROP$'s performance, in terms of final solutions' cost quality and run-time, with respect to: sources of variability over multiple runs for a given problem; cost quality of the initial solution and the run-time spent to obtain it; and termination parameters for LPP-solutioning and IPP-solutioning. In addition, the efficacy of $AirCROP$ has been: (a) demonstrated on real-world airline flight networks with an unprecedented conjunct scale-and-complexity, marked by over 4200 flights, 15 crew bases, and billion-plus pairings, and (b) validated by the research consortium's industrial sponsor. It is hoped that with the emergent trend of conjunct scale and complexity of airline networks, this paper shall serve as an important milestone for affiliated research and applications.

###### keywords: Airline Crew Scheduling; Crew Pairing; Combinatorial Optimization; Column Generation; Mathematical Programming; Heuristics

## 1 Introduction

Airline scheduling poses some of the most challenging optimization problems encountered in the entire Operations Research (OR) domain.
For a large-scale airline, the crew operating cost constitutes the second-largest cost component, next to the fuel cost, and even its marginal improvements may translate into annual savings worth millions of dollars. Given the potential for huge cost-savings, Airline Crew Scheduling is recognized as a critical planning activity. It has received unprecedented attention from the researchers of the OR community over the last three decades. Conventionally, it is tackled by solving two problems, namely, the Crew Pairing Optimization Problem (CPOP) and the Crew Assignment Problem, in a sequential manner. The former problem is aimed at generating a set of flight sequences (each called a crew pairing) that covers all flights from an airline's flight schedule, at minimum cost, while satisfying several legality constraints linked to federations' rules, labor laws, airline-specific regulations, etc. These optimally-derived crew pairings are then fed as input to the latter problem, which aims to generate a set of pairing sequences (each sequence being a schedule for an individual crew member), while satisfying the corresponding crew requirements. Being the foremost step of airline crew scheduling, CPOP is the main focus of this paper, and interested readers are referred to Barnhart et al. (2003) for a comprehensive review of airline crew scheduling.

CPOP is an NP-hard combinatorial optimization problem (Garey & Johnson, 1979). (For NP-hard (NP-complete) problems, no polynomial-time algorithms on sequential computers are known to date; however, verification of a solution might be (can be) accomplished efficiently, i.e., in polynomial time.) It is modeled as either a set partitioning problem (SPP), in which each flight is allowed to be covered by only one pairing, or a set covering problem (SCP), in which each flight is allowed to be covered by more than one pairing. In CPOP, a crew pairing has to satisfy hundreds of legality constraints (Section 2.2) to be classified as legal, and it is imperative to generate legal pairings in a time-efficient manner to assist the optimization search. Several legal pairing generation approaches, based on either a flight- or a duty-network, have been proposed in the literature (Aggarwal et al., 2018). Depending upon how the legal pairing generation module is invoked, two CPOP solution-architectures are possible. In the first architecture, all possible legal pairings are enumerated prior to the CPOP-solutioning. However, this is computationally-tractable only for small-scale CPOPs (with fewer than approximately 1,000 flights). Alternatively, legal pairings are generated during each iteration of the CPOP-solutioning, but only for a subset of flights, so that the CPOP solution can be partially improved before triggering the next iteration. Such an architecture mostly suits medium- to large-scale CPOPs (with approximately 1,000 flights or more) involving millions/billions of legal pairings, whose complete enumeration is computationally-intractable. In terms of solution-methodologies, heuristic-based optimization techniques and mathematical programming techniques are commonly employed (Section 2.3). In the former category, Genetic Algorithms (GAs), which are population-based randomized-search heuristics (Goldberg, 2006), are most commonly used. However, they are found to be efficient only for tackling very small-scale CPOPs (Ozdemir & Mohan, 2001). Alternatively, several mathematical programming based approaches do exist to solve CPOPs of varying scales.
CPOP is inherently an Integer Programming Problem (IPP), and some approaches have used standard Integer Programming (IP) techniques to find a best-cost pairing subset from a pre-enumerated pairings' set (Hoffman & Padberg, 1993). However, these approaches have proven effective only on small-scale CPOPs with up to a million pairings. This perhaps explains the prevalence of an altogether different strategy, in which the original CPOP/IPP is relaxed into a Linear Programming Problem (LPP); the LPP is solved iteratively by invoking an LP solver and relying on the Column Generation (CG) technique to generate new pairings as part of the pricing sub-problem; and finally, the resulting LPP solution is integerized using IP techniques and/or some special connection-fixing heuristics. The challenge associated with this strategy is that even though the LPP solver may lead to a near-optimal LPP solution, the scope of finding a good-cost IPP solution is limited to the pairings available in the LPP solution. To counter this challenge, heuristic implementations of the branch-and-price framework (Barnhart et al., 1998), in which CG is utilized during the integerization phase too, have been employed to generate new legal pairings at nodes of the IP-search tree. However, the efficacy of such heuristic implementations depends on a significant number of algorithmic-design choices (say, which branching scheme to adopt, or how many CG-iterations to perform at the nodes). Furthermore, it is noteworthy that the scale and complexity of flight networks have grown alarmingly over the past decades. As a result, an inestimably large number of new pairings are possible under the pricing sub-problem, given which most existing solution methodologies are rendered computationally-inefficient. Recognition of such challenges has paved the way towards domain-knowledge-driven CG strategies that generate a manageable, yet crucial, part of the overall pairings' space under the pricing sub-problem (Zeren & Özkol, 2016). Though rich in promise, the efficacy of this approach is yet to be explored vis-à-vis the emergent large-scale and complex flight networks characterized by multiple crew bases and/or multiple hub-and-spoke subnetworks, where billions of legal pairings are possible.

In an endeavor to address airline networks with conjunct scale and complexity, this paper proposes an Airline Crew Pairing Optimization Framework ($AirCROP$) based on domain-knowledge-driven CG strategies, and:

* • presents not just the design of its constitutive modules (including the Legal Crew Pairing Generator, Initial Feasible Solution Generator, and Optimization Engine powered by CG-driven LPP-solutioning and IPP-solutioning), but also how these modules interact;
* • discusses how sensitive its performance is to: sources of variability over multiple runs for a given problem; cost quality of the initial solution and the run-time spent to obtain it; and termination parameters for LPP-solutioning and IPP-solutioning. Such an investigation promises important insights for researchers and practitioners on critical issues which are otherwise not discussed in the existing literature;
* • presents empirical results for a real-world, large-scale (over 4200 flights), complex flight network (over 15 crew bases and multiple hub-and-spoke subnetworks) of a US-based airline, the data for which has been provided by the research consortium's industrial partner.

The outline of the remaining paper is as follows.
Section 2 discusses the underlying concepts, related work, and problem formulation; Section 3 details the proposed $AirCROP$; Section 4 presents the results of the computational experiments along with the corresponding observations; and Section 5 concludes the paper and briefly describes potential future directions.

## 2 Crew Pairing Optimization: Preliminaries, Related Work and Problem Formulation

This section first describes the preliminaries, including the associated terminology, pairings' legality constraints, and pairings' costing criterion. Subsequently, the related work is presented, in which the existing CPOP solution approaches are discussed. Lastly, the airline CPOP formulation is presented.

### 2.1 Associated Terminology

In airline crew operations, each crew member is assigned a fixed (home) airport, called a crew base. A crew pairing (or a pairing) is a flight sequence operated by a crew, that begins and ends at the same crew base, and satisfies the given pairing legality constraints (detailed in Section 2.2). An example of a crew pairing with the Dallas (DAL) airport as the crew base is illustrated in Figure 1. In a crew pairing, the legal sequence of flights operated by a crew in a single working day (not necessarily equivalent to a calendar day) is called a crew duty or a duty. A sit-time or a connection-time is a small rest-period, provided between any two consecutive flights within a duty, facilitating operational requirements such as aircraft changes by the crew, the turn-around operation for the aircraft, etc. An overnight-rest is a longer rest-period, provided between any two consecutive duties within a pairing. Moreover, two short periods, provided at the beginning and end of any duty within a pairing, are called the briefing and de-briefing times, respectively. The total time elapsed in a crew pairing, i.e., the time for which a crew is away from its crew base, is called the time away from base (TAFB). Sometimes, a crew is required to be transported to an airport to fly their next flight. For this, the crew travels as passengers on another flight, flown by another crew, to arrive at the required airport. Such a flight is called a deadhead flight, or a deadhead, for the crew traveling as passengers. It is desired by an airline to minimize the number of deadheads (ideally to zero), as deadheading hurts the airline's profit in two ways. Firstly, the airline suffers a loss of revenue on the passenger seat occupied by the deadheading crew, and secondly, the airline has to pay the hourly wages to the deadheading crew even when they are not operating the flight.

Figure 1: An example of a crew pairing starting from the Dallas (DAL) crew base

### 2.2 Crew Pairing: Legality Constraints and Costing Criterion

To govern the safety of crew members, airline federations, such as the European Aviation Safety Agency, the Federal Aviation Administration, and others, have laid down several rules and regulations, which, in addition to airline-specific regulations, labor laws, etc., are required to be satisfied by a pairing for it to be "legal". These legality constraints can be broadly categorized as follows:

* • Connection-city constraint ($\mathcal{C}_{connect}$): this constraint requires the arrival airport of a flight (or the last flight of a duty) within a pairing to be the same as the departure airport of its next flight (or the first flight of its next duty).
* • Sit-time ($\mathcal{C}_{sit}$) and overnight-rest ($\mathcal{C}_{night}$) constraints: these constraints impose the respective maximum and minimum limits on the duration of sit-times and overnight-rests, where these limits are governed by airlines' and federations' regulations.
* • Duty constraints ($\mathcal{C}_{duty}$): these constraints govern the regulations linked to the crew duties. For instance, they impose maximum limits on the number of flights allowed in a duty of a pairing; the duty elapsed-time and the corresponding flying-time; the number of duties allowed in a pairing, etc.
* • Start- and end-city constraint ($\mathcal{C}_{base}$): this constraint requires the beginning airport (departure airport of the first flight) and ending airport (arrival airport of the last flight) of a pairing to be the same crew base.
* • Other constraints ($\mathcal{C}_{other}$): airlines formulate some specific constraints, according to their operational requirements, so as to maximize their crew utilization. For example, a pairing may be barred from involving overnight-rests at airports that belong to the same city as the crew base from which the pairing started, etc.

Considering the multiplicity of the above constraints, it is critical to develop a time-efficient legal crew pairing generation approach, enabling prompt availability of legal pairings whenever they are required during the optimization.

In general, a pairing's cost can be split into the flying cost and the non-flying (variable) cost. The flying cost is the cost incurred in actually flying all the given flights, and is computed on an hourly basis. The variable cost is the cost incurred during the non-flying hours of the pairing, and is made up of two sub-components, namely, the hard cost and the soft cost. The hard cost involves the pairing's hotel cost, meal cost, and excess pay (the cost associated with the difference between the guaranteed hours of pay and the actual flying hours). Here, the pairing's hotel cost is the lodging cost incurred during its overnight-rests, and its meal cost is computed as a fraction of its TAFB. The soft cost is the undesirable cost associated with the number of aircraft changes (during flight-connections) in the pairing, etc.
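To make this cost structure concrete, the following is a minimal costing sketch; all rates (the hourly flying pay, hotel rate per overnight-rest, meal rate as a fraction of TAFB, minimum pay guarantee, and aircraft-change penalty) are illustrative assumptions, not any airline's actual values:

```python
def pairing_cost(flying_hours, tafb_hours, num_overnight_rests,
                 num_aircraft_changes, fly_rate=100.0, hotel_rate=120.0,
                 meal_rate=3.5, min_guarantee_hours=5.0,
                 aircraft_change_penalty=50.0):
    """Illustrative pairing cost: flying cost, plus hard cost (hotel,
    meal, excess pay), plus soft cost (aircraft-change penalty)."""
    flying_cost = fly_rate * flying_hours
    hotel_cost = hotel_rate * num_overnight_rests        # overnight lodging
    meal_cost = meal_rate * tafb_hours                   # fraction of TAFB
    excess_pay = fly_rate * max(0.0, min_guarantee_hours - flying_hours)
    soft_cost = aircraft_change_penalty * num_aircraft_changes
    return flying_cost + hotel_cost + meal_cost + excess_pay + soft_cost
```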
### 2.3 Related Work

As mentioned in Section 1, the existing CPOP solution approaches are based on either heuristic or mathematical programming techniques. Among the heuristic-based approaches, GA is the most widely adopted technique, and Beasley & Chu (1996) is the first instance to customize a GA (using guided GA-operators) for solving a general class of SCPs. In that, the authors validated their proposed approach on small-scale synthetic test cases (with over 1,000 rows and just 10,000 columns). The important details of the GA-based CPOP solution approaches available in the literature are reported in Table 1.

Table 1: Key facts about the GA-based CPOP solution approaches available in the literature

Literature Studies | Modeling | Timetable | Airline Test Cases* | Airlines
---|---|---|---|---
Levine (1996) | Set Partitioning | - | 40R; 823F; 43,749P | -
Ozdemir & Mohan (2001) | Set Covering | Daily | 28R; 380F; 21,308P | Multiple Airlines
Kornilakis & Stamatopoulos (2002) | Set Covering | Monthly | 1R; 2,100F; 11,981P | Olympic Airways
Zeren & Özkol (2012) | Set Covering | Monthly | 1R; 710F; 3,308P | Turkish Airlines
Deveci & Demirel (2018a) | Set Covering | - | 12R; 714F; 43,091P | Turkish Airlines

*R represents the number of real-world test cases considered; F and P represent the maximum number of flights and pairings covered therein.

Notably, the utility of the studies reported in the table has been demonstrated on CPOPs with a reasonably small number of flights, leading to relatively small numbers of pairings. Though CPOPs with 2,100 and 710 flights have been tackled by Kornilakis & Stamatopoulos (2002) and Zeren & Özkol (2012) respectively, only a subset of all possible legal pairings has been considered by them for finding the reported solutions. Zeren & Özkol (2012) proposed a GA with highly-customized operators, which efficiently solved small-scale CPOPs but failed to solve large-scale CPOPs with the same search-efficiency. Furthermore, Aggarwal, Saxena, et al. (2020b) tackled a small-scale CPOP (with 839 flights and multiple hub-and-spoke sub-networks) using a customized GA (with guided operators) as well as mathematical programming techniques. The authors concluded that customized GAs are inefficient in solving complex versions of even small-scale flight networks, compared to a mathematical programming based solution approach.

Several mathematical programming based CPOP solution approaches have been proposed in the literature over the past few decades, and based on the size and characteristics of the flight network being tackled, these approaches can be categorized into one of three general classes. In the first class of approaches, all legal pairings or a subset of good pairings are enumerated prior to the CPOP-solutioning, and the corresponding CPOP/IPP model is solved using standard IP techniques (such as the branch-and-bound algorithm (Land & Doig, 1960)). Gershkoff (1989) proposed an iterative solution approach, which is initialized using a set of artificial pairings (each covering a single flight at a high pseudo-cost). In that, each iteration involves: selection of very few pairings (5 to 10); enumeration of all legal pairings using the flights covered in the selected pairings; optimization of the resulting SPP to find the optimal pairings; and lastly, replacement of the originally selected pairings with the optimal pairings, only if the latter offer a better cost. The search-efficiency of such an approach is highly dependent on the sub-problem size (it handled up to 100 flights and 5,000 pairings), as the length and breadth of the branching tree increase drastically with an increase in sub-problem size. Hoffman & Padberg (1993) proposed an alternative approach to tackle SPPs with up to 825 flights and 1.05 million pairings, in which all possible pairings are enumerated a priori, and the resulting SPP is solved to optimality using a branch-and-cut algorithm. (The branch-and-cut algorithm was first proposed by Padberg & Rinaldi (1991) to solve Mixed Integer Programs (MIPs), by integrating the standard branch-and-bound and cutting-plane algorithms.
For comprehensive details of MIP solvers, interested readers are referred to Lodi (2009), Linderoth & Lodi (2011), and Achterberg & Wunderling (2013).) Such approaches are efficient only in tackling small-scale CPOPs, and that too with up to a million pairings. However, even small-scale CPOPs may involve a large number of pairings (an instance reported in Vance et al. (1997) had 250 flights and over five million pairings), rendering it computationally-intractable to use such approaches.

The second class of approaches relies on relaxing the integer constraints in the original CPOP/IPP to form an LPP, which is then solved iteratively by invoking an LP solver and generating new pairings using CG, after which the resulting LPP solution is integerized. In any iteration of the LPP-solutioning (referred to as an LPP iteration), an LP solver (based on either a simplex method or an interior-point method) is invoked on the input pairing set to find the LPP solution and its corresponding dual information (the shadow price corresponding to each flight-coverage constraint), which are then utilized to generate new pairings, as part of the pricing sub-problem, promising corresponding cost-improvements. (The class of interior-point methods was first introduced by Karmarkar (1984). In that, a polynomial-time algorithm, called Karmarkar's algorithm, was proposed, which, in contrast to the simplex method, searches for the best solution by traversing the interior of the feasible region of the search space.) For the first LPP iteration, any set of pairings covering all the flights becomes the input to the LP solver, and for any subsequent LPP iteration, the current LPP solution and the set of new pairings (from the pricing sub-problem) constitute the new input. For more details on how new pairings are generated under the pricing sub-problem in the CG technique, interested readers are referred to Vance et al. (1997) and Lübbecke & Desrosiers (2005). As cited in Zeren & Özkol (2016), the CG technique has several limitations, of which the prominent ones are: the heading-in effect (poor dual information in the initial LPP iterations leads to the generation of irrelevant columns), the bang-bang effect (dual variables oscillate from one extreme point to another, leading to poor or slower convergence), and the tailing-off effect (the cost-improvements in later LPP iterations taper off). While different stabilization techniques are available for CG in the literature (Du Merle et al., 1999; Lübbecke, 2010), the use of interior-point methods is gaining prominence. Anbil et al. (1991) presented the advancements at American Airlines, and enhanced the approach proposed by Gershkoff (1989) (discussed above) by leveraging the knowledge of dual variables to screen-out/price-out the pairings from the enumerated set at each iteration, enabling it to solve larger sub-problems (up to 25 flights and 100,000 pairings). As an outcome of a collaboration between IBM and American Airlines, Anbil et al. (1992) proposed an iterative global solution approach (though falling short of global optimization) in which an exhaustive set of pairings ($\approx$5.5 million) is enumerated a priori. Several thousands of these pairings are used to initialize the iterative procedure, and in each of its iterations, a sub-problem is solved to obtain optimal dual variables, which are then used to price-out all 5.5 million pairings to select a sufficiently-sized set of good pairings ($\approx$5,000 pairings). For integerization of the LPP solution, the literature points to two prominent strategies.
The first strategy is based on utilizing either a branch-and-bound or a branch-and-cut algorithm. The other strategy utilizes some special "connection-fixing" heuristics, either solely for integerization (Anbil et al., 1992; Marsten, 1994), or during the iterations of the LPP-solutioning (Zeren & Özkol, 2016), to boost the performance of the subsequent MIP solver (in some cases, an integer solution may even be obtained without using the MIP solver). These heuristics eliminate some irrelevant pairings by exploiting the knowledge of their linear variables and fixing some specific flight-connections. The limitation of this class of approaches is that even though a good IPP solution to the original CPOP may exist and the LPP-solutioning leads to a near-optimal LPP solution, the pairings available in the latter may not fit well together to constitute a good-cost IPP solution.

The third class of approaches shares a solution-architecture similar to that of the preceding class, but differs in terms of the integerization of the LPP solution. In that, a heuristic branch-and-price framework is adopted, wherein CG is utilized during the integerization phase too, to generate new legal pairings at nodes of the MIP-search tree. (The branch-and-price algorithm was originally proposed by Barnhart et al. (1998) as an exact algorithm to solve the then-known large-scale IPPs, and has been utilized to solve a variety of combinatorial optimization problems in transportation, as in Desrosiers et al. (1984), Desrochers & Soumis (1989), and Desrochers et al. (1992).) Desrosiers et al. (1991) is the first instance that solved CPOP using a branch-and-price framework. However, given the inestimable number of legal pairings possible for even medium-scale CPOPs, numerous branch-and-price based heuristic approaches have been proposed over the last three decades (Desaulniers et al., 1997; Vance et al., 1997; Anbil et al., 1998; Desaulniers & Soumis, 2010). Notably, the development of these approaches, being heuristic in nature, requires a significant number of algorithmic-design choices to be made empirically, which may vary with the characteristics of the flight networks being solved. Such decisions include: which branching scheme to employ (branching on linear variables, branching on flight-connections, or others); whether CG should be performed at each node of the MIP-search tree; how many CG iterations to perform each time; etc. Furthermore, the commercial LP and MIP solvers are not very open to modification, making it difficult for new researchers to implement a computationally- and time-efficient branch-and-price framework from scratch. For further details of the existing CPOP solution approaches, interested readers are referred to the recent survey articles by Kasirzadeh et al. (2017) and Deveci & Demirel (2018b).

In addition to the above classification of solution approaches, the literature differs on the notion of how the pricing sub-problem is modeled and solved to generate new legal pairings during the LPP iterations. However, the focus of this paper is not on the solution to the pricing sub-problem step, but on the interactions between the different modules of a CG-based CPOP solution approach. Hence, for details on the existing work related to the pricing sub-problem step, interested readers are referred to Vance et al. (1997) and Aggarwal, Saxena, et al. (2020a).

### 2.4 Integer Programming Problem Formulation

As mentioned earlier, a CPOP is intrinsically an IPP, modeled either as an SCP or an SPP.
Notably, the SCP formulation provides higher flexibility during its solutioning compared to the SPP formulation, by accommodating deadhead flights in the model, possibly resulting in faster convergence (Gustafsson, 1999). For a given flight set $\mathcal{F}$ (including $F$ flights) that could be covered in numerous ways by a set of legal pairings $\mathcal{P}$ (including $P$ pairings), the set covering problem aims to find a subset of pairings ($\in\mathcal{P}$), say $\mathcal{P}_{IP}^{*}$, which not only covers each flight ($\in\mathcal{F}$) at least once, but does so at a cost lower than any alternative subset of pairings in $\mathcal{P}$. In that, while finding $\mathcal{P}_{IP}^{*}$ ($\subseteq\mathcal{P}$), each pairing $p_{j}\in\mathcal{P}$ corresponds to a binary variable $x_{j}$, which represents whether the pairing $p_{j}$ is included in $\mathcal{P}_{IP}^{*}$ (marked by $x_{j}=1$) or not ($x_{j}=0$). Here, $p_{j}$ is an $F$-dimensional vector, each element of which, say $a_{ij}$, represents whether the flight $f_{i}$ is covered by pairing $p_{j}$ (marked by $a_{ij}=1$) or not ($a_{ij}=0$). In this background, the IPP formulation, as used in this paper, is as follows.

$\text{Minimize}~Z_{IP}=\sum_{j=1}^{P}c_{j}x_{j}+\psi_{D}\cdot\left(\sum_{i=1}^{F}\left(\sum_{j=1}^{P}a_{ij}x_{j}-1\right)\right),$ (1)

$\text{subject to}\quad\sum_{j=1}^{P}a_{ij}x_{j}\geq 1,\quad\forall i\in\{1,2,...,F\}$ (2)

$\qquad\qquad\quad x_{j}\in\mathbb{Z}=\{0,1\},\quad\forall j\in\{1,2,...,P\}$ (3)

where:
$c_{j}$: the cost of a legal pairing $p_{j}$;
$\psi_{D}$: an airline-defined penalty cost against each deadhead in the solution;
$a_{ij}=1$ if flight $f_{i}$ is covered in pairing $p_{j}$, else $0$;
$x_{j}=1$ if pairing $p_{j}$ contributes to the minimum $Z_{IP}$, else $0$.

In the objective function (Equation 1), the first component gives the sum of the individual costs of the pairings selected in the solution, while the other component gives the penalty cost for the deadheads incurred in the solution (note that $(\sum_{j=1}^{P}a_{ij}x_{j}-1)$ gives the number of deadheads corresponding to the flight $f_{i}$). Notably, in the above formulation, it is assumed that the set of all possible legal pairings, namely $\mathcal{P}$, is available a priori, and the task is to determine $\mathcal{P}_{IP}^{*}$. However, the generation of $\mathcal{P}$ a priori is computationally-intractable for large-scale CPOPs, as mentioned in Section 2.3. Hence, the solution to the CPOP/IPP is pursued in conjunction with the corresponding LPP (formulation deferred till Section 3.3.1), assisted by the CG technique.
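For illustration, the following sketch formulates Eqs. (1)-(3) for a made-up toy instance using the open-source PuLP modeler (chosen here purely for convenience; the flights, pairings, costs, and $\psi_{D}$ below are all invented values, not from the paper):

```python
import pulp

# Toy instance: 4 flights, 5 candidate pairings (each a list of flights).
flights = ["f1", "f2", "f3", "f4"]
pairings = {"p1": ["f1", "f2"], "p2": ["f2", "f3"], "p3": ["f3", "f4"],
            "p4": ["f1", "f4"], "p5": ["f1", "f2", "f3"]}
cost = {"p1": 10, "p2": 8, "p3": 9, "p4": 11, "p5": 15}
psi_d = 5  # penalty per deadhead

prob = pulp.LpProblem("CPOP_SCP", pulp.LpMinimize)
x = {j: pulp.LpVariable(j, cat="Binary") for j in pairings}  # Eq. (3)

# Number of times each flight is covered by the selected pairings.
coverage = {f: pulp.lpSum(x[j] for j, fl in pairings.items() if f in fl)
            for f in flights}

# Objective, Eq. (1): pairing costs plus deadhead penalties.
prob += (pulp.lpSum(cost[j] * x[j] for j in pairings)
         + psi_d * pulp.lpSum(coverage[f] - 1 for f in flights))

# Flight-coverage constraints, Eq. (2): each flight covered at least once.
for f in flights:
    prob += coverage[f] >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in pairings if x[j].value() == 1])  # e.g., ['p1', 'p3']
```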
## 3 Proposed Airline Crew Pairing Optimization Framework (AirCROP)

This section presents the constitutive modules of the proposed optimization framework, $AirCROP$, their working, and their interactions. As per the schematic in Figure 2, $AirCROP$ accepts a given set of flights $\mathcal{F}$, along with the pairings' legality constraints and costing criterion, as input, and outputs a minimal-cost set of legal pairings $\mathcal{P}_{IP}^{\star}$ that covers all given flights. This transition from the input to the output is enabled by the constitutive modules, namely, the Legal Crew Pairing Generator; the Initial Feasible Solution Generator; and an Optimization Engine, in turn enabled by CG-driven LPP-solutioning and IPP-solutioning submodules and their intermittent interactions. While parts of these modules have been presented elsewhere (Aggarwal et al., 2018; Aggarwal, Saxena, et al., 2020a) in isolation, they are detailed below to give a holistic view of the experimental results presented later.

Figure 2: A schematic of $AirCROP$ illustrating the interactions between its constitutive modules: the Legal Crew Pairing Generator, the Initial Feasible Solution Generator, and the Optimization Engine (CG-driven LPP-solutioning interacting with IPP-solutioning). The CG heuristic in LPP-solutioning generates a set of fresh pairings $\mathcal{P}_{CG}^{t}$ at any LPP iteration $t$ using the following CG strategies: Deadhead reduction ($CGD$, generating $\mathcal{P}_{CGD}^{t}$), Crew Utilization enhancement ($CGU$, generating $\mathcal{P}_{CGU}^{t}$), Archiving ($CGA$, generating $\mathcal{P}_{CGA}^{t}$), and Random exploration ($CGR$, generating $\mathcal{P}_{CGR}^{t}$). The interactions between LPP-solutioning and IPP-solutioning are tracked by the counter $T$.

### 3.1 Legal Crew Pairing Generator

This module enables the generation of legal pairings in a time-efficient manner, so that they can be fed in real-time into the other modules - the Initial Feasible Solution Generator and the Optimization Engine. For time-efficiency, it employs a parallel, duty-network-based legal pairing generation approach, whose distinctive contributions are two-fold. Firstly, a crew-base-centric parallel architecture is adopted, considering that several duty- and pairing-constitutive constraints vary with crew bases. In that, for an input flight set, the legal pairing generation process is decomposed into independent sub-processes (one for each crew base), running in parallel on idle cores of the central processing unit (CPU). This leads to a significant reduction in the pairing generation time ($\approx$10-fold for a CPOP with 15 crew bases, as demonstrated in Aggarwal et al. (2018)). Secondly, the set of all possible legal duties and the corresponding duty overnight-connection graph, with respect to each crew base, are enumerated and stored prior to the CPOP-solutioning. In a duty overnight-connection graph, a node represents a legal duty, and an edge between any two nodes represents a legal overnight-rest connection between the respective duties. Such preprocessing ensures that all the connection-city, sit-time, duty, and overnight-rest constraints get naturally satisfied, eliminating the need for their re-evaluation during the generation of legal pairings, and leading to a significant reduction in the legal pairing generation time. The implementation of this module, formalized in Algorithms 1 & 2, is elaborated below.
Algorithm 1: Procedure for enumeration of legal duties and duty overnight-connection graphs

Input: $\mathcal{F}$; $\mathcal{B}$; and constraints: $\mathcal{C}_{connect},~\mathcal{C}_{sit},~\mathcal{C}_{duty}~\&~\mathcal{C}_{night}$
Output: $\mathcal{D}_{b}$ & $\mathcal{G}^{d}_{b}~\forall b\in\mathcal{B}$
1:  $\mathcal{G}^{f}\leftarrow$ Generate the flight-connection graph by evaluating $\mathcal{C}_{connect}~\&~\mathcal{C}_{sit}$ between each pair of flights $\in\mathcal{F}$  ▷ $\mathcal{G}^{f}\equiv\left(\mathcal{F},\mathcal{E}^{f}\right)$
2:  for each crew base $b\in\mathcal{B}$ in parallel do
3:    for each flight $f\in\mathcal{F}$ do
4:      Push $f$ into an empty $duty$
5:      if the updated flight-sequence in $duty$ satisfies the constraints in $\mathcal{C}_{duty}$ then
6:        Add $duty$ to $\mathcal{D}_{b}$
7:        if $f$ has at least one flight-connection in $\mathcal{G}^{f}$ then
8:          DFS($duty,f,\mathcal{G}^{f},\mathcal{C}_{duty}$), and add the enumerated duties to $\mathcal{D}_{b}$
9:        end if
10:     end if
11:     Pop out $f$ from $duty$
12:   end for
13:   $\mathcal{G}^{d}_{b}\leftarrow$ Generate the duty overnight-connection graph by evaluating $\mathcal{C}_{night}$ between each pair of duties $\in\mathcal{D}_{b}$
14: end for
15: return $\mathcal{D}_{b}$ & $\mathcal{G}^{d}_{b}~\forall b\in\mathcal{B}$

▷ DFS($duty,parent,\mathcal{G}^{f},\mathcal{C}_{duty}$)
16: for each $child$ of $parent$ in $\mathcal{G}^{f}$ do
17:   Push $child$ into $duty$
18:   if the updated flight-sequence in $duty$ satisfies $\mathcal{C}_{duty}$ then
19:     yield $duty$ to $\mathcal{D}_{b}$
20:     if $child$ has at least one connection in $\mathcal{G}^{f}$ then
21:       DFS($duty,child,\mathcal{G}^{f},\mathcal{C}_{duty}$)
22:     end if
23:   end if
24:   Pop out $child$ from $duty$
25: end for

Algorithm 2: Procedure for enumeration of legal pairings from an input flight set $\mathcal{F}_{*}$ or a duty set $\mathcal{D}_{*}$

Input: $\mathcal{F}_{*}\text{ or }\mathcal{D}_{*};~\mathcal{B};~\mathcal{D}_{b}~\&~\mathcal{G}^{d}_{b}~\forall b\in\mathcal{B}$; and constraints: $\mathcal{C}_{base}~\&~\mathcal{C}_{other}$
Output: $\mathcal{P}_{*}$
1:  for each crew base $b\in\mathcal{B}$ in parallel do
2:    Update $\mathcal{D}_{b}~\&~\mathcal{G}^{d}_{b}$ by removing duties $\notin\mathcal{D}_{*}$ if $\mathcal{D}_{*}$ is input, or by removing those duties which cover flights $\notin\mathcal{F}_{*}$ if $\mathcal{F}_{*}$ is input
3:    for each $duty\in\mathcal{D}_{b}$ do
4:      if the departure airport of $duty$ is $b$ then
5:        Push $duty$ into an empty $pairing$
6:        if the updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{other}$ then
7:          if the updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{base}$ then
8:            Add $pairing$ to $\mathcal{P}_{*}$
9:          else if $duty$ has at least one duty overnight-connection in $\mathcal{G}^{d}_{b}$ then
10:           DFS($pairing,duty,\mathcal{G}^{d}_{b},\mathcal{C}_{base}\cup\mathcal{C}_{other}$), and add the enumerated pairings to $\mathcal{P}_{*}$
11:         end if
12:       end if
13:       Pop out $duty$ from $pairing$
14:     end if
15:   end for
16: end for
17: return $\mathcal{P}_{*}$

▷ DFS($pairing,parent,\mathcal{G}^{d}_{b},\mathcal{C}_{base}\cup\mathcal{C}_{other}$)
18: for each $child$ of $parent$ in $\mathcal{G}^{d}_{b}$ do
19:   Push $child$ into $pairing$
20:   if the updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{other}$ then
21:     if the updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{base}$ then
22:       yield $pairing$ to $\mathcal{P}_{*}$
23:     else if $child$ has at least one duty overnight-connection in $\mathcal{G}^{d}_{b}$ then
24:       DFS($pairing,child,\mathcal{G}^{d}_{b},\mathcal{C}_{base}\cup\mathcal{C}_{other}$)
25:     end if
26:   end if
27:   Pop out $child$ from $pairing$
28: end for

For solving any CPOP, the foremost step of $AirCROP$ is to preprocess the entire duty-connection network: the set of legal duties $\mathcal{D}_{b}$ and the duty overnight-connection graph $\mathcal{G}^{d}_{b}\left(\equiv\left(\mathcal{D}_{b},~\mathcal{E}^{d}_{b}\right)\right)$ for each crew base $b$ in the given set of crew bases $\mathcal{B}$, where $\mathcal{E}^{d}_{b}$ is the set of legal overnight-rest connections between duty-pairs $\in\mathcal{D}_{b}$. The procedure for this preprocessing is presented in Algorithm 1. In that, the first step is the generation of a flight-connection graph (denoted by $\mathcal{G}^{f}$) by evaluating the legality of the connection-city ($\mathcal{C}_{connect}$) and sit-time ($\mathcal{C}_{sit}$) constraints between every flight-pair in the given flight schedule $\mathcal{F}$ (line 1). Here, in $\mathcal{G}^{f}~\left(\equiv\left(\mathcal{F},\mathcal{E}^{f}\right)\right)$, $\mathcal{F}$ is the set of nodes (flights) and $\mathcal{E}^{f}$ is the set of edges (legal flight connections). Subsequently, $\mathcal{G}^{f}$ is used for legal duty enumeration, by decomposing the process into independent sub-processes, one for each crew base $b\in\mathcal{B}$, and executing them in parallel (lines 2-12). In each of these sub-processes, the enumeration of legal duties starting from each flight $f\in\mathcal{F}$ is explored. In that:

* • flight $f$ is added to an empty candidate duty stack, given by $duty$ (line 4).
* • the flight-sequence in $duty$ is checked for satisfaction of the duty constraints $\mathcal{C}_{duty}$, and if satisfied, $duty$ is added to the desired legal duty set $\mathcal{D}_{b}$ (lines 5-6). Notably, if $f$ has at least one connection with another flight in $\mathcal{G}^{f}$, and if the duty constraints permit, then more flights could be accommodated in $duty$, leading to the enumeration of other legal duties (lines 7-9).
* • a Depth-first Search (DFS) algorithm (Tarjan, 1972) is adapted, which is called recursively to enumerate legal duties starting from a parent flight node ($parent$), by exploring all its successive paths in $\mathcal{G}^{f}$ in a depth-first manner (lines 16-25). In each recursion, a child flight node ($child$) is pushed into $duty$; the updated flight-sequence is checked for satisfaction of $\mathcal{C}_{duty}$; and if satisfied, $duty$ is yielded to $\mathcal{D}_{b}$, followed by another recursion of DFS() with $child$ as the new $parent$.

In this way, all legal duties starting from flight $f$ are enumerated. Subsequently, $f$ is popped out from $duty$, and duty enumeration using the other flights in $\mathcal{F}$ is explored (lines 3 & 11). The resulting set $\mathcal{D}_{b}$ is then used to generate the duty overnight-connection graph $\mathcal{G}^{d}_{b}$, by evaluating the legality of the connection-city ($\mathcal{C}_{connect}$) and overnight-rest ($\mathcal{C}_{night}$) constraints between every duty-pair $\in\mathcal{D}_{b}$ (line 13). Here, in $\mathcal{G}^{d}_{b}~\left(\equiv\left(\mathcal{D}_{b},\mathcal{E}^{d}_{b}\right)\right)$, $\mathcal{D}_{b}$ is the set of nodes (legal duties), and $\mathcal{E}^{d}_{b}$ is the set of edges (legal overnight-rest connections).
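The recursive enumeration in Algorithm 1 maps naturally onto a generator-based DFS. The following sketch mirrors that logic for a toy flight-connection graph; purely for brevity, the duty-legality check $\mathcal{C}_{duty}$ is reduced here to a maximum number of flights per duty:

```python
def enumerate_duties(flights, conn_graph, max_flights=3):
    """Yield all legal duties (flight sequences) via DFS over the
    flight-connection graph, mirroring Algorithm 1. The legality check
    collapses to a simple length cap for illustration."""
    def dfs(duty, parent):
        for child in conn_graph.get(parent, []):
            duty.append(child)                # push child flight
            if len(duty) <= max_flights:      # stand-in for C_duty
                yield list(duty)              # a new legal duty
                yield from dfs(duty, child)   # try to extend it further
            duty.pop()                        # pop child flight

    for f in flights:                         # duties starting at each flight
        yield [f]
        yield from dfs([f], f)

# Toy example: f1 connects to f2 and f3; f2 connects to f3.
graph = {"f1": ["f2", "f3"], "f2": ["f3"]}
print(list(enumerate_duties(["f1", "f2", "f3"], graph)))
```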
The preprocessed sets of legal duties and the corresponding duty overnight-connection graphs are utilized to enumerate legal pairings for any input flight set (say $\mathcal{F}_{*}$) or a duty set (say $\mathcal{D}_{*}$), when required in real-time in other modules of the $AirCROP$. Its procedure, formalized in Algorithm 2, is elaborated below. For legal pairing enumeration, the same crew base driven parallel architecture is utilized in which the process is decomposed into independent sub-processes, one for each crew base $b\in\mathcal{B}$, running in parallel on idle-cores of the CPU (line 1). In each of these sub-processes, the first step is to update $\mathcal{D}_{b}$ and $\mathcal{G}^{d}_{b}$, by removing duties $\notin\mathcal{D}_{*}$ if $\mathcal{D}_{*}$ is input, or those duties that cover flights $\notin\mathcal{F}_{*}$ if $\mathcal{F}_{*}$ is input (line 2). Subsequently, the enumeration of legal pairings, starting from each duty ($duty$) $\in\mathcal{D}_{b}$, is explored (line 3). In that:

* • the $duty$ is pushed into an empty candidate pairing stack, given by $pairing$, only if the departure airport of $duty$ is the same as the crew base $b$ (lines 4-5).

* • the $pairing$ is checked for satisfaction of pairing constraints $\mathcal{C}_{other}$, and if satisfied, $pairing$ is further checked for satisfaction of end-city constraint $\mathcal{C}_{base}$, which ensures that the arrival airport of the $pairing$’s last duty is the same as the crew base $b$.

* – If $pairing$ satisfies $\mathcal{C}_{base}$, it is classified as legal, and is added to the desired pairing set $\mathcal{P}_{*}$ (lines 7-8).

* – If $pairing$ does not satisfy $\mathcal{C}_{base}$, it is not complete, and more duties are required to be covered in it to complete the legal duty-sequence. This is only possible if $duty$ has at least one overnight-rest connection in $\mathcal{G}^{d}_{b}$. And if it does, the DFS() sub-routine, similar to the one used in legal duty enumeration, is called recursively to enumerate legal pairings, starting from a parent duty node ($parent$), by exploring all its successive paths in $\mathcal{G}^{d}_{b}$ in a depth-first manner (lines 18-28). In each recursion:

* $\circ$ a child duty node ($child$) is pushed into the $pairing$ (line 19).

* $\circ$ the updated duty-sequence in $pairing$ is checked for satisfaction of first $\mathcal{C}_{other}$ and then $\mathcal{C}_{base}$ (lines 20-21).

* $\circ$ if it satisfies both constraints, then $pairing$ is complete (legal), and is yielded to the desired pairing set $\mathcal{P}_{*}$ (line 22).

* $\circ$ if it satisfies $\mathcal{C}_{other}$ but not $\mathcal{C}_{base}$, then another recursion of DFS() with $child$ as new $parent$ is called, only if $child$ has at least one duty overnight-rest connection in $\mathcal{G}^{d}_{b}$ (lines 23-25).

In the above way, all legal pairings, starting from $duty$, are enumerated using the DFS() sub-routine. Subsequently, $duty$ is popped out of $pairing$ (line 13), and the legal pairing enumeration using other duties $\in\mathcal{D}_{b}$ is explored (line 3). Once all the sub-processes are complete, the desired pairing set $\mathcal{P}_{*}$ is returned (line 17).

### 3.2 Initial Feasible Solution Generator

An initial feasible solution (IFS) is any set of pairings, covering all flights in the given flight schedule, which is used to initialize a CPOP solution approach. For large-scale CPOPs, the standalone generation of an IFS is a computationally-challenging task.
This module is designed to generate a reasonably-sized IFS in a time-efficient manner for large and complex flight networks, which is then used to initialize the Optimization Engine of $AirCROP$. For this, it employs a novel Integer Programming based Divide-and-cover Heuristic (IPDCH), which relies on: (a) a divide-and-cover strategy to decompose the input flight schedule into sufficiently-small flight subsets, and (b) integer programming to find a lowest-cost pairing set, covering the maximum possible flights for each of the decomposed flight subsets. The procedure of the proposed IPDCH, formalized in Algorithm 3, is elaborated below.

Input: $\mathcal{F},~{}K,~{}\texttt{Pairing\\_Gen()}$ Output: $\mathcal{P}_{IFS}$ 1 while _all flights $\in\mathcal{F}$ are not covered in $\mathcal{P}_{IFS}$_ do $\mathcal{F}_{K}\leftarrow$ Select $K$ random flights from $\mathcal{F}$ without replacement $\triangleright$ $K<F$ 2 $\mathcal{P}_{K}\leftarrow~{}\texttt{Pairing\\_Gen(}\mathcal{F}_{K}\texttt{)}$ $\mathcal{F}_{K^{\prime}}\leftarrow$ Flights covered in $\mathcal{P}_{K}$ $\triangleright$ $K^{\prime}\leq K$ 3 Add remaining flights $\left(\mathcal{F}_{K}\backslash\mathcal{F}_{K^{\prime}}\right)$ back to $\mathcal{F}_{K}$ 4 Formulate the IPP using flights in $\mathcal{F}_{K^{\prime}}$ and pairings in $\mathcal{P}_{K}$ 5 $\mathcal{P}_{IP}\leftarrow$ Solve the IPP using an MIP solver, and select pairings corresponding to non-zero variables 6 Add pairings from $\mathcal{P}_{IP}$ to $\mathcal{P}_{IFS}$ 7 Replace flights in $\mathcal{F}$ if it becomes empty 8 9 end while 10return $\mathcal{P}_{IFS}$ Algorithm 3 Procedure for IFS generation using the proposed IPDCH

Being an iterative heuristic, IPDCH terminates when all flights in the input set are covered by pairings in the desired IFS, notated as $\mathcal{P}_{IFS}$ (line 1). The input to the heuristic involves the given flight schedule $\mathcal{F}$ (with $F$ number of flights), the pairing generation sub-routine Pairing_Gen() (presented in Section 3.1), and a pre-defined decomposition parameter $K$, which regulates the number of flights to be selected from $\mathcal{F}$ in each IPDCH-iteration. The setting of $K$ largely depends upon the available computational resources, and the characteristics of the input flight dataset (as highlighted in Section 4.3.3). In each IPDCH-iteration, first a flight subset, say $\mathcal{F}_{K}$ $\left(K<F\right)$, is formed by randomly selecting $K$ number of flights from $\mathcal{F}$ without replacement (line 2). Subsequently, $\mathcal{F}_{K}$ is fed as input to the Pairing_Gen() sub-routine to enumerate the set of all possible legal pairings, say $\mathcal{P}_{K}$ (line 3). Notably, not all flights in $\mathcal{F}_{K}$ may get covered by pairings in $\mathcal{P}_{K}$, as random selection of flights does not guarantee legal connections for all selected flights. Let $\mathcal{F}_{K^{\prime}}$ $\left(K^{\prime}\leq K\right)$ be the set of flights covered in $\mathcal{P}_{K}$ (line 4). The remaining flights, given by $\mathcal{F}_{K}\backslash\mathcal{F}_{K^{\prime}}$, are added back to $\mathcal{F}$ (line 5). Subsequently, $\mathcal{F}_{K^{\prime}}$ and $\mathcal{P}_{K}$ are used to formulate the corresponding IPP (line 6), which is then solved using a commercial off-the-shelf MIP solver to find the optimal IPP solution, say $\mathcal{P}_{IP}$, constituted by pairings corresponding to only non-zero variables (line 7). The pairings in $\mathcal{P}_{IP}$ are then added to the desired set $\mathcal{P}_{IFS}$ (line 8); a small illustrative sketch of this per-iteration IP step is given below.
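Since the paper reports Gurobi as its MIP solver, the per-iteration IP of lines 6-7 might be sketched with gurobipy-style calls as follows. This is a minimal, hypothetical rendering: `pairings` (each a set of covered flight ids), `cost` and `flights_covered` stand in for $\mathcal{P}_{K}$, the pairing costs, and $\mathcal{F}_{K^{\prime}}$, respectively, and are not names from the original code.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_ipdch_iteration(flights_covered, pairings, cost):
    """Lowest-cost pairing set covering the flights F_K' of one
    IPDCH iteration (lines 6-7 of Algorithm 3); returns P_IP."""
    m = gp.Model("ipdch_iteration")
    m.Params.OutputFlag = 0                      # silence solver logs
    x = m.addVars(len(pairings), vtype=GRB.BINARY, name="x")
    # Each flight in F_K' must be covered by at least one chosen pairing.
    for f in flights_covered:
        m.addConstr(gp.quicksum(x[j] for j, p in enumerate(pairings)
                                if f in p) >= 1, name=f"cover_{f}")
    m.setObjective(gp.quicksum(cost[j] * x[j]
                               for j in range(len(pairings))), GRB.MINIMIZE)
    m.optimize()
    # Keep only the pairings with non-zero variables, as in line 7.
    return [pairings[j] for j in range(len(pairings)) if x[j].X > 0.5]

# Toy example: three flights, three candidate pairings.
P = [{"f1", "f2"}, {"f2", "f3"}, {"f1", "f2", "f3"}]
print(solve_ipdch_iteration({"f1", "f2", "f3"}, P, [10.0, 8.0, 15.0]))
```

In this toy instance the single pairing covering all three flights (cost 15) beats the two-pairing combination (cost 18), so only it survives into $\mathcal{P}_{IFS}$.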
Lastly, the flight set $\mathcal{F}$ is replenished if it becomes empty (line 9). As soon as $\mathcal{P}_{IFS}$ covers all the required flights, IPDCH is terminated, and $\mathcal{P}_{IFS}$ is passed over to the Optimization Engine for its initialization.

### 3.3 Optimization Engine: Interactions between CG-driven LPP-solutioning and IPP-solutioning

The search for a minimal-cost, full flight-coverage CPOP solution is enabled by the Optimization Engine. It tackles the underlying LPP and IPP through intermittent interactions of two submodules, namely, CG-driven LPP-solutioning and IPP-solutioning, tracked by a counter $T$. These submodules are presented below.

#### 3.3.1 CG-driven LPP-solutioning

As illustrated in Figure 2, this submodule entails several iterations (each referred to as an LPP iteration, tracked by $t$) in each of which: (a) an LP solver is invoked on the input pairing set, leading to the current LPP solution $\mathcal{P}_{LP}^{t}$, (b) the corresponding dual of the LPP is formulated using $\mathcal{P}_{LP}^{t}$, which is then solved to fetch dual variables (given by vector $Y^{t}$), and (c) a fresh set of pairings $\mathcal{P}_{CG}^{t}$, that promises associated cost-improvement, is generated using a domain-knowledge driven CG heuristic. For the first LPP iteration ($t=1$), the input to the LP solver is either $\mathcal{P}_{IFS}$ if $T=1$, or $\mathcal{P}_{IP}^{T-1}$ if $T>1$. For any subsequent LPP iteration ($t>1$), the input comprises the current $\mathcal{P}_{CG}^{t}$ and $\mathcal{P}_{LP}^{t}$. In this background, each of these LPP iterations is implemented in the following three phases (for ease of reference, the notations introduced in these phases are kept independent of the LPP iteration counter $t$; however, these notations are super-scripted by $t$ in the corresponding discussions and pseudocodes with reference to a particular LPP iteration):

* • In the first phase, a primal of the LPP (Equations 4 to 6) is formulated from the input pairing set, and is solved using an interior-point method based commercial off-the-shelf LP solver (Gurobi Optimization, 2019). In the resulting LPP solution, a primal variable $x_{j}$, varying from $0$ to $1$, is assigned to each pairing $p_{j}$ in the input pairing set. These $x_{j}$s together constitute the primal vector, notated as $X~{}\left(=[x_{1}~{}x_{2}~{}x_{3}~{}...~{}x_{P}]^{\mathsf{T}}\right)$. The set of $x_{j}$s with non-zero values ($x_{j}\neq 0$) and the set of corresponding pairings are notated as $X_{LP}$ and $\mathcal{P}_{LP}$, respectively. $\displaystyle\text{Minimize}~{}Z_{LP}^{p}=\sum_{j=1}^{P}c_{j}x_{j}+\psi_{D}\cdot\left(\sum_{i=1}^{F}\left(\sum_{j=1}^{P}a_{ij}x_{j}-1\right)\right)=\sum_{j=1}^{P}\left(c_{j}+\psi_{D}\cdot\sum_{i=1}^{F}a_{ij}\right)x_{j}-F\cdot\psi_{D},$ (4) $\displaystyle\text{subject to}\quad\sum_{j=1}^{P}a_{ij}x_{j}\geq 1,\qquad\quad\forall i\in\\{1,2,...,F\\}$ (5) $\displaystyle\qquad\qquad\quad x_{j}\in\mathbb{R}=[0,1],\qquad\forall j\in\\{1,2,...,P\\}$ (6) It is to be noted that the minimization of $Z_{LP}^{p}$ will always lead to a solution with all primal variables $x_{j}\leq 1$, even without explicitly involving the corresponding constraint– Equation 6 (Vazirani, 2003). Hence, the contribution of each pairing in the LPP solution, given by its $x_{j}$, could be effectively treated as $x_{j}\in\mathbb{R}_{\geq 0}$ instead of Equation 6.

* • In the second phase, dual variables are extracted from the current LPP solution.
For this, the dual of the LPP (Equations 7 to 9) is formulated using the pairing set $\mathcal{P}_{LP}$, and is solved using an interior-point method (Andersen & Andersen, 2000) based non-commercial LP solver (Virtanen et al., 2020), to fetch the optimal dual solution. In that, a dual variable $y_{i}$ represents a shadow price corresponding to an $i^{th}$ flight-coverage constraint in the primal. The optimal dual vector, constituted by all $y_{i}$s in the optimal dual solution, is notated as $Y~{}\left(=[y_{1}~{}y_{2}~{}y_{3}~{}...~{}y_{F}]^{\mathsf{T}}\right)$, whose dimension is equal to $F$. $\displaystyle\text{Maximize}~{}Z_{LP}^{d}=\sum_{i=1}^{F}y_{i}-F\cdot\psi_{D},$ (7) $\displaystyle\text{subject to}\quad\sum_{i=1}^{F}a_{ij}y_{i}\leq\left(c_{j}+\psi_{D}\cdot\sum_{i=1}^{F}a_{ij}\right),~{}~{}~{}~{}\forall j\in\\{1,2,...,P_{LP}\\}$ (8) $\displaystyle\qquad\qquad\qquad\quad~{}~{}y_{i}\in\mathbb{R}\geq 0,\qquad\qquad\qquad~{}~{}~{}~{}\forall i\in\\{1,2,...,F\\}$ (9) $\displaystyle\text{where},\qquad P_{LP}:~{}\text{is the number of pairings in the set}~{}\mathcal{P}_{LP}$ $\displaystyle\qquad\qquad\quad~{}~{}~{}y_{i}:~{}\text{dual variable, corresponding to an $i^{th}$ flight-coverage constraint},$ Notably, in a conventional approach, the optimal $Y$ is directly computed from the optimal basis of the primal solution (obtained in the first phase), using the principles of duality theory, particularly the theorem of complementary slackness (Bertsimas & Tsitsiklis, 1997), without explicitly solving the corresponding dual. However, in the second phase, solving the dual explicitly using the interior-point method (Andersen & Andersen, 2000), in a sense, helps in stabilizing the oscillating behavior of dual variables over the successive LPP iterations (bang-bang effect, as discussed in Section 2.3). Moreover, this interior-point method is available via only a non-commercial LP solver (Virtanen et al., 2020), and to ensure a time-efficient search, the above dual is formulated using the pairings $\in\mathcal{P}_{LP}$, instead of pairings from the large-sized input pairing set.

* • In the last phase, the availability of dual variables from the second phase paves the way for solution to the pricing sub-problem. It aims to generate those (non-basic) legal pairings which, if included as part of the input to the next LPP iteration, promise a better-cost (or at least a similar-cost) LPP solution compared to the current one. Such non-basic pairings are identified using a reduced cost metric, given by $\mu_{j}$ (Equation 10), which if negative (as CPOP is a minimization problem) indicates the potential in the pairing to further reduce the cost of the current LPP solution $Z_{LP}^{p}$, when included in the current basis (Bertsimas & Tsitsiklis, 1997). Moreover, the potential of such a pairing to further reduce the current $Z_{LP}^{p}$ is in proportion to the magnitude of its $\mu_{j}$ value; a minimal computational sketch of this metric is provided after the description of the CG strategies below. $\displaystyle\mu_{j}=c_{j}-\mu d_{j},~{}\text{where,}~{}\mu d_{j}=\sum_{i=1}^{F}\left(a_{ij}\cdot y_{i}\right)~{}=\sum_{f_{i}\in p_{j}}y_{i}~{}~{}(~{}\text{represents the dual cost component of}~{}\mu_{j})$ (10) As mentioned in Section 2.3, the standard CG practices generate a complete pricing network and solve it as a resource-constrained shortest-path optimization problem, to identify only the pairing(s) with negative reduced cost(s). However, generation of a complete pricing network for CPOPs with large-scale and complex flight networks is computationally-intractable.
To overcome this challenge, a domain-knowledge driven CG heuristic (Aggarwal, Saxena et al., 2020) is employed here to generate a set of promising pairings (of pre-defined size, the criterion for which is discussed in Section 4.2). Notably, the merit of this CG heuristic lies in the fact that from within the larger pool of pairings with negative $\mu_{j}$, besides selecting pairings randomly, it also selects pairings in a guided manner. In that, the selection of such pairings is guided by optimal solution features at a set level and an individual pairing level, and re-utilization of the past computational efforts. These optimal solution features are related to the minimization of deadheads and maximization of the crew utilization, respectively. In essence, while the standard CG practices present equal opportunity for any pairing with a negative $\mu_{j}$ to qualify as an input for the next LPP iteration, this CG heuristic, besides ensuring that the pairings have negative $\mu_{j}$, prioritizes some pairings over the others via its two-pronged strategy– exploration of the new pairings’ space and re-utilization of pairings from the past LPP iterations. In that:

* – the exploration of the new pairings’ space is guided by three CG strategies, which are elaborated below.

* $\circ$ Deadhead Reduction strategy ($CGD$): this strategy prioritizes a set of legal pairings that is characterized by low deadheads, a feature which domain knowledge recommends for optimality at a set level. To exploit this optimality feature, $CGD$ generates a new pairing set $\mathcal{P}_{CGD}$, which not only provides an alternative way to cover the flights involved in a subset of the current $\mathcal{P}_{LP}$, but also ensures that some of these flights get covered with zero deadheads. It promises propagation of the zero deadhead feature over successive LPP iterations, as: (a) $\mathcal{P}_{CGD}$ alongside the current $\mathcal{P}_{LP}$ forms a part of the input for the next LPP iteration; (b) $\mathcal{P}_{CGD}$ provides a scope for better coverage (zero deadhead) of some flights, compared to the current $\mathcal{P}_{LP}$; and (c) $\mathcal{P}_{CGD}$ may focus on zero deadhead coverage for different flights in different LPP iterations.

* $\circ$ Crew Utilization enhancement strategy ($CGU$): this strategy prioritizes a set of legal pairings each member of which is characterized by high crew utilization, a feature which domain knowledge recommends for optimality at an individual pairing level. To exploit this optimality feature, $CGU$: (a) introduces a new measure, namely, crew utilization ratio, given by $\gamma_{j}$ (Equation 11), to quantify the degree of crew utilization in a pairing $p_{j}$ at any instant; (b) identifies pairings from the current $\mathcal{P}_{LP}$, which are characterized by a high dual cost component ($\mu d_{j}$, Equation 10), reflecting in turn on those constitutive flights that have high values of the dual variables $y_{i}$, and hence, on the potential of these flights to generate new pairings with more negative $\mu_{j}$; and (c) utilizes these flights to generate promising pairings from which only the ones with high $\gamma_{j}$ are picked to constitute the new pairing set $\mathcal{P}_{CGU}$.
$\displaystyle\gamma_{j}=\frac{1}{\text{Number of duties in }p_{j}}\cdot\sum_{d\in p_{j}}\frac{\text{Working hours in duty}~{}d}{\text{Permissible hours of duty }d}$ (11) In doing so, $CGU$ promises propagation of the higher crew utilization ratio over successive LPP iterations, given that in each LPP iteration, $\mathcal{P}_{CGU}$ alongside the current $\mathcal{P}_{LP}$ forms a part of the input for the next LPP iteration.

* $\circ$ Random exploration strategy ($CGR$): this strategy, unlike $CGU$ and $CGD$ which are guided by optimal solution features, pursues random and unbiased exploration of the new pairings’ space, independent of the current LPP solution. It involves generation of new pairings for a randomly selected set of legal duties, from which only the pairings with negative reduced cost are selected to constitute the new pairing set $\mathcal{P}_{CGR}$. Here, a random set of legal duties is used instead of a random set of flights, as the former has a higher probability of generating legal pairings, given that a majority of pairing legality constraints get satisfied with the preprocessing of legal duties.

* – the re-utilization of pairings from the past LPP iterations is guided by an Archiving strategy ($CGA$), that prioritizes a set of legal pairings comprising those flight-pairs which, as per the existing LPP solution, bear better potential for improvement in the objective function. Such a pairing set, originating from the flight-pair level information, is extracted from an archive (denoted by $\mathcal{A}$) of the previously generated pairings. In doing so, $CGA$ facilitates re-utilization of the past computational efforts, by providing an opportunity for a previously generated pairing to be re-inducted in the current pairing pool. For this, $CGA$:

* $\circ$ updates the archive $\mathcal{A}$ in each LPP iteration such that any pairing is stored/retrieved with reference to a unique index $(f_{m},f_{n})$ reserved for any legal flight-pair in that pairing.

* $\circ$ introduces a new measure, namely, reduced cost estimator, given by $\eta_{mn}$ (Equation 12), for a flight-pair $(f_{m},f_{n})$ in $\mathcal{A}$. In each LPP iteration, this estimator is computed for all the flight-pairs present in $\mathcal{A}$, by fetching $f_{m}$, $f_{n}$, $y_{m}$ and $y_{n}$. $\displaystyle\eta_{mn}$ $\displaystyle=\texttt{flying\\_cost($f_{m}$)}+\texttt{flying\\_cost($f_{n}$)}-y_{m}-y_{n}=\sum_{i\in\\{m,n\\}}\left(\texttt{flying\\_cost($f_{i}$)}-y_{i}\right)$ (12) Notably, this formulation is analogous to Equation 10, just that instead of the complete cost of a pairing, only the flying costs corresponding to the flights in a legal flight-pair are accounted for. Given this, $\eta_{mn}$ may be seen as an indicator of $\mu_{j}$ at the flight-pair level.

* $\circ$ recognizes that towards further improvement in the current LPP solution, it may be prudent to include as a part of the input for the next LPP iteration– the new pairing set $\mathcal{P}_{CGA}$, constituted by preferentially picking pairings from $\mathcal{A}$ that cover flight-pairs with lower $\eta_{mn}$ values. In doing so, $CGA$ pursues the goal of continual improvement in the objective function, while relying on the flight-pair level information embedded in the LPP solution of the current LPP iteration, and re-utilizing the computational efforts spent till that LPP iteration.
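As a concrete illustration of the two prioritization metrics above, a minimal Python sketch of the reduced cost $\mu_{j}$ (Equation 10) and the crew utilization ratio $\gamma_{j}$ (Equation 11) is given below; the dual values, costs, and duty hours used here are illustrative stand-ins, not data from the paper.

```python
def reduced_cost(pairing_flights, c_j, y):
    """mu_j = c_j minus the sum of duals y_i over flights in the pairing
    (Eq. 10); a negative value flags a column that can improve the LPP."""
    dual_component = sum(y[f] for f in pairing_flights)   # mu d_j
    return c_j - dual_component

def crew_utilization(duties):
    """gamma_j of Eq. 11: mean ratio of working to permissible hours,
    taken over the duties of a pairing; higher is better."""
    return sum(worked / permitted for worked, permitted in duties) / len(duties)

# Toy pairing: covers flights f1 & f2, costs 900 USD, and spans two duties.
y = {"f1": 450.0, "f2": 500.0}                      # duals from the dual LPP
mu = reduced_cost({"f1", "f2"}, 900.0, y)           # -> -50.0 (promising)
gamma = crew_utilization([(6.0, 8.0), (7.0, 8.0)])  # -> 0.8125
print(mu, gamma)
```

In an actual CG iteration, pairings with negative $\mu_{j}$ would be filtered first, and $\gamma_{j}$ would then rank the candidates from which $\mathcal{P}_{CGU}$ is drawn.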
For further details and associated nitty-gritty of the above domain-knowledge driven CG heuristic, interested readers are referred to the authors’ previous work– Aggarwal, Saxena et al. (2020). Once this CG heuristic generates a set of promising pairings $\mathcal{P}_{CG}$ of pre-defined size, it is merged with the current $\mathcal{P}_{LP}$, and fed as the input to the next LPP iteration ($t\mathrel{+}=1$). These LPP iterations are repeated until the termination criterion of this submodule is met, that is, when the cost-improvement falls below a pre-specified cost-threshold, say $Th_{cost}$, over a pre-specified number of successive LPP iterations, say $Th_{t}$. The settings of these pre-specified limits– $Th_{cost}$ and $Th_{t}$, are highlighted in Section 4.2. After termination, the final LPP solution $\mathcal{P}_{LP}^{T}$ is passed over to the IPP-solutioning submodule for its integerization.

#### 3.3.2 IPP-solutioning

This submodule receives as input the LPP solution $\mathcal{P}^{T}_{LP}$, and aims to find therein a full-coverage integer solution, notated as $\mathcal{P}^{T}_{IP}$. Towards it, an IPP (Equations 1 to 3) is formulated using $\mathcal{P}^{T}_{LP}$ and $\mathcal{F}$, and solved using a branch-and-cut algorithm based off-the-shelf commercial MIP solver (Gurobi Optimization, 2019). At each node of the MIP-search tree, this solver maintains a valid lower bound (cost of the LPP solution) and a best upper bound (cost of the IPP solution), and it self-terminates if the gap between these two bounds becomes zero, or all branches in the MIP-search tree have been explored. Considering that the MIP-search for large-scale CPOPs is extremely time-consuming, a pre-defined time limit, notated as $Th_{ipt}$ (setting highlighted in Section 4.2), is used to terminate this MIP solver, if it does not terminate by itself a priori. Once the $\mathcal{P}^{T}_{IP}$ is obtained, it is passed back to the previous submodule for the next LPP-IPP interaction ($T\mathrel{+}=1$), only if the termination criterion of the Optimization Engine is not satisfied.

Overarching Optimization Engine

In the wake of the above, the procedure of the overarching Optimization Engine, formalized in Algorithm 4, is elaborated below.
Input: $\mathcal{F},~{}\mathcal{P}_{IFS},~{}Th_{cost},~{}Th_{t},~{}Th_{ipt},~{}\texttt{Pairing\\_Gen()},~{}\texttt{CGD()},~{}\texttt{CGU()},~{}\texttt{CGR()},~{}\texttt{CGA()}$ Output: $\mathcal{P}^{\star}_{IP}$ 1 $T\leftarrow 1$ while _termination criterion of Optimization Engine is not met_ do 2 $\triangleright$ CG-driven LPP-solutioning: $t\leftarrow 1$ while _termination criterion of CG-driven LPP-solutioning is not met_ do 3 if _$t=1$ and $T=1$_ then 4 Formulate the primal of the LPP using $\mathcal{P}_{IFS}$ and $\mathcal{F}$ 5 else if _$t=1$ and $T>1$_ then 6 Formulate the primal of the LPP using $\mathcal{P}^{T-1}_{IP}$ and $\mathcal{F}$ 7 else 8 Formulate the primal of the LPP using $\mathcal{P}^{t-1}_{CG}\cup\mathcal{P}^{t-1}_{LP}$ and $\mathcal{F}$ 9 end if 10 $\mathcal{P}^{t}_{LP},~{}X^{t}_{LP}\leftarrow$ Solve the primal using the interior-point method based LP solver $\triangleright$ Termination of the CG-driven LPP-solutioning: if _cost-improvements $\leq Th_{cost}$ over last $Th_{t}$ number of successive LPP iterations_ then 11 $\mathcal{P}^{T}_{LP}\leftarrow\mathcal{P}^{t}_{LP}$ Break 12 end if 13 Formulate the dual of the LPP using $\mathcal{F}$ and $\mathcal{P}^{t}_{LP}$ $Y^{t}\leftarrow$ Solve the dual using the interior-point method based LP solver $\triangleright$ Solution to pricing sub-problem using the CG heuristic: $\mathcal{P}^{t}_{CGD}\leftarrow\texttt{CGD($\mathcal{P}^{t}_{LP},X^{t}_{LP},Y^{t},\ldots$)}$ $\mathcal{P}^{t}_{CGU}\leftarrow\texttt{CGU($\mathcal{P}^{t}_{LP},X^{t}_{LP},Y^{t},\ldots$)}$ $\mathcal{P}^{t}_{CGR}\leftarrow\texttt{CGR($Y^{t},\ldots$)}$ $\mathcal{P}^{t}_{CGA}\leftarrow\texttt{CGA($\mathcal{P}^{t}_{LP},X^{t}_{LP},Y^{t},\ldots$)}$ $\mathcal{P}^{t}_{CG}\leftarrow\mathcal{P}^{t}_{CGD}\cup\mathcal{P}^{t}_{CGU}\cup\mathcal{P}^{t}_{CGR}\cup\mathcal{P}^{t}_{CGA}$ $t\mathrel{+}=1$ 14 end while 15 $\triangleright$ IPP-solutioning: Formulate the IPP using $\mathcal{P}^{T}_{LP}$ and $\mathcal{F}$ $\mathcal{P}^{T}_{IP}\leftarrow$ Solve the IPP using a branch-and-cut algorithm based MIP solver until its run-time becomes $\geq Th_{ipt}$ $\triangleright$ Termination of the Optimization Engine: if _$Z^{T}_{IP}\left(\text{cost of }\mathcal{P}^{T}_{IP}\right)=Z^{T}_{LP}\left(\text{cost of }\mathcal{P}^{T}_{LP}\right)$_ then 16 $\mathcal{P}^{\star}_{IP}\leftarrow\mathcal{P}^{T}_{IP}$ Break 17 end if 18 $T\mathrel{+}=1$ 19 end while return $\mathcal{P}_{IP}^{\star}$ Algorithm 4 Procedure for the Optimization Engine

Its input involves the given flight set $\mathcal{F}$; the generated IFS $\mathcal{P}_{IFS}$; the pre-defined termination parameters– $Th_{cost}$ & $Th_{t}$ (for CG-driven LPP-solutioning) and $Th_{ipt}$ (for IPP-solutioning); and the sub-routines for the Legal Crew Pairing Generator (Pairing_Gen()) and the four CG strategies ($\texttt{CGD()},~{}\texttt{CGU()},~{}\texttt{CGR()}$ and $\texttt{CGA()}$) in the proposed CG heuristic. In each LPP-IPP interaction of the Optimization Engine, first, the CG-driven LPP-solutioning is executed (lines 3-25). It entails several LPP iterations (tracked by $t$), in each of which the first step is to formulate the primal using $\mathcal{F}$ and the respective input pairing set. This input pairing set is:

* • $\mathcal{P}_{IFS}$, if the first LPP iteration ($t=1$) of the first LPP-IPP interaction ($T=1$) is being executed (lines 5-6).

* • $\mathcal{P}^{T-1}_{IP}$, if the first LPP iteration ($t=1$) of any subsequent LPP-IPP interaction ($T>1$) is being executed (lines 7-8).
* • $\mathcal{P}^{t-1}_{CG}\cup\mathcal{P}^{t-1}_{LP}$, if any subsequent LPP iteration ($t>1$) of any LPP-IPP interaction ($T\geq 1$) is being executed (lines 9-11).

Once the primal is formulated, it is solved using the corresponding LP solver to obtain the current optimal LPP solution, constituted by $\mathcal{P}^{t}_{LP}$ and $X^{t}_{LP}$ (line 12). Subsequently, the termination criterion of CG-driven LPP-solutioning is checked (lines 13-16). If it is terminated, then the current LPP solution $\mathcal{P}^{t}_{LP}$ is fetched as the final LPP solution $\mathcal{P}^{T}_{LP}$ of this LPP-IPP interaction. If not, then a dual is formulated using $\mathcal{P}^{t}_{LP}$ and $\mathcal{F}$ (line 17), which is then solved using the corresponding LP solver to obtain the current optimal dual vector $Y^{t}$ (line 18). Using the current $\mathcal{P}^{t}_{LP}$, $X^{t}_{LP}$ and $Y^{t}$, a fresh set of pairings $\mathcal{P}^{t}_{CG}$ is obtained using the CG heuristic, which is constituted by the new pairing sets from the four underlying CG strategies (lines 19-23). At the end of the LPP iteration $t$, the fresh set of pairings $\mathcal{P}^{t}_{CG}$ is combined with the current $\mathcal{P}^{t}_{LP}$ to serve as the input pairing set for the subsequent LPP iteration ($t\mathrel{+}=1$). Once this submodule is terminated, the resulting $\mathcal{P}^{T}_{LP}$ is passed over to the IPP-solutioning for its integerization, wherein the MIP solver is used to obtain the IPP solution $\mathcal{P}^{T}_{IP}$ (lines 26 and 27). In that, the pre-defined $Th_{ipt}$ time-limit is used to terminate the MIP-search, if it does not self-terminate a priori. Subsequently, the resulting $\mathcal{P}^{T}_{IP}$ is passed back to the CG-driven LPP-solutioning for the next LPP-IPP interaction ($T\mathrel{+}=1$), or returned as the final integer solution $\mathcal{P}^{\star}_{IP}$, depending upon the termination condition of the Optimization Engine (lines 28-32). In that, if the cost of $\mathcal{P}^{T}_{IP}~{}\left(Z_{IP}^{T}\right)$ matches the cost of $\mathcal{P}^{T}_{LP}~{}\left(Z_{LP}^{p,T}\right)$, then the Optimization Engine is terminated.

## 4 Computational Experiments

This section first presents the test cases and the computational setup used to investigate the utility of $AirCROP$, its modules, and their interactions. Subsequently, the settings of parameters involved in different modules of $AirCROP$ are presented. Lastly, the experimental results are discussed.

### 4.1 Test Cases and Computational Setup

The real-world airline test cases, used for experimentation, are detailed in Table 2. Each of these test cases involves a weekly flight schedule, and has been provided by the research consortium’s industrial sponsor (from the networks of US-based airlines).

Table 2: Real-world airline test cases used in this research work
Test Cases | $\\#$Flights | $\\#$Crew Bases | $\\#$Airports | $\\#$Legal Duties
---|---|---|---|---
TC-1 | 3202 | 15 | 88 | 454205
TC-2 | 3228 | 15 | 88 | 464092
TC-3 | 3229 | 15 | 88 | 506272
TC-4 | 3265 | 15 | 90 | 446937
TC-5 | 4212 | 15 | 88 | 737184

Figure 3: (a) Geographical representation of TC-5 flight network, where the red nodes, green edges and yellow nodes represent the airports, scheduled flights and crew bases, respectively, and (b) legal flight-connections, each represented by a point in the plot, where for a flight marked on the y-axis, the connecting flight is marked on the x-axis.
The columns in Table 2, in order of their occurrence, highlight the notations for the different test cases; the number of constituent flights; the number of constituent crew bases; the number of airports covered; and the total number of legal duties involved, respectively. It is critical to recognize that the challenge associated with solutioning of these test cases depends not just on the number of flights involved but also on the fact that these flights are part of complex flight networks, characterized by a multiplicity of hubs as opposed to a single hub, and a multiplicity of crew bases as opposed to a single crew base. In that, the number of legal pairings possible grows exponentially with the number of hubs and crew bases. As a sample instance, the geographical representation of the flight network associated with TC-5, and the legal flight connections involved in it, are portrayed in Figure 3. Notably, in Figure 3(a), the presence of multiple hub-and-spoke subnetworks and multiple crew bases (highlighted in yellow color) is evident. Furthermore, the pattern visible in Figure 3(b) could be attributed to the (minimum and maximum) limits on the sit-time and overnight-rest constraints. For instance, a flight, say $f_{500}$, has legal connections only with those flights that depart from the arrival airport of $f_{500}$, and whose departure-time gap (difference between its departure-time and the arrival time of $f_{500}$) lies within the minimum and maximum allowable limits of the sit-time or the overnight-rest. All the experiments in this research have been performed on an HP Z640 Workstation, which is powered by two Intel® Xeon® E5-2630v3 processors, each with 16 cores at 2.40 GHz, and 96 GBs of RAM. All codes related to the $AirCROP$ have been developed using the Python scripting language in alignment with the industrial sponsor’s larger vision and preference. Furthermore:

* • the interior-point method from Gurobi Optimizer 8.1.1 (Gurobi Optimization, 2019) is used to solve the primal in the CG-driven LPP-solutioning submodule.

* • the interior-point method (Andersen & Andersen, 2000) from SciPy’s linprog library (Virtanen et al., 2020) is used to solve the dual in the CG-driven LPP-solutioning submodule.

* • the branch-and-cut algorithm based MIP solver from Gurobi Optimizer 8.1.1 is used to solve the IPP in the Initial Feasible Solution Generator and the IPP-solutioning submodule.

* • an $AirCROP$-run, in principle, terminates when the cost of the IPP solution matches the cost of its input LPP solution in a particular LPP-IPP interaction. However, for practical considerations on the time-limit, an $AirCROP$-run is allowed to terminate if the IPP and LPP costs do not conform with each other even after 30 LPP-IPP interactions are over, or 30 hours of total run-time have elapsed.

### 4.2 Parameter Settings

The settings of the parameters associated with different modules and submodules of the $AirCROP$ are as highlighted below.

* • Initial Feasible Solution Generator: here, the proposed IPDCH involves the decomposition parameter $K$, which regulates the size of flight subsets formed in each IPDCH-iteration. As mentioned before, the setting of $K$ is dependent on the characteristics of the input flight dataset and the configuration of available computational resources. Here, the aim is to cover all given flights in a time-efficient manner. Hence, it is important to understand the effect of the setting of $K$ on the time-performance of IPDCH, which is highlighted below.
* – For a relatively lower value of $K$, smaller flight subsets with a smaller number of legal flight-connections would be formed in each IPDCH-iteration, leading to coverage of relatively fewer unique flights in each of them. Though this by itself is not a challenge, it would necessitate a significant number of additional IPDCH-iterations (and the respective run-time), since the number of unique flights covered per IPDCH-iteration, which by construct reduces with the iterations, would get further reduced with relatively smaller flight subsets.

* – On the flip side, for a relatively higher value of $K$, bigger flight subsets would be formed that would lead to coverage of a higher number of unique flights per IPDCH-iteration. Though this may reduce the total number of IPDCH-iterations required to generate the desired IFS, the overall run-time of the IPDCH may increase drastically. The rationale is that with a bigger flight subset in each IPDCH-iteration, the number of possible legal pairings would increase drastically, leading to huge run-time for their generation as well as for the subsequent MIP-search.

The above considerations suggest that $K$ should be reasonably-sized. Considering the given computational resources and the results of initial exploration around the possible number of pairings for differently-sized flight sets, the value of $K$ in each IPDCH-iteration is guided by a random integer between one-eighth and one-fourth of the size of the input flight set $\mathcal{F}$. It may be noted that this setting of $K$ has been selected considering the scale and complexity of the given test cases, and it needs to be re-visited if the scale and complexity of the flight network changes drastically.

* • CG-driven LPP-solutioning: The parameters involved in the termination criterion for this submodule– $Th_{cost}$ & $Th_{t}$, are set as 100 USD & 10 iterations respectively, to achieve an LPP solution with a sufficiently good cost in a reasonably good time. The sensitivity of these parameters towards the $AirCROP$’s performance is discussed in Section 4.3.4. Furthermore, the effect of the parameter– size of $\mathcal{P}^{t}_{CG}$, on the performance of this submodule (the final LPP solution’s cost and required run-time), and the demand on the computational resources (dominantly, RAM) is highlighted below.

* – for a relatively small-sized $\mathcal{P}^{t}_{CG}$, the alternative pairings available to foster further cost improvement shall be quite limited, amounting to smaller cost benefits in each phase of the CG-driven LPP-solutioning. This would necessitate far more LPP-IPP interactions to reach the near-optimal cost. This per se is not a challenge; however, a significant amount of additional run-time may be required, since: (a) each call for CG-driven LPP-solutioning demands a minimum of 10 LPP iterations before it could be terminated, and (b) such calls, when invoked repeatedly, may consume significant run-time, yet without reasonable cost benefit.

* – On the other hand, for a very large-sized $\mathcal{P}_{CG}^{t}$, though the potential for significant cost benefits may exist, the demand on the RAM may become overwhelming for any CG-driven LPP-solutioning phase to proceed.

The above considerations suggest that the size of $\mathcal{P}_{CG}^{t}$ may neither be too small nor too large. Factoring these, the experiments here aim at a $\mathcal{P}_{CG}^{t}$ sized at approximately a million pairings (significant size, yet not overwhelming for 96 GB RAM).
Furthermore, for a search that is not biased in favor of any particular CG strategy, the number of pairings contributed by each CG strategy towards the overall CG heuristic is kept equable.

* • IPP-solutioning: As mentioned before, the MIP-search on a large-scale IPP is time-intensive. Hence, the termination parameter– $Th_{ipt}$, that restricts the run-time of any IPP-solutioning phase if not self-terminated a priori, is reasonably set as 20 minutes, and its sensitivity on the $AirCROP$’s performance is discussed in Section 4.3.4.

### 4.3 Results & Observations

This section presents the experimental results and associated inferences, in the order highlighted below.

1. The performance of the proposed $AirCROP$ on the given test cases with the aforementioned parameter settings is discussed.

2. The phenomenon referred to as performance variability (Lodi & Tramontani, 2013) is discussed in the context of $AirCROP$. This aspect is pertinent since some variability in performance (even for the same random seed) is inevitable owing to $AirCROP$’s reliance on the mathematical programming solvers, which over the different runs may pick different permutations of the rows (flight-coverage) or columns (pairings).

3. The impact of the initialization methods: (a) the proposed IPDCH, (b) an Enhanced-DFS heuristic, earlier proposed by the authors (Aggarwal et al., 2018), and (c) a commonly adopted Artificial Pairings method (Hoffman & Padberg, 1993; Vance et al., 1997), on the final performance of $AirCROP$ is investigated.

4. The sensitivity of $AirCROP$’s performance to the termination parameters in the Optimization Engine’s submodules (CG-driven LPP-solutioning and IPP-solutioning) is discussed.

#### 4.3.1 AirCROP’s Performance

The results of the $AirCROP$-runs on the given test cases (TC-1 to TC-5) with the aforementioned parameter settings are reported in Table 3. In that, for each test case:

* • the first row, marked by “$\mathcal{P}_{IFS}$”, highlights the cost associated with the IFS that initializes the $AirCROP$-run and the run-time consumed in its generation.

* • the subsequent rows present the results of the LPP-IPP interactions (marked by the counter $T$). In that, for a particular $T$, the cost of the LP-solution passed on for its integerization and the associated time are highlighted; likewise, the cost of the IP-solution returned and the associated time are highlighted. Here, the unit of cost is USD, and the time corresponds to the HH:MM format.

* • the final crew pairing solution ($\mathcal{P}^{\star}_{IP}$) is highlighted in the last row (emboldened) marked by “Final Solution”.

It may be noted that the experimental results in the subsequent sections are presented in the same format, unless any digression is specifically highlighted.
Table 3: $AirCROP$’s performance∗ on the given test cases LPP-IPP | TC-1 | TC-2 | TC-3 | TC-4 | TC-5 ---|---|---|---|---|--- Interactions $T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time $\mathcal{P}_{IFS}$ | 85893202 | 00:05 | 81950079 | 00:05 | 51552744 | 00:03 | 131716653 | 00:08 | 89690776 | 00:06 1 | $\mathcal{P}_{LP}^{1}$ | 3468349 | 03:56 | 3493986 | 03:56 | 3483057 | 05:18 | 3595565 | 03:27 | 4583484 | 07:48 $\mathcal{P}_{IP}^{1}$ | 3689420 | 00:20 | 3715798 | 00:20 | 3697204 | 00:20 | 3807233 | 00:20 | 4930789 | 00:20 2 | $\mathcal{P}_{LP}^{2}$ | 3467837 | 02:18 | 3494675 | 01:19 | 3484645 | 02:42 | 3600195 | 01:17 | 4588740 | 02:49 $\mathcal{P}_{IP}^{2}$ | 3557615 | 00:20 | 3587139 | 00:20 | 3590336 | 00:20 | 3679138 | 00:20 | 4734553 | 00:20 3 | $\mathcal{P}_{LP}^{3}$ | 3469591 | 00:47 | 3495254 | 01:22 | 3486614 | 01:59 | 3600813 | 01:16 | 4592143 | 01:46 $\mathcal{P}_{IP}^{3}$ | 3518161 | 00:02 | 3546777 | 00:02 | 3523538 | 00:02 | 3639313 | 00:01 | 4654258 | 00:20 4 | $\mathcal{P}_{LP}^{4}$ | 3471619 | 01:13 | 3496797 | 00:57 | 3491000 | 01:13 | 3601168 | 01:27 | 4593422 | 02:17 $\mathcal{P}_{IP}^{4}$ | 3489534 | 00:01 | 3505941 | 00:01 | 3496142 | 00:01 | 3621723 | 00:01 | 4634187 | 00:01 5 | $\mathcal{P}_{LP}^{5}$ | 3472403 | 00:31 | 3497106 | 00:23 | 3490420 | 00:56 | 3604082 | 00:37 | 4594282 | 02:14 $\mathcal{P}_{IP}^{5}$ | 3484783 | 00:01 | 3497106 | 00:01 | 3490420 | 00:01 | 3612845 | 00:01 | 4617838 | 00:01 6 | $\mathcal{P}_{LP}^{6}$ | 3473238 | 00:30 | | | | | 3604753 | 00:28 | 4595481 | 01:53 $\mathcal{P}_{IP}^{6}$ | 3473238 | 00:01 | | | | | 3604753 | 00:01 | 4615272 | 00:01 7 | $\mathcal{P}_{LP}^{7}$ | | | | | | | | | 4596466 | 01:12 $\mathcal{P}_{IP}^{7}$ | | | | | | | | | 4600428 | 00:01 8 | $\mathcal{P}_{LP}^{8}$ | | | | | | | | | 4595613 | 01:42 $\mathcal{P}_{IP}^{8}$ | | | | | | | | | 4595613 | 00:01 Final Solution | 3473238 | 10:05 | 3497106 | 08:46 | 3490420 | 12:55 | 3604753 | 09:24 | 4595613 | 22:52 ∗All values in the “Cost” columns are in USD, and all corresponding real values are rounded-off to the next integer values. All values in the “Time” columns are in HH:MM format, and all corresponding seconds’ values are rounded-off to the next minute values. The above results have been tested by the research consortium’s industrial sponsor, and verified to be highly-competitive compared to the best practice solutions known, for different test cases. In general, the obtained solutions have been found to be superior by about 1.5 to 3.0% in terms of the hard cost, which reportedly is one of the most important solution quality indicator. For reference, a comparison of the obtained solution vis-$\grave{a}$-vis the best known solution has been drawn for TC-5, in Table 4, where a significant difference in terms of the size of pairings can be observed. Notably, the key features contributing to lower hard cost relate to presence of pairings with relatively lower - TAFB, overnight rests and meal cost. However, the obtained solution also entails more crew changes, some of which (involving aircraft change) negatively impact the soft cost. Hence, there appears to be a trade- off between the hard cost and the soft cost. 
Table 4: Salient features of $\mathcal{P}^{\star}_{IP}$ for TC-5: $AirCROP$’s solution vis-à-vis the best practice solution
Features | $\bm{AirCROP}$’s solution | Best practice solution
---|---|---
$\\#$ pairings | 926 | 783
$\\#$ unique flights covered | 4,212 | 4,212
$\\#$ deadhead flights | 3 | 3
$\\#$ overnight-rests | 1,203 | 1,279
$\\#$ crew changes | 1,002 | 825
$\\#$ average crew changes per pairing | 1.082 | 1.054
Total TAFB (HH:MM) | 37444:54 | 38189:39
$\\#$ pairings covering 2 flights | 303 | 205
$\\#$ pairings covering 3 flights | 17 | 31
$\\#$ pairings covering 4 flights | 170 | 95
$\\#$ pairings covering 5 flights | 63 | 37
$\\#$ pairings covering 6 flights | 202 | 153
$\\#$ pairings covering 7 flights | 59 | 62
$\\#$ pairings covering 8 flights | 83 | 90
$\\#$ pairings covering 9 flights | 19 | 49
$\\#$ pairings covering 10 flights | 8 | 45
$\\#$ pairings covering 11 flights | 1 | 10
$\\#$ pairings covering 12 flights | 1 | 5
$\\#$ pairings covering 13 flights | 0 | 0
$\\#$ pairings covering 14 flights | 0 | 1
Hotel cost (USD) | 166,240 | 176,170
Meal cost (USD) | 157,269 | 160,397
Hard cost (USD) | 340,671 | 350,818
Soft cost (USD) | 51,600 | 42,750
Actual flying cost (USD) | 4,203,342 | 4,203,342
Total cost (USD) | 4,595,613 | 4,596,910

#### 4.3.2 Performance Variability in AirCROP

This section investigates the sensitivity of $AirCROP$ with respect to the sources of variability over multiple runs, even for the same problem. This study assumes importance, considering that performance variability is rather inevitable when mathematical programming based solution approaches are employed (Koch et al., 2011). As cited by Lodi & Tramontani (2013), variability in the performance of LP & MIP solvers may be observed on changing the computing platform (which may change the floating-point arithmetic), permuting the constraints/variables of the respective mathematical models, or changing the pseudo-random numbers’ seed. These changes/permutations may lead to an entirely different outcome of the respective search algorithms (LP & MIP), as highlighted below.

* • The root source for the performance variability in MIP is the imperfect tie-breaking. A majority of the decisions to be taken during an MIP-search are dependent on the ordering of the candidates according to an interim score as well as the selection of the best candidate (one with the best score value). A perfect score that could fully distinguish between the candidates is not known, mostly due to the lack of theoretical knowledge, and even if it is known, it may be too expensive to compute. For instance, in a strong branching scheme, the best variable to branch at each node is decided after simulating one level of branching for each fractional variable; however, this is performed heuristically to make it a computationally-affordable task for MIP solvers (Linderoth & Lodi, 2011). Furthermore, additional ties or tie-breaks could be induced by changing the floating-point operations, which inherently may change when the computing platform is changed. Amidst such an imperfect tie-breaking, the permutation of the variables/constraints changes the path within the MIP-search tree, leading to a completely different evolution of the algorithm with rather severe consequences.

* • Depending upon the floating-point arithmetic or the sequence of variables loaded in an LPP, the performance of the simplex and interior-point methods may vary.
* • The performance of the LP and MIP solvers is also affected by the choice of pseudo-random numbers’ seed, wherever the decisions are made heuristically. For instance, an interior-point method in the LP solvers performs a (random) crossover to one of the vertices of the optimal face when the search reaches its (unique) center. Table 5: Performance variability assessment for $AirCROP$ on two test instances∗ (TC-2 and TC-5) Test | LPP-IPP | Runs with performance variability | Runs without performance variability ---|---|---|--- Case | Interactions | Run-1 | Run-2 | Run (Seed-$\alpha$) | Run (Seed-$\beta$) | Run (Seed-$\gamma$) $T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time TC-2 | $\mathcal{P}_{IFS}$ | 81950079 | 00:05 | 74533686 | 00:04 | 129221508 | 00:08 | 114054265 | 00:07 | 52515476 | 00:04 1 | $\mathcal{P}_{LP}^{1}$ | 3493986 | 03:56 | 3494580 | 03:57 | 3495054 | 03:51 | 3493757 | 03:52 | 3493909 | 03:52 $\mathcal{P}_{IP}^{1}$ | 3715798 | 00:20 | 3746847 | 00:20 | 3769811 | 00:20 | 3711267 | 00:20 | 3722248 | 00:20 2 | $\mathcal{P}_{LP}^{2}$ | 3494675 | 01:19 | 3494540 | 02:18 | 3495311 | 01:57 | 3496733 | 01:57 | 3494657 | 03:02 $\mathcal{P}_{IP}^{2}$ | 3587139 | 00:20 | 3621066 | 00:20 | 3628514 | 00:20 | 3581978 | 00:20 | 3620745 | 00:20 3 | $\mathcal{P}_{LP}^{3}$ | 3495254 | 01:22 | 3496475 | 01:41 | 3497558 | 00:52 | 3499651 | 00:54 | 3496398 | 01:23 $\mathcal{P}_{IP}^{3}$ | 3546777 | 00:02 | 3555152 | 00:06 | 3566092 | 00:11 | 3536050 | 00:01 | 3551149 | 00:03 4 | $\mathcal{P}_{LP}^{4}$ | 3496797 | 00:57 | 3497750 | 01:36 | 3499237 | 01:26 | 3500818 | 01:03 | 3496069 | 01:37 $\mathcal{P}_{IP}^{4}$ | 3505941 | 00:01 | 3525600 | 00:01 | 3516807 | 00:01 | 3520552 | 00:01 | 3543236 | 00:02 5 | $\mathcal{P}_{LP}^{5}$ | 3497106 | 00:23 | 3498588 | 01:40 | 3500169 | 00:42 | 3500504 | 01:02 | 3496706 | 01:01 $\mathcal{P}_{IP}^{5}$ | 3497106 | 00:01 | 3498588 | 00:01 | 3517585 | 00:01 | 3500504 | 00:01 | 3501210 | 00:01 6 | $\mathcal{P}_{LP}^{6}$ | | | | | 3501523 | 00:43 | | | 3499063 | 00:41 $\mathcal{P}_{IP}^{6}$ | | | | | 3504085 | 00:01 | | | 3499063 | 00:01 7 | $\mathcal{P}_{LP}^{7}$ | | | | | 3502118 | 00:31 | | | | $\mathcal{P}_{IP}^{7}$ | | | | | 3502118 | 00:01 | | | | Final Solution | 3497106 | 08:46 | 3498588 | 12:05 | 3502118 | 11:05 | 3500504 | 09:38 | 3499063 | 12:27 TC-5 | $\mathcal{P}_{IFS}$ | 89690776 | 00:06 | 92080420 | 00:05 | 131443284 | 00:09 | 847887053 | 00:56 | 470430395 | 00:29 1 | $\mathcal{P}_{LP}^{1}$ | 4583484 | 07:48 | 4583476 | 08:00 | 4584525 | 07:28 | 4581988 | 08:47 | 4580130 | 07:36 $\mathcal{P}_{IP}^{1}$ | 4930789 | 00:20 | 4973580 | 00:20 | 4974341 | 00:20 | 4925863 | 00:20 | 4949616 | 00:20 2 | $\mathcal{P}_{LP}^{2}$ | 4588740 | 02:49 | 4588938 | 05:59 | 4589091 | 02:25 | 4584956 | 04:51 | 4584273 | 03:22 $\mathcal{P}_{IP}^{2}$ | 4734553 | 00:20 | 4765453 | 00:20 | 4782657 | 00:20 | 4749664 | 00:20 | 4753133 | 00:20 3 | $\mathcal{P}_{LP}^{3}$ | 4592143 | 01:46 | 4591571 | 02:35 | 4589952 | 02:14 | 4587812 | 03:02 | 4585046 | 03:40 $\mathcal{P}_{IP}^{3}$ | 4654258 | 00:20 | 4661078 | 00:20 | 4736313 | 00:20 | 4653279 | 00:20 | 4666390 | 00:20 4 | $\mathcal{P}_{LP}^{4}$ | 4593422 | 02:17 | 4595741 | 01:49 | 4591145 | 02:36 | 4589247 | 02:00 | 4588952 | 02:56 $\mathcal{P}_{IP}^{4}$ | 4634187 | 00:01 | 4624039 | 00:01 | 4654627 | 00:20 | 4614651 | 00:01 | 4628239 | 00:01 5 | $\mathcal{P}_{LP}^{5}$ | 4594282 | 02:14 | 4599006 | 01:14 | 4592463 | 02:03 | 4590573 | 01:05 | 4589577 | 
02:02 $\mathcal{P}_{IP}^{5}$ | 4617838 | 00:01 | 4613385 | 00:01 | 4632708 | 00:02 | 4603938 | 00:01 | 4618710 | 00:01 6 | $\mathcal{P}_{LP}^{6}$ | 4595481 | 01:53 | 4598727 | 01:11 | 4593094 | 02:00 | 4591176 | 01:15 | 4589874 | 01:48 $\mathcal{P}_{IP}^{6}$ | 4615272 | 00:01 | 4605126 | 00:01 | 4625993 | 00:01 | 4591176 | 00:01 | 4607590 | 00:01 7 | $\mathcal{P}_{LP}^{7}$ | 4596466 | 01:12 | 4598412 | 01:39 | 4593431 | 01:04 | | | 4590674 | 01:24 $\mathcal{P}_{IP}^{7}$ | 4600428 | 00:01 | 4598412 | 00:01 | 4619643 | 00:01 | | | 4605058 | 00:01 8 | $\mathcal{P}_{LP}^{8}$ | 4595613 | 01:42 | | | 4594146 | 01:03 | | | 4591065 | 02:10 $\mathcal{P}_{IP}^{8}$ | 4595613 | 00:01 | | | 4594146 | 00:01 | | | 4591065 | 00:01 Final Solution | 4595613 | 22:52 | 4598412 | 23:37 | 4594146 | 22:27 | 4591176 | 22:59 | 4591065 | 26:32 ∗All values in the “Cost” columns are in USD, and all the corresponding real values are rounded-off to the next integer values. All values in the “Time” columns are in HH:MM, and all the corresponding seconds’ values are rounded- off to the next minute values. In the above background, the plausible reasons for variability in $AirCROP$’s performance are elaborated below. * • Generation of new legal pairings using a parallel architecture: in any LPP iteration $t$, new legal pairings are generated in parallel, by allocating the sub-processes to the idle-cores of the CPU. These sub-processes return their respective pairing sets as soon as they are terminated. This by itself is not a challenge, however, when the $AirCROP$ is re-run, the order in which these sub-processes terminate may not be same as before (as it depends on the state of the CPU), permuting the pairings in the cumulative pairing set $\mathcal{P}_{CG}^{t}$. This permuted pairing set, when fed as part of the input to the LP solver in the next LPP iteration, may lead to a different LPP solution, leading to a different outcome of the subsequent $AirCROP$’s search. To curb this, the pairings in the set that trigger the LP solver are sorted in lexicographical order of their representative strings. These strings are constructed from the indices of the flights covered in the corresponding pairings. For instance, the string corresponding to a pairing that covers flights $f_{1}$, $f_{10}$, $f_{100}$ & $f_{200}$ is $1\\_10\\_100\\_200$. Given that the pairings are distinct, the resulting strings are distinct too, allowing for a crisp sorting criterion and ensuring a fixed pairing sequence in each $AirCROP$-run. * • Numerical seed for generation of pseudo-random numbers: variability may also be introduced if the numerical seed employed to generate pseudo-random numbers for use in the proposed modules or the utilized LP & MIP solvers, varies. For instance, use of the default seed method of Python (i.e., the current time of the computing system) across different $AirCROP$ runs may lead to different pseudo-random numbers, each time. This in turn would trigger variability in the IFS generated by IPDCH (since the random selection of flights in each of its iterations, is impacted), and the pairing set resulting from the CG heuristic (since each of the underlying CG strategy is impacted). Such variability could be negated by use of a fixed numerical seed, instead of a time dependent one. The intriguing questions for researchers could relate to the impact that presence or absence of causes of variability may have on the quality of $AirCROP$’s solutions, in terms of both cost and run-time. 
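Before turning to that evidence, note that the two determinism fixes described above (the lexicographic ordering of pairings and a fixed numerical seed) reduce to a few lines of Python; the pairing pool and the seed value in this minimal sketch are illustrative.

```python
import random

def pairing_key(flight_indices):
    """Representative string of a pairing, e.g. [1, 10, 100, 200] ->
    '1_10_100_200'; distinct pairings yield distinct strings, giving a
    crisp, run-independent sort criterion."""
    return "_".join(str(i) for i in flight_indices)

# Pairings arrive from parallel sub-processes in a CPU-dependent order ...
pool = [[2, 7], [1, 10, 100, 200], [1, 5]]
# ... so they are sorted lexicographically before invoking the LP solver.
pool.sort(key=pairing_key)
print(pool)     # [[1, 10, 100, 200], [1, 5], [2, 7]] on every run

random.seed(0)  # fixed numerical seed (Seed-alpha) instead of system time
```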
Table 5 attempts to shed light on these questions through empirical evidence for two test cases involving 3228 flights (TC-2) and 4212 flights (TC-5), respectively. In each of these test cases, the effect of variability is revealed through:

* • two independent runs (Run-1 and Run-2), in each of which the causes of variability exist, that is: (a) the permutation of pairings generated using the parallel architecture is possible, and (b) the default seed method of Python, based on the time of the computing system, applies.

* • three independent runs, in each of which the causes of variability have been eliminated, that is: (a) the lexicographical order of the pairings is imposed, and (b) a fixed numerical seed has been fed for random number generation. For these runs, the numerical seeds are given by $\alpha=0$, $\beta=1$, and $\gamma=2$, respectively.

The key observations and inferences that could be drawn from each test case in Table 5 are highlighted below.

* • understandably, Run-1 and Run-2 (corresponding to the same numerical seed) yield different-cost solutions over different run-times. Importantly, the variation in cost (despite the presence of causes of variability) is not alarming, though significantly different run-times may be required.

* • each run (corresponding to Seed-$\alpha$, Seed-$\beta$, and Seed-$\gamma$, respectively) where the causes of variability have been negated, if repeated, yields the same cost solution in the same run-time, though this is not shown in the table for paucity of space.

* • the runs corresponding to the numerical seeds given by $\alpha$, $\beta$, and $\gamma$, respectively, differ solely due to the difference in the corresponding random numbers generated, and subsequently utilized. It can be observed that the change in numerical seed does not significantly affect the cost-quality of the final $AirCROP$ solution, though the associated run-time may vary significantly.

The fact that $AirCROP$ can offer final solutions with comparable cost quality, regardless of the presence or absence of causes of variability, endorses the robustness of the constitutive modules of the $AirCROP$. Also, the variation in run-time could be attributed to different search trajectories corresponding to different permutations of variables or different random numbers. It may be noted that for the subsequent runs, the lexicographical order of the pairings and a fixed numerical seed (Seed-$\alpha=0$) have been utilized.

#### 4.3.3 Impact of Initialization on AirCROP’s Performance

This section investigates the sensitivity of $AirCROP$ with respect to the cost quality of the initial solution and the run-time spent to obtain it. Towards it, the initial solution is obtained using three different methods (offering three input alternatives with varying cost and run-time), and the cost quality of $AirCROP$’s final solution alongside the necessary run-time is noted. Notably, in an initial attempt to generate an IFS for large-scale CPOPs, the authors proposed a DFS algorithm based heuristic, namely, the Enhanced-DFS heuristic (Aggarwal et al., 2018). Its performance across the five test cases has been highlighted in Table 6. In that, TC-1 emerges as an outlier owing to alarmingly high run-time, when compared to all other test cases. Table 6: Performance of the Enhanced-DFS heuristic (Aggarwal et al., 2018) for IFS generation. Here, the real valued “Cost” is rounded-off to the next integer value, and the seconds’ values in the “Time” column are rounded-off to the next minute values.
Test Cases | Time (HH:MM) | Cost (USD) | # Pairings
---|---|---|---
TC-1 | 01:48 | 3863669070 | 477617
TC-2 | 00:02 | 167405376 | 26678
TC-3 | 00:03 | 167967482 | 26871
TC-4 | 00:13 | 1072078483 | 135269
TC-5 | 00:04 | 325922318 | 51920

A plausible explanation behind this aberration is that TC-1 involves some flights with very few legal flight connections, and a DFS based algorithm may have to exhaustively explore several flight connections to be able to generate an IFS with full flight coverage. The need to do away with the reliance on DFS, so as to have an equable run-time across different data sets, explains the motivation for:

* • the proposition of IPDCH in this paper, which, as highlighted in Section 3.2, relies on: (a) a divide-and-cover strategy to decompose the input flight schedule into sufficiently small flight subsets, and (b) IP to find a lowest-cost pairing set that covers the maximum-possible flights for each of the decomposed flight subsets.
* • the consideration of a commonly adopted Artificial Pairings method (Vance et al., 1997), that constructs a pairing set which covers all the flights, though some or all of the pairings may not be legal. Hence, for this method the initial solution is referred to as $\mathcal{P}_{IS}$ instead of $\mathcal{P}_{IFS}$.

Table 7: Performance assessment of $AirCROP$ on TC-1 and TC-5 when initialized using the proposed IPDCH, the Artificial Pairings method, and the Enhanced-DFS heuristic.
LPP-IPP | TC-1 | TC-5
---|---|---
Interactions | Enhanced-DFS | IPDCH | Artificial Pairings | Enhanced-DFS | IPDCH | Artificial Pairings
$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
$\mathcal{P}_{IFS}/\mathcal{P}_{IS}$ | 3863669070 | 01:48 | 74945982 | 00:05 | 14604919138 | $\approx$00:00 | 325922318 | 00:04 | 131443284 | 00:09 | 25409939785 | $\approx$00:00
1 | $\mathcal{P}_{LP}^{1}$ | 3463560 | 04:14 | 3465379 | 04:19 | 17589714 | 03:10 | 4583664 | 06:57 | 4584525 | 07:28 | 4585380 | 07:29
$\mathcal{P}_{IP}^{1}$ | 3650828 | 00:20 | 3689312 | 00:20 | 17833718 | 00:20 | 4943531 | 00:20 | 4974341 | 00:20 | 4960813 | 00:20
2 | $\mathcal{P}_{LP}^{2}$ | 3464678 | 01:51 | 3466567 | 01:32 | 17589851 | 01:29 | 4586675 | 03:41 | 4589091 | 02:25 | 4589470 | 03:34
$\mathcal{P}_{IP}^{2}$ | 3566415 | 00:20 | 3578030 | 00:20 | 17731125 | 00:20 | 4773342 | 00:20 | 4809348 | 00:20 | 4782657 | 00:20
3 | $\mathcal{P}_{LP}^{3}$ | 3466217 | 01:38 | 3467848 | 01:33 | 3466868 | 02:02 | 4586581 | 04:54 | 4589952 | 02:14 | 4593117 | 02:05
$\mathcal{P}_{IP}^{3}$ | 3531694 | 00:13 | 3556499 | 00:20 | 3553432 | 00:20 | 4701607 | 00:20 | 4736313 | 00:20 | 4672696 | 00:20
4 | $\mathcal{P}_{LP}^{4}$ | 3467672 | 01:19 | 3468777 | 01:26 | 3467935 | 00:47 | 4589568 | 01:51 | 4591145 | 02:36 | 4593938 | 02:18
$\mathcal{P}_{IP}^{4}$ | 3507987 | 00:01 | 3517901 | 00:01 | 3516376 | 00:02 | 4651824 | 00:20 | 4654627 | 00:20 | 4650449 | 00:06
5 | $\mathcal{P}_{LP}^{5}$ | 3469533 | 00:44 | 3468894 | 00:49 | 3468332 | 00:40 | 4591698 | 02:03 | 4592463 | 02:03 | 4596256 | 02:24
$\mathcal{P}_{IP}^{5}$ | 3483690 | 00:01 | 3499531 | 00:01 | 3496156 | 00:01 | 4616605 | 00:01 | 4632708 | 00:02 | 4620903 | 00:01
6 | $\mathcal{P}_{LP}^{6}$ | 3469276 | 00:52 | 3469352 | 00:48 | 3469095 | 00:47 | 4591969 | 01:05 | 4593094 | 02:00 | 4597203 | 00:49
$\mathcal{P}_{IP}^{6}$ | 3469276 | 00:01 | 3477354 | 00:01 | 3491947 | 00:01 | 4606253 | 00:01 | 4625993 | 00:01 | 4612164 | 00:01
7 | $\mathcal{P}_{LP}^{7}$ | | | 3469950 | 00:42 | 3469543 | 00:52 | 4592860 | 01:15 | 4593431 | 01:04 | 4597913 | 01:17
$\mathcal{P}_{IP}^{7}$ | | | 3469950 | 00:01 | 3487562 | 00:01 | 4592860 | 00:01 | 4619643 | 00:01 | 4606368 | 00:01
8 | $\mathcal{P}_{LP}^{8}$ | | | | | 3470100 | 00:38 | | | 4594146 | 01:03 | 4597730 | 02:00
$\mathcal{P}_{IP}^{8}$ | | | | | 3478057 | 00:01 | | | 4594146 | 00:01 | 4604551 | 00:01
9 | $\mathcal{P}_{LP}^{9}$ | | | | | 3470355 | 00:28 | | | | | 4597929 | 00:50
$\mathcal{P}_{IP}^{9}$ | | | | | 3470355 | 00:01 | | | | | 4597929 | 00:01
Final Solution | 3469276 | 13:22 | 3469950 | 12:18 | 3470355 | 12:00 | 4592860 | 23:13 | 4594146 | 22:27 | 4597929 | 23:57
∗All values in the “Cost” columns are in USD, where the real values are rounded-off to the next integer values. All values in the “Time” columns are in HH:MM, where the seconds’ values are rounded-off to the next minute values.

A comparison of the above three methods is drawn in Table 7 for TC-1 (posing a challenge to Enhanced-DFS) and TC-5 (the largest flight set). In that, besides the cost and run-time of the initial solution for each test case, the results of all the iterations of $AirCROP$ leading up to the final solution are presented. The latter is done to shed light on whether the cost quality of $AirCROP$’s final solution strongly depends on the cost of the initial solution. The prominent observations from Table 7 include:

* • In terms of run-time: IPDCH could outperform Enhanced-DFS, as its run-time happened to be less than ten minutes in both test cases. The Artificial Pairings method even outperforms IPDCH, since its run-time happened to be in milliseconds (reported as $\approx$00:00 in the table).
* • In terms of initial cost: IPDCH could again outperform Enhanced-DFS. This could be attributed to the use of IP to find a lowest-cost pairing set that covers the maximum-possible flights for each of the decomposed flight subsets. In contrast, the cost associated with the Artificial Pairings method is the worst. This is owing to a very high pseudo-cost attached to the pairings to offset their non-legality.

Critically, regardless of the significantly varying run-time and initial cost associated with the three methods, the variation in the cost of the final solution offered by $AirCROP$ is not significant. This endorses the robustness of its constitutive modules.

#### 4.3.4 Impact of Termination Settings of Optimization Engine’s Submodules on AirCROP’s Performance

This section investigates the sensitivity of $AirCROP$ to the termination parameter settings of the Optimization Engine’s submodules, namely, LPP-solutioning and IPP-solutioning. The parameters involved in LPP-solutioning are $Th_{cost}$ and $Th_{t}$, while $Th_{ipt}$ is involved in IPP-solutioning. To assess their impact on $AirCROP$’s performance, experiments are performed with three different sets of parameter settings each, for both submodules.

Impact of Termination Settings of CG-driven LPP-solutioning: As mentioned earlier, the CG-driven LPP-solutioning is terminated if the cost improvement per LPP iteration falls below the pre-specified threshold $Th_{cost}$ (in USD) over $Th_{t}$ successive LPP iterations. To achieve a reasonable balance between $AirCROP$’s run-time on the one hand and the cost reduction of the crew pairing solution on the other, three different sets of parameter settings are chosen and experimented with.
These settings of $\{Th_{cost},Th_{t}\}$, namely $\{500,5\}$, $\{100,10\}$, and $\{50,15\}$, symbolize relaxed, moderate and strict settings, respectively, since the criterion for terminating the CG-driven LPP-solutioning becomes progressively harder to meet as the settings change from $\{500,5\}$ to $\{50,15\}$. The results of the $AirCROP$-runs corresponding to these termination settings are reported in Table 8, and the key observations are highlighted below.

* • As the termination settings transition through the relaxed, moderate and strict settings, the run-time to obtain the final solution increases, while the cost of the final solution decreases. An apparent exception to this trend is observed in TC-5 with the strict setting, but this could be explained by the fact that the upper limit of 30 hours, set for $AirCROP$’s run-time under practical considerations, was exceeded during the fourth LPP-IPP interaction ($T=4$). It implies that, due to the enforced termination in this particular case, $AirCROP$ could not fully utilize the potential for cost reduction.
* • Despite the variation in the termination settings, the cost quality of $AirCROP$’s final solution does not vary as drastically as its run-time. For instance, as the settings switched from relaxed to moderate, an additional saving of 6384 USD could be achieved at the expense of an additional 5:20 of run-time in the case of TC-2, while these indicators stand at 13388 USD and 10:25, respectively, in the case of TC-5. It can also be inferred that $\{Th_{cost},Th_{t}\}$ set as $\{100,10\}$ possibly offers a fair balance between the solution’s cost quality and run-time, and this explains why these settings have been used as the base settings for the experimental results presented in this paper, beginning with Table 3 and ending with Table 9.

It is important to recognize that as the termination settings for LPP-solutioning are made stricter, its run-time is bound to increase. It is also fair to expect that the cost quality of the final solution may be better, though this cannot be guaranteed. Any such departure from the expected trend may be due to the dependence of the quality of the final solution on the quality of the IPP solution for each $T$. In that, if an IPP solution for a particular $T$ largely fails to approach the lower bound set by the corresponding LPP solution, it may negatively influence the cost quality obtained in the subsequent LPP- and IPP-solutioning phases. While such a possibility remains, it did not surface in the experiments above.
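For concreteness, the termination criterion described above can be captured in a few lines of Python; this is a minimal sketch under the assumption that the LPP objective value is observed once per iteration, and the function names are illustrative rather than $AirCROP$’s actual code.

```python
from collections import deque

def make_lpp_termination_check(th_cost=100.0, th_t=10):
    """Terminate CG-driven LPP-solutioning once the per-iteration cost
    improvement stays below th_cost (USD) for th_t successive LPP
    iterations; the defaults mirror the moderate setting {100, 10}."""
    improvements = deque(maxlen=th_t)  # last th_t cost improvements
    previous = [None]                  # cost of the preceding iteration

    def should_stop(current_cost):
        if previous[0] is not None:
            improvements.append(previous[0] - current_cost)
        previous[0] = current_cost
        return (len(improvements) == th_t
                and all(d < th_cost for d in improvements))

    return should_stop

# Toy usage with a sequence of shrinking LPP objective values:
stop = make_lpp_termination_check(th_cost=100.0, th_t=3)
for cost in [3495054, 3495030, 3495000, 3494990, 3494985]:
    if stop(cost):
        print("terminate LPP-solutioning at cost", cost)  # fires at 3494990
        break
```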
Table 8: Performance assessment of $AirCROP$ on TC-2 and TC-5, against three different termination settings (Relaxed, Moderate and Strict Settings) of the CG-driven LPP-solutioning∗
LPP-IPP | TC-2 | TC-5
---|---|---
Interactions | Relaxed Setting | Moderate Setting | Strict Setting | Relaxed Setting | Moderate Setting | Strict Setting
| | $\bm{Th_{cost}=}$ 500, $\bm{Th_{t}=}$ 5 | $\bm{Th_{cost}=}$ 100, $\bm{Th_{t}=}$ 10 | $\bm{Th_{cost}=}$ 50, $\bm{Th_{t}=}$ 15 | $\bm{Th_{cost}=}$ 500, $\bm{Th_{t}=}$ 5 | $\bm{Th_{cost}=}$ 100, $\bm{Th_{t}=}$ 10 | $\bm{Th_{cost}=}$ 50, $\bm{Th_{t}=}$ 15
$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
$\mathcal{P}_{IFS}$ | 129221508 | 00:08 | 129221508 | 00:08 | 129221508 | 00:08 | 131443284 | 00:09 | 131443284 | 00:09 | 131443284 | 00:09
1 | $\mathcal{P}_{LP}^{1}$ | 3510231 | 01:12 | 3495054 | 03:51 | 3489337 | 08:21 | 4603984 | 02:43 | 4584525 | 07:28 | 4581711 | 10:05
$\mathcal{P}_{IP}^{1}$ | 3844119 | 00:20 | 3769811 | 00:20 | 3698316 | 00:20 | 5165821 | 00:20 | 4974341 | 00:20 | 4946510 | 00:20
2 | $\mathcal{P}_{LP}^{2}$ | 3510105 | 00:26 | 3495311 | 01:57 | 3491725 | 03:36 | 4605049 | 01:12 | 4589091 | 02:25 | 4582977 | 09:32
$\mathcal{P}_{IP}^{2}$ | 3729820 | 00:20 | 3628514 | 00:20 | 3607470 | 00:20 | 4962643 | 00:20 | 4782657 | 00:20 | 4780005 | 00:20
3 | $\mathcal{P}_{LP}^{3}$ | 3506864 | 00:35 | 3497558 | 00:52 | 3491685 | 04:15 | 4602283 | 01:09 | 4589952 | 02:14 | 4585457 | 05:56
$\mathcal{P}_{IP}^{3}$ | 3659201 | 00:20 | 3566092 | 00:11 | 3578774 | 00:20 | 4818918 | 00:20 | 4736313 | 00:20 | 4678596 | 00:20
4 | $\mathcal{P}_{LP}^{4}$ | 3506644 | 00:32 | 3499237 | 01:26 | 3494201 | 02:29 | 4604535 | 00:52 | 4591145 | 02:36 | 4595692 | 03:34
$\mathcal{P}_{IP}^{4}$ | 3606381 | 00:20 | 3516807 | 00:01 | 3540972 | 00:01 | 4727106 | 00:20 | 4654627 | 00:20 | 4624747 | 00:01
5 | $\mathcal{P}_{LP}^{5}$ | 3507647 | 00:29 | 3500169 | 00:42 | 3494409 | 02:36 | 4603253 | 00:47 | 4592463 | 02:03 | |
$\mathcal{P}_{IP}^{5}$ | 3559484 | 00:04 | 3517585 | 00:01 | 3527254 | 00:01 | 4683130 | 00:20 | 4632708 | 00:02 | |
6 | $\mathcal{P}_{LP}^{6}$ | 3507101 | 00:20 | 3501523 | 00:43 | 3496498 | 01:00 | 4603093 | 00:45 | 4593094 | 02:00 | |
$\mathcal{P}_{IP}^{6}$ | 3547304 | 00:02 | 3504085 | 00:01 | 3496498 | 00:01 | 4681335 | 00:20 | 4625993 | 00:01 | |
7 | $\mathcal{P}_{LP}^{7}$ | 3508166 | 00:18 | 3502118 | 00:31 | | | 4603638 | 00:46 | 4593431 | 01:04 | |
$\mathcal{P}_{IP}^{7}$ | 3517436 | 00:01 | 3502118 | 00:01 | | | 4651002 | 00:06 | 4619643 | 00:01 | |
8 | $\mathcal{P}_{LP}^{8}$ | 3508502 | 00:17 | | | | | 4604073 | 00:44 | 4594146 | 01:03 | |
$\mathcal{P}_{IP}^{8}$ | 3508502 | 00:01 | | | | | 4634316 | 00:02 | 4594146 | 00:01 | |
9 | $\mathcal{P}_{LP}^{9}$ | | | | | | | 4606250 | 00:28 | | | |
$\mathcal{P}_{IP}^{9}$ | | | | | | | 4614420 | 00:01 | | | |
10 | $\mathcal{P}_{LP}^{10}$ | | | | | | | 4607534 | 00:17 | | | |
$\mathcal{P}_{IP}^{10}$ | | | | | | | 4607534 | 00:01 | | | |
Final Solution | 3508502 | 05:45 | 3502118 | 11:05 | 3496498 | 23:28 | 4607534 | 12:02 | 4594146 | 22:27 | 4624747 | 30:17
∗All values in the “Cost” columns are in USD, and all the corresponding real values are rounded-off to the next integer values. All values in the “Time” columns are in HH:MM, and all the corresponding seconds’ values are rounded-off to the next minute values.
Impact of Termination Settings of IPP-solutioning: As mentioned before, integerization of an LPP solution using an MIP solver is extremely time-consuming, particularly for large-scale CPOPs, and more so for those involving complex flight networks. Hence, from a practical perspective, the $AirCROP$ framework imposes an upper time limit on IPP-solutioning (for any given $T$), namely $Th_{ipt}$, in case the MIP search does not terminate on its own beforehand. To investigate the impact of $Th_{ipt}$ on $AirCROP$’s performance, experiments are performed with three different settings, namely, 00:20 (one-third of an hour), 00:40 (two-thirds of an hour), and 01:00 (an hour). The results are presented in Table 9, and the key observations are as follows. In the case of TC-2, as $Th_{ipt}$ is raised, the run-time to obtain the final solution increases, while the cost of the final solution decreases. However, there are exceptions to this trend in the case of TC-5. Notably, the cost quality of the final solution corresponding to $Th_{ipt}=$ 00:20 remains superior to that obtained for both $Th_{ipt}=$ 00:40 and 01:00. For these two settings, the quality of the LPP solution at $T=8$ turned out worse compared to the case of $Th_{ipt}=$ 00:20, and the gap could not be bridged even in the subsequent LPP-IPP interaction ($T=9$). The worsening of the LPP solution could be attributed to the fact that LPP-solutioning relies on random-number-based heuristics, and the resulting pairing combinations may not necessarily offer a lower cost within the pre-specified termination settings. Based on the above, it may be inferred that despite the changes in the termination parameter settings, $AirCROP$ is able to offer solutions with reasonably close cost quality, though significant variations in run-time may be observed. It is also evident that even the lowest setting, $Th_{ipt}=$ 00:20 (desirable from a practical perspective), offers a good balance between the solution’s cost quality and run-time, and this explains why it has been used as the base setting for the experimental results presented in this paper.
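As an illustration of how such a wall-clock threshold can be imposed in practice, the following is a minimal Python sketch using the TimeLimit parameter of the Gurobi solver (which the paper cites); the helper function is hypothetical, and the construction of the set-partitioning model itself is elided.

```python
import gurobipy as gp

def integerize(model: gp.Model, th_ipt_minutes: int = 20) -> gp.Model:
    """Run the MIP search on an already-built IPP model, stopping after
    th_ipt_minutes (Th_ipt) if it has not self-terminated; Gurobi then
    returns the best integer solution (incumbent) found so far."""
    model.setParam("TimeLimit", th_ipt_minutes * 60)  # limit in seconds
    model.optimize()
    return model
```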
Table 9: Performance assessment of $AirCROP$ on TC-2 and TC-5, against three different termination settings ($Th_{ipt}=$ 00:20, 00:40 & 01:00) of the IPP-solutioning∗
LPP-IPP | TC-2 | TC-5
---|---|---
Interactions | $\bm{Th_{ipt}=}$ 00:20 | $\bm{Th_{ipt}=}$ 00:40 | $\bm{Th_{ipt}=}$ 01:00 | $\bm{Th_{ipt}=}$ 00:20 | $\bm{Th_{ipt}=}$ 00:40 | $\bm{Th_{ipt}=}$ 01:00
$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
$\mathcal{P}_{IFS}$ | 129221508 | 00:08 | 129221508 | 00:08 | 129221508 | 00:08 | 131443284 | 00:09 | 131443284 | 00:09 | 131443284 | 00:09
1 | $\mathcal{P}_{LP}^{1}$ | 3495054 | 03:51 | 3495054 | 04:21 | 3495054 | 03:57 | 4584525 | 07:28 | 4584525 | 07:46 | 4584525 | 07:49
$\mathcal{P}_{IP}^{1}$ | 3769811 | 00:20 | 3744301 | 00:40 | 3760028 | 01:00 | 4974341 | 00:20 | 4958532 | 00:40 | 4987497 | 01:00
2 | $\mathcal{P}_{LP}^{2}$ | 3495311 | 01:57 | 3497483 | 01:49 | 3495562 | 02:19 | 4589091 | 02:25 | 4585347 | 05:00 | 4588371 | 03:56
$\mathcal{P}_{IP}^{2}$ | 3628514 | 00:20 | 3629401 | 00:40 | 3632875 | 01:00 | 4782657 | 00:20 | 4778465 | 00:40 | 4766924 | 01:00
3 | $\mathcal{P}_{LP}^{3}$ | 3497558 | 00:52 | 3497473 | 01:33 | 3494305 | 01:50 | 4589952 | 02:14 | 4589481 | 02:56 | 4588911 | 04:25
$\mathcal{P}_{IP}^{3}$ | 3566092 | 00:11 | 3566247 | 00:40 | 3579899 | 01:00 | 4736313 | 00:20 | 4699845 | 00:40 | 4713402 | 01:00
4 | $\mathcal{P}_{LP}^{4}$ | 3499237 | 01:26 | 3500607 | 01:06 | 3495273 | 01:13 | 4591145 | 02:36 | 4590618 | 01:56 | 4591028 | 02:09
$\mathcal{P}_{IP}^{4}$ | 3516807 | 00:01 | 3524672 | 00:01 | 3551863 | 00:05 | 4654627 | 00:20 | 4656611 | 00:40 | 4681015 | 01:00
5 | $\mathcal{P}_{LP}^{5}$ | 3500169 | 00:42 | 3501809 | 00:49 | 3496754 | 00:52 | 4592463 | 02:03 | 4591826 | 01:18 | 4591448 | 02:03
$\mathcal{P}_{IP}^{5}$ | 3517585 | 00:01 | 3501809 | 00:01 | 3528564 | 00:01 | 4632708 | 00:02 | 4644467 | 00:15 | 4639287 | 00:29
6 | $\mathcal{P}_{LP}^{6}$ | 3501523 | 00:43 | | | 3496342 | 00:53 | 4593094 | 02:00 | 4592492 | 02:21 | 4591372 | 02:06
$\mathcal{P}_{IP}^{6}$ | 3504085 | 00:01 | | | 3512692 | 00:01 | 4625993 | 00:01 | 4617694 | 00:01 | 4616944 | 00:01
7 | $\mathcal{P}_{LP}^{7}$ | 3502118 | 00:31 | | | 3497967 | 00:59 | 4593431 | 01:04 | 4594599 | 01:30 | 4594479 | 01:23
$\mathcal{P}_{IP}^{7}$ | 3502118 | 00:01 | | | 3519996 | 00:01 | 4619643 | 00:01 | 4607261 | 00:01 | 4608085 | 00:01
8 | $\mathcal{P}_{LP}^{8}$ | | | | | 3498726 | 01:24 | 4594146 | 01:03 | 4595739 | 01:08 | 4595424 | 01:03
$\mathcal{P}_{IP}^{8}$ | | | | | 3518299 | 00:01 | 4594146 | 00:01 | 4598624 | 00:01 | 4603634 | 00:01
9 | $\mathcal{P}_{LP}^{9}$ | | | | | 3499104 | 00:40 | | | 4595703 | 00:45 | 4596929 | 00:59
$\mathcal{P}_{IP}^{9}$ | | | | | 3504258 | 00:01 | | | 4595703 | 00:01 | 4596929 | 00:01
10 | $\mathcal{P}_{LP}^{10}$ | | | | | 3499117 | 01:10 | | | | | |
$\mathcal{P}_{IP}^{10}$ | | | | | 3509608 | 00:01 | | | | | |
11 | $\mathcal{P}_{LP}^{11}$ | | | | | 3499609 | 00:45 | | | | | |
$\mathcal{P}_{IP}^{11}$ | | | | | 3499609 | 00:01 | | | | | |
Final solution | 3502118 | 11:05 | 3501809 | 12:24 | 3499609 | 19:22 | 4594146 | 22:27 | 4595703 | 27:55 | 4596929 | 30:35
∗All values in the “Cost” columns are in USD, and all the corresponding real values are rounded-off to the next integer values. All values in the “Time” columns are in HH:MM, and all the corresponding seconds’ values are rounded-off to the next minute values.
## 5 Conclusion and Future Research

For an airline, crew operating cost is the second-largest expense after the fuel cost, making crew pairing optimization critical for business viability. Over the last three decades, CPOP has received unprecedented attention from the OR community, as a result of which numerous CPOP solution approaches have been proposed. Yet, the emergent flight networks with conjunct scale and complexity largely remain unaddressed in the available literature. Such a scenario is all the more alarming considering that air traffic is expected to double over the next 20 years, wherein most airlines may need to cater to multiple crew bases and multiple hub-and-spoke subnetworks. This research has proposed an Airline Crew Pairing Optimization Framework ($AirCROP$) based on domain-knowledge-driven CG strategies for efficiently tackling real-world, large-scale and complex flight networks. This paper has presented not just the design of $AirCROP$’s constitutive modules, but has also shared insights on how these modules interact and how sensitive $AirCROP$’s performance is to the sources of variability, the choice of different methods, and the parameter settings. Given a CPOP, $AirCROP$ first preprocesses the entire duty overnight-connection network via its Legal Crew Pairing Generator (this module is utilized again to generate legal crew pairings when required in real-time by the other modules of $AirCROP$). Subsequently, $AirCROP$ is initialized using an IFS generated by the proposed method (IPDCH). Next, $AirCROP$’s Optimization Engine attempts to find a good-quality CPOP solution via intermittent interactions of its submodules, namely, CG-driven LPP-solutioning and IPP-solutioning. The efficacy of $AirCROP$ has been demonstrated on a real-world airline flight network characterized by an unprecedented (in reference to the available literature) conjunct scale and complexity, marked by over 4200 flights, 15 crew bases, multiple hub-and-spoke subnetworks, and billion-plus pairings. The distinctive contribution of this paper is also embedded in its empirical investigation of critically important questions relating to variability and sensitivity, on which the literature is otherwise silent. In that:

* • first, the sensitivity analysis of $AirCROP$ is performed in the presence and absence of the sources of variability. It is empirically highlighted that $AirCROP$ is capable of offering comparable cost solutions in both the presence and absence of the sources of variability. This endorses the robustness of its constitutive modules.
* • second, the sensitivity of $AirCROP$ with respect to the cost quality of the initial solution and the associated run-time is investigated vis-à-vis three different initialization methods. Again, the robustness of $AirCROP$ is endorsed, considering that it is found to be capable of offering similar cost solutions, despite the significantly varying cost and run-time of the initial solutions.
* • last, the sensitivity of $AirCROP$ to the termination parameter settings associated with the Optimization Engine’s submodules is investigated. The fact that, with the variation in termination settings of both LPP-solutioning and IPP-solutioning (independently of each other), $AirCROP$’s performance strongly aligns with the logically expected trends is a testimony to the robustness of its constitutive modules.
Notably, $AirCROP$ has been implemented using the Python scripting language, in line with the industrial sponsor’s preferences. However, a significant reduction in run-time could be achieved by the use of compiled programming languages such as C++, Java, etc. Moreover, employing the domain-knowledge-driven CG strategies during the IPP-solutioning phase too may augment the overall cost- and time-efficiency of $AirCROP$. Furthermore, the emerging trend of utilizing Machine Learning capabilities to assist combinatorial optimization tasks may also hold promise for airline crew pairing optimization, towards which an exploratory attempt has been made by the authors (Aggarwal, Singh, & Saxena, 2020). Despite the scope for improvement, the authors hope that, with the emergent trend of evolving scale and complexity of airline flight networks, this paper shall serve as an important milestone for the affiliated research and applications.

## Acknowledgment

This research work is a part of an Indo-Dutch joint research project, supported by the Ministry of Electronics and Information Technology (MEITY), India [grant number 13(4)/2015-CC&BT]; the Netherlands Organization for Scientific Research (NWO), the Netherlands; and General Electric (GE) Aviation, India. The authors thank GE Aviation, particularly Saaju Paulose (Senior Manager), Arioli Arumugam (Senior Director, Data & Analytics), and Alla Rajesh (Senior Staff Data & Analytics Scientist), for providing real-world test cases and for sharing their domain knowledge, which has helped the authors significantly in successfully completing this research work.

## References

* Achterberg, T., & Wunderling, R. (2013). Mixed integer programming: Analyzing 12 years of progress. In Facets of combinatorial optimization (pp. 449–481). Springer.
* Aggarwal, D., Saxena, D. K., Bäck, T., & Emmerich, M. (2020a). A novel column generation heuristic for airline crew pairing optimization with large-scale complex flight networks. arXiv preprint arXiv:2005.08636. https://arxiv.org/abs/2005.08636v3
* Aggarwal, D., Saxena, D. K., Bäck, T., & Emmerich, M. (2020b). Real-world airline crew pairing optimization: Customized genetic algorithm versus column generation method. arXiv preprint arXiv:2003.03792. http://arxiv.org/abs/2003.03792
* Aggarwal, D., Saxena, D. K., Emmerich, M., & Paulose, S. (2018, November). On large-scale airline crew pairing generation. In 2018 IEEE Symposium Series on Computational Intelligence (SSCI) (pp. 593–600).
* Aggarwal, D., Singh, Y. K., & Saxena, D. K. (2020). On learning combinatorial patterns to assist large-scale airline crew pairing optimization. arXiv preprint arXiv:2004.13714. https://arxiv.org/abs/2004.13714v3
* Anbil, R., Forrest, J. J., & Pulleyblank, W. R. (1998). Column generation and the airline crew pairing problem. Documenta Mathematica, 3, 677.
* Anbil, R., Gelman, E., Patty, B., & Tanga, R. (1991). Recent advances in crew-pairing optimization at American Airlines. Interfaces, 21(1), 62–74.
* Anbil, R., Tanga, R., & Johnson, E. L. (1992). A global approach to crew-pairing optimization. IBM Systems Journal, 31(1), 71–78.
* Andersen, E. D., & Andersen, K. D. (2000). The MOSEK interior point optimizer for linear programming: An implementation of the homogeneous algorithm. In High performance optimization (pp. 197–232). Springer.
* Barnhart, C., Cohn, A. M., Johnson, E. L., Klabjan, D., Nemhauser, G. L., & Vance, P. H. (2003). Airline crew scheduling. In Handbook of transportation science (pp. 517–560). Springer.
* Barnhart, C., Johnson, E. L., Nemhauser, G. L., Savelsbergh, M. W., & Vance, P. H. (1998). Branch-and-price: Column generation for solving huge integer programs. Operations Research, 46(3), 316–329.
* Beasley, J. E., & Chu, P. C. (1996). A genetic algorithm for the set covering problem. European Journal of Operational Research, 94(2), 392–404.
* Bertsimas, D., & Tsitsiklis, J. N. (1997). Introduction to linear optimization (Vol. 6). Athena Scientific, Belmont, MA.
* Desaulniers, G., Desrosiers, J., Dumas, Y., Marc, S., Rioux, B., Solomon, M. M., & Soumis, F. (1997). Crew pairing at Air France. European Journal of Operational Research, 97(2), 245–259.
* Desaulniers, G., & Soumis, F. (2010). Airline crew scheduling by column generation. CIRRELT Spring School, Montréal, Canada.
* Desrochers, M., Desrosiers, J., & Solomon, M. (1992). A new optimization algorithm for the vehicle routing problem with time windows. Operations Research, 40(2), 342–354.
* Desrochers, M., & Soumis, F. (1989). A column generation approach to the urban transit crew scheduling problem. Transportation Science, 23(1), 1–13.
* Desrosiers, J., Dumas, Y., Desrochers, M., Soumis, F., Sanso, B., & Trudeau, P. (1991). A breakthrough in airline crew scheduling (Report G-91-11). Montreal: Cahiers du GERAD.
* Desrosiers, J., Soumis, F., & Desrochers, M. (1984). Routing with time windows by column generation. Networks, 14(4), 545–565.
* Deveci, M., & Demirel, N. Ç. (2018a). Evolutionary algorithms for solving the airline crew pairing problem. Computers & Industrial Engineering, 115, 389–406.
* Deveci, M., & Demirel, N. Ç. (2018b). A survey of the literature on airline crew scheduling. Engineering Applications of Artificial Intelligence, 74, 54–69.
* Du Merle, O., Villeneuve, D., Desrosiers, J., & Hansen, P. (1999). Stabilized column generation. Discrete Mathematics, 194(1–3), 229–237.
* Garey, M. R., & Johnson, D. S. (1979). Computers and intractability: A guide to the theory of NP-completeness (Vol. 44). New York: W. H. Freeman & Company.
* Gershkoff, I. (1989). Optimizing flight crew schedules. Interfaces, 19(4), 29–43.
* Goldberg, D. E. (2006). Genetic algorithms. Pearson Education India.
* Gurobi Optimization, LLC. (2019). Gurobi optimizer reference manual. http://www.gurobi.com
* Gustafsson, T. (1999). A heuristic approach to column generation for airline crew scheduling. Department of Mathematics, Chalmers University of Technology.
* Hoffman, K. L., & Padberg, M. (1993). Solving airline crew scheduling problems by branch-and-cut. Management Science, 39(6), 657–682.
* Karmarkar, N. (1984). A new polynomial-time algorithm for linear programming. In Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing (pp. 302–311).
* Kasirzadeh, A., Saddoune, M., & Soumis, F. (2017). Airline crew scheduling: Models, algorithms, and data sets. EURO Journal on Transportation and Logistics, 6(2), 111–137.
* Koch, T., Achterberg, T., Andersen, E., Bastert, O., Berthold, T., Bixby, R. E., et al. (2011). MIPLIB 2010. Mathematical Programming Computation, 3(2), 103.
* Kornilakis, H., & Stamatopoulos, P. (2002). Crew pairing optimization with genetic algorithms. In Hellenic Conference on Artificial Intelligence (pp. 109–120).
* Land, A. H., & Doig, A. G. (1960). An automatic method of solving discrete programming problems. Econometrica, 28(3), 497–520.
* Levine, D. (1996). Application of a hybrid genetic algorithm to airline crew scheduling. Computers & Operations Research, 23(6), 547–558.
* Linderoth, J. T., & Lodi, A. (2011). MILP software. In J. J. Cochran (Ed.), Wiley Encyclopedia of Operations Research and Management Science. John Wiley & Sons.
* Lodi, A. (2009). Mixed integer programming computation. In M. Jünger et al. (Eds.). Springer-Verlag.
* Lodi, A., & Tramontani, A. (2013). Performance variability in mixed-integer programming. In Theory Driven by Influential Applications (pp. 1–12). INFORMS.
* Lübbecke, M. E. (2010). Column generation. In Wiley Encyclopedia of Operations Research and Management Science.
* Lübbecke, M. E., & Desrosiers, J. (2005). Selected topics in column generation. Operations Research, 53(6), 1007–1023.
* Marsten, R. (1994). Crew planning at Delta Airlines. Presentation at the XV Mathematical Programming Symposium, Ann Arbor, MI, USA.
* Ozdemir, H. T., & Mohan, C. K. (2001). Flight graph based genetic algorithm for crew scheduling in airlines. Information Sciences, 133(3–4), 165–173.
* Padberg, M., & Rinaldi, G. (1991). A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Review, 33(1), 60–100.
* Tarjan, R. (1972). Depth-first search and linear graph algorithms. SIAM Journal on Computing, 1(2), 146–160.
* Vance, P. H., Barnhart, C., Gelman, E., Johnson, E. L., Krishna, A., Mahidhara, D., & Rebello, R. (1997). A heuristic branch-and-price approach for the airline crew pairing problem (Report LEC-97-06). Atlanta: Georgia Institute of Technology.
* Vazirani, V. V. (2003). Approximation algorithms (Chapter 13). Springer, Berlin, Heidelberg.
* Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., et al. (2020). SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17, 261–272. https://doi.org/10.1038/s41592-019-0686-2
* Zeren, B., & Özkol, İ. (2012). An improved genetic algorithm for crew pairing optimization. Journal of Intelligent Learning Systems and Applications, 4(1), 70.
* Zeren, B., & Özkol, İ. (2016). A novel column generation strategy for large scale airline crew pairing problems. Expert Systems with Applications, 55, 133–144.
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)

CERN-EP-2020-023 LHCb-PAPER-2020-001 May 28, 2020

Search for the rare decays $B^{0}_{s}\rightarrow e^{+}e^{-}$ and $B^{0}\rightarrow e^{+}e^{-}$

LHCb collaboration (authors are listed at the end of this Letter)

A search for the decays $B^{0}_{s}\rightarrow e^{+}e^{-}$ and $B^{0}\rightarrow e^{+}e^{-}$ is performed using data collected with the LHCb experiment in proton-proton collisions at center-of-mass energies of $7$, $8$ and $13\,\text{TeV}$, corresponding to integrated luminosities of $1$, $2$ and $2\,\text{fb}^{-1}$, respectively. No signal is observed. Assuming no contribution from $B^{0}\rightarrow e^{+}e^{-}$ decays, an upper limit on the branching fraction $\mathcal{B}(B^{0}_{s}\rightarrow e^{+}e^{-})<9.4\,(11.2)\times 10^{-9}$ is obtained at $90\,(95)\,\%$ confidence level. If no $B^{0}_{s}\rightarrow e^{+}e^{-}$ contribution is assumed, a limit of $\mathcal{B}(B^{0}\rightarrow e^{+}e^{-})<2.5\,(3.0)\times 10^{-9}$ is determined at $90\,(95)\,\%$ confidence level. These upper limits are more than one order of magnitude lower than the previous values.

Published in Phys. Rev. Lett. 124 (2020) 211802

© 2020 CERN for the benefit of the LHCb collaboration. CC BY 4.0 licence.

Searches for rare particle decays provide ideal probes for contributions from physics processes beyond the Standard Model (SM). Recent measurements of decays involving $b\rightarrow s\ell^{+}\ell^{-}$ transitions (the inclusion of charge-conjugated processes is implied throughout this Letter) hint at deviations from SM predictions in lepton-flavor universality tests [1, 2, 3, 4, 5, 6] and thus motivate measurements of decay rates into final states involving leptons. Following the observation of the decay $B^{0}_{s}\rightarrow\mu^{+}\mu^{-}$ [7, 8], the search for $B^{0}_{s}\rightarrow e^{+}e^{-}$ and $B^{0}\rightarrow e^{+}e^{-}$ decays provides an independent test of lepton-flavor universality. According to SM predictions (calculated from Ref. [9], neglecting QED corrections that are expected to be at the percent level), $B^{0}_{(s)}\rightarrow e^{+}e^{-}$ decays have branching fractions of $\mathcal{B}(B^{0}_{s}\rightarrow e^{+}e^{-})=(8.60\pm 0.36)\times 10^{-14}$ and $\mathcal{B}(B^{0}\rightarrow e^{+}e^{-})=(2.41\pm 0.13)\times 10^{-15}$. With contributions beyond the SM, these branching fractions could be significantly larger, reaching values of $\mathcal{O}(10^{-8})$ for $\mathcal{B}(B^{0}_{s}\rightarrow e^{+}e^{-})$ and $\mathcal{O}(10^{-10})$ for $\mathcal{B}(B^{0}\rightarrow e^{+}e^{-})$ [10]. These values are close to the current experimental bounds of $\mathcal{B}(B^{0}_{s}\rightarrow e^{+}e^{-})<2.8\times 10^{-7}$ and $\mathcal{B}(B^{0}\rightarrow e^{+}e^{-})<8.3\times 10^{-8}$ at $90\,\%$ confidence level (CL) [11], set by the CDF collaboration. In this Letter a search for $B^{0}_{s}\rightarrow e^{+}e^{-}$ and $B^{0}\rightarrow e^{+}e^{-}$ decays is presented using data collected with the LHCb experiment in proton-proton collisions at center-of-mass energies of $7\,\text{TeV}$ in 2011, $8\,\text{TeV}$ in 2012 and $13\,\text{TeV}$ in 2015 and 2016, corresponding to integrated luminosities of $1$, $2$ and $2\,\text{fb}^{-1}$, respectively.
The signal yields are determined from a fit to the data and normalized to those of the $B^{+}\rightarrow J/\psi K^{+}$ decay, where the $J/\psi$ meson decays to $e^{+}e^{-}$, which has a precisely measured branching fraction [12] and a similar dielectron signature in the detector. The LHCb detector [13, 14] is a single-arm forward spectrometer covering the pseudorapidity range $2<\eta<5$, designed for the study of particles containing $b$ or $c$ quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about $4\,\text{Tm}$, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The online event selection is performed by a trigger [15], which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. At the hardware trigger stage, events are required to have a high-energy deposit in the calorimeters associated with a signal electron candidate, or a muon candidate with high transverse momentum $p_{\mathrm{T}}$, or a photon, electron or hadron candidate with high transverse energy from the decays of other particles from the $pp$ collision. The software trigger requires a two-track secondary vertex with a significant displacement from any primary $pp$ interaction vertex (PV). At least one charged particle must have high $p_{\mathrm{T}}$ and be inconsistent with originating from a PV. A multivariate algorithm [16, 17] is used in the trigger for the identification of secondary vertices consistent with the decay of a $b$ hadron. Simulated samples are used to optimize the candidate selection, estimate selection efficiencies and describe the expected invariant-mass shapes of the signal candidates and background decays. In the simulation, $pp$ collisions are generated using Pythia [18, 19] with a specific LHCb configuration [20]. Decays of unstable particles are described by EvtGen [21], in which final-state radiation is generated using Photos [22]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [23, 24] as described in Ref. [25]. The simulation is corrected for data-simulation differences in $B$-meson production kinematics, detector occupancy and isolation criteria [26] using $B^{+}\rightarrow J/\psi K^{+}$ and $B^{0}_{s}\rightarrow J/\psi\phi$ decays, with $J/\psi\rightarrow e^{+}e^{-}$ and $\phi\rightarrow K^{+}K^{-}$. Particle identification variables are calibrated using data from $B^{+}\rightarrow J/\psi K^{+}$ and $D^{0}\rightarrow K^{-}\pi^{+}$ decays [27].
The calibration data are binned in momentum and pseudorapidity of the particle as well as detector occupancy, to account for possible differences in kinematics between the investigated decay and the calibration data. The $B^{0}_{(s)}\rightarrow e^{+}e^{-}$ candidates are selected in events passing the trigger requirements by combining two tracks that are inconsistent with originating from any PV in the event and which form a good-quality secondary vertex. The tracks are also required to have a momentum larger than $3\,\text{GeV}/c$ and $p_{\mathrm{T}}$ greater than $500\,\text{MeV}/c$, and must be identified as electrons using information from the Cherenkov detectors and calorimeters. The dielectron candidate’s momentum must be aligned with the vector pointing from a PV (the associated PV) to the two-track vertex and have a considerable transverse component. The candidate must also have an invariant mass in the range $[4166,6566]\,\text{MeV}/c^{2}$. The measured electron momenta are corrected for losses due to bremsstrahlung radiation by adding the momentum of photons consistent with being emitted upstream of the magnet [28]. Candidates in data and simulation are separated into three categories with either zero, one, or both electrons having a bremsstrahlung correction applied. To avoid experimenters’ bias, the narrowest dielectron invariant-mass region containing $90\,\%$ of simulated $B^{0}_{s}\rightarrow e^{+}e^{-}$ decays, corresponding to a range of $[4689,5588]\,\text{MeV}/c^{2}$, was removed from the data set until the analysis procedure was finalized. Candidates for the normalization mode, $B^{+}\rightarrow J/\psi K^{+}$, are constructed similarly, but require an additional track consistent with being a kaon and originating from the same vertex as the dielectron candidate. The dielectron candidate must have an invariant mass in the range $[2450,3176]\,\text{MeV}/c^{2}$, consistent with arising from a $J/\psi$ meson decay. In addition, the reconstructed $B^{+}$ candidate mass, when the dielectron candidate is constrained to the known $J/\psi$ mass [12], must be above $5175\,\text{MeV}/c^{2}$, suppressing partially reconstructed decays. A boosted decision tree (BDT) algorithm [29, 30, 31] is used to separate $B^{0}_{(s)}\rightarrow e^{+}e^{-}$ signal from random combinations of two electrons (combinatorial background). The BDT is trained separately for the data-taking periods 2011–2012 (Run 1) and 2015–2016 (Run 2), on simulated $B^{0}_{s}\rightarrow e^{+}e^{-}$ decays as signal proxy and on dielectron candidates from data with a mass above $5588\,\text{MeV}/c^{2}$ as background proxy. The split between the data-taking periods accounts for changes in the center-of-mass energies and trigger strategies, which significantly impact the data distributions, and for improvements to the BDT and the particle-identification algorithms in Run 2. It is checked that the data behave consistently across the data-taking periods.
The BDT input variables comprise the following: kinematic information on the electron tracks and the $B$ candidate, information on the displacement of the electrons and the $B$ candidate from the associated PV, and isolation variables that quantify the compatibility of other tracks in the event with originating from the same decay as the $B$ candidate [26, 32]. Candidates with a BDT response compatible with that of the background are discarded, with the threshold chosen by maximizing the figure of merit $\epsilon_{\text{signal}}/(\sqrt{N_{\text{background}}}+3/2)$ [33], where $\epsilon_{\text{signal}}$ is the signal efficiency and $N_{\text{background}}$ is the expected background yield in the signal region. The final selected data set is separated by data-taking period and by category of bremsstrahlung correction. The branching fraction $\mathcal{B}(B^{0}_{(s)}\to e^{+}e^{-})$ is measured relative to that of the normalization channel via

$$\mathcal{B}(B^{0}_{(s)}\to e^{+}e^{-})=N(B^{0}_{(s)}\to e^{+}e^{-})\times\alpha\times\mathcal{B}(B^{+}\to J/\psi K^{+})\times\left(\frac{f_{d(s)}}{f_{u}}\right)^{-1},\qquad(1)$$

where

$$\alpha\equiv\frac{\varepsilon(B^{+}\to J/\psi K^{+})}{\varepsilon(B^{0}_{(s)}\to e^{+}e^{-})}\times\frac{1}{N(B^{+}\to J/\psi K^{+})},\qquad(2)$$

$\varepsilon(B^{0}_{(s)}\to e^{+}e^{-})$ and $\varepsilon(B^{+}\to J/\psi K^{+})$ denote the efficiencies of the signal and normalization modes, and $N(B^{0}_{(s)}\to e^{+}e^{-})$ and $N(B^{+}\to J/\psi K^{+})$ their yields. The normalization mode branching fraction (including that for the decay $J/\psi\to e^{+}e^{-}$) is $\mathcal{B}(B^{+}\to J/\psi K^{+})=(6.03\pm0.17)\times10^{-5}$, taken from Ref. [12]. The $b$-hadron fragmentation fraction ratio $f_{d}/f_{u}$ is assumed to be unity, while $f_{s}/f_{u}=0.259\pm0.015$ [34] is used for the Run 1 data and is scaled by $1.068\pm0.016$ for the Run 2 data, according to Ref. [35], to account for center-of-mass energy differences. A measurement of $f_{s}/f_{u}$ from Run 2 yields a consistent, but less precise, result [36]. The yield of the normalization mode is determined using an unbinned maximum-likelihood fit to the $K^{+}e^{+}e^{-}$ invariant mass, separately for each year of data taking and bremsstrahlung category. The fit model comprises a Gaussian function with power-law tails [37] for the signal component, where the tail parameters are fixed from simulation, and an exponential function to describe combinatorial background. Summed over the bremsstrahlung categories, the yield of the normalization mode is $20\,480\pm140$ in the Run 1 data and $33\,080\pm180$ in the Run 2 data. The selection efficiencies $\varepsilon(B^{0}_{(s)}\to e^{+}e^{-})$ and $\varepsilon(B^{+}\to J/\psi K^{+})$ are determined separately for each year of data taking and bremsstrahlung category using simulated decays that are weighted to better represent the data.
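As a rough numerical cross-check of Eqs. (1) and (2), the sketch below combines the rounded $\alpha$ values of Table 1 (given below) with the normalization branching fraction and fragmentation fractions quoted above into an approximate single-event sensitivity for $B^{0}_{s}\to e^{+}e^{-}$. It ignores correlations between categories, so it only approximately reproduces the value quoted later in the Letter.

```python
# Rough combination of Eq. (1) across the six categories, using the rounded
# alpha values from Table 1; correlations are ignored, so this is only an
# illustration of the bookkeeping, not the fit result.
BR_NORM = 6.03e-5            # B(B+ -> J/psi(->ee) K+), Ref. [12]
FS_FU = {"run1": 0.259, "run2": 0.259 * 1.068}   # f_s/f_u per period
ALPHA = {"run1": [2.85e-5, 1.13e-5, 1.73e-5],    # Table 1, per brem. category
         "run2": [1.84e-5, 0.70e-5, 1.04e-5]}

# Per-category sensitivity s_i = alpha_i * B_norm / (f_s/f_u); for a shared
# branching fraction the categories combine as 1 / sum(1/s_i).
s = [a * BR_NORM / FS_FU[p] for p in ("run1", "run2") for a in ALPHA[p]]
combined = 1.0 / sum(1.0 / si for si in s)
print(f"approximate Bs->ee sensitivity: {combined:.1e}")
# ~5e-10 with these rounded inputs; the full treatment with correlations
# gives the 4.71e-10 quoted later in the Letter.
```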
Calibration data are used to evaluate particle-identification efficiencies [27]. Trigger efficiencies are also estimated from data, using the technique described in Ref. [38]. For simulated $B^{0}_{s}\to e^{+}e^{-}$ decays, the mean $B^{0}_{s}$ lifetime [39] is assumed. The selection efficiency is assumed to be the same for both $B^{0}\to e^{+}e^{-}$ and $B^{0}_{s}\to e^{+}e^{-}$ decays, which is consistent with results from simulation. The normalization factors, $\alpha$, are combined across the data-taking periods and given in Table 1, split by bremsstrahlung category (for the selection efficiency ratio between normalization and signal mode, see the Supplemental Material [40]).

Table 1: Normalization factors $\alpha$ for $B^{0}_{(s)}\to e^{+}e^{-}$. The bremsstrahlung category denotes whether zero, one or both electrons are corrected for bremsstrahlung losses. The uncertainties include statistical uncertainties and uncertainties due to the limited size of the simulated samples.

Bremsstrahlung category | 2011–2012 $[10^{-5}]$ | 2015–2016 $[10^{-5}]$
---|---|---
No correction | $2.85\pm0.24$ | $1.84\pm0.08$
One electron corrected | $1.13\pm0.08$ | $0.70\pm0.03$
Both electrons corrected | $1.73\pm0.20$ | $1.04\pm0.06$

In addition to the combinatorial background, backgrounds due to misidentification and partial reconstruction are present in the data. These backgrounds differ significantly between the categories of bremsstrahlung correction. Their invariant-mass shapes and relative contributions are evaluated using simulation. In the lower mass region, partially reconstructed backgrounds of the types $B\to Xe^{+}e^{-}$ and $B^{+}\to\overline{D}{}^{0}(\to Y^{+}e^{-}\overline{\nu}_{e})e^{+}\nu_{e}$ dominate, where $X$ and $Y$ represent hadronic systems. The main source of background in the $B$-mass region, however, stems from misidentified particles in the decays $B^{0}\to\pi^{-}e^{+}\nu_{e}$ and $B\to h^{+}h^{\prime-}$, where $h$ and $h^{\prime}$ are hadrons. The latter has a peaking structure in the $B$-mass region. Backgrounds involving misidentified particles contribute mostly to categories in which at most one of the electrons has a bremsstrahlung correction applied. The contribution from combinatorial background is evaluated from same-sign lepton pairs in data and found to be small. The yields of the backgrounds are Gaussian constrained to their expected values, estimated from simulation using their known branching fractions [12]. The shape of the invariant mass of the $B^{0}_{s}\to e^{+}e^{-}$ and $B^{0}\to e^{+}e^{-}$ components is modeled using a Gaussian function with power-law tails, where the parameters are obtained from simulation and differ between each bremsstrahlung category and year of data taking. The peak values and the widths of the functions are corrected for data-simulation differences by a factor determined from the normalization mode. The parameters of the $B^{0}_{s}\to e^{+}e^{-}$ and $B^{0}\to e^{+}e^{-}$ line shapes are fixed to the same values, with the exception of the peak value, which is shifted according to the known $B^{0}_{s}$–$B^{0}$ mass difference [12].
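A "Gaussian function with power-law tails" of this kind is commonly implemented as a double-sided Crystal Ball shape [37]. A minimal, unnormalised sketch follows; the parameter values shown are purely illustrative, whereas in the analysis they are taken from simulation per bremsstrahlung category and corrected using the normalization mode.

```python
# Unnormalised double-sided Crystal Ball shape: a Gaussian core with
# power-law tails on both sides.  All parameter values are illustrative.
import numpy as np

def double_crystal_ball(x, mu, sigma, a_lo, n_lo, a_hi, n_hi):
    t = np.atleast_1d((x - mu) / sigma).astype(float)
    out = np.exp(-0.5 * t**2)                       # Gaussian core
    lo, hi = t < -a_lo, t > a_hi                    # tail regions
    out[lo] = ((n_lo / a_lo) ** n_lo * np.exp(-0.5 * a_lo**2)
               * (n_lo / a_lo - a_lo - t[lo]) ** (-n_lo))
    out[hi] = ((n_hi / a_hi) ** n_hi * np.exp(-0.5 * a_hi**2)
               * (n_hi / a_hi - a_hi + t[hi]) ** (-n_hi))
    return out

mass = np.linspace(4600.0, 5800.0, 600)             # MeV/c^2
shape = double_crystal_ball(mass, mu=5367.0, sigma=60.0,
                            a_lo=1.2, n_lo=4.0, a_hi=1.8, n_hi=5.0)
```

The tail normalisations are fixed so the function and its value match the Gaussian core at the join points $t=-a_{\mathrm{lo}}$ and $t=a_{\mathrm{hi}}$.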
Due to the limited mass resolution, arising from imperfect bremsstrahlung recovery, the line shapes from $B^{0}_{s}\to e^{+}e^{-}$ and $B^{0}\to e^{+}e^{-}$ are highly overlapping. Therefore the branching fraction of $B^{0}_{s}\to e^{+}e^{-}$ is obtained by performing a simultaneous fit to the dielectron invariant-mass distribution of all six data sets while neglecting the contribution from $B^{0}\to e^{+}e^{-}$, and vice versa. In these fits, the only shared parameters between categories are the branching fractions $\mathcal{B}(B^{0}_{(s)}\to e^{+}e^{-})$ and $\mathcal{B}(B^{+}\to J/\psi K^{+})$, and the ratio of the fragmentation fractions $f_{s}/f_{u}$. Systematic uncertainties are estimated separately for each data set. Dominant sources of systematic uncertainty in the normalization arise from the uncertainty on the fragmentation fraction ratio, the technique used to evaluate the trigger efficiencies, and the determination of particle-identification efficiencies; the systematic uncertainties from these sources extend to 5.8%, 5.3%, and 5.3% on the branching fractions, respectively. The uncertainty on $\mathcal{B}(B^{+}\to J/\psi K^{+})$ of 2.8% [12] is taken into account. A difference of up to 4.1% is found between the efficiency of the BDT selection on simulated $B^{+}\to J/\psi K^{+}$ decays and $B^{+}\to J/\psi K^{+}$ decays in data, which is assigned as a systematic uncertainty. The fraction of candidates in each bremsstrahlung-correction category of the signal modes is taken from simulation. The difference between simulation and data is investigated using $B^{+}\to J/\psi K^{+}$ decays and its effect on the normalization, up to 4.0%, is taken as a systematic uncertainty. Systematic uncertainties on the invariant-mass resolution corrections are determined by repeating the correction procedure with pseudoexperiments obtained with the bootstrapping method [41], yielding up to 1.1%. A difference between the total selection efficiencies in the $B^{0}_{s}\to e^{+}e^{-}$ and $B^{0}\to e^{+}e^{-}$ channels of up to 2.5% is assigned as a systematic uncertainty on the $B^{0}\to e^{+}e^{-}$ normalization factor. Due to the presence of an additional kaon in the final state of the normalization mode, the track-reconstruction efficiency differs between the signal and normalization modes. An uncertainty of 1.1% is assigned to the branching fraction as a systematic uncertainty on the kaon reconstruction efficiency, arising from the limited knowledge of the interactions in the detector material [42]. Finally, an uncertainty of 1.0% is assigned to account for small differences in detector occupancy between the signal and normalization modes arising from the trigger selection.
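The bootstrapping approach [41] used for the resolution-correction uncertainty amounts to repeating a procedure on resampled copies of the data. A generic sketch, with a stand-in statistic rather than the actual correction procedure, is:

```python
# Generic bootstrap loop in the spirit of Ref. [41]; "correction" below is a
# placeholder statistic, not the analysis' resolution-correction procedure.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(5280.0, 50.0, size=2000)   # toy mass values in MeV/c^2

def correction(data):
    return np.std(data)                        # stand-in for the procedure

estimates = [correction(rng.choice(sample, size=len(sample), replace=True))
             for _ in range(500)]               # 500 pseudoexperiments
print(f"bootstrap spread of the correction: {np.std(estimates):.2f}")
```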
The dominant sources of systematic uncertainty on the background composition are due to the imprecise knowledge of the branching fractions of the background components. The largest uncertainty of this type on the expected background yield in the $B$-mass region is 14%, determined by refitting the mass sidebands while varying the background components according to their uncertainties. Taking all correlations into account, overall single event sensitivities of $[4.71\pm0.12\,\mathrm{(stat)}\pm0.33\,\mathrm{(syst)}]\times10^{-10}$ for $B^{0}_{s}\to e^{+}e^{-}$ and $[1.271\pm0.034\,\mathrm{(stat)}\pm0.063\,\mathrm{(syst)}]\times10^{-10}$ for $B^{0}\to e^{+}e^{-}$ are obtained. The dielectron invariant-mass spectrum, summed over bremsstrahlung categories, is shown in Fig. 1, with the result of the $B^{0}_{s}\to e^{+}e^{-}$ fit. The individual categories are shown in the Supplemental Material [40], as well as the distributions with the result of the $B^{0}\to e^{+}e^{-}$ fit. The measured branching fractions are $\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})=(2.4\pm4.4)\times10^{-9}$ and $\mathcal{B}(B^{0}\to e^{+}e^{-})=(0.30\pm1.29)\times10^{-9}$, where the uncertainties include both statistical and systematic components. The results are in agreement with the background-only hypothesis.

Figure 1: Simultaneous fit to the dielectron invariant-mass distribution, with $\mathcal{B}(B^{0}\to e^{+}e^{-})$ fixed to zero. The sum of bremsstrahlung categories is shown for (left) Run 1 and (right) Run 2. The relative proportions of background contributions change between Run 1 and Run 2 due to different performances of the particle-identification algorithms and BDT selections.

Upper limits on the branching fractions are set using the CLs method [43], as implemented in the GammaCombo framework [44, 45], with a one-sided profile likelihood ratio [46] as test statistic. The likelihoods are computed from fits to the invariant-mass distributions. In the fits, the normalization factor, normalization mode branching fraction, fragmentation fraction ratio, and background yields are Gaussian constrained to their expected values within statistical and systematic uncertainties. Pseudoexperiments, in which the nuisance parameters are set to their fitted values from data, are used for the evaluation of the test statistic. The expected and observed CLs distributions are shown in Fig. 2. The observed upper limits are $\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})<9.4\,(11.2)\times10^{-9}$ and $\mathcal{B}(B^{0}\to e^{+}e^{-})<2.5\,(3.0)\times10^{-9}$ at 90 (95)% confidence level. These are consistent with the expected upper limits of $\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})<7.0\,(8.6)\times10^{-9}$ and $\mathcal{B}(B^{0}\to e^{+}e^{-})<2.0\,(2.5)\times10^{-9}$ at 90 (95)% confidence level, obtained as the median of limits determined on background-only pseudoexperiments.

Figure 2: CLs values as a function of the branching fractions of the decays (left) $B^{0}_{s}\to e^{+}e^{-}$ and (right) $B^{0}\to e^{+}e^{-}$. The red solid line (black solid line with data points) corresponds to the distribution of the expected (observed) upper limits, and the light blue (dark blue) band contains the $1\sigma$ ($2\sigma$) uncertainties on the expected upper limits. Thresholds corresponding to 90% and 95% confidence level are indicated with dashed lines. The observed values are plotted for branching fractions greater than the measured branching fraction in the data; the test statistic is defined to be nonzero only in that region.
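For orientation, the CLs construction [43] can be illustrated with a single-bin counting toy: the profile-likelihood fits and pseudoexperiments of the actual analysis are replaced here by Poisson tail probabilities, and all yields are invented.

```python
# Toy CLs upper limit for a counting experiment (illustration of Ref. [43]
# only; the analysis uses profile-likelihood fits and pseudoexperiments).
import numpy as np
from scipy.stats import poisson

n_obs, bkg = 3, 4.0                       # invented observed count / background

def cls(s):
    cl_sb = poisson.cdf(n_obs, s + bkg)   # p-value under signal+background
    cl_b = poisson.cdf(n_obs, bkg)        # p-value under background only
    return cl_sb / cl_b

s_grid = np.linspace(0.0, 15.0, 1501)
cls_vals = np.array([cls(s) for s in s_grid])
s_up = s_grid[cls_vals < 0.05][0]         # smallest signal yield excluded
print(f"toy 95% CL upper limit: {s_up:.2f} signal events")
```

Dividing by CLb is what protects against excluding a signal to which the experiment has no sensitivity when a downward background fluctuation occurs.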
In conclusion, a search for the rare decays $B^{0}_{(s)}\to e^{+}e^{-}$ is performed using data from proton-proton collisions recorded with the LHCb experiment, corresponding to a total integrated luminosity of $5\,\mathrm{fb}^{-1}$. No excess of events is observed over the background. The resulting limits on the branching fractions are $\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})<9.4\,(11.2)\times10^{-9}$ and $\mathcal{B}(B^{0}\to e^{+}e^{-})<2.5\,(3.0)\times10^{-9}$ at 90 (95)% confidence level, when neglecting the contribution from the other decay. The mean $B^{0}_{s}$ lifetime is assumed for $B^{0}_{s}\to e^{+}e^{-}$ decays. Assuming SM-like $C\!P$-odd ($C\!P$-even) $B^{0}_{s}\to e^{+}e^{-}$ decays, an increase (decrease) of 2.4% with respect to the quoted limit is found. The results improve the limits on these branching fractions [11] by more than one order of magnitude and constrain contributions beyond the SM, for example from scalar and pseudoscalar currents [10].

## Acknowledgements

We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open-source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany); EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union); ANR, Labex P2IO and OCEVU, and Région Auvergne-Rhône-Alpes (France); Key Research Program of Frontier Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China); RFBR, RSF and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society and the Leverhulme Trust (United Kingdom).

## Supplemental Material for LHCb-PAPER-2020-001

The individual categories of the simultaneous fit to the dielectron invariant mass using the $B^{0}_{s}\to e^{+}e^{-}$ hypothesis are presented in Fig. 3. The fit to the dielectron invariant mass including the $B^{0}\to e^{+}e^{-}$ hypothesis instead of the $B^{0}_{s}\to e^{+}e^{-}$ hypothesis is shown in Fig. 4, where the bremsstrahlung categories are summed. The individual categories of the simultaneous fit to the dielectron invariant mass using the $B^{0}\to e^{+}e^{-}$ hypothesis are presented in Fig. 5.
Table 2 lists the inputs to the normalization factors: the ratio of normalization and signal efficiencies and the normalization yield. The efficiency of the normalization mode differs from that of the signal mode, and the efficiency ratio decreases with bremsstrahlung category because of the slightly different reconstruction and preselection and the different impact of the BDT selection; these differences mainly originate from the additional track in the normalization mode.

Figure 3: Simultaneous fit to the dielectron invariant-mass distribution in all categories, with $\mathcal{B}(B^{0}\to e^{+}e^{-})$ fixed to zero. The top figures show the three bremsstrahlung categories in the Run 1 data set and the bottom figures show the Run 2 data set. From left to right, the data sets correspond to the bremsstrahlung correction category with no correction, correcting one electron and correcting both electrons. The relative proportions of background contributions change between Run 1 and Run 2 due to different performances of the particle-identification algorithms and BDT selections. Their relative fractions between bremsstrahlung categories follow the expectation from simulation.

Figure 4: Simultaneous fit to the dielectron invariant-mass distribution, with $\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})$ fixed to zero. The bremsstrahlung categories are summed over the (left) Run 1 and (right) Run 2 data sets. The relative proportions of background contributions change between Run 1 and Run 2 due to different performances of the particle-identification algorithms and BDT selections.

Figure 5: Simultaneous fit to the dielectron invariant-mass distribution in all categories, with $\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})$ fixed to zero. The top figures show the three bremsstrahlung categories in the Run 1 data set and the bottom figures show the Run 2 data set. From left to right, the data sets correspond to the bremsstrahlung correction category with no correction, correcting one electron and correcting both electrons. The relative proportions of background contributions change between Run 1 and Run 2 due to different performances of the particle-identification algorithms and BDT selections. Their relative fractions between bremsstrahlung categories follow the expectation from simulation.

Table 2: Inputs for the normalization factors: the efficiency ratio $\varepsilon(B^{+}\to J/\psi K^{+})/\varepsilon(B^{0}_{(s)}\to e^{+}e^{-})$ and the normalization yield $N(B^{+}\to J/\psi K^{+})$. The bremsstrahlung category (Brem. cat.) denotes whether zero, one or both electrons are corrected for bremsstrahlung losses. The uncertainties on the efficiency ratios include statistical uncertainties from the calibration and uncertainties due to the limited size of the simulated samples.

Brem. cat. | Eff. ratio (2011–2012) | Norm. yield $[10^{3}]$ (2011–2012) | Eff. ratio (2015–2016) | Norm. yield $[10^{3}]$ (2015–2016)
---|---|---|---|---
Brem. 0 | $0.144\pm0.012$ | $5.05\pm0.07$ | $0.148\pm0.118$ | $7.96\pm0.09$
Brem. 1 | $0.119\pm0.008$ | $10.43\pm0.11$ | $0.118\pm0.005$ | $12.75\pm0.05$
Brem. 2 | $0.086\pm0.010$ | $4.95\pm0.07$ | $0.085\pm0.005$ | $8.306\pm0.032$

## References
* [1] LHCb collaboration, R. Aaij et al., _Test of lepton universality with $B^{0}\to K^{*0}\ell^{+}\ell^{-}$ decays_, JHEP 08 (2017) 055, arXiv:1705.05802
* [2] LHCb collaboration, R. Aaij et al., _Search for lepton-universality violation in $B^{+}\to K^{+}\ell^{+}\ell^{-}$ decays_, Phys. Rev. Lett. 122 (2019) 191801, arXiv:1903.09252
* [3] LHCb collaboration, R. Aaij et al., _Test of lepton universality using $\Lambda^{0}_{b}\to pK^{-}\ell^{+}\ell^{-}$ decays_, JHEP 05 (2020) 040, arXiv:1912.08139
* [4] BaBar collaboration, J. P. Lees et al., _Measurement of branching fractions and rate asymmetries in the rare decays $B\to K^{(*)}l^{+}l^{-}$_, Phys. Rev. D86 (2012) 032012, arXiv:1204.3933
* [5] Belle collaboration, A. Abdesselam et al., _Test of lepton flavor universality in $B\to K\ell^{+}\ell^{-}$ decays_, arXiv:1908.01848
* [6] Belle collaboration, A. Abdesselam et al., _Test of lepton flavor universality in $B\to K^{\ast}\ell^{+}\ell^{-}$ decays at Belle_, arXiv:1904.02440
* [7] CMS and LHCb collaborations, V. Khachatryan et al., _Observation of the rare $B^{0}_{s}\to\mu^{+}\mu^{-}$ decay from the combined analysis of CMS and LHCb data_, Nature 522 (2015) 68, arXiv:1411.4413
* [8] LHCb collaboration, R. Aaij et al., _Measurement of the $B^{0}_{s}\to\mu^{+}\mu^{-}$ branching fraction and effective lifetime and search for $B^{0}\to\mu^{+}\mu^{-}$ decays_, Phys. Rev. Lett. 118 (2017) 191801, arXiv:1703.05747
* [9] M. Beneke, C. Bobeth, and R. Szafron, _Power-enhanced leading-logarithmic QED corrections to $B_{q}\to\mu^{+}\mu^{-}$_, JHEP 10 (2019) 232, arXiv:1908.07011
* [10] R. Fleischer, R. Jaarsma, and G. Tetlalmatzi-Xolocotzi, _In pursuit of New Physics with $B^{0}_{s,d}\to\ell^{+}\ell^{-}$_, JHEP 05 (2017) 156, arXiv:1703.10160
* [11] CDF collaboration, T. Aaltonen et al., _Search for the decays $B^{0}_{s}\to e^{+}\mu^{-}$ and $B^{0}_{s}\to e^{+}e^{-}$ in CDF Run II_, Phys. Rev. Lett. 102 (2009) 201801, arXiv:0901.3803
* [12] Particle Data Group, M. Tanabashi et al., _Review of particle physics_, Phys. Rev. D98 (2018) 030001, and 2019 update
* [13] LHCb collaboration, A. A. Alves Jr. et al., _The LHCb detector at the LHC_, JINST 3 (2008) S08005
* [14] LHCb collaboration, R. Aaij et al., _LHCb detector performance_, Int. J. Mod. Phys. A30 (2015) 1530022, arXiv:1412.6352
* [15] R. Aaij et al., _The LHCb trigger and its performance in 2011_, JINST 8 (2013) P04022, arXiv:1211.3055
* [16] V. V. Gligorov and M. Williams, _Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree_, JINST 8 (2013) P02013, arXiv:1210.6861
* [17] T. Likhomanenko et al., _LHCb topological trigger reoptimization_, J. Phys. Conf. Ser. 664 (2015) 082025
* [18] T. Sjöstrand, S. Mrenna, and P. Skands, _PYTHIA 6.4 physics and manual_, JHEP 05 (2006) 026, arXiv:hep-ph/0603175
* [19] T. Sjöstrand, S. Mrenna, and P. Skands, _A brief introduction to PYTHIA 8.1_, Comput. Phys. Commun. 178 (2008) 852, arXiv:0710.3820
* [20] I. Belyaev et al., _Handling of the generation of primary events in Gauss, the LHCb simulation framework_, J. Phys. Conf. Ser. 331 (2011) 032047
* [21] D. J. Lange, _The EvtGen particle decay simulation package_, Nucl. Instrum. Meth. A462 (2001) 152
* [22] P. Golonka and Z. Was, _PHOTOS Monte Carlo: A precision tool for QED corrections in $Z$ and $W$ decays_, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026
* [23] Geant4 collaboration, J. Allison et al., _Geant4 developments and applications_, IEEE Trans. Nucl. Sci. 53 (2006) 270
* [24] Geant4 collaboration, S. Agostinelli et al., _Geant4: A simulation toolkit_, Nucl. Instrum. Meth. A506 (2003) 250
* [25] M. Clemencic et al., _The LHCb simulation application, Gauss: Design, evolution and experience_, J. Phys. Conf. Ser. 331 (2011) 032023
* [26] LHCb collaboration, R. Aaij et al., _Search for the rare decays $B^{0}_{s}\to\mu^{+}\mu^{-}$ and $B^{0}\to\mu^{+}\mu^{-}$_, Phys. Lett. B708 (2012) 55, arXiv:1112.1600
* [27] L. Anderlini et al., _The PIDCalib package_, LHCb-PUB-2016-021, 2016
* [28] LHCb collaboration, R. Aaij et al., _Measurement of the $B^{0}\to K^{*0}e^{+}e^{-}$ branching fraction at low dilepton mass_, JHEP 05 (2013) 159, arXiv:1304.3035
* [29] Y. Freund and R. E. Schapire, _A decision-theoretic generalization of on-line learning and an application to boosting_, J. Comput. Syst. Sci. 55 (1997) 119
* [30] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, _Classification and regression trees_, Wadsworth international group, Belmont, California, USA, 1984
* [31] F. Pedregosa et al., _Scikit-learn: Machine learning in Python_, J. Machine Learning Res. 12 (2011) 2825, arXiv:1201.0490, and online at http://scikit-learn.org/stable/
* [32] L. Gavardi, _Search for lepton flavour violation in $\tau$ decays at the LHCb experiment_, Master's thesis, Università degli studi di Milano-Bicocca, 2013, presented 28 Nov 2013, CERN-THESIS-2013-259
* [33] G. Punzi, _Sensitivity of searches for new signals and its optimization_, eConf C030908 (2003) MODT002, arXiv:physics/0308063
* [34] LHCb collaboration, R. Aaij et al., _Measurement of the fragmentation fraction ratio $f_{s}/f_{d}$ and its dependence on $B$ meson kinematics_, JHEP 04 (2013) 001, arXiv:1301.5286, $f_{s}/f_{d}$ value updated in LHCb-CONF-2013-011
* [35] LHCb collaboration, R. Aaij et al., _Measurement of $f_{s}/f_{u}$ variation with proton-proton collision energy and $B$-meson kinematics_, Phys. Rev. Lett. 124 (2020) 122002, arXiv:1910.09934
* [36] LHCb collaboration, R. Aaij et al., _Measurement of $b$-hadron fractions in 13 TeV $pp$ collisions_, Phys. Rev. D100 (2019) 031102(R), arXiv:1902.06794
* [37] T. Skwarnicki, _A study of the radiative cascade transitions between the Upsilon-prime and Upsilon resonances_, PhD thesis, Institute of Nuclear Physics, Krakow, 1986, DESY-F31-86-02
* [38] S. Tolk, J. Albrecht, F. Dettori, and A. Pellegrino, _Data driven trigger efficiency determination at LHCb_, LHCb-PUB-2014-039, 2014
* [39] Particle Data Group, K. A. Olive et al., _Review of particle physics_, Chin. Phys. C38 (2014) 090001
* [40] See Supplemental Material for the selection efficiency ratio of normalization and signal mode, the individual categories of the $B^{0}_{s}\to e^{+}e^{-}$ fit and the distributions with the result of the $B^{0}\to e^{+}e^{-}$ fit
* [41] B. Efron and R. J. Tibshirani, _An introduction to the bootstrap_, Mono. Stat. Appl. Probab., Chapman and Hall, London, 1993
* [42] LHCb collaboration, R. Aaij et al., _Measurement of the track reconstruction efficiency at LHCb_, JINST 10 (2015) P02007, arXiv:1408.1251
* [43] A. L. Read, _Presentation of search results: The CLs technique_, J. Phys. G28 (2002) 2693
* [44] LHCb collaboration, R.
Aaij et al., _Measurement of the CKM angle $\gamma$ from a combination of LHCb results_, JHEP 12 (2016) 087, arXiv:1611.03076 * [45] M. Kenzie et al., _GammaCombo: A statistical analysis framework for combining measurements, fitting datasets and producing confidence intervals_ , doi: 10.5281/zenodo.3371421 * [46] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, _Asymptotic formulae for likelihood-based tests of new physics_ , Eur. Phys. J. C71 (2011) 1554, Erratum ibid. C73 (2013) 2501, arXiv:1007.1727 LHCb collaboration R. Aaij31, C. Abellán Beteta49, T. Ackernley59, B. Adeva45, M. Adinolfi53, H. Afsharnia9, C.A. Aidala81, S. Aiola25, Z. Ajaltouni9, S. Akar66, P. Albicocco22, J. Albrecht14, F. Alessio47, M. Alexander58, A. Alfonso Albero44, G. Alkhazov37, P. Alvarez Cartelle60, A.A. Alves Jr45, S. Amato2, Y. Amhis11, L. An21, L. Anderlini21, G. Andreassi48, M. Andreotti20, F. Archilli16, A. Artamonov43, M. Artuso67, K. Arzymatov41, E. Aslanides10, M. Atzeni49, B. Audurier11, S. Bachmann16, J.J. Back55, S. Baker60, V. Balagura11,b, W. Baldini20, A. Baranov41, R.J. Barlow61, S. Barsuk11, W. Barter60, M. Bartolini23,47,h, F. Baryshnikov78, J.M. Basels13, G. Bassi28, V. Batozskaya35, B. Batsukh67, A. Battig14, A. Bay48, M. Becker14, F. Bedeschi28, I. Bediaga1, A. Beiter67, V. Belavin41, S. Belin26, V. Bellee48, K. Belous43, I. Belyaev38, G. Bencivenni22, E. Ben-Haim12, S. Benson31, A. Berezhnoy39, R. Bernet49, D. Berninghoff16, H.C. Bernstein67, C. Bertella47, E. Bertholet12, A. Bertolin27, C. Betancourt49, F. Betti19,e, M.O. Bettler54, Ia. Bezshyiko49, S. Bhasin53, J. Bhom33, M.S. Bieker14, S. Bifani52, P. Billoir12, A. Bizzeti21,t, M. Bjørn62, M.P. Blago47, T. Blake55, F. Blanc48, S. Blusk67, D. Bobulska58, V. Bocci30, O. Boente Garcia45, T. Boettcher63, A. Boldyrev79, A. Bondar42,w, N. Bondar37,47, S. Borghi61, M. Borisyak41, M. Borsato16, J.T. Borsuk33, T.J.V. Bowcock59, C. Bozzi20, M.J. Bradley60, S. Braun65, A. Brea Rodriguez45, M. Brodski47, J. Brodzicka33, A. Brossa Gonzalo55, D. Brundu26, E. Buchanan53, A. Büchler-Germann49, A. Buonaura49, C. Burr47, A. Bursche26, A. Butkevich40, J.S. Butter31, J. Buytaert47, W. Byczynski47, S. Cadeddu26, H. Cai72, R. Calabrese20,g, L. Calero Diaz22, S. Cali22, R. Calladine52, M. Calvi24,i, M. Calvo Gomez44,l, P. Camargo Magalhaes53, A. Camboni44,l, P. Campana22, D.H. Campora Perez31, A.F. Campoverde Quezada5, L. Capriotti19,e, A. Carbone19,e, G. Carboni29, R. Cardinale23,h, A. Cardini26, I. Carli6, P. Carniti24,i, K. Carvalho Akiba31, A. Casais Vidal45, G. Casse59, M. Cattaneo47, G. Cavallero47, S. Celani48, R. Cenci28,o, J. Cerasoli10, M.G. Chapman53, M. Charles12, Ph. Charpentier47, G. Chatzikonstantinidis52, M. Chefdeville8, V. Chekalina41, C. Chen3, S. Chen26, A. Chernov33, S.-G. Chitic47, V. Chobanova45, S. Cholak48, M. Chrzaszcz33, A. Chubykin37, V. Chulikov37, P. Ciambrone22, M.F. Cicala55, X. Cid Vidal45, G. Ciezarek47, F. Cindolo19, P.E.L. Clarke57, M. Clemencic47, H.V. Cliff54, J. Closier47, J.L. Cobbledick61, V. Coco47, J.A.B. Coelho11, J. Cogan10, E. Cogneras9, L. Cojocariu36, P. Collins47, T. Colombo47, A. Contu26, N. Cooke52, G. Coombs58, S. Coquereau44, G. Corti47, C.M. Costa Sobral55, B. Couturier47, D.C. Craik63, J. Crkovská66, A. Crocombe55, M. Cruz Torres1,z, R. Currie57, C.L. Da Silva66, E. Dall’Occo14, J. Dalseno45,53, C. D’Ambrosio47, A. Danilina38, P. d’Argent47, A. Davis61, O. De Aguiar Francisco47, K. De Bruyn47, S. De Capua61, M. De Cian48, J.M. De Miranda1, L. De Paula2, M. De Serio18,d, P. De Simone22, J.A. 
de Vries76, C.T. Dean66, W. Dean81, D. Decamp8, L. Del Buono12, B. Delaney54, H.-P. Dembinski14, A. Dendek34, V. Denysenko49, D. Derkach79, O. Deschamps9, F. Desse11, F. Dettori26,f, B. Dey7, A. Di Canto47, P. Di Nezza22, S. Didenko78, H. Dijkstra47, V. Dobishuk51, F. Dordei26, M. Dorigo28,x, A.C. dos Reis1, L. Douglas58, A. Dovbnya50, K. Dreimanis59, M.W. Dudek33, L. Dufour47, P. Durante47, J.M. Durham66, D. Dutta61, M. Dziewiecki16, A. Dziurda33, A. Dzyuba37, S. Easo56, U. Egede69, V. Egorychev38, S. Eidelman42,w, S. Eisenhardt57, S. Ek-In48, L. Eklund58, S. Ely67, A. Ene36, E. Epple66, S. Escher13, J. Eschle49, S. Esen31, T. Evans47, A. Falabella19, J. Fan3, Y. Fan5, N. Farley52, S. Farry59, D. Fazzini11, P. Fedin38, M. Féo47, P. Fernandez Declara47, A. Fernandez Prieto45, F. Ferrari19,e, L. Ferreira Lopes48, F. Ferreira Rodrigues2, S. Ferreres Sole31, M. Ferrillo49, M. Ferro- Luzzi47, S. Filippov40, R.A. Fini18, M. Fiorini20,g, M. Firlej34, K.M. Fischer62, C. Fitzpatrick47, T. Fiutowski34, F. Fleuret11,b, M. Fontana47, F. Fontanelli23,h, R. Forty47, V. Franco Lima59, M. Franco Sevilla65, M. Frank47, C. Frei47, D.A. Friday58, J. Fu25,p, Q. Fuehring14, W. Funk47, E. Gabriel57, A. Gallas Torreira45, D. Galli19,e, S. Gallorini27, S. Gambetta57, Y. Gan3, M. Gandelman2, P. Gandini25, Y. Gao4, L.M. Garcia Martin46, J. García Pardiñas49, B. Garcia Plana45, F.A. Garcia Rosales11, L. Garrido44, D. Gascon44, C. Gaspar47, D. Gerick16, E. Gersabeck61, M. Gersabeck61, T. Gershon55, D. Gerstel10, Ph. Ghez8, V. Gibson54, A. Gioventù45, P. Gironella Gironell44, L. Giubega36, C. Giugliano20, K. Gizdov57, V.V. Gligorov12, C. Göbel70, E. Golobardes44,l, D. Golubkov38, A. Golutvin60,78, A. Gomes1,a, P. Gorbounov38, I.V. Gorelov39, C. Gotti24,i, E. Govorkova31, J.P. Grabowski16, R. Graciani Diaz44, T. Grammatico12, L.A. Granado Cardoso47, E. Graugés44, E. Graverini48, G. Graziani21, A. Grecu36, R. Greim31, P. Griffith20, L. Grillo61, L. Gruber47, B.R. Gruberg Cazon62, C. Gu3, E. Gushchin40, A. Guth13, Yu. Guz43,47, T. Gys47, P. A. Günther16, T. Hadavizadeh62, G. Haefeli48, C. Haen47, S.C. Haines54, P.M. Hamilton65, Q. Han7, X. Han16, T.H. Hancock62, S. Hansmann-Menzemer16, N. Harnew62, T. Harrison59, R. Hart31, C. Hasse14, M. Hatch47, J. He5, M. Hecker60, K. Heijhoff31, K. Heinicke14, A.M. Hennequin47, K. Hennessy59, L. Henry46, J. Heuel13, A. Hicheur68, D. Hill62, M. Hilton61, P.H. Hopchev48, J. Hu16, J. Hu71, W. Hu7, W. Huang5, W. Hulsbergen31, T. Humair60, R.J. Hunter55, M. Hushchyn79, D. Hutchcroft59, D. Hynds31, P. Ibis14, M. Idzik34, P. Ilten52, A. Inglessi37, K. Ivshin37, R. Jacobsson47, S. Jakobsen47, E. Jans31, B.K. Jashal46, A. Jawahery65, V. Jevtic14, F. Jiang3, M. John62, D. Johnson47, C.R. Jones54, B. Jost47, N. Jurik62, S. Kandybei50, M. Karacson47, J.M. Kariuki53, N. Kazeev79, M. Kecke16, F. Keizer54,47, M. Kelsey67, M. Kenzie55, T. Ketel32, B. Khanji47, A. Kharisova80, K.E. Kim67, T. Kirn13, V.S. Kirsebom48, S. Klaver22, K. Klimaszewski35, S. Koliiev51, A. Kondybayeva78, A. Konoplyannikov38, P. Kopciewicz34, R. Kopecna16, P. Koppenburg31, M. Korolev39, I. Kostiuk31,51, O. Kot51, S. Kotriakhova37, L. Kravchuk40, R.D. Krawczyk47, M. Kreps55, F. Kress60, S. Kretzschmar13, P. Krokovny42,w, W. Krupa34, W. Krzemien35, W. Kucewicz33,k, M. Kucharczyk33, V. Kudryavtsev42,w, H.S. Kuindersma31, G.J. Kunde66, T. Kvaratskheliya38, D. Lacarrere47, G. Lafferty61, A. Lai26, D. Lancierini49, J.J. Lane61, G. Lanfranchi22, C. Langenbruch13, O. Lantwin49, T. Latham55, F. Lazzari28,u, R. Le Gac10, S.H. Lee81, R. 
Lefèvre9, A. Leflat39,47, O. Leroy10, T. Lesiak33, B. Leverington16, H. Li71, L. Li62, X. Li66, Y. Li6, Z. Li67, X. Liang67, T. Lin60, R. Lindner47, V. Lisovskyi14, G. Liu71, X. Liu3, D. Loh55, A. Loi26, J. Lomba Castro45, I. Longstaff58, J.H. Lopes2, G. Loustau49, G.H. Lovell54, Y. Lu6, D. Lucchesi27,n, M. Lucio Martinez31, Y. Luo3, A. Lupato27, E. Luppi20,g, O. Lupton55, A. Lusiani28,s, X. Lyu5, S. Maccolini19,e, F. Machefert11, F. Maciuc36, V. Macko48, P. Mackowiak14, S. Maddrell-Mander53, L.R. Madhan Mohan53, O. Maev37, A. Maevskiy79, D. Maisuzenko37, M.W. Majewski34, S. Malde62, B. Malecki47, A. Malinin77, T. Maltsev42,w, H. Malygina16, G. Manca26,f, G. Mancinelli10, R. Manera Escalero44, D. Manuzzi19,e, D. Marangotto25,p, J. Maratas9,v, J.F. Marchand8, U. Marconi19, S. Mariani21,21,47, C. Marin Benito11, M. Marinangeli48, P. Marino48, J. Marks16, P.J. Marshall59, G. Martellotti30, L. Martinazzoli47, M. Martinelli24,i, D. Martinez Santos45, F. Martinez Vidal46, A. Massafferri1, M. Materok13, R. Matev47, A. Mathad49, Z. Mathe47, V. Matiunin38, C. Matteuzzi24, K.R. Mattioli81, A. Mauri49, E. Maurice11,b, M. McCann60, L. Mcconnell17, A. McNab61, R. McNulty17, J.V. Mead59, B. Meadows64, C. Meaux10, G. Meier14, N. Meinert74, D. Melnychuk35, S. Meloni24,i, M. Merk31, A. Merli25, M. Mikhasenko47, D.A. Milanes73, E. Millard55, M.-N. Minard8, O. Mineev38, L. Minzoni20, S.E. Mitchell57, B. Mitreska61, D.S. Mitzel47, A. Mödden14, A. Mogini12, R.D. Moise60, T. Mombächer14, I.A. Monroy73, S. Monteil9, M. Morandin27, G. Morello22, M.J. Morello28,s, J. Moron34, A.B. Morris10, A.G. Morris55, R. Mountain67, H. Mu3, F. Muheim57, M. Mukherjee7, M. Mulder47, D. Müller47, K. Müller49, C.H. Murphy62, D. Murray61, P. Muzzetto26, P. Naik53, T. Nakada48, R. Nandakumar56, T. Nanut48, I. Nasteva2, M. Needham57, N. Neri25,p, S. Neubert16, N. Neufeld47, R. Newcombe60, T.D. Nguyen48, C. Nguyen- Mau48,m, E.M. Niel11, S. Nieswand13, N. Nikitin39, N.S. Nolte47, C. Nunez81, A. Oblakowska-Mucha34, V. Obraztsov43, S. Ogilvy58, D.P. O’Hanlon53, R. Oldeman26,f, C.J.G. Onderwater75, J. D. Osborn81, A. Ossowska33, J.M. Otalora Goicochea2, T. Ovsiannikova38, P. Owen49, A. Oyanguren46, P.R. Pais48, T. Pajero28,28,47,s, A. Palano18, M. Palutan22, G. Panshin80, A. Papanestis56, M. Pappagallo57, L.L. Pappalardo20, C. Pappenheimer64, W. Parker65, C. Parkes61, G. Passaleva21,47, A. Pastore18, M. Patel60, C. Patrignani19,e, A. Pearce47, A. Pellegrino31, M. Pepe Altarelli47, S. Perazzini19, D. Pereima38, P. Perret9, L. Pescatore48, K. Petridis53, A. Petrolini23,h, A. Petrov77, S. Petrucci57, M. Petruzzo25,p, B. Pietrzyk8, G. Pietrzyk48, M. Pili62, D. Pinci30, J. Pinzino47, F. Pisani19, A. Piucci16, V. Placinta36, S. Playfer57, J. Plews52, M. Plo Casasus45, F. Polci12, M. Poli Lener22, M. Poliakova67, A. Poluektov10, N. Polukhina78,c, I. Polyakov67, E. Polycarpo2, G.J. Pomery53, S. Ponce47, A. Popov43, D. Popov52, S. Poslavskii43, K. Prasanth33, L. Promberger47, C. Prouve45, V. Pugatch51, A. Puig Navarro49, H. Pullen62, G. Punzi28,o, W. Qian5, J. Qin5, R. Quagliani12, B. Quintana8, N.V. Raab17, R.I. Rabadan Trejo10, B. Rachwal34, J.H. Rademacker53, M. Rama28, M. Ramos Pernas45, M.S. Rangel2, F. Ratnikov41,79, G. Raven32, M. Reboud8, F. Redi48, F. Reiss12, C. Remon Alepuz46, Z. Ren3, V. Renaudin62, S. Ricciardi56, D.S. Richards56, S. Richards53, K. Rinnert59, P. Robbe11, A. Robert12, A.B. Rodrigues48, E. Rodrigues59, J.A. Rodriguez Lopez73, M. Roehrken47, A. Rollings62, V. Romanovskiy43, M. Romero Lamas45, A. Romero Vidal45, J.D. 
Roth81, M. Rotondo22, M.S. Rudolph67, T. Ruf47, J. Ruiz Vidal46, A. Ryzhikov79, J. Ryzka34, J.J. Saborido Silva45, N. Sagidova37, N. Sahoo55, B. Saitta26,f, C. Sanchez Gras31, C. Sanchez Mayordomo46, R. Santacesaria30, C. Santamarina Rios45, M. Santimaria22, E. Santovetti29,j, G. Sarpis61, M. Sarpis16, A. Sarti30, C. Satriano30,r, A. Satta29, M. Saur5, D. Savrina38,39, L.G. Scantlebury Smead62, S. Schael13, M. Schellenberg14, M. Schiller58, H. Schindler47, M. Schmelling15, T. Schmelzer14, B. Schmidt47, O. Schneider48, A. Schopper47, H.F. Schreiner64, M. Schubiger31, S. Schulte48, M.H. Schune11, R. Schwemmer47, B. Sciascia22, A. Sciubba22, S. Sellam68, A. Semennikov38, A. Sergi52,47, N. Serra49, J. Serrano10, L. Sestini27, A. Seuthe14, P. Seyfert47, D.M. Shangase81, M. Shapkin43, L. Shchutska48, T. Shears59, L. Shekhtman42,w, V. Shevchenko77, E. Shmanin78, J.D. Shupperd67, B.G. Siddi20, R. Silva Coutinho49, L. Silva de Oliveira2, G. Simi27,n, S. Simone18,d, I. Skiba20, N. Skidmore16, T. Skwarnicki67, M.W. Slater52, J.G. Smeaton54, A. Smetkina38, E. Smith13, I.T. Smith57, M. Smith60, A. Snoch31, M. Soares19, L. Soares Lavra9, M.D. Sokoloff64, F.J.P. Soler58, B. Souza De Paula2, B. Spaan14, E. Spadaro Norella25,p, P. Spradlin58, F. Stagni47, M. Stahl64, S. Stahl47, P. Stefko48, O. Steinkamp49,78, S. Stemmle16, O. Stenyakin43, M. Stepanova37, H. Stevens14, S. Stone67, S. Stracka28, M.E. Stramaglia48, M. Straticiuc36, S. Strokov80, J. Sun26, L. Sun72, Y. Sun65, P. Svihra61, K. Swientek34, A. Szabelski35, T. Szumlak34, M. Szymanski47, S. Taneja61, Z. Tang3, T. Tekampe14, F. Teubert47, E. Thomas47, K.A. Thomson59, M.J. Tilley60, V. Tisserand9, S. T’Jampens8, M. Tobin6, S. Tolk47, L. Tomassetti20,g, D. Torres Machado1, D.Y. Tou12, E. Tournefier8, M. Traill58, M.T. Tran48, E. Trifonova78, C. Trippl48, A. Tsaregorodtsev10, G. Tuci28,o, A. Tully48, N. Tuning31, A. Ukleja35, A. Usachov31, A. Ustyuzhanin41,79, U. Uwer16, A. Vagner80, V. Vagnoni19, A. Valassi47, G. Valenti19, M. van Beuzekom31, H. Van Hecke66, E. van Herwijnen47, C.B. Van Hulse17, M. van Veghel75, R. Vazquez Gomez44, P. Vazquez Regueiro45, C. Vázquez Sierra31, S. Vecchi20, J.J. Velthuis53, M. Veltri21,q, A. Venkateswaran67, M. Veronesi31, M. Vesterinen55, J.V. Viana Barbosa47, D. Vieira64, M. Vieites Diaz48, H. Viemann74, X. Vilasis-Cardona44,l, G. Vitali28, A. Vitkovskiy31, A. Vollhardt49, D. Vom Bruch12, A. Vorobyev37, V. Vorobyev42,w, N. Voropaev37, R. Waldi74, J. Walsh28, J. Wang3, J. Wang72, J. Wang6, M. Wang3, Y. Wang7, Z. Wang49, D.R. Ward54, H.M. Wark59, N.K. Watson52, D. Websdale60, A. Weiden49, C. Weisser63, B.D.C. Westhenry53, D.J. White61, M. Whitehead13, D. Wiedner14, G. Wilkinson62, M. Wilkinson67, I. Williams54, M. Williams63, M.R.J. Williams61, T. Williams52, F.F. Wilson56, W. Wislicki35, M. Witek33, L. Witola16, G. Wormser11, S.A. Wotton54, H. Wu67, K. Wyllie47, Z. Xiang5, D. Xiao7, Y. Xie7, H. Xing71, A. Xu4, J. Xu5, L. Xu3, M. Xu7, Q. Xu5, Z. Xu4, Z. Yang3, Z. Yang65, Y. Yao67, L.E. Yeomans59, H. Yin7, J. Yu7, X. Yuan67, O. Yushchenko43, K.A. Zarebski52, M. Zavertyaev15,c, M. Zdybal33, M. Zeng3, D. Zhang7, L. Zhang3, S. Zhang4, W.C. Zhang3,y, Y. Zhang47, A. Zhelezov16, Y. Zheng5, X. Zhou5, Y. Zhou5, X. Zhu3, V. Zhukov13,39, J.B. Zonneveld57, S. Zucchelli19,e. 
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil 2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil 3Center for High Energy Physics, Tsinghua University, Beijing, China 4School of Physics State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China 5University of Chinese Academy of Sciences, Beijing, China 6Institute Of High Energy Physics (IHEP), Beijing, China 7Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China 8Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy, France 9Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France 10Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France 11Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France 12LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3, Paris, France 13I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany 14Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany 15Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany 16Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany 17School of Physics, University College Dublin, Dublin, Ireland 18INFN Sezione di Bari, Bari, Italy 19INFN Sezione di Bologna, Bologna, Italy 20INFN Sezione di Ferrara, Ferrara, Italy 21INFN Sezione di Firenze, Firenze, Italy 22INFN Laboratori Nazionali di Frascati, Frascati, Italy 23INFN Sezione di Genova, Genova, Italy 24INFN Sezione di Milano-Bicocca, Milano, Italy 25INFN Sezione di Milano, Milano, Italy 26INFN Sezione di Cagliari, Monserrato, Italy 27INFN Sezione di Padova, Padova, Italy 28INFN Sezione di Pisa, Pisa, Italy 29INFN Sezione di Roma Tor Vergata, Roma, Italy 30INFN Sezione di Roma La Sapienza, Roma, Italy 31Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands 32Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, Netherlands 33Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland 34AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland 35National Center for Nuclear Research (NCBJ), Warsaw, Poland 36Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania 37Petersburg Nuclear Physics Institute NRC Kurchatov Institute (PNPI NRC KI), Gatchina, Russia 38Institute of Theoretical and Experimental Physics NRC Kurchatov Institute (ITEP NRC KI), Moscow, Russia, Moscow, Russia 39Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia 40Institute for Nuclear Research of the Russian Academy of Sciences (INR RAS), Moscow, Russia 41Yandex School of Data Analysis, Moscow, Russia 42Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia 43Institute for High Energy Physics NRC Kurchatov Institute (IHEP NRC KI), Protvino, Russia, Protvino, Russia 44ICCUB, Universitat de Barcelona, Barcelona, Spain 45Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, Santiago de Compostela, Spain 46Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain 47European Organization for Nuclear Research (CERN), Geneva, Switzerland 48Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland 49Physik-Institut, Universität Zürich, Zürich, Switzerland 50NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine 
51Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine 52University of Birmingham, Birmingham, United Kingdom 53H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom 54Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom 55Department of Physics, University of Warwick, Coventry, United Kingdom 56STFC Rutherford Appleton Laboratory, Didcot, United Kingdom 57School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom 58School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom 59Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom 60Imperial College London, London, United Kingdom 61Department of Physics and Astronomy, University of Manchester, Manchester, United Kingdom 62Department of Physics, University of Oxford, Oxford, United Kingdom 63Massachusetts Institute of Technology, Cambridge, MA, United States 64University of Cincinnati, Cincinnati, OH, United States 65University of Maryland, College Park, MD, United States 66Los Alamos National Laboratory (LANL), Los Alamos, United States 67Syracuse University, Syracuse, NY, United States 68Laboratory of Mathematical and Subatomic Physics , Constantine, Algeria, associated to 2 69School of Physics and Astronomy, Monash University, Melbourne, Australia, associated to 55 70Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to 2 71Guangdong Provencial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou, China, associated to 3 72School of Physics and Technology, Wuhan University, Wuhan, China, associated to 3 73Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to 12 74Institut für Physik, Universität Rostock, Rostock, Germany, associated to 16 75Van Swinderen Institute, University of Groningen, Groningen, Netherlands, associated to 31 76Universiteit Maastricht, Maastricht, Netherlands, associated to 31 77National Research Centre Kurchatov Institute, Moscow, Russia, associated to 38 78National University of Science and Technology “MISIS”, Moscow, Russia, associated to 38 79National Research University Higher School of Economics, Moscow, Russia, associated to 41 80National Research Tomsk Polytechnic University, Tomsk, Russia, associated to 38 81University of Michigan, Ann Arbor, United States, associated to 67 aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil bLaboratoire Leprince-Ringuet, Palaiseau, France cP.N. 
Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia dUniversità di Bari, Bari, Italy eUniversità di Bologna, Bologna, Italy fUniversità di Cagliari, Cagliari, Italy gUniversità di Ferrara, Ferrara, Italy hUniversità di Genova, Genova, Italy iUniversità di Milano Bicocca, Milano, Italy jUniversità di Roma Tor Vergata, Roma, Italy kAGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Kraków, Poland lDS4DS, La Salle, Universitat Ramon Llull, Barcelona, Spain mHanoi University of Science, Hanoi, Vietnam nUniversità di Padova, Padova, Italy oUniversità di Pisa, Pisa, Italy pUniversità degli Studi di Milano, Milano, Italy qUniversità di Urbino, Urbino, Italy rUniversità della Basilicata, Potenza, Italy sScuola Normale Superiore, Pisa, Italy tUniversità di Modena e Reggio Emilia, Modena, Italy uUniversità di Siena, Siena, Italy vMSU - Iligan Institute of Technology (MSU-IIT), Iligan, Philippines wNovosibirsk State University, Novosibirsk, Russia xINFN Sezione di Trieste, Trieste, Italy ySchool of Physics and Information Technology, Shaanxi Normal University (SNNU), Xi’an, China zUniversidad Nacional Autonoma de Honduras, Tegucigalpa, Honduras
1 ASTRON - the Netherlands Institute for Radio Astronomy, Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, the Netherlands
2 Department of Electronic and Electrical Engineering, University of Bath, Claverton Down, Bath, BA2 7AY, UK
3 School of Science and Technology, Nottingham Trent University, Clifton Lane, Nottingham, NG11 8NS, UK
4 Space Environment and Radio Engineering, School of Engineering, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
5 Space Research Centre, Polish Academy of Sciences, Bartycka 18A, 00-716 Warsaw, Poland
6 Space Radio-Diagnostics Research Centre, University of Warmia and Mazury, ul. Romana Prawocheskiego 9, 10-719 Olsztyn, Poland
7 Technische Universität Berlin, Institut für Geodäsie und Geoinformationstechnik, Fakultät VI, Sekr. H 12, Hauptgebäude Raum H 5121, Straße des 17. Juni 135, 10623 Berlin, Germany
8 GFZ German Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam, Germany
9 Shell Technology Center, Bangalore, India
10 Science and Technology B.V., Delft, the Netherlands
11 RAL Space, UKRI STFC, Rutherford Appleton Laboratory, Harwell Campus, Oxfordshire, OX11 0QX, UK
12 Mt Stromlo Observatory, Research School of Astronomy and Astrophysics, Australian National University, Cotter Road, Weston Creek, ACT 2611, Australia
13 Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany
14 Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany
15 Thüringer Landessternwarte, Sternwarte 4, D-07778 Tautenburg, Germany
16 Jodrell Bank Centre for Astrophysics (JBCA), Department of Physics & Astronomy, Alan Turing Building, Oxford Road, University of Manchester, Manchester M139PL, UK
17 Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The Netherlands
18 LPC2E - Université d'Orléans / CNRS, 45071 Orléans cedex 2, France
19 Station de Radioastronomie de Nançay, Observatoire de Paris, PSL Research University, CNRS, Univ. Orléans, OSUC, 18330 Nançay, France
20 Radboud University, Department of Astrophysics/IMAPP, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
21 Nikhef, Science Park 105, 1098 XG Amsterdam, The Netherlands
22 Vrije Universiteit Brussel, Astronomy and Astrophysics Research Group, Pleinlaan 2, 1050 Brussel, Belgium
23 Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV Groningen, the Netherlands
24 Leibniz-Institut für Astrophysik Potsdam, An der Sternwarte 16, D-14482 Potsdam, Germany
25 ECAP, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erwin-Rommel-Str. 1, 91054 Erlangen, Germany
26 DESY, Platanenallee 6, 15738 Zeuthen, Germany
27 CIT, Rijksuniversiteit Groningen, Nettelbosje 1, 9747 AJ Groningen, The Netherlands
28 Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
29 Anton Pannekoek Institute, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam, The Netherlands
30 Fakultät für Physik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld, Germany
31 South African Radio Astronomy Observatory, 2 Fir Street, Black River Park, Observatory, Cape Town, 7925, South Africa
32 Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535, South Africa
33 Department of Physics and Electronics, Rhodes University, PO Box 94, Makhanda, 6140, South Africa
34 Jagiellonian University in Kraków, Astronomical Observatory, ul. Orla 171, PL 30-244 Kraków, Poland
35 Department of Physics, Khalifa University, PO Box 127788, Abu Dhabi, United Arab Emirates
36 Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, SE-439 92 Onsala, Sweden
37 JIVE, Joint Institute for VLBI-ERIC, Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, the Netherlands
38 LESIA & USN, Observatoire de Paris, CNRS, PSL, SU/UP/UO, 92195 Meudon, France

# A LOFAR Observation of Ionospheric Scintillation from Two Simultaneous Travelling Ionospheric Disturbances

R.A. Fallows^1 (corresponding author), B. Forte^2, I. Astin^2, T. Allbrook^2 (now at BAE Systems (operation) Ltd.), A. Arnold^2 (now an independent researcher), A. Wood^3, G. Dorrian^4, M. Mevius^1, H. Rothkaehl^5, B. Matyjasiak^5, A. Krankowski^6, J.M. Anderson^7,8, A. Asgekar^9, I.M. Avruch^10, M.J. Bentum^1, M.M. Bisi^11, H.R. Butcher^12, B. Ciardi^13, B. Dabrowski^6, S. Damstra^1, F. de Gasperin^14, S. Duscha^1, J. Eislöffel^15, T.M.O. Franzen^1, M.A. Garrett^16,17, J.-M. Grießmeier^18,19, A.W. Gunst^1, M. Hoeft^15, J.R. Hörandel^20,21,22, M. Iacobelli^1, H.T. Intema^17, L.V.E. Koopmans^23, P. Maat^1, G. Mann^24, A. Nelles^25,26, H. Paas^27, V.N. Pandey^1,23, W. Reich^28, A. Rowlinson^1,29, M. Ruiter^1, D.J. Schwarz^30, M. Serylak^31,32, A. Shulevski^29, O.M. Smirnov^33,31, M. Soida^34, M. Steinmetz^24, S. Thoudam^35, M.C. Toribio^36, A. van Ardenne^1, I.M. van Bemmel^37, M.H.D. van der Wiel^1, M.P. van Haarlem^1, R.C. Vermeulen^1, C. Vocks^24, R.A.M.J. Wijers^29, O. Wucknitz^28, P. Zarka^38, P. Zucca^1

(Received November 30, 2019)

###### Abstract

This paper presents the results from one of the first observations of ionospheric scintillation taken using the Low-Frequency Array (LOFAR). The observation was of the strong natural radio source Cassiopeia A, taken overnight on 18-19 August 2013, and exhibited moderately strong scattering effects in dynamic spectra of intensity received across an observing bandwidth of 10-80 MHz. Delay-Doppler spectra (the 2-D FFT of the dynamic spectrum) from the first hour of observation showed two discrete parabolic arcs, one with a steep curvature and the other shallow, which can be used to provide estimates of the distance to, and velocity of, the scattering plasma.
A cross-correlation analysis of data received by the dense array of stations in the LOFAR "core" reveals two different velocities in the scintillation pattern: a primary velocity of $\sim$20-40 m s$^{-1}$ with a north-west to south-east direction, associated with the steep parabolic arc and a scattering altitude in the F-region or higher, and a secondary velocity of $\sim$110 m s$^{-1}$ with a north-east to south-west direction, associated with the shallow arc and a scattering altitude in the D-region. Geomagnetic activity was low in the mid-latitudes at the time, but a weak sub-storm at high latitudes reached its peak at the start of the observation. An analysis of Global Navigation Satellite Systems (GNSS) and ionosonde data from the time reveals a larger-scale travelling ionospheric disturbance (TID), possibly the result of the high-latitude activity, travelling in the north-west to south-east direction, and, simultaneously, a smaller-scale TID travelling in a north-east to south-west direction, which could be associated with atmospheric gravity wave activity. The LOFAR observation shows scattering from both TIDs, at different altitudes and propagating in different directions. To the best of our knowledge this is the first time that such a phenomenon has been reported.

###### keywords: ionospheric scintillation - travelling ionospheric disturbances - instability mechanisms

## 1 Introduction

Radio waves from compact sources can be strongly affected by any ionised medium through which they pass. Refraction through large-scale density structures in the medium leads to strong lensing effects, where the radio source appears, if imaged, to focus, de-focus and change shape as the density structures in the line of sight themselves move and change. Diffraction of the wavefront by small-scale density structures leads to variations building up in the intensity of the wavefront with distance from the scattering medium, due to interference between the scattered waves, an effect known as scintillation. Observations of all these effects thus contain a great deal of information on the medium through which the radio waves have passed, including the large-scale density, turbulence, and the movement of the medium across the line of sight. Since the Second World War, a large number of studies have shown the effect of ionospheric density variations on radio signals, as reviewed by Aarons (1982), and this can lead to disruption for applications using Global Navigation Satellite Systems (GNSS, e.g., GPS), as thoroughly reviewed by, e.g., Hapgood (2017). The Low-Frequency Array (LOFAR - van Haarlem et al. (2013)) is Europe's largest low-frequency radio telescope, operating across the frequency band 10-250 MHz, with a dense array of stations in the Netherlands and, at the time of writing, 13 stations internationally from Ireland to Poland. It was conceived and designed for radio astronomy but, at these frequencies, the ionosphere can also have a strong effect on the radio astronomy measurement (de Gasperin et al., 2018). Ionospheric scintillation, which is rarely seen over the mid-latitudes on the high-frequency signals of GNSS, is seen almost continually in observations of strong natural radio sources by LOFAR.
The wide bandwidth available with LOFAR enables an easy and direct assessment of scattering conditions and how they change in a given observation, including whether scattering is weak or strong, or refractive effects dominate, and enables further information to be gleaned from delay-Doppler spectra (the 2-D FFT of a dynamic spectrum, variously termed the “scattering function”, “generalised power spectrum”, or “secondary spectrum” - here we use the term “delay-Doppler” spectrum as this clearly describes what the spectrum shows). In observations of interstellar scintillation these spectra can exhibit discrete parabolic arcs which can be modelled to give information on the distance to the scattering “screen” giving rise to the scintillation and its velocity across the line of sight (Stinebring et al., 2001; Cordes et al., 2006). Broadband observations of ionospheric scintillation are not common, but such arcs have been observed using the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA, McKay-Bukowski et al. (2014) – an independent station built using LOFAR hardware in Arctic Finland) in a study by Fallows et al. (2014). Model spectra produced by Knepp and Nickisch (2009) have also illustrated parabolic arc structures, particularly in the case of scattering from a thin scattering screen. The wide spatial distribution of LOFAR stations also enables scintillation conditions at these observing frequencies to be sampled over a large part of western Europe. A dense “core” of 24 stations, situated near Exloo in the north-east of the Netherlands over an area $\sim$3.5 km in diameter, further provides a more detailed spatial view of the scintillation pattern in its field of view. LOFAR thus enables detailed studies of ionospheric scintillation to be undertaken which can both reveal details which would be unavailable to discrete-frequency observations such as those taken using GNSS receivers, and act as a low-frequency complement to these observations to probe potentially different scattering scales. A number of different phenomena can lead to scattering effects in radio wave propagation through the mid-latitude ionosphere: Ionisation structures due to gradients in the spatial distribution of the plasma density can arise from a southward expansion of the auroral oval or from large- to small-scale travelling ionospheric disturbances (TIDs). Large-scale TIDs (LSTIDs) with wavelengths of about 200 km typically propagate southward after forming in the high-latitude ionosphere in response to magnetic disturbances (e.g. storms or sub-storms, Tsugawa et al. (2004)). On the other hand, medium-scale TIDs (MSTIDs) seem to form in response to phenomena occurring in the neutral atmosphere triggering atmospheric gravity waves (AGWs), which then propagate upwards to generate TIDs at ionospheric heights (Kelley, 2009). The morphology of MSTIDs varies with local time, season, and magnetic longitude. Their propagation shows irregular patterns that vary on a case-by-case basis, although they commonly seem to propagate mainly equatorward during winter daytime and westward during summer night-time (Hernández-Pajares et al., 2006, 2012; Tsugawa et al., 2007; Saito and Fukao, 1998; Emardson et al., 2013). Smaller-scale ionisation gradients, likely associated with the Perkins instability (Kelley, 2009, 2011), can then form as a consequence of the presence of MSTIDs, potentially leading to scintillation at LOFAR frequencies.
In this paper, we perform an in-depth analysis of ionospheric scintillation seen in an observation of the strong natural radio source Cassiopeia A (Cas A) overnight on 18-19 August 2013. This observation was amongst the first of its kind taken with LOFAR and exhibited quite strong scattering effects across the 10-80 MHz band. The purpose of this paper is both technical and scientific: We first describe the observation itself, and then demonstrate several techniques to analyse LOFAR data and show how these can bring out the details of ionospheric structures. Finally, we use supporting data from GNSS and ionosondes to get a broader picture of conditions in the ionosphere at the time and how these give rise to the scintillation seen by LOFAR. ## 2 The LOFAR Observation LOFAR observed Cas A (Right Ascension 23h23m24s, Declination +58°48’54”) between 21:05 UT on 18 August 2013 and 04:05 UT on 19 August 2013, recording dynamic spectra with a sampling time of 0.083 s over the band 2.24-97.55 MHz from each available station. The observing band was sampled with 7808 channels of 12.207 kHz each, but averaged over each successive 16-channel block to 488 subbands of 195.3125 kHz for the analyses described in this paper. At the time of observation the available stations were the 24 stations of the LOFAR “core”, 13 “remote” stations across the north-east of the Netherlands, and the international stations at Effelsberg, Unterweilenbach, Tautenburg, Potsdam, and Jülich (Germany), Nançay (France), Onsala (Sweden), and Chilbolton (UK). The reader is referred to van Haarlem et al. (2013) for full details of the LOFAR receiving system. The raw data for this observation can be obtained from the LOFAR long-term archive (lta.lofar.eu); observation ID L169059 under project “IPS”. We first illustrate the data in a more traditional sense. Figure 1 shows time series at three discrete observing frequencies of the data taken by LOFAR station CS002, at the centre of the core, and their associated power spectra. The power spectra show a fairly typical shape for intensity scintillation: An initial flat section at the lowest spectral frequencies represents scattering from larger-scale density structures which are close enough to the observer that the scattered waves have not had the space to fully interfere to develop a full intensity scintillation pattern; the turnover (often termed the “Fresnel Knee”) indicates the largest density scales for which the intensity scintillation pattern has fully formed; this is followed by a power-law in the spectra illustrating the cascade from larger to smaller density scales, which is cut off in these spectra by white noise due to the receiving system (the flat section covering high spectral frequencies). Figure 1: (a) Time series of intensity received at three discrete frequencies by LOFAR station CS002 during the observation of Cas A on 18-19 August 2013, plus, (b) and (c), power spectra of two 10-minute periods within these time series. However, the advantage of observing a natural radio source with LOFAR is that full dynamic spectra can be produced covering the full observed band. Dynamic spectra of data taken by LOFAR station CS002 are presented in Figure 2, which includes a dynamic spectrum of the full observation, alongside more detailed views of three different single hours of the observation to illustrate the range of scattering conditions seen.
The strength of the scattering can be seen much more clearly in this view, compared to time series’ from discrete observing frequencies. In general, scattering appears weak in this observation at the highest observing frequencies (where intensity remains highly correlated across the observing band) with a transition to strong scattering conditions as the observing frequency decreases. The frequency range displayed in these dynamic spectra is restricted to exclude the radio–frequency interference (RFI) which dominates below about 20 MHz and a fade in signal strength at the higher frequencies due to the imposition of a hard filter to exclude the FM waveband. Figure 2: Dynamic spectra of normalised intensity data taken by LOFAR station CS002 during the observation of Cas A on 18-19 August 2013. The dynamic spectrum of the entire observing period is given at the top, with zooms into three different hours of observation below to illustrate the range of conditions seen. White areas within the plots indicate where RFI was identified. RFI is still visible as white areas within the plots. These were identified by applying a median filter to the data using a window of (19.5 MHz $\times$ 4.2 s) to flatten out the scintillation pattern and then applying a threshold to identify the RFI. This method appears to be quite successful at identifying the RFI without also falsely identifying strong peaks in the scintillation as RFI. For subsequent analysis the RFI data points are replaced by an interpolation from nearby data, using the Python Astropy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018) library routine, “interpolate_replace_nans”. Normalisation of the data, to correct for long- period temporal variations in the system (e.g., gain variations resulting from the varying sensitivity of the receiving antenna array with source elevation), is carried out after RFI excision by dividing the intensities for each single frequency subband by a fitted 3rd-order polynomial. When analysing the data, a variety of scattering conditions are observed during the course of the observation, as indicated in Figure 2. Different conditions also naturally occurred over the various international stations compared to those observed over the Dutch part of LOFAR. In this paper we therefore focus our analysis on only the first hour of observation and the measurements taken by the 24 core stations. This allows us to demonstrate the analysis techniques and to investigate the reason for the scintillation seen over this interval. Observations from later in this dataset undoubtedly show other effects and may be discussed in a subsequent publication. ## 3 LOFAR Data Analysis Methods and Results ### 3.1 Delay-Doppler Spectra The first stage of analysis was the calculation of delay–Doppler spectra: These were created from the dynamic spectra using five-minute time slices, advancing every minute through the observation, following the methods described in Fallows et al. (2014). To avoid regions more heavily contaminated by RFI, the frequency band used was restricted to 28.5–64.1 MHz. Example spectra from the first hour are presented in Figure 3. Figure 3: Example delay-Doppler spectra from the first hour of observation, taken using five-minute chunks of the dynamic spectrum from CS103 over the frequency band 28.5-64.1 MHz. 
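To make this processing chain concrete, a minimal Python sketch of the steps described above is given below: RFI excision via a running median and threshold, replacement of flagged samples using the Astropy routine “interpolate_replace_nans”, per-subband polynomial normalisation, and the 2-D FFT yielding a delay-Doppler spectrum. The window sizes (in samples), the 5-sigma threshold, and the interpolation kernel are illustrative assumptions; only the overall method follows the text.

```python
import numpy as np
from scipy.ndimage import median_filter
from astropy.convolution import Gaussian2DKernel, interpolate_replace_nans

def excise_rfi(dyn, nu_win=100, t_win=50, nsigma=5.0):
    """Flag RFI in a dynamic spectrum dyn[subband, time] by flattening the
    scintillation pattern with a running median and thresholding residuals.
    For 195.3125 kHz subbands sampled at 0.083 s, (100 subbands x 50 samples)
    roughly matches the (19.5 MHz x 4.2 s) window quoted in the text; the
    5-sigma threshold is an assumption."""
    resid = dyn - median_filter(dyn, size=(nu_win, t_win))
    flagged = np.where(np.abs(resid) > nsigma * np.nanstd(resid), np.nan, dyn)
    # Replace flagged points by interpolation from neighbouring data
    return interpolate_replace_nans(flagged, Gaussian2DKernel(x_stddev=2))

def normalise(dyn, t):
    """Divide each subband by a fitted 3rd-order polynomial in time to
    remove slow gain variations (e.g. varying antenna sensitivity)."""
    return np.array([sb / np.polyval(np.polyfit(t, sb, 3), t) for sb in dyn])

def delay_doppler(dyn_slice, dt, dnu):
    """Delay-Doppler spectrum: the 2-D FFT of a (here, five-minute) slice of
    the normalised dynamic spectrum, returned as power with the zero
    delay/Doppler frequencies centred."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(dyn_slice - dyn_slice.mean()))) ** 2
    doppler = np.fft.fftshift(np.fft.fftfreq(dyn_slice.shape[1], dt))  # Hz
    delay = np.fft.fftshift(np.fft.fftfreq(dyn_slice.shape[0], dnu))   # s
    return doppler, delay, power
```

In this sketch the five-minute slices advancing every minute, as used for Figure 3, would simply be successive views into the time axis of the normalised array.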
The spectra show two clear arcs: the first is a steeper arc which varies in curvature throughout the first hour (henceforth labelled for convenience as the “primary arc”); the second is a very shallow arc (henceforth labelled as the “secondary arc”) which remains stable for the first 40 minutes of the observation before fading away. By the end of the first hour of observation the primary arc also becomes less distinctive for a short while before the delay–Doppler spectra again show distinctive structure, including a return of the secondary arc. The variability of the curvature of the primary arc appears to follow a wave–like pattern during this part of the observation, as displayed in Figure 4. Here, simple parabolas involving only the square term ($y=Cx^{2}$ where $C$ is the curvature) were plotted with various curvatures until a reasonable eyeball fit was achieved, and the resulting curvatures plotted for every minute of observation for the first hour. It proved impossible to achieve reasonable fits using least-squares methods due to confusion from non–arc structure in the spectra: Fitting curvatures to these scintillation arcs is a well–known problem in the interstellar scintillation field and new methods of attempting this were presented at a recent workshop, but they are not easily described and have yet to be published. Hence, we do not attempt their application here. Figure 4: Curvatures of the steeper arc seen in delay-Doppler spectra calculated using data from CS103, from simple parabolas fitted by eye. The grey bounds represent an estimated error. The presence of two scintillation arcs likely indicates that scattering is dominated by two distinct layers in the ionosphere. A simple analysis, as described in Fallows et al. (2014), can be used to estimate the altitude of the scattering region with a basic formula relating arc curvature $C$ to velocity $V$ and distance $L$ along the line of sight to the scattering region (Cordes et al., 2006): $L=2CV^{2}$ (1) The square term for the velocity illustrates the importance of gaining a good estimate of velocity to be able to accurately estimate the altitude of the scattering region via this method. ### 3.2 Scintillation Pattern Flow The core area of LOFAR contains 24 stations within an area with a diameter of $\sim$3.5 km. When viewing dynamic spectra from each of these stations it is clear that the scintillation pattern is mobile over the core (i.e., temporal shifts in the scintillation pattern are clear between stations) but does not necessarily evolve significantly. Therefore, the flow of the scintillation pattern over the core stations may be viewed directly by simply plotting the intensity received, for a single subband, by each station on a map of geographical station locations, for data from successive time steps. A movie (CasA_20130818_NL.mp4) of the scintillation pattern flow through the observation is published as an online supplement to this article. The result, for 12 example time steps, is displayed in Figure 5, where a band of higher intensities can be seen to progress from north-west to south-east over the core. It should be noted that the data were integrated in time to 0.92 s for this purpose, to reduce both flicker due to noise and the duration of the movie. This does not average over any scintillation structure in this observation; structure with periodicities shorter than one second would be obvious in the delay–Doppler spectra as an extension of the arc(s) to greater than 0.5 Hz along the Doppler frequency axis. 
Figure 5: Normalised intensities received by all core stations at an observing frequency of 44.13 MHz, plotted on a geographical map of the stations. The intensities are colour-coded using a colour scale from yellow to purple over a range of 0.8 to 1.3. Times are at $\sim$10 s intervals from 21:22:25 UT at top left to 21:24:15 UT at bottom right, and each plot uses data samples with an integration time of 0.92 s. Plot diameter is $\sim$4.5 km. However, this is not the entire picture because the lines of sight from radio source to receivers are moving through the ionosphere as the Earth rotates, meaning that the scintillation pattern flow observed is a combination of flow due to the movement of density variations in the ionosphere and the movement of the lines of sight themselves through the ionosphere. Since the speed with which any single point on a line of sight moves through the ionosphere depends on the altitude of that point (the so-called ionosphere “pierce point”), this altitude needs to be either assumed or calculated to estimate a correction to the overall flow speed to obtain the natural ionospheric contribution. This introduces a natural uncertainty into estimates of velocity. Figure 6 shows the track of an ionospheric pierce-point at an assumed altitude of 200 km (an altitude chosen as representative of a typical F-region altitude where large-scale plasma structures are commonly observed) for the line of sight from core station CS002 to the radio source Cassiopeia A through the 7-hour course of the observation to illustrate this movement. Although not the subject of this paper, it is worth noting that an east to west flow seen later in the observation appears, if the 200 km pierce point is assumed, to be solely due to the lines of sight moving across a mostly static ionospheric structure (see the online movie), further illustrating the necessity to properly account for the contribution from line of sight movement when assessing ionospheric speeds. Figure 6: Map showing the track of the 200 km pierce point of the line of sight from core station CS002 to Cassiopeia A from 2013-08-18T21:05:00 to 2013-08-19T04:05:00 UT. The thicker orange part of the track highlights the first hour of the observation. The black line winding a path across the centre of the image is the location of the border between the Netherlands and Germany. The location of CS002 is marked with a black star. The movie of the scintillation pattern flow, assuming a 200 km pierce point, shows a clear general north-west to south-east flow during the first hour of the observation, but also indicates some short (minutes) periods of confusion in which a north-east to south-west component might be just about discernible. Any second flow is likely to be associated with a second ionospheric layer and so warrants further investigation. ### 3.3 Estimating Velocities The representation of the scintillation pattern flow in movie form gives a direct and broad picture of the flow pattern and is very helpful in discovering short time-scale changes in speed and direction. However, a cross-correlation analysis is still necessary to assess the actual velocities.
Correlation functions are calculated as follows: * • Time series of intensity received by each station are calculated by averaging over the frequency band 55–65 MHz, with these frequencies chosen as the scintillation pattern remains highly correlated over this band; * • For each three-minute data slice, advancing the start time of each successive slice by 10 s: * – Calculate auto- and cross-power spectra using intensities from every station pair within the LOFAR core; * – Apply low- and high-pass filters to exclude the DC-component and any slow system variation unlikely to be due to ionospheric effects, and white noise at the high spectral frequencies. The white noise is also subtracted using an average of spectral power over the high frequencies; * – Inverse-FFT the power spectra back to the time domain to give auto- and cross-correlation functions. In the analysis the high- and low-pass filter values were set to 0.01 Hz and 0.5 Hz respectively. This process results in a large set of cross-correlation functions for each time slice, each of which has an associated station-station baseline and a primary peak at, typically, a non-zero time delay from which a velocity can be calculated. However, the direction of the scintillation pattern flow still needs to be found for calculation of the actual velocity. For this, directions were assumed for each degree in the full 360–degree range of possible azimuth directions and the velocities re-calculated using the components of all baselines aligned with each assumed direction. This results, for each time slice, in 360 sets of velocities and from each set a median velocity and standard deviation about the median can be calculated (the median is used as this is less susceptible to rogue data points than the mean). The actual flow direction corresponds to the azimuth with the maximum median velocity and minimum standard deviation, as illustrated in Figure 7. From this analysis the primary velocity of $\sim$20–40 m s-1 travelling from north-west to south-east is found, illustrated in Figure 7, corresponding to the obvious scintillation pattern flow seen in the movie. However, the presence of a second flow is still not obvious, although a hint of it can be seen in, for example, the second peak in the median velocity seen in Figure 7. Figure 7: Plots for single 3-minute time slices of the median velocity and standard deviation of velocities about the median versus azimuth direction, calculated from the range of velocities found from all cross-correlation functions with the baselines within each station pair re-calculated for each assumed azimuth direction, in the usual form, counting clockwise from north. (a) Time slice commencing 21:05:00 UT using cross-correlations calculated after applying a high-pass filter at 0.01 Hz; (b) Time slice commencing 21:15:00 UT using cross-correlations calculated after applying a high-pass filter at 0.07 Hz. Note that the same y-axis is used for both velocity and standard deviation. A closer look at the auto-power spectra yielded the key to finding the second flow. Many spectra show a “bump” which can be viewed as being a second spectrum superposed on the main one. This is illustrated in Figure 8. To isolate this part of the spectrum, the spectra were re-filtered with a high-pass filter value of 0.07 Hz (the low-pass filter value remained the same), and correlation functions re-calculated.
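The cross-correlation and azimuth-scan steps described above can be summarised in the following minimal Python sketch. It assumes the peak lag of each cross-correlation function has already been located, that baselines are supplied as (east, north) components in metres, and it selects the azimuth of maximum median speed only (the text additionally uses the minimum standard deviation); the 10 m minimum baseline projection is likewise an assumed cut to avoid near-zero components.

```python
import numpy as np

def bandpass_xcorr(ts1, ts2, dt, f_lo=0.01, f_hi=0.5):
    """Cross-correlation via the cross-power spectrum, band-pass filtered
    between f_lo and f_hi (0.01-0.5 Hz in the text) to reject slow system
    drifts and high-frequency white noise."""
    n = len(ts1)
    f = np.fft.rfftfreq(n, dt)
    X = np.fft.rfft(ts1 - ts1.mean()) * np.conj(np.fft.rfft(ts2 - ts2.mean()))
    X[(f < f_lo) | (f > f_hi)] = 0.0
    cc = np.fft.fftshift(np.fft.irfft(X, n))
    lags = (np.arange(n) - n // 2) * dt
    return lags, cc

def azimuth_scan(peak_lags, baselines_en, min_comp=10.0):
    """Scan all 360 azimuth directions: project each station-station
    baseline (east, north components in metres) onto the assumed flow
    direction and convert the peak lags (s) into speeds (m/s). Returns
    the azimuth giving the maximum median speed, with its scatter."""
    best = (None, -np.inf, np.inf)
    for az in range(360):
        a = np.radians(az)  # azimuth counted clockwise from north
        comp = baselines_en @ np.array([np.sin(a), np.cos(a)])
        ok = (np.abs(comp) > min_comp) & (peak_lags != 0)
        if not ok.any():
            continue
        v = comp[ok] / peak_lags[ok]
        if np.median(v) > best[1]:
            best = (az, np.median(v), np.std(v))
    return best  # (azimuth in degrees, median speed, standard deviation)
```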
After following the same analysis as above to find median velocities and standard deviations, the second flow was found, as illustrated in Figure 7. Figure 8: Example power spectrum calculated from three minutes of intensity data received by CS003. The black curve is the raw spectrum, the blue curve is the filtered and noise-subtracted spectrum. The locations of the low-pass filter and both high-pass filters used are illustrated. The analysis, using both high-pass filter values, has been carried out for the full data set. The velocities and associated directions in degrees azimuth for the first hour of the observation are given in Figure 9. Error bounds in the velocities are calculated as the standard deviation about the median of all velocity values available for the calculated azimuth direction. Figure 9: Top: Velocities calculated for the first hour of observation from cross-correlations created after filtering using the two different high-pass filter values. Bottom: Directions of these velocities, in degrees azimuth. The higher velocity (henceforth labelled as the “secondary velocity”) shows some scatter: Periods where the secondary velocity drops to around the primary velocity values are due to the secondary velocity not being detected at these times; in these cases, it can still be detected in short-duration drops of velocity if correlation functions are re-calculated using an even higher high-pass filter value (the bump in these spectra appears shifted to slightly higher spectral frequencies). Values which decrease/increase towards/away from the primary velocity values likely represent a mix between the two velocities. In some instances, the larger error bars in the velocities may also indicate that the standard deviation has been broadened by velocity values dominated by the other flow. The more extended period of scatter around 21:40 to 22:00 UT is a period where the secondary velocity is less apparent and the secondary scintillation arc fades from the delay-Doppler spectra. This indicates that the secondary structure is restricted in either space or time, either moving out of the field of view of the observation or ceasing for a period around 21:40 UT. It gives a first indication that the secondary velocity is associated with the secondary scintillation arc. ### 3.4 Estimating Scattering Altitudes The velocities can now be used to estimate scattering altitudes, using the curvatures of the scintillation arcs and the simple formula given in Equation 1. Initially the movement of the line of sight through the ionosphere is not accounted for, since this correction also requires an estimate of the pierce-point altitude to be reasonably calculated. Therefore an initial calculation of the scattering altitudes is made based on velocity values which are not corrected for this movement. Using the primary velocities and combining these with the curvatures of the primary arc (Figure 4) in Equation 1, a range of distances, L, along the line of sight to the scattering region is found. These distances are converted to altitudes by accounting for source elevation (Cas A increased in elevation from 55∘ to 64∘ during the first hour of observation). This process resulted in a range of altitudes to the scattering region of 200 to 900 km. Doing the same for the secondary velocities and applying an arc curvature of 3.2$\pm$0.3 for the secondary scintillation arc gives estimated scattering altitudes of only $\sim$70 km.
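As a quick numerical check of these figures (a sketch, taking the quoted curvature value such that Equation 1 returns $L$ in metres when $V$ is in m s-1): for the secondary arc, $L=2CV^{2}=2\times 3.2\times(110)^{2}\approx 77$ km along the line of sight, and correcting for source elevation gives altitudes of $L\sin(55^{\circ})\approx 63$ km to $L\sin(64^{\circ})\approx 70$ km, consistent with the $\sim$70 km quoted above.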
If instead the primary/secondary velocities are combined with the secondary/primary arc curvatures respectively, then the resulting scattering altitudes are clearly unreasonable (the secondary arc, primary velocity combination gives estimated altitudes of only $\sim$10 km for example), lending further credence to the secondary velocity being associated with the secondary arc. Velocity contributions from the line of sight movement are calculated as follows: For each time slice, t, the geographical locations beneath the pierce point of the line of sight through the ionosphere corresponding to the estimated scattering altitude at t are calculated, for both t and t + $\delta$t, where $\delta$t is taken as 3 minutes (the actual value is unimportant for this calculation). A velocity and its direction are found from the horizontal distance between these two locations and the direction of travel from one to the other. The general direction of the movement of the line of sight through the ionosphere is indicated by the orange line in Figure 6. Although the high scattering altitudes related to the primary scintillation arc and primary scintillation velocities lead to line-of-sight movements of up to $\sim$35 m s-1, this movement is almost perpendicular to the direction of the primary scintillation velocity, limiting the actual contribution to $\sim$5 m s-1. The line of sight movement is, however, in a very similar direction to the secondary velocities but the low corresponding scattering altitudes also limit the contribution in this case to $\sim$5 m s-1. An iterative procedure is then followed to correct the scintillation velocities for line-of-sight movement at the calculated scattering altitudes, re-calculate these altitudes, and re-calculate the line-of-sight movement. This procedure converges to a set of final scattering altitudes within 5 iterations. These are presented in Figure 10, with error bounds taken as the lowest and highest possible altitudes resulting from applying this procedure using the lower and upper limits of the arc curvature and scintillation velocity error bounds. Figure 10: Scattering altitudes estimated using Equation 1, the primary velocities and primary scintillation arc curvatures (blue curve) and the secondary velocities and the curvature of the secondary scintillation arc (red dashed curves). The range of scattering altitudes encompassed by the error bounds is quite large in some instances, particularly where the calculated altitudes are higher. Although the square term for the velocity in Equation 1 could lead to the natural conclusion that the error in the velocity dominates the error in scattering altitude, the errors in the velocity calculations are, for the most part, relatively small. Nevertheless, the error in the secondary velocity does appear to be the dominant error in the lower range of scattering altitudes (the red curves in Figure 10). However, the dominant error for the higher range of scattering altitudes appears to be the scintillation arc curvatures, illustrating the importance of developing accurate fitting methods for these curvatures. Despite these concerns, it is clear that scattering is seen from two layers in the ionosphere; the primary scintillation arc arises from scattering in the F-region and the secondary scintillation arc arises from scattering much lower down in the D-region. Plasma decays by recombination with neutral species. In the F-region these densities are lower and so plasma lifetimes are longer than in the D-region.
Typical plasma lifetimes in the F-region are of the order of hours, while they are of the order of minutes in the D-region. Hence the structures seen in each level may have a different source and time history. ## 4 Conditions in the Ionosphere We now investigate the overall ionospheric conditions at the time and hence the possible cause(s) of the scintillation seen by LOFAR. ### 4.1 Geomagnetic Conditions The overall geomagnetic conditions at the time are given in Figure 11, which shows 24–hour traces of the H–component of magnetic field for a representative set of magnetometers from the Norwegian magnetometer chain for 18 August 2013. Activity can be described as unsettled, with a minor substorm at high latitudes, peaking at the start of the LOFAR observation. However, geomagnetic activity remains quiet further south, and Kp took a value of 1 at 21 UT on 18th August 2013, indicating that this is unlikely to be a direct cause of the scintillation seen at LOFAR latitudes. We therefore investigate whether TIDs were present at the time and whether these could be consistent with the scintillation seen by LOFAR. Figure 11: Traces of the H-component of the geomagnetic field recorded on 18 August 2013 by a selection of magnetometer stations from the Norwegian chain. From top to bottom these are, along with their geomagnetic latitudes (2004, altitude 100 km): Longyearbyen (75.31∘N), Bjørnøya (71.52∘N), Nordkapp (67.87∘N), Tromsø (66.69∘N), Rørvik (62.28∘N), and Karmøy (56.43∘N). ### 4.2 Ionosonde Data The presence of TIDs can be detected through the simultaneous appearance of wave-like structures on multiple sounding frequencies recorded by an ionosonde. This method is generally limited to a single point of observation and detection. Mapping the spatial extent of TIDs can be attempted by comparing multiple traces from different ionosondes, but this is limited by the low density of ionosondes in a given region. Measurements from the ionosonde in Chilton (UK) do indeed suggest the presence of wave-like patterns which, in principle, could be due to a large-scale TID propagating southward and/or an MSTID triggered by a local atmospheric gravity wave (Figure 12). Figure 12: Multiple traces from the ionosonde in Chilton (UK) recorded between 20:00 18 August 2013 and 06:00 19 August 2013. ### 4.3 GNSS Data However, measurements from ground-based GNSS receivers offer a more comprehensive view of the characteristics of any MSTIDs present (Kelley, 2009). In the present study, we focus on perturbations in the slant Total Electron Content (STEC) observed over the evening of 18 August 2013 from a network of GNSS stations around the LOFAR core stations (see Figure 13). These stations are sufficient to infer the presence of TIDs and to estimate the upper spatial scale-size limit of smaller-scale irregularities causing the intensity scintillation seen at LOFAR wavelengths. The presence of TID-induced perturbations can be deduced from the presence of wave-like residuals on the STEC calculated for each satellite-receiver pair. Figure 13: Map showing the locations of the GNSS stations used. STEC was calculated and detrended following the methods of Hernández-Pajares et al. (2006), with the detrending carried out according to: $\Delta STEC\left(t\right)=STEC\left(t\right)-\frac{STEC\left(t+\tau\right)+STEC\left(t-\tau\right)}{2}\left[TECu\right]$ (2) where $\tau=300\,{\rm s}$.
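A minimal Python sketch of this detrending for a regularly sampled STEC series is given below; only $\tau=300\,{\rm s}$ is taken from the text, while the 30 s sampling interval is an assumption.

```python
import numpy as np

def detrend_stec(stec, dt=30.0, tau=300.0):
    """Eq. (2): subtract from each sample the mean of the samples tau
    seconds before and after it. stec is regularly sampled every dt
    seconds; the result is in TECu, with the series ends left undefined."""
    k = int(round(tau / dt))
    d = np.full(stec.shape, np.nan)
    d[k:-k] = stec[k:-k] - 0.5 * (stec[2 * k:] + stec[:-2 * k])
    return d
```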
It is worth noting that the measured carrier phases $L_{1}$ and $L_{2}$ vary with time as a consequence of the motion of GNSS satellites relative to a given receiver on the Earth’s surface. As such, the spatial and temporal variabilities of ionisation gradients (such as those connected with TIDs and corresponding instabilities) become entangled. The various detrending methods (similar to equation 2) lead to an estimate of ionisation gradients by considering temporal gradients only, with spatial and temporal variabilities intrinsically entangled in the GNSS observations. Figure 14 shows examples of wave-like residuals on STEC for one pair of GNSS stations (Dentergem and Bruxelles in Belgium) observing the same GNSS satellite. The wave pattern is strongest over the first two hours shown (18:00 - 20:00 UT) but then weakens considerably by the start of the LOFAR observation, although it remains evident. STEC from the observations of both stations appears well–correlated, with the Bruxelles dataset lagging behind that of Dentergem. Since Dentergem lies to the WNW of Bruxelles, this suggests a strong westerly component in the direction of travel, which could correspond with the secondary velocity seen by LOFAR. Figure 14: Example of a satellite-station pair. (a) PRN01 as observed on 18 August 2013 from Dentergem (DENT, blue line) and Bruxelles (BRUX, red line), both in Belgium, with baseline oriented from WNW to ESE; (b) azimuth/elevation plot for PRN01 as observed from Dentergem. Figure 15 shows hourly plots of the overall geographical distribution of the STEC residuals calculated for all satellite passes seen within each hour by the GNSS stations used. The patterns shown in Figure 15 suggest a spatially and temporally varying propagation of MSTID wavefronts with components along the NE-SW as well as the NW-SE directions. Furthermore, the examples shown in Figure 15 also indicate the presence of smaller-scale ionisation structures in proximity to the wavefronts of the MSTIDs. This suggests that the scintillation seen by LOFAR is likely associated with the perpendicular propagation of two MSTIDs. However, the STEC variations here are also seen to fade by the start of the LOFAR observation. Figure 15: Hourly geographical distribution of all STEC perturbations in the evening of 18 August 2013: (a) 18:00-19:00 UT, (b) 19:00-20:00 UT, (c) 20:00-21:00 UT, and (d) 21:00-22:00 UT. A further illustration looks at the overall power spectral densities for the STEC residuals on all satellite–receiver pairs considered here over the hourly periods 20:00 UT to 21:00 UT and 21:00 UT to 22:00 UT (Figure 16). The earlier hour is chosen alongside the hour covering the LOFAR data period as this better displays the components seen in the spectra. The temporal frequencies f can be converted into spatial scales L by assuming a given velocity VREL for the motion of the ionospheric structures across a GNSS raypath. That is: $L=\frac{V_{REL}}{f}$ (3) where VREL = VIONO-VSAT is the relative velocity between the velocity of the ionospheric structures and the scan velocity of a single raypath (at the same shell height). VSAT can be of the order of a few tens of m s-1 at 300 km. Figure 16: Power Spectral Densities of all the TEC residuals considered during the hours (a) 20:00-21:00 UT and (b) 21:00-22:00 UT. The arrows indicate the two components considered in the text.
There appear to be two main components in the energy cascade from larger to smaller ionisation scales: one with a period of $\sim$1666 s, and another component with a period of $\sim$666 s. Taking VREL to be $\sim$100 m s-1 (the secondary velocity seen by LOFAR as this is in a south-westerly direction and the example GNSS data in Figure 14 indicate a westerly component), these periodicities correspond to spatial scales of the order of 166 km and 66 km respectively. Beyond these scales the STEC analysis is limited by the sensitivity of the technique (Tsugawa et al., 2007), as the Power Spectral Densities reach the noise floor (Figure 16). These orders of magnitude suggest the presence of a larger–scale TID together with a smaller–scale TID (Kelley, 2009), while the energy cascade that can be observed through the Power Spectral Densities indicates that the large–scale structure breaks down into small–scale structures, likely owing to some instability mechanism. ### 4.4 Estimation of Scale Sizes of Plasma Structures The scale sizes of the plasma structures causing the scintillation seen by LOFAR can also be calculated. The variations in the intensity of the received signal are caused by irregularities with a spatial scale size ranging from the Fresnel dimension to an order of magnitude below this value (Basu et al., 1998). The Fresnel length DF is related to the wavelength of the radio wave $\lambda$ and the line of sight distance from the receiver to the scattering region L: $D_{F}=\sqrt{2\lambda L}$ (4) The Fresnel length was calculated for plasma structures at altitudes of 70 km, 200 km, 350 km and 700 km, elevations of 55∘ and 64∘, and at frequencies of 25.19 MHz, 35.15 MHz and 60.15 MHz, and the results are shown in Table 1. The altitudes were chosen to cover the range of altitudes identified for the primary and secondary features in the LOFAR analysis, with the addition of 350 km as this altitude is commonly used within studies using GNSS satellites. The elevations of the radio source at the start and the end of the first hour of observation were used to establish the range of Fresnel scales for each altitude. The frequencies were chosen to match Figure 1. Table 1 shows that the Fresnel length ranges between $\sim$1 km and $\sim$5 km and therefore the plasma structures causing the variations in signal intensity are likely to have a spatial scale size between $\sim$100 m and $\sim$5 km. The velocities calculated from the LOFAR data indicate that such structures would take tens of seconds to pass through the source-to-receiver line and the intensity variations in the observed signal occur on a similar timescale.

| Frequency \ Altitude | 70 km | 200 km | 350 km | 700 km |
|---|---|---|---|---|
| 25.19 MHz | 1.4 | 2.3–2.4 | 3.0–3.2 | 4.3–4.5 |
| 35.15 MHz | 1.2 | 1.9–2.0 | 2.6–2.7 | 3.6–3.8 |
| 60.15 MHz | 0.9 | 1.5–1.6 | 2.0–2.1 | 2.8–2.9 |

Table 1: The Fresnel length at altitudes of 70 km, 200 km, 350 km and 700 km for three different frequencies received by LOFAR station CS002. The ranges represent calculation using the source elevation for the start and for the end of the first hour of observation. Values are in km. (A short numerical sketch reproducing these values is given below.) ## 5 Further Discussion Geomagnetic activity was low in the mid-latitudes at the time, so enhanced activity was unlikely to be the direct cause of the scintillation observed. However, a weak sub-storm was seen at high latitudes and this reached its peak at the time of the start of the observation.
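The Fresnel scales in Table 1 above can be reproduced with a few lines of Python (a sketch: the line-of-sight distance is taken as the altitude divided by the sine of the source elevation, as in Sect. 4.4):

```python
import numpy as np

C_LIGHT = 2.998e8  # speed of light, m/s

def fresnel_length(freq_hz, altitude_m, elevation_deg):
    """Eq. (4): D_F = sqrt(2 * lambda * L), with L the line-of-sight
    distance from the receiver to the scattering region."""
    lam = C_LIGHT / freq_hz
    L = altitude_m / np.sin(np.radians(elevation_deg))
    return np.sqrt(2.0 * lam * L)

# Example: 25.19 MHz and a 200 km screen at the two elevations of Cas A
for elev in (55.0, 64.0):
    print(f"{fresnel_length(25.19e6, 200e3, elev) / 1e3:.1f} km")  # 2.4, 2.3
```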
An analysis of GNSS and ionosonde data reveals the presence of an MSTID travelling in the north-west to south-east direction. The larger-scale nature of this TID, and its direction of travel, are strongly consistent with the primary velocity and F-region scattering altitudes seen in the LOFAR observation. It is possible that this TID was caused by the geomagnetic activity at high latitude, but this is not confirmed. Simultaneously, an MSTID is also present travelling in a north-east to south-west direction which would most likely be associated with an atmospheric gravity wave propagating up from the neutral atmosphere. Its smaller–scale nature, its direction of travel, and its likely low-altitude source make it highly consistent with the secondary velocity and D–region scattering altitudes observed by LOFAR. The amplitude of TID activity observed through GNSS STEC residuals decreased after 20:00 UT (as visible from Figure 14 as well as from the comparison of hourly geographical maps in Figure 15). However, the LOFAR observation did not start until 21:05 UT and the presence of scintillation on the radio frequencies observed by LOFAR remained significant for much of the first hour of observation. Whilst the presence of MSTIDs seems evident from the ionosonde multiple traces and GNSS STEC residuals in the region considered, their signatures do not appear simultaneously above the LOFAR core stations between 21:00 UT and 22:00 UT. This can be explained by the inability of GNSS to detect smaller amplitudes in STEC residuals, as the noise floor is encountered for observations with pierce points above the core LOFAR stations (Figures 15 and 16). The scale sizes of plasma structures calculated for the LOFAR data indicate that these are an order of magnitude lower than those estimated from GNSS STEC. Smaller ionisation scales developing, for example, through the Perkins instability could induce scintillation on the VHF radio frequencies received by LOFAR but not on the L-band frequencies of GNSS. Hence, scintillation from these mid-latitude smaller-scale ionisation structures, formed through the Perkins instability in conjunction with the presence of TIDs, is likely to be what is detected through LOFAR. ## 6 Conclusions and Outlook This paper presents the results from one of the first observations of ionospheric scintillation taken using LOFAR: an observation of the strong natural radio source Cassiopeia A taken overnight on 18–19 August 2013. The observation exhibited moderately strong scattering effects in dynamic spectra of intensity received across an observing bandwidth of 10–80 MHz. Delay–Doppler spectra from the first hour of observation showed two discrete parabolic arcs, one with a steep and variable curvature and the other with a shallow and static curvature, indicating that the scintillation was the result of scattering through two distinct layers in the ionosphere. A cross-correlation analysis of the data received by stations in the LOFAR core reveals two different velocities in the scintillation pattern: A primary velocity of $\sim$20-40 m s-1 is observed travelling in a north-west to south-east direction, which is associated with the primary parabolic arc and altitudes of the scattering layer varying in the range $\sim$200–700 km. A secondary velocity of $\sim$110 m s-1 is observed travelling in a north-east to south-west direction, which is associated with the secondary arc and a much lower scattering altitude of $\sim$60–70 km.
The latter velocity is associated with a secondary “bump” seen at higher spectral frequencies in power spectra calculated from time series of intensities, indicating that it is more strongly associated with smaller–scale structure in the ionosphere. GNSS and ionosonde data from the time suggest the presence of two MSTIDs travelling in perpendicular directions. The F-region scattering altitudes calculated from the LOFAR primary scintillation arc and primary velocity, and the larger density scales associated with this, suggest that this is associated with a larger–scale TID seen in GNSS data potentially resulting from high–latitude geomagnetic activity. The D-region scattering altitudes of the secondary arc and secondary velocity suggest an atmospheric gravity wave source for a smaller-scale TID. These TIDs trigger an instability which leads to the breakdown of the large-scale density structure into smaller scales, giving rise to the scintillation observed. In the mid-latitude ionosphere the Perkins mechanism is the most likely instability and the features of the smaller-scale density variations observed seem consistent with this. To the best of our knowledge this is the first time that two TIDs have been directly observed simultaneously at different altitudes. This observation demonstrates that LOFAR can be a highly valuable tool for observing ionospheric scintillation in the mid–latitudes over Europe and enables methods of analysis to be used which give greater insight into the likely sources of scattering and could be used to improve modelling of them. With a far greater range of frequencies (multi–octave if the LOFAR high–band is also used) and fine sampling both across the frequency band and in time, LOFAR observations offer a wider sensitivity than that available to GNSS measurements. The analysis techniques shown in this paper also demonstrate that LOFAR can observe ionospheric structures at different altitudes simultaneously, a capability not commonly available for GNSS observations. It also complements these measurements by probing potentially different scintillation regimes to those observed by GNSS. Since this observation was taken, many more have been carried out under a number of projects, recording ionospheric scintillation data at times when the telescope would otherwise be idle. These demonstrate a wide range of scintillation conditions over LOFAR, some of which are seen only very occasionally and perhaps by only one or two of the international stations, illustrating the value to be had by monitoring the ionosphere at these frequencies. A Design Study, LOFAR4SpaceWeather (LOFAR4SW – funded from the European Community’s Horizon 2020 Programme H2020 INFRADEV-2017-1 under grant agreement 777442), currently underway, will design a possible upgrade to LOFAR to enable, amongst other space weather observations, ionospheric monitoring in parallel with the regular radio astronomy observations. Such a design, if implemented, would enable a full statistical study of ionospheric scintillation at these frequencies, alongside the advances in scintillation modelling and our understanding of the ionospheric conditions causing it which can be gleaned in focussed studies such as that presented here. ###### Acknowledgements. This paper is based on data obtained with the International LOFAR Telescope (ILT) under project code “IPS”. LOFAR (van Haarlem et al., 2013) is the Low Frequency Array designed and constructed by ASTRON.
It has observing, data processing, and data storage facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefitted from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Université d’Orléans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland. The work carried out at the University of Bath was supported by the Natural Environment Research Council [Grant Number NE/R009082/1] and by the European Space Agency/Thales Alenia Space Italy [H2020-MOM-TASI-016-00002]. We thank Tromsø Geophysical Observatory, UiT the Arctic University of Norway, for providing the lyr, bjn, nor, tro, rvk, and kar magnetometer data. The Kp index and the Chilton ionosonde data were obtained from the U.K. Solar System Data Centre at the Rutherford Appleton Laboratory. Part of the research leading to these results has received funding from the European Community’s Horizon 2020 Programme H2020-INFRADEV-2017-1 under grant agreement 777442. ## References
* Aarons (1982) Aarons, J., 1982. Global morphology of ionospheric scintillations. _Proceedings of the IEEE_, 70(4), 360–378. https://doi.org/10.1109/PROC.1982.12314.
* Astropy Collaboration et al. (2013) Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, P. Greenfield, M. Droettboom, et al., 2013. Astropy: A community Python package for astronomy. _A&A_, 558, A33. https://doi.org/10.1051/0004-6361/201322068, 1307.6212.
* Basu et al. (1998) Basu, S., E. Weber, T. Bullett, M. Keskinen, E. MacKenzie, P. Doherty, R. Sheehan, H. Kuenzler, P. Ning, and J. Bongiolatti, 1998. Characteristics of plasma structuring in the cusp/cleft region at Svalbard. _Radio Science_, 33(6), 1885–1899. https://doi.org/10.1029/98RS01597.
* Cordes et al. (2006) Cordes, J. M., B. J. Rickett, D. R. Stinebring, and W. A. Coles, 2006. Theory of parabolic arcs in interstellar scintillation spectra. _The Astrophysical Journal_, 637(1), 346. https://doi.org/10.1086/498332.
* de Gasperin et al. (2018) de Gasperin, F., M. Mevius, D. Rafferty, H. Intema, and R. Fallows, 2018. The effect of the ionosphere on ultra-low-frequency radio-interferometric observations. _Astronomy & Astrophysics_, 615, A179. https://doi.org/10.1051/0004-6361/201833012.
* Emardson et al. (2013) Emardson, R., P. Jarlemark, J. Johansson, and S. Schäfer, 2013. Spatial variability in the ionosphere measured with GNSS networks. _Radio Sci._, 48, 646–652. https://doi.org/10.1002/2013RS005152.
* Fallows et al. (2014) Fallows, R., W. Coles, D. McKay-Bukowski, J. Vierinen, I. Virtanen, et al., 2014. Broadband meter-wavelength observations of ionospheric scintillation. _Journal of Geophysical Research: Space Physics_. https://doi.org/10.1002/2014JA020406.
* Hapgood (2017) Hapgood, M., 2017. Satellite navigation—Amazing technology but insidious risk: Why everyone needs to understand space weather. _Space Weather_, 15(4), 545–548. https://doi.org/10.1002/2017SW001638.
* Hernández-Pajares et al. (2006) Hernández-Pajares, M., J. M. Juan, and J. Sanz, 2006. Medium-scale travelling ionospheric disturbances affecting GPS measurements: Spatial and temporal analysis. _J. Geophys. Res._, 111, A07S11. https://doi.org/10.1029/2005JA011474.
* Hernández-Pajares et al. (2012) Hernández-Pajares, M., J. M. Juan, J. Sanz, and A. Aragón-Àngel, 2012. Propagation of medium scale travelling ionospheric disturbances at different latitudes and solar cycle conditions. _Radio Sci._, 47, RS0K05. https://doi.org/10.1029/2011RS004951.
* Kelley (2009) Kelley, M. C., 2009. The Earth’s ionosphere: plasma physics and electrodynamics, vol. 96 of _International Geophysics Series_. Elsevier, 2 edn.
* Kelley (2011) Kelley, M. C., 2011. On the origin of mesoscale TIDs at midlatitudes. _Ann. Geophys._, 29, 361–366. https://doi.org/10.5194/angeo-29-361-2011.
* Knepp and Nickisch (2009) Knepp, D. L., and L. Nickisch, 2009. Multiple phase screen calculation of wide bandwidth propagation. _Radio Science_, 44(1). https://doi.org/10.1029/2008RS004054.
* McKay-Bukowski et al. (2014) McKay-Bukowski, D., J.-P. Vierinen, I. Virtanen, R. Fallows, M. Postila, et al., 2014. KAIRA: the Kilpisjärvi Atmospheric Imaging Receiver Array – system overview and first results. _IEEE Transactions on Geoscience and Remote Sensing_, 53(3), 1440–1451. https://doi.org/10.1109/TGRS.2014.2342252.
* Price-Whelan et al. (2018) Price-Whelan, A. M., B. M. Sipőcz, H. M. Günther, P. L. Lim, S. M. Crawford, et al., 2018. The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package. _AJ_, 156, 123. https://doi.org/10.3847/1538-3881/aabc4f.
* Saito and Fukao (1998) Saito, A., and S. Fukao, 1998. High resolution mapping of TEC perturbations with the GSI GPS network over Japan. _Geophys. Res. Lett._, 25(16), 3079–3082.
* Stinebring et al. (2001) Stinebring, D., M. McLaughlin, J. Cordes, K. Becker, J. E. Goodman, M. Kramer, J. Sheckard, and C. Smith, 2001. Faint scattering around pulsars: probing the interstellar medium on solar system size scales. _The Astrophysical Journal Letters_, 549(1), L97. https://doi.org/10.1086/319133.
* Tsugawa et al. (2007) Tsugawa, T., Y. Otsuka, A. J. Coster, and A. Saito, 2007. Medium-scale travelling ionospheric disturbances detected with dense and wide TEC maps over North America. _Geophys. Res. Lett._, 34, L22101. https://doi.org/10.1029/2007GL031663.
* Tsugawa et al. (2004) Tsugawa, T., A. Saito, and Y. Otsuka, 2004. A statistical study of large-scale traveling ionospheric disturbances using the GPS network in Japan. _Journal of Geophysical Research: Space Physics_, 109(A6).
* van Haarlem et al. (2013) van Haarlem, M. P., M. W. Wise, A. W. Gunst, G. Heald, J. P. McKean, et al., 2013. LOFAR: The LOw-Frequency ARray. _A&A_, 556, A2. https://doi.org/10.1051/0004-6361/201220873, 1305.3550.
September 16, 2019

# Spin-Triplet Superconductivity in UTe2 and Ferromagnetic Superconductors

Dai Aoki^{1,2} (E-mail: daiaoki@imr.tohoku.ac.jp), Ai Nakamura^{1}, Fuminori Honda^{1}, DeXin Li^{1}, Yoshiya Homma^{1}, Yusei Shimizu^{1}, Yoshiki J. Sato^{1}, Georg Knebel^{2}, Jean-Pascal Brison^{2}, Alexandre Pourret^{2}, Daniel Braithwaite^{2}, Gerard Lapertot^{2}, Qun Niu^{2}, Michal Vališka^{2}, Hisatomo Harima^{3}, and Jacques Flouquet^{2}

1: IMR, Tohoku University, Oarai, Ibaraki 311-1313, Japan
2: University Grenoble Alpes, CEA, IRIG-PHELIQS, F-38000 Grenoble, France
3: Graduate School of Science, Kobe University, Kobe 657-8501, Japan

###### Abstract The spin-triplet state is most likely realized in the uranium ferromagnetic superconductors UGe2, URhGe, and UCoGe. The microscopic coexistence of ferromagnetism and superconductivity means that the Cooper pairs must form under the strong internal field due to the ferromagnetism, leading to a spin-triplet state with equal-spin pairing. The field-reinforced superconductivity, which is observed in all three materials when the ferromagnetic fluctuations are enhanced, is strong evidence for spin-triplet superconductivity. We present here the results of a newly discovered spin-triplet superconductor, UTe2, and compare those with the results of ferromagnetic superconductors. Although no magnetic order is found in UTe2, there are similarities between UTe2 and the ferromagnetic superconductors. For example, the huge upper critical field exceeding the Pauli limit and the field-reentrant superconductivity for $H\parallel b$-axis are observed in UTe2, URhGe and UCoGe. We also show specific heat results on UTe2 for samples of different quality, focusing on the residual density of states in the superconducting phase. Keywords: ferromagnetism, superconductivity, metamagnetism, reentrant superconductivity, spin triplet, specific heat

The coexistence of ferromagnetism and superconductivity attracts much attention because unconventional superconductivity with a spin-triplet state is realized. [1, 2] In general, ferromagnetism and superconductivity are antagonistic because the large internal field due to the ferromagnetism easily destroys the superconducting Cooper pairs in conventional superconductors. Thus it is natural to consider that spin-triplet superconductivity with equal-spin pairing is realized in ferromagnetic superconductors. The microscopic coexistence of ferromagnetism and superconductivity is established only in uranium compounds so far, namely UGe2 [3], URhGe [4] and UCoGe [5]. All of these materials have fairly small ordered moments ($1$–$0.05\,\mu_{\rm B}/{\rm U}$) in the ferromagnetic phase compared to that for the U free ion ($\sim 3\,\mu_{\rm B}$). Thus the $5f$ electrons in these compounds are believed to be itinerant to a first approximation, although the magnetic anisotropy is rather strong, indicating Ising properties. The superconductivity occurs well below the Curie temperature, $T_{\rm Curie}$, in the ferromagnetic state. One of the highlights in ferromagnetic superconductors is the field-reentrant or field-reinforced superconductivity. In URhGe, for example, the reentrant superconductivity appears when the field is applied along the hard-magnetization $b$-axis in the orthorhombic structure. [6]
While the transition temperature $T_{\rm sc}$ is $0.25\,{\rm K}$ at zero field, the reentrant superconducting phase has a maximum $T_{\rm sc}$ of $0.4\,{\rm K}$ at $H_{\rm R}\sim 12\,{\rm T}$, indicating that the superconductivity is indeed reinforced under magnetic field. A similar field-reinforced superconductivity is also observed in UCoGe. [7] Recently, a new spin-triplet superconductor, UTe2, was discovered [8, 9]. UTe2 has the body-centered orthorhombic structure with the space group $Immm$ (#71, $D_{2h}^{25}$). The first nearest-neighbor U–U distance is $3.78\,{\rm\AA}$, which is larger than the so-called Hill limit ($\sim 3.5\,{\rm\AA}$). Although no magnetic order was found down to $0.025\,{\rm K}$, UTe2 is believed to be on the verge of ferromagnetic order. In fact, ferromagnetic fluctuations were observed in $\mu$SR [10] and NMR experiments [11]. By substituting Te with Se, ferromagnetic order appears at $69$ and $33\,{\rm K}$ in UTe0.72Se1.28 and UTe0.24Se1.76, respectively, [12] although the space group of these materials is $Pnma$, which is different from that of UTe2. The superconducting transition occurs at $1.6\,{\rm K}$ with a sharp and large specific heat jump. The large residual density of states, nearly $50\,{\%}$, may suggest the possibility of spontaneous spin polarization and “partially gapped” superconductivity similar to the non-unitary A1 state. However, it should be stressed that a direct transition from the paramagnetic state to the non-unitary state at zero field is forbidden by symmetry in this orthorhombic system. [13] Thus, a hidden feature of the superconducting state is expected. No other transition within the superconducting state has been observed so far, at least at zero field. Pressure studies are definitely important to solve this problem. One of the strongest pieces of support for spin-triplet superconductivity in UTe2 is the huge upper critical field, $H_{\rm c2}$. For all field directions, $H_{\rm c2}$ far exceeds the Pauli limit ($\sim 3\,{\rm T}$) expected from the weak-coupling BCS theory. The values of $H_{\rm c2}$ at $0\,{\rm K}$ are $7$ and $11\,{\rm T}$ for $H\parallel a$ and $c$-axis, respectively. For $H\parallel b$-axis, a spectacular field-reentrant superconductivity is observed. [14, 15] The transition temperature monotonically decreases with field down to $0.4\,{\rm K}$ at $16\,{\rm T}$, then increases with field up to $0.9\,{\rm K}$ at $35\,{\rm T}$. A first-order metamagnetic transition occurs at $H_{\rm m}=35\,{\rm T}$ [16, 17], and the superconductivity abruptly collapses above $H_{\rm m}$. The metamagnetic transition at $H_{\rm m}$ is connected to the so-called $T_{\chi,\rm max}$ at low fields, where the magnetic susceptibility shows a broad maximum for $H\parallel b$-axis. The magnetic susceptibility shows Curie-Weiss behavior at high temperatures. At low temperatures, an anisotropic susceptibility is observed, with $\chi_{a}>\chi_{c}>\chi_{b}$, which is consistent with the anisotropy of $H_{\rm c2}$. To study the superconducting properties of UTe2 in more detail, we have grown single crystals of different quality and measured the specific heat at low temperatures. We compare the ($H,T$) phase diagrams for $H\parallel b$-axis in UTe2, URhGe and UCoGe. Figure 1: (Color online) Photographs of UTe2 single crystals grown by (a) the chemical vapor transport method and (b) the Te-flux method.
(c) Laue photograph of a UTe2 single crystal along the $c$-axis. (d) U-Te phase diagram cited from Ref. 18.

Single crystals of UTe2 were grown using the chemical vapor transport method. The starting materials of U and Te were put into a quartz ampoule with the atomic ratio U : Te = 2 : 3, together with iodine as the transport agent at a density of $3\,{\rm mg/cm}^{3}$ of the inner volume of the quartz ampoule. The ampoule was slowly heated and was kept at a temperature gradient of $1060\,^{\circ}{\rm C}$/$1000\,^{\circ}{\rm C}$ for 10 days. Many single crystals were obtained at the lower temperature side, as shown in Fig. 1(a). The obtained single crystals were checked by single crystal X-ray analysis. The lattice parameters and the atomic coordinates are in good agreement with the values in the previous report. [19] The single crystals were oriented using the Laue photograph, as shown in Fig. 1(c). A clear superconducting transition was observed in resistivity and specific heat. The highest residual resistivity ratio (RRR) is about 40. Note that we also obtained single crystals following the previous recipe, that is, a stoichiometric amount of starting materials and a lower temperature gradient of $950\,^{\circ}{\rm C}$/$850\,^{\circ}{\rm C}$. The single crystals were grown at the high temperature side in this case. However, the quality of these single crystals is lower, with a low RRR ($\sim 2$–$3$), and no superconductivity was observed down to $0.1\,{\rm K}$. As shown in Fig. 1(d), UTe2 is an incongruently melting compound in the U-Te phase diagram, and single crystals of UTe2 can be grown using the Te-flux method as well. Off-stoichiometric amounts of U and Te ($22$ and $78\,{\rm at\%}$, respectively) were put into an alumina crucible, which was sealed in a Ta-tube under an Ar atmosphere. The Ta-tube was then sealed again in a quartz ampoule. The quartz ampoule was slowly heated up to $1050\,^{\circ}{\rm C}$ and was cooled down to $960\,^{\circ}{\rm C}$. The Te-flux was removed at $960\,^{\circ}{\rm C}$ in a centrifuge. The obtained single crystals were large, as shown in Fig. 1(b). However, the residual resistivity ratio is not very large (${\rm RRR}\sim 3$). Although superconductivity was confirmed by the resistivity, it was not a bulk property, as no anomaly was detected in the specific heat. Hereafter, we show the results for single crystals obtained by the chemical vapor transport method with off-stoichiometric amounts of starting materials and the high temperature gradient.

Figure 2(a) shows the temperature dependence of the electronic specific heat in UTe2. Part of the data is replotted from Refs. 8, 20. The data of sample #1 show the highest $T_{\rm sc}$ with a sharp and large jump at $T_{\rm sc}$, indicating the highest quality sample. The value of $T_{\rm sc}$ defined by the entropy balance in #1 is $1.65\,{\rm K}$, and the residual $\gamma$-value, $\gamma_{0}$, which is extrapolated from the fit $C_{\rm e}/T=\gamma_{0}+\alpha T^{2}$ at low temperatures assuming a point node gap, is $\gamma_{0}=52\,{\rm mJ\,K^{-2}mol^{-1}}$. The residual $\gamma$-value is equal to $44\,{\%}$ of the $\gamma$-value in the normal state. Lower quality samples show a lower $T_{\rm sc}$ and a higher residual $\gamma$-value. For example, $T_{\rm sc}$ and $\gamma_{0}$ in sample #4 are $1.23\,{\rm K}$ and $89\,{\rm mJ\,K^{-2}mol^{-1}}$, respectively.
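As an illustration of this extraction, the following minimal sketch fits synthetic low-temperature data to $C_{\rm e}/T=\gamma_{0}+\alpha T^{2}$; the data values and function names are our own illustrative assumptions, not the measured curves of Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Low-temperature form of the electronic specific heat assuming a point-node
# gap: C_e/T = gamma_0 + alpha * T^2.
def ce_over_t(T, gamma0, alpha):
    return gamma0 + alpha * T**2

# Synthetic data mimicking a sample with gamma_0 = 52 mJ K^-2 mol^-1
# (illustrative values, not the measured ones).
rng = np.random.default_rng(0)
T = np.linspace(0.1, 0.5, 20)                            # K, well below T_sc
y = 52.0 + 180.0 * T**2 + rng.normal(0.0, 1.0, T.size)   # mJ K^-2 mol^-1

popt, pcov = curve_fit(ce_over_t, T, y, p0=(50.0, 100.0))
gamma0, alpha = popt
print(f"gamma_0 = {gamma0:.1f} mJ K^-2 mol^-1, alpha = {alpha:.0f} mJ K^-4 mol^-1")
```

The extrapolated intercept at $T=0$ is the residual $\gamma$-value quoted in the text.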
In sample #5, no superconductivity was observed down to $0.4\,{\rm K}$ in the specific heat, and this is confirmed by a resistivity measurement down to $0.1\,{\rm K}$. Figure 2(b) shows $T_{\rm sc}$ as a function of the residual $\gamma$-value normalized by the $\gamma$-value in the normal state. It is clear that $T_{\rm sc}$ decreases with increasing residual $\gamma$-value. It is known that the decrease of $T_{\rm sc}$ can be described by the Abrikosov-Gor’kov pair-breaking theory. On the basis of this model, the relation between $T_{\rm sc}$ and the residual density of states has been studied theoretically [21] and experimentally [22] in high $T_{\rm c}$ cuprates and heavy fermion systems, where a rapid increase of the residual density of states, compared to the decrease of $T_{\rm sc}$, is reported. This can be explained by unitarity scattering in unconventional superconductors. The present result in Fig. 2(b) supports unconventional superconductivity in UTe2. An important question is whether the residual density of states would exist in an ideal single crystal without impurities. In that case, the density of states would be partially gapped, and the so-called A1 state should be realized, where time reversal symmetry must be broken at zero field. There is, however, no experimental evidence for the breaking of time reversal symmetry in UTe2.

Figure 2: (Color online) (a) Electronic specific heat in the form of $C_{\rm e}/T$ vs $T$ of UTe2 for different samples. The phonon contribution is subtracted using a fit at high temperatures above $T_{\rm sc}$. Part of the data is replotted from Refs. 8, 20. (b) $T_{\rm sc}$ as a function of the normalized residual $\gamma$-value for samples of different quality.

Next we show in Fig. 3 the ($H,T$) phase diagrams of UTe2 and of the two ferromagnetic superconductors URhGe and UCoGe for $H\parallel b$-axis, corresponding to the hard-magnetization axis. The field-reentrant or field-reinforced superconductivity is observed both in URhGe and in UCoGe. The enhancement of superconductivity is clearly related to the suppression of $T_{\rm Curie}$, where ferromagnetic instabilities are realized. In URhGe, the suppression of $T_{\rm Curie}$ leads to a spin reorientation at $H_{\rm R}$ in field sweeps at low temperatures. The slope of the magnetization curve for $H\parallel b$-axis is larger than that for the $c$-axis. The moment gradually tilts from the $c$- to the $b$-axis, and finally it re-orients along the $b$-axis at $H_{\rm R}\sim 12\,{\rm T}$. The $\gamma$-value is enhanced with increasing field, taking a maximum at $H_{\rm R}$. In NMR experiments, the spin-spin relaxation rate $1/T_{2}$ shows diverging behavior around $H_{\rm R}$, indicating a strong enhancement of the longitudinal ferromagnetic fluctuations [23]. The 2nd order transition at $T_{\rm Curie}$ at low fields changes into a weak 1st order transition at $H_{\rm R}$ through a tricritical point (TCP). The reentrant superconductivity appears with the maximum $T_{\rm sc}=0.4\,{\rm K}$ exactly at $H_{\rm R}$. In UCoGe, the suppression of $T_{\rm Curie}$ with field is similar to the case of URhGe. However, a spin reorientation is not observed in the magnetization curve, indicating a stronger Ising character compared to URhGe. The superconductivity shows an “S”-shaped curve, which is also connected to the suppression of $T_{\rm Curie}$. The enhancement of the $\gamma$-value and the development of longitudinal fluctuations are observed in field scans for $H\parallel b$-axis.
In UTe2, the field-reentrant superconductivity is also observed, while the temperature and field ranges are much wider compared to those in the ferromagnetic superconductors URhGe and UCoGe. The reentrant superconductivity is again linked to the metamagnetic transition at $H_{\rm m}$. The clear difference from the ferromagnetic superconductors is that $H_{\rm m}$ at high fields originates from the broad maximum of the magnetic susceptibility at $T_{\chi,\rm max}$, instead of from $T_{\rm Curie}$. In heavy fermion systems, it is well known that $H_{\rm m}$ scales with $T_{\chi,\rm max}$ [24]. The value of $H_{\rm m}=35\,{\rm T}$ in UTe2 is consistent with $T_{\chi,\rm max}=35\,{\rm K}$. The mass enhancement around $H_{\rm m}$ is detected in the resistivity $A$ coefficient [16] and in the $\gamma$-value, obtained from the Maxwell relation applied to the magnetization [17] and from direct specific heat measurements [25]. The crossover at $T_{\chi,\rm max}$ changes into the 1st order transition at $H_{\rm m}$ through a critical end point (CEP). It should be noted that the reentrant superconductivity is abruptly suppressed above $H_{\rm m}$ in UTe2. On the other hand, the reentrant superconductivity in URhGe still survives in a small field range above $H_{\rm m}$. This is probably due to the abrupt change of $T_{\rm sc}$ in UTe2, as inferred from the sharp increase of the magnetoresistance at $H_{\rm m}$, implying a drastic change of the electronic state at $H_{\rm m}$.

Figure 3: (Color online) ($H,T$) phase diagrams for $H\parallel b$-axis in URhGe, UCoGe and UTe2. The data are taken from Refs. 1, 14, 16, 17.

In summary, we presented the single crystal growth of the novel spin-triplet superconductor UTe2, and specific heat results for samples of different quality. Higher quality samples show a higher $T_{\rm sc}$ and a lower residual density of states. The rapid increase of the residual density of states, compared to the decrease of $T_{\rm sc}$, supports unconventional superconductivity in UTe2. The unusual field-reentrant superconductivity is a common feature of the ferromagnetic superconductors and UTe2. Precise high field experiments from the microscopic point of view and pressure experiments using high quality single crystals are desired for future studies.

## Acknowledgements

We thank Y. Tokunaga, S. Ikeda, Y. Ōnuki, K. Ishida, K. Izawa, K. Miyake, V. Mineev, S. Ran, J. Ishizuka, Y. Yanase, K. Machida and C. Paulsen for fruitful discussion. This work was supported by ERC starting grant (NewHeavyFermion), KAKENHI (JP15H05884, JP15H05882, JP15K21732, JP16H04006, JP15H05745, JP19H00646), and ICC-IMR.

## References

* [1] D. Aoki, K. Ishida, and J. Flouquet: J. Phys. Soc. Jpn. 88 (2019) 022001.
* [2] D. Aoki and J. Flouquet: J. Phys. Soc. Jpn. 81 (2012) 011003.
* [3] S. S. Saxena, P. Agarwal, K. Ahilan, F. M. Grosche, R. K. W. Haselwimmer, M. J. Steiner, E. Pugh, I. R. Walker, S. R. Julian, P. Monthoux, G. G. Lonzarich, A. Huxley, I. Sheikin, D. Braithwaite, and J. Flouquet: Nature 406 (2000) 587.
* [4] D. Aoki, A. Huxley, E. Ressouche, D. Braithwaite, J. Flouquet, J.-P. Brison, E. Lhotel, and C. Paulsen: Nature 413 (2001) 613.
* [5] N. T. Huy, A. Gasparini, D. E. de Nijs, Y. Huang, J. C. P. Klaasse, T. Gortenmulder, A. de Visser, A. Hamann, T. Görlach, and H. v. Löhneysen: Phys. Rev. Lett. 99 (2007) 067006.
* [6] F. Lévy, I. Sheikin, B. Grenier, and A. D. Huxley: Science 309 (2005) 1343.
* [7] D. Aoki, T. D. Matsuda, V. Taufour, E. Hassinger, G. Knebel, and J. Flouquet: J. Phys. Soc. Jpn.
78 (2009) 113709.
* [8] S. Ran, C. Eckberg, Q.-P. Ding, Y. Furukawa, T. Metz, S. R. Saha, I.-L. Liu, M. Zic, H. Kim, J. Paglione, and N. P. Butch: Science 365 (2019) 684.
* [9] D. Aoki, A. Nakamura, F. Honda, D. Li, Y. Homma, Y. Shimizu, Y. J. Sato, G. Knebel, J.-P. Brison, A. Pourret, D. Braithwaite, G. Lapertot, Q. Niu, M. Vališka, H. Harima, and J. Flouquet: J. Phys. Soc. Jpn. 88 (2019) 043702.
* [10] S. Sundar, S. Gheidi, K. Akintola, A. M. Côté, S. R. Dunsiger, S. Ran, N. P. Butch, S. R. Saha, J. Paglione, and J. E. Sonier: arXiv:1905.06901.
* [11] Y. Tokunaga, H. Sakai, S. Kambe, T. Hattori, N. Higa, G. Nakamine, S. Kitagawa, K. Ishida, A. Nakamura, Y. Shimizu, Y. Homma, D. Li, F. Honda, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 073701.
* [12] H. Noël, M. Potel, R. Troc, and L. Shlyk: J. Solid State Chem. 126 (1996) 22.
* [13] V. Mineev: private communication.
* [14] G. Knebel, W. Knafo, A. Pourret, Q. Niu, M. Vališka, D. Braithwaite, G. Lapertot, J.-P. Brison, S. Mishra, I. Sheikin, G. Seyfarth, D. Aoki, and J. Flouquet: J. Phys. Soc. Jpn. 88 (2019) 063707.
* [15] S. Ran, I.-L. Liu, Y. S. Eo, D. J. Campbell, P. Neves, W. T. Fuhrman, S. R. Saha, C. Eckberg, H. Kim, J. Paglione, D. Graf, J. Singleton, and N. P. Butch: arXiv:1905.04343.
* [16] W. Knafo, M. Vališka, D. Braithwaite, G. Lapertot, G. Knebel, A. Pourret, J.-P. Brison, J. Flouquet, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 063705.
* [17] A. Miyake, Y. Shimizu, Y. J. Sato, D. Li, A. Nakamura, Y. Homma, F. Honda, J. Flouquet, M. Tokunaga, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 063706.
* [18] Y. Xu, M. Yamazaki, and P. Villars: Jpn. J. Appl. Phys. 50 (2011) 11RH02.
* [19] S. Ikeda, H. Sakai, D. Aoki, Y. Homma, E. Yamamoto, A. Nakamura, Y. Shiokawa, Y. Haga, and Y. Ōnuki: J. Phys. Soc. Jpn. Suppl. 75 (2006) 116.
* [20] T. Metz, S. Bae, S. Ran, I.-L. Liu, Y. S. Eo, W. T. Fuhrman, D. F. Agterberg, S. Anlage, N. P. Butch, and J. Paglione: arXiv:1908.01069.
* [21] A. Okada and K. Miyake: J. Phys. Soc. Jpn. 80 (2011) 084708.
* [22] Y. Kitaoka, K. Ishida, and K. Asayama: J. Phys. Soc. Jpn. 63 (1994) 2052.
* [23] Y. Tokunaga, D. Aoki, H. Mayaffre, S. Krämer, M.-H. Julien, C. Berthier, M. Horvatić, H. Sakai, S. Kambe, and S. Araki: Phys. Rev. Lett. 114 (2015) 216401.
* [24] D. Aoki, W. Knafo, and I. Sheikin: C. R. Physique 14 (2013) 53.
* [25] S. Imajo, Y. Kohama, A. Miyake, C. Dong, J. Flouquet, K. Kindo, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 083705.
2024-09-04T02:54:58.365392
2020-01-06T12:26:18
2003.04100
{ "authors": "Lixin Ge, Xi Shi, Zijun Xu, and Ke Gong", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26115", "submitter": "Lixin Ge", "url": "https://arxiv.org/abs/2003.04100" }
arxiv-papers
# Tunable Casimir equilibria with phase change materials: from quantum trapping to its release

Lixin Ge<EMAIL_ADDRESS>School of Physics and Electronic Engineering, Xinyang Normal University, Xinyang 464000, China Xi Shi Department of Physics, Shanghai Normal University, Shanghai, 200234, China Zijun Xu School of Physics and Electronic Engineering, Xinyang Normal University, Xinyang 464000, China Ke Gong School of Physics and Electronic Engineering, Xinyang Normal University, Xinyang 464000, China

###### Abstract

A stable suspension of nanoscale particles due to the Casimir force is of great interest for many applications such as sensing and non-contact nano-machines. However, the suspension properties are difficult to change once the devices are fabricated. Vanadium dioxide (VO2) is a phase change material, which undergoes a transition from a low-temperature insulating phase to a high-temperature metallic phase around a temperature of 340 K. In this work, we study Casimir forces between a nanoplate (gold or Teflon) and a layered structure containing a VO2 film. It is found that stable Casimir suspensions of nanoplates can be realized in a liquid environment, and the equilibrium distances are determined not only by the layer thicknesses but also by the matter phases of VO2. Under proper designs, a switch from quantum trapping of the gold nanoplate (“on” state) to its release (“off” state), as a result of the metal-to-insulator transition of VO2, is revealed. On the other hand, the quantum trapping and release of a Teflon nanoplate are found under the insulator-to-metal transition of VO2. Our findings offer the possibility of designing switchable devices for applications in micro- and nano-electromechanical systems.

## I Introduction

Micro- and nano-electromechanical systems (MEMS and NEMS), which integrate electrical and mechanical functionality on the micro- and nano-scales, have attracted enormous attention Lyshevski (2018); Craighead (2000). Thanks to their small sizes, MEMS and NEMS exhibit low mass, high mechanical resonance frequencies and quantum effects, leading to a broad range of applications such as biological/chemical detection Eom et al. (2011), accelerometers Xu et al. (2011) and micro/nanomachines Wang (2013). One major problem in MEMS and NEMS is stiction, i.e., the collapse and permanent adhesion of components caused by attractive Casimir forces Buks and Roukes (2001); Chan et al. (2001). The Casimir force is a macroscopic quantum effect which arises from quantum fluctuations of the electromagnetic field Casimir (1948). In most cases, two neutral, parallel plates consisting of the same materials attract each other, and the magnitude of the attraction depends on several parameters such as the separation, geometric thicknesses, finite conductivities and temperature (see, e.g., the review Klimchitskaya et al. (2009) and Refs. Yampol’skii et al. (2008, 2010)). Therefore, repulsive Casimir forces are highly desirable for non-contact and low-friction MEMS and NEMS. Repulsive Casimir forces have been intensively studied in many systems Woods et al. (2016) including liquid-separated environments Munday et al. (2009); van Zwol and Palasantzas (2010); Phan and Viet (2011); Dou et al. (2014), meta-materials Rosa et al. (2008); Zhao et al. (2009, 2011); Song et al. (2018), topological insulators Grushin and Cortijo (2011); Chen and Wan (2012); Nie et al. (2013) and specific geometries Tang et al. (2017); Levin et al. (2010).
In addition, the concept of Casimir equilibria was also investigated, using enclosed geometries Rodriguez et al. (2008); Rahi and Zaheer (2010) and dispersive materials Rodriguez et al. (2010a). Lately, stable Casimir equilibria of nanoplates above a Teflon-coated gold substrate were reported by Zhao et al. (2019). However, the Casimir equilibria of previous studies were mainly realized in passive systems. Once the devices are fabricated, the trapping properties are difficult to change. Thus, tunable trapping, or even switching from trapping to release by external stimuli (e.g., heating, electric fields or optical waves), is highly desired in MEMS and NEMS.

In order to actively modulate the Casimir effect, one straightforward way is to change the dielectric properties of the materials by external means Torricelli et al. (2012); Sedighi et al. (2013); Torricelli et al. (2010). Vanadium dioxide (VO2) Shao et al. (2018); Zylbersztejn and Mott (1975) is a phase change material (PCM), which undergoes a transition from a low-temperature insulating phase to a high-temperature metallic phase at the critical temperature 340 K. The phase transition of VO2 is accompanied by a structural transformation from the monoclinic phase to the tetragonal one. Meanwhile, the dielectric function of VO2 changes dramatically during the phase transition, leading to many interesting applications Wu et al. (2017); Liu et al. (2017); Kats et al. (2012); van Zwol et al. (2012). In general, the phase transition of VO2 can be induced by changing the temperature of the system. Alternatively, the phase transition can be driven by optical lasers Cavalleri et al. (2001); Rini et al. (2008) or electrical gating Qazilbash et al. (2008); Nakano et al. (2012) on a sub-picosecond timescale. Recently, VO2 has been employed to study the tunable Casimir effect in vacuum Galkina et al. (2009); Pirozhenko and Lambrecht (2008); Castillo-Garza et al. (2007). For large separations (e.g., $>$1 $\mu$m), the contrast of Casimir forces due to the phase transition is quite large (e.g., over 2 times for two semi-infinite plates of VO2; this value could be even larger for the case of finite thickness Galkina et al. (2009); Pirozhenko and Lambrecht (2008)). When the separation is small (e.g., $\sim$100 nm), however, the modulation of Casimir forces owing to the phase transition and the finite thickness decreases greatly Pirozhenko and Lambrecht (2008); Castillo-Garza et al. (2007). Nonetheless, the Casimir forces are always attractive, and only magnitude modulations have been reported in vacuum-separated configurations. The influence of the phase transition of VO2 on the sign modulation of Casimir forces (e.g., from attraction to repulsion) is much less explored. In a liquid environment, sign modulation and related phenomena such as tunable Casimir equilibria are expected, based on the phase transition of VO2.

Here, the Casimir forces between a nanoplate and a layered structure separated by a liquid are investigated. The layered structure consists of two kinds of materials, i.e., vanadium dioxide (VO2) and Teflon. It is found that stable Casimir equilibria of gold nanoplates can be realized when a VO2 film is buried under a semi-infinite Teflon plate. The properties of the Casimir equilibria are determined not only by the layer thicknesses but also by the matter phases of VO2. For thick-film VO2, Casimir equilibria and quantum traps can be achieved for both the metallic and insulating phases.
On the other hand, a switch from quantum trapping of the gold nanoplate (“on” state) to its release (“off” state) can be triggered by the metal-to-insulator phase transition when the VO2 film is thin (e.g., 20 nm). Finally, stable suspensions of Teflon nanoplates are also proposed with a complementary design, where the Teflon substrate is coated by a VO2 film. Unlike the case of gold nanoplates, the quantum trapping of Teflon nanoplates and its release correspond to the insulating and metallic phases of VO2. Moreover, the switching phenomena can be realized with a VO2 thickness of only a few nanometers.

Figure 1: (color online) (a) Schematic view of a gold nanoplate suspended in a liquid environment. (b) The permittivity of different materials (gold, VO2, bromobenzene and Teflon) as a function of imaginary frequency.

## II Theoretical models

The system in this work is schematically shown in Fig. 1(a), where a gold nanoplate with thickness $L_{g}$ is suspended in a liquid of bromobenzene. The separation between the nanoplate and the substrate is $d$. The substrate is composed of a VO2 film buried under a semi-infinite plate of Teflon. The thicknesses of the top-layer Teflon and VO2 are denoted as $L_{T}$ and $L_{\mathrm{V}}$, respectively. The in-plane dimension of the gold nanoplate is much larger than $L_{g}$ and $d$, so it is treated as a slab in our calculations. The Casimir force is calculated by $F_{c}=-\partial E_{c}(d)/\partial d$, where $E_{c}(d)$ is the Casimir energy between the gold nanoplate and the substrate, having the form Nie et al. (2013); Zhao et al. (2019)

$E_{c}(d)=A\hbar\int_{0}^{\infty}\frac{d\xi}{2\pi}\int\frac{d^{2}\mathbf{k_{\|}}}{(2\pi)^{2}}\log\det\left[1-\mathbf{R_{1}}\cdot\mathbf{R_{2}}e^{-2k_{3}d}\right],$ (1)

where $\hbar$ is the reduced Planck constant, $A$ is the in-plane area, $\mathbf{k_{\parallel}}$ is the parallel wavevector, $k_{3}=\sqrt{k_{\parallel}^{2}+\varepsilon_{liq}(i\xi)\xi^{2}/c^{2}}$ is the vertical wavevector, $c$ is the speed of light in vacuum, $\varepsilon_{liq}(i\xi)$ is the permittivity of the intervening liquid evaluated at imaginary frequency $\omega=i\xi$, and $\mathbf{R}_{1,2}$ is the $2\times 2$ reflection matrix for the layered structures, having the form

$\mathbf{R_{j}}=\left(\begin{array}[]{cc}r_{j}^{s}&0\\\ 0&r_{j}^{p}\end{array}\right),$ (2)

where $r_{j}$ with $j$=1 and $j$=2 are the reflection coefficients for the upper and lower layered structures, and the superscripts $s$ and $p$ correspond to the polarizations of transverse electric ($\mathbf{TE}$) and transverse magnetic ($\mathbf{TM}$) modes, respectively. Note that Eq. (1) is evaluated at temperature $T=0$ K, which is a good approximation at finite temperatures as long as the separation $d$ is smaller than 1 $\mu m$ Milton (2004). For a nanoplate suspended in a liquid, the reflection coefficients can be given analytically as follows Zhao et al. (2011)

$r^{\alpha}=\frac{r_{0,j}^{\alpha}+r_{j,0}^{\alpha}e^{-2K_{j}L_{j}}}{1+r_{0,j}^{\alpha}r_{j,0}^{\alpha}e^{-2K_{j}L_{j}}},$ (3)

where $\alpha=s$ and $p$, $L_{j}$ is the thickness of the nanoplate, $K_{j}=\sqrt{k_{\parallel}^{2}+\varepsilon_{j}(i\xi)\xi^{2}/c^{2}}$ is the vertical wavevector, and $\varepsilon_{j}(i\xi)$ is the permittivity of the nanoplate. The subscripts of $r_{m,n}^{\alpha}$ indicate that the light is incident from medium $m$ onto medium $n$ (0 denotes the liquid). Alternatively, the reflection coefficients for layered structures can be calculated by a transfer matrix method.
The general form is given as $r=M_{21}/M_{11}$, where $M_{21}$ and $M_{11}$ are the elements of the $M$ matrix Zhan et al. (2013). The $M$ matrix is the product of transmission matrices across the different interfaces and propagation matrices within the different layers. Considering an arbitrary $N$-layer system, the $M$-matrix is given as:

$M=D_{0,1}P(L_{1})D_{1,2}P(L_{2})...D_{N-1,N}P(L_{N})D_{N,N+1},$ (4)

where the transmission matrix $D_{j,j+1}$ is given as:

$D_{j,j+1}=\frac{1}{2}\left[\begin{array}[]{cc}1+\eta&1-\eta\\\ 1-\eta&1+\eta\end{array}\right],$ (5)

where $\eta=\varepsilon_{j}(i\xi)K_{j+1}/(\varepsilon_{j+1}(i\xi)K_{j})$ for p-polarization and $\eta=K_{j+1}/K_{j}$ for s-polarization. The propagation matrix in the $j$-th layer (for both $s$ and $p$ polarizations) is written as:

$P(L_{j})=\left[\begin{array}[]{cc}e^{K_{j}L_{j}}&0\\\ 0&e^{-K_{j}L_{j}}\end{array}\right].$ (6)

For example, we have $N=2$ for the multilayered substrate in Fig. 1. The $M$ matrix is given by $M=D_{0,1}P(L_{1})D_{1,2}P(L_{2})D_{2,3}$, where the subscripts 0, 1, 2 and 3 represent the media of liquid, Teflon, VO2 and Teflon (from top to bottom); the thicknesses are $L_{1}=L_{T}$ and $L_{2}=L_{V}$.

## III Results and discussions

Figure 1(b) shows the permittivity of the different materials; the models and parameters used are given in the Appendixes. The dielectric function of VO2 changes dramatically with temperature. For temperature $T>T_{c}$, VO2 is in the metallic phase and it acts as a poor metal. For $T<T_{c}$, it is in the insulating phase (also called the semiconducting phase), and the corresponding dielectric function nearly matches that of intrinsic silicon at low frequency Pirozhenko and Lambrecht (2008). To create repulsive Casimir forces between two dissimilar plates separated by a liquid, the permittivities should satisfy $\varepsilon_{1}(i\xi)>\varepsilon_{liq}(i\xi)>\varepsilon_{2}(i\xi)$ over a vast range of frequency Munday et al. (2009). Clearly, the dielectric functions of gold and VO2 (either metallic or insulating phase) are larger than that of bromobenzene over a wide range of frequency. Therefore, the Casimir force is always attractive for the layered structure gold/bromobenzene/VO2. The Casimir force for the structure gold/bromobenzene/Teflon, by contrast, is repulsive. Nonetheless, Casimir equilibria cannot be found for the above two layered structures.

Figure 2: (color online) Casimir pressure for different thicknesses of VO2, where the thicknesses $L_{T}$=45 nm and $L_{g}$=40 nm are fixed. (a) Thick films. The solid and dashed lines represent the pressure for the metallic and insulating phases of VO2, respectively. (b) Thin films. The positive (negative) sign of the pressure corresponds to the repulsive (attractive) force.

### III.1 Tunable Casimir equilibria for gold nanoplates

Now we consider the Casimir forces when the substrate is composed of a VO2 film and Teflon (see Fig. 1(a)). The Casimir pressure ($P_{c}=F_{c}/A$) for thick films of VO2 is given in Fig. 2(a). The results show that the curves are almost identical for $L_{\mathrm{V}}$=200, 500 and 1000 nm, indicating the weak impact of the thickness for thick-film configurations. The pressure is repulsive at small separations (e.g., $d<60$ nm), making the nanoplate stay away from the substrate. As the separation increases further, Casimir equilibria (zero pressure) occur, and quantum traps can be realized for both the metallic (solid lines) and insulating phases (dashed lines).
In addition, the equilibrium distance $d_{c}$ is shifted by the phase transition of VO2. On the other hand, the thin-film thickness and the phase transition of VO2 can play an important role in the Casimir pressure, as shown in Fig. 2(b). For thicknesses $L_{\mathrm{V}}$=10 and 20 nm, quantum traps can be realized for the metallic phase, whereas no trap is found for the insulating phase. Under such configurations, a switch from quantum trapping of the nanoplate (“on” state) to its release (“off” state) can be triggered by the metal-insulator transition of VO2. However, quantum trapping occurs for both the metallic and insulating phases as the thickness $L_{\mathrm{V}}$ increases to 30 nm, and the “off” state disappears. Compared with the vacuum-separated configuration Castillo-Garza et al. (2007), in a liquid environment not only the magnitude of the Casimir force can be modified, but also its sign can be switched (e.g., from attraction to repulsion for $d$=100 nm, $L_{V}$=30 nm), due to the phase transition of VO2.

Figure 3: (color online) Casimir pressure contributed from different frequencies and different parallel wavevectors. (a) and (b) $d$=30 nm; (c) and (d) $d$=85 nm (close to the critical separation); (e) and (f) $d$=150 nm. (a), (c) and (e) VO2 in the metallic phase ($T>T_{c}$); (b), (d) and (f) VO2 in the insulating phase ($T<T_{c}$). The layer thicknesses are set as $L_{\mathrm{V}}$=20 nm and $L_{T}$=45 nm.

To understand the switching from the “on” to the “off” state, contour plots of the Casimir pressure are shown in Fig. 3 for different separations. The sign of the pressure is determined by the competition between the VO2 film (attraction) and the low-refractive-index Teflon (repulsion). For the small separation $d$=30 nm, the pressure is dominated by the repulsive component, as shown in Figs. 3(a) and 3(b). For the metallic phase, the attractive component increases and compensates the repulsive one as the separation becomes 85 nm ($d\approx d_{c}$), resulting in a Casimir equilibrium (see Fig. 3(c)). For the insulating phase, by contrast, the repulsion is still dominant, as shown in Fig. 3(d). As $d$ increases further to 150 nm, the Casimir pressure turns out to be dominantly attractive in Fig. 3(e) for the metallic phase, resulting in a restoring force for stable trapping. By contrast, the pressure is still dominated by repulsion for the insulating phase, as shown in Fig. 3(f). The pressure maps for the metallic and insulating phases are almost identical at large energies (e.g., $>$2 eV), whereas the discrepancy manifests itself at low energies. The results indicate that, for metallic VO2, the attractive component appears only at low frequency and small $k$ vector, where the field cannot penetrate the metal Zhao et al. (2019). Conversely, the field can penetrate the thin film of insulating VO2 easily, leading to repulsive Casimir forces.

Figure 4: (color online) (a) The equilibrium distances versus the thickness of VO2 for three different configurations (see the inset on the right). The thickness $L_{T}$ is set as 45 nm. The solid (dashed) curves for type III represent stable (unstable) equilibria. Contour plots of Casimir pressure versus the thickness of the Teflon coating for (b) metallic VO2 and (c) insulating VO2, where the thickness $L_{\mathrm{V}}$=20 nm is fixed. In (b) and (c), the gray zones represent a strong repulsive pressure larger than 1 Pa. The colors of the curves have the same meaning as in (a).
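To make the formalism of Sec. II concrete, the following minimal sketch (our own illustration, not the code used to produce the figures) evaluates the zero-temperature pressure from Eq. (1) for the geometry of Fig. 1(a), building the reflection coefficients with the transfer-matrix recipe of Eqs. (4)–(6). Bromobenzene and Teflon use the oscillator model of Eq. (13) with the Table 3 parameters; as a simplifying assumption, gold and metallic VO2 are represented here by bare Drude forms (textbook gold parameters, and the Drude part of Eq. (11) with the Lorentz terms dropped), so the output is only indicative. The function names, grids and exponent cap are our choices.

```python
import numpy as np

hbar = 1.054571817e-34                    # J s
e_over_hbar = 1.602176634e-19 / hbar      # converts eV to rad/s
eV_to_inv_m = 5.0677307e6                 # xi/c in 1/m when xi is given in eV

def trapz(y, x):
    # version-proof trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Oscillator model, Eq. (13), with the fitted parameters of Table 3.
TEFLON = (np.array([0.0093, 0.0183, 0.139, 0.112, 0.195, 0.438, 0.106, 0.0386]),
          np.array([0.0003, 0.0076, 0.0557, 0.126, 6.71, 18.6, 42.1, 77.6]))
BRB = (np.array([0.0544, 0.0184, 0.0475, 0.532, 0.645, 0.240, 0.00927]),
       np.array([0.00502, 0.0309, 0.111, 6.75, 13.3, 24.0, 99.9]))

def eps_osc(xi, CW):
    C, w = CW
    return 1.0 + float(np.sum(C / (1.0 + (xi / w) ** 2)))

eps_teflon = lambda xi: eps_osc(xi, TEFLON)
eps_brb = lambda xi: eps_osc(xi, BRB)
# Simplified stand-ins (our assumption): a bare Drude form for gold and the
# Drude part of Eq. (11) for metallic VO2; the full fits of the Appendixes
# should be used for quantitative results.
eps_gold = lambda xi: 1.0 + 9.0 ** 2 / (xi * (xi + 0.035))
eps_vo2_metal = lambda xi: 1.0 + 3.33 ** 2 / (xi * (xi + 0.66))

def r_stack(xi, kpar, eps_vals, L_list, pol):
    """r = M21/M11 of Eqs. (4)-(6); eps_vals ordered from the liquid side,
    the last medium semi-infinite; L_list holds the finite thicknesses (m)."""
    K = [np.sqrt(kpar ** 2 + e * (xi * eV_to_inv_m) ** 2) for e in eps_vals]
    M = np.eye(2)
    for j in range(len(eps_vals) - 1):
        if pol == 'p':
            eta = eps_vals[j] * K[j + 1] / (eps_vals[j + 1] * K[j])
        else:
            eta = K[j + 1] / K[j]
        M = M @ (0.5 * np.array([[1 + eta, 1 - eta], [1 - eta, 1 + eta]]))
        if j < len(L_list):                        # propagate through layer j+1
            KL = min(K[j + 1] * L_list[j], 300.0)  # cap the exponent to avoid overflow
            M = M @ np.diag([np.exp(KL), np.exp(-KL)])
    return M[1, 0] / M[0, 0]

def casimir_pressure(d, eps_up, L_up, eps_dn, L_dn, n_xi=40, n_k=120):
    """Zero-temperature Casimir pressure (Pa) from Eq. (1);
    positive = repulsive, matching the sign convention of Fig. 2."""
    xi_grid = np.logspace(-2.5, 1.8, n_xi)         # eV
    k = np.logspace(3, 10.5, n_k)                  # parallel wavevector, 1/m
    p_xi = np.zeros(n_xi)
    for i, xi in enumerate(xi_grid):
        k3 = np.sqrt(k ** 2 + eps_brb(xi) * (xi * eV_to_inv_m) ** 2)
        g = np.zeros(n_k)
        for pol in ('s', 'p'):
            r1 = np.array([r_stack(xi, kk, [f(xi) for f in eps_up], L_up, pol) for kk in k])
            r2 = np.array([r_stack(xi, kk, [f(xi) for f in eps_dn], L_dn, pol) for kk in k])
            x = r1 * r2 * np.exp(-2.0 * k3 * d)
            g += 2.0 * k3 * x / (1.0 - x)
        p_xi[i] = trapz(k * g, k)
    return -hbar * e_over_hbar / (4.0 * np.pi ** 2) * trapz(p_xi, xi_grid)

# Gold nanoplate (40 nm) in bromobenzene above Teflon(45 nm)/VO2(20 nm)/Teflon:
up = ([eps_brb, eps_gold, eps_brb], [40e-9])
dn = ([eps_brb, eps_teflon, eps_vo2_metal, eps_teflon], [45e-9, 20e-9])
print(casimir_pressure(100e-9, *up, *dn), "Pa")
```

The accuracy of such a sketch is controlled by the $\xi$ and $k$ grids; denser grids and the full permittivity fits of the Appendixes are needed to reproduce the curves of Figs. 2–4.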
In practice, the influences of gravity and buoyancy on the force balance should be taken into account. The condition for force equilibrium is written as $\vec{n}\cdot(\mathbf{F}_{c}+\mathbf{F}_{\mathrm{GB}})$=0, where $\vec{n}$ is the unit vector normal to the surface, $F_{\mathrm{GB}}=(\rho_{g}-\rho_{liq})gL_{g}A$ is the sum of gravity and buoyancy, $g$ is the gravitational acceleration, and $\rho_{g}\approx$19.3 g/cm3 and $\rho_{liq}\approx$1.50 g/cm3 are the densities of gold and liquid bromobenzene, respectively. The magnitude of $F_{\mathrm{GB}}/A$ is about 7.0 mPa for the thickness $L_{g}$=40 nm. Three types of configurations are depicted in the inset of Fig. 4(a) in cross-section view. The type I configuration corresponds to zero gravity projection (e.g., under weightlessness in space), where the switching from quantum trapping (metallic state) to its release (insulating state) can be obtained for $L_{\mathrm{V}}$ in a proper range, from about 2 to 22 nm. For the type II configuration, the attractive $F_{\mathrm{GB}}$ can compensate the long-range repulsive Casimir force at large $d$, leading to stable suspensions for both $T>T_{c}$ and $T<T_{c}$. However, the equilibrium distances are different, and it can be inferred that the trapping stiffness for the metallic phase is larger than that for the insulating phase. For the type III configuration (a flipped-down system), the switching between trapping and release can also be realized. Interestingly, there are two equilibrium distances for this configuration. It is not difficult to see that the equilibrium at the smaller distance (solid lines) is stable, whereas the one at the larger distance (dashed lines) is unstable against small perturbations in position. For both the type II and III configurations, the deviations from type I become strong when $d_{c}$ is large.

In addition to the thickness of the VO2 film, the top-layer Teflon can also play a significant role in the Casimir effect. The plots of Casimir pressure versus the thickness $L_{T}$ of the Teflon coating are shown in Figs. 4(b) and 4(c), where $L_{\mathrm{V}}$=20 nm is fixed. The results show that the switching between quantum trapping and its release occurs only when $L_{T}$ is larger than about 42 nm (no gravity). The larger $L_{T}$, the larger the position of the Casimir equilibrium. When $L_{T}$ is smaller than 42 nm, the equilibrium distance is also small, and quantum trapping can be realized for both the metallic and insulating phases. For comparison, gravity and buoyancy are also taken into account. Again, strong discrepancies among the three configurations occur when the equilibrium positions are larger than about 150 nm, resulting from the comparable magnitudes of $F_{GB}$ and the Casimir force. The impact of $F_{GB}$ can be further reduced by decreasing the thickness $L_{g}$ towards the skin depth (about 22 nm) Lisanti et al. (2005).

Figure 5: (color online) Casimir pressure for a complementary design. A thin film of VO2 with thickness $L_{V}$ is deposited on a Teflon substrate. (a) The metallic VO2. (b) The insulating VO2. The thickness of the suspended nanoplate is set as 100 nm.

Figure 6: (color online) Casimir pressure calculated for finite temperatures and the 0 K approximation from Eq. (1). (a) The trapping and release of a gold nanoplate. The parameters for the substrate are $L_{T}$=45 nm and $L_{V}$=20 nm. (b) The trapping and release of a Teflon nanoplate. The thickness $L_{V}$ is set as 2 nm.
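The gravity-plus-buoyancy pressures quoted above (7.0 mPa for the 40 nm gold nanoplate) and in Sec. III.2 below (0.6 mPa for the 100 nm Teflon nanoplate) follow directly from $F_{\mathrm{GB}}/A=(\rho-\rho_{liq})gL$; a minimal check:

```python
g_acc = 9.81                     # m/s^2
rho_liq = 1.50e3                 # kg/m^3, bromobenzene

def f_gb_over_a(rho_plate, thickness):
    """Gravity-plus-buoyancy pressure (Pa) of a suspended plate, F_GB/A."""
    return (rho_plate - rho_liq) * g_acc * thickness

print(f_gb_over_a(19.3e3, 40e-9))    # gold,   L_g = 40 nm   -> ~7.0e-3 Pa
print(f_gb_over_a(2.1e3, 100e-9))    # Teflon, L_p = 100 nm  -> ~0.6e-3 Pa
```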
### III.2 Tunable Casimir equilibria for Teflon nanoplates

The active control of low-refractive-index nanoplates can also be significant in many applications. Inspired by the work of Zhao et al. (2019), a complementary design is schematically shown in the inset of Fig. 5(a). A Teflon nanoplate is suspended in a liquid of bromobenzene, and the substrate is a semi-infinite plate of Teflon coated by a VO2 film (high refractive index). Under such a design, the Casimir force is repulsive at very short separations, due to the dominant interaction of Teflon/bromobenzene/VO2. As the separation increases, the attractive interaction of Teflon/bromobenzene/Teflon can become dominant instead, resulting in stable Casimir trapping. To verify the design, the Casimir pressure is given quantitatively in Figs. 5(a) and 5(b) as a function of separation. Interestingly, the Casimir pressure shows a long-range repulsive behavior for the metallic VO2, which corresponds to the “off” state. The repulsive pressure becomes stronger as the thickness $L_{\mathrm{V}}$ increases from 2 to 6 nm. For $L_{\mathrm{V}}$= 2 nm, a Casimir equilibrium and strong restoring forces can be found when VO2 is in the insulating phase. Therefore, the quantum trapping and release of a Teflon nanoplate can be achieved under the insulator-to-metal transition of VO2. At a thickness of 4 nm, the restoring force decreases and the trapping stiffness drops considerably. The calculated results indicate that the Casimir pressure is quite sensitive to the thickness of VO2. Due to the low density of Teflon (2.1 g/cm3), the pressure $F_{GB}/A$ for the Teflon nanoplate is about 0.6 mPa, which is significantly reduced compared with that of the gold nanoplates.

### III.3 Finite temperature effects

To achieve the phase transition of VO2, the temperature of the devices needs to be changed. We assume that the dielectric functions of gold and Teflon are temperature-independent. For organic liquids, the change of the refractive index with temperature Li et al. (1994) is of order $10^{-4}$/K, and the permittivity of bromobenzene is therefore also treated as temperature-independent. Nonetheless, it is interesting to check the finite temperature effect on the Casimir forces. The integral over frequency $\xi$ in Eq. (1) is now replaced by a discrete summation Rahi et al. (2009):

$\frac{\hbar}{2\pi}\int_{0}^{\infty}d\xi\leftrightarrow k_{B}T\overset{\infty}{\underset{n=0}{\sum}}^{\prime},$ (7)

where $\xi$ is replaced by the discrete Matsubara frequencies $\xi_{n}=2\pi\frac{k_{B}T}{\hbar}n$ $(n=0,1,2,3,\ldots)$, $k_{B}$ is Boltzmann's constant and the prime denotes a prefactor 1/2 for the term $n$=0. The Casimir pressures at different temperatures are shown in Figs. 6(a) and 6(b), where two different designs are demonstrated. It is found that the curves for temperature 320 K (insulating phase) overlap with those calculated from Eq. (1). For the temperature of 360 K, there is only a small deviation from the 0 K results. Overall, the calculated results at 320 and 360 K confirm the accuracy of the 0 K approximation. Recently, the switching between repulsive and attractive Casimir forces based on PCMs has also been reported Boström et al. (2018), where the equilibrium distances for switching occur only at a few nanometers. The equilibrium distances in our work are more accessible to experiments, and they can be tuned by designing the thicknesses of the VO2 film and the Teflon layer.
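The replacement (7) is straightforward to implement on top of the zero-temperature sketch of Sec. III.1, whose helpers (`hbar`, `e_over_hbar`, `eV_to_inv_m`, `eps_brb`, `r_stack`, `trapz`) are reused below. The snippet is again only a sketch: in particular, regularising the $n=0$ term by a small but finite frequency is our own shortcut, not necessarily the treatment used for Fig. 6.

```python
import numpy as np

kB = 1.380649e-23      # J/K
# hbar, e_over_hbar, eV_to_inv_m, eps_brb, r_stack and trapz are reused from
# the zero-temperature sketch in Sec. III.1.

def casimir_pressure_T(d, T, eps_up, L_up, eps_dn, L_dn, n_max=200, n_k=120):
    """Finite-temperature Casimir pressure (Pa) via the Matsubara sum, Eq. (7);
    the prime (weight 1/2 at n = 0) is implemented explicitly."""
    xi1 = 2.0 * np.pi * kB * T / hbar / e_over_hbar   # first Matsubara frequency, eV
    k = np.logspace(3, 10.5, n_k)                     # 1/m
    total = 0.0
    for n in range(n_max + 1):
        xi = max(n * xi1, 1e-4)   # small finite xi regularises n = 0 (our choice)
        k3 = np.sqrt(k ** 2 + eps_brb(xi) * (xi * eV_to_inv_m) ** 2)
        g = np.zeros(n_k)
        for pol in ('s', 'p'):
            r1 = np.array([r_stack(xi, kk, [f(xi) for f in eps_up], L_up, pol) for kk in k])
            r2 = np.array([r_stack(xi, kk, [f(xi) for f in eps_dn], L_dn, pol) for kk in k])
            x = r1 * r2 * np.exp(-2.0 * k3 * d)
            g += 2.0 * k3 * x / (1.0 - x)
        total += (0.5 if n == 0 else 1.0) * trapz(k * g, k)
    return -kB * T / (2.0 * np.pi) * total
```

At 320 K the first Matsubara frequency is about 0.17 eV, so a few hundred terms already cover the frequency range that dominates the integrand.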
Figure 7: (color online) The total energy of a suspended gold nanoplate (a) and a Teflon nanoplate (b) under different types of gravity projection. The solid and dashed lines represent the cases of metallic VO2 ($T$=360 K) and insulating VO2 ($T$=320 K), respectively. The in-plane area $A$ is set as 10 $\mu m\times$10 $\mu m$. Other parameters are kept the same as those in Fig. 6.

### III.4 The effect of Brownian motion

In a real configuration, the position of a nanoplate fluctuates around the equilibrium distance due to Brownian motion. To evaluate the effect of Brownian motion, the total energy of the suspended nanoplate should be known, which is written as $U(d)=E_{c}+\Lambda\times(E_{g}+E_{b})$, where $E_{c}$ is the Casimir energy given by Eq. (1), and $E_{g}=\rho_{p}gL_{p}Ad$ and $E_{b}=-\rho_{liq}gL_{p}Ad$ are the energies due to gravity and buoyancy, respectively Phan et al. (2012); $\rho_{p}$ and $L_{p}$ represent the density and thickness of the suspended nanoplate. The coefficient $\Lambda$ is a parameter depending on the gravity projection. For the type I configuration (see the inset of Fig. 4), $\Lambda$=0, while $\Lambda$=1 and -1 for the type II and type III configurations, respectively. The total energies of a gold and a Teflon nanoplate are shown in Figs. 7(a) and 7(b), respectively. The minimum of $U(d)/k_{B}T$ corresponds to the equilibrium distance $d_{c}$. Clearly, stable quantum trapping can be realized for a gold (Teflon) nanoplate when VO2 is in the metallic (insulating) phase. Due to the balance of the repulsive Casimir force and gravity, stable trapping can also be realized for the type II configuration. Theoretically, the transition rate from the equilibrium distance to another position due to Brownian motion is proportional to $\exp(-\triangle U/k_{B}T)$ Phan et al. (2012); Rodriguez et al. (2010b), where $\triangle U$ represents the energy barrier between these two positions. The calculated results indicate that the transition rates from the Casimir equilibria to stiction are negligible, since the energy barriers $\triangle U/k_{B}T$ are quite large (e.g., over $10^{4}$) for the gold and Teflon nanoplates. For a flipped-down system (type III), quantum trapping can be realized for a gold (Teflon) nanoplate when VO2 is in the metallic (insulating) phase. However, there is a nonzero probability that the nanoplates escape from the equilibrium distances to the free-liquid regime ($d\rightarrow\infty$). Fortunately, the energy barrier $\triangle U/k_{B}T$ for such a transition is of the order of $10^{2}$, as shown in Figs. 7(a) and 7(b), and the escape rate is also negligible.

## IV Conclusions

In summary, the Casimir forces between a nanoplate and a layered structure containing VO2 films are investigated. In a liquid-separated environment, not only the magnitude of the Casimir force can be modified, but also its sign can be switched (e.g., from attraction to repulsion), due to the phase transition of VO2. Moreover, a stable Casimir suspension of nanoplates and its tunability are revealed. For a gold nanoplate, a switch from quantum trapping to its release is obtained under the metal-to-insulator transition of VO2. In addition, the quantum trapping and release of a Teflon nanoplate are demonstrated with a complementary design. The dependence of the switching performance on the layer thicknesses, gravity and temperature is discussed as well.
Theoretically, the bromobenzene can be substituted by other high-refractive-index liquids (e.g., glycerol and styrene van Zwol and Palasantzas (2010)) as long as their boiling points are higher than $T_{c}$. The Teflon can also be replaced by other low-refractive-index materials (e.g., mesoporous silica Dou et al. (2014)). This work offers the possibility of designing switchable devices in MEMS/NEMS based on the quantum fluctuations of the electromagnetic field.

###### Acknowledgements.

This work is supported by the National Natural Science Foundation of China (Grant No. 11804288, No. 11704254, No. 61571386 and No. 61974127), and the Innovation Scientists and Technicians Troop Construction Projects of Henan Province. The research of L.X. Ge is further supported by the Nanhu Scholars Program for Young Scholars of XYNU.

## Appendix A The permittivity of gold

Here, a generalized Drude-Lorentz model is applied for the permittivity of gold Sehmi et al. (2017):

$\varepsilon(i\xi)=\varepsilon_{D}(i\xi)+\varepsilon_{L}(i\xi),$ (8)

where the Drude term is given by:

$\varepsilon_{D}(i\xi)=\varepsilon_{\infty}+\frac{\gamma\sigma}{\xi(\xi+\gamma)},$ (9)

where $\varepsilon_{\infty}=0.83409$, $\sigma=3134.5$ eV, and $\gamma=0.02334$ eV. The Lorentz term is described by four pairs of poles:

$\varepsilon_{L}(i\xi)=\overset{4}{\underset{j=1}{\sum}}\left(\frac{i\sigma_{j}}{i\xi-\Omega_{j}}+\frac{i\sigma_{j}^{\ast}}{i\xi+\Omega_{j}^{\ast}}\right)$ (10)

where $\sigma_{j}$ and $\Omega_{j}$ are the generalized conductivity and resonant frequency of the $j$-th Lorentz pole. The star superscripts denote complex conjugation. The generalized Drude-Lorentz model respects causality, and it can represent the exact physical resonances in the material. The parameters for the model are listed in Table 1.

Table 1: The fitted parameters for Lorentz poles of gold Sehmi et al. (2017).

$j$-th | $\sigma_{j}(\mathrm{eV})$ | $\Omega_{j}(\mathrm{eV})$
---|---|---
1 | -0.01743+0.3059*I | 2.6905-0.16645*I
2 | 1.0349+1.2919*I | 2.8772-0.44473*I
3 | 1.2274+2.5605*I | 3.7911-0.81981*I
4 | 9.85+37.614*I | 4.8532-13.891*I

## Appendix B The permittivity of VO2

For temperature $T>T_{c}$, VO2 is in the metallic phase, and the permittivity is given by Pirozhenko and Lambrecht (2008); Castillo-Garza et al. (2007)

$\varepsilon(i\xi)=1+\frac{\omega_{p}^{2}}{\xi(\xi+\gamma)}+\frac{\varepsilon_{\infty}-1}{1+\xi^{2}/\omega_{\infty}^{2}}+\underset{j=1}{\overset{4}{\sum}}\frac{s_{j}}{1+(\xi/\omega_{j})^{2}+\Gamma_{j}\xi/\omega_{j}},$ (11)

where $\varepsilon_{\infty}=3.95$, $\omega_{p}=3.33$ eV, and $\gamma=0.66$ eV. The parameters $s_{j}$ and $\Gamma_{j}$ represent the strength and linewidth of the $j$-th oscillator (resonant frequency $\omega_{j}$), respectively. For temperature $T<T_{c}$, VO2 is in the insulating phase, and the permittivity is described by

$\varepsilon(i\xi)=1+\frac{\varepsilon_{\infty}-1}{1+\xi^{2}/\omega_{\infty}^{2}}+\underset{j=1}{\overset{7}{\sum}}\frac{s_{j}}{1+(\xi/\omega_{j})^{2}+\Gamma_{j}\xi/\omega_{j}},$ (12)

where $\varepsilon_{\infty}=4.26$ and $\omega_{\infty}=15$ eV. The above equations for metallic and insulating VO2 are valid over a wide range of frequency (up to about 10 eV) Castillo-Garza et al. (2007); they are modified versions of the model of Ref. Verleur et al. (1968). The parameters are listed in Table 2.

Table 2: The parameters for the metallic and insulating VO2 Castillo-Garza et al. (2007).
$j$-th ($T>T_{c}$) | $S_{j}$ | $\omega_{j}(\mathrm{eV})$ | $\Gamma_{j}$
---|---|---|---
1 | 1.816 | 0.86 | 0.95
2 | 0.972 | 2.8 | 0.23
3 | 1.04 | 3.48 | 0.28
4 | 1.05 | 4.6 | 0.34

$j$-th ($T<T_{c}$) | $S_{j}$ | $\omega_{j}(\mathrm{eV})$ | $\Gamma_{j}$
---|---|---|---
1 | 0.79 | 1.02 | 0.55
2 | 0.474 | 1.30 | 0.55
3 | 0.483 | 1.50 | 0.50
4 | 0.536 | 2.75 | 0.22
5 | 1.316 | 3.49 | 0.47
6 | 1.060 | 3.76 | 0.38
7 | 0.99 | 5.1 | 0.385

Table 3: The parameters for Teflon (left) and bromobenzene (right) van Zwol and Palasantzas (2010).

$j$-th | $C_{j}$ | $\omega_{j}(\mathrm{eV})$ | $C_{j}$ | $\omega_{j}(\mathrm{eV})$
---|---|---|---|---
1 | 0.0093 | 0.0003 | 0.0544 | 0.00502
2 | 0.0183 | 0.0076 | 0.0184 | 0.0309
3 | 0.139 | 0.0557 | 0.0475 | 0.111
4 | 0.112 | 0.126 | 0.532 | 6.75
5 | 0.195 | 6.71 | 0.645 | 13.3
6 | 0.438 | 18.6 | 0.240 | 24.0
7 | 0.106 | 42.1 | 0.00927 | 99.9
8 | 0.0386 | 77.6 | |

## Appendix C The permittivity of Teflon and bromobenzene

The permittivities of Teflon and bromobenzene are given by the oscillator model van Zwol and Palasantzas (2010):

$\varepsilon(i\xi)=1+\sum_{j}\frac{C_{j}}{1+(\xi/\omega_{j})^{2}},$ (13)

where $C_{j}$ is the oscillator strength of the $j$-th resonance, and $\omega_{j}$ is the corresponding resonant frequency. The values of $C_{j}$ and $\omega_{j}$ listed in Table 3 are fitted to experimental data over a wide range of frequency.

## References

* Lyshevski (2018) S. E. Lyshevski, _MEMS and NEMS: systems, devices, and structures_ (CRC press, 2018).
* Craighead (2000) H. G. Craighead, Science 290, 1532 (2000).
* Eom et al. (2011) K. Eom, H. S. Park, D. S. Yoon, and T. Kwon, Phys. Rep. 503, 115 (2011).
* Xu et al. (2011) R. Xu, S. Zhou, and W. J. Li, IEEE Sens. J. 12, 1166 (2011).
* Wang (2013) J. Wang, _Nanomachines: fundamentals and applications_ (John Wiley & Sons, 2013).
* Buks and Roukes (2001) E. Buks and M. L. Roukes, Phys. Rev. B 63, 033402 (2001).
* Chan et al. (2001) H. Chan, V. Aksyuk, R. Kleiman, D. Bishop, and F. Capasso, Science 291, 1941 (2001).
* Casimir (1948) H. B. Casimir, Proc. Kon. Ned. Akad. Wet. 51, 793 (1948).
* Klimchitskaya et al. (2009) G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko, Rev. Mod. Phys. 81, 1827 (2009).
* Yampol’skii et al. (2008) V. A. Yampol’skii, S. Savel’ev, Z. A. Mayselis, S. S. Apostolov, and F. Nori, Phys. Rev. Lett. 101, 096803 (2008).
* Yampol’skii et al. (2010) V. A. Yampol’skii, S. Savel’ev, Z. A. Maizelis, S. S. Apostolov, and F. Nori, Phys. Rev. A 82, 032511 (2010).
* Woods et al. (2016) L. M. Woods, D. A. R. Dalvit, A. Tkatchenko, P. Rodriguez-Lopez, A. W. Rodriguez, and R. Podgornik, Rev. Mod. Phys. 88, 045003 (2016).
* Munday et al. (2009) J. N. Munday, F. Capasso, and V. A. Parsegian, Nature 457, 170 (2009).
* van Zwol and Palasantzas (2010) P. J. van Zwol and G. Palasantzas, Phys. Rev. A 81, 062502 (2010).
* Phan and Viet (2011) A. D. Phan and N. A. Viet, Phys. Rev. A 84, 062503 (2011).
* Dou et al. (2014) M. Dou, F. Lou, M. Boström, I. Brevik, and C. Persson, Phys. Rev. B 89, 201407(R) (2014).
* Rosa et al. (2008) F. S. S. Rosa, D. A. R. Dalvit, and P. W. Milonni, Phys. Rev. Lett. 100, 183602 (2008).
* Zhao et al. (2009) R. Zhao, J. Zhou, T. Koschny, E. N. Economou, and C. M. Soukoulis, Phys. Rev. Lett. 103, 103602 (2009).
* Zhao et al. (2011) R. Zhao, T. Koschny, E. N. Economou, and C. M. Soukoulis, Phys. Rev. B 83, 075108 (2011).
* Song et al. (2018) G. Song, R. Zeng, M. Al-Amri, J. Xu, C. Zhu, P. He, and Y. Yang, Opt. Express 26, 34461 (2018).
* Grushin and Cortijo (2011) A. G. Grushin and A. Cortijo, Phys. Rev. Lett. 106, 020403 (2011). * Chen and Wan (2012) L. Chen and S. Wan, Phys. Rev. B 85, 115102 (2012). * Nie et al. (2013) W. Nie, R. Zeng, Y. Lan, and S. Zhu, Phys. Rev. B 88, 085421 (2013). * Tang et al. (2017) L. Tang, M. Wang, C. Y. Ng, M. Nikolic, C. T. Chan, A. W. Rodriguez, and H. B. Chan, Nat. Photonics 11, 97 (2017). * Levin et al. (2010) M. Levin, A. P. McCauley, A. W. Rodriguez, M. T. Homer Reid, and S. G. Johnson, Phys. Rev. Lett. 105, 090403 (2010). * Rodriguez et al. (2008) A. W. Rodriguez, J. N. Munday, J. D. Joannopoulos, F. Capasso, D. A. R. Dalvit, and S. G. Johnson, Phys. Rev. Lett. 101, 190404 (2008). * Rahi and Zaheer (2010) S. J. Rahi and S. Zaheer, Phys. Rev. Lett. 104, 070405 (2010). * Rodriguez et al. (2010a) A. W. Rodriguez, A. P. McCauley, D. Woolf, F. Capasso, J. D. Joannopoulos, and S. G. Johnson, Phys. Rev. Lett. 104, 160402 (2010a). * Zhao et al. (2019) R. Zhao, L. Li, S. Yang, W. Bao, Y. Xia, P. Ashby, Y. Wang, and X. Zhang, Science 364, 984 (2019). * Torricelli et al. (2012) G. Torricelli, P. J. Van Zwol, O. Shpak, G. Palasantzas, V. B. Svetovoy, C. Binns, B. J. Kooi, P. Jost, and M. Wuttig, Adv. Funct. Mater. 22, 3729 (2012). * Sedighi et al. (2013) M. Sedighi, W. H. Broer, G. Palasantzas, and B. J. Kooi, Phys. Rev. B 88, 165423 (2013). * Torricelli et al. (2010) G. Torricelli, P. J. van Zwol, O. Shpak, C. Binns, G. Palasantzas, B. J. Kooi, V. B. Svetovoy, and M. Wuttig, Phys. Rev. A 82, 010101(R) (2010). * Shao et al. (2018) Z. Shao, X. Cao, H. Luo, and P. Jin, NPG Asia Mater. 10, 581 (2018). * Zylbersztejn and Mott (1975) A. M. N. F. Zylbersztejn and N. F. Mott, Phys. Rev. B 11, 4383 (1975). * Wu et al. (2017) S.-H. Wu, M. Chen, M. T. Barako, V. Jankovic, P. W. C. Hon, L. A. Sweatlock, and M. L. Povinelli, Optica 4, 1390 (2017). * Liu et al. (2017) H. Liu, J. Lu, and X. R. Wang, Nanotechnology 29, 024002 (2017). * Kats et al. (2012) M. A. Kats, D. Sharma, J. Lin, P. Genevet, R. Blanchard, Z. Yang, M. M. Qazilbash, D. Basov, S. Ramanathan, and F. Capasso, Appl. Phys. Lett. 101, 221101 (2012). * van Zwol et al. (2012) P. J. van Zwol, L. Ranno, and J. Chevrier, Phys. Rev. Lett. 108, 234301 (2012). * Cavalleri et al. (2001) A. Cavalleri, C. Tóth, C. W. Siders, J. A. Squier, F. Ráksi, P. Forget, and J. C. Kieffer, Phys. Rev. Lett. 87, 237401 (2001). * Rini et al. (2008) M. Rini, Z. Hao, R. W. Schoenlein, C. Giannetti, F. Parmigiani, S. Fourmaux, J. C. Kieffer, A. Fujimori, M. Onoda, S. Wall, et al., Appl. Phys. Lett. 92, 181904 (2008). * Qazilbash et al. (2008) M. M. Qazilbash, Z. Q. Li, V. Podzorov, M. Brehm, F. Keilmann, B. G. Chae, H.-T. Kim, and D. N. Basov, Appl. Phys. Lett. 92, 241906 (2008). * Nakano et al. (2012) M. Nakano, K. Shibuya, D. Okuyama, T. Hatano, S. Ono, M. Kawasaki, Y. Iwasa, and Y. Tokura, Nature 487, 459 (2012). * Galkina et al. (2009) E. G. Galkina, B. A. Ivanov, S. Savel’ev, V. A. Yampol’skii, and F. Nori, Phys. Rev. B 80, 125119 (2009). * Pirozhenko and Lambrecht (2008) I. Pirozhenko and A. Lambrecht, Phys. Rev. A 77, 013811 (2008). * Castillo-Garza et al. (2007) R. Castillo-Garza, C.-C. Chang, D. Jimenez, G. L. Klimchitskaya, V. M. Mostepanenko, and U. Mohideen, Phys. Rev. A 75, 062114 (2007). * Milton (2004) K. A. Milton, J. Phys. A 37, R209 (2004). * Zhan et al. (2013) T. Zhan, X. Shi, Y. Dai, X. Liu, and J. Zi, J. Phys.: Condens. Matter 25, 215301 (2013). * Lisanti et al. (2005) M. Lisanti, D. Iannuzzi, and F. Capasso, Proc. Natl. Acad. Sci. U.S.A. 
102, 11989 (2005). * Li et al. (1994) W. Li, P. N. Segre, R. Gammon, J. V. Sengers, and M. Lamvik, J. Chem. Phys. 101, 5058 (1994). * Rahi et al. (2009) S. J. Rahi, T. Emig, N. Graham, R. L. Jaffe, and M. Kardar, Phys. Rev. D 80, 085021 (2009). * Boström et al. (2018) M. Boström, M. Dou, O. I. Malyi, P. Parashar, D. F. Parsons, I. Brevik, and C. Persson, Phys. Rev. B 97, 125421 (2018). * Phan et al. (2012) A. D. Phan, L. M. Woods, D. Drosdoff, I. V. Bondarev, and N. A. Viet, Appl. Phys. Lett. 101, 113118 (2012). * Rodriguez et al. (2010b) A. W. Rodriguez, D. Woolf, A. P. McCauley, F. Capasso, J. D. Joannopoulos, and S. G. Johnson, Phys. Rev. Lett. 105, 060401 (2010b). * Sehmi et al. (2017) H. S. Sehmi, W. Langbein, and E. A. Muljarov, Phys. Rev. B 95, 115444 (2017). * Verleur et al. (1968) H. W. Verleur, A. S. Barker Jr, and C. N. Berglund, Phys. Rev. 172, 788 (1968).
2024-09-04T02:54:58.377245
2020-03-09T13:08:27
2003.04110
{ "authors": "Sajid Ali, Georg Bergner, Henning Gerber, Istvan Montvay, Gernot\n M\\\"unster, Stefano Piemonte and Philipp Scior", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26116", "submitter": "Sajid Ali", "url": "https://arxiv.org/abs/2003.04110" }
arxiv-papers
MS-TP-20-17

# Continuum extrapolation of Ward identities in $\mathbf{\mathcal{N}=1}$ supersymmetric SU(3) Yang-Mills theory

Sajid Ali<EMAIL_ADDRESS>University of Münster, Institute for Theoretical Physics, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany Government College University Lahore, Department of Physics, Lahore 54000, Pakistan Georg Bergner<EMAIL_ADDRESS>University of Jena, Institute for Theoretical Physics, Max-Wien-Platz 1, D-07743 Jena, Germany University of Münster, Institute for Theoretical Physics, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany Henning Gerber<EMAIL_ADDRESS>University of Münster, Institute for Theoretical Physics, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany Istvan Montvay<EMAIL_ADDRESS>Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, D-22607 Hamburg, Germany Gernot Münster University of Münster, Institute for Theoretical Physics, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany Stefano Piemonte<EMAIL_ADDRESS>University of Regensburg, Institute for Theoretical Physics, Universitätsstr. 31, D-93040 Regensburg, Germany Philipp Scior<EMAIL_ADDRESS>Universität Bielefeld, Fakultät für Physik, Universitätsstr. 25, D-33615 Bielefeld, Germany (14th May 2020)

###### Abstract

In $\mathcal{N}=1$ supersymmetric Yang-Mills theory, regularised on a space-time lattice, supersymmetry is broken explicitly by the lattice regulator, in addition to the soft breaking by the gluino mass term. Besides being a tool for tuning the parameters of the theory, the supersymmetric Ward identities can be used to investigate lattice artefacts as well as to check whether supersymmetry is recovered in the chiral and continuum limits. In this paper we present the numerical results of an analysis of the supersymmetric Ward identities for our available gauge ensembles at different values of the inverse gauge coupling $\beta$ and of the hopping parameter $\kappa$. The results clearly indicate that the lattice artefacts vanish in the continuum limit, confirming the restoration of supersymmetry.

## 1 Introduction

Supersymmetry (SUSY) is an elegant idea which relates fermions and bosons, whose spins differ by 1/2, through supercharges [1]. SUSY provides dark matter candidates in the form of the lightest supersymmetric particles [2]. In addition, supersymmetric extensions of the Standard Model would resolve the hierarchy problem [3]. $\mathcal{N}=1$ supersymmetric Yang-Mills (SYM) theory, which is considered in this article, provides an extension of the pure gluonic part of the Standard Model [4]. It describes the strong interactions between gluons and gluinos, the superpartners of the gluons. Gluinos are Majorana particles that transform under the adjoint representation of the gauge group. The on-shell Lagrangian of $\mathcal{N}=1$ SYM theory, which consists of the gluon fields $A^{a}_{\mu}(x)$ and the gluino fields $\lambda^{a}(x)$, where $a=1,\ldots,N^{2}_{c}-1$, can be written in Minkowski space as

$\mathcal{L}_{\text{SYM}}=-\frac{1}{4}F^{a}_{\mu\nu}F^{a,\mu\nu}+\frac{\mathrm{i}}{2}\bar{\lambda}^{a}\gamma^{\mu}\left(\mathcal{D}_{\mu}\lambda\right)^{a}-\frac{m_{\tilde{g}}}{2}\bar{\lambda}^{a}\lambda^{a},$ (1)

where the first term, containing the field strength tensor $F^{a}_{\mu\nu}$, is the gauge part, and $\mathcal{D}_{\mu}$ in the second term is the covariant derivative in the adjoint representation of the gauge group SU($N_{c}$), $N_{c}$ being the number of colors.
The last part of the above Lagrangian is a gluino mass term, which breaks SUSY softly for $m_{\tilde{g}}\neq 0$; this means that it does not affect the renormalisation properties of the theory and that the spectrum of the theory depends on the gluino mass in a continuous way. The physical spectrum of this theory is expected to consist of bound states of gluons and gluinos, arranged in mass degenerate supermultiplets if SUSY is not broken [5, 6].

In order to perform Monte Carlo simulations of the theory, we discretise the Euclidean action and put it onto a four-dimensional hypercubic lattice. We use the Curci-Veneziano version [7] of the lattice action $S=S_{g}+S_{f}$, where the gauge part $S_{g}$ is defined by the usual plaquette action

$S_{g}=-\frac{\beta}{N_{c}}\sum_{p}\mathrm{Re}\left[\mathrm{tr}\left(U_{p}\right)\right],$ (2)

with the inverse gauge coupling given by $\beta=2N_{c}/g^{2}$, and the fermionic part

$S_{f}=\frac{1}{2}\sum_{x}\left\\{\bar{\lambda}^{a}_{x}\lambda_{x}^{a}-\kappa\sum_{\mu=1}^{4}\left[\bar{\lambda}^{a}_{x+\hat{\mu}}V_{ab,x\mu}(1+\gamma_{\mu})\lambda^{b}_{x}+\bar{\lambda}^{a}_{x}V^{T}_{ab,x\mu}(1-\gamma_{\mu})\lambda^{b}_{x+\hat{\mu}}\right]\right\\}$ (3)

implements the gluinos as Wilson fermions. Here the adjoint link variables are defined by $V_{ab,x\mu}=2\,\mathrm{tr}\,(U_{x\mu}^{\dagger}T_{a}U_{x\mu}T_{b})$, where $T_{a}$ are the generators of the gauge group, and the hopping parameter $\kappa$ is related to the bare gluino mass $m_{\tilde{g}}$ by $\kappa=1/(2m_{\tilde{g}}+8)$. In order to approach the limit of vanishing gluino mass, the hopping parameter has to be tuned properly. In our numerical investigations the fermionic part is additionally $O(a)$ improved by adding the clover term $-(c_{sw}/4)\,\bar{\lambda}(x)\sigma_{\mu\nu}F^{\mu\nu}\lambda(x)$ [8]. In our previous investigations we have determined the low-lying mass spectrum of the theory with gauge groups SU(2) and SU(3) non-perturbatively from first principles using Monte Carlo techniques [4, 9, 10, 11], and obtained mass degenerate supermultiplets [12].

## 2 SUSY Ward identities

In classical physics, Noether's theorem provides a relation between symmetries and conservation laws. In quantum field theories, symmetries translate into Ward identities, representing quantum versions of Noether's theorem. In $\mathcal{N}=1$ supersymmetric Yang-Mills theory a gluino mass term breaks SUSY softly. The soft breaking effects vanish in the chiral limit, the limit where the theory is characterised by massless gluinos. In order to analyse this breaking of supersymmetry and to identify the chiral limit, we employ the Ward identities for supersymmetry. Moreover, on the lattice supersymmetry is broken explicitly by the discretisation of space-time, which acts as a regulator of the theory. The SUSY Ward identities can be used to check whether supersymmetry is restored in the continuum limit.

In the Euclidean continuum, on-shell supersymmetry transformations of the gauge and gluino fields are given by

$\delta A_{\mu}^{a}=-2\,\mathrm{i}\,\overline{\lambda}^{a}\gamma_{\mu}\,\varepsilon\,,\quad\delta\lambda^{a}=-\sigma_{\mu\nu}F_{\mu\nu}^{a}\,\varepsilon\,,$ (4)

where the transformation parameter $\varepsilon$ is an anticommuting Majorana spinor. From the variation of the action under a supersymmetry transformation with a space-time-dependent parameter $\varepsilon(x)$ one derives the SUSY Ward identities.
For any suitable gauge invariant local operator $Q(y)$, they read $\left\langle\partial^{\mu}S_{\mu}(x)Q(y)\right\rangle=m_{\tilde{g}}\left\langle\chi(x)Q(y)\right\rangle-\left\langle\frac{\delta Q(y)}{\delta\bar{\epsilon}(x)}\right\rangle,$ (5) where $S_{\mu}(x)=(S_{\mu}^{\alpha}(x))$ is the supercurrent of spin 3/2, and the term $m_{\tilde{g}}\left\langle\chi(x)Q(y)\right\rangle$ is due to the gluino mass in the action of the theory. In the continuum the supercurrent $S_{\mu}(x)$ and the operator $\chi(x)$ are given by $\displaystyle S_{\mu}(x)$ $\displaystyle=-\frac{2\,\mathrm{i}}{g}\mathrm{tr}\left[F^{\nu\rho}(x)\sigma_{\nu\rho}\gamma_{\mu}\lambda(x)\right],$ (6) $\displaystyle\chi(x)$ $\displaystyle=+\frac{2\,\mathrm{i}}{g}\mathrm{tr}\left[F^{\mu\nu}(x)\sigma_{\mu\nu}\lambda(x)\right].$ (7) The last term of Eq. (5) is a contact term, which contributes only if $x=y$, and it can be avoided if $Q(y)$ is not localised at $x$. Therefore the contact term is ignored in the following discussions. The four-dimensional space-time lattice breaks SUSY explicitly. As a consequence, the lattice versions of the Ward identities differ from their continuum counterparts by an additional term $\left\langle X_{S}(x)Q(y)\right\rangle$. The explicit form of this term is known, but need not be displayed here. At tree level this term is proportional to the lattice spacing $a$ and vanishes in the limit of zero lattice spacing. At higher orders in perturbation theory, however, the contribution of this term remains finite in the continuum limit, due to divergences proportional to $1/a$ that multiply the factor $a$. This plays a role for the renormalisation of the supercurrent and of the gluino mass [7, 13]. In the renormalisation of SUSY Ward identities, operators of dimensions $\leq 11/2$ have to be taken into account. They lead to a modification of the gluino mass, and in addition a current $T_{\mu}$, mixing with the supercurrent, appears, corresponding to an operator of dimension $9/2$. Consequently, on the lattice the following Ward identities are obtained $Z_{S}\left\langle\nabla_{\mu}S_{\mu}(x)Q(y)\right\rangle+Z_{T}\left\langle\nabla_{\mu}T_{\mu}(x)Q(y)\right\rangle=m_{S}\left\langle\chi(x)Q(y)\right\rangle+O(a),$ (8) where $Z_{S}$ and $Z_{T}$ are renormalisation coefficients. The subtracted gluino mass is defined as $m_{S}=m_{\tilde{g}}-\bar{m}$, where $\bar{m}$ is the mass subtraction coming from the operators of dimension $7/2$. The mixing current is defined as $T_{\mu}(x)=\frac{2\,\mathrm{i}}{g}\mathrm{tr}\left[F_{\mu\nu}(x)\gamma_{\nu}\lambda(x)\right].$ (9) Regarding the local insertion operator $Q(y)$, our choice is the spinor $Q(y)=\chi^{(\mathrm{sp})}(y)$, with $\chi^{(\mathrm{sp})}(y)=\sum_{i<j}\mathrm{tr}\left[F_{ij}(y)\sigma_{ij}\lambda(y)\right],$ (10) where the indices $i,j\in\\{1,2,3\\}$. The reason behind this choice is that it gives the best signal [13].

## 3 Numerical analysis of SUSY Ward identities

We have analysed the SUSY Ward identities numerically, employing the configurations produced in our project on $\mathcal{N}=1$ supersymmetric Yang-Mills theory with gauge group SU(3). Numerically it is convenient to use integrated Ward identities, in which a sum is performed over all three spatial coordinates. The resulting identities then hold for every time-slice distance $t$. In the analysis the data from all time-slice distances in an interval $t_{min}\leq t\leq t_{max}$ are included.
The lower limit $t_{min}$ is always taken to be greater than or equal to 3, in order to avoid contamination from contact terms. The choice of $t_{min}$ for the different ensembles of configurations is discussed below. Since the correlation functions are symmetric or antisymmetric in $t$, the upper limit $t_{max}$ is chosen to be half of the time extent of the lattice. Each term in Eq. (8) is a 4$\times$4 matrix in spin-space and can be expanded in the basis of 16 Dirac matrices, i.e. $\left\\{\boldsymbol{1},\gamma_{5},\gamma_{\mu},\gamma_{\mu}\gamma_{5},\mathrm{i}\sigma_{\mu\nu}\right\\}$. It can be shown, with the help of discrete symmetries, that only the following two contributions are non-zero [13]: $\hat{x}_{b,t,1}+A\hat{x}_{b,t,2}=B\hat{x}_{b,t,3},\qquad\text{with}\quad b=1,2\,,$ (11) where $A=Z_{T}Z^{-1}_{S}$, $B=am_{S}Z^{-1}_{S}$, and $\displaystyle\hat{x}_{1,t,1}$ $\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}S_{4}(x)Q(0)\right\rangle,$ $\displaystyle\hat{x}_{2,t,1}$ $\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}S_{4}(x)\gamma_{4}Q(0)\right\rangle,$ $\displaystyle\hat{x}_{1,t,2}$ $\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}T_{4}(x)Q(0)\right\rangle,$ $\displaystyle\hat{x}_{2,t,2}$ $\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}T_{4}(x)\gamma_{4}Q(0)\right\rangle,$ (12) $\displaystyle\hat{x}_{1,t,3}$ $\displaystyle\equiv\sum_{\vec{x}}\left\langle\chi(x)Q(0)\right\rangle,$ $\displaystyle\hat{x}_{2,t,3}$ $\displaystyle\equiv\sum_{\vec{x}}\left\langle\chi(x)\gamma_{4}Q(0)\right\rangle.$ In these equations the Dirac indices of $S_{4}(x)$, $T_{4}(x)$, $\chi(x)$ and of the insertion operator $Q(0)$ are not written, and sums over repeated (hidden) Dirac indices are implied. Also, $O(a)$ terms that vanish in the continuum limit are not written explicitly in these equations. Introducing a double index $i=(b,t)$, running over $2T$ values, where $T$ is the time extent of the lattice, and denoting $A_{1}=1,A_{2}=A,A_{3}=-B$, Eq. (11) can be written compactly as $\sum_{\alpha=1}^{3}A_{\alpha}\hat{x}_{i\alpha}=0\,.$ (13) In these equations the $\hat{x}_{i\alpha}=\langle x_{i\alpha}\rangle$ are the expectation values of random variables $x_{i\alpha}$, which themselves are considered to be the results of a finite Markov chain. We compute the estimators $x_{i\alpha}$ for the correlation functions $\hat{x}_{i\alpha}$ numerically using high-performance computing facilities. The Eqs. (13), including all time-slice distances $t$ from $t_{min}$ to $t_{max}$, are solved simultaneously for $A_{\alpha}$ by means of minimal chi-squared methods. Two methods, namely the so-called Local Method and Global Method, have been used in the past by our collaboration [4, 13]. These methods, however, do not properly take into account the correlations between the different quantities appearing in Eq. (13). For this purpose we have developed a new method based on a generalised least squares fit, the so-called GLS Method [14], based on the maximum likelihood.
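In outline, this amounts to minimising a correlated chi-square in the two unknown coefficients $A_{2}$ and $A_{3}$, with the objective and covariance specified in Eqs. (14)–(15) below. The following is a minimal numerical sketch of such a generalised least squares fit; the input layout `samples[cfg, i, alpha]` and the use of `numpy`/`scipy` are illustrative assumptions, not a description of our production code.

```python
import numpy as np
from scipy.optimize import minimize

def gls_fit(samples):
    """Minimal sketch of a generalised least squares fit for the
    Ward-identity coefficients; cf. Eqs. (14)-(15).

    samples: array of shape (n_cfg, n_i, 3) holding the estimators
    x_{i,alpha} on each configuration (hypothetical input layout).
    Returns (A2, A3), with A1 fixed to 1; recall A3 = -a m_S / Z_S.
    """
    n_cfg, n_i, _ = samples.shape
    xbar = samples.mean(axis=0)                   # <x_{i,alpha}>
    flat = samples.reshape(n_cfg, -1)
    cov = np.cov(flat, rowvar=False).reshape(n_i, 3, n_i, 3)

    def L_min(params):
        A = np.array([1.0, params[0], params[1]])
        v = xbar @ A                              # sum_alpha A_alpha <x_{i,alpha}>
        D = np.einsum('a,b,iajb->ij', A, A, cov)  # covariance of Eq. (15)
        return 0.5 * v @ np.linalg.solve(D, v)    # objective of Eq. (14)

    return minimize(L_min, x0=[0.0, 0.0], method='Nelder-Mead').x
```

In practice the statistical uncertainty on the resulting coefficients would be estimated by repeating such a fit on jackknife samples, as discussed below.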
For fixed $A_{\alpha}$ ($\alpha=1,2,3$) and given numerical data $x_{i\alpha}$, the probability distribution $P\sim\exp(-L)$ of the quantities $\hat{x}_{i\alpha}$, subject to the constraints (13), has its maximum at a point where $L=L_{min}$, with $L_{min}=\frac{1}{2}\sum_{i,\alpha,j,\beta}(A_{\alpha}x_{i\alpha})(D^{-1})_{ij}(A_{\beta}x_{j\beta})\,,$ (14) where $D_{ij}=\sum_{\alpha,\beta}A_{\alpha}A_{\beta}(\langle x_{i\alpha}x_{j\beta}\rangle-\langle x_{i\alpha}\rangle\langle x_{j\beta}\rangle).$ (15) Next, the desired coefficients $A_{\alpha}$ have to be found such that $L_{min}$ as a function of $A_{2}$ and $A_{3}$ is minimised. This cannot be solved analytically, and we find $A_{\alpha}$ numerically such that the global minimum of $L_{min}(A_{2},A_{3})$ is reached; for details see Ref. [15]. In particular, owing to $A_{3}=-am_{S}Z^{-1}_{S}$ this provides us with the subtracted gluino mass $m_{S}$ up to the renormalisation factor. To estimate the statistical uncertainties we employ the standard Jackknife procedure. ### 3.1 Discretisation effects All terms in the Ward identity (8), including the $O(a)$ term $\left\langle X_{S}(x)Q(y)\right\rangle$, are correlation functions of gauge invariant operators. In the corresponding Eqs. (11) they are correlation functions of operators localised on time slices or pairs of adjacent time slices at distance $t$. As for any gauge invariant correlation function of this type, they decay exponentially in $t$, with a decay rate given by the mass gap of the theory. For very small $t$ the contributions of higher masses will affect the impact of the $O(a)$ term on the Ward identities. Therefore we expect that the value of the obtained gluino mass will depend on the minimal time slice distance $t_{min}$. This effect should become negligible at sufficiently large $t_{min}$. On the other hand, if $t_{min}$ is chosen too large, noise in the data will dominate. The behaviour that can be observed in Fig. 1 is compatible with these expectations. Figure 1: The subtracted gluino mass $am_{S}Z^{-1}_{S}$ as a function of $t_{min}$ calculated with the GLS Method at $\beta=5.6$. At small values of $t_{min}$ the subtracted gluino mass is affected by contact terms and by $O(a)$ terms. Data from $t_{min}=2$ and $t_{min}=3$ are shown, but do not enter our final analysis. An adequate choice of $t_{min}$ is therefore important for the quality of the results. We cope with this in two ways. In order to avoid perturbing effects at too small $t_{min}$ and a poor signal- to-noise ratio at too large $t_{min}$, for each hopping parameter and inverse gauge coupling, the value of $t_{min}$ is selected by finding an optimal starting point where a plateau in the subtracted gluino mass begins. The results are presented in Tab. 1. 
$\beta=5.4$ | $\beta=5.4$ | $\beta=5.45$ | $\beta=5.5$ | $\beta=5.6$
---|---|---|---|---
$V=12^{3}\times 24$ | $V=16^{3}\times 32$ | $V=16^{3}\times 32$ | $V=16^{3}\times 32$ | $V=24^{3}\times 48$
$\kappa$ / $t_{min}$ | $\kappa$ / $t_{min}$ | $\kappa$ / $t_{min}$ | $\kappa$ / $t_{min}$ | $\kappa$ / $t_{min}$
0.1695 / 4 | 0.1692 / 4 | 0.1685 / 5 | 0.1667 / 5 | 0.1645 / 7
0.1700 / 4 | 0.1695 / 4 | 0.1687 / 5 | 0.1673 / 5 | 0.1650 / 7
0.1703 / 4 | 0.1697 / 4 | 0.1690 / 5 | 0.1678 / 5 | 0.1655 / 6
0.1705 / 4 | 0.1700 / 4 | 0.1692 / 5 | 0.1680 / 5 | 0.1660 / 7
- | 0.1703 / 4 | 0.1693 / 4 | 0.1683 / 5 | -
- | 0.1705 / 4 | - | - | -

Table 1: The values of $t_{min}$ for all available gauge ensembles, chosen such that a plateau is formed.

In the second approach, we consider that our simulations of the theory are done at different values of the lattice spacing $a$, which leads to different $O(a)$ terms in the Ward identities. A fixed value of $t_{min}$ in lattice units would impose a lower limit on the time-slice distances in physical units which sits at the cutoff scale and shrinks to zero in the continuum limit. Instead it is more appropriate to consider $t_{min}$ at constant physical distance for all gauge ensembles. This is done in the following way. At the coarsest lattice spacing, at inverse gauge coupling $\beta_{0}$, the value of $t_{min}$ is selected according to the plateau criterion explained above. For finer lattice spacings at inverse gauge couplings $\beta_{i}$ the corresponding $t_{min}$ are then obtained by scaling with a physical scale. In order to determine the physical scale we use the mass $m_{g\tilde{g}}$ of the gluino-glue particle and the Wilson flow parameter $w_{0}$. Correspondingly, $t_{min}$ is scaled according to $\displaystyle t_{min,{\beta_{i}}}$ $\displaystyle=t_{min,\beta_{0}}\frac{m_{g\tilde{g},\beta_{0}}}{m_{g\tilde{g},\beta_{i}}}\,,$ (16) $\displaystyle\text{or}\quad t_{min,{\beta_{i}}}$ $\displaystyle=t_{min,\beta_{0}}\frac{w_{0,\beta_{i}}}{w_{0,\beta_{0}}}\,,$ (17) where $\beta_{0}=5.4$, $\beta_{1}=5.45$, $\beta_{2}=5.5$, and $\beta_{3}=5.6$. The resulting $t_{min}$ is rounded to the nearest integer value. The values obtained by this method are collected in Tab. 2. In most cases they are equal or almost equal to those in Tab. 1.

$\beta$ | $t_{min}$ from $m_{g\tilde{g}}$ | $t_{min}$ from $w_{0}$
---|---|---
5.4 | 4 | 4
5.45 | 5 | 5
5.5 | 5 | 6
5.6 | 7 | 7

Table 2: The values of $t_{min}$ at fixed physical temporal distance from scaling with the gluino-glue mass $m_{g\tilde{g}}$ and with the Wilson flow parameter $w_{0}$.

### 3.2 Adjoint pion and remnant gluino mass

The chiral limit is defined by the vanishing of the subtracted gluino mass. Its measured values can therefore be employed for the tuning of the hopping parameter $\kappa$ to the chiral limit. On the other hand, we can also use the vanishing of the adjoint pion mass $m_{\text{a-}\pi}$ for the tuning [16]. The adjoint pion $\text{a-}\pi$ is an unphysical particle in the SYM theory, which can be defined in partially quenched chiral perturbation theory [17]. In the numerical simulations its correlation function can be computed as the connected piece of the correlation function of the $\text{a-}\eta^{\prime}$ particle. Similar to the Gell-Mann-Oakes-Renner relation of QCD [5], in the continuum limit there is a linear relation between the adjoint pion mass squared and the gluino mass: $m^{2}_{\text{a-}\pi}\propto m_{\tilde{g}}$.
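This linear relation underlies the chiral extrapolations shown in the next subsection. As a minimal numerical sketch (the function name and input arrays are illustrative assumptions), the critical hopping parameter at a given $\beta$ can be estimated by a linear fit of $(am_{\text{a-}\pi})^{2}$ against $1/(2\kappa)$:

```python
import numpy as np

def kappa_critical(kappa, ampi_sq):
    """Minimal sketch: fit (a m_pi)^2 linearly in 1/(2 kappa) and return
    the critical hopping parameter kappa_c at which the fit vanishes."""
    x = 1.0 / (2.0 * np.asarray(kappa))
    slope, intercept = np.polyfit(x, np.asarray(ampi_sq), 1)
    x_c = -intercept / slope        # zero crossing of the linear fit
    return 1.0 / (2.0 * x_c)
```

The remnant gluino mass discussed below is obtained analogously, from a linear fit of $am_{S}Z^{-1}_{S}$ against $(am_{\text{a-}\pi})^{2}$, evaluated at zero.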
The numerical results for the subtracted gluino mass from the Ward identities and the adjoint pion mass squared in lattice units are shown for $\beta=5.6$ in Fig. 2 together with their extrapolations towards the chiral limit.

(a) The subtracted gluino mass $am_{S}Z^{-1}_{S}$ and the adjoint pion mass squared $(am_{\text{a-}\pi})^{2}$ as a function of $1/(2\kappa)$, and the corresponding extrapolations towards the chiral limit ($\kappa_{c}$). (b) The subtracted gluino mass $am_{S}Z^{-1}_{S}$ as a function of the adjoint pion mass squared $(am_{\text{a-}\pi})^{2}$ in order to obtain the remnant gluino mass $\Delta(am_{S}Z^{-1}_{S})$.

Figure 2: Chiral limit and determination of the remnant gluino mass at $\beta=5.6$. All quantities are in lattice units.

In the continuum the subtracted gluino mass and the adjoint pion mass should vanish at the same point. On the lattice, however, this is not the case due to lattice artefacts. As an estimate for this discrepancy we determine the value of the subtracted gluino mass at vanishing adjoint pion mass. This quantity is called the remnant gluino mass $\Delta(am_{S}Z^{-1}_{S})$, and it is expected to vanish in the continuum limit. The values of the remnant gluino mass, obtained by taking an average of the values calculated using the procedures explained above, are presented in Tab. 3.

$\beta$ | 5.4 | 5.45 | 5.5 | 5.6
---|---|---|---|---
$\Delta(am_{S}Z^{-1}_{S})$ | 0.0334(48) | 0.019(12) | 0.0099(88) | 0.0103(33)

Table 3: The values of the remnant gluino mass $\Delta(am_{S}Z^{-1}_{S})$ obtained at four different values of the inverse gauge coupling.

### 3.3 Continuum limit

The remnant gluino mass is a lattice artefact and should vanish in the continuum limit $a\rightarrow 0$. It is therefore a quantity with which to check whether supersymmetry is recovered. Concerning the dependence of the remnant gluino mass on the lattice spacing, arguments based on partially quenched chiral perturbation theory suggest that the remnant gluino mass is of order $a^{2}$ at $m^{2}_{\text{a-}\pi}=0$ [13]. In order to investigate this relation, the remnant gluino mass has to be expressed in physical units. Our choice for the scale is the Wilson flow parameter $w_{0}$, which is defined through the gradient flow [10]. We use its values extrapolated to the chiral limit, $w_{0,\chi}$. Similarly, the lattice spacing is represented by $a/w_{0,\chi}$. Our numerical results for the remnant gluino mass as a function of the lattice spacing and its extrapolation towards the continuum limit are shown in Fig. 3. The data points in Fig. 3(a) show the results from separate chiral extrapolations for each lattice spacing and the corresponding extrapolation to the continuum limit. The extrapolation to the continuum and the error of this extrapolation are obtained by means of parametric bootstrap with linear fits. On the other hand, Fig. 3(b) is obtained by means of a simultaneous fit of the dependence on the hopping parameter and the lattice spacing [18].

(a) The remnant gluino mass from separate extrapolations to the chiral limit where $m^{2}_{\text{a-}\pi}$ is zero, and the extrapolation to the continuum limit. (b) The remnant gluino mass from a simultaneous chiral and continuum extrapolation. By construction, in this method the data points coincide with the error band.

Figure 3: The remnant gluino mass $\Delta{(w_{0}m_{S}Z^{-1}_{S})}$ in physical units $w_{0}$ as a function of the lattice spacing squared, and its linear extrapolation towards the continuum limit.
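The parametric bootstrap entering Fig. 3(a) can be sketched in a few lines; the array names, the Gaussian resampling model and the fixed seed are illustrative assumptions:

```python
import numpy as np

def continuum_limit(a2, y, dy, n_boot=10000, seed=1):
    """Minimal sketch of a parametric bootstrap for the continuum limit:
    fit the remnant gluino mass y linearly in a2 = (a / w0)^2, resampling
    the data from Gaussians of width dy. Returns the intercept at a2 = 0
    together with its bootstrap spread."""
    rng = np.random.default_rng(seed)
    intercepts = np.empty(n_boot)
    for i in range(n_boot):
        y_rs = rng.normal(y, dy)                  # parametric resample
        intercepts[i] = np.polyfit(a2, y_rs, 1)[1]
    return intercepts.mean(), intercepts.std()
```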
The remnant gluino mass in the continuum limit is compatible with zero within one standard deviation, confirming the preliminary results presented in Ref. [15] with only two data points. Lattice artefacts vanish in the continuum limit as expected, and supersymmetry is recovered in the chiral and continuum limits, in agreement with our findings from the mass spectrum [12].

## 4 Conclusion

In this paper we have presented numerical results of an analysis of SUSY Ward identities in $\mathcal{N}=1$ supersymmetric Yang-Mills theory on the lattice with gauge group SU(3). Contact terms and $O(a)$ lattice artefacts in the Ward identities have been controlled by suitable choices of time-slice distances. Ensembles of gauge configurations at four different values of the lattice spacing and various hopping parameters have been analysed, allowing us for the first time to perform an extrapolation to the continuum limit, where the lattice artefacts vanish. The remnant gluino mass has been extrapolated in two alternative ways, on the one hand by extrapolating to the chiral limit at each lattice spacing separately and then to the continuum limit, and on the other hand by means of a simultaneous extrapolation to the chiral and continuum limit. With both extrapolations the lattice artefacts in the subtracted gluino mass appear to scale to zero as $O(a^{2})$, in agreement with the theoretical expectations. Our findings support the validity of SUSY Ward identities and the restoration of supersymmetry in the continuum limit.

## Acknowledgments

The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputers JUQUEEN and JURECA at Jülich Supercomputing Centre (JSC) and SuperMUC at Leibniz Supercomputing Centre (LRZ). Further computing time has been provided on the compute cluster PALMA of the University of Münster. This work is supported by the Deutsche Forschungsgemeinschaft (DFG) through the Research Training Group “GRK 2149: Strong and Weak Interactions - from Hadrons to Dark Matter”. G. Bergner acknowledges support from the Deutsche Forschungsgemeinschaft (DFG) Grant No. BE 5942/2-1. S. Ali acknowledges financial support from the Deutsche Akademische Austauschdienst (DAAD).

## References

* [1] J. Wess and J. Bagger, Supersymmetry and Supergravity, Princeton University Press, 1992.
* [2] G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. 267 (1996) 195, [arXiv:hep-ph/9506380].
* [3] J. D. Lykken, [arXiv:1005.1676 [hep-ph]].
* [4] G. Bergner, P. Giudice, I. Montvay, G. Münster and S. Piemonte, JHEP 1603 (2016) 080, [arXiv:1512.07014 [hep-lat]].
* [5] G. Veneziano and S. Yankielowicz, Phys. Lett. B 113 (1982) 231.
* [6] G. R. Farrar, G. Gabadadze and M. Schwetz, Phys. Rev. D 58 (1998) 015009, [arXiv:hep-th/9711166].
* [7] G. Curci and G. Veneziano, Nucl. Phys. B 292 (1987) 555.
* [8] S. Musberg, G. Münster and S. Piemonte, JHEP 1305 (2013) 143, [arXiv:1304.5741 [hep-lat]].
* [9] S. Ali, G. Bergner, H. Gerber, P. Giudice, S. Kuberski, I. Montvay, G. Münster and S. Piemonte, EPJ Web Conf. 175 (2018) 08016, [arXiv:1710.07464 [hep-lat]].
* [10] S. Ali, G. Bergner, H. Gerber, P. Giudice, I. Montvay, G. Münster, S. Piemonte and P. Scior, JHEP 1803 (2018) 113, [arXiv:1801.08062 [hep-lat]].
* [11] S. Ali, G. Bergner, H. Gerber, S. Kuberski, I. Montvay, G. Münster, S. Piemonte and P. Scior, JHEP 1904 (2019) 150, [arXiv:1901.02416 [hep-lat]].
* [12] S. Ali, G. Bergner, H. Gerber, I. Montvay, G. Münster, S. Piemonte and P. Scior, Phys. Rev. Lett. 122 (2019) 221601, [arXiv:1902.11127 [hep-lat]].
* [13] F. Farchioni, A. Feo, T. Galla, C. Gebert, R. Kirchner, I. Montvay, G. Münster and A. Vladikas, Eur. Phys. J. C 23 (2002) 719, [arXiv:hep-lat/0111008].
* [14] S. Ali, PhD thesis, University of Münster, June 2019.
* [15] S. Ali, G. Bergner, H. Gerber, I. Montvay, G. Münster, S. Piemonte and P. Scior, Eur. Phys. J. C 78 (2018) 404, [arXiv:1802.07067 [hep-lat]].
* [16] K. Demmouche, F. Farchioni, A. Ferling, I. Montvay, G. Münster, E. E. Scholz and J. Wuilloud, Eur. Phys. J. C 69 (2010) 147, [arXiv:1003.2073 [hep-lat]].
* [17] G. Münster and H. Stüwe, JHEP 1405 (2014) 034, [arXiv:1402.6616 [hep-th]].
* [18] H. Gerber, PhD thesis, University of Münster, May 2019.
# The inverse theorem for the nonlinear Roth configuration: an exposition

Sean Prendiville
Department of Mathematics and Statistics, Lancaster University, UK
<EMAIL_ADDRESS>

###### Abstract

We give an exposition of the inverse theorem for the cut-norm associated to the nonlinear Roth configuration, established by Peluse and the author in [6].

###### Contents

1. Introduction
2. An outline of our argument
3. PET induction
4. An inverse theorem for the arithmetic box norm
5. Quantitative concatenation
6. Degree lowering
7. Proof of the cut norm inverse theorem

A. Basic theory of the Gowers norms

## 1\. Introduction

Peluse and the author recently obtained an effective bound on the density of sets of integers lacking the configuration $x,\ x+y,\ x+y^{2}\qquad(y\neq 0).$ (1.1) We call this pattern the _nonlinear Roth configuration_, after Bourgain and Chang [1].

###### Theorem 1.1 (Peluse and Prendiville [6]). There exists an absolute constant $c>0$ such that if $A\subset\left\\{1,2,\dots,N\right\\}$ lacks the configuration (1.1), then $|A|\ll N(\log\log N)^{-c}.$

We have since removed a logarithm from this bound.

###### Theorem 1.2 (Peluse and Prendiville [7]). There exists an absolute constant $c>0$ such that if $A\subset\left\\{1,2,\dots,N\right\\}$ lacks the configuration (1.1), then $|A|\ll N(\log N)^{-c}.$

The main innovation behind both of these results is [6, Theorem 7.1], an inverse theorem for the counting operator associated to this configuration. It is the purpose of this note to give an exposition of this inverse theorem. The approach is essentially the same as that in [6]. We hope that having two distinct accounts is useful for those interested in utilising these ideas.

###### Definition 1.3 (Counting operator). For positive integers $q\leq N$ write $M:=\left\lfloor\sqrt{N/q}\right\rfloor.$ (1.2) Given this, define the _counting operator_ on the functions $f_{i}:\mathbb{Z}\to\mathbb{C}$ by $\Lambda_{q,N}(f_{0},f_{1},f_{2}):=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+y)f_{2}(x+qy^{2}).$ (1.3) When the $f_{i}$ all equal $f$ we simply write $\Lambda_{q,N}(f)$.

###### Definition 1.4 (Local function). We call a function $\phi:\mathbb{Z}\to\mathbb{C}$ a _local function of resolution $M$ and modulus $q$_ if there exists a partition of $\mathbb{R}$ into intervals of length $M$ such that $\phi$ is constant on the intersection of every such interval with every congruence class mod $q$.

###### Definition 1.5 (Cut norm). Define the _cut norm_ of $f:\mathbb{Z}\to\mathbb{C}$ by $\left\|f\right\|_{q,N}:=\sup\\{|\Lambda_{q,N}(f,g_{1},g_{2})|,\ |\Lambda_{q,N}(g_{1},f,g_{2})|,\ |\Lambda_{q,N}(g_{1},g_{2},f)|\\},$ (1.4) where the supremum is taken over all 1-bounded functions $g_{i}:[N]\to\mathbb{C}$.

We note that, in spite of our nomenclature, this is not a norm but a seminorm. One could remedy this by summing over $y\geq 0$ in the counting operator (1.3). This seminorm is useful in [7]. However, it is too restrictive for the approach developed in [6], where we (implicitly) only work with the following quantities: $\left\|f\right\|^{\sharp}_{q,N}:=\sup\\{|\Lambda_{q,N}(g_{0},g_{1},f)|:|g_{i}|\leq 1\ \text{ and }\ \mathrm{supp}(g_{i})\subset[N]\\}$ (1.5) and $\left\|f\right\|^{\flat}_{q,N}:=\sup\\{|\Lambda_{q,N}(f,g_{1},g_{2})|,\ |\Lambda_{q,N}(g_{1},f,g_{2})|\ :\ |g_{i}|\leq 1\ \text{ and }\ \mathrm{supp}(g_{i})\subset[N]\\}.$ (1.6) Here then is a re-formulation and slight generalisation of [6, Theorem 7.1].

###### Theorem 1.6 (Partial cut norm inverse theorem).
Let $q\leq N$ be positive integers, $\delta>0$, and $f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in $[N]$. Suppose that $\left\|f\right\|^{\flat}_{q,N}\geq\delta.$ Then either $N\ll(q/\delta)^{O(1)}$ or there exists a 1-bounded local function $\phi$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$, modulus $qq^{\prime}$ for some $q^{\prime}\ll\delta^{-O(1)}$, and such that $\sum_{x\in[N]}f(x)\phi(x)\gg\delta^{2^{66}}N.$

This exposition is organised as follows. In §2, we give a more detailed outline of the proof of Theorem 1.6. In §§3–5 we develop an effective approach to a (special case of a) so-called _concatenation_ theorem of Tao and Ziegler [10]. This allows us to show that if our counting operator is large, then the function weighting the nonlinear term must have large Gowers uniformity norm. The drawback is that the degree of the resulting Gowers norm is large (in our approach it is the $U^{5}$-norm). In §6 we give a _degree-lowering_ procedure, which utilises properties specific to our configuration to show that one may replace the $U^{5}$-norm with the $U^{1}$-norm. In §7 we combine the results of the previous sections in order to prove Theorem 1.6.

### 1.1. Notation

#### 1.1.1. Standard conventions

We use $\mathbb{N}$ to denote the positive integers. For a real $X\geq 1$, write $[X]=\\{1,2,\ldots,\left\lfloor X\right\rfloor\\}$. A complex-valued function is _1-bounded_ if the modulus of the function does not exceed 1. We use counting measure on $\mathbb{Z}$, so that for $f,g:\mathbb{Z}\to\mathbb{C}$ we have $\left\langle f,g\right\rangle:=\sum_{x}f(x)\overline{g(x)}\qquad\text{and}\qquad\left\|f\right\|_{L^{p}}:=\biggl{(}\sum_{x}|f(x)|^{p}\biggr{)}^{\frac{1}{p}}.$ Any sum of the form $\sum_{x}$ is to be interpreted as a sum over $\mathbb{Z}$. We use Haar probability measure on $\mathbb{T}:=\mathbb{R}/\mathbb{Z}$, so that for measurable $F:\mathbb{T}\to\mathbb{C}$ we have $\left\|F\right\|_{L^{p}}:=\biggl{(}\int_{\mathbb{T}}|F(\alpha)|^{p}\mathrm{d}\alpha\biggr{)}^{\frac{1}{p}}=\biggl{(}\int_{0}^{1}|F(\alpha)|^{p}\mathrm{d}\alpha\biggr{)}^{\frac{1}{p}}.$ For $\alpha\in\mathbb{T}$ we write $\left\|\alpha\right\|$ for the distance to the nearest integer. For a finite set $S$ and function $f:S\to\mathbb{C}$, denote the average of $f$ over $S$ by $\mathbb{E}_{s\in S}f(s):=\frac{1}{|S|}\sum_{s\in S}f(s).$ Given functions $f,g:G\to\mathbb{C}$ on an additive group with measure $\mu_{G}$ we define their convolution by $f*g(x):=\int_{G}f(x-y)g(y)\mathrm{d}\mu_{G},$ (1.7) when this makes sense. We define the Fourier transform of $f:\mathbb{Z}\to\mathbb{C}$ by $\hat{f}(\alpha):=\sum_{x}f(x)e(\alpha x)\qquad(\alpha\in\mathbb{T}),$ (1.8) again, when this makes sense. Here $e(\alpha)$ stands for $e^{2\pi i\alpha}$.
The difference function of $f:\mathbb{Z}\to\mathbb{C}$ is the function $\Delta_{h}f:\mathbb{Z}\to\mathbb{C}$ given by $\Delta_{h}f(x)=f(x)\overline{f(x+h)}.$ Iterating gives $\Delta_{h_{1},\dots,h_{s}}f:=\Delta_{h_{1}}\dots\Delta_{h_{s}}f.$ This allows us to define the Gowers $U^{s}$-norm $\left\|f\right\|_{U^{s}}:=\left(\sum_{x,h_{1},\dots,h_{s}}\Delta_{h_{1},\dots,h_{s}}f(x)\right)^{1/2^{s}}.$ (1.9) If $\|\cdot\|$ is a seminorm on an inner product space, recall that its dual seminorm $\|\cdot\|^{*}$ is defined by $\|f\|^{*}:=\sup_{\|g\|\leq 1}|\langle f,g\rangle|.$ Hence $\left|\left\langle f,g\right\rangle\right|\leq\left\|f\right\|^{*}\left\|g\right\|.$ (1.10) For a function $f$ and positive-valued function $g$, write $f\ll g$ or $f=O(g)$ if there exists a constant $C$ such that $|f(x)|\leq Cg(x)$ for all $x$. We write $f=\Omega(g)$ if $f\gg g$. We sometimes opt for a more explicit approach, using $C$ to denote a large absolute constant, and $c$ to denote a small positive absolute constant. The values of $C$ and $c$ may change from line to line.

#### 1.1.2. Local conventions

Up to normalisation, all of the above are well-used in the literature. Next we list notation specific to our paper. We have tried to minimise this in order to aid the casual reader. For a real parameter $H\geq 1$, we use $\mu_{H}:\mathbb{Z}\to[0,1]$ to represent the following normalised Fejér kernel $\mu_{H}(h):=\frac{1}{\left\lfloor H\right\rfloor}\left(1-\frac{|h|}{\left\lfloor H\right\rfloor}\right)_{+}=\frac{\\#\left\\{(h_{1},h_{2})\in[H]^{2}:h_{1}-h_{2}=h\right\\}}{\left\lfloor H\right\rfloor^{2}}.$ (1.11) For a multidimensional vector $h\in\mathbb{Z}^{d}$ we write $\mu_{H}(h):=\mu_{H}(h_{1})\dotsm\mu_{H}(h_{d}).$ (1.12) We observe that this is a probability measure on $\mathbb{Z}^{d}$ with support in the box $(-H,H)^{d}$.

## 2\. An outline of our argument

In this section we describe the ideas behind Theorem 1.6. In the hope of making the ideas clearer, we make the simplification that $q=1$ in our counting operator (1.3). Hence, for finitely supported functions $f_{0},f_{1},f_{2}:\mathbb{Z}\to\mathbb{C}$, write $\Lambda(f_{0},f_{1},f_{2}):=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[N^{1/2}]}f_{0}(x)f_{1}(x+y)f_{2}(x+y^{2}).$ (2.1) For this operator, Theorem 1.6 can be deduced from the following.

###### Lemma 2.1. Let $f_{0},f_{1},f_{2}:\mathbb{Z}\to\mathbb{C}$ be $1$-bounded functions supported in the interval $[N]$ and let $\delta>0$. Suppose that $|\Lambda(f_{0},f_{1},f_{2})|\geq\delta.$ Then either $N\ll\delta^{-O(1)}$ or there exist positive integers $q\ll\delta^{-O(1)}$ and $N^{\prime}\gg\delta^{O(1)}N^{1/2}$ such that $\sum_{x}\left|\sum_{y\in[N^{\prime}]}f_{1}(x+qy)\right|\gg\delta^{O(1)}NN^{\prime}.$ (2.2)

Using the notation (1.9), notice that the left-hand side of (2.2) is equal to $\sum_{x}\left\|f_{1}\right\|_{U^{1}(x+q\cdot[N^{\prime}])}.$

### 2.1. Quantitative concatenation

To prove Lemma 2.1, we first prove that our counting operator (2.1) is controlled by the $U^{5}$-norm of $f_{2}$. The purpose of this subsection is to sketch how we do this with polynomial bounds.
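For orientation, the model operator (2.1) is a plain finite average, and for small $N$ it can be evaluated directly by brute force; this is a useful check when experimenting with the estimates that follow. A minimal sketch, in which the array layout and zero-padding are implementation conveniences rather than part of the mathematics:

```python
import numpy as np

def counting_operator(f0, f1, f2, N):
    """Minimal sketch: evaluate the model operator (2.1) by brute force.
    f0, f1, f2 are length-N arrays giving the function values on [N];
    the zero-padding below is only an implementation convenience."""
    M = int(np.sqrt(N))
    pad = N + M * M + 1
    g0, g1, g2 = (np.zeros(pad, dtype=complex) for _ in range(3))
    g0[1:N + 1], g1[1:N + 1], g2[1:N + 1] = f0, f1, f2
    total = 0j
    for x in range(1, N + 1):
        for y in range(1, M + 1):
            total += g0[x] * g1[x + y] * g2[x + y * y]
    return total / (N * M)
```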
By applying the Cauchy–Schwarz and van der Corput inequalities a number of times, we show in §3 that, when $f_{0},f_{1},f_{2}:\mathbb{Z}\to\mathbb{C}$ are $1$-bounded functions supported in the interval $[N]$, largeness of the counting operator (2.1) implies largeness of the sum $\sum_{a,b\in[N^{1/2}]}\sum_{h_{1},h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{ah_{1},bh_{2},(a+b)h_{3}}f_{2}(x).$ (2.3) This deduction is made following the PET induction scheme of Bergelson and Leibman [2]. The gain in working with the counting operator (2.3) over (2.1) is that univariate polynomials such as $y^{2}$, whose image constitutes a sparse set, have been replaced by bilinear forms such as $ah_{1}$, whose image is much denser. In §§4–5, we show that largeness of (2.3) implies largeness of $\|f_{2}\|_{U^{5}}$. If there were no dependence between the coefficients of the $h_{i}$ in (2.3), then we could in fact bound (2.3) in terms of $\|f_{2}\|_{U^{3}}$. Since the argument is informative, we illustrate why this is the case for the sum $\sum_{a,b,c\in[N^{1/2}]}\sum_{h_{1},h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{ah_{1},bh_{2},ch_{3}}f_{2}(x).$ (2.4) The following fact is key, the formal version of which is Lemma 5.3.

###### Claim 2.2. If $\displaystyle\sum_{a,h\in[N^{1/2}]}\sum_{x}\Delta_{ah}f(x)$ is large then so is $\displaystyle\sum_{k\in(-N,N)}\sum_{x}\Delta_{k}f(x)$.

###### Sketch proof. Apply the Cauchy–Schwarz inequality to double the $a$ and $h$ variables, yielding a bound in terms of $\sum_{a,a^{\prime}\in[N^{1/2}]}\sum_{h,h^{\prime}\in[N^{1/2}]}\sum_{x}\Delta_{ah-a^{\prime}h^{\prime}}f(x).$ (2.5) For a random choice of $a,a^{\prime}\in[N^{1/2}]$, the progression $a\cdot[N^{1/2}]-a^{\prime}\cdot[N^{1/2}]$ covers a large portion of the interval $(-N,N)$ relatively smoothly. One can make this intuition rigorous and thus deduce largeness of the sum $\sum_{k\in(-N,N)}\sum_{x}\Delta_{k}f(x).$∎

Applying Claim 2.2 three times allows us to replace each of $ah_{1}$, $bh_{2}$ and $ch_{3}$ in (2.4) with $k_{1},k_{2},k_{3}\in(-N,N)$, yielding largeness of $\left\|f_{2}\right\|_{U^{3}}$. Since the PET induction scheme outputs (2.3), and not (2.4), the problem remains of how to handle the dependency between the differencing parameters in (2.3). If we were not concerned with quantitative bounds, we could apply a ‘concatenation’ theorem of Tao and Ziegler [10, Theorem 1.24] to obtain largeness of the $U^{9}$-norm of $f_{2}$. However, the qualitative nature of this argument means that it cannot be used to obtain bounds in the nonlinear Roth theorem. In its place we prove Theorem 5.6, which is a special case of [10, Theorem 1.24], using a very different argument that gives polynomial bounds. We spend the remainder of this subsection sketching the argument. We begin by viewing (2.3) as the average $\sum_{a,h_{1}\in[N^{1/2}]}\left\|\Delta_{ah_{1}}f_{2}\right\|_{a},$ (2.6) where $\|f\|_{a}^{4}:=\sum_{b\in[N^{1/2}]}\sum_{h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{bh_{2},(a+b)h_{3}}f(x).$ (2.7) One can view this as an average of 2-dimensional Gowers box norms where, for fixed $b$, the inner sum corresponds to a box norm in the ‘directions’ $b$ and $a+b$. Note that if we could bound the quantity $\|\Delta_{ah_{1}}f_{2}\|_{a}$ in terms of the $U^{4}$-norm of $\Delta_{ah_{1}}f_{2}$ for many pairs $(a,h_{1})$, then by Claim 2.2 we deduce largeness of the $U^{5}$-norm of $f_{2}$. We show that, on average, one can indeed control $\|\cdot\|_{a}$ in terms of $\|\cdot\|_{U^{4}}$, with polynomial bounds.
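The smooth-covering intuition behind Claim 2.2 is easy to probe numerically: for generic coprime $a,a^{\prime}$ of size about $N^{1/2}$, no value of $ah-a^{\prime}h^{\prime}$ is represented much more often than average. A minimal sketch, with purely illustrative parameters:

```python
from math import gcd

def representation_counts(a, a_prime, M):
    """Minimal sketch: tabulate r(k) = #{(h, h') in [M]^2 : a h - a' h' = k},
    probing how smoothly a.[M] - a'.[M] covers the interval (-N, N)."""
    counts = {}
    for h in range(1, M + 1):
        for h_prime in range(1, M + 1):
            k = a * h - a_prime * h_prime
            counts[k] = counts.get(k, 0) + 1
    return counts

# Illustrative parameters: for coprime a, a' of size about M, the M^2
# representations spread over roughly 2 a M values of k without sharp spikes.
counts = representation_counts(31, 37, 40)
print(gcd(31, 37), max(counts.values()), len(counts))
```

Lemma 5.3 is the rigorous expression of this smoothness.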
The following can be extracted from the proof of (the more general) Theorem 5.6.

###### Lemma 2.3. For each $a\in[N^{1/2}]$ let $f_{a}:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function supported in the interval $[N]$. Suppose that $\mathbb{E}_{a\in[N^{1/2}]}\|f_{a}\|_{a}^{4}\geq\delta\left\|1_{[N]}\right\|_{a}^{4}.$ Then $\mathbb{E}_{a\in[N^{1/2}]}\|f_{a}\|_{U^{4}}^{16}\gg\delta^{O(1)}\left\|1_{[N]}\right\|_{U^{4}}^{16}.$

To finish this subsection, we briefly discuss the proof of this key lemma. For most choices of $a,b\in[N^{1/2}]$, the ‘directions’ $b$ and $a+b$ of the box norm $\sum_{h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{bh_{2},(a+b)h_{3}}f_{a}(x)$ (2.8) are close to ‘independent’, in the sense that at least one of the directions $b$ and $a+b$ is large and together they have small greatest common divisor. The proof of Lemma 2.3 thus begins by viewing $\|\cdot\|_{a}$ as an average of box norms $\|f\|_{\square(X,Y)}^{4}:=\sum_{x_{1},x_{2}\in X,y_{1},y_{2}\in Y}f(x_{1},y_{1})\overline{f(x_{1},y_{2})f(x_{2},y_{1})}f(x_{2},y_{2}).$ (2.9) It is easy to show that largeness of $\|f\|_{\square(X,Y)}$ implies that $f$ correlates with a function of the form $(x,y)\mapsto l(x)r(y)$. We show, analogously, that provided $b$ and $a+b$ are not too small and have greatest common divisor not too large, then largeness of the arithmetic box norm (2.8) implies that $f_{a}$ correlates with a product $g_{b}h_{a+b}$ of 1-bounded functions, where $g_{b}$ is $b$-periodic and $h_{a+b}$ is almost periodic under shifts by integer multiples of $a+b$. As a consequence, for most $a\in[N^{1/2}]$, largeness of $\|f_{a}\|_{a}$ implies largeness of $\sum_{b\in[N^{1/2}]}\sum_{x}f_{a}(x)g_{b}(x)h_{a+b}(x).$ (2.10) In fact, an application of Cauchy–Schwarz allows us to give an explicit description of $h_{a+b}$ in terms of $f_{a}$, namely we may take it to be of the form $h_{a+b}(x)=\mathbb{E}_{k\in[N^{1/2}]}f_{a}(x+(a+b)k)g_{b}(x+(a+b)k).$ (2.11) This presentation makes apparent the almost periodicity of $h_{a+b}$.

###### Claim 2.4. Largeness of (2.10) implies that $\mathbb{E}_{b\in[N^{1/2}]}\left\|h_{a+b}\right\|_{U^{3}}$ is large.

Let us first show why Claim 2.4 in turn implies that $f_{a}$ has large $U^{4}$-norm, completing our sketch proof of Lemma 2.3. The expression (2.11) and the triangle inequality for Gowers norms together imply that largeness of $\mathbb{E}_{b\in[N^{1/2}]}\left\|h_{a+b}\right\|_{U^{3}}$ implies largeness of $\mathbb{E}_{b\in[N^{1/2}]}\left\|f_{a}g_{b}\right\|_{U^{3}}$. Utilising the $b$-periodicity of $g_{b}$ we have $\left\|f_{a}g_{b}\right\|_{U^{3}}=\mathbb{E}_{k\in[N^{1/2}]}\left\|f_{a}(\cdot)g_{b}(\cdot+bk)\right\|_{U^{3}}.$ (2.12) The product $f_{a}(\cdot)g_{b}(\cdot+bk)$ resembles a difference function in the direction $b$. Indeed the Gowers–Cauchy–Schwarz inequality (see [9, Exercise 1.3.19]) shows that if (2.12) is large (on average over $b\in[N^{1/2}]$) then so is $\mathbb{E}_{b,k\in[N^{1/2}]}\left\|\Delta_{bk}f_{a}\right\|_{U^{3}}$. Largeness of $\left\|f_{a}\right\|_{U^{4}}$ then follows from Claim 2.2. Finally we sketch the proof of Claim 2.4.
The Cauchy–Schwarz inequality allows us to remove the weight $f_{a}(x)$ from (2.10) and deduce largeness of $\sum_{x}\sum_{b,b^{\prime}\in[N^{1/2}]}\overline{g_{b}(x)h_{a+b}(x)}g_{b^{\prime}}(x)h_{a+b^{\prime}}(x).$ Using the periodicity properties of $g_{b}$, $g_{b^{\prime}}$ and $h_{a+b}$, this is approximately equal to $\sum_{x}\sum_{\begin{subarray}{c}b,b^{\prime}\in[N^{1/2}]\\\ k_{1},k_{2},k_{3}\in[N^{1/2}]\end{subarray}}\overline{g_{b}(x-bk_{1})h_{a+b}(x-(a+b)k_{2})}g_{b^{\prime}}(x-b^{\prime}k_{3})h_{a+b^{\prime}}(x).$ Changing variables in $x$, we obtain largeness of the sum $\sum_{x}\sum_{\begin{subarray}{c}b,b^{\prime}\in[N^{1/2}]\\\ k_{1},k_{2},k_{3}\in[N^{1/2}]\end{subarray}}\overline{g_{b}(x+(a+b)k_{2}+b^{\prime}k_{3})h_{a+b}(x+bk_{1}+b^{\prime}k_{3})}\\\ g_{b^{\prime}}(x+bk_{1}+(a+b)k_{2})h_{a+b^{\prime}}(x+bk_{1}+(a+b)k_{2}+b^{\prime}k_{3}).$ The point here is that all but the last function have arguments depending on at most two of the bilinear forms $bk_{1}$, $(a+b)k_{2}$ and $b^{\prime}k_{3}$. This enables us to employ the Gowers–Cauchy–Schwarz inequality (in the form of Lemma A.4) to deduce largeness of a sum similar to $\sum_{x}\sum_{\begin{subarray}{c}b,b^{\prime}\in[N^{1/2}]\\\ k_{1},k_{2},k_{3}\in[N^{1/2}]\end{subarray}}\Delta_{bk_{1},\,(a+b)k_{2},\,b^{\prime}k_{3}}h_{a+b^{\prime}}(x).$ The utility of this expression is that the directions of the differencing parameters are all ‘independent’ of the direction of periodicity of $h_{a+b^{\prime}}$. Indeed the approximate $(a+b^{\prime})$-periodicity of $h_{a+b^{\prime}}$ means that one can replace $\Delta_{y}h_{a+b^{\prime}}$ with $\mathbb{E}_{k}\Delta_{y+(a+b^{\prime})k}h_{a+b^{\prime}}$ at the cost of a small error. We thereby obtain largeness of $\sum_{x}\sum_{b,b^{\prime}\in[N^{1/2}]}\sum_{\begin{subarray}{c}k_{1},k_{2},k_{3}\in[N^{1/2}]\\\ k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime}\in[N^{1/2}]\end{subarray}}\Delta_{bk_{1}+(a+b^{\prime})k_{1}^{\prime},\,(a+b)k_{2}+(a+b^{\prime})k_{2}^{\prime},\,b^{\prime}k_{3}+(a+b^{\prime})k_{3}^{\prime}}h_{a+b^{\prime}}(x).$ (2.13) For a random triple $(a,b,b^{\prime})\in[N^{1/2}]^{3}$ the greatest common divisors of the pairs $(b,a+b^{\prime})$, $(a+b,a+b^{\prime})$ and $(b^{\prime},a+b^{\prime})$ are all small, and these are the pairs appearing in the differencing parameters of (2.13). The argument used to treat (2.5) may therefore be employed to replace (2.13) with $\sum_{x}\sum_{b^{\prime}\in[N^{1/2}]}\sum_{k_{1},k_{2},k_{3}\in[N]}\Delta_{k_{1},k_{2},k_{3}}h_{a+b^{\prime}}(x),$ and thereby yield Claim 2.4.

### 2.2. Degree lowering

After we have shown that $\Lambda(f_{0},f_{1},f_{2})$ is controlled by the $U^{5}$-norm of $f_{2}$, we carry out a ‘degree lowering’ argument. This technique originated in the work [5] in finite fields. The basic idea is that, under certain conditions, one can combine $U^{s}$-control with understanding of two-term progressions to deduce $U^{s-1}$-control. Repeating this gives a sequence of implications $U^{5}\text{-control}\implies U^{4}\text{-control}\implies U^{3}\text{-control}\implies U^{2}\text{-control}\implies U^{1}\text{-control}.$ Despite the appearance of the $U^{5}$-norm, $U^{4}$-norm, and $U^{3}$-norm, the degree lowering argument, both in [5] and here, does not require the $U^{s}$-inverse theorem for any $s\geq 3$. Instead it relies on Fourier analysis in the place of these inverse theorems. Adapting the degree lowering argument of [5] to the integer setting requires several significant modifications.
The first modification is that the $U^{s}$-control described above is control in terms of the $U^{s}$-norm of the dual function $F(x):=\mathbb{E}_{y\in[N^{1/2}]}f_{0}(x-y^{2})f_{1}(x+y-y^{2}).$ (2.14) Thus, to begin the degree lowering argument, we must show that largeness of $\Lambda(f_{0},f_{1},f_{2})$ implies largeness of $\|F\|_{U^{5}}$. To do this, we use a simple Hahn–Banach decomposition as described in [3, Proposition 3.6]; for details see §7. We conclude this section by sketching an instance of degree-lowering: how $U^{3}$-control of the dual (2.14) implies $U^{2}$-control, starting from the assumption that $\|F\|_{U^{3}}^{8}\geq\delta\left\|1_{[N]}\right\|_{U^{3}}^{8}.$ Using the fact that $\|F\|_{U^{3}}^{8}=\sum_{h}\|\Delta_{h}F\|_{U^{2}}^{4}$ and applying the $U^{2}$-inverse theorem, we deduce the existence of a function $\phi:\mathbb{Z}\to\mathbb{T}$ such that, for $\gg\delta N$ choices of differencing parameter $h$, we have $\left|\sum_{x\in[N]}\Delta_{h}F(x)e(\phi(h)x)\right|\gg\delta N.$ (2.15) Note that if, in the above inequality, we could replace the function $\phi(h)$ by a constant $\beta\in\mathbb{T}$ not depending on $h$, then we could easily deduce largeness of $\|F\|_{U^{2}}$. Indeed, writing $g(h)$ for the phase of the sum inside absolute values, this would give $\sum_{x,h}\overline{g(h)}\overline{F(x+h)}F(x)e(\beta x)\gg\delta^{O(1)}N^{2},$ and the usual argument (one can either use orthogonality and extraction of a large Fourier coefficient, as in the proof of Lemma A.1, or use two applications of Cauchy–Schwarz) showing $U^{2}$-control of the equation $x+y=z$ implies that $\|F\|_{U^{2}}^{4}\gg\delta^{O(1)}\left\|1_{[N]}\right\|_{U^{2}}^{4}$. It thus remains to show that such a $\beta$ exists. Expanding the definition of the difference and dual functions in (2.15), and using the Cauchy–Schwarz inequality (as is done in greater generality in the proof of Lemma 6.3), one can show that there exists $h^{\prime}$ such that for many $h$ satisfying (2.15) we have $\left|\sum_{x}\sum_{y\in[N^{1/2}]}\Delta_{h-h^{\prime}}f_{0}(x)\Delta_{h-h^{\prime}}f_{1}(x+y)e([\phi(h)-\phi(h^{\prime})][x+y^{2}])\right|\gg\delta^{O(1)}N^{3/2}.$ Further application of Cauchy–Schwarz allows us to remove the difference functions from the above inequality and deduce largeness of the exponential sum $\sum_{z\in[N^{1/2}]}\left|\sum_{y\in[N^{1/2}]}e(2\left[\phi(h)-\phi(h^{\prime})\right]yz)\right|.$ Summing the inner geometric progression and using a Vinogradov-type lemma then shows that $\phi(h)-\phi(h^{\prime})$ is major arc. There are very few major arcs, so the pigeonhole principle gives the existence of $\beta_{0}\in\mathbb{T}$ such that $\phi(h)-\phi(h^{\prime})$ is very close to $\beta_{0}$ for many $h\in(-N,N)$ that also satisfy (2.15). We may therefore take $\beta=\beta_{0}+\phi(h^{\prime})$ in the argument following (2.15).

## 3\. PET induction

We prove Theorem 1.6 over the course of §§3–7. We begin in §§3–5 by showing how our counting operator $\Lambda_{q,N}(f_{0},f_{1},f_{2})$, as defined in (1.3), is controlled by the $U^{5}$-norm of $f_{2}$. This argument starts with the PET induction scheme of Bergelson–Leibman [2], which in some sense ‘linearises’ a polynomial progression, replacing univariate polynomials such as $y^{2}$ with bilinear forms $ah$. The outcome of this procedure is Lemma 3.3. For the following, we recall our definition (1.11) of $\mu_{H}$.

###### Lemma 3.1 (van der Corput inequality). Let $f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded and $M,H\geq 1$.
Then we have the estimate $\biggl{|}\mathbb{E}_{y\in[M]}f(y)\biggr{|}^{2}\leq\frac{M+H}{M}\sum_{h}\mu_{H}(h)\mathbb{E}_{y\in[M]}\Delta_{h}f(y).$

###### Proof. This is standard; see for instance [8, Lemma 3.1]. ∎

###### Lemma 3.2 (Difference functions control linear configurations). Let $f_{i}:\mathbb{Z}\to\mathbb{C}$ be $1$-bounded functions with support in an interval $I_{i}$ of size $|I_{i}|=N$. Then for any $a,b\in\mathbb{Z}$ and $1\leq H\leq M$ we have $\biggl{|}\mathbb{E}_{x\in I_{0}}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\biggr{|}^{8}\\\ \ll\sum_{h}\mu_{H}(h)\mathbb{E}_{x\in I_{3}}\Delta_{ah_{1},bh_{2},(a+b)h_{3}}f_{3}(x).$ (3.1)

###### Proof. Applying Cauchy–Schwarz in the $x$ variable gives $\biggl{|}\mathbb{E}_{x\in I_{0}}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\biggr{|}^{2}\\\ \leq\frac{1}{N}\sum_{x}\bigg{|}\mathbb{E}_{y\in[M]}f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\bigg{|}^{2}.$ Bounding the inner sum using van der Corput’s inequality (Lemma 3.1) and making the change of variables $x\mapsto x-ay$ (valid since $x$ is ranging over $\mathbb{Z}$), the latter is at most $2\sum_{h_{1}}\mu_{H}(h_{1})\mathbb{E}_{x\in I_{1}}\mathbb{E}_{y\in[M]}\Delta_{ah_{1}}f_{1}(x)\Delta_{bh_{1}}f_{2}(x+(b-a)y)\Delta_{(a+b)h_{1}}f_{3}(x+by).$ Here we may restrict $x$ to $I_{1}$ on observing that the support of $\Delta_{ah_{1}}f_{1}$ is contained in the support of $f_{1}$. Making use of the fact that $\mu_{H}$ is a probability measure, we repeat the procedure of applying Cauchy–Schwarz, then van der Corput, then a change of variables, to deduce that $\biggl{|}\mathbb{E}_{x\in I_{0}}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\biggr{|}^{4}\\\ \leq 8\sum_{h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\mathbb{E}_{x\in I_{2}}\mathbb{E}_{y\in[M]}\Delta_{bh_{1},(b-a)h_{2}}f_{2}(x)\Delta_{(a+b)h_{1},bh_{2}}f_{3}(x+ay).$ A final iteration of the same procedure then yields (3.1). ∎

Before embarking on the following, we remind the reader of our convention (1.2) regarding $M$.

###### Lemma 3.3 (Linearisation). Let $f_{i}:\mathbb{Z}\to\mathbb{C}$ be $1$-bounded functions, each with support in the interval $[N]$. Then for any $1\leq H\leq M$ we have $\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|^{32}\ll\sum_{a,b,h}\mu_{M}(a)\mu_{M}(b)\mu_{H}(h)\mathbb{E}_{x\in[N]}\Delta_{2q(a+b)h_{1},\,2qbh_{2},\,2qah_{3}}f_{2}(x).$ (3.2)

###### Proof. We repeat the procedure given in the proof of Lemma 3.2, applying Cauchy–Schwarz, followed by van der Corput’s inequality and a change of variables. A first application gives $\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|^{2}\leq\\\ 2\sum_{a}\mu_{M}(a)\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}\Delta_{a}f_{1}(x)f_{2}\bigl{(}x+qy^{2}-y\bigr{)}\overline{f_{2}\bigl{(}x+q(y+a)^{2}-y\bigr{)}}.$ A second application then gives $\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|^{4}\ll\sum_{a,b}\mu_{M}(a)\mu_{M}(b)\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}f_{2}(x)\overline{f_{2}\bigl{(}x+2qay+qa^{2}\bigr{)}}\\\ \overline{f_{2}\bigl{(}x+2qby+qb^{2}-b\bigr{)}}f_{2}\bigl{(}x+2q(a+b)y+q(a+b)^{2}-b\bigr{)}.$ Applying Lemma 3.2 to bound the inner sum over $x$ and $y$, we obtain (3.2) after a final change of variables. ∎
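Since every bound in this section rests on Lemma 3.1, it is worth noting that the inequality is easy to sanity-check numerically for random 1-bounded inputs. A minimal sketch, in which the array layout is an implementation convenience:

```python
import numpy as np

def vdc_check(M, H, trials=100, seed=0):
    """Minimal sketch: numerically test van der Corput's inequality
    (Lemma 3.1) on random 1-bounded f, here unimodular samples."""
    rng = np.random.default_rng(seed)
    Hf = int(H)
    hs = np.arange(-Hf + 1, Hf)
    mu = (1 - np.abs(hs) / Hf) / Hf                  # Fejer kernel mu_H
    for _ in range(trials):
        f = np.exp(2j * np.pi * rng.random(M + 2 * Hf))
        y = np.arange(Hf, Hf + M)                    # y ranges over a copy of [M]
        lhs = abs(f[y].mean()) ** 2
        rhs = sum(m * (f[y] * np.conj(f[y + h])).mean().real
                  for m, h in zip(mu, hs))
        assert lhs <= (M + H) / M * rhs + 1e-9

vdc_check(M=500, H=20)
```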
## 4\. An inverse theorem for the arithmetic box norm

The objective in this section is to characterise those 1-bounded functions $f:\mathbb{Z}\to\mathbb{C}$ with support in $[N]$ for which the following quantity is large $\sum_{h,x}\mu_{H}(h)\Delta_{ah_{1},bh_{2}}f(x).$ (4.1) One can think of this as an arithmetic analogue of the two-dimensional ‘box norm’ (2.9). In our eventual application we are able to ensure that $a$ and $b$ are a generic pair of integers from the interval $[N^{1/2}]$. In particular, at least one of them has size proportional to $N^{1/2}$ and their highest common factor is small. One may think of this as a proxy for linear independence. We begin by characterising largeness of (4.1) when the directions are coprime.

###### Lemma 4.1 (Inverse theorem for the arithmetic box norm). Let $a,b$ be positive integers with $\gcd(a,b)=1$. Suppose that $f:\mathbb{Z}\to\mathbb{C}$ is $1$-bounded with support in the interval $[N]$ and satisfies $\sum_{h,x}\mu_{H}(h)\Delta_{ah_{1},bh_{2}}f(x)\geq\delta N.$ (4.2) Then there exist 1-bounded functions $g,h:\mathbb{Z}\to\mathbb{C}$ such that

* $g$ is $a$-periodic, in the sense that $g(x+a)=g(x)$ for all $x$;
* $h$ is approximately $b$-periodic, in the sense that for any $\varepsilon>0$ we have $\\#\left\\{x\in[N]:h(x+by)\neq h(x)\text{ for some }|y|\leq\varepsilon N/b\right\\}\leq\left(1+\tfrac{2\varepsilon N}{b}\right)\left(1+\tfrac{N}{a}\right);$

and furthermore $\biggl{|}\sum_{x}f(x)g(x)h(x)\biggr{|}\geq\delta\left\lfloor H\right\rfloor^{2}-2\left(\tfrac{H}{a}+\tfrac{Hb}{N}\right)\left\lfloor H\right\rfloor^{2}.$ (4.3)

###### Remark. In parsing the above inequalities, it may be helpful to keep in mind that in our application $a$, $b$ and $H$ are of order $\sqrt{N}$, with $H$ considerably smaller than $a$, in which case the lower bound in (4.3) becomes $\Omega(\delta H^{2})$.

###### Proof. The majority of our proof is concerned with manipulating (4.2) until we can interpret it as a genuine box norm (2.9), and thereby apply the box norm inverse theorem. The essential observation is that, since $\gcd(a,b)=1$, every integer $x$ can be uniquely represented in the form $x=ay+bz\qquad(y\in\mathbb{Z},\ z\in[a]).$ We note that if $x\in[N]$ then the constraint on $z$ forces $y$ to lie in the range $-b<y<N/a$.
Defining $F:\mathbb{Z}\times\mathbb{Z}\to\mathbb{C}$ by $F(y,z):=f(ay+bz)$, the left-hand side of (4.2) becomes $\sum_{y,y^{\prime}\in\mathbb{Z}}\sum_{\begin{subarray}{c}z\in[a]\\\ z^{\prime}\in\mathbb{Z}\end{subarray}}F(y,z)\overline{F(y^{\prime},z)}\overline{F(y,z^{\prime})}F(y^{\prime},z^{\prime})\mu_{H}(y^{\prime}-y)\mu_{H}(z^{\prime}-z).$ If $z^{\prime}$ and $z$ contribute to the above sum then $z^{\prime}\in z+(-H,H)\subset(-H+1,a+H).$ Hence we can restrict the range of summation of $z^{\prime}$ to $[a]$, at the cost of perturbing the sum by at most $2\left\lfloor H\right\rfloor(\frac{N}{a}+b).$ It follows that $\biggl{|}\sum_{y,y^{\prime}}\sum_{z,z^{\prime}\in[a]}F(y,z)\overline{F(y^{\prime},z)}\overline{F(y,z^{\prime})}F(y^{\prime},z^{\prime})\mu_{H}(y^{\prime}-y)\mu_{H}(z^{\prime}-z)\biggr{|}\\\ \geq\delta N-2\left\lfloor H\right\rfloor\left(\tfrac{N}{a}+b\right).$ We remove the Fejér kernels by Fourier expansion: $\sum_{\begin{subarray}{c}y,y^{\prime}\\\ z,z^{\prime}\in[a]\end{subarray}}F(y,z)\overline{F(y^{\prime},z)F(y,z^{\prime})}F(y^{\prime},z^{\prime})\mu_{H}(y^{\prime}-y)\mu_{H}(z^{\prime}-z)=\\\ \int_{\mathbb{T}^{2}}\sum_{\begin{subarray}{c}y,y^{\prime}\\\ z,z^{\prime}\in[a]\end{subarray}}F(y,z)\overline{F(y^{\prime},z)F(y,z^{\prime})}F(y^{\prime},z^{\prime})\hat{\mu}_{H}(\alpha)\hat{\mu}_{H}(\beta)e(\alpha(y^{\prime}-y)+\beta(z^{\prime}-z))\mathrm{d}\alpha\mathrm{d}\beta\\\ \leq\left(\int_{\mathbb{T}}|\hat{\mu}_{H}(\alpha)|\mathrm{d}\alpha\right)^{2}\sup_{\alpha,\beta\in\mathbb{T}}\biggl{|}\sum_{\begin{subarray}{c}y,y^{\prime}\\\ z,z^{\prime}\in[a]\end{subarray}}F(y,z)F_{2}(y^{\prime},z)F_{3}(y,z^{\prime})F_{4}(y^{\prime},z^{\prime})\biggr{|},$ where $F_{2}(y^{\prime},z):=\overline{F(y^{\prime},z)}e(-\beta z)$, $F_{3}(y,z^{\prime}):=\overline{F(y,z^{\prime})}e(-\alpha y)$, and $F_{4}(y^{\prime},z^{\prime}):=F(y^{\prime},z^{\prime})e(\alpha y^{\prime}+\beta z^{\prime})$. We observe that $\hat{\mu}_{H}(\alpha)=|\hat{1}_{[H]}(\alpha)|^{2}/\left\lfloor H\right\rfloor^{2}$, which implies that $\int_{\mathbb{T}}|\hat{\mu}_{H}(\alpha)|\mathrm{d}\alpha=\left\lfloor H\right\rfloor^{-1}$. Therefore $\biggl{|}\sum_{\begin{subarray}{c}y,y^{\prime}\\\ z,z^{\prime}\in[a]\end{subarray}}F(y,z)F_{2}(y^{\prime},z)F_{3}(y,z^{\prime})F_{4}(y^{\prime},z^{\prime})\biggr{|}\geq\delta\left\lfloor H\right\rfloor^{2}N-2\left\lfloor H\right\rfloor^{3}\left(\tfrac{N}{a}+b\right),$ (4.4) for $1$-bounded functions $F_{i}:\mathbb{Z}\times[a]\to\mathbb{C}$ of the form $F_{i}(y,z)=f(ay+bz)e(\alpha_{1}y+\alpha_{2}z)$. Since $f$ is supported on $[N]$, there are at most $N$ pairs $(y^{\prime},z^{\prime})\in\mathbb{Z}\times[a]$ for which $F(y^{\prime},z^{\prime})\neq 0$. Thus, by pigeonholing in $y^{\prime}$ and $z^{\prime}$ in (4.4) and setting $L(y):=F_{3}(y,z^{\prime})$ and $R(z):=F_{2}(y^{\prime},z)F_{4}(y^{\prime},z^{\prime})$, we get that $\biggl{|}\sum_{y}\sum_{z\in[a]}F(y,z)L(y)R(z)\biggr{|}\geq\delta\left\lfloor H\right\rfloor^{2}-2\left\lfloor H\right\rfloor^{3}\left(\tfrac{1}{a}+\tfrac{b}{N}\right).$ For each $x\in\mathbb{Z}$, define $l(x)\in\mathbb{Z}$ and $r(x)\in[a]$ by $x=al(x)+br(x)$, and set $g(x):=R\circ r(x)$ and $h(x):=L\circ l(x)$. Then it remains to check the invariance properties of $g$ and $h$. To see that $g(x)=g(x+ay)$ for all $x,y\in\mathbb{Z}$, just note that $r(x)=r(x+ay)$ for every $x,y\in\mathbb{Z}$. Finally we establish that, for most $x\in[N]$, we have $h(x)=h(x+bz)$ for all $|z|\leq\varepsilon N/b$. First note that $l(x)=l(x+bz)$ whenever $\varepsilon N/b<r(x)\leq a-\varepsilon N/b$.
Hence for this to fail, $x$ must lie in one of at most $1+2\varepsilon N/b$ congruence classes modulo $a$. The number of such $x$ lying in the interval $[N]$ is at most $\left(1+\frac{2\varepsilon N}{b}\right)\left(1+\frac{N}{a}\right).$ ∎

The lemma also yields a result in the situation in which $\gcd(a,b)>1$. In proving this we take the opportunity to smooth out the $b$-invariance of $h$ slightly, whilst also giving an explicit description of $h$ in terms of $f$. More concretely, we replace $h$ with a projection of $fg$ onto cosets of $b\cdot\mathbb{Z}$.

###### Lemma 4.2. There exists an absolute constant $c>0$ such that on assuming $1\leq H\leq c\delta^{3}N^{1/2}$ and $1\leq K\leq c\delta^{2}H^{2}N^{-1/2}$ the following holds. Let $a,b\in[N^{1/2}]$ with $\gcd(a,b)\leq\delta^{-1}$ and $a,b\geq\delta N^{1/2}$. Suppose that $f:\mathbb{Z}\to\mathbb{C}$ is $1$-bounded, supported on the interval $[N]$, and satisfies $\biggl{|}\sum_{h,x}\mu_{H}(h)\Delta_{ah_{1},bh_{2}}f(x)\biggr{|}\geq\delta N.$ Then there exists a 1-bounded $a$-periodic function $g$ such that $\sum_{x}f(x)g(x)\sum_{k}\mu_{K}(k)\overline{f(x+bk)g(x+bk)}\gg\delta^{2}H^{4}/N.$ (4.5)

###### Proof. Set $q:=\gcd(a,b)\leq\delta^{-1}$. For each $u\in[q]$, define a $1$-bounded function $f_{u}:\mathbb{Z}\to\mathbb{C}$ by $f_{u}(x):=f(u+qx)$, and let $I_{u}:=\left\\{x:u+qx\in[N]\right\\}$ denote the interval on which $f_{u}$ is supported. By the pigeon-hole principle, for some $u$ we have $\sum_{x,h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\Delta_{\frac{a}{q}h_{1},\frac{b}{q}h_{2}}f_{u}(x)\geq\delta|I_{u}|.$ Note that $\gcd(a/q,b/q)=1$, so by the previous lemma, there exist 1-bounded functions $g_{u},h_{u}:\mathbb{Z}\to\mathbb{C}$ such that $\biggl{|}\sum_{x}f_{u}(x)g_{u}(x)h_{u}(x)\biggr{|}\geq\delta\left\lfloor H\right\rfloor^{2}-2\left(\tfrac{Hq}{a}+\tfrac{Hb}{q|I_{u}|}\right)\left\lfloor H\right\rfloor^{2}\gg\delta H^{2}.$ Furthermore, $g_{u}$ is $(a/q)$-periodic and $\\#\left\\{x\in I_{u}:h_{u}(x)\neq h_{u}(x+yb/q)\text{ for some }|y|\leq\varepsilon|I_{u}|q/b\right\\}\\\ \leq\left(1+\tfrac{2q\varepsilon|I_{u}|}{b}\right)\left(1+\tfrac{q|I_{u}|}{a}\right)\ll\tfrac{N}{a}+\tfrac{\varepsilon N^{2}}{ab}.$ Defining $g_{u^{\prime}}$ and $h_{u^{\prime}}$ to be identically zero when $u^{\prime}\neq u$, we set $g(u^{\prime}+qx):=g_{u^{\prime}}(x)$ and $h(u^{\prime}+qx):=h_{u^{\prime}}(x)$. One can then check that $g$ is $a$-invariant, that $\biggl{|}\sum_{x}f(x)g(x)h(x)\biggr{|}\gg\delta H^{2},$ and that $\\#\left\\{x\in[N]:h(x)\neq h(x+by)\text{ for some }|y|\leq\varepsilon N/b\right\\}\ll\tfrac{N}{a}+\tfrac{\varepsilon N^{2}}{ab}.$ We may use the latter property to show that, provided $K\geq 1$, we have $\biggl{|}\sum_{x}f(x)g(x)h(x)-\sum_{x}h(x)\mathbb{E}_{y\in[K]}g(x+by)f(x+by)\biggr{|}\ll\tfrac{NK}{a}.$ Provided that $K\leq c\delta^{2}H^{2}N^{-1/2}$ we deduce that $\biggl{|}\sum_{x}h(x)\mathbb{E}_{k\in[K]}g(x+bk)f(x+bk)\biggr{|}\gg\delta H^{2}.$ One can check that, as a function of $x$, the inner expectation is 1-bounded with support in $[-2N,2N]$. Applying the Cauchy–Schwarz inequality and changing variables then gives (4.5). ∎

Finally we observe that a function of the form $h(x):=\sum_{k}\mu_{K}(k)f(x+bk)$ (4.6) has nice $b$-periodicity properties.

###### Lemma 4.3. If $h$ is defined as in (4.6) for some 1-bounded $f$, then $h$ is $O(K^{-1})$-Lipschitz along $b\cdot\mathbb{Z}$, in that for any $x,y\in\mathbb{Z}$ we have $h(x+by)=h(x)+O(|y|/K)$.

###### Proof.
Recalling the definition (1.11), note that $\mu_{K}$ is $(2/\left\lfloor K\right\rfloor)$-Lipschitz, in that $|\mu_{K}(k+y)-\mu_{K}(k)|\leq 2|y|/\left\lfloor K\right\rfloor$ for all $k,y\in\mathbb{Z}$. Hence, for $|y|\leq K$, a change of variables gives $|h(x+by)-h(x)|\leq\sum_{k}|\mu_{K}(k-y)-\mu_{K}(k)|\ll\frac{|y|}{K}\sum_{|k|<2K}1.$ For $|y|>K$ the claimed bound is trivial, since $h$ is 1-bounded. ∎

## 5\. Quantitative concatenation

The endpoint of this section is to show how our counting operator (1.3) is controlled by the $U^{5}$-norm. We begin with four technical lemmas. The first says that convolving Fejér kernels along progressions of coprime common difference covers a substantial portion of an interval in a somewhat regular manner, a fact that can be given the following Fourier-analytic interpretation.

###### Lemma 5.1. Let $K,L\geq 1$ and let $a,b$ be integers satisfying $a\geq\delta L$, $b\geq\delta K$ and $\gcd(a,b)\leq\delta^{-1}$. Then $\int_{\mathbb{T}}\bigl{|}\widehat{\mu}_{K}(a\beta)\bigr{|}\bigl{|}\widehat{\mu}_{L}(b\beta)\bigr{|}\mathrm{d}\beta\ll\frac{\delta^{-4}}{\left\lfloor K\right\rfloor\left\lfloor L\right\rfloor}.$

###### Proof. Expanding Fourier transforms, one can check that $\int_{\mathbb{T}}\bigl{|}\widehat{\mu}_{K}(a\beta)\bigr{|}\bigl{|}\widehat{\mu}_{L}(b\beta)\bigr{|}\mathrm{d}\beta\\\ =\left\lfloor K\right\rfloor^{-2}\left\lfloor L\right\rfloor^{-2}\\#\biggl{\\{}(x,y)\in[K]^{2}\times[L]^{2}:a(x_{1}-x_{2})=b(y_{1}-y_{2})\biggr{\\}}.$ Writing $d:=\gcd(a,b)$, the number of solutions to the equation is at most $\left\lfloor K\right\rfloor\left\lfloor L\right\rfloor\left(\tfrac{\left\lfloor K\right\rfloor}{b/d}+1\right)\left(\tfrac{\left\lfloor L\right\rfloor}{a/d}+1\right).$ ∎

Our next lemma allows us to discard pairs of integers $a,b$ which are not sufficiently coprime. We exploit this repeatedly.

###### Lemma 5.2. For fixed integers $0\leq a_{1},a_{2}\leq M$, the number of pairs $(b,c)$ of integers $0\leq b,c\leq M$ such that $\gcd(a_{1}+b,a_{2}+c)>\delta^{-1}$ is $\ll\delta M^{2}$.

###### Proof. Notice that if $d=\gcd(a_{1}+b,a_{2}+c)$ then $d\leq 2M$. Hence $\displaystyle\sum_{\begin{subarray}{c}0\leq b,c\leq M\\\ \gcd(a_{1}+b,a_{2}+c)>\delta^{-1}\end{subarray}}1\leq\sum_{\delta^{-1}<d\leq 2M}\ \biggl{(}\ \sum_{0\leq m\leq 2M,\ d\mid m}1\biggr{)}^{2}$ $\displaystyle\leq\sum_{\delta^{-1}<d\leq 2M}\left(\frac{2M}{d}+1\right)^{2}$ $\displaystyle\ll M^{2}\sum_{d>\delta^{-1}}\frac{1}{d^{2}}\ll\delta M^{2}.$ ∎

The following lemma says that, as $a$ and $h$ range over $[N^{1/2}]$, the difference function $\Delta_{ah}f$ behaves like $\Delta_{k}f$ with $k\in[N]$, at least on average.

###### Lemma 5.3. Let $f:\mathbb{Z}\to\mathbb{C}$ be a 1-bounded function with support in $[N]$. Suppose that $\delta N^{1/2}\leq H\leq N^{1/2}$ and $\mathbb{E}_{a\in[N^{1/2}]}\sum_{h}\mu_{H}(h)\left\|\Delta_{ah}f\right\|_{U^{s}}^{2^{s}}\geq\delta\left\|1_{[N]}\right\|_{U^{s}}^{2^{s}}.$ Then $\left\|f\right\|_{U^{s+1}}^{2^{s+1}}\gg\delta^{12}\left\|1_{[N]}\right\|_{U^{s+1}}^{2^{s+1}}.$

###### Proof.
Expanding the definition of the $U^{s}$-norm, we have $\mathbb{E}_{a\in[N^{1/2}]}\sum_{h}\mu_{H}(h)\left\|\Delta_{ah}f\right\|_{U^{s}}^{2^{s}}\\\ =\sum_{h_{1},\dots,h_{s},x}\overline{\Delta_{h_{1},\dots,h_{s}}f(x)}\mathbb{E}_{a\in[N^{1/2}]}\sum_{h}\mu_{H}(h)\Delta_{h_{1},\dots,h_{s}}f(x+ah).$ Employing the Cauchy–Schwarz inequality to double the $a$ and $h$ variables gives $\mathbb{E}_{a,a^{\prime}\in[N^{1/2}]}\sum_{h_{i}}\sum_{x}\sum_{h,h^{\prime}}\mu_{H}(h)\mu_{H}(h^{\prime})\Delta_{h_{1},\dots,h_{s},ah-a^{\prime}h^{\prime}}f(x)\gg\delta^{2}N^{s+1}.$ By Lemma 5.2 and the pigeon-hole principle, we deduce the existence of $a,a^{\prime}\gg\delta^{2}N^{1/2}$ with $\gcd(a,a^{\prime})\ll\delta^{-2}$ such that $\sum_{h_{i}}\sum_{x}\sum_{h,h^{\prime}}\mu_{H}(h)\mu_{H}(h^{\prime})\Delta_{h_{1},\dots,h_{s},ah-a^{\prime}h^{\prime}}f(x)\gg\delta^{2}N^{s+1}.$ By Fourier inversion and extraction of a large Fourier coefficient, there exists $\alpha\in\mathbb{T}$ such that the left-hand side above is at most $\int_{\mathbb{T}}\left|\widehat{\mu}_{H}(a\beta)\right|\left|\widehat{\mu}_{H}(a^{\prime}\beta)\right|\mathrm{d}\beta\biggl{|}\sum_{h_{i}}\sum_{x}\Delta_{h_{1},\dots,h_{s},h_{s+1}}f(x)e(\alpha h_{s+1})\biggr{|}.$ The result follows on employing Lemma 5.1 and Lemma A.3. ∎ We now prove a similar lemma, but with $\Delta_{ah}f$ replaced by $fg_{a}$ where $g_{a}$ is $a$-periodic. The moral is that these are similar quantities (on average). ###### Lemma 5.4. Let $f,g_{a}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions such that $g_{a}$ is $a$-periodic and $\mathrm{supp}(f)\subset[N]$. Suppose that $\mathbb{E}_{a\in[N^{1/2}]}\left\|fg_{a}\right\|_{U^{s}}^{2^{s}}\geq\delta\left\|1_{[N]}\right\|_{U^{s}}^{2^{s}}.$ Then $\left\|f\right\|_{U^{s+1}}^{2^{s+1}}\gg\delta^{24}\left\|1_{[N]}\right\|_{U^{s+1}}^{2^{s+1}}.$ ###### Proof. Fix $a\in[N^{1/2}]$. By the periodicity of $g_{a}$ and a change of variables, we have $\sum_{h_{i}}\sum_{x}\Delta_{h_{1},\dots,h_{s}}g_{a}(x)\Delta_{h_{1},\dots,h_{s}}f(x)=\sum_{h_{i}}\sum_{x}\Delta_{h_{1},\dots,h_{s}}g_{a}(x)\mathbb{E}_{y\in[N^{1/2}]}\Delta_{h_{1},\dots,h_{s}}f(x+ay).$ Notice that the sum over $x$ is non-zero only if $|x|,|h_{i}|<N$, hence by Cauchy–Schwarz and a change of variables $\displaystyle\biggl{(}\mathbb{E}_{a\in[N^{1/2}]}\left\|fg_{a}\right\|_{U^{s}}^{2^{s}}\biggr{)}^{2}$ $\displaystyle\ll N^{s+1}\mathbb{E}_{a\in[N^{1/2}]}\sum_{h_{i}}\sum_{x}\sum_{y}\mu_{N^{1/2}}(y)\Delta_{h_{1},\dots,h_{s},ay}f(x)$ $\displaystyle=N^{s+1}\mathbb{E}_{a\in[N^{1/2}]}\sum_{y}\mu_{N^{1/2}}(y)\left\|\Delta_{ay}f\right\|_{U^{s}}^{2^{s}}.$ The result follows on employing Lemma 5.3. ∎ We are now ready to give the technical heart of this section. The (somewhat lengthy) assumptions come from our eventual application of Lemma 4.2. ###### Lemma 5.5. Fix $a\in\mathbb{N}$ and let $\delta N^{1/2}\leq K\leq N^{1/2}$. For each $b\in[N^{1/2}]$ let $f,g_{b},h_{b}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions such that $\mathrm{supp}(f),\mathrm{supp}(h_{b})\subset[N]$ and where $g_{b}$ is $b$-periodic. Set $\tilde{h}_{b}(x):=\sum_{k}\mu_{K}(k)h_{b}(x+(a+b)k)$ and suppose that $\sum_{\begin{subarray}{c}\delta\sqrt{N}\leq b\leq\sqrt{N}\\\ \gcd(a,b)\leq\delta^{-1}\end{subarray}}\sum_{x}f(x)g_{b}(x)\tilde{h}_{b}(x)\geq\delta N^{3/2}.$ Then $\mathbb{E}_{b\in[N^{1/2}]}\big{\|}h_{b}\big{\|}_{U^{3}}^{8}\gg\delta^{208}\left\|1_{[N]}\right\|_{U^{3}}^{8}.$ ###### Proof. 
Recall that $\tilde{h}_{b}(x)=\sum_{k}\mu_{K}(k)h_{b}(x+(a+b)k).$ We apply Cauchy–Schwarz to remove the weight $f(x)$ and double the $b$ variable, yielding $\sum_{\begin{subarray}{c}\delta\sqrt{N}\leq b,b^{\prime}\leq\sqrt{N}\\\ \gcd(a,b),\gcd(a,b^{\prime})\leq\delta^{-1}\end{subarray}}\sum_{x}g_{b}(x)\tilde{h}_{b}(x)\overline{g_{b^{\prime}}(x)\tilde{h}_{b^{\prime}}(x)}\geq\delta^{2}N^{2}.$ Employing Lemma 5.2, we may discard those $b,{b^{\prime}}$ for which one of $\gcd(b^{\prime},a+{b})$ or $\gcd(a+b^{\prime},a+{b})$ is greater than $C\delta^{-2}$. On combining this with the popularity principle, we deduce the existence of $\mathcal{B}\subset[\delta N^{1/2},N^{1/2}]$ of size $|\mathcal{B}|\gg\delta^{2}N^{1/2}$ such that for each $b\in\mathcal{B}$ there exists $b^{\prime}\in[N^{1/2}]$ with all of $\gcd(b,a+{b})$, $\gcd({b^{\prime}},a+{b})$, $\gcd(a+b^{\prime},a+{b})$ at most $O(\delta^{-2})$ and satisfying $\sum_{x}g_{b}(x)\overline{\tilde{h}_{b^{\prime}}(x)g_{b^{\prime}}(x)}\tilde{h}_{b}(x)\gg\delta^{2}N.$ (5.1) Expanding the definition of $\tilde{h}_{b^{\prime}}$, using the invariance of $g_{b}$ and changing variables gives $\sum_{x}\mathbb{E}_{k_{1},k_{3}\in[K]}\sum_{k_{2}}\mu_{K}(k_{2})g_{b}(x+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\overline{h_{b^{\prime}}(x+bk_{1}+{b^{\prime}}k_{3})}\\\ \overline{g_{b^{\prime}}(x+bk_{1}+(a+b^{\prime})k_{2})}\ \tilde{h}_{b}(x+bk_{1}+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\gg\delta^{2}N.$ Since $h_{b^{\prime}}$ is supported on $[N]$ and $b,{b^{\prime}},K\leq N^{1/2}$, there are at most $O(N)$ values of $x$ which contribute to the above sum. Applying Hölder’s inequality then gives $\sum_{x}\biggl{(}\mathbb{E}_{k_{1},k_{3}\in[K]}\sum_{k_{2}}\mu_{K}(k_{2})g_{b}(x+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\overline{h_{b^{\prime}}(x+bk_{1}+{b^{\prime}}k_{3})}\\\ \overline{g_{b^{\prime}}(x+bk_{1}+(a+b^{\prime})k_{2})}\ \tilde{h}_{b}(x+bk_{1}+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\biggr{)}^{8}\gg\delta^{16}N.$ The sum inside the 8th power corresponds to an integral with respect to three probability measures on $\mathbb{Z}$, with integrand amenable to Lemma A.4. Combining this with a change of variables gives $\sum_{x}\sum_{k_{1},k_{2},k_{3}}\mu_{K}(k_{1})\nu_{K}(k_{2})\mu_{K}(k_{3})\Delta_{bk_{1},(a+b^{\prime})k_{2},{b^{\prime}}k_{3}}\ \tilde{h}_{b}(x)\gg\delta^{16}N,$ where we set $\nu_{K}(k):=\sum_{k_{1}-k_{2}=k}\mu_{K}(k_{1})\mu_{K}(k_{2}).$ By Lemma 4.3, each $\tilde{h}_{b}$ is $O(K^{-1})$-Lipschitz along $(a+b)\cdot\mathbb{Z}$. 
Hence, if $l_{i}\in[L]$, a telescoping identity shows that $|\Delta_{h_{1}+(a+{b})l_{1},h_{2}+(a+{b})l_{2},h_{3}+(a+{b})l_{3}}\tilde{h}_{b}(x)-\Delta_{h_{1},h_{2},h_{3}}\tilde{h}_{b}(x)|\ll L/K.$ Taking $L:=c\delta^{16}K$ we obtain $\sum_{x}\sum_{k_{1},k_{2},k_{3}}\mu_{K}(k_{1})\nu_{K}(k_{2})\mu_{K}(k_{3})\mathbb{E}_{l_{1},l_{2},l_{3}\in[L]}\\\ \Delta_{bk_{1}+(a+{b})l_{1},\,(a+b^{\prime})k_{2}+(a+{b})l_{2},\,{b^{\prime}}k_{3}+(a+{b})l_{3}}\ \tilde{h}_{b}(x)\gg\delta^{16}N.$ We may replace the uniform measure on the $l_{i}$ by Fejér kernels at the cost of three applications of Cauchy–Schwarz; this gives $\sum_{x}\sum_{\begin{subarray}{c}k_{1},k_{2},k_{3}\\\ l_{1},l_{2},l_{3}\end{subarray}}\mu_{K}(k_{1})\nu_{K}(k_{2})\mu_{K}(k_{3})\mu_{L}(l_{1})\mu_{L}(l_{2})\mu_{L}(l_{3})\\\ \Delta_{bk_{1}+(a+{b})l_{1},\,(a+b^{\prime})k_{2}+(a+{b})l_{2},\,{b^{\prime}}k_{3}+(a+{b})l_{3}}\ \tilde{h}_{b}(x)\gg\delta^{128}N.$ Write $\lambda_{1}(h):=\sum_{bk+(a+{b})l=h}\mu_{K}(k)\mu_{L}(l),\qquad\lambda_{2}(h):=\sum_{(a+b^{\prime})k+(a+{b})l=h}\nu_{K}(k)\mu_{L}(l),\qquad\lambda_{3}(h):=\sum_{{b^{\prime}}k+(a+{b})l=h}\mu_{K}(k)\mu_{L}(l).$ Then $\sum_{x}\sum_{h_{1},h_{2},h_{3}}\lambda_{1}(h_{1})\lambda_{2}(h_{2})\lambda_{3}(h_{3})\\\ \Delta_{h_{1},h_{2},h_{3}}\ \tilde{h}_{b}(x)\gg\delta^{128}N.$ By Fourier inversion and extraction of a large Fourier coefficient, there exist $\alpha_{i}\in\mathbb{T}$ such that $\biggl{|}\sum_{x}\sum_{h_{1},h_{2},h_{3}}\Delta_{h_{1},h_{2},h_{3}}\ \tilde{h}_{b}(x)e(\underline{\alpha}\cdot\underline{h})\biggr{|}\prod_{i=1}^{3}\int_{\mathbb{T}}\bigl{|}\widehat{\lambda}_{i}(\beta)\bigr{|}\mathrm{d}\beta\gg\delta^{128}N.$ By our choice of $b$, $b^{\prime}$ (see the paragraph preceding (5.1)), together with Lemma 5.1, for each $i$ we have $\int_{\mathbb{T}}\bigl{|}\widehat{\lambda}_{i}(\alpha)\bigr{|}\mathrm{d}\alpha\ll\frac{\delta^{-8}}{KL}\ll\frac{\delta^{-26}}{N},$ (5.2) the latter following from the fact that $L=c\delta^{16}K$ and $K\geq\delta N^{1/2}$. On combining this with Lemma A.3 we obtain $\big{\|}\tilde{h}_{b}\big{\|}_{U^{3}}^{8}\gg\delta^{206}N^{4}.$ Since $\tilde{h}_{b}$ is an average of translates of $h_{b}$, we may apply the triangle inequality for the $U^{3}$-norm, together with the fact that Gowers norms are translation invariant, and conclude that $\left\|h_{b}\right\|_{U^{3}}^{8}\gg\delta^{206}N^{4}$. Summing over $b\in\mathcal{B}$ gives our final bound. ∎ Finally we synthesise Lemmas 3.3, 4.2 and 5.5. ###### Theorem 5.6 (Global $U^{5}$-control). Let $g_{0},g_{1},f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions, each with support in $[N]$. Suppose that $\left|\Lambda_{q,N}(g_{0},g_{1},f)\right|\geq\delta\Lambda_{q,N}(1_{[N]}).$ Then $\sum_{u\in[q]}\left\|f\right\|_{U^{5}(u+q\mathbb{Z})}^{2^{5}}\gg\delta^{2^{25}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{5}(u+q\mathbb{Z})}^{2^{5}}.$ ###### Proof. We recall our convention (1.2) regarding $M$. We begin by applying the linearisation procedure (Lemma 3.3) to deduce that $\sum_{a,b\in(-2M,2M)}\ \biggl{|}\sum_{h_{1},h_{2},h_{3}}\mu_{H}(h_{1})\mu_{H}(h_{2})\mu_{H}(h_{3})\sum_{x}\Delta_{q(a+b)h_{1},qbh_{2},qah_{3}}f(x)\biggr{|}\\\ \gg\delta^{32}NM^{2}.$ We note that the sum inside the absolute value is invariant under $a\mapsto-a$. Hence we may restrict to $a,b\in[0,2M]$ at the cost of changing the absolute constant. Applying Lemma 5.2 we may discard those $a,b$ for which either $\gcd(a,b)>C\delta^{-32}$ or $b<c\delta^{32}M$. 
Partitioning the sum over $x$ into congruence classes $u\bmod q$, the popularity principle gives: * • at least $\Omega(\delta^{32}q)$ residues $u\in[q]$; * • for each of which there is a subset of $h_{3}\in(-H,H)$ of $\mu_{H}$-measure at least $\Omega(\delta^{32})$ (i.e. a subset $\mathcal{H}$ with $\sum_{h_{3}\in\mathcal{H}}\mu_{H}(h_{3})\gg\delta^{32}$); * • for each of which there exist $\Omega(\delta^{32}M)$ values of $a\in[2M]$; * • for each of which there are $\Omega(\delta^{32}M)$ values of $b\in[2M]$ satisfying $\gcd(a,b)\ll\delta^{-32}$ and $b\gg\delta^{32}M$; and together these satisfy $\biggl{|}\sum_{h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\sum_{x}\Delta_{(a+b)h_{1},bh_{2},ah_{3}}f(qx-u)\biggr{|}\\\ \gg\delta^{32}M^{2}.$ For fixed $u,h_{3},a$ write $\tilde{f}(x):=\Delta_{ah_{3}}f(qx-u),$ so that $\tilde{f}$ has support in the interval $[(2M)^{2}]$ and $\biggl{|}\sum_{h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\sum_{x}\Delta_{(a+b)h_{1},bh_{2}}\tilde{f}(x)\biggr{|}\\\ \gg\delta^{32}M^{2}.$ Set $H:=c\delta^{96}M\qquad\text{and}\qquad K:=c^{3}\delta^{160}M,$ (5.3) with $c$ sufficiently small to ensure that we may apply Lemma 4.2. This gives the existence of a 1-bounded $b$-periodic function $g_{b}$ such that on setting $\tilde{h}_{b}(x):=\sum_{k}\mu_{K}(k)\overline{\tilde{f}(x+(a+b)k)g_{b}(x+(a+b)k)}$ (5.4) we have $\sum_{x}\tilde{f}(x)g_{b}(x)\tilde{h}_{b}(x)\gg\delta^{448}M^{2}.$ Setting $\eta:=c\delta^{480}$ for some small absolute constant $c>0$, we may sum over our set of permissible $b$ to deduce that $\sum_{\begin{subarray}{c}\eta M\leq b\leq 2M\\\ \gcd(a,b)\leq\eta^{-1}\end{subarray}}\sum_{x}\tilde{f}(x)g_{b}(x)\tilde{h}_{b}(x)\geq\eta M^{3}.$ The hypotheses of Lemma 5.5 having been met, we conclude that $\mathbb{E}_{b\in[2M]}\big{\|}\tilde{f}g_{b}\big{\|}_{U^{3}}^{8}\gg\delta^{99,840}\left\|1_{[M^{2}]}\right\|_{U^{3}}^{8}.$ Applying Lemma 5.4 then gives $\big{\|}\tilde{f}\big{\|}_{U^{4}}^{16}\gg\delta^{2,396,160}\left\|1_{[M^{2}]}\right\|_{U^{4}}^{16}.$ Recalling that $\tilde{f}(x)=\Delta_{ah_{3}}f_{u}(x)$ where $f_{u}(x):=f(qx-u)$, we may integrate over the set of permissible $h_{3}$ and $a$, utilising positivity to extend the range of summation, and deduce that $\mathbb{E}_{a\in[2M]}\sum_{h_{3}}\mu_{H}(h_{3})\big{\|}\Delta_{ah_{3}}f_{u}\big{\|}_{U^{4}}^{16}\gg\delta^{2,396,224}\left\|1_{[M^{2}]}\right\|_{U^{4}}^{16}.$ Using Lemma 5.3 and summing over the permissible range of $u$ we get that $\mathbb{E}_{u\in[q]}\left\|f_{u}\right\|_{U^{5}}^{32}\gg\delta^{28,754,720}\left\|1_{[M^{2}]}\right\|_{U^{5}}^{32},$ and the result follows. ∎ ## 6\. Degree lowering So far, we have shown that $\Lambda_{q,N}(f_{0},f_{1},f_{2})$ is controlled by $\mathbb{E}_{u\in[q]}\|f_{2}\|_{U^{5}(u+q\mathbb{Z})}^{2^{5}}$ whenever $f_{0},f_{1},$ and $f_{2}$ are $1$-bounded complex-valued functions supported on the interval $[N]$. The next step in our argument is to bound $\Lambda_{q,N}(f_{0},f_{1},f_{2})$ in terms of the $U^{5}(u+q\mathbb{Z})$-norm of the dual function $F(x):=\mathbb{E}_{y\in[M]}f_{0}(x-qy^{2})f_{1}(x+y-qy^{2}).$ (6.1) We postpone this deduction until §7. In this section we show how $U^{5}$-control of the dual implies $U^{2}$-control. Our argument combines three simple lemmas: Weyl’s inequality; what we call ‘dual–difference interchange’, which allows us to replace the difference function of the dual by the dual of the difference functions; and the fact that a function whose difference functions correlate with ‘low rank’ Fourier coefficients must have a large uniformity norm of lower degree. 
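For orientation, we record the duality identity underlying this step; this is an added remark, and it assumes the normalisation $\Lambda_{q,N}(f_{0},f_{1},f_{2})=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+y)f_{2}(x+qy^{2})$ of (1.3), with $M$ as in the convention (1.2). Since $f_{0}$ is supported on $[N]$, the change of variables $x\mapsto x-qy^{2}$ gives $\Lambda_{q,N}(f_{0},f_{1},f_{2})=\frac{1}{N}\sum_{x}f_{2}(x)F(x),$ so that largeness of the counting operator amounts precisely to correlation of $f_{2}$ with the dual function (6.1).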
The following log-free variant of Weyl’s inequality can be found in [4, Lemma A.11]. ###### Lemma 6.1 (Weyl’s inequality). There exists an absolute constant $C$ such that the following holds. Let $\alpha,\beta\in\mathbb{T}$, $\delta\in(0,1)$ and let $I\subset\mathbb{Z}$ be an interval with $|I|\geq C\delta^{-6}$ and $\big{|}\mathbb{E}_{y\in I}e(\alpha y^{2}+\beta y)\big{|}\geq\delta.$ Then there exists a positive integer $q\ll\delta^{-4}$ such that $\|q\alpha\|\ll\delta^{-14}|I|^{-2}.$ This has the following consequence, which uses our convention (1.2) regarding $M$. ###### Lemma 6.2. There exists an absolute constant $C$ such that for $N\geq C(q/\delta)^{C}$ the following holds. Suppose that for $\alpha\in\mathbb{T}$ there are $1$-bounded functions $g_{0},g_{1}:\mathbb{Z}\to\mathbb{C}$ supported on the interval $[N]$ such that $\left|\sum_{x}\sum_{y\in[M]}g_{0}(qx)g_{1}(qx+y)e(\alpha(x+y^{2}))\right|\geq\delta MN/q.$ Then there exists a positive integer $q^{\prime}\ll\delta^{-4}$ such that $\|q^{\prime}q^{2}\alpha\|\ll\delta^{-14}q^{3}/N$. ###### Proof. We split the sum over $y\in[M]$ into arithmetic progressions modulo $q$ and split the sum over $x$ into intervals of length $M/q$. Hence, by the pigeon-hole principle, there exists $u\in[q]$ and an integer $m$ such that on rounding the sum over $y$ we have $\left|\sum_{x,y\in[M/q]}g_{0}(q(m+x))g_{1}(u+q(m+x+y))e\left(\alpha\left(x+(u+qy)^{2}\right)\right)\right|\\\ \gg\delta(M/q)^{2}.$ Define the functions $h_{0}(x):=g_{0}(q(m+x))e(\alpha x)1_{[M/q]}(x),\qquad h_{1}(x):=g_{1}(u+q(m+x))1_{[2M/q]}(x),\qquad h_{2}(x):=e\left(\alpha(u+qx)^{2}\right)1_{[M/q]}(x).$ Then by orthogonality, extraction of a large Fourier coefficient and Parseval we have $\displaystyle\delta M^{2}/q^{2}\ll\left|\int_{\mathbb{T}}\hat{h}_{0}(\beta)\hat{h}_{1}(-\beta)\hat{h}_{2}(\beta)\mathrm{d}\beta\right|\ll\big{\|}\hat{h}_{2}\big{\|}_{\infty}\big{\|}\hat{h}_{0}\big{\|}_{L^{2}}\big{\|}\hat{h}_{1}\big{\|}_{L^{2}}\ll\big{\|}\hat{h}_{2}\big{\|}_{\infty}M/q.$ It follows that there exists $\beta\in\mathbb{T}$ such that $\left|\sum_{x\in[M/q]}e\left(\alpha(u+qx)^{2}+\beta x\right)\right|\gg\delta M/q.$ Applying Weyl’s inequality, we deduce the existence of $q^{\prime}\ll\delta^{-4}$ such that $\left\|q^{\prime}q^{2}\alpha\right\|\ll\delta^{-14}(q/M)^{2}$. ∎ ###### Lemma 6.3 (Dual–difference interchange). For each $y\in[M]$, let $F_{y}:\mathbb{Z}\to\mathbb{C}$ be a 1-bounded function with support in an interval of length $N$. Set $F(x):=\mathbb{E}_{y\in[M]}F_{y}(x).$ Then for any function $\phi:\mathbb{Z}^{s}\to\mathbb{T}$ and finite set $\mathcal{H}\subset\mathbb{Z}^{s}$ we have $\left(N^{-s-1}\sum_{\underline{h}\in\mathcal{H}}\left|\sum_{x}\Delta_{\underline{h}}F(x)e\bigl{(}\phi(\underline{h})x\bigr{)}\right|\right)^{2^{s}}\ll_{s}\\\ N^{-2s-1}\sum_{\underline{h}^{0},\underline{h}^{1}\in\mathcal{H}}\left|\sum_{x}\mathbb{E}_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1})x\bigr{)}\right|,$ where $\phi(\underline{h}^{0};\underline{h}^{1}):=\sum_{\omega\in\left\\{0,1\right\\}^{s}}(-1)^{|\omega|}\phi(\underline{h}^{\omega})\qquad\text{and}\qquad\underline{h}^{\omega}:=(h_{1}^{\omega_{1}},\dots,h_{s}^{\omega_{s}}).$ ###### Proof. We proceed by induction on $s\geq 0$, the base case being an identity. Suppose then that $s\geq 1$. 
For $\underline{h}\in\mathbb{Z}^{s-1}$ and $h\in\mathbb{Z}$, we note that $\Delta_{(\underline{h},h)}F(x)=\Delta_{\underline{h}}\left(\mathbb{E}_{y,y^{\prime}\in[M]}F_{y}(x)\overline{F_{y^{\prime}}(x+h)}\right).$ Hence by the induction hypothesis $\left(N^{-s-1}\sum_{h}\sum_{\begin{subarray}{c}\underline{h}\\\ (\underline{h},h)\in\mathcal{H}\end{subarray}}\left|\sum_{x}\Delta_{(\underline{h},h)}F(x)e\bigl{(}\phi(\underline{h},h)x\bigr{)}\right|\right)^{2^{s}}\ll_{s}\\\ \left(N^{-2s}\sum_{h}\sum_{\begin{subarray}{c}\underline{h}^{0},\underline{h}^{1}\\\ (\underline{h}^{i},h)\in\mathcal{H}\end{subarray}}\left|\sum_{x}\mathbb{E}_{y,y^{\prime}\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)\overline{F_{y^{\prime}}(x+h)}e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1};h)x\bigr{)}\right|\right)^{2},$ where $\phi(\underline{h}^{0};\underline{h}^{1};h):=\sum_{\omega\in\left\\{0,1\right\\}^{s-1}}(-1)^{|\omega|}\phi(\underline{h}^{\omega},h).$ Letting $e(\psi(\underline{h}^{0};\underline{h}^{1};h))$ denote the phase of the inner absolute value, we take the sum over $h$ inside and apply Cauchy–Schwarz to obtain $\left(\sum_{\underline{h}^{0},\underline{h}^{1},x}\mathbb{E}_{y,y^{\prime}\in[M]}\sum_{\begin{subarray}{c}h\\\ (\underline{h}^{i},h)\in\mathcal{H}\end{subarray}}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)\overline{F_{y^{\prime}}(x+h)}e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1};h)x+\psi(\underline{h}^{0};\underline{h}^{1};h)\bigr{)}\right)^{2}\\\ \leq N^{2s-1}\sum_{\underline{h}^{0},\underline{h}^{1}}\sum_{\begin{subarray}{c}h^{0},h^{1}\\\ (\underline{h}^{i},h^{j})\in\mathcal{H}\end{subarray}}\\\ \left|\sum_{x}\mathbb{E}_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)\overline{F_{y}(x+h^{0}-h^{1})}e\Bigl{(}\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1};h^{0})-\phi(\underline{h}^{0};\underline{h}^{1};h^{1})\bigr{)}x\Bigr{)}\right|.$ The result follows. ∎ If $\phi(h_{1},\dots,h_{s-1})$ is a function of $s-1$ variables we write $\phi(h_{1},\dots,\hat{h}_{i},\dots,h_{s}):=\phi(h_{1},\dots,h_{i-1},h_{i+1},\dots,h_{s}).$ We say that $\phi(h_{1},\dots,h_{s})$ is _low rank_ if there exist functions $\phi_{i}(h_{1},\dots,h_{s-1})$ such that $\phi(h_{1},\dots,h_{s})=\sum_{i=1}^{s}\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s}).$ From the definition of the Gowers norm together with the $U^{2}$-inverse theorem (Lemma A.1), one can show that largeness of the $U^{s+2}$-norm is equivalent to the existence of $\phi:\mathbb{Z}^{s}\to\mathbb{T}$ such that $\sum_{h_{1},\dots,h_{s}}\left|\sum_{x}\Delta_{h}f(x)e(\phi(h)x)\right|\gg N^{s+1}.$ The following lemma says that if $\phi$ is low-rank, then the $U^{s+1}$-norm must also be large. ###### Lemma 6.4 (Low rank correlation implies lower degree). Let $f:\mathbb{Z}\to\mathbb{C}$ be a 1-bounded function with support in $[N]$. Then for $\phi_{1},\dots,\phi_{m}:\mathbb{Z}^{s-1}\to\mathbb{T}$ with $m\leq s$ we have $\frac{1}{N^{s+1}}\sum_{h_{1},\dots,h_{s}}\left|\sum_{x}\Delta_{h}f(x)e\left(\sum_{i=1}^{m}\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s})x\right)\right|\\\ \ll_{m}\left(\frac{\left\|f\right\|_{U^{s+1}}^{2^{s+1}}}{N^{s+2}}\right)^{2^{-m-1}}.$ (6.2) ###### Proof. We proceed by induction on $m\geq 0$, the base case corresponding to the Cauchy–Schwarz inequality. Suppose then that $m\geq 1$ and the result is true for smaller values of $m$. 
Letting $e(\psi(h))$ denote the phase of the inner-most sum, the left-hand side of (6.2) is equal to $\frac{1}{N^{s+1}}\sum_{h_{2},\dots,h_{s},x}\Delta_{h_{2},\dots,h_{s}}f(x)e\left(\phi_{1}(h_{2},\dots,h_{s})x\right)\sum_{h_{1}}\Delta_{h_{2},\dots,h_{s}}\overline{f}(x+h_{1})\\\ e\left(\sum_{i=2}^{m}\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s})x+\psi(h_{1},\dots,h_{s})\right).$ By Cauchy–Schwarz, the square of this is at most $\frac{1}{N^{s+2}}\sum_{h_{2},\dots,h_{s}}\ \sum_{h_{1},h_{1}^{\prime}\in(-N,N)}\\\ \left|\sum_{x}\Delta_{h_{1}-h_{1}^{\prime},h_{2},\dots,h_{s}}f(x)e\left(\sum_{i=2}^{m}\left(\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s})-\phi_{i}(h_{1}^{\prime},\dots,\hat{h}_{i},\dots,h_{s})\right)x\right)\right|.$ Taking a maximum over $h_{1}^{\prime}\in(-N,N)$ and changing variables in $h_{1}$, the latter is at most an absolute constant times $\frac{1}{N^{s+1}}\sum_{h_{1},h_{2},\dots,h_{s}}\Bigg{|}\sum_{x}\Delta_{h_{1},h_{2},\dots,h_{s}}f(x)\\\ e\left(\sum_{i=2}^{m}\left(\phi_{i}(h_{1}+h_{1}^{\prime},h_{2}\dots,\hat{h}_{i},\dots,h_{s})-\phi_{i}(h_{1}^{\prime},h_{2}\dots,\hat{h}_{i},\dots,h_{s})\right)x\right)\Bigg{|}.$ This phase has lower rank than the original, hence we may apply the induction hypothesis to yield the lemma. ∎ ###### Lemma 6.5 (Degree lowering). There exists an absolute constant $C$ such that for $N\geq C(q/\delta)^{C}$ the following holds. Let $f_{0},f_{1}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions with support in $[N]$ and define the dual $F(x):=\mathbb{E}_{y\in[M]}f_{0}(x-qy^{2})f_{1}(x+y-qy^{2}).$ If, for $s\geq 3$, we have $\sum_{u\in[q]}\left\|F\right\|_{U^{s}(u+q\cdot\mathbb{Z})}^{2^{s}}\geq\delta\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{s}(u+q\cdot\mathbb{Z})}^{2^{s}},$ then $\sum_{u\in[q]}\left\|F\right\|_{U^{s-1}(u+q\cdot\mathbb{Z})}^{2^{s-1}}\gg_{s}\delta^{4^{s+2}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{s-1}(u+q\cdot\mathbb{Z})}^{2^{s-1}}.$ ###### Proof. Write $M:=\left\lfloor(N/q)^{1/2}\right\rfloor$. Given $u\in[q]$ let $F_{u}(x):=F(u+qx)$, a function with support in the interval $[2N/q]$. Applying the popularity principle, there exists a set of $\Omega(\delta q)$ residues $u\in[q]$ for which $\left\|F_{u}\right\|_{U^{s}}^{2^{s}}\gg\delta(N/q)^{s+1}$. Expanding the definition of the $U^{s}$-norm (1.9) we have $\sum_{h_{1},\dots,h_{s-2}}\left\|\Delta_{h_{1},\dots,h_{s-2}}F_{u}\right\|_{U^{2}}^{4}\gg\delta(N/q)^{s+1}.$ Applying the $U^{2}$-inverse theorem (Lemma A.1), there exists $\mathcal{H}\subset(-2N/q,2N/q)^{s-2}$ of size $|\mathcal{H}|\gg\delta(N/q)^{s-2}$ and a function $\phi:\mathbb{Z}^{s-2}\to\mathbb{T}$ such that for every $\underline{h}\in\mathcal{H}$ we have $\left|\sum_{x}\Delta_{\underline{h}}F_{u}(x)e\bigl{(}\phi(\underline{h})x\bigr{)}\right|\gg\delta N/q.$ (6.3) Set $T:=\left\lceil C\delta^{-1}N/q\right\rceil$, with $C$ an absolute constant taken sufficiently large to ensure that, on rounding $\phi(\underline{h})$ to the nearest fraction of the form $t/T$, the validity of (6.3) remains. 
Summing over $\underline{h}\in\mathcal{H}$ and applying Lemma 6.3, we deduce that $\sum_{\underline{h}^{0},\underline{h}^{1}\in\mathcal{H}}\left|\sum_{x}\mathbb{E}_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{0}(u+qx-qy^{2})\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{1}(u+qx+y-qy^{2})e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1})x\bigr{)}\right|\\\ \gg_{s}\delta^{2^{s-1}}(N/q)^{2s-3}.$ Applying the pigeon-hole and popularity principle, there exists $\mathcal{H}^{\prime}\subset\mathcal{H}$ of size $\Omega_{s}(\delta^{2^{s-1}}(N/q)^{s-2})$ and $\underline{h}^{1}\in\mathcal{H}$ such that for every $\underline{h}^{0}\in\mathcal{H}^{\prime}$ we have $\left|\sum_{x}\sum_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{0}(u+qx-qy^{2})\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{1}(u+qx+y-qy^{2})e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1})x\bigr{)}\right|\\\ \gg\delta^{2^{s-1}}MN/q.$ By Lemma 6.2, for each $\underline{h}^{0}\in\mathcal{H}^{\prime}$ there exists $q^{\prime}\ll\delta^{-2^{s+1}}$ such that $\left\|q^{\prime}q^{2}\phi(\underline{h}^{0};\underline{h}^{1})\right\|\ll\delta^{-7\cdot 2^{s}}q^{3}/N.$ Notice that $\phi(\underline{h}^{0};\underline{h}^{1})$ is an element of the additive group $\left\\{t/T:t\in[T]\right\\}\subset\mathbb{T}$. Moreover, for any $Q_{i}$ we have the inclusion $\left\\{\alpha\in\mathbb{T}:\exists q^{\prime}\leq Q_{1}\text{ with }\left\|q^{\prime}q^{2}\alpha\right\|\leq Q_{2}q^{3}/N\right\\}\subset\bigcup_{\begin{subarray}{c}q^{\prime}\leq Q_{1}\\\ 1\leq a\leq q^{\prime}q^{2}\end{subarray}}\left[\frac{a}{q^{\prime}q^{2}}-\frac{Q_{2}q^{3}}{q^{\prime}q^{2}N},\frac{a}{q^{\prime}q^{2}}+\frac{Q_{2}q^{3}}{q^{\prime}q^{2}N}\right].$ By a volume packing argument, the number of $t/T$ lying in this union of intervals is at most $O\left(Q_{1}^{2}(1+\tfrac{Q_{2}T}{N})\right)$. It therefore follows from the pigeon-hole principle that there exists $\mathcal{H}^{\prime\prime}\subset\mathcal{H}^{\prime}$ of size $\Omega\left(\delta^{2^{s+3}+1-2^{s}}(N/q)^{s-2}\right)$ and $t_{0}\in[T]$ such that for any $\underline{h}^{0}\in\mathcal{H}^{\prime\prime}$ we have $\phi(\underline{h}^{0};\underline{h}^{1})=t_{0}/T$. In particular, when restricted to the set $\mathcal{H}^{\prime\prime}$, the function $\phi$ satisfies $\phi(\underline{h}^{0})=t_{0}/T-\sum_{\omega\in\left\\{0,1\right\\}^{s-2}\setminus\left\\{0\right\\}}(-1)^{|\omega|}\phi(\underline{h}^{\omega}).$ The right-hand side of this identity is clearly _low rank_ according to the terminology preceding Lemma 6.4. Summing over $\underline{h}\in\mathcal{H}^{\prime\prime}$ in (6.3), we deduce the existence of a low rank function $\psi:\mathbb{Z}^{s-2}\to\mathbb{T}$ such that $\sum_{\underline{h}}\left|\sum_{x}\Delta_{\underline{h}}F_{u}(x)e\bigl{(}\psi(\underline{h})x\bigr{)}\right|\gg\delta^{2^{s+3}+1-2^{s}}(N/q)^{s-1}.$ Employing Lemma 6.4 then gives $\left\|F_{u}\right\|_{U^{s-1}}^{2^{s-1}}\gg\delta^{(2^{s+3}+1-2^{s})2^{s+1}}(N/q)^{s}.$ Summing over permissible $u$, then extending to the full sum over $u\in[q]$ by positivity, we obtain the bound claimed in the lemma. ∎ ## 7\. Proof of the cut norm inverse theorem In this section we complete our proof of Theorem 1.6. We first show how the dual function is controlled by the $U^{5}$-norm, and hence by the degree lowering of §6, the dual is controlled by the $U^{1}$-norm. The following can be found in the discussion following [3, Proposition 3.6]. 
Although the statement therein is for norms, and not seminorms, one can check that the (simple) argument remains valid in this greater generality. (On occasion the relevant results in [3] appear to assume that unit balls are _bounded_ (if we take the definition of _convex body_ to be a compact convex set with non-empty interior), which may not be true for the unit ball of a seminorm. However, the boundedness assumption is not necessary in the pertinent proofs. Moreover, one could quotient by the norm zero set to obtain a genuine norm.) ###### Lemma 7.1. Let $\|\cdot\|$ be a seminorm on the space of complex-valued functions supported on $[N]$. For any such function $f$ and $\varepsilon>0$ there exists a decomposition $f=f_{str}+f_{unf}$ such that $\left\|f_{str}\right\|^{*}\leq\varepsilon^{-1}\left\|f\right\|_{2}\quad\text{and}\quad\left\|f_{unf}\right\|\leq\varepsilon\left\|f\right\|_{2}.$ ###### Lemma 7.2 ($U^{5}$-control of the dual). There exists an absolute constant $C$ such that for $N\geq Cq\delta^{-C}$ the following holds. Let $g_{0},g_{1},f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions, each with support in $[N]$. Suppose that $\left|\Lambda_{q,N}(g_{0},g_{1},f)\right|\geq\delta\Lambda_{q,N}(1_{[N]}).$ Then, on defining the dual $G(x):=\mathbb{E}_{y\in[M]}g_{0}(x-qy^{2})g_{1}(x+y-qy^{2}),$ (7.1) we have $\sum_{u\in[q]}\left\|G\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}}\gg\delta^{2^{26}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}}.$ ###### Proof. Applying Lemma 7.1 to $f$ with $\left\|\cdot\right\|:=\left\|\cdot\right\|^{\sharp}_{q,N}$ as defined in (1.5) and $\varepsilon:=\tfrac{1}{2}\delta\Lambda_{q,N}(1_{[N]})N^{-1/2}$, we deduce that $|\Lambda_{q,N}(g_{0},g_{1},f_{str})|\geq\delta\Lambda_{q,N}(1_{[N]})-|\Lambda_{q,N}(g_{0},g_{1},f_{unf})|\\\ \geq\delta\Lambda_{q,N}(1_{[N]})-\left\|f_{unf}\right\|_{q,N}^{\sharp}\geq\tfrac{1}{2}\delta\Lambda_{q,N}(1_{[N]}).$ We note that our lower bound assumption on $N$ implies that $\Lambda_{q,N}\left(1_{[N]}\right)\gg 1$. Hence the dual inequality (1.10) gives $\delta\ll N^{-1}|\left\langle f_{str},G\right\rangle|\ll\delta^{-1}\left\|G\right\|^{\sharp}_{q,N}.$ Invoking Theorem 5.6 yields the result. ∎ Taken together, the work in §§3–6 gives the following. ###### Proof of Theorem 1.6. Applying Lemma 7.2, we deduce that $\sum_{u\in[q]}\left\|G\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}}\gg\delta^{2^{26}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}},$ where $G$ is defined as in (7.1). We now apply Lemma 6.5 three times. The first application gives $\sum_{u\in[q]}\left\|G\right\|_{U^{4}(u+q\cdot\mathbb{Z})}^{2^{4}}\gg\delta^{2^{40}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{4}(u+q\cdot\mathbb{Z})}^{2^{4}},$ a second replaces $U^{4}$ with $U^{3}$ at the cost of replacing $\delta^{2^{40}}$ with $\delta^{2^{52}}$. With a final application, we obtain $\sum_{u\in[q]}\left\|G\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}\gg\delta^{2^{62}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}.$ Let $\eta:=\delta^{2^{62}}$. By the popularity principle, there are at least $\Omega(\eta q)$ values of $u\in[q]$ for which $\left\|G\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}\gg\eta\left\|1_{[N]}\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}$. 
The inverse theorem for the $U^{2}$-norm then gives the existence of $\phi(u)\in\mathbb{T}$ for which $\left|\sum_{x}G(u+qx)e(\phi(u)x)\right|\gg\eta^{1/2}N/q.$ (7.2) Set $T:=\left\lceil C\eta^{-1/2}N/q\right\rceil$, with $C$ an absolute constant taken sufficiently large to ensure that, on rounding $\phi(u)$ to the nearest fraction of the form $t/T$, the inequality (7.2) remains valid. By Lemma 6.2, for each $u$ satisfying (7.2), there exists a positive integer $q^{\prime}\ll\eta^{-2}$ such that $\|q^{\prime}q^{2}\phi(u)\|\ll\eta^{-7}q^{3}/N$. By a volume packing argument similar to that given in the proof of Lemma 6.5, the function $\phi$ is constant on a proportion of at least $\Omega\bigl{(}\eta^{11}\bigr{)}$ of the residues $u\in[q]$ satisfying (7.2). Summing over these $u$, then extending the sum to all of $[q]$, we deduce the existence of $\alpha\in\mathbb{T}$ and $q^{\prime}\ll\eta^{-2}$ such that $\|q^{\prime}q^{2}\alpha\|\ll\eta^{-7}q^{3}/N$ and $\sum_{u\in[q]}\left|\sum_{x}G(u+qx)e(\alpha x)\right|\gg\eta^{12}N.$ (7.3) Expanding the dual function, there is a 1-bounded function $\psi(u\bmod q)$ such that the left-hand side of the above is equal to $\sum_{u\in[q]}\psi(u\bmod q)\sum_{x\equiv u(q)}\mathbb{E}_{y\in[M]}g_{0}(x-qy^{2})g_{1}(x+y-qy^{2})e(\alpha x/q)\\\ =\sum_{x}g_{0}(x)\psi(x\bmod q)e(\alpha x/q)\mathbb{E}_{y\in[M]}g_{1}(x+y)e(\alpha y^{2}).$ (7.4) Let us first suppose that $f=g_{0}$; we deal with the case $f=g_{1}$ shortly. Setting $\phi(x):=\psi(x\bmod q)e(\alpha x/q)\mathbb{E}_{y\in[M]}g_{1}(x+y)e(\alpha y^{2}),$ we have $\left\langle f,\overline{\phi}\right\rangle\gg\eta^{12}N$. Our aim is to show that $\phi$ can be approximated by a local function of the type claimed in the lemma. We begin by removing the phase from the expectation over $[M]$, at the cost of passing to shorter progressions. Let $M^{\prime}\leq M/q^{\prime}q^{2}$ be a quantity to be determined. If $y\in[M^{\prime}]$ then for any $m\in[-M,M]\cap\mathbb{Z}$ we have $\left|e(\alpha(m+q^{\prime}q^{2}y)^{2})-e(\alpha m^{2})\right|\ll\left\|\alpha\left(2mq^{\prime}q^{2}y+(q^{\prime}q^{2}y)^{2}\right)\right\|\ll q^{\prime}q^{4}\eta^{-7}M^{\prime}/M.$ (7.5) Hence, partitioning $\mathbb{Z}$ into progressions $P$ of common difference $q^{\prime}q^{2}$ and length $M^{\prime}$, there exist phases $\omega_{P}$ such that for any $x\in\mathbb{Z}$ we have $\left|\mathbb{E}_{y\in[M]}g_{1}(x+y)e(\alpha y^{2})-M^{-1}\sum_{P}\omega_{P}\sum_{y\in[M]\cap P}g_{1}(x+y)\right|\ll q^{\prime}q^{4}\eta^{-7}M^{\prime}/M.$ (7.6) Notice that there are at most $O(M/M^{\prime})$ progressions $P$ such that $P\cap[M]\neq\emptyset$ (since we are assuming $M^{\prime}\leq M/q^{\prime}q^{2}$). Next we show how the phase $e(\alpha x/q)$ is approximately periodic. Suppose that $z\in[M^{\prime\prime}]$, with $M^{\prime\prime}\leq M^{\prime}/q$ to be determined. 
Then for any $x\in\mathbb{Z}$ we have $\left|e\left(\alpha(x+q^{\prime}q^{3}z)/q\right)-e\left(\alpha x/q\right)\right|\ll\left\|q^{\prime}q^{2}\alpha\right\|M^{\prime\prime}\ll\eta^{-7}q^{3}M^{\prime\prime}/N$ and by a boundary estimate $\left|\sum_{y\in[M]\cap P}g_{1}(x+q^{\prime}q^{3}z+y)-\sum_{y\in[M]\cap P}g_{1}(x+y)\right|\ll qM^{\prime\prime}.$ It then follows from a telescoping identity that for all $x\in\mathbb{Z}$ and $z\in[M^{\prime\prime}]$ we have $\displaystyle\left|\phi(x+q^{\prime}q^{3}z)-\phi(x)\right|$ $\displaystyle\ll\frac{\eta^{-7}q^{3}M^{\prime\prime}}{N}+\frac{\eta^{-7}q^{\prime}q^{4}M^{\prime}}{M}+\frac{qM^{\prime\prime}}{M}\sum_{\begin{subarray}{c}P\\\ P\cap[M]\neq\emptyset\end{subarray}}1$ $\displaystyle\ll\frac{\eta^{-7}q^{\prime}q^{4}M^{\prime}}{M}+\frac{qM^{\prime\prime}}{M^{\prime}}.$ Taking $M^{\prime}:=c\eta^{19}M/q^{\prime}q^{4}$ and $M^{\prime\prime}:=c\eta^{12}M^{\prime}/q$ for a sufficiently small absolute constant $c>0$ we have $\left|\phi(x+q^{\prime}q^{3}z)-\phi(x)\right|\leq\eta^{12}/C\quad\text{for all }x\in\mathbb{Z}\text{ and }z\in[M^{\prime\prime}].$ (7.7) Partitioning $\mathbb{Z}$ into translates $J$ of $q^{\prime}q^{3}\cdot[M^{\prime\prime}]$ we deduce that $\sum_{J}\biggl{|}\sum_{x\in J}f(x)\biggr{|}\gg\eta^{12}N.$ Write $\chi(x)$ for the phase of the inner sum when $x\in J$. Then $\chi$ is a 1-bounded local function of modulus $q^{\prime}q^{3}$ and resolution $\Omega\left((\delta/q)^{O(1)}M\right)$ satisfying $\sum_{x}f(x)\overline{\chi(x)}\gg\delta^{2^{66}}N,$ as required. Next we give the argument for when $f=g_{1}$. Returning to (7.4) we have $\sum_{x}\left|\mathbb{E}_{y\in[M]}f(x+y)e(\alpha y^{2})\right|\gg\eta^{12}N.$ Utilising (7.5) and (7.6), we may partition $\mathbb{Z}$ into progressions $P$ of common difference $q^{\prime}q^{2}$ and length $M^{\prime}:=c\eta^{19}M/q^{\prime}q^{4}$ such that $\sum_{x}\sum_{P}\left|\mathbb{E}_{y\in[M]\cap P}f(x+y)\right|\gg\eta^{12}N.$ Since $O(M/M^{\prime})$ of the $P$ intersect $[M]$, the pigeon-hole principle gives $P^{\prime}:=P\cap[M]$ such that $\sum_{x}\left|\sum_{y\in P^{\prime}}f(x+y)\right|\gg\eta^{12}NM^{\prime}.$ In particular $|P^{\prime}|\gg\eta^{12}M^{\prime}\gg(\delta/q)^{C}M$. Partitioning $\mathbb{Z}$ into translates of $P^{\prime}$ of the form $\mathbb{Z}=\bigsqcup_{i}(a_{i}+P^{\prime}),$ the pigeon-hole principle gives $z\in P^{\prime}$ such that $\sum_{i}\left|\sum_{y\in P^{\prime}}f(a_{i}+y+z)\right|\gg\eta^{12}N.$ Writing $\chi(x)$ for the phase of the inner sum when $x\in a_{i}+P^{\prime}$ one sees that $\chi$ is a local function of resolution $\gg(\delta/q)^{C}M$ and modulus $q^{\prime}q^{2}$ which satisfies $\left\langle f,\chi\right\rangle\gg\eta^{12}N$. The proof is complete on noting that a local function of modulus $q^{\prime}q^{2}$ is also a local function of modulus $q^{\prime}q^{3}$. ∎ ## Appendix A Basic theory of the Gowers norms ###### Lemma A.1 (Inverse theorem for the $U^{2}$-norm). Let $f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in $[N]$. Then there exists $\alpha\in\mathbb{T}$ such that $\|f\|_{U^{2}}^{4}\leq N\left|\sum_{x}f(x)e(\alpha x)\right|^{2}.$ ###### Proof. 
Using the definition of the Fourier transform (1.8), together with orthogonality of additive characters, we have $\left\|f\right\|_{U^{2}}^{4}=\int_{\mathbb{T}}\bigl{|}\hat{f}(\alpha)\bigr{|}^{4}\mathrm{d}\alpha\leq\big{\|}\hat{f}\big{\|}_{\infty}^{2}\int_{\mathbb{T}}\bigl{|}\hat{f}(\alpha)\bigr{|}^{2}\mathrm{d}\alpha\leq\big{\|}\hat{f}\big{\|}_{\infty}^{2}N.$ ∎ For each $\omega\in\\{0,1\\}^{s}$, let $f_{\omega}:\mathbb{Z}\to\mathbb{C}$ be a function with finite support. Then we define the _Gowers inner product_ by $[f_{\omega}]_{U^{s}}:=\sum_{x,h_{1},\dots,h_{s}}\prod_{\omega\in\left\\{0,1\right\\}^{s}}\mathcal{C}^{|\omega|}f_{\omega}(x+\omega\cdot h).$ Here $\mathcal{C}$ denotes the operation of complex conjugation. Notice that when $f_{\omega}=f$ for all $\omega$ we have $[f]_{U^{s}}=\left\|f\right\|_{U^{s}}^{2^{s}}$. ###### Lemma A.2 (Gowers–Cauchy–Schwarz). For each $\omega\in\\{0,1\\}^{s}$, let $f_{\omega}:\mathbb{Z}\to\mathbb{C}$ be a function with finite support. Then we have $[f_{\omega}]_{U^{s}}\leq\prod_{\omega\in\\{0,1\\}^{s}}\|f_{\omega}\|_{U^{s}}.$ ###### Proof. See [9, Exercise 1.3.19]. ∎ ###### Lemma A.3 (Phase invariance for $s\geq 2$). Let $L\in\mathbb{R}[x,h_{1},\dots,h_{s}]$ be a linear form, with $s\geq 2$, and let $f:\mathbb{Z}\to\mathbb{C}$. Then $\biggl{|}\sum_{x,h_{1},\dots,h_{s}}\Delta_{h_{1},\dots,h_{s}}f(x)e(L(x,h_{1},\dots,h_{s}))\biggr{|}\leq\left\|f\right\|_{U^{s}}^{2^{s}}.$ ###### Proof. The linear form may be written as $L(x,h_{1},\dots,h_{s})=\alpha x+\beta_{1}(x+h_{1})+\dots+\beta_{s}(x+h_{s}),$ for some real $\alpha$ and $\beta_{i}$. Write $f_{0}(x):=f(x)e(\alpha x)$, $f_{e_{i}}(x):=f(x)e(-\beta_{i}x)$ for $i=1,\dots,s$, and for $\omega\in\left\\{0,1\right\\}^{s}\setminus\left\\{0,e_{1},\dots,e_{s}\right\\}$ set $f_{\omega}:=f$. Then by Gowers–Cauchy–Schwarz we have $\biggl{|}\sum_{x,h_{1},\dots,h_{s}}\Delta_{h_{1},\dots,h_{s}}f(x)e(L(x,h_{1},\dots,h_{s}))\biggr{|}\leq\prod_{\omega}\left\|f_{\omega}\right\|_{U^{s}}.$ It therefore suffices to prove that for a phase function $e_{\alpha}:x\mapsto e(\alpha x)$ we have $\left\|fe_{\alpha}\right\|_{U^{s}}=\left\|f\right\|_{U^{s}}.$ The latter follows on observing that $\Delta_{h_{1},\dots,h_{s}}(fe_{\alpha})=\left(\Delta_{h_{1},\dots,h_{s}}f\right)\left(\Delta_{h_{1},\dots,h_{s}}e_{\alpha}\right),$ and for any $x,h_{1},\dots,h_{s}$ with $s\geq 2$ we have $\Delta_{h_{1},\dots,h_{s}}e_{\alpha}(x)=1.$ ∎ ###### Lemma A.4 (Box Cauchy–Schwarz). Let $\mu_{1},\mu_{2},\mu_{3}$ be probability measures on $\mathbb{Z}$ with the discrete sigma algebra. If $F_{1},F_{2},F_{3}$ are 1-bounded functions on $\mathbb{Z}^{2}$ and $F$ is a 1-bounded function on $\mathbb{Z}^{3}$ then, writing $\underline{\mu}(x):=\mu_{1}(x_{1})\mu_{2}(x_{2})\mu_{3}(x_{3})$, we have $\left|\sum_{x\in\mathbb{Z}^{3}}F_{1}(x_{2},x_{3})F_{2}(x_{1},x_{3})F_{3}(x_{1},x_{2})F(x)\underline{\mu}(x)\right|^{8}\\\ \leq\sum_{x^{0},x^{1}\in\mathbb{Z}^{3}}\prod_{\omega\in\left\\{0,1\right\\}^{3}}\mathcal{C}^{|\omega|}F(x_{1}^{\omega_{1}},x_{2}^{\omega_{2}},x_{3}^{\omega_{3}})\mu_{1}(x_{1}^{0})\mu_{1}(x_{1}^{1})\mu_{2}(x_{2}^{0})\mu_{2}(x_{2}^{1})\mu_{3}(x_{3}^{0})\mu_{3}(x_{3}^{1}).$ ## References * BC [17] J. Bourgain and M.-C. Chang. Nonlinear Roth type theorems in finite fields. Israel J. Math., 221(2):853–867, 2017. * BL [96] V. Bergelson and A. Leibman. Polynomial extensions of van der Waerden’s and Szemerédi’s theorems. J. Amer. Math. Soc., 9(3):725–753, 1996. * Gow [10] W. T. Gowers. Decompositions, approximate structure, transference, and the Hahn-Banach theorem. Bull. Lond. Math. Soc., 42(4):573–606, 2010. * GT [08] B. Green and T. Tao. Quadratic uniformity of the Möbius function. Ann. Inst. 
Fourier (Grenoble), 58(6):1863–1935, 2008. * Pel [19] S. Peluse. On the polynomial Szemerédi theorem in finite fields. Duke Math. J., 168(5):749–774, 2019. * PP [19] S. Peluse and S. Prendiville. Quantitative bounds in the non-linear Roth theorem. ArXiv e-prints, 2019. * PP [20] S. Peluse and S. Prendiville. A polylogarithmic bound in the nonlinear Roth theorem. ArXiv e-prints, 2020. * Pre [17] S. Prendiville. Quantitative bounds in the polynomial Szemerédi theorem: the homogeneous case. Discrete Anal., Paper No. 5, 34 pp., 2017. * Tao [12] T. Tao. Higher order Fourier analysis, volume 142 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2012. * TZ [16] T. Tao and T. Ziegler. Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors. Discrete Anal., Paper No. 13, 60 pp., 2016.
# A polylogarithmic bound in the nonlinear Roth theorem Sarah Peluse Mathematical Institute University of Oxford UK<EMAIL_ADDRESS>and Sean Prendiville Department of Mathematics and Statistics Lancaster University UK<EMAIL_ADDRESS> ###### Abstract. We show that sets of integers lacking the configuration $x$, $x+y$, $x+y^{2}$ have at most polylogarithmic density. ###### Contents 1. Introduction 2. Iterating the density increment 3. The cut norm inverse theorem 4. A weak regularity lemma 5. The density increment lemma 6. Global control by major arc Fourier coefficients 7. Longer progressions ## 1\. Introduction ### 1.1. Density bound In [9] the authors obtained, for the first time, an effective bound for subsets of $\left\\{1,\dots,N\right\\}$ lacking the nonlinear Roth configuration $x$, $x+y$, $x+y^{2}$. There it was established that such sets have cardinality at most $O(N/(\log\log N)^{c})$, where $c>0$ is an absolute constant. The key breakthrough of [9] was a “local $U^{1}$-control” result, from which a bound for sets lacking the nonlinear Roth configuration follows via standard methods. Here, we combine this local $U^{1}$-control result with a more sophisticated argument to remove a logarithm from the bound of [9]. ###### Theorem 1.1 (Density bound). There exists an absolute constant $c>0$ such that the following holds. Suppose that $A\subset\left\\{1,\dots,N\right\\}$ lacks configurations of the form $x,\ x+y,\ x+y^{2}\qquad(y\neq 0).$ (1.1) Then $|A|=O\left(N/(\log N)^{c}\right).$ A careful analysis shows that the exponent $c=2^{-150}$ is permissible, where 150 represents the combined number of times we utilise the Cauchy–Schwarz inequality in [9] and this paper. ### 1.2. Major arc correlation The techniques which yield Theorem 1.1 also allow us to show, in a quantitatively effective manner, that the major arc Fourier coefficients of a set determine how many nonlinear Roth configurations (1.1) the set contains. ###### Theorem 1.2 (Major-arc control). Let $\delta>0$ and $f,g,h:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions with support in $\left\\{1,\dots,N\right\\}$. Suppose that $\left|\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f(x)g(x+y)h(x+y^{2})\right|\geqslant\delta N^{3/2}.$ Then either $N\ll\delta^{-O(1)}$, or there is a frequency $\alpha\in\mathbb{R}$ and a positive integer $q\ll\delta^{-O(1)}$ such that $\left\|q\alpha\right\|\ll\delta^{-O(1)}/N$ and $\left|\sum_{x\in\mathbb{Z}}h(x)e(\alpha x)\right|\gg\delta^{O(1)}N.$ (Here $\left\|\cdot\right\|$ denotes the distance to the nearest integer, and $e(\alpha):=e^{2\pi i\alpha}$; for our conventions regarding asymptotic notation see §1.5.) In the nomenclature of [14], the major arc linear phases are the only obstructions to uniformity for the nonlinear Roth configuration. We emphasise that Theorem 1.2 is not used in the proof of Theorem 1.1. The major arc Fourier coefficients of a subset of $\\{1,\dots,N\\}$ essentially measure its distribution in arithmetic progressions of common difference $\ll 1$ and length $\gg N$. To illustrate this, the following definition is useful. ###### Definition 1.3 (Local function). We call a function $\phi:\mathbb{Z}\to\mathbb{C}$ a _local function of resolution $M$ and modulus $q$_ if there exists a partition of $\mathbb{Z}$ into intervals of length $M$ such that $\phi$ is constant on the intersection of every such interval with every congruence class mod $q$. ###### Corollary 1.4 (Local control of the nonlinear term). 
Let $\delta>0$ and $f,g,h:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions with support in $\left\\{1,\dots,N\right\\}$. Suppose that $\left|\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f(x)g(x+y)h(x+y^{2})\right|\geqslant\delta N^{3/2}.$ Then either $N\ll\delta^{-O(1)}$, or there is a 1-bounded local function $\phi$ of resolution $M\gg\delta^{O(1)}N$ and modulus $q\ll\delta^{-O(1)}$ such that $\left|\sum_{x\in\mathbb{Z}}h(x)\phi(x)\right|\gg\delta^{O(1)}N.$ One cannot hope to prove that the functions $f$ and $g$ above also correlate globally with local functions, as the following example illustrates. For any positive integers $x_{1},x_{2}\leqslant N^{1/2}$, set $f\left(x_{1}+(x_{2}-1)\left\lfloor N^{1/2}\right\rfloor\right)=\begin{cases}1&\text{ if }x_{2}\equiv 0\pmod{4},\\\ 0&\text{ if }x_{2}\equiv 1\pmod{4},\\\ -1&\text{ if }x_{2}\equiv 2\pmod{4},\\\ 0&\text{ if }x_{2}\equiv 3\pmod{4};\end{cases}$ and set $f(x)=0$ everywhere else. Taking $g:=f$ and $h:=1_{\\{1,\dots,N\\}}$, one can check that either $N\ll 1$ or $\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f(x)g(x+y)h(x+y^{2})\gg N^{3/2}.$ However, for any arithmetic progression $P\subset\\{1,\dots,N\\}$, we have $\left|\sum_{x\in P}f(x)\right|\ll N^{1/2}.$ Hence, for any 1-bounded local function $\phi$ of resolution $\geqslant\delta N$ and modulus $\leqslant\delta^{-1}$, the triangle inequality gives the discorrelation $\left|\sum_{x\in\mathbb{Z}}f(x)\phi(x)\right|\ll\delta^{-2}N^{1/2}.$ This example is a local obstruction coming from the real numbers: the nature of our counting operator means that we cannot disentangle possible correlations between the $f$ and $g$ functions on subintervals of length $N^{1/2}$. We can, however, show that these are the only other possible obstructions to uniformity. ###### Theorem 1.5 (Local control of all terms). Let $\delta>0$ and $f_{1},f_{2},f_{3}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions with support in $\left\\{1,\dots,N\right\\}$. Suppose that $\left|\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f_{1}(x)f_{2}(x+y)f_{3}(x+y^{2})\right|\geqslant\delta N^{3/2}.$ Then either $N\ll\delta^{-O(1)}$, or for each $i=1,2,3$ there is a 1-bounded local function $\phi_{i}$ of resolution $\gg\delta^{O(1)}N^{1/2}$ and modulus $q_{i}\ll\delta^{-O(1)}$ such that $\left|\sum_{x\in\mathbb{Z}}f_{i}(x)\phi_{i}(x)\right|\gg\delta^{O(1)}N.$ ###### Proof. This is an immediate consequence of Corollary 1.4 and Lemma 3.2. ∎ ### 1.3. Longer polynomial progressions In analogy with the first author’s generalisation [8] of [9], it is natural to ask whether the methods of this paper yield polylogarithmic bounds for sets of integers lacking longer progressions $x,\ x+P_{1}(y),\ \dots,\ x+P_{m}(y),$ (1.2) where the $P_{i}\in\mathbb{Z}[y]$ have zero constant term and $\deg P_{1}<\dots<\deg P_{m}$. As was mentioned above, the key input to this paper is the local $U^{1}$-control result [9, Theorem 7.1]. Replacing this with [8, Theorem 3.3], our argument generalises in a straightforward manner to yield polylogarithmic bounds for subsets of $\\{1,\dots,N\\}$ lacking (1.2) when $m=2$, that is, for all three-term polynomial progressions with distinct degrees and zero constant term. Obtaining polylogarithmic bounds for longer polynomial progressions requires an additional idea. We sketch a strategy in §7, which relies on obtaining an appropriate generalisation of [8, Theorem 3.3], a generalisation that would require re-running the majority of the arguments therein. ### Acknowledgements S. 
Peluse is supported by the NSF Mathematical Sciences Postdoctoral Research Fellowship Program under Grant No. DMS-1903038. ### 1.4. An outline of our argument Effective Szemerédi-type theorems are commonly proved via a density increment strategy, the prototypical example being the proof of Roth’s theorem [11] on three-term arithmetic progressions. This strategy begins with a set $A\subset\\{1,\dots,N\\}$ of density $\delta:=|A|/N$ that lacks the configuration in question. It then proceeds to show that there is a substructure $S\subset\\{1,\dots,N\\}$ on which $A$ has increased density $\delta+\Omega_{\delta}(1)$. One then hopes to iterate the argument with $A\cap S$ in place of $A$ and $S$ in place of $\\{1,\dots,N\\}$. One avenue to obtaining polylogarithmic bounds in a Szemerédi-type theorem is to obtain a constant-proportion density increment $\delta+\Omega(\delta)$ on a substructure $S$ of polynomial size $|S|\approx N^{\Omega(1)}$. This was accomplished for three-term arithmetic progressions by Heath–Brown [7] and Szemerédi [13] (in fact, they were able to handle a smaller lower bound on $|S|$). An alternative strategy for obtaining polylogarithmic bounds is to obtain the weaker polynomial increment $\delta+\Omega(\delta^{O(1)})$, yet on a _dense_ or _global_ substructure $S$, that is, a substructure of size $|S|\geqslant\exp(-O(\delta^{-O(1)}))N$. This was accomplished by Sárközy [12] for the configuration $x,x+y^{2}$ and for three-term arithmetic progressions by Bourgain [2]. Both of these strategies are achievable for the nonlinear Roth configuration. The global structure strategy is perhaps the more natural, and may be accomplished by utilising a generalisation of Theorem 1.2. In this note we do not pursue this, and instead give details for a constant-proportion density increment, as our argument is somewhat cleaner in this form. More specifically, we show that if $A\subset\left\\{1,\dots,N\right\\}$ has density $\delta$ and lacks nontrivial configurations of the form $x,x+y,x+y^{2}$, then there exists an arithmetic progression $P$ of length $|P|\gg\delta^{O(1)}N^{1/2}$ and common difference $q\ll\delta^{-O(1)}$ such that we have the density increment $\frac{|A\cap P|}{|P|}\geqslant(1+\Omega(1))\frac{|A|}{N}.$ (1.3) As outlined in [9], the ‘almost bounded’ size of $q$ allows us to iterate this procedure. (In [9], we obtain the weaker density increment $(1+\Omega(\delta^{O(1)}))|A|/N$, which leads to the extra logarithm appearing in the bound there.) We obtain the constant-proportion increment (1.3) by combining the local $U^{1}$-control result of [9] with a strategy of Heath–Brown [7] and Szemerédi [13], which has a very robust formulation due to Green and Tao [6]. To accomplish this, we first give a structural characterisation of sets lacking the nonlinear Roth configuration (this is Lemma 3.3, whose essence is captured in the weaker Theorem 1.5). These sets resemble the level sets of the product of a function that is constant on intervals of length $N^{1/2}$ and a function that is constant on congruence classes modulo a bounded $q$. Having obtained such a structural characterisation, an energy increment procedure closely following [6] allows us to approximate an arbitrary set of integers by these level sets, up to an error that does not contribute substantially to the count of nonlinear Roth configurations. A combinatorial argument then allows us to deduce that our set must have a substantial density increment on one of these level sets, of the form $\delta+\Omega(\delta)$. 
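To spell out why a constant-proportion increment yields so few iterations (a back-of-the-envelope computation added here for illustration, with $c>0$ denoting the absolute constant implicit in the $\Omega(\delta)$ above): if each step replaces a density $\delta_{i}$ by $\delta_{i+1}\geqslant(1+c)\delta_{i}$, then after $k$ steps the density is at least $(1+c)^{k}\delta$. Since no density exceeds $1$, the process must halt after at most $k\leqslant\frac{\log(\delta^{-1})}{\log(1+c)}\ll\log(\delta^{-1})$ steps.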
As a result, our density increment procedure requires only $\log(\delta^{-1})+O(1)$ iterations, compared with the $O(\delta^{-O(1)})$ required in [9], and this yields the polylogarithmic improvement over our previous density increment iteration. The remainder of this paper is organized as follows. We derive Theorem 1.1 in §2 via a density increment iteration. Our deduction uses a density increment lemma that is established in §§3–5. We prove Theorem 1.2 and Corollary 1.4 in §6. ### 1.5. Notation #### 1.5.1. Standard conventions We use $\mathbb{N}$ to denote the positive integers. For a real number $X\geqslant 1$, write $[X]=\\{1,2,\ldots,\left\lfloor X\right\rfloor\\}$. A complex-valued function is said to be _1-bounded_ if the modulus of the function does not exceed 1. We use counting measure on $\mathbb{Z}$, so that for $f,g:\mathbb{Z}\to\mathbb{C}$, we have $\left\|f\right\|_{\ell^{p}}:=\biggl{(}\sum_{x}|f(x)|^{p}\biggr{)}^{\frac{1}{p}},\ \left\langle f,g\right\rangle:=\sum_{x}f(x)\overline{g(x)},\ \text{and}\ (f*g)(x)=\sum_{y}f(y)g(x-y).$ Any sum of the form $\sum_{x}$ is to be interpreted as a sum over $\mathbb{Z}$. The _support_ of $f$ is the set $\mathrm{supp}(f):=\left\\{x\in\mathbb{Z}:f(x)\neq 0\right\\}$. We write $\left\|f\right\|_{\infty}$ for $\sup_{x\in\mathbb{Z}}|f(x)|$. We use Haar probability measure on $\mathbb{T}:=\mathbb{R}/\mathbb{Z}$, so that for measurable $F:\mathbb{T}\to\mathbb{C}$, we have $\left\|F\right\|_{L^{p}}:=\biggl{(}\int_{\mathbb{T}}|F(\alpha)|^{p}d\alpha\biggr{)}^{\frac{1}{p}}=\biggl{(}\int_{0}^{1}|F(\alpha)|^{p}d\alpha\biggr{)}^{\frac{1}{p}}.$ We write $\left\|\alpha\right\|_{\mathbb{T}}$ for the distance from $\alpha\in\mathbb{R}$ to the nearest integer, $\min_{n\in\mathbb{Z}}|\alpha-n|.$ This remains well-defined on $\mathbb{T}$. We define the Fourier transform of $f:\mathbb{Z}\to\mathbb{C}$ by $\hat{f}(\alpha):=\sum_{x}f(x)e(\alpha x)\qquad(\alpha\in\mathbb{T}),$ (1.4) when this makes sense. Here $e(\alpha)$ stands for $e^{2\pi i\alpha}$. For a finite set $S$ and function $f:S\to\mathbb{C}$, denote the average of $f$ over $S$ by $\mathbb{E}_{s\in S}f(s):=\frac{1}{|S|}\sum_{s\in S}f(s).$ For a complex-valued function $f$ and positive-valued function $g$, write $f\ll g$ or $f=O(g)$ if there exists a constant $C$ such that $|f(x)|\leq Cg(x)$ for all $x$. We write $f=\Omega(g)$ if $f\gg g$. We subscript this notation when the implicit constant may depend on the subscripted parameters. #### 1.5.2. Local conventions Up to normalisation, all of the above are widely used in the literature. Next, we list notation specific to our paper. We have tried to minimise this in order to aid the casual reader. The quantity $(N/q)^{1/2}$ appears repeatedly, where $N$ and $q$ are integers fixed throughout the majority of our paper. We therefore adopt the convention that $M:=\left\lfloor\sqrt{N/q}\right\rfloor.$ (1.5) Assuming this, define the _counting operator_ on the functions $f_{i}:\mathbb{Z}\to\mathbb{C}$ by $\Lambda_{q,N}(f_{0},f_{1},f_{2}):=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+y)f_{2}(x+qy^{2}).$ (1.6) When $f_{0}=f_{1}=f_{2}=f$, we simply write $\Lambda_{q,N}(f)$ for $\Lambda_{q,N}(f_{0},f_{1},f_{2})$. 
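As a quick sanity check on this normalisation (an added illustration, not drawn from the original text): if $1\leqslant x\leqslant N/2$ and $1\leqslant y\leqslant M/2$, then $x+y\leqslant N$ and $x+qy^{2}\leqslant N/2+qM^{2}/4\leqslant N$, so that $\Lambda_{q,N}(1_{[N]})\geqslant\frac{\left\lfloor N/2\right\rfloor}{N}\cdot\frac{\left\lfloor M/2\right\rfloor}{M}\geqslant\frac{1}{8},$ provided $N\geqslant 64q$, say; in particular $\Lambda_{q,N}(1_{[N]})\gg 1$.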
For a real parameter $H\geqslant 1$, we use $\mu_{H}:\mathbb{Z}\to[0,1]$ to represent the following normalised Fejér kernel $\mu_{H}(h):=\frac{1}{\left\lfloor H\right\rfloor}\left(1-\frac{|h|}{\left\lfloor H\right\rfloor}\right)_{+}=\frac{(1_{[H]}*1_{-[H]})(h)}{\left\lfloor H\right\rfloor^{2}}.$ (1.7) This is a probability measure on $\mathbb{Z}$ with support in the interval $(-H,H)$. ## 2\. Iterating the density increment In this section we prove Theorem 1.1 using the following lemma, which we will devote §§3–5 to proving. ###### Lemma 2.1 (Density increment lemma). Let $q\leqslant N$ be positive integers and $\delta>0$. Suppose that $A\subset[N]$ satisfies $|A|\geqslant\delta N$ and lacks the configuration $x,\ x+y,\ x+qy^{2}\qquad(y\neq 0).$ (2.1) Then either $N\ll(q/\delta)^{O(1)}$ or there exists $q^{\prime}\leqslant\exp\left(O\left(\delta^{-O(1)}\right)\right)$ and $N^{\prime}\geqslant q^{-O(1)}\exp\left(-O\left(\delta^{-O(1)}\right)\right)N^{1/2}$ such that, for some $a\in\mathbb{Z}$, we have $|A\cap(a+qq^{\prime}\cdot[N^{\prime}])|\geqslant(1+\Omega(1))\delta N^{\prime}.$ (2.2) ###### Proof of Theorem 1.1 given Lemma 2.1. This is the same as the proof of [9, Theorem 1.1], but using the improved density increment lemma above in place of the density increment lemma of [9]. Note first that if $A$ lacks the configuration (2.1), then the set $\\{x:a+qq^{\prime}x\in A\\}$ lacks configurations of the form $x,\ x+y,\ x+q^{2}q^{\prime}y^{2}\qquad(y\neq 0).$ Let $A\subset[N]$ have size $\delta N$, and suppose that it has no non-linear Roth configurations (1.1). Setting $A_{0}:=A$, $N_{0}:=N$ and $q_{0}=1$, let us suppose we have a sequence of tuples $(A_{i},N_{i},q_{i})$ for $i=0,1,\dots,n$ that each satisfy the following: 1. (i) $A_{i}$ lacks configurations of the form $x,\ x+y,\ x+q_{0}^{2^{i}}q_{1}^{2^{i-1}}\dotsm q_{i-1}^{2}q_{i}y^{2}\qquad(y\neq 0).$ 2. (ii) $q_{i}\leqslant\exp\left(O\left(\delta^{-O(1)}\right)\right)$; 3. (iii) $A_{i}\subset[N_{i}]$ and for $i\geqslant 1$ we have $\frac{|A_{i}|}{N_{i}}\geqslant(1+c)\frac{|A_{i-1}|}{N_{i-1}},$ where $c=\Omega(1)$ is a positive absolute constant; 4. (iv) for $i\geqslant 1$ we have the lower bound $N_{i}\geqslant\frac{N_{i-1}^{1/2}}{\left(q_{0}^{2^{i-1}}\dotsm q_{i-1}\exp\left(\delta^{-O(1)}\right)\right)^{O(1)}}.$ Applying Lemma 2.1 with $q=q_{0}^{2^{n}}q_{1}^{2^{n-1}}\dotsm q_{n-1}^{2}q_{n}$, either $N_{n}\ll\left(q_{0}^{2^{n}}q_{1}^{2^{n-1}}\dotsm q_{n-1}^{2}q_{n}/\delta\right)^{O(1)},$ (2.3) or we may obtain $(A_{n+1},N_{n+1},q_{n+1})$ satisfying conditions (i)–(iv). If (2.3) holds, then our iterative process terminates at stage $n$. If the number of iterations $n$ is at least $c^{-1}$, then the density of $A_{n}$ on $[N_{n}]$ is at least $2\delta$. After an additional $\tfrac{1}{2}c^{-1}$ iterations, the density is at least $4\delta$. Hence if the number of iterations is at least $\left\lceil c^{-1}\right\rceil+\left\lceil\tfrac{1}{2}c^{-1}\right\rceil+\left\lceil\tfrac{1}{4}c^{-1}\right\rceil+\dots+\left\lceil\tfrac{1}{2^{m-1}}c^{-1}\right\rceil,$ then the density is at least $2^{m}\delta$. The density therefore exceeds one if the number of iterations exceeds $2c^{-1}+\log_{2}(\delta^{-1})$. Since this cannot happen, it follows that there exists $n\leqslant\log_{2}(\delta^{-1})+O(1)$ such that the procedure terminates at stage $n$. 
At the point of termination, the smallness assumption (2.3) must hold, so that

$N_{n}\leqslant\exp\left(O\left(\delta^{-O(1)}\right)\right).$

On the other hand, iteratively applying the lower bound (iv), we have

$\begin{split}N_{n}&\geqslant\frac{N_{n-1}^{1/2}}{\left(q_{0}^{2^{n-1}}\dotsm q_{n-1}\exp\left(\delta^{-O(1)}\right)\right)^{O(1)}}\\ &\geqslant N^{1/2^{n}}\left[q_{0}^{2^{n-1}}\dotsm q_{n-1}\exp\left(\delta^{-O(1)}\right)\right]^{-O(1+\frac{1}{2}+\frac{1}{4}+\dots+2^{1-n})}\\ &\gg\exp\left(-O\left(\delta^{-O(1)}\right)\right)N^{\Omega(\delta)},\end{split}$

where we use the upper bound (ii) on the $q_{i}$'s, together with $n\leqslant\log_{2}(\delta^{-1})+O(1)$. Taking a logarithm and comparing upper and lower bounds for $N_{n}$ gives

$\log N\ll\delta^{-O(1)},$

which yields the bound claimed in Theorem 1.1. ∎

## 3\. The cut norm inverse theorem

The first step of the proof of Lemma 2.1 is to use the main technical result of [9] to prove an inverse theorem for the cut norm associated to $\Lambda_{q,N}$, which we now define.

###### Definition 3.1 (Cut norm).

For positive integers $q\leqslant N$, we define the _cut norm_ of $f:\mathbb{Z}\to\mathbb{C}$ by

$\left\|f\right\|_{q,N}:=\sup\{|\Lambda_{q,N}(f,g_{1},g_{2})|,\ |\Lambda_{q,N}(g_{1},f,g_{2})|,\ |\Lambda_{q,N}(g_{1},g_{2},f)|\},$ (3.1)

where the supremum is taken over all 1-bounded functions $g_{i}:[N]\to\mathbb{C}$. We note that, in spite of our nomenclature, this is not a norm, but a seminorm. One could remedy this by summing over $y\geqslant 0$ in the counting operator (1.6).

Initially, the cut norm is too restrictive for us, so we begin by working with the weaker quantity

$\left\|f\right\|^{\flat}_{q,N}:=\sup\{|\Lambda_{q,N}(f,g_{1},g_{2})|,|\Lambda_{q,N}(g_{1},f,g_{2})|:|g_{i}|\leqslant 1\text{ and }\mathrm{supp}(g_{i})\subset[N]\},$ (3.2)

which we refer to as the _partial cut norm_. The following lemma is simply a rephrasing of [9, Theorem 7.1], which is the technical heart of that paper. See Definition 1.3 for the meaning of 'local function'.

###### Lemma 3.2 (Partial cut norm inverse theorem).

Let $q\leqslant N$ be positive integers, $\delta>0$, and $f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in $[N]$. Suppose that

$\left\|f\right\|^{\flat}_{q,N}\geqslant\delta.$

Then either $N\ll(q/\delta)^{O(1)}$ or there exists a 1-bounded local function $\phi$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq^{\prime}$ for some $q^{\prime}\ll\delta^{-O(1)}$, such that

$\sum_{x\in[N]}f(x)\phi(x)\gg\delta^{O(1)}N.$

###### Proof.

By compactness, there exist 1-bounded functions $g_{1},g_{2}:[N]\to\mathbb{C}$ such that either

$|\Lambda_{q,N}(f,g_{1},g_{2})|\geqslant\delta$ or $|\Lambda_{q,N}(g_{1},f,g_{2})|\geqslant\delta.$

In the latter case, we may apply [9, Theorem 7.1] to deduce that there exist positive integers $q^{\prime}\ll\delta^{-O(1)}$ and $N^{\prime}\gg(\delta/q)^{O(1)}N^{1/2}$ such that

$\sum_{x}\left|\sum_{y\in[N^{\prime}]}f(x+qq^{\prime}y)\right|\gg\delta^{O(1)}NN^{\prime}.$

In the former case, the reader may check that the argument of [9, Theorem 7.1] delivers the same conclusion (for details, see the second author's exposition [10]). To ease notation, write $Q:=qq^{\prime}$.
Partitioning the integers into arithmetic progressions of length $N^{\prime}$ and common difference $Q$ gives

$\delta^{O(1)}NN^{\prime}\ll\sum_{z\in[N^{\prime}]}\sum_{u\in[Q]}\sum_{x\in\mathbb{Z}}\left|\sum_{y\in[N^{\prime}]}f(Qz+QN^{\prime}x+u+Qy)\right|\\ \leqslant N^{\prime}\max_{z}\sum_{u\in[Q]}\sum_{x\in\mathbb{Z}}\left|\sum_{y\in[N^{\prime}]}f(Qz+QN^{\prime}x+u+Qy)\right|.$

Defining $\psi_{z}(u,x)$ to be the conjugate phase of the inner sum, we deduce the existence of $z$ for which

$\displaystyle\delta^{O(1)}N\ll\sum_{u\in[Q]}\sum_{x}\sum_{y\in[N^{\prime}]}f(Qz+QN^{\prime}x+u+Qy)\psi_{z}(u,x).$

The result follows on noting that every integer has a unique representation of the form $QN^{\prime}x+u+Qy$ with $u\in[Q]$, $x\in\mathbb{Z}$ and $y\in[N^{\prime}]$. Hence the map $Qz+QN^{\prime}x+u+Qy\mapsto\psi_{z}(u,x)$ is a local function of resolution $QN^{\prime}$ and modulus $Q$. ∎

Now we can prove an inverse theorem for the cut norm itself.

###### Lemma 3.3 (Full cut norm inverse theorem).

Let $q\leqslant N$ be positive integers, $\delta>0$, and $f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in $[N]$. Suppose that

$\left\|f\right\|_{q,N}\geqslant\delta.$

Then either $N\ll(q/\delta)^{O(1)}$ or there exist 1-bounded local functions $\phi_{1}$ and $\phi_{2}$, of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and moduli $qq_{1}$ and $qq_{2}$, respectively, for some $q_{1},q_{2}\ll\delta^{-O(1)}$ such that

$\left|\sum_{x\in[N]}f(x)\phi_{1}(x)\phi_{2}(x)\right|\gg\delta^{O(1)}N.$ (3.3)

###### Proof.

By the definition of the cut norm (3.1) and Lemma 3.2, we may assume that there are 1-bounded functions $g,h:[N]\to\mathbb{C}$ such that

$|\Lambda_{q,N}(g,h,f)|\geqslant\delta.$ (3.4)

Recalling that $M:=\lfloor\sqrt{N/q}\rfloor$, define the dual function

$F(x):=\mathbb{E}_{y\in[M]}h(x+y)f(x+qy^{2}).$

Re-parametrising (3.4) and applying the Cauchy–Schwarz inequality, we have that

$\delta^{2}\leqslant\mathbb{E}_{x\in[N]}|F(x)|^{2}=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}\overline{F(x)}h(x+y)f(x+qy^{2}).$

Recalling the definition of the partial cut norm (3.2), we deduce that

$\left\|F\right\|_{q,N}^{\flat}\geqslant\delta^{2}.$

Applying the partial cut norm inverse theorem (Lemma 3.2), there exists a 1-bounded local function $\phi_{1}$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq_{1}$ for some $q_{1}\ll\delta^{-O(1)}$ such that

$\sum_{x\in[N]}F(x)\phi_{1}(x)\gg\delta^{O(1)}N.$

Thus

$|\Lambda_{q,N}(\phi_{1},h,f)|\gg\delta^{O(1)}.$

We now re-run our argument on $h$ instead of $f$, deducing the existence of a 1-bounded local function $\phi_{2}$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq_{2}$ for some $q_{2}\ll\delta^{-O(1)}$ such that

$|\Lambda_{q,N}(\phi_{1},\phi_{2},f)|\gg\delta^{O(1)}.$

Expanding the counting operator and taking a maximum over $y\in[M]$ gives

$\displaystyle\delta^{O(1)}NM$ $\displaystyle\ll\left|\sum_{y\in[M]}\sum_{x}f(x)\phi_{1}(x-qy^{2})\phi_{2}(x-qy^{2}+y)\right|$ $\displaystyle\leqslant M\left|\sum_{x}f(x)\tilde{\phi}_{1}(x)\tilde{\phi}_{2}(x)\right|,$

where both $\tilde{\phi}_{i}$ are 1-bounded local functions of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and moduli $qq_{i}$ for some $q_{i}\ll\delta^{-O(1)}$. ∎
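To make the recurring notion of a local function concrete, the following sketch is our own illustration, based on our reading of Definition 1.3 (which appears earlier in the paper, and which the proofs above use via the fact that such a function depends only on $x\bmod q$ and $\lfloor x/M\rfloor$).

```python
import cmath

def local_function(psi, M, q):
    """A local function of resolution M and modulus q: its value at x
    depends only on the pair (x mod q, floor(x/M)).  This reading of
    Definition 1.3 is ours; the sketch is purely illustrative."""
    return lambda x: psi(x % q, x // M)

# A 1-bounded example built from explicit phases.
phi = local_function(lambda r, j: cmath.exp(2j * cmath.pi * (r / 5 + j / 7)),
                     M=100, q=5)
# phi is constant on sets such as {x : x = 2 mod 5, 200 <= x < 300}:
print(phi(202) == phi(247))  # True
```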
## 4\. A weak regularity lemma

Much of the material in this section is standard, and closely follows the expositions in Green [4] and Green–Tao [6]. To simplify the exposition of later arguments, our factors are not the $\sigma$-algebras of [4] and [6], but rather the sets of atoms of certain $\sigma$-algebras (from which the $\sigma$-algebras can obviously be recovered by taking the $\sigma$-algebra generated by the set of atoms).

###### Definition 4.1 (Factor).

We define a _factor_ $\mathcal{B}$ of $[N]$ to be a partition of $[N]$, so that $[N]=\sqcup_{B\in\mathcal{B}}B$. We say that a factor $\mathcal{B}^{\prime}$ _refines_ $\mathcal{B}$ if every element of $\mathcal{B}$ is a union of elements of $\mathcal{B}^{\prime}$. The _join_ $\mathcal{B}_{1}\vee\dots\vee\mathcal{B}_{d}$ of factors $\mathcal{B}_{1},\dots,\mathcal{B}_{d}$ is the factor formed by taking the $d$-fold intersections of the elements of $\mathcal{B}_{1}$, …, $\mathcal{B}_{d}$, that is,

$\mathcal{B}_{1}\vee\dots\vee\mathcal{B}_{d}:=\{B_{1}\cap\dots\cap B_{d}:B_{i}\in\mathcal{B}_{i}\text{ for }i=1,\dots,d\}.$

###### Definition 4.2 (Measurability, projection).

Given a factor $\mathcal{B}$, we say that a function $f:[N]\to\mathbb{C}$ is _$\mathcal{B}$-measurable_ if it is constant on the elements of $\mathcal{B}$. Define the _projection_ of any function $f:[N]\to\mathbb{C}$ onto $\mathcal{B}$ by

$\Pi_{\mathcal{B}}f(x)=\mathbb{E}_{y\in B_{x}}f(y),$ (4.1)

where $B_{x}$ is the element of $\mathcal{B}$ that contains $x$. Notice that $\Pi_{\mathcal{B}}f$ is $\mathcal{B}$-measurable, and is just the conditional expectation of $f$ with respect to the $\sigma$-algebra generated by the elements of $\mathcal{B}$.

We record some well-known properties of the projection operator $\Pi_{\mathcal{B}}$ (that is, properties of conditional expectation) in the next lemma.

###### Lemma 4.3 (Properties of the projection operator).

1. (i) The operator $\Pi_{\mathcal{B}}$ linearly projects onto the space of $\mathcal{B}$-measurable functions.

2. (ii) $\Pi_{\mathcal{B}}$ is self-adjoint with respect to the inner product $\left\langle f,g\right\rangle:=\sum_{x}f(x)\overline{g(x)}\qquad(f,g:[N]\to\mathbb{C}),$ so that $\left\langle f,\Pi_{\mathcal{B}}g\right\rangle=\left\langle\Pi_{\mathcal{B}}f,g\right\rangle$.

3. (iii) If $\mathcal{B}^{\prime}$ is a refinement of $\mathcal{B}$ then $\Pi_{\mathcal{B}^{\prime}}\Pi_{\mathcal{B}}f=\Pi_{\mathcal{B}}f.$

4. (iv) If $\mathcal{B}^{\prime}$ refines $\mathcal{B}$ then $\Pi_{\mathcal{B}}f$ is orthogonal to $\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f$.

###### Proof.

Inspecting the formula (4.1) reveals that $\Pi_{\mathcal{B}}$ is linear, that $\Pi_{\mathcal{B}}f$ is constant on elements of $\mathcal{B}$, and that if $f$ itself is constant on elements of $\mathcal{B}$, then $\Pi_{\mathcal{B}}f=f$. This establishes (i).

Interchanging the order of summation gives

$\begin{split}\left\langle f,\Pi_{\mathcal{B}}g\right\rangle=\sum_{B\in\mathcal{B}}|B|^{-1}\sum_{x,y\in B}f(x)\overline{g(y)}=\left\langle\Pi_{\mathcal{B}}f,g\right\rangle.\end{split}$

This proves that $\Pi_{\mathcal{B}}$ is self-adjoint. The first refinement property (iii) follows from the fact that $\Pi_{\mathcal{B}}f$ is $\mathcal{B}^{\prime}$-measurable. For (iv), we utilise self-adjointness of $\Pi_{\mathcal{B}}$ and the first refinement property to conclude that

$\begin{split}\left\langle\Pi_{\mathcal{B}}f,\Pi_{\mathcal{B}}f-\Pi_{\mathcal{B}^{\prime}}f\right\rangle&=\left\langle\Pi_{\mathcal{B}}f,\Pi_{\mathcal{B}}f-f\right\rangle=\left\langle f,\Pi_{\mathcal{B}}f-\Pi_{\mathcal{B}}f\right\rangle=0.\end{split}$ ∎
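Since factors here are plain partitions, the projection (4.1) is easy to experiment with numerically. The following sketch, our own illustration, implements joins and projections on $[N]$ and spot-checks the self-adjointness (ii) and orthogonality (iv) of Lemma 4.3.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 120
x = np.arange(N)
B = x % 4                     # a simple congruence factor mod 4
Bp = (x % 4) + 4 * (x // 10)  # its join with length-10 intervals: refines B

def project(f, atoms):
    """Pi_B f as in (4.1): replace f by its average over each atom."""
    out = np.empty_like(f)
    for a in np.unique(atoms):
        out[atoms == a] = f[atoms == a].mean()
    return out

f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
ip = lambda u, v: np.vdot(v, u)  # <u, v> = sum_x u(x) * conj(v(x))

print(np.isclose(ip(f, project(g, B)), ip(project(f, B), g)))            # (ii)
print(np.isclose(ip(project(f, Bp) - project(f, B), project(f, B)), 0))  # (iv)
```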
Now we describe the particular type of factors that will be relevant to us.

###### Definition 4.4 (Local factor).

A _simple real factor_ of resolution $M$ is a factor of $[N]$ obtained by partitioning $\mathbb{R}$ into intervals all of length $M$. A _simple congruence factor_ of modulus $q$ is the factor of $[N]$ obtained by partitioning into congruence classes mod $q$. We say that $\mathcal{B}$ is a _simple local factor_ of resolution $M$ and modulus $q$ if it is the join of a simple real factor of resolution $M$ and a simple congruence factor of modulus $q$. Notice that $\mathcal{B}$ is a simple local factor if and only if it consists of the level sets of a local function (Definition 1.3) of resolution $M$ and modulus $q$.

A _local factor_ of dimension $d$, resolution $M$ and modulus $q$ is the join of $d$ simple local factors $\mathcal{B}_{i}$, each of resolution $M_{i}$ and modulus $q_{i}$, where $M_{i}\geqslant M$ and $q=\mathrm{lcm}[q_{1},\dots,q_{d}]$.

Local factors of large resolution and of small modulus and dimension necessarily contain few sets. This fact will be useful later in the proof of Lemma 2.1.

###### Lemma 4.5 (Size of a local factor).

If $\mathcal{B}$ is a local factor of dimension $d$, resolution $M$, and modulus $q$, then

$|\mathcal{B}|\leqslant qd\left(\frac{N}{M}+2\right).$

###### Proof.

By the definition of a local factor, it suffices to bound the size of the join of $d$ simple real factors, and then bound the size of the join of $d$ simple congruence factors. The product of these two numbers gives us our final bound.

Joining $d$ simple congruence factors with moduli $q_{1},\dots,q_{d}$ results in another simple congruence factor of modulus $q=\mathrm{lcm}[q_{1},\dots,q_{d}]$. The number of parts in such a partition is $q$.

The join of $d$ simple real factors partitions $[N]$ into intervals. The upper endpoint of each of these intervals is either equal to $N$ or is equal to an endpoint of an interval in one of the original simple real factors. For a simple real factor of resolution $M$, at most $1+N/M$ upper endpoints lie in $[1,N)$. Hence the number of intervals in the join of $d$ simple real factors of resolutions $M_{1}$, …, $M_{d}$ is at most $2d+N(M_{1}^{-1}+\dots+M_{d}^{-1})$. ∎

We now prove a weak regularity lemma for the cut norm via an energy increment argument.

###### Lemma 4.6 (Weak regularity).

Let $q\leqslant N$ be positive integers and $\delta>0$. Either $N\ll(q/\delta)^{O(1)}$, or for any function $f:[N]\to[0,1]$ there exists a local factor $\mathcal{B}$ of dimension $d\ll\delta^{-O(1)}$, resolution $\gg(\delta/q)^{O(1)}N^{1/2}$, and modulus $qq^{\prime}$ for some $q^{\prime}\leqslant O\left(1/\delta\right)^{O(d)}$ such that

$\left\|f-\Pi_{\mathcal{B}}f\right\|_{q,N}\leqslant\delta.$ (4.2)

###### Proof.

We run an energy increment argument, initialising at stage $0$ with the trivial factor $\mathcal{B}_{0}:=\left\{[N]\right\}$. Suppose that at stage $d$ of this iteration we have a local factor $\mathcal{B}$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$, dimension at most $2d$, and modulus $qq^{\prime}$ for some $q^{\prime}\leqslant O(1/\delta)^{O(d)}$. In addition, suppose that we have the energy lower bound

$\left\|\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}^{2}\gg d\delta^{O(1)}N.$ (4.3)

With these assumptions in place, we query whether the following holds:

$\left\|f-\Pi_{\mathcal{B}}f\right\|_{q,N}\leqslant\delta.$ (4.4)

If so, then the process terminates. If not, we show how our iteration may proceed to stage $d+1$; in outline, the step is the loop sketched below.
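(The following schematic is our own addition and not part of the paper's argument. The callable `witness` is a hypothetical stand-in for the cut norm inverse theorem, Lemma 3.3, while `project` and `join` are assumed implementations of (4.1) and of joining with level-set factors.)

```python
import numpy as np

def weak_regularity(f, project, join, witness, trivial_factor):
    """Schematic of the energy increment behind Lemma 4.6 (sketch only).

    project(f, B) should compute Pi_B f as in (4.1); join(B, phi1, phi2)
    should join B with the level-set factors of phi1 and phi2; witness(g)
    stands in for Lemma 3.3, returning correlating local functions
    (phi1, phi2) when the cut norm of g exceeds delta, or None otherwise.
    All three are assumptions supplied by the caller.
    """
    B = trivial_factor
    while True:
        g = f - project(f, B)
        found = witness(g)       # Lemma 3.3, used as an oracle
        if found is None:        # (4.4) holds, so we may stop
            return B
        B = join(B, *found)      # dimension grows by at most 2; the energy
                                 # grows by >> delta^{O(1)} * N, forcing
                                 # termination after O(delta^{-O(1)}) steps

# Degenerate demo run: an oracle that always reports a small cut norm.
print(weak_regularity(np.full(8, 0.5), lambda f, B: f,
                      None, lambda g: None, "trivial factor"))
```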
Applying the cut norm inverse theorem (Lemma 3.3), we conclude that there exist 1-bounded local functions $\phi_{i}$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq_{i}$ for some $q_{i}\leqslant\delta^{-O(1)}$ such that

$\left|\left\langle f-\Pi_{\mathcal{B}}f,\phi_{1}\phi_{2}\right\rangle\right|=\left|\sum_{x\in[N]}(f-\Pi_{\mathcal{B}}f)(x)\phi_{1}(x)\phi_{2}(x)\right|\gg\delta^{O(1)}N.$

Let $\mathcal{B}^{\prime}$ denote the join of $\mathcal{B}$ and the simple local factors generated by $\phi_{1}$ and $\phi_{2}$, so that $\mathcal{B}^{\prime}$ is a local factor of dimension at most $2(d+1)$, resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq^{\prime\prime}$ for some $q^{\prime\prime}\leqslant q^{\prime}q_{1}q_{2}\leqslant O(1/\delta)^{O(d+1)}$.

Since $\phi_{1}\phi_{2}$ is $\mathcal{B}^{\prime}$-measurable, we can use the properties listed in Lemma 4.3 together with the Cauchy–Schwarz inequality to deduce that

$\begin{split}\left|\left\langle f-\Pi_{\mathcal{B}}f,\phi_{1}\phi_{2}\right\rangle\right|&=\left|\left\langle f-\Pi_{\mathcal{B}}f,\Pi_{\mathcal{B}^{\prime}}(\phi_{1}\phi_{2})\right\rangle\right|=\left|\left\langle\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f,\phi_{1}\phi_{2}\right\rangle\right|\\ &\leqslant N^{1/2}\left\|\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}.\end{split}$

It follows that

$\left\|\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}\gg\delta^{O(1)}N^{1/2}.$

Lemma 4.3 (iv) tells us that $\Pi_{\mathcal{B}}f$ is orthogonal to $\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f$, hence by Pythagoras's theorem

$\left\|\Pi_{\mathcal{B}^{\prime}}f\right\|_{\ell^{2}}^{2}=\left\|\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}^{2}+\left\|\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}^{2}.$

The energy bound (4.3) follows for $\mathcal{B}^{\prime}$, allowing us to proceed to the next stage of our iteration.

Since the function $f$ is $1$-bounded, the projection $\Pi_{\mathcal{B}}f$ is also 1-bounded, hence the energy (4.3) is always bounded above by $N$. It follows that this energy increment must terminate at stage $d$ for some $d\ll\delta^{-O(1)}$, yielding the lemma. ∎

## 5\. The density increment lemma

In this section we prove Lemma 2.1, modelling our argument on that given by Green and Tao [6, Corollary 5.8]. We first record, for the sake of convenience, the following immediate consequence of the triangle inequality.

###### Lemma 5.1 ($\ell^{1}$-control).

Suppose that $N\geqslant q$. Then for any $f_{0},f_{1},f_{2}:[N]\to\mathbb{C}$ we have

$|\Lambda_{q,N}(f_{0},f_{1},f_{2})|\leqslant N^{-1}\left\|f_{i}\right\|_{\ell^{1}}\prod_{j\neq i}\left\|f_{j}\right\|_{\infty}.$

###### Proof.

We prove the result for $i=1$, the other cases being similar. A reparametrisation gives

$\displaystyle\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|$ $\displaystyle=\left|\mathbb{E}_{x\in[N]}f_{1}(x)\mathbb{E}_{y\in[M]}f_{0}(x-y)f_{2}(x+qy^{2}-y)\right|$ $\displaystyle\leqslant\mathbb{E}_{x\in[N]}|f_{1}(x)|\mathbb{E}_{y\in[M]}|f_{0}(x-y)||f_{2}(x+qy^{2}-y)|.$ ∎

We are now in a position to prove Lemma 2.1, and thereby complete our proof of Theorem 1.1.

###### Proof of Lemma 2.1.

Let $A$ satisfy the assumptions of Lemma 2.1. Increasing $\delta$ only strengthens our conclusion, so we may assume that $|A|=\delta N$. Since $\Lambda_{q,N}(1_{A})=0$, we have that

$\left|\Lambda_{q,N}(1_{A})-\Lambda_{q,N}(\delta 1_{[N]})\right|=\delta^{3}\Lambda_{q,N}(1_{[N]})\gg\delta^{3}.$
Applying the weak regularity lemma (Lemma 4.6), there exists a local factor $\mathcal{B}$ of dimension $d\ll\delta^{-O(1)}$, resolution $\gg(\delta/q)^{O(1)}N^{1/2}$, and modulus $qq^{\prime}$ for some $q^{\prime}\leqslant O(1/\delta)^{O(d)}$ such that

$\left\|1_{A}-\Pi_{\mathcal{B}}1_{A}\right\|_{q,N}\leqslant\tfrac{1}{6}\delta^{3}\Lambda_{q,N}(1_{[N]}).$

Setting $f:=\Pi_{\mathcal{B}}1_{A}$, a telescoping identity thus yields

$\left|\Lambda_{q,N}(f)-\Lambda_{q,N}(\delta 1_{[N]})\right|\geqslant\tfrac{1}{2}\delta^{3}\Lambda_{q,N}(1_{[N]})\gg\delta^{3}.$

Define the $\mathcal{B}$-measurable set

$S:=\left\{x\in[N]:f(x)\geqslant(1+c)\delta\right\},$

where $c>0$ is a sufficiently small absolute constant that will be chosen to make the following argument valid. By Lemma 5.1 and a telescoping identity, we have $\left|\Lambda_{q,N}(f)-\Lambda_{q,N}(f1_{S^{c}})\right|\leqslant 3|S|/N$, so that

$\tfrac{|S|}{N}+\left|\Lambda_{q,N}(f1_{S^{c}})-\Lambda_{q,N}(\delta 1_{[N]})\right|\gg\delta^{3}.$

Yet another telescoping identity, in conjunction with Lemma 5.1, gives

$\displaystyle\left|\Lambda_{q,N}(f1_{S^{c}})-\Lambda_{q,N}(\delta 1_{[N]})\right|\ll\tfrac{\delta^{2}}{N}\left\|f1_{S^{c}}-\delta 1_{[N]}\right\|_{\ell^{1}}\leqslant\tfrac{\delta^{2}}{N}\left\|f-\delta 1_{[N]}\right\|_{\ell^{1}}+\tfrac{|S|}{N},$

so that

$|S|+\delta^{2}\left\|f-\delta 1_{[N]}\right\|_{\ell^{1}}\gg\delta^{3}N.$

Since $f-\delta 1_{[N]}$ has mean zero, its $\ell^{1}$-norm is equal to twice the $\ell^{1}$-norm of its positive part. The function $\left(f-\delta 1_{[N]}\right)_{+}$ can only exceed $c\delta$ on $S$, so taking $c$ small enough gives $|S|\gg\delta^{3}N$.

Letting $B$ denote the largest element of $\mathcal{B}$ for which $B\subset S$, the bound in Lemma 4.5 yields

$|B|\gg q^{-O(1)}\delta^{O(d)}2^{-O(d)}N^{1/2}.$

By construction (see Definition 4.4), the set $B$ is an arithmetic progression of common difference $qq^{\prime}$ with $q^{\prime}\leqslant O(1/\delta)^{O(d)}$. Moreover, the density of $A$ on $B$ is equal to the value of $f(x)$ for any $x\in B$, and this is at least $(1+c)\delta$ by the definition of $S$. ∎

## 6\. Global control by major arc Fourier coefficients

The purpose of this section is to prove Theorem 1.2 and Corollary 1.4. We begin with an alternative version of Lemma 3.2, replacing the rigid local function found therein with something more continuous.

###### Definition 6.1 ($C$-Lipschitz).

We say that $\phi:\mathbb{Z}\to\mathbb{C}$ is _$C$-Lipschitz along $q\cdot\mathbb{Z}$_ if for any $x,y\in\mathbb{Z}$ we have

$|\phi(x+qy)-\phi(x)|\leqslant C|y|.$

Recalling our definition (1.7) of the Fejér kernel, we observe that a function of the form

$x\mapsto\sum_{h}\mu_{H}(h)f(x+qh)$ (6.1)

is Lipschitz along $q\cdot\mathbb{Z}$.

###### Lemma 6.2.

Let $q,H$ be positive integers and $f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded. If $\phi$ is defined as in (6.1), then $\phi$ is $O(H^{-1})$-Lipschitz along $q\cdot\mathbb{Z}$.

###### Proof.

Recalling (1.7), the triangle inequality for $|\cdot|$ and $\max\{\cdot,0\}$ shows that $|\mu_{H}(h+y)-\mu_{H}(h)|\leqslant|y|/\left\lfloor H\right\rfloor^{2}$ for all $h,y\in\mathbb{Z}$. Hence a change of variables gives

$|\phi(x+qy)-\phi(x)|\leqslant\sum_{h}|\mu_{H}(h-y)-\mu_{H}(h)|\ll\frac{|y|}{H^{2}}\sum_{h\in(-H,H)\cup(y-H,y+H)}1\ll\frac{|y|}{H}.$ ∎
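As a quick numerical sanity check of Lemma 6.2 (our own illustration), the sketch below builds $\phi$ from (6.1) for a random 1-bounded $f$ and confirms that the increments $|\phi(x+qy)-\phi(x)|$ stay below a fixed multiple of $|y|/H$.

```python
import numpy as np

rng = np.random.default_rng(1)
q, H = 3, 50
f_cache = {}  # a random 1-bounded f on Z, evaluated lazily
f = lambda x: f_cache.setdefault(x, np.exp(2j * np.pi * rng.random()))

def mu(h):
    """The normalised Fejér kernel (1.7), here with integer H."""
    return max(1 - abs(h) / H, 0) / H

def phi(x):
    """The smoothed function (6.1): sum_h mu_H(h) f(x + q*h)."""
    return sum(mu(h) * f(x + q * h) for h in range(-H + 1, H))

# Spot-check the O(1/H) Lipschitz bound along q.Z; the constant 4 comes
# from |mu(h-y) - mu(h)| <= y/H^2 on at most 4H relevant values of h.
for _ in range(5):
    x, y = int(rng.integers(0, 1000)), int(rng.integers(1, 20))
    assert abs(phi(x + q * y) - phi(x)) <= 4 * y / H
print("Lipschitz bound verified on samples")
```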
Now we prove another partial cut norm inverse theorem, this time getting correlation with functions that are Lipschitz along progressions with small common difference.

###### Lemma 6.3 (Partial cut norm inverse theorem II).

Let $N$ be a positive integer, $\delta>0$, and $f,g,h:\mathbb{Z}\to\mathbb{C}$ be $1$-bounded functions with support in $[N]$. Suppose that

$\left|\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[N^{1/2}]}f(x)g(x+y)h(x+y^{2})\right|\geqslant\delta.$

Then either $N\ll\delta^{-O(1)}$, or there exists $q\ll\delta^{-O(1)}$ and a 1-bounded function $\phi$ that is $O(\delta^{-O(1)}N^{-1/2})$-Lipschitz along $q\cdot\mathbb{Z}$ such that

$\sum_{x\in[N]}g(x)\phi(x)\gg\delta^{O(1)}N.$

###### Proof.

Applying [9, Theorem 7.1], we obtain positive integers $q\ll\delta^{-O(1)}$ and $N^{1/2}\geqslant M\gg\delta^{O(1)}N^{1/2}$ such that

$\sum_{x}\left|\sum_{y\in[M]}g(x+qy)\right|\gg\delta^{O(1)}NM.$

By the Cauchy–Schwarz inequality and a change of variables, we have

$\sum_{x}g(x)\sum_{y_{1},y_{2}\in[M]}\overline{g(x+q(y_{1}-y_{2}))}\gg\delta^{O(1)}NM^{2}.$

Setting

$\phi(x):=\mathbb{E}_{y_{1},y_{2}\in[M]}\overline{g(x+q(y_{1}-y_{2}))},$

Lemma 6.2 shows that this function has the required properties. ∎

Before proving Theorem 1.2, we record two standard facts.

###### Lemma 6.4.

There are at most $O(N^{4})$ solutions $x\in[N]^{6}$ to the equation

$x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=x_{4}^{2}+x_{5}^{2}+x_{6}^{2}.$

###### Proof.

There are a number of ways to prove this. Perhaps the most robust is via the circle method, see [3]. The result can also be read out of [1, Proposition 1.10]. ∎

###### Lemma 6.5 (Weyl's inequality).

Let $P\subset\mathbb{Z}$ be an arithmetic progression with common difference $q$ and let $0<\delta\leqslant 1$. Suppose that

$\left|\sum_{x\in P}e(\alpha x^{2})\right|\geqslant\delta|P|.$

Then either $|P|\ll\delta^{-O(1)}$ or there exists a positive integer $q^{\prime}\ll\delta^{-O(1)}$ such that

$\|q^{\prime}q^{2}\alpha\|_{\mathbb{T}}\ll\delta^{-O(1)}|P|^{-2}.$

###### Proof.

Let $P=x_{0}+q\cdot[N]$, so that our exponential sum becomes

$\sum_{x\in P}e(\alpha x^{2})=\sum_{y\in[N]}e(\alpha q^{2}y^{2}+2\alpha qx_{0}y+\alpha x_{0}^{2}).$

Applying [5, Lemma A.11], either $N\ll\delta^{-O(1)}$ or the conclusion of our lemma follows. ∎

###### Proof of Theorem 1.2.

Write $\Lambda_{N}$ for the counting operator $\Lambda_{1,N}$ (that is, the average (1.6) with $q=1$). Let $f,g,h:[N]\to\mathbb{C}$ be 1-bounded functions satisfying

$|\Lambda_{N}(f,g,h)|\geqslant\delta.$

Define the seminorm

$\left\|g\right\|:=\sup\left\{|\Lambda_{N}(g_{1},g,g_{2})|:|g_{i}|\leqslant 1\text{ and }\mathrm{supp}(g_{i})\subset[N]\right\}$

and the dual function

$F(x):=\mathbb{E}_{y\in[N^{1/2}]}f(x-y)h(x+y^{2}-y).$

We follow the argument in the proof of Lemma 3.3 to deduce that

$\left\|F\right\|\geqslant\delta^{2}.$

Hence, by Lemma 6.3, there exists $q\ll\delta^{-O(1)}$ and a 1-bounded function $\phi$ that is $O(\delta^{-O(1)}N^{-1/2})$-Lipschitz along $q\cdot\mathbb{Z}$ and satisfies

$\sum_{x\in[N]}F(x)\phi(x)\gg\delta^{O(1)}N.$

Expanding the definition of the dual function, we have

$\sum_{x\in[N]}\sum_{y\in[N^{1/2}]}f(x)\phi(x+y)h(x+y^{2})\gg\delta^{O(1)}N^{3/2}.$

Let us partition $\mathbb{Z}$ into arithmetic progressions $P$, each of common difference $q$ and length $M$, where $M$ will be chosen shortly. For each such arithmetic progression $P$, fix an element $y_{P}\in P$.
Using the Lipschitz property of $\phi$, for any $x\in\mathbb{Z}$ and $y\in P$ we have

$|\phi(x+y_{P})-\phi(x+y)|\ll\delta^{-O(1)}MN^{-1/2}.$

Hence,

$\left|\sum_{P}\sum_{x\in[N]}\sum_{y\in P\cap[N^{1/2}]}f(x)[\phi(x+y)-\phi(x+y_{P})]h(x+y^{2})\right|\ll\delta^{-O(1)}MN.$

We can therefore take $M$ sufficiently small to satisfy both $M\gg\delta^{O(1)}N^{1/2}$ and

$\left|\sum_{P}\sum_{x}\sum_{y\in P\cap[N^{1/2}]}f(x)\phi(x+y_{P})h(x+y^{2})\right|\gg\delta^{O(1)}N^{3/2}.$

Set $f_{P}(x):=f(x)\phi(x+y_{P})$. The number of progressions $P$ that intersect $[N^{1/2}]$ is at most $O(N^{1/2}M^{-1}+q)=O(\delta^{-O(1)})$. Therefore, the pigeon-hole principle gives a progression $P$ for which

$\left|\sum_{x}\sum_{y\in P\cap[N^{1/2}]}f_{P}(x)h(x+y^{2})\right|\gg\delta^{O(1)}N^{3/2}.$ (6.2)

In particular, $|P\cap[N^{1/2}]|\gg\delta^{O(1)}N^{1/2}$. Writing $S_{P}(\alpha)$ for $\sum_{y\in P\cap[N^{1/2}]}e\left(\alpha y^{2}\right)$, the orthogonality relations allow us to reformulate (6.2) as

$\displaystyle\left|\int_{\mathbb{T}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\gg\delta^{O(1)}N^{3/2}.$

Let $\eta>0$ be a parameter to be determined shortly, and define the major arcs

$\mathfrak{M}:=\left\{\alpha\in\mathbb{T}:|S_{P}(\alpha)|\geqslant\eta N^{1/2}\right\}.$

Parseval's identity then gives

$\left|\int_{\mathbb{T}\setminus\mathfrak{M}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\leqslant\eta N^{1/2}\bigl\|\hat{f}_{P}\bigr\|_{2}\bigl\|\hat{h}\bigr\|_{2}\leqslant\eta N^{3/2}.$

Hence we may take $\eta\gg\delta^{O(1)}$ and ensure that

$\displaystyle\left|\int_{\mathfrak{M}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\gg\delta^{O(1)}N^{3/2}.$

By Lemma 6.4 and orthogonality, we have $\left\|S_{P}\right\|_{6}\ll N^{1/3}$. Thus, by Hölder's inequality, we get that

$\left|\int_{\mathfrak{M}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\leqslant\bigl\|\hat{f}_{P}\bigr\|_{2}\bigl\|\hat{h}\bigr\|_{2}^{2/3}\bigl\|S_{P}\bigr\|_{6}\sup_{\alpha\in\mathfrak{M}}\bigl|\hat{h}(-\alpha)\bigr|^{1/3}.$

We therefore deduce that there exists $\alpha\in\mathfrak{M}$ such that

$\bigl|\hat{h}(-\alpha)\bigr|\gg\delta^{O(1)}N.$

Finally, an application of Weyl's inequality (Lemma 6.5) shows that if $-\alpha\in\mathfrak{M}$ then $\alpha$ has the required Diophantine approximation property. ∎

###### Proof of Corollary 1.4.

Let $\alpha\in\mathbb{R}$ be the frequency and $q$ the positive integer provided by Theorem 1.2. For any integer $a$ and positive integer $M$, if $x,y\in a+q\cdot[M]$, then

$\left|e(\alpha x)-e(\alpha y)\right|\leqslant 2\pi\left\|\alpha(x-y)\right\|_{\mathbb{T}}\ll\delta^{-O(1)}MN^{-1}.$

Partitioning $\mathbb{Z}$ into arithmetic progressions of common difference $q$ and length $M$ then gives

$\delta^{O(1)}N\ll\sum_{P}\Bigl|\sum_{x\in P}h(x)\Bigr|+\delta^{-O(1)}M.$

We thus take $M\gg\delta^{O(1)}N$ sufficiently small to ensure that

$\delta^{O(1)}N\ll\sum_{P}\Bigl|\sum_{x\in P}h(x)\Bigr|.$

Write $\theta_{P}$ for the conjugate phase of the inner sum. Then the map $x\mapsto\sum_{P}\theta_{P}1_{P}(x)$ is a local function of resolution $\gg\delta^{O(1)}N$ and modulus $\ll\delta^{-O(1)}$, yielding the corollary. ∎
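To give a feel for the objects in the proof of Theorem 1.2, the following sketch (our own illustration) computes the quadratic exponential sum $S_{P}(\alpha)$ on a progression and scans for the major arcs where $|S_{P}(\alpha)|$ is a positive proportion of its trivial bound; unsurprisingly, the large values cluster near rationals with small denominator.

```python
import numpy as np

N = 10_000
q, x0 = 3, 2  # the progression P = {y : y = x0 mod q}, within [N^{1/2}]
ys = np.array([y for y in range(1, int(np.sqrt(N)) + 1) if y % q == x0 % q])

def S_P(alpha):
    """The quadratic sum S_P(alpha) = sum_{y in P cap [N^{1/2}]} e(alpha*y^2)."""
    return np.exp(2j * np.pi * alpha * ys**2).sum()

# Scan T for {alpha : |S_P(alpha)| >= eta * |P cap [N^{1/2}]|}.
eta = 0.5
alphas = np.linspace(0, 1, 2001, endpoint=False)
major = [a for a in alphas if abs(S_P(a)) >= eta * len(ys)]
print([round(float(a), 3) for a in major][:12])
```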
## 7\. Longer progressions

As mentioned in §1.3, the main obstacle to generalising our polylogarithmic bound to longer configurations such as (1.2) is in obtaining an appropriate generalisation of Lemma 3.3; in particular, showing that if the relevant counting operator is large, then _all_ functions must correlate with a product of a bounded number of local functions.

Let us demonstrate where the argument breaks down for $m>2$. Given polynomials as in (1.2) and 1-bounded functions $f_{0},f_{1},\dots,f_{m}:[N]\to\mathbb{C}$, define the counting operator

$\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},f_{1},\dots,f_{m}):=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[N^{1/\deg P_{m}}]}f_{0}(x)f_{1}(x+P_{1}(y))\dotsm f_{m}(x+P_{m}(y)).$

Using the main technical result of [8], [8, Theorem 3.3], one can show that if

$\left|\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},f_{1},\dots,f_{m})\right|\geqslant\delta,$

then both $f_{0}$ and $f_{1}$ correlate with local functions $\phi_{0}$ and $\phi_{1}$. Combining this with a dual function argument, as in our proofs of Theorem 1.2 and Lemma 3.3, one may conclude that

$\left|\Lambda_{P_{1},\dots,P_{m}}^{N}(\phi_{0},\phi_{1},f_{2},\dots,f_{m})\right|\gg\delta^{O(1)}.$

If $m=2$, one can then pigeon-hole in the smaller $y$ variable appearing in the counting operator (as we do in the proof of Lemma 3.3) to conclude that $f_{2}$ correlates with a product of two local functions. It is this simple pigeon-holing argument that fails when $m>2$.

### 7.1. An alternative strategy for longer progressions

A more productive strategy is to follow our proof of Theorem 1.2 instead of that of Theorem 1.1. In proving Theorem 1.2 we replace the counting operator $\Lambda_{y,y^{2}}^{N}(f_{0},f_{1},f_{2})$ with $\Lambda_{y,y^{2}}^{N}(f_{0},\phi,f_{2})$, where $\phi$ is a local function that is constant on progressions of length $\approx N^{1/2}$ with common difference of size $O(1)$. Provided that we pass to appropriate subprogressions in all of the variables appearing in our counting operator, we can exploit the properties of this local function and 'remove' it from our count. In effect (after passing to subprogressions of bounded common difference), we replace the count $\Lambda_{y,y^{2}}^{N}(f_{0},f_{1},f_{2})$ with one of the form $\Lambda_{Q}^{N^{\prime}}(f_{0},f_{2})$, where $Q$ is a quadratic polynomial and $N^{\prime}$ is slightly smaller than $N$.

Generalising this approach, one can use [8, Theorem 3.3] to replace the counting operator $\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},f_{1},\dots,f_{m})$ with $\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},\phi,f_{2},\dots,f_{m})$, where $\phi$ is a local function. Provided that this local function has resolution $\gg N^{\deg P_{1}/\deg P_{m}}$ and common difference $q\ll 1$, we have $\phi(x+P_{1}(y))\approx\phi(x)$ for any $x\in\mathbb{Z}$ and any $y$ constrained to a subprogression of common difference $q$ and length $\approx N^{\deg P_{1}/\deg P_{m}}$. Passing to subprogressions in $x$ and $y$, one should then be able to replace the operator $\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},\phi,f_{2},\dots,f_{m})$ by one of the form

$\Lambda_{Q_{2},\dots,Q_{m}}^{N^{\prime}}(f_{0},f_{2},\dots,f_{m}).$

Applying induction on $m$ may then allow one to show that every function in the original counting operator correlates with a local function. The main impediment to carrying out this strategy is that the polynomials $Q_{2}$, …, $Q_{m}$, which arise on passing to a subprogression, may not satisfy the hypotheses required to reapply [8, Theorem 3.3]. It is likely that the polynomials are sufficiently well-behaved for the arguments of [8] to remain valid, but we leave this verification to the energetic reader.
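To make the general operator concrete, here is a direct numerical evaluation of $\Lambda^{N}_{P_{1},\dots,P_{m}}$ from its definition, in the same spirit as the sketch in §1.5 (our own illustration; the polynomials are passed as callables, and `deg_Pm` must be supplied since it fixes the range of $y$).

```python
def general_counting_operator(fs, polys, N, deg_Pm):
    """Evaluate Lambda^N_{P_1,...,P_m}(f_0,...,f_m) from its definition.

    fs = [f_0, ..., f_m] are callables int -> complex, polys =
    [P_1, ..., P_m] are callables int -> int, and deg_Pm is the degree
    of P_m, which fixes the range [N^{1/deg P_m}] of the inner variable.
    Purely illustrative (beware float rounding in N ** (1/deg_Pm)).
    """
    Y = int(N ** (1 / deg_Pm))
    total = 0
    for x in range(1, N + 1):
        for y in range(1, Y + 1):
            term = fs[0](x)
            for f, P in zip(fs[1:], polys):
                term *= f(x + P(y))
            total += term
    return total / (N * Y)

# With m = 2, P_1(y) = y and P_2(y) = y^2 this recovers Lambda_{1,N}.
one = lambda x: 1 if 1 <= x <= 400 else 0
print(general_counting_operator([one, one, one],
                                [lambda y: y, lambda y: y * y],
                                N=400, deg_Pm=2))
```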
## References

* Bou [89] J. Bourgain. On $\Lambda(p)$-subsets of squares. Israel J. Math., 67(3):291–311, 1989.
* Bou [99] J. Bourgain. On triples in arithmetic progression. Geom. Funct. Anal., 9(5):968–984, 1999.
* Dav [05] H. Davenport. Analytic methods for Diophantine equations and Diophantine inequalities. Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 2005. With a foreword by R. C. Vaughan, D. R. Heath-Brown and D. E. Freeman; edited and prepared for publication by T. D. Browning.
* Gre [07] B. Green. Montréal notes on quadratic Fourier analysis. In Additive combinatorics, volume 43 of CRM Proc. Lecture Notes, pages 69–102. Amer. Math. Soc., Providence, RI, 2007.
* GT [08] B. Green and T. Tao. Quadratic uniformity of the Möbius function. Ann. Inst. Fourier (Grenoble), 58(6):1863–1935, 2008.
* GT [09] B. Green and T. Tao. New bounds for Szemerédi's theorem. II. A new bound for $r_{4}(N)$. In Analytic number theory, pages 180–204. Cambridge Univ. Press, Cambridge, 2009.
* HB [87] D. R. Heath-Brown. Integer sets containing no arithmetic progressions. J. London Math. Soc. (2), 35(3):385–394, 1987.
* Pel [19] S. Peluse. Bounds for sets with no polynomial progressions. ArXiv e-prints, 2019.
* PP [19] S. Peluse and S. Prendiville. Quantitative bounds in the non-linear Roth theorem. ArXiv e-prints, 2019.
* Pre [20] S. Prendiville. The inverse theorem for the nonlinear Roth configuration: an exposition. ArXiv e-prints, 2020.
* Rot [53] K. F. Roth. On certain sets of integers. J. London Math. Soc., 28:104–109, 1953.
* Sár [78] A. Sárközy. On difference sets of sequences of integers. I. Acta Math. Acad. Sci. Hungar., 31(1–2):125–149, 1978.
* Sze [90] E. Szemerédi. Integer sets containing no arithmetic progressions. Acta Math. Hungar., 56(1–2):155–158, 1990.
* Tao [06] T. Tao. Obstructions to uniformity and arithmetic patterns in the primes. Pure Appl. Math. Q., 2(2, Special Issue: In honor of John H. Coates. Part 2):395–433, 2006.
# Mode-locked ultrashort pulses from an 8 µm wavelength semiconductor laser

Johannes Hillbrand1,2,∗, Nikola Opačak1, Marco Piccardo2,3, Harald Schneider4, Gottfried Strasser1, Federico Capasso2, Benedikt Schwarz1,2,†

1Institute of Solid State Electronics, TU Wien, Gußhausstraße 25, 1040 Vienna, Austria
2Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, USA
3CNST – Fondazione Istituto Italiano di Tecnologia, Via Pascoli 70/3, 20133 Milano, Italy
4Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum Dresden-Rossendorf, Germany

Quantum cascade lasers (QCL) have revolutionized the generation of mid-infrared light. Yet, the ultrafast carrier transport in mid-infrared QCLs has so far constituted a seemingly insurmountable obstacle for the formation of ultrashort light pulses. Here, we demonstrate that careful quantum design of the gain medium and control over the intermode beat synchronization enable transform-limited picosecond pulses from QCL frequency combs. Both an interferometric radio-frequency technique and second-order autocorrelation shed light on the pulse dynamics and confirm that mode-locked operation is achieved from threshold to rollover current. Being electrically pumped and compact, mode-locked QCLs pave the way towards monolithically integrated non-linear photonics in the molecular fingerprint region beyond 6 µm wavelength.

∗<EMAIL_ADDRESS> †<EMAIL_ADDRESS>

The discovery of ultrashort light pulses has led to numerous breakthroughs in science and technology, including frequency combs1, high-speed optical telecommunication2 and refractive surgery in ophthalmology3. Nowadays, optical pulses are routinely generated in mode-locked lasers operating in the visible or near-infrared range4, 5. Currently, large efforts are aimed at bringing ultrafast laser science in the mid-infrared (MIR) region to a similarly high degree of maturity6. Due to the lack of suitable gain media, methods for the generation of pulses in the molecular fingerprint region beyond 5 µm wavelength have so far relied on non-linear downconversion of near-infrared pulses7. Established techniques such as optical parametric oscillators8 or difference frequency generation9, 10 either require sophisticated optical setups with tabletop dimensions or are restricted to mW-level output power.

Quantum cascade lasers11 (QCL) have matured to become the dominant MIR laser source. While being microchip-sized and electrically pumped, they are capable of producing Watt-level average power12, 13. Quantum engineering of the active region makes it possible to tailor the emission wavelength throughout the entire mid-infrared region. Hence, harnessing high-performance QCL technology for the generation of MIR pulses represents a long-sought milestone in ultrafast laser science. Mode-locked QCLs could serve as monolithic pump lasers for microresonators and resonant supercontinuum generation14, paving the way towards broadband and high-brightness frequency combs. So far, the sub-picosecond carrier transport in QCL active regions has constituted a seemingly insurmountable obstacle for the formation of short light pulses15, 16, 17. To date, the only successful attempt at mode-locking in monolithic MIR QCLs relied on a specially designed active region with a strongly enhanced upper-state lifetime of the lasing transition17.
However, the necessary design modifications limited mode-locked operation to cryogenic temperatures and peak power below 10 mW, thus impeding their practical use.

Figure 1: Bi-functional quantum cascade lasers for mode-locking. a: Scanning electron microscope image of three adjacent laser ridges. Each laser consists of a roughly 3 mm long gain section and a shorter (320-480 µm) modulation section. b: Simulated gain and loss spectrum in a standard active region design18 depending on the applied bias. Upon decreasing the bias, the structure becomes almost transparent at the lasing wavelength $\lambda_{L}$, limiting the maximally achievable modulation depth. c: Simulated gain and loss spectrum in a bi-functional active region design12, allowing the gain at 10 V to be tuned continuously to absorption (shown as negative gain) at 0 V. d: Measured light-current-voltage (L-I-V) characteristics of an epi-up mounted bi-functional QCL at 15 °C. e: Illustration of a system of coupled oscillators. This system shows an in-phase and anti-phase synchronization state, which oscillate at different frequencies depending on the coupling. Without external stimulation, the anti-phase state is more favorable due to the damped coupling. However, both synchronization frequencies can be probed by exerting mechanical force on the platform coupling the oscillators. In the QCL, the oscillators are represented by the intermode beatings, which tend to synchronize in anti-phase due to gain damping19, 20. Both synchronization frequencies are probed by applying modulation to the laser. f: Average optical power depending on the modulation frequency and power. Two synchronization states at $f_{\mathrm{rep}}^{0}$ and 60 MHz above are observed. g: Signal of a 2-QWIP sensitive to peak power as function of modulation frequency and power. The strongly increased signal of the lobe at $f_{\mathrm{rep}}^{0}$+60 MHz indicates in-phase synchronization.

In this work, we demonstrate the generation of transform-limited picosecond pulses in high-performance 8 µm wavelength QCLs at room temperature both experimentally and theoretically. Mode-locking is achieved by electrically modulating the intracavity loss using a short modulation section designed for efficient radio-frequency (RF) injection (Fig. 1a). In order to achieve the large modulation depth required for stable mode-locking, close attention has to be paid to the band structure of the QCL active region. This effect is illustrated in Figs. 1a,b. As the bias applied to a standard QCL structure is decreased, it does not switch to absorption at the lasing wavelength, but becomes nearly transparent for the intracavity light due to a bias-dependent shift of the electronic levels, known as the Stark effect (Fig. 1b). Hence, the modulation depth is severely limited in standard QCL designs. For this reason, we employ a bi-functional active region whose lasing wavelength and absorption wavelength at zero bias were matched to each other 21, 12 (Fig. 1c). This strategy allows us to overcome the aforementioned limitations of the modulation depth caused by the Stark shift. Most importantly, the bi-functional design shows excellent overall performance, which is competitive with other state-of-the-art designs. A 3.5 mm long device mounted epitaxial-side up emits more than 130 mW average power in continuous wave at room temperature (Fig. 1d).
For this purpose, mode-locking can be seen as synchronization of coupled oscillators20 (Fig. 1e). Each pair of neighboring cavity modes creates a beating at their difference frequency, which is equal to the cavity roundtrip frequency. These beatings can be seen as oscillators coupled by the non-linearity of the gain medium. Thanks to this coupling, the cavity modes of a free-running QCL can be locked together without modulation, thus giving rise to a self-starting frequency comb19. Yet, this kind of frequency comb does not emit isolated pulses, but rather a quasicontinuous wave accompanied by a strong linear frequency chirp22. This corresponds to anti-phase synchronization and will be called a frequency-modulated (FM) comb in the following. In contrast, in-phase synchronization of the intermode beatings leads to the formation of short pulses. It is well known from coupled oscillators that the in-phase and anti-phase states synchronize at different frequencies depending on the coupling. As a consequence, while the cavity roundtrip frequency of the FM QCL comb $f_{\mathrm{rep}}^{0}$ may seem like a reasonable choice, we expect the optimal modulation frequency for generating pulses to differ from $f_{\mathrm{rep}}^{0}$.

In order to investigate these two synchronization states experimentally, we start by operating the laser well above its threshold current. Subsequently, the DC bias of the modulation section is decreased to 2.8 V, where the large absorption caused by the bi-functional design (Fig. 1c) brings the QCL just slightly below lasing threshold. In these conditions, modulation at the right frequencies can provide enough additional gain to reach threshold. Fig. 1f shows the laser power depending on modulation frequency and power. At 33 dBm modulation power, the QCL reaches threshold when modulating close to $f_{\mathrm{rep}}^{0}$. Strikingly, a second modulation frequency where lasing occurs is observed almost 60 MHz higher than $f_{\mathrm{rep}}^{0}$, as predicted by the picture of synchronized oscillators. Both the range around the two synchronization frequencies where lasing is observed and the optical power grow upon increasing the modulation power.

Figure 2: Mode-locked pulses from an 8 µm wavelength QCL. a: SWIFTS characterization of the QCL operated close to lasing threshold. The laser is modulated at the in-phase synchronization frequency and at 37 dBm power level. b: Reconstructed time-domain signal of the QCL, showing a train of transform-limited pulses with 6.5 ps FWHM. c: Simulation of the QCL using the coherent Master equation described in supp. section 1. d: Interferometric autocorrelation (IAC) of the QCL pulses close to threshold. Red dots: envelope of the IAC reconstructed using SWIFTS. e: IAC at higher current. f: IAC at the rollover current, still displaying the 8:1 ratio. The second burst at a delay equal to the cavity roundtrip time is due to the interference of subsequently emitted pulses. Its peak value of 8 provides another proof for the coherence of the pulses because phase-decoherence would smear out the fringes of the IAC and thus decrease the peak value below 8. Inset: zoom on the interferometric fringes.

Even more insight is provided by using a two-photon quantum well infrared photodetector (2-QWIP) to detect the emitted light. The signal of the 2-QWIP is proportional to the square of the intensity. This makes it possible to identify which modulation frequency leads to in-phase and which to anti-phase synchronization (Fig. 1g).
Again, two lobes appear around $f_{\mathrm{rep}}^{0}$ and 60 MHz above. Yet, the 2-QWIP signal is more than an order of magnitude larger in the lobe at higher $f_{\mathrm{mod}}$. At this frequency, the laser operates in the in-phase synchronization regime and emits intense pulses, which leads to a strongly increased 2-QWIP signal.

Figure 3: Synchronization under strong modulation. a: Schematic of a 3-section QCL comprising modulation, gain and high-speed detector sections. b: First three harmonics of the beatnote of the free-running 7 mm long QCL FM comb (red) compared to the actively mode-locked QCL (AM comb, blue). c: Laser beatnote while free-running (bottom), at $f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}$ (middle) and at $f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}{+}33\,$MHz (top). While a broad pedestal is visible for $f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}$, the beatnote is perfectly locked for $f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}{+}33\,$MHz. d: RF spectrum around $f^{0}_{\mathrm{rep}}=6.196$ GHz as the modulation frequency is varied around $f^{0}_{\mathrm{rep}}$. The phase-noise of the RF spectrum disappears abruptly at $f_{\mathrm{mod}}{\approx}f_{\mathrm{rep}}^{0}{+}20\,$MHz, corresponding to in-phase synchronization. Here, the beatnote consists of a single narrow peak, indicating that the laser is phase-locked to the modulation. Furthermore, the sharp sidepeaks visible at $f_{\mathrm{mod}}{=}6.18\,$GHz are attributed to a periodic modulation of the QCL output, as previously observed in simulations16.

In order to unequivocally prove mode-locking, we employ two independent methods to characterize the pulse dynamics at three points of operation from threshold up to the rollover current. Firstly, an interferometric RF technique called 'Shifted wave interference Fourier transform spectroscopy' (SWIFTS)23, 24 is used to measure the phases of the QCL spectrum (details in the Methods section). This information not only enables the reconstruction of the temporal waveform, but also makes it possible to assess the phase-coherence of the pulses and whether they form a frequency comb. Secondly, we measure the interferometric autocorrelation (IAC) of the pulses using the 2-QWIP, which constitutes an additional well-established proof for mode-locking and the pulse width.

Fig. 2a shows the SWIFTS characterization of the QCL operated close to threshold. In contrast to the free-running laser, the intensity spectrum consists of a single Gaussian-shaped lobe. The SWIFTS spectrum represents the part of the intensity spectrum which is beating exactly at the modulation frequency. Since the SWIFTS spectrum has the same shape as the intensity spectrum over its entire span, the QCL generates a frequency comb whose repetition frequency is given by the modulation frequency. The intermodal difference phases $\Delta\phi$, which correspond to the spectral group delay, are synchronized almost perfectly in-phase. Hence, all parts of the spectrum have the same group delay and form a pulse. Indeed, the reconstructed waveform in Fig. 2b shows the emission of 6.5 ps pulses. The full-width-half-maximum (FWHM) of the reconstructed pulses is given by the transform limit of the spectrum in Fig. 2b, indicating that there is negligible chirp in the pulses. In these conditions, the peak power reaches almost 250 mW, a more than 12-fold enhancement over the average power of 20 mW. In order to model the cavity dynamics, we use a fully coherent master equation25 (supp. section 1).
This single equation for the complex field replaces the entire Maxwell-Bloch system and reliably predicts the spectral shape, phase relationship and pulse width observed experimentally (Fig. 2c). Furthermore, it allows experimentally unavailable analyses, e.g. the influence of dispersion and nonlinearities (supp. Figs. 2, 3 and 7). The IAC close to threshold (Fig. 2d) shows a prominent peak at zero path difference caused by constructive interference of the pulses after the Michelson interferometer. The ratio of this peak to the background at a delay larger than the pulse width is 8:1, which is generally regarded as the smoking gun of mode-locked pulses. Encouragingly, the measured IAC is in excellent agreement with the expected IAC, which was calculated using the pulses obtained by SWIFTS (red dots in Fig. 2d). This confirms successful mode-locking and the retrieved pulse width.

The generation of pulses becomes increasingly challenging at higher gain current. Due to gain saturation, the wings of a pulse experience more gain than the peak. This effect leads to pulse broadening and can destabilize mode-locking. Fortunately, the large modulation depth provided by the bi-functional quantum design enables mode-locking over the entire lasing range from threshold to rollover. The IAC traces at 3.7 kA/cm² and at the rollover current still show the required peak-to-background ratio of 8:1. The pulses at rollover are slightly broadened to roughly 12 ps, which is attributed partially to a slight chirp and partially to the gain saturation effect mentioned above (supp. Fig. 6). Yet, the average power is greatly increased to 62 mW, which results in over 430 mW peak power and 5 pJ pulse energy, more than an order of magnitude higher than recent reports of comparable mid-infrared semiconductor lasers emitting at shorter wavelengths26, 27.

Another fascinating aspect of bi-functional quantum design is the possibility of monolithically integrating ultrafast photodetectors. While this is particularly important in applications such as photonic integrated circuits, it also provides a tool to measure the beatnote with very large signal-to-noise ratio directly on the chip (Fig. 3a). This provides crucial information about the type of synchronization state and about its stability. Fig. 3b shows the first three harmonics of the beatnote in the free-running and the actively mode-locked regime. In the latter conditions, the beatnote amplitudes increase by 19 dB due to the much larger amplitude modulation. The zoom on the first harmonic beatnote (Fig. 3c) makes it possible to assess the phase-coherence and stability of the frequency comb. The free-running QCL is operating in the anti-phase state showing a weak beatnote at $f_{\mathrm{rep}}^{0}$. Previous work28, 29, 30 has shown that a weak electrical modulation can be used to lock and stabilize the beatnote of the anti-phase state. However, the situation is very different when applying strong modulation at $f_{\mathrm{rep}}^{0}$. In this case, the modulation enforces an AM waveform, which is contrary to the natural anti-phase behaviour of the laser. As a result, the beatnote of this waveform is not phase-locked, as indicated by the pedestal around $f_{\mathrm{mod}}$. The situation changes completely when the modulation frequency is tuned to the synchronization frequency of the in-phase state ($f_{\mathrm{rep}}^{0}{+}33\,$MHz). There, the strong modulation is in agreement with the natural behavior of the laser, leading to a phase-locked frequency comb with a narrow beatnote.
This can also be seen in Fig. 3d, which shows the laser beatnote while tuning the modulation frequency across the in-phase and anti-phase synchronization frequencies. While the frequency of the beatnote is controlled by the modulation over the entire span, a phase-locked comb is only generated around the in-phase synchronization frequency.

In conclusion, our experiments provide unambiguous proof for the generation of mode-locked pulses in high-performance QCLs at room temperature, a goal which had remained elusive since the invention of the QCL, and confirm stunning similarities to synchronization in coupled oscillators. These mode-locked QCLs constitute the first compact and electrically pumped source for ultrashort pulses beyond 5 µm wavelength, demonstrating that they are a highly promising technology for ultrafast laser science in the long-wave infrared region. The availability of such a source paves the way towards a semiconductor-based platform for non-linear photonics, potentially enabling broadband mid-infrared frequency combs and supercontinuum generation.

## Methods

QCLs optimized for RF modulation: The QCLs are processed as buried heterostructures. The width of the QCL ridges is 12 µm and the facets are left as cleaved. The area of the top contact of the modulation section is minimized to decrease its parasitic capacitance, which increases the RF injection efficiency. Ground contacts for the modulation are provided by etching through the Fe-doped InP layer next to the laser ridges. The modulation signal is provided by a HP8341B synthesized sweeper, amplified up to roughly 5 W and injected via coplanar tips. The insertion loss at 12 GHz is 14 dB including cables and bias-tee.

SWIFTS and IAC: The light emitted by the QCL is shone through a Bruker Vertex 70v FTIR spectrometer. In order to perform SWIFTS, the light is then detected by a home-built fast QWIP at the exit of the FTIR. The optical beating obtained from the QWIP is subsequently amplified and mixed down to roughly 10 MHz using a local oscillator. A Zurich Instruments HF2LI lock-in amplifier and the trigger of the FTIR are used to record the SWIFTS and intensity interferograms in rapid scan mode. The IAC is obtained by detecting the pulses at the exit of the FTIR using the 2-QWIP and recording its photocurrent depending on the path delay of the FTIR.

## References

* 1 Udem, T., Holzwarth, R. & Hänsch, T. W. Optical frequency metrology. _Nature_ 416, 233–237 (2002).
* 2 Hasegawa, A. & Tappert, F. Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers. I. Anomalous dispersion. _Applied Physics Letters_ 23, 142–144 (1973).
* 3 Peyman, G. A. Method for modifying corneal curvature (1989). US Patent 4,840,175.
* 4 Moulton, P. F. Spectroscopic and laser characteristics of Ti:Al2O3. _Journal of the Optical Society of America B_ 3, 125 (1986).
* 5 Kim, J. & Song, Y. Ultralow-noise mode-locked fiber lasers and frequency combs: principles, status, and applications. _Advances in Optics and Photonics_ 8, 465 (2016).
* 6 Cao, Q., Kärtner, F. X. & Chang, G. Towards high power longwave mid-IR frequency combs: power scalability of high repetition-rate difference-frequency generation. _Optics Express_ 28, 1369 (2020).
* 7 Schliesser, A., Picqué, N. & Hänsch, T. W. Mid-infrared frequency combs. _Nature Photonics_ 6, 440–449 (2012).
* 8 Iwakuni, K. _et al._ Phase-stabilized 100 mW frequency comb near 10 $\upmu$m. _Applied Physics B_ 124 (2018).
* 9 Keilmann, F. & Amarie, S.
Mid-infrared Frequency Comb Spanning an Octave Based on an Er Fiber Laser and Difference-Frequency Generation. _Journal of Infrared, Millimeter, and Terahertz Waves_ 33, 479–484 (2012). * 10 Sotor, J. _et al._ All-fiber mid-infrared source tunable from 6 to 9 $\upmu$m based on difference frequency generation in OP-GaP crystal. _Optics Express_ 26, 11756 (2018). * 11 Faist, J. _et al._ Quantum Cascade Laser. _Science_ 264, 553–556 (1994). * 12 Schwarz, B. _et al._ Watt-Level Continuous-Wave Emission from a Bifunctional Quantum Cascade Laser/Detector. _ACS Photonics_ 4, 1225–1231 (2017). * 13 Jouy, P. _et al._ Dual comb operation of $\lambda$ $\sim$ 8.2 $\upmu$m quantum cascade laser frequency comb with 1 W optical power. _Applied Physics Letters_ 111, 141102 (2017). * 14 Anderson, M. H. _et al._ Photonic chip-based resonant supercontinuum. _arXiv preprint arXiv:1909.00022_ (2019). * 15 Gordon, A. _et al._ Multimode regimes in quantum cascade lasers: From coherent instabilities to spatial hole burning. _Physical Review A_ 77 (2008). * 16 Wang, Y. & Belyanin, A. Active mode-locking of mid-infrared quantum cascade lasers with short gain recovery time. _Optics Express_ 23, 4173 (2015). * 17 Wang, C. Y. _et al._ Mode-locked pulses from mid-infrared Quantum Cascade Lasers. _Optics Express_ 17, 12929 (2009). * 18 Wittmann, A., Bonetti, Y., Faist, J., Gini, E. & Giovannini, M. Intersubband linewidths in quantum cascade laser designs. _Applied Physics Letters_ 93, 141103 (2008). * 19 Hugi, A., Villares, G., Blaser, S., Liu, H. C. & Faist, J. Mid-infrared frequency comb based on a quantum cascade laser. _Nature_ 492, 229–233 (2012). * 20 Hillbrand, J. _et al._ In-Phase and Anti-Phase Synchronization in a Laser Frequency Comb. _Physical Review Letters_ 124 (2020). * 21 Schwarz, B. _et al._ A bi-functional quantum cascade device for same-frequency lasing and detection. _Applied Physics Letters_ 101, 191109 (2012). * 22 Singleton, M., Jouy, P., Beck, M. & Faist, J. Evidence of linear chirp in mid-infrared quantum cascade lasers. _Optica_ 5, 948–953 (2018). * 23 Burghoff, D. _et al._ Evaluating the coherence and time-domain profile of quantum cascade laser frequency combs. _Optics Express_ 23, 1190 (2015). * 24 Han, Z., Ren, D. & Burghoff, D. Sensitivity of SWIFT spectroscopy. _Optics Express_ 28, 6002 (2020). * 25 Opačak, N. & Schwarz, B. Theory of Frequency-Modulated Combs in Lasers with Spatial Hole Burning, Dispersion, and Kerr Nonlinearity. _Physical Review Letters_ 123 (2019). * 26 Feng, T., Shterengas, L., Hosoda, T., Belyanin, A. & Kipshidze, G. Passive Mode-Locking of 3.25 $\upmu$m GaSb-Based Cascade Diode Lasers. _ACS Photonics_ 5, 4978–4985 (2018). * 27 Hillbrand, J. _et al._ Picosecond pulses from a mid-infrared interband cascade laser. _Optica_ 6, 1334 (2019). * 28 Hillbrand, J., Andrews, A. M., Detz, H., Strasser, G. & Schwarz, B. Coherent injection locking of quantum cascade laser frequency combs. _Nature Photonics_ 13, 101–104 (2018). * 29 St-Jean, M. R. _et al._ Injection locking of mid-infrared quantum cascade laser at 14 GHz, by direct microwave modulation. _Laser & Photonics Reviews_ 8, 443–449 (2014). * 30 Forrer, A. _et al._ Photon-Driven Broadband Emission and Frequency Comb RF Injection Locking in THz Quantum Cascade Lasers. _ACS Photonics_ (2020). ## Acknowledgements This work was supported by the Austrian Science Fund (FWF) in the framework of ”Building Solids for Function” (Project W1243), the projects ”NanoPlas” (P28914-N27) and ”NextLite” (F4909-N23). 
## Author contributions

J.H. processed the QCLs and carried out the experiments. B.S. and J.H. built up the SWIFTS and IAC setups. N.O. and B.S. developed the simulation tool. M.P. carried out the temporal reconstruction using the IAC data. H.S. provided the 2-QWIP. J.H. wrote the manuscript with editorial input from N.O., B.S., G.S. and F.C. All authors analysed the results and commented on the paper.
2024-09-04T02:54:58.433052
2020-03-04T23:32:28
2003.04133
{ "authors": "Nils Ohlendorf, Wolf-Peter Schill", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26120", "submitter": "Wolf-Peter Schill", "url": "https://arxiv.org/abs/2003.04133" }
arxiv-papers
# Frequency and duration of low-wind-power events in Germany

Nils Ohlendorf<EMAIL_ADDRESS>Wolf-Peter Schill<EMAIL_ADDRESS>
Mercator Research Institute on Global Commons and Climate Change (MCC), EUREF Campus 19, Torgauer Straße 12-15, 10829 Berlin, Germany
German Institute for Economic Research (DIW Berlin), Mohrenstrasse 58, 10117 Berlin, Germany
Energy Transition Hub, Climate & Energy College, The University of Melbourne

###### Abstract

In the transition to a renewable energy system, the occurrence of low-wind-power events receives increasing attention. We analyze the frequency and duration of such events for onshore wind power in Germany, based on 40 years of reanalysis data and open software. We find that low-wind-power events are less frequent in winter than in summer, but the maximum duration is distributed more evenly between months. While short events are frequent, very long events are much rarer. Every year, a period of around five consecutive days with an average wind capacity factor below 10% occurs, and every ten years a respective period of nearly eight days. These durations decrease if only winter months are considered. The longest event in the data lasts nearly ten days. We conclude that public concerns about low-wind-power events in winter may be overrated, but recommend that modeling studies consider multiple weather years to properly account for such events.

###### keywords:

Wind power; Low-wind-power events; Reanalysis data

## 1 Introduction

The Paris Agreement calls for an extensive decarbonization of the global economy. A major strategy for achieving this goal is a massive expansion of variable renewable energy sources, in particular solar photovoltaics (PV) and wind power [de Coninck et al., 2018]. While power generation from solar PV largely follows diurnal and seasonal cycles with annually repeating patterns, wind power is subject to more irregular inter-annual as well as intra-annual variations which are relevant from a security of supply perspective. In countries with growing shares of wind power, the occurrence of low-wind-power (LWP) events thus receives increasing attention.

This is particularly true in Germany. In the context of its energy transition, Germany is one of the global front-runners in wind power deployment. In 2018, a total capacity of 52.5 GW of onshore wind power was installed in Germany, generating 90.5 TWh of electricity. This corresponds to 15% of German gross electricity consumption [BMWi, 2019]. Given the government's targets to expand the share of renewables in electricity consumption to 65% by 2030 and at least 80% by 2050 [Bundesregierung, 2019], the dependence of the German energy system on wind power is set to increase strongly in the future.

Concerns about LWP events have been discussed in German media [Wetzel, 2017, 2019] and in the German parliament [Deutscher Bundestag, 2019a], and LWP events are also mentioned in the government's energy transition reporting [Deutscher Bundestag, 2019b]. In this context, the term Dunkelflaute is increasingly used. It refers to a persistent situation with very low power generation from wind and solar PV, which would be especially challenging in the German winter season where PV availability is low and electric load has its peak. Yet no clear definition of this concept has been provided so far [Wissenschaftliche Dienste, 2019], and quantitative evidence on the frequency and duration of such events is missing.
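A back-of-the-envelope note on the deployment figures above (our own arithmetic, not a number reported in the paper): 90.5 TWh from 52.5 GW implies a fleet-average capacity factor of
$\frac{90.5\,\mathrm{TWh}}{52.5\,\mathrm{GW}\times 8760\,\mathrm{h}}\approx 0.20,$
so the 10% capacity factor threshold analyzed below corresponds to roughly half of average output.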
In Table $15$ of Deutscher Bundestag [2019b], an independent expert commission generally assumes a no-wind-no-solar period of two weeks. Yet research on LWP events is sparse so far. In this paper, we contribute to filling this gap, focusing on onshore wind power in Germany. We provide an in- depth analysis of the frequency, duration, and magnitude of LWP events, making use of reanalysis data for 40 full years (1980 to 2019) and power curves of recently installed wind turbines. In doing so, we propose two definitions of LWP events and investigate three different thresholds of capacity factors (2%, 5% and 10%). We also compare the spatial distributions of the most persistent LWP event and the mean electricity generation. Parts of our analysis explicitly focus on winter months: these are particularly relevant, as power generation from solar PV is relatively low during this season, while the German peak load also occurs in winter. In order to allow for the highest degree of transparency and reproducibility, we provide the source code of our analysis under a permissive open-source license [Ohlendorf, 2020]. There are only few dedicated analyses on the frequency and duration of LWP events. Early contributions address reliability aspects of spatially dispersed wind power in California [Kahn, 1979] or in the midwestern United States [Archer and Jacobson, 2007]. Analyses explicitly focusing on LWP events only recently emerged. Yet these differ from our work, amongst other factors, with respect to geographical and temporal coverage, data sources used, and methodologies applied. In particular, previous low-wind analyses mostly draw on local measurement data and either evaluate wind speeds [Leahy and McKeogh, 2013, Patlakas et al., 2017] or wind power [Handschy et al., 2017, Kruyt et al., 2017]. Leahy and McKeogh [2013] and Patlakas et al. [2017] investigate low-wind events for Ireland and the North Sea area, respectively. Both studies firstly evaluate low-wind events that are constantly below a given wind speed threshold, and secondly determine annual minimum moving average wind speeds for given durations, using extreme value distributions. Kruyt et al. [2017] and Handschy et al. [2017] go one step further and calculate respective power generation from wind speeds for Switzerland and the United States, using a power curve. While the findings of these studies are necessarily idiosyncratic to the specific geographical applications, some common findings emerge. First, low-wind events are less frequent and less persistent if more, and spatially more dispersed, measurement stations are used. Second, there are generally less events in winter than in summer. The measurement-based analyses face challenges related to their data sources. In general, studies that draw on measured wind speeds are spatially biased, have low measurement densities, and extrapolation from measurement height to hub height is challenging because of distorting effects of terrain, elevations or buildings [Sharp et al., 2015]. Measurement data may further be subject to inconsistencies caused by changing equipment and measurement errors. Extreme event analyses further require consistent measurements over large time periods to sufficiently capture climatic variations. These issues can be addressed by using long-term meteorological reanalysis data. Such data is increasingly applied for onshore wind energy modelling. 
Several studies focus on data accuracy and on validating models of wind power generation [Decker et al., 2012, Sharp et al., 2015, Olauson and Bergkvist, 2015, Rose and Apt, 2015, Staffell and Pfenninger, 2016, González-Aparicio et al., 2017, Germer and Kleidon, 2019]. Other analyses deal with variability aspects of wind power, but do not focus on extreme low-wind events. For example, Grams et al. [2017] explain longer-term fluctuations in European wind power generation with different types of weather regimes, based on MERRA-2 data. With similar approaches, Collins et al. [2018] investigate inter-annual variations of European wind and solar power, and Santos-Alamillos et al. [2017] explore optimal allocations of renewable generation capacity in a European super grid. For the contiguous U.S. states, Shaner et al. [2018] investigate the reliability of future power systems dominated by wind and/or solar PV, and Kumler et al. [2019] explore inter-annual renewable variability for Texas. Yet none of these studies explicitly focuses on the frequency and duration of extreme low-wind-power events.

A notable reanalysis study that does focus on extreme wind events is conducted by Cannon et al. [2015] for Great Britain. Using 33 years of MERRA as well as ERA-Interim data, the authors conclude that the frequency and duration of low-wind-power events can be approximated by a Poisson-like process. Weber et al. [2019] also use ERA-Interim data for a superstatistical analysis of extreme wind power events at nine specific European sites, including one German onshore location. They find that the distribution of low-wind events has a heavy tail, as low-wind events may result from a combination of different weather and circulation patterns. (Weber et al. [2019] base their analysis on wind speeds, not wind power generation, with a cut-off threshold of 4 m/s.) In another analysis based on ERA-Interim reanalysis data and other sources, Raynaud et al. [2018] define and investigate the occurrence of renewable "energy droughts", which are measured relative to average daily generation. They find that wind power droughts are both relatively frequent and relatively short in most European countries, compared to hydro power droughts.

We contribute to this emerging literature with a dedicated open-source, reanalysis-based study that investigates LWP events in Germany in detail. To the best of our knowledge, we are the first to use MERRA-2 data in this context, i.e., spatially and temporally consistent reanalysis data covering 40 years at 50 m above surface. Compared to Cannon et al. [2015], we also make use of not only one, but three recent power curves to represent different types of wind turbines that are characteristic for different locations defined by mean wind speeds. Complementary to Raynaud et al. [2018], we further present an alternative approach to defining and evaluating LWP events by looking either at hours that are constantly below a threshold, or at hours with a mean below a threshold.

## 2 Methods and data

### 2.1 General approach

Based on wind speeds and power curves, we derive an hourly aggregated time series of capacity factors for wind power in Germany. First, we take wind speeds at 50 m above surface from the MERRA-2 reanalysis dataset, which covers 40 years from 1980 to 2019, and extrapolate to hub heights (see Section A for further information on the use of reanalysis data for energy modelling).
Second, capacity factors of each MERRA-2 grid cell are calculated based on power curves of recently installed wind turbines. Third, we spatially aggregate these capacity factors using a weighting scheme that considers the current spatial distribution of onshore wind power capacity in Germany. Finally, we investigate the resulting time series of hourly aggregated capacity factors by applying a narrower and a wider definition of LWP events.

### 2.2 Wind speeds derived from reanalysis data

We use the MERRA-2 dataset provided by NASA [Gelaro et al., 2017]. Data is available starting from the year 1980. In contrast to several other global reanalysis datasets, which have time resolutions of 3 to 6 hours and provide wind speeds at 10 m above surface, MERRA-2 includes hourly wind speed data at 50 m, which allows better modelling of wind power generation.

Figure 1: MERRA-2 grid points (blue) and grid cells that intersect with Germany.

The MERRA-2 grid consists of 576 longitudinal and 361 latitudinal horizontal grid points, i.e., a resolution of $0.625^{\circ}\times 0.5^{\circ}$, which for Germany roughly corresponds to 50 × 50 km [Bosilovich et al., 2016]. Figure 1 shows the grid points in blue and all grid cells extrapolated from these points that intersect with Germany. For each grid cell, MERRA-2 provides hourly northward and eastward wind speed data at 50 m above surface. Our dataset further includes surface roughness data for the year 2019.

### 2.3 Aggregated wind power derived from wind speeds using power curves

We calculate the magnitude of the horizontal wind speed ($U$) for each MERRA-2 grid point based on the northward ($u$) and eastward ($v$) components at 50 m (Equation 1).

$U=\sqrt{u^{2}+v^{2}}$ (1)

In line with Kruyt et al. [2017], we use the logarithmic power law to extrapolate wind speeds to hub height $h$, with $U_{hub}$ as the wind speed at hub height, $U$ as the 50 m wind speed from Equation 1, and $z_{0}$ as the surface roughness for every grid point and each hour of the year 2019 (Equation 2).

$U_{hub}=U\frac{\ln\frac{h}{z_{0}}}{\ln\frac{50}{z_{0}}}$ (2)

Figure 2: Wind speed zones in Germany. Dark blue implies high mean wind speeds, blue medium wind speeds, and light blue low mean wind speeds.

We define three types of wind zones, based on mean local wind speeds over 40 years for each MERRA-2 grid cell (Figure 2), and assign typical hub heights for wind turbines. For high-wind-speed sites, we assign a hub height of 100 m, for medium-wind-speed sites of 125 m, and for low-wind-speed sites of 139 m [Wallasch et al., 2015]. These values reflect the mean hub heights of recently installed wind power plants in the respective German wind speed zones.

We calculate hourly capacity factors for each grid cell by applying power curves characteristic for the three wind zones. The power curves are based on manufacturer data of currently available wind turbines for low-, medium- and high-wind sites, respectively. Both the low- and high-wind site power curves represent an average of four wind turbines of similar diameters and capacities. We consider turbines from six manufacturers (see Appendix B), among them four large companies which cover 87% of the capacity installed in Germany in 2015 [Lüers, 2016]. Manufacturers generally provide discrete capacity factors ($CF_{disc}$) for wind speed intervals of 1 m/s. For both the low- and high-wind curves, we first calculate discrete mean capacity factors for each wind speed and then calculate continuous capacity factors using a generalized logistic function (Equation 3).
$CF_{cont}=A+\frac{C}{(1+Te^{-B(U_{hub}-M)})^{1/T}}$ (3)

Here, $CF_{cont}$ is the continuous capacity factor, and $A$, $B$, $C$, $M$ and $T$ are fitted coefficients based on minimising the squared deviations between $CF_{disc}$ and $CF_{cont}$. For both the low- and the high-wind power curve, cut-in wind speeds of around 3 m/s emerge, and the resulting capacity factors are capped at 0% and 100%. The medium-wind power curve represents the average of the low- and high-wind curves (Figure 3).

Figure 3: Power curves of three types of wind turbines.

Aggregated hourly capacity factor time series for overall Germany are derived by weighting all grid cells with the current distribution of installed wind power generation capacity. The latter is extracted from Open Power System Data [Open Power System Data, 2017, Wiese et al., 2019] and open-source GIS data. The red points in Figure 4 indicate the installed wind capacity of locally aggregated wind power plant sites in Germany, and the blue squares show the corresponding relative capacity distribution of the MERRA-2 grid cells. Grid cells only partly intersecting with the German land area receive lower weights according to the overlapping area. We implicitly assume that the transmission infrastructure allows geographical balancing of wind power in Germany, which is currently largely the case. (This assumption is particularly valid for low-wind periods; during high-wind, high-load periods, the German transmission grid can be constrained in the North-South direction.)

Figure 4: Distribution of currently installed wind power capacity in Germany. Darker colors indicate a larger share of total or relative installed capacity.

### 2.4 Definition of low-wind-power events

We propose two different measures of low-wind-power periods, a narrower and a wider one (Figure 5). We further consider three alternative capacity factor thresholds of 2%, 5%, and 10%. For the narrower definition, we consider LWP events to be consecutive hours in which the aggregated capacity factors are Constantly Below the Threshold (CBT). This concept bears some resemblance to the "runs analysis" by Leahy and McKeogh [2013] or the "duration given intensity" method by Patlakas et al. [2017]. Starting in the first hour, we list annual LWP events for durations starting from five consecutive hours and report the number of hours constantly below the given capacity factor threshold. We then increase the duration in hourly steps and repeat until there are no further events listed. To provide a wider definition, we consider LWP events to consist of consecutive hours in which the moving average of capacity factors is under the same threshold, i.e., Mean Below the Threshold (MBT). Again, we list all LWP periods until we reach the threshold value, ensuring that LWP periods do not overlap. By definition, the MBT method results in more low-wind-power events for a given duration and also in longer events for each threshold, compared to CBT.

Figure 5: Illustration of the two LWP event definitions.

The average annual number of LWP events per duration over all 40 years equals the expected value of events per year. Further, the reciprocal of the annual average provides the return period, that is, the expected temporal distance between two similar recurring events.
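To make Equations (1)-(3) concrete, the following is a minimal Python sketch of the per-grid-cell pipeline. It is our illustration rather than the authors' released code (which is available at the Zenodo link in the data availability statement); all function names, the placeholder capacity factor data and the fitting bounds are our own assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def wind_speed_50m(u_north, v_east):
    """Equation (1): magnitude of the horizontal wind speed at 50 m."""
    return np.sqrt(u_north**2 + v_east**2)

def extrapolate_to_hub(U50, hub_height, z0):
    """Equation (2): logarithmic profile from 50 m to hub height,
    with z0 the surface roughness of the grid cell."""
    return U50 * np.log(hub_height / z0) / np.log(50.0 / z0)

def generalised_logistic(U, A, B, C, M, T):
    """Equation (3): continuous capacity factor as a function of wind speed."""
    return A + C / (1.0 + T * np.exp(-B * (U - M))) ** (1.0 / T)

# Fit Equation (3) to discrete manufacturer capacity factors, given in
# 1 m/s wind speed bins as described in Section 2.3. The data below are
# placeholders for illustration only.
speeds = np.arange(0, 26, dtype=float)         # m/s
cf_disc = np.clip((speeds - 3.0) / 9.0, 0, 1)  # crude ramp between cut-in and rated
coeffs, _ = curve_fit(
    generalised_logistic, speeds, cf_disc,
    p0=[0.0, 1.0, 1.0, 8.0, 1.0],
    bounds=([-0.5, 0.01, 0.0, 0.0, 0.01], [0.5, 5.0, 2.0, 25.0, 10.0]),
)

# Hourly capacity factor of one grid cell, capped at [0, 1]:
U_hub = extrapolate_to_hub(wind_speed_50m(3.1, -4.2), hub_height=125.0, z0=0.1)
cf = float(np.clip(generalised_logistic(U_hub, *coeffs), 0.0, 1.0))
```

Applying this to every grid cell and hour, and weighting by installed capacity, yields the aggregated German capacity factor series used below.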
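The two event definitions can likewise be sketched in a few lines of Python. This is again our own simplified illustration: the MBT enumeration below is a greedy variant that grows each non-overlapping window while its running mean stays below the threshold, whereas the authors' exact bookkeeping may differ in detail.

```python
def cbt_events(cf, threshold):
    """CBT: maximal runs of consecutive hours in which the aggregated
    capacity factor is constantly below the threshold.
    Returns a list of (start_hour, duration_in_hours) pairs."""
    events, start = [], None
    for i, value in enumerate(cf):
        if value < threshold and start is None:
            start = i
        elif value >= threshold and start is not None:
            events.append((start, i - start))
            start = None
    if start is not None:
        events.append((start, len(cf) - start))
    return events

def mbt_events(cf, threshold):
    """MBT (greedy variant): extend a window for as long as its mean stays
    below the threshold, record it, then continue behind it so that the
    listed periods never overlap."""
    events, i, n = [], 0, len(cf)
    while i < n:
        if cf[i] >= threshold:
            i += 1
            continue
        total, j = 0.0, i
        while j < n and (total + cf[j]) / (j - i + 1) < threshold:
            total += cf[j]
            j += 1
        events.append((i, j - i))
        i = j
    return events

def return_period_years(events, min_duration_h, n_years=40):
    """Reciprocal of the average annual frequency of events lasting at
    least min_duration_h hours (cf. Section 3.1)."""
    count = sum(1 for _, duration in events if duration >= min_duration_h)
    return float("inf") if count == 0 else n_years / count
```

On the aggregated hourly series, a call such as `return_period_years(mbt_events(cf, 0.10), 188)` should, by construction of Table 1, land near the ten-year value reported there.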
Periods overlapping annually or monthly are assigned to the year or month in which more than 50% of the hours are located. (Accounting for annually overlapping periods requires December data from the previous year and January data from the subsequent year; for the two boundary years 1980 and 2019, we substitute the missing data for December 1979 and January 2020 with data from December 1980 and January 2019, respectively.)

## 3 Results

### 3.1 Seasonal distribution and frequency of low-wind-power events

Figure 6 shows that LWP events are generally most frequent in summer (here defined as June-August) and least frequent in winter (December-February). The results for spring (March-May) and autumn (September-November) are mostly close to the annual average. Accordingly, respective findings made for other European countries [Leahy and McKeogh, 2013, Cannon et al., 2015, Kruyt et al., 2017] are also valid for Germany.

Figure 6: Average seasonal duration (horizontal axis) and frequency (vertical axis) of LWP events in Germany.

The frequency of events for a given duration is about 1.5-3 times higher for the wider MBT definition compared to the narrower CBT concept. For both metrics, the frequency of LWP events increases substantially with the capacity factor threshold value. For example, a 10-hour event below a capacity factor of 2% occurs on average around 0.2 times per winter for CBT and slightly less than once per winter for MBT. For a 10% capacity factor threshold, there are on average around eight such winter events for CBT and 13 for MBT. In general, we find that short LWP events with a duration of up to around half a day are relatively frequent and may occur several times per year, especially under the wider MBT definition. Longer LWP events, in contrast, are much less frequent.

To provide a complementary perspective, we calculate the return periods for different durations of LWP events (Table 1). The return periods are the reciprocal of the average (annual or seasonal) frequency of LWP events for different durations, considering both definitions and all three thresholds (cf. Figure 6). For example, an LWP event with an average frequency of 0.2 for a given duration leads to a return period of 5 years for this specific duration. The longer a given duration, the lower its average frequency and the longer its return period. For a return period of ten years, we find a duration of 17 hours (2% capacity factor threshold), 41 hours (5%) and 77 hours (10%) under the narrower CBT definition, and a duration of 34 hours (2%), 79 hours (5%) and 188 hours (10%) under the wider MBT concept. In other words, every ten years the German energy system has to deal with a period of nearly eight days of average wind power generation (MBT) below 10% of the installed capacity.
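Converting the headline MBT durations into days makes these statements easy to verify (our arithmetic): $122\,\mathrm{h}/24\approx 5.1$ days for the one-year return period and $188\,\mathrm{h}/24\approx 7.8$ days for the ten-year return period, which is where the figures of around five consecutive days and nearly eight days come from.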
Table 1: Duration in hours of LWP events, in winter or in any season, for different return periods (CBT = constantly below threshold; MBT = mean below threshold; 2%, 5% and 10% denote the capacity factor thresholds).

| Return period | CBT winter 2% | CBT winter 5% | CBT winter 10% | CBT any season 2% | CBT any season 5% | CBT any season 10% | MBT winter 2% | MBT winter 5% | MBT winter 10% | MBT any season 2% | MBT any season 5% | MBT any season 10% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 year | 5 | 15 | 29 | 11 | 23 | 45 | 8 | 30 | 63 | 18 | 58 | 122 |
| 2 years | 7 | 21 | 40 | 13 | 32 | 57 | 12 | 45 | 92 | 21 | 69 | 144 |
| 3 years | 8 | 23 | 44 | 14 | 33 | 60 | 14 | 52 | 101 | 23 | 71 | 161 |
| 4 years | 9 | 30 | 48 | 14 | 33 | 63 | 16 | 62 | 112 | 27 | 72 | 173 |
| 5 years | 10 | 32 | 57 | 15 | 35 | 65 | 22 | 68 | 113 | 28 | 75 | 178 |
| 6 years | 10 | 32 | 57 | 15 | 35 | 67 | 25 | 69 | 114 | 29 | 76 | 182 |
| 7 years | 12 | 33 | 60 | 15 | 36 | 67 | 27 | 70 | 114 | 31 | 76 | 186 |
| 8 years | 14 | 33 | 63 | 17 | 37 | 69 | 28 | 72 | 117 | 33 | 79 | 186 |
| 9 years | 14 | 33 | 63 | 17 | 37 | 69 | 28 | 72 | 117 | 33 | 79 | 186 |
| 10 years | 14 | 33 | 64 | 17 | 41 | 77 | 28 | 72 | 126 | 34 | 79 | 188 |
| 15 years | 17 | 36 | 67 | 18 | 41 | 77 | 31 | 76 | 129 | 38 | 82 | 189 |
| 20 years | 19 | 41 | 77 | 19 | 49 | 81 | 34 | 79 | 131 | 45 | 89 | 221 |
| 25 years | 19 | 41 | 77 | 19 | 49 | 81 | 34 | 79 | 131 | 45 | 89 | 221 |
| 30 years | 19 | 41 | 77 | 19 | 49 | 81 | 34 | 79 | 131 | 45 | 89 | 221 |

To better interpret these return periods, we provide an example for the German onshore wind power capacity of 52.5 GW installed in 2018. For this wind turbine fleet, average power generation is expected not to exceed around five GW, i.e., 10% of capacity, during a period of around five consecutive days every year (122 hours, MBT for 'Any season' in Table 1). Every ten years, this period increases to nearly eight days, and every twenty years to more than nine full days. Looking only at LWP events in winter, these durations decrease to less than three days every winter, less than five days every tenth winter, and around five and a half days every twentieth winter. The remaining load has to be covered by other generators, energy storage or demand-side measures. However, wind power still contributes some generation capacity above the 10% threshold during some of these hours, as indicated by the much lower CBT return periods.

### 3.2 Magnitude of the most extreme low-wind-power events

The most extreme LWP events over the entire 40 years analyzed can be interpreted as worst cases from an energy system planning perspective. In an annual perspective, the most extreme events occurred in 1985 for all capacity factor thresholds (Figure 7). Under the narrower CBT definition, there are nearly four consecutive days with wind power generation constantly below 10% in 1985, and still around two consecutive days with generation constantly below 5%. Under the wider MBT definition, the duration of this most extreme event increases to nearly ten days (10%) or around four days (5%).

Figure 7: Most extreme LWP events per year. The vertical axis shows the duration of the longest event per year for the three capacity factor thresholds.

While this 1985 event is the most extreme one under both CBT and MBT, the ranking of the second most extreme yearly events differs between the LWP definitions. For example, the second-longest event occurred in 1984 under the CBT definition. Yet under MBT, the duration of the most extreme event in 1984 is only average. In general, the definition of LWP events and the chosen thresholds have a substantial impact on quantitative results. Under MBT, the most extreme annual events are generally around twice as long as under CBT.
We further find very large inter-annual variations. Considering the 10% threshold, the longest event for the MBT definition lasted for almost 10 days in 1985, but in 2005 the longest duration was only three days for the same threshold. The relative difference between the longest events for each year increases with the threshold. These large variations of the most extreme annual LWP events complement the findings made by Collins et al. [2018], who determine large inter-annual variations of average renewable availability.

We next look at the most extreme LWP events in a monthly perspective, irrespective of the year in which they occur (Figure 8). The most extreme events for the 10% threshold occur in March for both definitions. This is the 1985 event discussed above, with durations of nearly four (CBT) or nearly ten consecutive days (MBT).

Figure 8: Most extreme LWP events per month. The vertical axis shows the duration of the longest event of all respective months for the three capacity factor thresholds.

Considering all thresholds and both LWP definitions, there is no clear trend of the most extreme monthly LWP events. That is, substantial extreme events may occur throughout the year, and also in winter months. This contrasts with the previous finding that the frequency of LWP events is generally much higher in summer than in winter, as shown in Section 3.1. Under CBT, the most extreme events in each of the winter months are even longer than those in summer months for the 10% capacity threshold. This finding is, however, not confirmed under the MBT definition.

### 3.3 Spatial distribution of wind power during most extreme LWP event

To also explore the spatial dimension of LWP events, we compare the distribution of capacity factors during the most extreme LWP event of 1985 to the distribution of annual mean capacity factors in the same year (Figure 9).

Figure 9: Spatial distribution of wind power. Left: Average wind power during the most extreme LWP event in the dataset (10% capacity factor, MBT) in March 1985 (scale: from 0% to 20% of mean capacity factors). Right: Mean wind power in the entire year 1985 (scale: from 5% to 50% of mean capacity factors).

The spatial pattern of annual mean capacity factors (Figure 9, right panel) largely resembles that of average wind speeds in Germany (Figure 2). Mean capacity factors are generally higher in Northern than in Southern Germany. They are highest close to the North Sea and the Baltic Sea, and lowest in the southern Alpine region. The spatial pattern of mean capacity factors during the most extreme LWP event (Figure 9, left panel) substantially deviates from the distribution of the means. In particular, capacity factors of the north-eastern region and parts of the northern region are relatively low. The respective spatial distributions of capacity factors for other thresholds under both the CBT and MBT definitions of the same event also show substantial deviations from annual means. Accordingly, the spatial distribution of capacity factors during extreme LWP events does not necessarily correspond to the annual mean pattern. This indicates that low-wind events can be very pronounced even in regions with very good average wind resources.

## 4 Conclusions

We analyze the seasonal distribution, frequency and magnitude of onshore low-wind-power events in Germany, as well as spatial aspects of the most extreme events, based on MERRA-2 reanalysis data and open software. We propose and evaluate two definitions of low-wind-power events for three capacity factor thresholds.
We synthesize three key results from the analysis. First, LWP events are generally most frequent in summer and least frequent in winter. Nonetheless, substantial events occur in all months of the year, and also in winter. The most persistent LWP event in the dataset occurred in March. Second, while short events with a duration of up to around half a day are relatively frequent, very long events are much rarer. (Weber et al. [2019] argue that low-wind event statistics do not follow a simple exponential distribution, but have "heavy tails", i.e., the probability decreases rather slowly with increasing duration.) Every year, the German energy system has to deal with a period of around five consecutive days during which average wind power generation is below 10% of the installed capacity. Every ten years, a respective period of nearly eight days is to be expected. Looking only at winter months, the durations of these expected events decrease to less than three days every winter and less than five days every tenth winter. The most persistent low-wind event in the entire dataset has a duration of nearly ten consecutive days of average wind power generation below a 10% capacity factor. Third, the spatial pattern of LWP events may be very different from that of average wind power resources. During the most persistent LWP event, we find average generation to be particularly low in several regions which have some of the best wind resources.

We conclude that energy modeling studies that only consider one historic weather year are likely to substantially underestimate the occurrence of low-wind-power events and related system implications. In particular, analyses with an energy system planning perspective should take less frequent LWP events into account, e.g., the discussed events with a return period of ten years, or even the most extreme event identified here. This is particularly important when the complementary role of other variable and dispatchable generators, energy storage, or demand-side measures in highly-renewable energy systems is to be explored. (This is demonstrated, for example, by Schill and Zerrahn [2018] in an analysis of storage requirements for renewable energy integration, using a sensitivity analysis with one artificial no-wind week.) Further, analyses dealing with the pros and cons of either more decentralized or more centralized renewable energy systems should consider the spatial dimension of LWP events. Although not the focus of our analysis, our results indicate that LWP events are more pronounced for smaller geographic areas.

From an energy policy perspective, our findings on LWP events occurring in winter may be most relevant. Our analysis indicates that concerns about frequent and persistent LWP events in German winters appear to be overrated, considering that the longest event with an average capacity factor below 10% and a ten-year return period in winter has a duration of less than five days. We further recommend that policy makers or regulators develop a proper definition of the Dunkelflaute term, which currently appears to be used in a rather qualitative way. Our two definitions of LWP events proposed here may be useful in this context. While our analysis deliberately focuses on LWP events of onshore wind power in Germany, we see an avenue for future research that would ideally combine the analysis of low production periods of onshore and offshore wind power as well as solar PV with time series of load, while expanding the geographic focus beyond Germany.
The open-source provision of the tool used for the present analysis may be a useful starting point for such research.

## Acknowledgment

This analysis is a result of the research project P2X, funded by the German Federal Ministry of Education and Research (FKZ 03SFK2B1). Wolf-Peter Schill carried out parts of the work during a research stay at the University of Melbourne. Nils Ohlendorf mainly worked on this project while employed at DIW Berlin, and partly also after being employed at MCC. We thank the participants of the DIW Sustainability Cluster Seminar in April 2017, Strommarkttreffen Berlin November 2017, IAEE International Conference Groningen 2018 and Enerday Dresden 2018 for valuable comments on earlier drafts.

## Data availability statement

The data that support the findings of this study have been created with software that is openly available under an MIT license at https://doi.org/10.5281/zenodo.3694373.

## References

* Archer and Jacobson [2007] Archer, C.L., Jacobson, M.Z., 2007. Supplying baseload power and reducing transmission requirements by interconnecting wind farms. Journal of Applied Meteorology and Climatology 46, 1701–1717. doi:10.1175/2007JAMC1538.1.
* BMWi [2019] BMWi, 2019. Zeitreihen zur Entwicklung der erneuerbaren Energien in Deutschland. Bundesministerium für Wirtschaft und Energie (Federal Ministry for Economic Affairs and Energy). URL: https://www.erneuerbare-energien.de/EE/Redaktion/DE/Downloads/zeitreihen-zur-entwicklung-der-erneuerbaren-energien-in-deutschland-1990-2018.pdf.
* Bosilovich et al. [2016] Bosilovich, M.G., Lucchesi, R., Suarez, M., 2016. MERRA-2: File Specification. GMAO Office Note No. 9 (Version 1.1). NASA Global Modeling and Assimilation Office. URL: https://gmao.gsfc.nasa.gov/pubs/docs/Bosilovich785.pdf.
* Bundesregierung [2019] Bundesregierung, 2019. Klimaschutzprogramm 2030 der Bundesregierung zur Umsetzung des Klimaschutzplans 2050. German Federal Government. URL: https://www.bundesregierung.de/resource/blob/975226/1679914/e01d6bd855f09bf05cf7498e06d0a3ff/2019-10-09-klima-massnahmen-data.pdf.
* Cannon et al. [2015] Cannon, D., Brayshaw, D., Methven, J., Coker, P., Lenaghan, D., 2015. Using reanalysis data to quantify extreme wind power generation statistics: A 33 year case study in Great Britain. Renewable Energy 75, 767–778. doi:10.1016/j.renene.2014.10.024.
* Carvalho et al. [2014] Carvalho, D., Rocha, A., Gómez-Gesteira, M., Santos, C.S., 2014. WRF wind simulation and wind energy production estimates forced by different reanalyses: Comparison with observed data for Portugal. Applied Energy 117, 116–126. doi:10.1016/j.apenergy.2013.12.001.
* Collins et al. [2018] Collins, S., Deane, P., Ó Gallachóir, B., Pfenninger, S., Staffell, I., 2018. Impacts of inter-annual wind and solar variations on the European power system. Joule 2, 2076–2090. doi:10.1016/j.joule.2018.06.020.
* de Coninck et al. [2018] de Coninck, H., Revi, A., Babiker, M., Bertoldi, P., Buckeridge, M., Cartwright, A., Dong, W., Ford, J., Fuss, S., Hourcade, J.C., Ley, D., Mechler, R., Newman, P., Revokatova, A., Schultz, S., Steg, L., Sugiyama, T., 2018. Strengthening and implementing the global response, in: Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty.
URL: https://www.ipcc.ch/site/assets/uploads/sites/2/2019/05/SR15_Chapter4_Low_Res.pdf. * Decker et al. [2012] Decker, M., Brunke, M.A., Wang, Z., Sakaguchi, K., Zeng, X., Bosilovich, M.G., 2012\. Evaluation of the Reanalysis Products from GSFC, NCEP, and ECMWF Using Flux Tower Observations. Journal of Climate 25, 1916–1944. doi:10.1175/JCLI-D-11-00004.1. * Deutscher Bundestag [2019a] Deutscher Bundestag (Ed.), 2019a. Plenarprotokoll 19/98 Stenografischer Bericht 98. Sitzung. Plenarprotokoll 19/98. URL: http://dip21.bundestag.de/dip21/btp/19/19098.pdf. 09.05.2019. * Deutscher Bundestag [2019b] Deutscher Bundestag (Ed.), 2019b. Unterrichtung durch die Bundesregierung Zweiter Fortschrittsbericht zur Energiewende 2019. Drucksache 19/10760. Drucksache 19/10760 19. Wahlperiode. URL: http://dip21.bundestag.de/dip21/btd/19/107/1910760.pdf. 07.06.2019. * Gelaro et al. [2017] Gelaro, R., McCarty, W., Suárez, M.J., Todling, R., Molod, A., Takacs, L., Randles, C.A., Darmenov, A., Bosilovich, M.G., Reichle, R., Wargan, K., Coy, L., Cullather, R., Draper, C., Akella, S., Buchard, V., Conaty, A., da Silva, A.M., Gu, W., Kim, G.K., Koster, R., Lucchesi, R., Merkova, D., Nielsen, J.E., Partyka, G., Pawson, S., Putman, W., Rienecker, M., Schubert, S.D., Sienkiewicz, M., Zhao, B., 2017. The Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2). Journal of Climate 30, 5419–5454. doi:10.1175/JCLI-D-16-0758.1. * Germer and Kleidon [2019] Germer, S., Kleidon, A., 2019\. Have wind turbines in germany generated electricity as would be expected from the prevailing wind conditions in 2000-2014? PLOS ONE 14, 1–16. doi:10.1371/journal.pone.0211028. * González-Aparicio et al. [2017] González-Aparicio, I., Monforti, F., Volker, P., Zucker, A., Careri, F., Huld, T., Badger, J., 2017. Simulating European wind power generation applying statistical downscaling to reanalysis data. Applied Energy 199, 155 – 168. doi:10.1016/j.apenergy.2017.04.066. * Grams et al. [2017] Grams, C.M., Beerli, R., Pfenninger, S., Staffell, I., Wernli, H., 2017. Balancing Europe’s wind-power output through spatial deployment informed by weather regimes. Nature Climate Change 7, 557–562. doi:10.1038/nclimate3338. * Handschy et al. [2017] Handschy, M.A., Rose, S., Apt, J., 2017. Is it always windy somewhere? Occurrence of low-wind-power events over large areas. Renewable Energy 101, 1124 – 1130. doi:10.1016/j.renene.2016.10.004. * Hans Ertel Zentrum [2019] Hans Ertel Zentrum, 2019. Cosmo regional reanalysis. Universität Bonn and Deutscher Wetterdienst. URL: https://reanalysis.meteo.uni-bonn.de/?Overview. * Kahn [1979] Kahn, E., 1979. The reliability of distributed wind generators. Electric Power Systems Research 2, 1 – 14. doi:10.1016/0378-7796(79)90021-X. * Kruyt et al. [2017] Kruyt, B., Lehning, M., Kahl, A., 2017. Potential contributions of wind power to a stable and highly renewable Swiss power supply. Applied Energy 192, 1 – 11. doi:10.1016/j.apenergy.2017.01.085. * Kumler et al. [2019] Kumler, A., Carreño, I.L., Craig, M.T., Hodge, B.M., Cole, W., Brancucci, C., 2019\. Inter-annual variability of wind and solar electricity generation and capacity values in Texas. Environmental Research Letters 14, 044032. doi:10.1088/1748-9326/aaf935. * Leahy and McKeogh [2013] Leahy, P.G., McKeogh, E.J., 2013\. Persistence of low wind speed conditions and implications for wind power variability. Wind Energy 16, 575–586. doi:10.1002/we.1509. * Liléo et al. 
[2013] Liléo, S., Berge, E., Undheim, O., Klinkert, R., Bredesen, R.E., 2013. Long-term correction of wind measurements. state-of-the-art, guidelines and future work. Complexity 1, 2–3. * Liléo and Petrik [2011] Liléo, S., Petrik, O., 2011\. Investigation on the use of NCEP/NCAR, MERRA and NCEP/CFSR reanalysis data in wind resource analysis, in: European Wind Energy Conference and Exhibition 2011, EWEC 2011\. * Lüers [2016] Lüers, S., 2016. Status des Windenergieausbaus an Land in Deutschland \- Zusätzliche Auswertungen und Daten für das Jahr 2015. Technical Report. Deutsche WindGuard. Varel. URL: https://www.windguard.de/veroeffentlichungen.html?file=files/cto_layout/img/unternehmen/veroeffentlichungen/2016/Status%20des%20Windenergieausbaus%20an%20Land%20in%20Deutschland%20-%20Zus%C3%A4tzliche%20Auswertungen%20und%20Daten%20f%C3%BCr%20das%20Jahr%202015.pdf. * Moemken et al. [2018] Moemken, J., Reyers, M., Feldmann, H., Pinto, J.G., 2018\. Future changes of wind speed and wind energy potentials in EURO-CORDEX ensemble simulations. Journal of Geophysical Research: Atmospheres 123, 6373–6389. doi:10.1029/2018JD028473. * Molod et al. [2015] Molod, A., Takacs, L., Suarez, M., Bacmeister, J., 2015\. Development of the GEOS-5 atmospheric general circulation model: evolution from MERRA to MERRA2. Geoscientific Model Development 8, 1339–1356. doi:10.5194/gmd-8-1339-2015. * Ohlendorf [2020] Ohlendorf, N., 2020. Source code for “Frequency and persistence of low-wind-power events in Germany”. Zenodo. doi:10.5281/zenodo.3694374. * Olauson and Bergkvist [2015] Olauson, J., Bergkvist, M., 2015\. Modelling the Swedish wind power production using MERRA reanalysis data. Renewable Energy 76, 717 – 725. doi:10.1016/j.renene.2014.11.085. * Open Power System Data [2017] Open Power System Data, 2017. Data package renewable power plants. URL: https://data.open-power-system-data.org/renewable_power_plants/2017-02-16/. version 2017-02-16. * Patlakas et al. [2017] Patlakas, P., Galanis, G., Diamantis, D., Kallos, G., 2017\. Low wind speed events: persistence and frequency. Wind Energy 20, 1033–1047. doi:10.1002/we.2078. * Raynaud et al. [2018] Raynaud, D., Hingray, B., François, B., Creutin, J., 2018\. Energy droughts from variable renewable energy sources in European climates. Renewable Energy 125, 578 – 589. doi:10.1016/j.renene.2018.02.130. * Rose and Apt [2015] Rose, S., Apt, J., 2015. What can reanalysis data tell us about wind power? Renewable Energy 83, 963 – 969. doi:10.1016/j.renene.2015.05.027. * Santos-Alamillos et al. [2017] Santos-Alamillos, F.J., Brayshaw, D.J., Methven, J., Thomaidis, N.S., Ruiz-Arias, J.A., Pozo-Vázquez, D., 2017\. Exploring the meteorological potential for planning a high performance European electricity super-grid: optimal power capacity distribution among countries. Environmental Research Letters 12, 114030. doi:10.1088/1748-9326/aa8f18. * Schill and Zerrahn [2018] Schill, W.P., Zerrahn, A., 2018\. Long-run power storage requirements for high shares of renewables: Results and sensitivities. Renewable and Sustainable Energy Reviews 83, 156 – 171. doi:10.1016/j.rser.2017.05.205. * Schlott et al. [2018] Schlott, M., Kies, A., Brown, T., Schramm, S., Greiner, M., 2018. The impact of climate change on a cost-optimal highly renewable European electricity network. Applied Energy 230, 1645 – 1659. doi:10.1016/j.apenergy.2018.09.084. * Shaner et al. [2018] Shaner, M.R., Davis, S.J., Lewis, N.S., Caldeira, K., 2018\. 
Geophysical constraints on the reliability of solar and wind power in the United States. Energy and Environmental Science 11, 914–925. doi:10.1039/C7EE03029K. * Sharp et al. [2015] Sharp, E., Dodds, P., Barrett, M., Spataru, C., 2015\. Evaluating the accuracy of CFSR reanalysis hourly wind speed forecasts for the UK, using in situ measurements and geographical information. Renewable Energy 77, 527 – 538. doi:10.1016/j.renene.2014.12.025. * Staffell and Green [2014] Staffell, I., Green, R., 2014\. How does wind farm performance decline with age? Renewable Energy 66, 775 – 786. doi:10.1016/j.renene.2013.10.041. * Staffell and Pfenninger [2016] Staffell, I., Pfenninger, S., 2016\. Using bias-corrected reanalysis to simulate current and future wind power output. Energy 114, 1224 – 1239. doi:10.1016/j.energy.2016.08.068. * Tobin et al. [2016] Tobin, I., Jerez, S., Vautard, R., Thais, F., van Meijgaard, E., Prein, A., Déqué, M., Kotlarski, S., Maule, C.F., Nikulin, G., Noël, T., Teichmann, C., 2016\. Climate change impacts on the power generation potential of a European mid-century wind farms scenario. Environmental Research Letters 11, 034013. doi:10.1088/1748-9326/11/3/034013. * Wallasch et al. [2015] Wallasch, A.K., Lüers, S., Rehfeldt, K., 2015. Kostensituation der Windenergie an Land in Deutschland - Update. Technical Report. Deutsche WindGuard. Varel. URL: https://www.windguard.de/veroeffentlichungen.html?file=files/cto_layout/img/unternehmen/veroeffentlichungen/2015/Kostensituation%20der%20Windenergie%20an%20Land%20in%20Deutschland%20-%20Update.pdf. * Weber et al. [2019] Weber, J., Reyers, M., Beck, C., Timme, M., Pinto, J.G., Witthaut, D., Schäfer, B., 2019. Wind power persistence characterized by superstatistics. Scientific Reports 9, 19971–. doi:10.1038/s41598-019-56286-1. * Wetzel [2017] Wetzel, D., 2017. Die ,,Dunkelflaute” bringt Deutschlands Stromversorgung ans Limit. Die Welt. URL: https://www.welt.de/wirtschaft/article161831272/Die-Dunkelflaute-bringt-Deutschlands-Stromversorgung-ans-Limit.html. * Wetzel [2019] Wetzel, D., 2019. In der ,,kalten Dunkelflaute” rächt sich die Energiewende. Die Welt. URL: https://www.welt.de/wirtschaft/article191195983/Energiewende-Das-droht-uns-in-der-kalten-Dunkelflaute.html. * Wiese et al. [2019] Wiese, F., Schlecht, I., Bunke, W.D., Gerbaulet, C., Hirth, L., Jahn, M., Kunz, F., Lorenz, C., Mühlenpfordt, J., Reimann, J., Schill, W.P., 2019. Open Power System Data – Frictionless data for electricity system modelling. Applied Energy 236, 401 – 409. doi:10.1016/j.apenergy.2018.11.097. * Wissenschaftliche Dienste [2019] Wissenschaftliche Dienste, 2019. Sicherstellung der Stromversorgung bei Dunkelflauten. Documentation. Deutscher Bundestag. URL: https://www.bundestag.de/resource/blob/627898/b65deea51fdb399e4b64f1182465658d/WD-5-167-18-pdf-data.pdf. wD 5 - 3000 - 167/18. ## Appendix A Reanalysis data and its use for energy modelling Reanalysis data is increasingly used for energy modelling as it provides consistent global time series of long-term atmosphere data such as wind speed, temperature and air pressure in regular spatial and temporal resolutions. The underlying global circulation models extrapolate measurement station data on wind speeds, temperature, moisture and surface pressure as well as data from satellites and precipitation measurements [Decker et al., 2012]. Several publicly available second-generation global reanalysis datasets have been released since the early 2000s. 
We use MERRA-2, which builds on and improves the previous MERRA dataset, using advanced models and data sources [Molod et al., 2015]. Decker et al. [2012] evaluate the accuracy of several reanalysis datasets (MERRA, NCEP, ERA-40, ERA-Interim, CFSR and GLDAS) using flux tower measurements in the Northern Hemisphere. Almost all products overestimate the monthly and 6-hourly wind speeds and their variability. MERRA and ERA-Interim show the lowest root-mean-square errors and biases for diurnal cycles. Sharp et al. [2015] review other data validation studies of different reanalysis datasets. Three studies derive Pearson's correlation coefficients for MERRA between 0.75 and 0.89 based on measurement stations in Sweden, Portugal, Norway and Denmark [Liléo and Petrik, 2011, Liléo et al., 2013, Carvalho et al., 2014]. Staffell and Pfenninger [2016] propose country-specific wind speed bias correction factors for MERRA and MERRA-2 to increase the correlation with national capacity factors. Without such correction, average capacity factors for Germany based on raw MERRA or MERRA-2 wind speeds would be overestimated. Staffell and Green [2014] make a similar point for the UK. In contrast, Cannon et al. [2015] do not use correction factors in a UK application. Even if MERRA wind speeds turn out not to be particularly valid for single measurement points, spatial aggregation of mean wind speeds over all stations results in a correlation coefficient of 0.94. This indicates a high validity of MERRA data for large-scale wind patterns. Following Cannon et al. [2015], we also refrain from introducing correction factors and instead make use of the error-smoothing effect of spatial aggregation. In doing so, we also avoid model artefacts, particularly as the usefulness of correction factors has only been demonstrated for average wind speeds, but not for extreme values.

## Appendix B Wind power turbines

The low- and high-wind power curves used in our analysis are based on data of eight wind power turbines by six manufacturers, namely Nordex, Senvion, Enercon, Vestas, Gamesa and Vensys. Specifically, we use the following high-wind power turbines:

1. Nordex N90-2.5MW
2. Vestas V90-2.0MW
3. Gamesa G97-2MW
4. Vensys 100-2.5MW

Analogously, we use the following low-wind power turbines:

1. Nordex N131-3.3MW
2. Senvion 3.2M122
3. Enercon E126 EP4/4.2MW
4. Vestas V126-3.3MW

## Appendix C Discussion of limitations

We briefly discuss some limitations of our analysis and how these may qualitatively impact results. First, there are general limitations of using reanalysis data which have been discussed in the literature, for example spatial biases or issues with upscaling to hub heights [Sharp et al., 2015, Olauson and Bergkvist, 2015, Rose and Apt, 2015, Staffell and Pfenninger, 2016]. It is, however, not clear if there are specific distortions with respect to extreme low-wind events derived from reanalysis data. A limitation specific to the MERRA-2 dataset is the relatively coarse 50 × 50 km grid cell size, which insufficiently represents local impacts on wind speeds. Regional reanalysis data with more refined geographical resolutions may resolve this issue, e.g., COSMO-REA2 with 2 × 2 km, or COSMO-REA6 with 6 × 6 km [Hans Ertel Zentrum, 2019], yet these are only available for shorter periods of time. The global coverage of MERRA-2 further allows repeating our open-source analysis for other countries and world regions.
Second, we use power curves of currently available wind turbines and assume hub-heights of recently constructed plants. We may thus overestimate wind power generation compared to the currently existing fleet of wind turbines in Germany, which includes many older and smaller turbines, and in turn underestimate the magnitude of current LWP events. Conversely, we may underestimate power generation of future turbines, and accordingly overestimate the magnitude of future low-wind-power events, assuming that turbine efficiency and hub height increases further, with corresponding upward shifts in the power curves. Once LWP events become more relevant for the overall energy system, this may also trigger specific technology improvements toward lower cut-in speeds and a steeper slope of the power curve on the very left-hand side. Quantifying the potentially mitigating effects of such developments on LWP periods is left for future research. Third, we use the current spatial capacity distribution of German wind power plants for deriving an aggregated capacity factor time series. We implicitly assume that this distribution also persists in the future. In reality, a relative increase of wind power deployment at sites with lower wind resources may occur, for example in southern Germany. From the results presented in Section 3.1, we infer that a more even spatial dispersion of wind turbines could slightly mitigate LWP events. Next, climate change has an impact on wind speeds. Future time series of wind power capacity factors will thus differ from the historic ones investigated here. Tobin et al. [2016] find that wind power variability in Europe may generally increase, but Schlott et al. [2018] conclude that this has no substantial effect on optimal deployment of onshore wind power in highly renewable future scenarios. Moemken et al. [2018] find that climate change will increase the occurrence of low wind speeds. Finally, the focus of this analysis is a detailed but selective investigation of onshore LWP events in Germany. This geographic focus helps to keep the analysis tractable and avoids making implicit assumptions on continental electricity transmission infrastructure. It is also relevant from an energy policy perspective, which often includes national energy security considerations. Yet expanding the geographic scope of the analysis would allow raising complementary insights on larger-scale spatial patterns of extreme LWP events. Focusing on onshore wind power, and not including other renewable energy sources such as offshore wind power and solar PV, allows for parsimonious model assumptions, and findings remain valid for any level of installed capacity. Analyses that would combine periods of low production from various renewable energy sources, and also explore their correlation with electric load, appear to be a promising field for future research. The work of Raynaud et al. [2018], albeit with lower temporal and spatial detail compared to our analysis, can be considered as a first step in this direction.
2024-09-04T02:54:58.461125
2020-03-09T16:24:41
2003.04239
{ "authors": "Debajyoti Choudhuri, Jiabin Zuo", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26121", "submitter": "Debarjoyti Choudhuri", "url": "https://arxiv.org/abs/2003.04239" }
arxiv-papers
# A shadow of algebraic topology and variational method - Prandtl Batchelor problem

Debajyoti Choudhuri (corresponding author;<EMAIL_ADDRESS>ORCID ID: 0000-0001-8744-9350)
Department of Mathematics, National Institute of Technology Rourkela, India

###### Abstract

In this paper we study the existence of a nontrivial weak solution to a Prandtl-Batchelor type free boundary value elliptic problem involving the $p$-Laplacian operator and a power nonlinearity. Topics from algebraic topology will be used to establish the existence of a solution to the approximating problem, whereas a variational technique will be used to settle the existence of a solution to the main problem. In the process, a couple of classical results were also improved to suit the purpose of establishing the existence of a nontrivial solution.

Keywords: Dirichlet free boundary value problem, Sobolev space, Morse relation, cohomology group.

AMS Classification: 35J35, 35J60.

## 1 Introduction

We will investigate the existence of a solution to the following free boundary value problem:

$$\begin{split}-\Delta_{p}u&=\lambda\chi_{\{u>1\}}(u-1)_{+}^{q-1}\quad\text{in }\Omega\setminus H(u),\\ |\nabla u^{+}|^{p}-|\nabla u^{-}|^{p}&=\frac{p}{p-1}\quad\text{in }H(u),\\ u&=0\quad\text{on }\partial\Omega.\end{split}\tag{1.1}$$

Here, $\lambda>0$ is a parameter, $(u-1)_{+}=\max\{u-1,0\}$ and $H(u)=\partial\{u>1\}$. Also, $\nabla u^{\pm}$ are the limits of $\nabla u$ from the sets $\{u>1\}$ and $\{u\leq 1\}^{\circ}$, respectively. The domain $\Omega\subset\mathbb{R}^{N}$ ($N\geq 2$) is bounded with a sufficiently smooth boundary $\partial\Omega$. The relation between the exponents is assumed to be $1<p\leq q-1$, with $q<p^{*}=\frac{Np}{N-p}$. The solution(s) satisfy the free boundary condition in the following sense: for all $\vec{\phi}\in C_{0}^{1}(\mathbb{R}^{N})$ such that $u\neq 1$ a.e. on the support of $\vec{\phi}$,

$$\underset{\epsilon^{+}\rightarrow 0}{\lim}\int_{u=1+\epsilon^{+}}\left(\frac{p}{p-1}-|\nabla u|^{p}\right)\vec{\phi}\cdot\hat{n}\,dS-\underset{\epsilon^{-}\rightarrow 0}{\lim}\int_{u=1-\epsilon^{-}}|\nabla u|^{p}\,\vec{\phi}\cdot\hat{n}\,dS=0,\tag{1.2}$$

where $\hat{n}$ is the outward drawn normal to $\{1-\epsilon^{-}<u<1+\epsilon^{+}\}$. Note that the sets $\{u=1\pm\epsilon^{\pm}\}$ are smooth hypersurfaces for almost every $\epsilon^{\pm}>0$ by Sard's theorem. The limit in (1.2) is taken by running such $\epsilon^{\pm}>0$ towards zero.

A rich literature survey can be found in the book by Perera et al. [10], where the authors discuss a variety of problems involving the $p$-Laplacian operator that can be studied using Morse theory. The motivation for the current work is drawn from the work of Perera [13]. The treatment used to address the existence of at least one (or two) solution(s) to the approximating problem may be classical (Section 3, Theorems 3.3 and 3.5), but the result concerning the regularity of the free boundary is very new, and the question of existence of a solution to problem (1.1) had not been answered until now (Section 4, Lemma 4.1), to the best of our knowledge. Two more results, due to Alt-Caffarelli [1] (Section 4, Lemma 4.2) and Caffarelli et al. [6] (Appendix, Lemma 4.3), were improved to the best possible extent to suit the purpose of the problem in this paper.
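A quick consistency check (our remark, not part of the original text): for $p=2$ the flux constant in (1.1) is $\frac{p}{p-1}=2$, so the free boundary condition reduces to
$$|\nabla u^{+}|^{2}-|\nabla u^{-}|^{2}=2,$$
which is exactly the classical Prandtl-Batchelor condition recalled in the next subsection.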
### 1.1 A physical motivation

Consider the problem

$$\begin{split}-\Delta u&=\lambda\chi_{\{u>1\}}(x)\quad\text{in }\Omega\setminus H(u),\\ |\nabla u^{+}|^{2}-|\nabla u^{-}|^{2}&=2\quad\text{in }H(u),\\ u&=0\quad\text{on }\partial\Omega.\end{split}\tag{1.3}$$

This is the well-known Prandtl-Batchelor free boundary value problem, where the phase $\{u>1\}$ represents the vortex patch bounded by the vortex line $u=1$ in a steady fluid flow for $N=2$ (see Batchelor [2, 3]). Thus the current problem is a more general version of (1.3). For further physical applications of this problem we direct the reader's attention to the work of Caflisch [4] and Elcrat and Miller [7]. Another instance of such a phenomenon is the non-equilibrium melting of ice. In a given block of ice, the heat equation can be solved with a given set of appropriate initial/boundary conditions in order to determine the temperature. However, if there exists a region of ice in which the temperature is greater than the melting point, this subdomain will be filled with water. The boundary thus formed at the ice-water interface is controlled by the solution of the heat equation. Encountering a free boundary in nature is thus not unusual. The problem in this paper is a broad generalization of this physical phenomenon which, besides being a new addition to the literature, can also serve as a note bridging problems in elliptic PDEs with algebraic topology.

## 2 Preliminaries

We begin by giving the relevant definitions and results, besides defining the function space which will be used frequently in the article. Let $X$ be a topological space and $A\subset X$ a topological subspace. Roughly, a homology group is an algebraic group constructed from a topological object or space. The following is the fundamental tool we will work with, namely homology theory [12].

###### Definition 2.1.

A homology theory on a family of pairs of spaces $(X,A)$ consists of:

1. A sequence $\{H_{k}(X,A)\}_{k\in\mathbb{N}_{0}}$ of abelian groups, known as the homology groups of the pair $(X,A)$ (note that for the pair $(X,\emptyset)$, we write $H_{k}(X)$, $k\in\mathbb{N}_{0}$). Here $\mathbb{N}_{0}=\mathbb{N}\cup\{0\}$.
2. To every map of pairs $\varphi:(X,A)\rightarrow(Y,B)$ is associated a homomorphism $\varphi_{*}:H_{k}(X,A)\rightarrow H_{k}(Y,B)$ for all $k\in\mathbb{N}_{0}$.
3. To every $k\in\mathbb{N}_{0}$ and every pair $(X,A)$ is associated a homomorphism $\partial:H_{k}(X,A)\rightarrow H_{k-1}(A)$.

These items satisfy the following axioms.

($A_{1}$) If $\varphi=id_{X}$, then $\varphi_{*}=id|_{H_{k}(X,A)}$.

($A_{2}$) If $\varphi:(X,A)\rightarrow(Y,B)$ and $\psi:(Y,B)\rightarrow(Z,C)$ are maps of pairs, then $(\psi\circ\varphi)_{*}=\psi_{*}\circ\varphi_{*}$.

($A_{3}$) If $\varphi:(X,A)\rightarrow(Y,B)$ is a map of pairs, then $\partial\circ\varphi_{*}=(\varphi|_{A})_{*}\circ\partial$.

($A_{4}$) If $i:A\rightarrow X$ and $j:(X,\emptyset)\rightarrow(X,A)$ are inclusion maps, then the following sequence is exact:
$...\xrightarrow[]{\partial}H_{k}(A)\xrightarrow[]{i_{*}}H_{k}(X)\xrightarrow[]{j_{*}}H_{k}(X,A)\xrightarrow[]{\partial}H_{k-1}(A)\rightarrow...$
Recall that a chain
$...\xrightarrow[]{\partial_{k+1}}C_{k}(X)\xrightarrow[]{\partial_{k}}C_{k-1}(X)\xrightarrow[]{\partial_{k-1}}C_{k-2}(X)\xrightarrow[]{\partial_{k-2}}...$
is said to be exact if $im(\partial_{k+1})=ker(\partial_{k})$ for each $k\in\mathbb{N}_{0}$.
($A_{5}$) If $\varphi,\psi:(X,A)\rightarrow(Y,B)$ are homotopic maps of pairs, then $\varphi_{*}=\psi_{*}$.
($A_{6}$) (Excision): If $U\subseteq X$ is an open set with $\bar{U}\subseteq\text{int}(A)$ and $i:(X\setminus U,A\setminus U)\rightarrow(X,A)$ is the inclusion map, then $i_{*}:H_{k}(X\setminus U,A\setminus U)\rightarrow H_{k}(X,A)$ is an isomorphism.
($A_{7}$) If $X=\{*\}$, then $H_{k}(\{*\})=0$ for all $k\in\mathbb{N}$.

###### Definition 2.2.
A continuous map $F:X\times[0,1]\to X$ is a deformation retraction of a space $X$ onto a subspace $A$ if, for every $x\in X$ and $a\in A$, $F(x,0)=x$, $F(x,1)\in A$, and $F(a,1)=a$.

A crucial notion in analysis is that of compactness; the Palais-Smale condition is a compactness-type condition on functionals, given as follows.

###### Definition 2.3.
(S. Kesavan [11]) Let $V$ be a Banach space and $J:V\rightarrow\mathbb{R}$ a $C^{1}$ functional. It is said to satisfy the Palais-Smale condition (PS) if the following holds: whenever $(u_{n})$ is a sequence in $V$ such that $(J(u_{n}))$ is bounded and $J^{\prime}(u_{n})\rightarrow 0$ in $V^{\prime}$ (the dual space of $V$), then $(u_{n})$ has a strongly convergent subsequence.

The following is a deformation lemma which will be essential in computing the homology groups.

###### Lemma 2.4.
(S. Kesavan [11]) Let $J:V\rightarrow\mathbb{R}$ be a $C^{1}$ functional satisfying the Palais-Smale condition. Let $c,a$ be real numbers. Define $K_{J,c}=\{v\in V:J(v)=c,\,J^{\prime}(v)=0\}$ and $K^{a}=\{v\in V:J(v)\leq a\}$ (likewise we define $K_{a}=\{v\in V:J(v)\geq a\}$). Let $K_{J,c}=\emptyset$. Then there exist $\epsilon^{\prime}>0$ and a continuous homotopy $\eta:[0,1]\times V\rightarrow V$ such that for all $0<\epsilon\leq\epsilon^{\prime}$:
1. $\eta(0,v)=v$ for all $v\in V$.
2. $\eta(t,v)=v$ for all $t\in[0,1]$ and all $v\notin J^{-1}([c-\epsilon,c+\epsilon])$.
3. $\eta(1,K^{c+\epsilon})\subset K^{c-\epsilon}$.

###### Definition 2.5.
The Morse index of a critical point of a functional $J:V\rightarrow\mathbb{R}$ is defined to be the dimension of the largest subspace of $V$ on which $J^{\prime\prime}$, the second Fréchet derivative at that point, is negative definite.

### 2.1 Space description

We begin by defining the standard Lebesgue space $L^{p}(\Omega)$ for $1\leq p<\infty$ as $L^{p}(\Omega)=\left\{u:\Omega\rightarrow\mathbb{R}:u\ \text{is measurable and}\ \int_{\Omega}|u|^{p}dx<\infty\right\}$, endowed with the norm $\|u\|_{p}=\left(\int_{\Omega}|u|^{p}dx\right)^{\frac{1}{p}}$. We define the Sobolev space as $W^{1,p}(\Omega)=\{u\in L^{p}(\Omega):\nabla u\in(L^{p}(\Omega))^{N}\}$ with the norm given by $\|u\|_{1,p}^{p}=\|u\|_{p}^{p}+\|\nabla u\|_{p}^{p}$. We further define $W_{0}^{1,p}(\Omega)=\{u\in W^{1,p}(\Omega):u=0\ \text{on}\ \partial\Omega\}.$ The associated norm is $\|u\|=\|\nabla u\|_{p}$. With these norms, $L^{p}(\Omega)$, $W^{1,p}(\Omega)$ and $W_{0}^{1,p}(\Omega)$ are separable, reflexive Banach spaces ([11]). We now state Hölder's inequality and the embedding results in the following propositions.

###### Proposition 2.6.
For any $u\in L^{p}(\Omega)$ and $v\in L^{p^{\prime}}(\Omega)$, where $L^{p^{\prime}}(\Omega)$ is the conjugate space of $L^{p}(\Omega)$ with $\frac{1}{p}+\frac{1}{p^{\prime}}=1$, we have $\big{|}\int_{\Omega}uv\,dx\big{|}\leq\|u\|_{p}\|v\|_{p^{\prime}}.$

###### Proposition 2.7.
If $p<N$, then $W^{1,p}(\Omega)\hookrightarrow L^{r}(\Omega)$ is continuous for $r\in[p,p^{*}]$ and compact for $r\in[p,p^{*})$. If $p=N$, then $W^{1,p}(\Omega)\hookrightarrow L^{r}(\Omega)$ is continuous and compact for $r\in[p,\infty)$.
Further, if $p>N$, then $W^{1,p}(\Omega)\hookrightarrow C^{0,1-\frac{N}{p}}(\bar{\Omega})$.

## 3 The way to tackle the problem using Morse theory

We first define an energy functional associated with the problem (1.1) as follows:
$$I(u)=\int_{\Omega}\frac{|\nabla u|^{p}}{p}dx+\int_{\Omega}\chi_{\{u>1\}}(x)dx-\lambda\int_{\Omega}\frac{(u-1)_{+}^{q}}{q}dx.$$
This functional is not even differentiable and hence poses serious issues as far as the application of variational theorems is concerned. Thus we approximate $I$ using the following functionals, which vary with respect to a parameter $\alpha>0$. This method is adapted from the work of Jerison-Perera [9]. We define a smooth function $g:\mathbb{R}\rightarrow[0,2]$ as follows:
$$g(t)=\begin{cases}0,&\text{if}~t\leq 0\\ \text{a positive function},&\text{if}~0<t<1\\ 0,&\text{if}~t\geq 1\end{cases}$$
with $\int_{0}^{1}g(t)dt=1$. We further let $G(t)=\int_{0}^{t}g(s)ds$. Clearly, $G$ is a smooth and nondecreasing function such that
$$G(t)=\begin{cases}0,&\text{if}~t\leq 0\\ \text{a positive function}<1,&\text{if}~0<t<1\\ 1,&\text{if}~t\geq 1.\end{cases}$$
We thus define
$$I_{\alpha}(u)=\int_{\Omega}\frac{|\nabla u|^{p}}{p}dx+\int_{\Omega}G\left(\frac{u-1}{\alpha}\right)dx-\lambda\int_{\Omega}\frac{(u-1)_{+}^{q}}{q}dx.$$
The functional $I_{\alpha}$ is of class at least $C^{2}$, and hence
$$\langle I_{\alpha}^{\prime\prime}(u)v,w\rangle=\int_{\Omega}[|\nabla u|^{p-2}\nabla v\cdot\nabla w+(p-2)|\nabla u|^{p-4}(\nabla u\cdot\nabla v)(\nabla u\cdot\nabla w)]dx+\int_{\Omega}\frac{1}{\alpha^{2}}g^{\prime}\left(\frac{u-1}{\alpha}\right)vw\,dx-\lambda(q-1)\int_{\Omega}(u-1)_{+}^{q-2}vw\,dx.$$
The following is an important result in Morse theory, which relates the associated homology groups to the critical set $K_{J,(-\infty,a]}=\{x\in V:J(x)\leq a,\,J^{\prime}(x)=0\}$.

###### Theorem 3.1.
Let $J\in C^{2}(V)$ satisfy the Palais-Smale condition and let $a$ be a regular value of $J$. Then $H_{*}(V,J^{a})\neq 0$ implies that $K_{J,(-\infty,a]}\neq\emptyset$.

###### Remark 3.2.
Before we apply the Morse lemma we recall that for a Morse function the following hold:
1. $H_{*}(J^{c},J^{c}\setminus\text{Crit}(J,c))=\oplus_{j}H_{*}(J^{c}\cap N_{j},J^{c}\cap N_{j}\setminus\{x_{j}\}),$ where $\text{Crit}(J,c)=\{x\in V:J(x)=c,J^{\prime}(x)=0\}$ and $N_{j}$ is a neighbourhood of $x_{j}$.
2. $H_{k}(J^{c}\cap N,J^{c}\cap N\setminus\{x\})=\begin{cases}\mathbb{R},&k=m(x)\\ 0,&\text{otherwise}\end{cases}$ where $m(x)$ is the Morse index of $x$, a critical point of $J$.
3. Further, $H_{k}(J^{a},J^{b})=\oplus_{\{i:m(x_{i})=k\}}\mathbb{R}=\mathbb{R}^{m_{k}(a,b)}$, where $m_{k}(a,b)=n(\{i:m(x_{i})=k,x_{i}\in K_{J,(a,b)}\})$. Here $n(S)$ denotes the number of elements of the set $S$.
4. Morse relation: $\sum_{u\in K_{J,[a,b]}}\sum_{k\geq 0}\text{dim}(C_{k}(J,u))t^{k}=\sum_{k\geq 0}\text{dim}(H_{k}(J^{a},J^{b}))t^{k}+(1+t)\mathcal{Q}_{t}$ for all $t\in\mathbb{R}$, where $\mathcal{Q}_{t}$ is a nonnegative polynomial in $\mathbb{N}_{0}[t]$.

###### Theorem 3.3.
The functional $I_{\alpha}$ has at least one nontrivial critical point when $0<\lambda\leq\lambda_{1}$, $\lambda_{1}$ being the first eigenvalue of $-\Delta_{p}$.

###### Proof.
We observe that, since $q>p$, $I_{\alpha}(tu)\rightarrow-\infty$ as $t\rightarrow\infty$ for any fixed $u$ with $u_{+}\not\equiv 0$. A key observation here is that there exist $R>0$ sufficiently small and $\beta>0$ such that $I_{\alpha}(u)\geq\beta>0$ whenever $\|u\|=R$.
We choose $\epsilon>0$ such that $c=\epsilon$ is a regular value of $I_{\alpha}$. Thus $I_{\alpha}^{\epsilon}$ is not path connected, since it has at least two path connected components, namely a neighbourhood of $0$ and a set $\{u:\|u\|\geq R_{0}\}$ for $R_{0}$ sufficiently large. From the theory of homology groups we get that $\text{dim}(H_{0}(I_{\alpha}^{\epsilon}))\geq 2$, 'dim' denoting the dimension of the homology group. From Definition 2.1, let us consider the following exact sequence:
$$...\rightarrow H_{1}(W_{0}^{1,p}(\Omega),I_{\alpha}^{\epsilon})\xrightarrow[]{\partial_{1}}H_{0}(I_{\alpha}^{\epsilon},\emptyset)\xrightarrow[]{i_{0}}H_{0}(W_{0}^{1,p}(\Omega),\emptyset)\rightarrow...$$
Obviously $\text{dim}(H_{0}(W_{0}^{1,p}(\Omega),\emptyset))=1$ and $\text{dim}(H_{0}(I_{\alpha}^{\epsilon}))\geq 2$. Due to the exactness of the sequence we conclude that $\text{dim}\,H_{1}(W_{0}^{1,p}(\Omega),I_{\alpha}^{\epsilon})\geq 1$. Thus, by Theorem 3.1, we have $K_{I_{\alpha},(-\infty,\epsilon]}\neq\emptyset$. Suppose that the only critical point of $I_{\alpha}$ is $u=0$, at which the energy of the functional $I_{\alpha}$ is also $0$. Then, from the discussion above and Remark 3.2-(4), the Morse relation yields the following identity over $\mathbb{R}$:
$$1=t+\mathcal{P}(t)+(1+t)\mathcal{Q}_{t},$$
$\mathcal{P}(t)$ being a power series in $t$ with nonnegative coefficients and $\mathcal{Q}_{t}\geq 0$. This is a contradiction. Thus there exists at least one $u\neq 0$ which is a critical point of $I_{\alpha}$ whenever $\lambda\leq\lambda_{1}$. ∎

###### Definition 3.4 (Krasnoselskii genus).
Let $V$ be a Banach space and $S\subset V$. A set $S$ is said to be symmetric if $u\in S$ implies $-u\in S$. Let $S$ be a closed, symmetric subset of $V$ such that $0\notin S$. We define the genus $\gamma(S)$ of $S$ to be the smallest integer $k$ such that there exists an odd continuous mapping from $S$ to $\mathbb{R}^{k}\setminus\{0\}$. We define $\gamma(S)=\infty$ if no such $k$ exists.

For closed and symmetric subsets $M$ of $W_{0}^{1,p}(\Omega)$ with Krasnoselskii genus $\gamma(M)\geq k$, define
$$\lambda_{k}=\inf_{M\in\mathfrak{F}_{k}}\sup_{u\in M}I_{\alpha}(u).$$
Here $\mathfrak{F}_{k}=\{M\subset W_{0}^{1,p}(\Omega),~\text{closed and symmetric}:\gamma(M)\geq k\}$. A natural question at this point is to ask whether the same conclusion can be drawn when $\lambda_{k}<\lambda\leq\lambda_{k+1}$. We will define $\lambda_{0}=0$. The next theorem answers this question.

###### Theorem 3.5.
The problem (1.1) has at least one nontrivial solution when $\lambda_{k}<\lambda\leq\lambda_{k+1}$, $\lambda_{k}$ being as defined above.

###### Proof.
We first show that $H_{k}(W_{0}^{1,p}(\Omega),I_{\alpha}^{-a})=0$ for all $k\geq 0$. Pick a $u\in\{v:\|v\|=1\}=\partial B^{\infty}$, where $B^{\infty}=\{v:\|v\|\leq 1\}$. Then
$$I_{\alpha}(tu)=\int_{\Omega}\frac{|\nabla(tu)|^{p}}{p}dx+\int_{\Omega}G\left(\frac{tu-1}{\alpha}\right)dx-\lambda\int_{\Omega}\frac{(tu-1)_{+}^{q}}{q}dx<-a<0$$
for all $t\geq t_{0}$. It can easily be seen that, for a fixed $u$, $I_{\alpha}(tu)>0$ for small $t>0$, while $I_{\alpha}(tu)<-a<0$ for all $t\geq t_{0}$. Thus, by the continuity of $I_{\alpha}$, there exists $t(u)>0$ such that $I_{\alpha}(t(u)u)=-a$. We can thus say that there exists a $C^{1}$-function $T:W_{0}^{1,p}(\Omega)\setminus\{0\}\rightarrow\mathbb{R}^{+}$ such that $I_{\alpha}(T(u)u)=-a$ for $\|u\|=1$. We now define a standard deformation retraction $\eta$ of $W_{0}^{1,p}(\Omega)\setminus B_{R^{\prime}}(0)$ onto $I_{\alpha}^{-a}$ as follows (refer to Definition 2.2).
$$\eta(s,u)=\begin{cases}(1-s)u+sT\left(\frac{u}{\|u\|}\right)\frac{u}{\|u\|},&\|u\|\geq R^{\prime},\ I_{\alpha}(u)\geq-a\\ u,&I_{\alpha}(u)\leq-a.\end{cases}$$
It is not difficult to see that $\eta$ is a $C^{1}$ function over $[0,1]\times W_{0}^{1,p}(\Omega)\setminus B_{R^{\prime}}(0)$. On using the map $\delta(s,u)=\frac{u}{\|u\|}$ for $u\in W_{0}^{1,p}(\Omega)\setminus B_{R^{\prime}}(0)$, we claim that $H_{k}(W_{0}^{1,p}(\Omega),W_{0}^{1,p}(\Omega)\setminus B_{R^{\prime}}(0))=H_{k}(B^{\infty},S^{\infty})$ for all $k\geq 0$. This is because $H_{k}(B^{\infty},S^{\infty})\cong H_{k}(\{*\},\{*\})$, and from an elementary computation of homology groups it is easy to see that $H_{k}(\{*\},\{*\})=\{0\}$ for each $k\geq 0$. A result in [10] says that
$$C_{m}(I_{\alpha},u)=\begin{cases}\mathbb{R},&\text{if}~m(u)=m\\ 0,&\text{otherwise.}\end{cases}$$
Therefore, from the Morse relation in Remark 3.2-(4) and the result above, we have
$$\sum_{u\in K_{I_{\alpha},[-a,\infty)}}\sum_{k\geq 0}\text{dim}(C_{k}(I_{\alpha},u))t^{k}=t^{m(u)}+\mathcal{P}(t)$$ (3.1)
where $m(u)$ is the Morse index of $u$ and $\mathcal{P}(t)$ contains the rest of the powers of $t$ corresponding to the other critical points, if any. The Morse index is finite for the following reason. From the argument which helped in establishing a maximum, say $u_{0}$, via the mountain pass geometry around $0$, we had to assume $\lambda<C^{-q}\frac{q}{p}\|u\|^{p-q}$. Owing to $u_{0}$ being a maximum, we have $I_{\alpha}^{\prime\prime}(u_{0})<0$, which necessarily requires $\lambda>C^{-q}\frac{p-1}{q-1}\|u\|^{p-q}$. Thus we have
$$C^{-q}\frac{p-1}{q-1}\|u\|^{p-q}<\lambda<C^{-q}\frac{q}{p}\|u\|^{p-q}.$$
This implies that $\lambda_{i}<\lambda<\lambda_{j}$ for some $i,j\in\mathbb{N}_{0}$. On further using the Morse relation we obtain
$$t^{m(u)}+\mathcal{P}(t)=(1+t)\mathcal{Q}_{t}.$$ (3.2)
This is because the $H_{k}$'s are all trivial groups. Hence $\mathcal{Q}_{t}$ contains either $t^{m(u)}$ or $t^{m(u)-1}$ or both. Thus there exists at least one nontrivial $u\in K_{I_{\alpha},[-a,\infty)}$ with $m(u)\leq n+1$. ∎

###### Remark 3.6.
If $0<\lambda\leq\lambda_{k+1}$, then there exist at least $k$ solutions to equation (1.1).

## 4 Existence of a solution to the main problem (1.1) and smoothness of the boundary $\partial\{u>1\}$

###### Lemma 4.1.
Let $\alpha_{j}\rightarrow 0$ ($\alpha_{j}>0$) as $j\rightarrow\infty$ and let $u_{j}$ be a critical point of $I_{\alpha_{j}}$. If $(u_{j})$ is bounded in $W_{0}^{1,p}(\Omega)\cap L^{\infty}(\Omega)$, then there exists a Lipschitz continuous function $u$ on $\bar{\Omega}$ such that $u\in W_{0}^{1,p}(\Omega)\cap C^{2}(\bar{\Omega}\setminus H(u))$, and a subsequence (still denoted by $(u_{j})$) such that
(i) $u_{j}\rightarrow u$ uniformly over $\bar{\Omega}$,
(ii) $u_{j}\rightarrow u$ locally in $C^{1}(\bar{\Omega}\setminus\{u=1\})$,
(iii) $u_{j}\rightarrow u$ strongly in $W_{0}^{1,p}(\Omega)$,
(iv) $I(u)\leq\liminf I_{\alpha_{j}}(u_{j})\leq\limsup I_{\alpha_{j}}(u_{j})\leq I(u)+|\{u=1\}|$; i.e., $u$ is a nontrivial function if $\liminf I_{\alpha_{j}}(u_{j})<0$ or $\limsup I_{\alpha_{j}}(u_{j})>0$.
Furthermore, $u$ satisfies $-\Delta_{p}u=\lambda\chi_{\{u>1\}}(x)(u-1)_{+}^{q-1}$ classically in $\Omega\setminus H(u)$, the free boundary condition is satisfied in the generalized sense, and $u$ vanishes continuously on $\partial\Omega$. If $u$ is nontrivial, then $u>0$ in $\Omega$, the set $\{u<1\}$ is connected, and the set $\{u>1\}$ is nonempty.
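To make the nontriviality claim in item (iv) explicit, here is a brief verification (added for the reader; it follows immediately from the definitions). Since $I(0)=0$ and $|\{x\in\Omega:0=1\}|=|\emptyset|=0$, the trivial limit $u\equiv 0$ would force
$$0=I(0)\leq\liminf_{j\rightarrow\infty}I_{\alpha_{j}}(u_{j})\leq\limsup_{j\rightarrow\infty}I_{\alpha_{j}}(u_{j})\leq I(0)+|\{0=1\}|=0,$$
which is impossible when $\liminf_{j}I_{\alpha_{j}}(u_{j})<0$ or $\limsup_{j}I_{\alpha_{j}}(u_{j})>0$; hence $u\not\equiv 0$ in either case.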
An important result that will be used to pass to the limit in the proof of Lemma 4.1 is the following, which is in line with the theorem due to Caffarelli et al. in [6, Theorem 5.1].

###### Lemma 4.2.
Let $u$ be a Lipschitz continuous function on the unit ball $B_{1}(0)\subset\mathbb{R}^{N}$ satisfying the distributional inequalities
$$\pm\Delta_{p}u\leq A\left(\frac{1}{\alpha}\chi_{\{|u-1|<\alpha\}}(x)+1\right)$$
for constants $A>0$ and $0<\alpha\leq 1$. Then there exists a constant $C>0$ depending on $N$, $A$ and $\int_{B_{1}(0)}|u|^{p}dx$, but not on $\alpha$, such that
$$\underset{x\in B_{\frac{1}{2}}(0)}{\text{esssup}}\{|\nabla u(x)|\}\leq C.$$

###### Proof.
Since $u$ is Lipschitz continuous on the unit ball $B_{1}(0)\subset\mathbb{R}^{N}$, $u$ is also bounded in the unit ball, say by a constant $M_{0}$; moreover, $u$ is differentiable a.e. in $B_{1}(0)$. We will prove the result stated in the lemma for $u_{+}$, as the proof for $u_{-}$ will follow suit. Denote $v(x)=\frac{15}{\alpha}u_{-}(\alpha x/15)$ and $v_{1}=v+\underset{B_{1/4}}{\max}\{v^{-}\}.$ Therefore, $0\leq v_{1}\leq M_{1}$. Let us choose a test function $\eta\in C_{0}^{\infty}(B_{3/4})$ such that $0\leq\eta\leq 1$ in $B_{3/4}$ and $\eta=1$ in $B_{1/2}$. Thus
$$\begin{split}\int_{\Omega}\eta^{p}|\nabla v_{1}|^{p}dx=&-\int_{\Omega}\left(pv_{1}\eta^{p-1}|\nabla v_{1}|^{p-2}(\nabla v_{1}\cdot\nabla\eta)+\eta^{p}v_{1}\Delta_{p}v_{1}\right)dx\\ \leq&\frac{1}{p}\int_{\Omega}\eta^{p}|\nabla v_{1}|^{p}dx+p\int_{\Omega}v_{1}^{p}|\nabla\eta|^{p}dx+AM_{1}\int_{\Omega}\eta^{p}\left(\frac{1}{\alpha}\chi_{\{|u-1|<\alpha\}}(x)+1\right)dx\\ \leq&\frac{1}{p}\int_{\Omega}\eta^{p}|\nabla v_{1}|^{p}dx+pM_{1}^{p}\int_{\Omega}|\nabla\eta|^{p}dx+AM_{1}\int_{\Omega}\eta^{p}\left(\frac{1}{\alpha}\chi_{\{|u-1|<\alpha\}}(x)+1\right)dx.\end{split}$$ (4.1)
It is now established that
$$\frac{p-1}{p}\int_{B_{1/2}}|\nabla v_{1}|^{p}dx\leq M_{2}.$$ (4.2)
Moreover, $u$ being Lipschitz continuous, the gradient $\nabla u$ is bounded a.e. in $B_{1}(0)$ and hence in $B_{1/2}(0)$. Thus $\underset{B_{1/2}(0)}{\text{esssup}}\{|\nabla u|\}\leq C$ for some $C>0$. ∎

###### Proof of Lemma 4.1.
Let $0<\alpha_{j}<1$. Consider the sequence of problems $(P_{j})$:
$$\begin{split}-\Delta_{p}u_{j}&=-\frac{1}{\alpha_{j}}g\left(\frac{(u_{j}-1)_{+}}{\alpha_{j}}\right)+\lambda(u_{j}-1)_{+}^{q-1}\ \text{in}\ \Omega\\ u_{j}&>0\ \text{in}\ \Omega\\ u_{j}&=0\ \text{on}\ \partial\Omega.\end{split}$$ (4.3)
The nature of the problem being a sublinear one allows us to conclude, by an iterative technique, that the sequence $(u_{j})$ is bounded in $L^{\infty}(\Omega)$. Therefore, there exists $C_{0}$ such that $0\leq(u_{j}-1)_{+}^{q-1}\leq C_{0}$. Let $\varphi_{0}$ be a solution of
$$\begin{split}-\Delta_{p}\varphi_{0}&=\lambda C_{0}\ \text{in}\ \Omega\\ \varphi_{0}&=0\ \text{on}\ \partial\Omega.\end{split}$$ (4.4)
Now since $g\geq 0$, we have that $-\Delta_{p}u_{j}\leq\lambda C_{0}=-\Delta_{p}\varphi_{0}$ in $\Omega$. Therefore, by the maximum principle,
$$0\leq u_{j}(x)\leq\varphi_{0}(x)\quad\forall x\in\Omega.$$ (4.5)
Since $\{u_{j}\geq 1\}\subset\{\varphi_{0}\geq 1\}$, $\varphi_{0}$ gives a uniform lower bound, say $d_{0}$, on the distance from the set $\{u_{j}\geq 1\}$ to $\partial\Omega$. Thus $(u_{j})$ is bounded with respect to the $C^{2,\gamma}$ norm, for some $\gamma\in(0,1)$, near $\partial\Omega$.
Therefore, it has a convergent subsequence in the $C^{2}$ norm in a $\frac{d_{0}}{2}$-neighbourhood of the boundary $\partial\Omega$. Obviously $0\leq g\leq 2\chi_{(-1,1)}$, and hence
$$\begin{split}\pm\Delta_{p}u_{j}&=\pm\frac{1}{\alpha_{j}}g\left(\frac{(u_{j}-1)_{+}}{\alpha_{j}}\right)\mp\lambda(u_{j}-1)_{+}^{q-1}\\ &\leq\frac{2}{\alpha_{j}}\chi_{\{|u_{j}-1|<\alpha_{j}\}}(x)+\lambda C_{0}.\end{split}$$ (4.6)
Since $(u_{j})$ is bounded in $L^{p}(\Omega)$, it follows by Lemma 4.2 that there exists $A>0$ such that
$$\underset{x\in B_{\frac{r}{2}}(x_{0})}{\text{esssup}}\{|\nabla u_{j}(x)|\}\leq\frac{A}{r}$$ (4.7)
for a suitable $r>0$ such that $B_{r}(x_{0})\subset\Omega$. However, since $(u_{j})$ is a sequence of Lipschitz continuous functions that are also $C^{1}$, we have
$$\underset{x\in B_{\frac{r}{2}}(x_{0})}{\sup}\{|\nabla u_{j}(x)|\}\leq\frac{A}{r}.$$ (4.8)
Thus $(u_{j})$ is uniformly Lipschitz continuous on the compact subsets of $\Omega$ whose distance from the boundary $\partial\Omega$ is at least $\frac{d_{0}}{2}$. Thus, by the Arzelà-Ascoli theorem applied to $(u_{j})$, we have a subsequence, still denoted the same, which converges uniformly to a Lipschitz continuous function $u$ in $\Omega$ with zero boundary values, and with strong convergence in $C^{2}$ on a $\frac{d_{0}}{2}$-neighbourhood of $\partial\Omega$. By the Eberlein-Šmulian theorem we conclude that $u_{j}\rightharpoonup u$ in $W_{0}^{1,p}(\Omega)$. We now prove that $u$ satisfies
$$-\Delta_{p}u=\lambda\chi_{\{u>1\}}(x)(u-1)_{+}^{q-1}$$ (4.9)
in the set $\{u\neq 1\}$. Let $\varphi\in C_{0}^{\infty}(\{u>1\})$, so that $u\geq 1+2\delta$ on the support of $\varphi$ for some $\delta>0$. Using the uniform convergence of $u_{j}$ to $u$ on $\Omega$, we have $|u_{j}-u|<\delta$ for all sufficiently large $j$ with $\alpha_{j}<\delta$; so $u_{j}\geq 1+\alpha_{j}$ on the support of $\varphi$. Testing (4.3) with $\varphi$ (the $g$-term vanishes there, since $g(t)=0$ for $t\geq 1$) yields
$$\int_{\Omega}|\nabla u_{j}|^{p-2}\nabla u_{j}\cdot\nabla\varphi\,dx=\lambda\int_{\Omega}(u_{j}-1)_{+}^{q-1}\varphi\,dx.$$ (4.10)
On passing to the limit $j\rightarrow\infty$ in (4.10), we get
$$\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla\varphi\,dx=\lambda\int_{\Omega}(u-1)_{+}^{q-1}\varphi\,dx.$$ (4.11)
To arrive at (4.11) we have used the weak convergence of $u_{j}$ to $u$ in $W_{0}^{1,p}(\Omega)$ and the uniform convergence of the same in $\Omega$. Hence $u$ is a weak solution of $-\Delta_{p}u=\lambda(u-1)_{+}^{q-1}$ in $\{u>1\}$. Since $u$ is a Lipschitz continuous function, by Schauder estimates we conclude that it is also a classical solution of $-\Delta_{p}u=\lambda(u-1)_{+}^{q-1}$ in $\{u>1\}$. Similarly, on choosing $\varphi\in C_{0}^{\infty}(\{u<1\})$, one can find a $\delta>0$ such that $u\leq 1-2\delta$ on the support of $\varphi$; therefore $u_{j}<1-\delta$ there for sufficiently large $j$. Let us now analyze the nature of $u$ in the set $\{u\leq 1\}^{\circ}$. On testing (4.3) with any nonnegative test function, passing to the limit $j\rightarrow\infty$, and using the facts that $g\geq 0$ and $G\leq 1$, we can show that $u$ satisfies
$$-\Delta_{p}u\leq\lambda(u-1)_{+}^{q-1}\ \text{in}\ \Omega$$ (4.12)
in the distributional sense. Furthermore, $\mu=\Delta_{p}u$ is a positive Radon measure supported on $\Omega\cap\partial\{u<1\}$ (refer to Lemma 4.3 in the Appendix).
From (4.12), the positivity of the Radon measure $\mu$, and Section 9.4 of Gilbarg-Trudinger [8], we conclude that $u\in W_{\text{loc}}^{2,r}(\{u\leq 1\}^{\circ})$ for every $1<r<\infty$. Thus $\mu$ is supported on $\Omega\cap\partial\{u<1\}\cap\partial\{u>1\}$ and $u$ satisfies $-\Delta_{p}u=0$ in the set $\{u\leq 1\}^{\circ}$.

In order to prove (ii), we will show that $u_{j}\rightarrow u$ locally in $C^{1}(\Omega\setminus\{u=1\})$. Note that we have already proved that $u_{j}\rightarrow u$ in the $C^{2}$ norm in a neighbourhood of $\partial\Omega$. Suppose $M\subset\subset\{u>1\}$. On this set we have $u\geq 1+2\delta$ for some $\delta>0$. Thus, for sufficiently large $j$ with $\alpha_{j}<\delta$, we have $|u_{j}-u|<\delta$ in $\Omega$ and hence $u_{j}\geq 1+\alpha_{j}$ in $M$. From (4.3) we have
$$-\Delta_{p}u_{j}=\lambda(u_{j}-1)_{+}^{q-1}\ \text{in}\ M.$$
Clearly, $(u_{j}-1)_{+}^{q-1}\rightarrow(u-1)_{+}^{q-1}$ in $L^{r}(\Omega)$ for every $1<r<\infty$, and $u_{j}\rightarrow u$ uniformly in $\Omega$. This analysis says something stronger: since $-\Delta_{p}u_{j}=\lambda(u_{j}-1)_{+}^{q-1}$ in $M$, we have that $u_{j}\rightarrow u$ in $W^{2,r}(M)$, and by the embedding $W^{2,r}(M)\hookrightarrow C^{1}(M)$ for $r>N$, we have $u_{j}\rightarrow u$ in $C^{1}(M)$. This shows that $u_{j}\rightarrow u$ locally in $C^{1}(\{u>1\})$. Working along similar lines we can also show that $u_{j}\rightarrow u$ locally in $C^{1}(\{u<1\})$.

We will now prove (iii). Since $u_{j}\rightharpoonup u$ in $W_{0}^{1,p}(\Omega)$, by the weak lower semicontinuity of the norm $\|\cdot\|$ we have $\|u\|\leq\liminf\|u_{j}\|.$ It is thus sufficient to prove that $\limsup\|u_{j}\|\leq\|u\|$. To achieve this, we multiply (4.3) by $(u_{j}-1)$ and then integrate by parts. We also use the fact that $tg\left(\frac{t}{\alpha_{j}}\right)\geq 0$ for any $t\in\mathbb{R}$. This gives
$$\begin{split}\int_{\Omega}|\nabla u_{j}|^{p}dx&\leq\lambda\int_{\Omega}(u_{j}-1)_{+}^{q}dx-\int_{\partial\Omega}|\nabla u_{j}|^{p-2}\frac{\partial u_{j}}{\partial\hat{n}}dS\\ &\rightarrow\lambda\int_{\Omega}(u-1)_{+}^{q}dx-\int_{\partial\Omega}|\nabla u|^{p-2}\frac{\partial u}{\partial\hat{n}}dS\end{split}$$ (4.13)
as $j\rightarrow\infty$. Here $\hat{n}$ is the outward drawn normal to $\partial\Omega$.

It remains to verify the free boundary condition. We choose $\vec{\varphi}\in C_{0}^{1}(\Omega,\mathbb{R}^{N})$ such that $u\neq 1$ a.e. on the support of $\vec{\varphi}$. Multiplying the weak formulation of (4.3) by $\nabla u_{j}\cdot\vec{\varphi}$ and integrating over the set $\{1-\epsilon^{-}<u_{j}<1+\epsilon^{+}\}$ gives
$$\int_{\{1-\epsilon^{-}<u_{j}<1+\epsilon^{+}\}}\left[-\Delta_{p}u_{j}+\frac{1}{\alpha_{j}}g\left(\frac{u_{j}-1}{\alpha_{j}}\right)\right]\nabla u_{j}\cdot\vec{\varphi}\,dx=\lambda\int_{\{1-\epsilon^{-}<u_{j}<1+\epsilon^{+}\}}(u_{j}-1)_{+}^{q-1}\nabla u_{j}\cdot\vec{\varphi}\,dx.$$ (4.14)
The term on the left hand side of (4.14) can be expressed as follows.
$$\nabla\cdot\left(\frac{1}{p}|\nabla u_{j}|^{p}\vec{\varphi}-(\nabla u_{j}\cdot\vec{\varphi})|\nabla u_{j}|^{p-2}\nabla u_{j}\right)+(\nabla\vec{\varphi}\cdot\nabla u_{j})\cdot\nabla u_{j}|\nabla u_{j}|^{p-2}-\frac{1}{p}|\nabla u_{j}|^{p}\nabla\cdot\vec{\varphi}+\nabla G\left(\frac{u_{j}-1}{\alpha_{j}}\right)\cdot\vec{\varphi}.$$ (4.15)
Using (4.15) and integrating by parts, we obtain
$$\begin{split}\int_{\{u_{j}=1+\epsilon^{+}\}\cup\{u_{j}=1-\epsilon^{-}\}}\left[\frac{1}{p}|\nabla u_{j}|^{p}\vec{\varphi}-(\nabla u_{j}\cdot\vec{\varphi})|\nabla u_{j}|^{p-2}\nabla u_{j}+G\left(\frac{u_{j}-1}{\alpha_{j}}\right)\vec{\varphi}\right]\cdot\hat{n}\,dS\\ =\int_{\{1-\epsilon^{-}<u_{j}<1+\epsilon^{+}\}}\left(\frac{1}{p}|\nabla u_{j}|^{p}\nabla\cdot\vec{\varphi}-(\nabla\vec{\varphi}\cdot\nabla u_{j})|\nabla u_{j}|^{p-2}\nabla u_{j}\right)dx\\ +\int_{\{1-\epsilon^{-}<u_{j}<1+\epsilon^{+}\}}\left[G\left(\frac{u_{j}-1}{\alpha_{j}}\right)\nabla\cdot\vec{\varphi}+\lambda(u_{j}-1)_{+}^{q-1}(\nabla u_{j}\cdot\vec{\varphi})\right]dx.\end{split}$$ (4.16)
As $j\rightarrow\infty$, the integral on the left of equation (4.16) converges to
$$\int_{\{u=1+\epsilon^{+}\}\cup\{u=1-\epsilon^{-}\}}\left(\frac{1}{p}|\nabla u|^{p}\vec{\varphi}-(\nabla u\cdot\vec{\varphi})|\nabla u|^{p-2}\nabla u\right)\cdot\hat{n}\,dS+\int_{\{u=1+\epsilon^{+}\}}\vec{\varphi}\cdot\hat{n}\,dS$$ (4.17)
$$=\int_{\{u=1+\epsilon^{+}\}}\left[1-\left(\frac{p-1}{p}\right)|\nabla u|^{p}\right]\vec{\varphi}\cdot\hat{n}\,dS-\int_{\{u=1-\epsilon^{-}\}}\left(\frac{p-1}{p}\right)|\nabla u|^{p}\vec{\varphi}\cdot\hat{n}\,dS.$$ (4.18)
Thus equation (4.17), under the limit $\epsilon^{\pm}\rightarrow 0$, becomes
$$0=\underset{\epsilon^{+}\rightarrow 0}{\lim}\int_{\{u=1+\epsilon^{+}\}}\left[\left(\frac{p}{p-1}\right)-|\nabla u|^{p}\right]\vec{\varphi}\cdot\hat{n}\,dS-\underset{\epsilon^{-}\rightarrow 0}{\lim}\int_{\{u=1-\epsilon^{-}\}}|\nabla u|^{p}\vec{\varphi}\cdot\hat{n}\,dS.$$ (4.19)
This is because $\hat{n}=\pm\frac{\nabla u}{|\nabla u|}$ on the sets $\{u=1+\epsilon^{+}\}$ and $\{u=1-\epsilon^{-}\}$ respectively. This proves that $u$ satisfies the free boundary condition. The solution cannot be trivial, as it satisfies the free boundary condition. Thus a solution to (1.1) exists that obeys the free boundary condition besides the Dirichlet boundary condition. ∎

## Appendix

###### Lemma 4.3.
$u$ is in $W_{\text{loc}}^{1,p}(\Omega)$ and the Radon measure $\mu=\Delta_{p}u$ is nonnegative and supported on $\Omega\cap\partial\{u<1\}$.

###### Proof.
We follow the proof due to Alt-Caffarelli [1]. Choose $\delta>0$ and the test function $\varphi^{p}\min\{u-1+\delta,0\}$, supported in $\{u<1-\delta\}$, where $\varphi\in C_{0}^{\infty}(\Omega)$.
Therefore,
$$\begin{split}0&=\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla(\varphi^{p}\min\{u-1+\delta,0\})dx\\ &=\int_{\Omega\cap\{u<1-\delta\}}|\nabla u|^{p-2}\nabla u\cdot\nabla(\varphi^{p}\min\{u-1+\delta,0\})dx\\ &=\int_{\Omega\cap\{u<1-\delta\}}|\nabla u|^{p}\varphi^{p}dx+p\int_{\Omega\cap\{u<1-\delta\}}\varphi^{p-1}(u-1+\delta)|\nabla u|^{p-2}\nabla u\cdot\nabla\varphi\,dx,\end{split}$$ (4.20)
and so by a Caccioppoli-type estimate we have
$$\begin{split}\int_{\Omega\cap\{u<1-\delta\}}|\nabla u|^{p}\varphi^{p}dx&=-p\int_{\Omega\cap\{u<1-\delta\}}\varphi^{p-1}(u-1+\delta)|\nabla u|^{p-2}\nabla u\cdot\nabla\varphi\,dx\\ &\leq c\int_{\Omega}|u|^{p}|\nabla\varphi|^{p}dx.\end{split}$$ (4.21)
Since $\int_{\Omega}|u|^{p}dx<\infty$, on passing to the limit $\delta\rightarrow 0$ we conclude that $u\in W_{\text{loc}}^{1,p}(\Omega)$. Furthermore, for a nonnegative $\zeta\in C_{0}^{\infty}(\Omega)$ we have
$$\begin{split}-\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla\zeta\,dx=&\left(\int_{\Omega\cap\{0<u<1-2\delta\}}+\int_{\Omega\cap\{1-2\delta<u<1-\delta\}}+\int_{\Omega\cap\{1-\delta<u<1\}}+\int_{\Omega\cap\{u>1\}}\right)\\ &\left[|\nabla u|^{p-2}\nabla u\cdot\nabla\left(\zeta\max\left\{\min\left\{2-\frac{1-u}{\delta},1\right\},0\right\}\right)\right]dx\\ \geq&\int_{\Omega\cap\{1-2\delta<u<1-\delta\}}\left[|\nabla u|^{p-2}\nabla u\cdot\left(2-\frac{1-u}{\delta}\right)\nabla\zeta+\frac{\zeta}{\delta}|\nabla u|^{p}\right]dx\geq 0.\end{split}$$ (4.22)
On passing to the limit $\delta\rightarrow 0$ we obtain $\Delta_{p}(u-1)_{-}\geq 0$ in the distributional sense, and hence there exists a Radon measure $\mu$ such that $\mu=\Delta_{p}(u-1)_{-}\geq 0$. ∎

## Acknowledgement

The author thanks the community working on free boundary value problems for giving a new lease of life to the study of elliptic PDEs, and the CSIR, India (25(0292)/18/EMR-II) for financial support.

## References

* [1] Alt, H.W., Caffarelli, L.A., Existence and regularity for a minimum problem with free boundary, J. Reine Angew. Math., 325, 105-144, 1981.
* [2] Batchelor, G.K., On steady state laminar flow with closed streamlines at large Reynolds number, J. Fluid Mech., 1, 177-190, 1956.
* [3] Batchelor, G.K., A proposal concerning laminar wakes behind bluff bodies at large Reynolds number, J. Fluid Mech., 1, 388-398, 1956.
* [4] Caflisch, R.E., Mathematical analysis of vortex dynamics, in: Mathematical Aspects of Vortex Dynamics (Leesburg, VA, 1988), pp. 1-24, SIAM, Philadelphia, PA, 1989.
* [5] Caffarelli, L.A., Peral, I., On $W^{1,p}$ estimates for elliptic equations in divergence form, Comm. Pure Appl. Math., 51, 1-21, 1998.
* [6] Caffarelli, L.A., Jerison, D., Kenig, C.E., Some new monotonicity theorems with applications to free boundary problems, Ann. Math. (2), 155(2), 369-404, 2002.
* [7] Elcrat, A.R., Miller, K.G., Variational formulas on Lipschitz domains, Trans. Am. Math. Soc., 347(7), 2669-2678, 1995.
* [8] Gilbarg, D., Trudinger, N.S., Elliptic Partial Differential Equations of Second Order, Springer-Verlag, Berlin Heidelberg, 2001.
* [9] Jerison, D., Perera, K., A multiplicity result for the Prandtl-Batchelor free boundary problem (Preprint arXiv:2003.05921).
* [10] Perera, K., Agarwal, R.P., O'Regan, D., Morse Theoretic Aspects of $p$-Laplacian Type Operators, Mathematical Surveys and Monographs, 161, Amer. Math. Soc., 2010.
* [11] Kesavan, S., Topics in Functional Analysis and Applications, New Age International (P) Ltd., 2003.
* [12] Papageorgiou, N.S., Rădulescu, V.D., Repovš, D.D., Nonlinear Analysis - Theory and Methods, Springer, 2019.
* [13] Perera, K., On a class of elliptic free boundary problems with multiple solutions, Nonlinear Differ. Equ. Appl., 28, Art. 36, 2021.
# Optimizing Streaming Parallelism on Heterogeneous Many-Core Architectures: A Machine Learning Based Approach Peng Zhang, Jianbin Fang, Canqun Yang, Chun Huang, Tao Tang, Zheng Wang Peng Zhang, Jianbin Fang, Canqun Yang, Chun Huang, and Tao Tang are with National University of Defense Technology, China. E-mail: {zhangpeng13a, j.fang, canqun<EMAIL_ADDRESS>Zheng Wang is with University of Leeds, United Kingdom. E-mail<EMAIL_ADDRESS>###### Abstract As many-core accelerators keep integrating more processing units, it becomes increasingly more difficult for a parallel application to make effective use of all available resources. An effective way of improving hardware utilization is to exploit spatial and temporal sharing of the heterogeneous processing units by multiplexing computation and communication tasks – a strategy known as heterogeneous streaming. Achieving effective heterogeneous streaming requires carefully partitioning hardware among tasks, and matching the granularity of task parallelism to the resource partition. However, finding the right resource partitioning and task granularity is extremely challenging, because there is a large number of possible solutions and the optimal solution varies across programs and datasets. This article presents an automatic approach to quickly derive a good solution for hardware resource partition and task granularity for task-based parallel applications on heterogeneous many-core architectures. Our approach employs a performance model to estimate the resulting performance of the target application under a given resource partition and task granularity configuration. The model is used as a utility to quickly search for a good configuration at runtime. Instead of hand-crafting an analytical model that requires expert insights into low-level hardware details, we employ machine learning techniques to automatically learn it. We achieve this by first learning a predictive model offline using training programs. The learnt model can then be used to predict the performance of any unseen program at runtime. We apply our approach to 39 representative parallel applications and evaluate it on two representative heterogeneous many-core platforms: a CPU-XeonPhi platform and a CPU-GPU platform. Compared to the single-stream version, our approach achieves, on average, a 1.6x and 1.1x speedup on the XeonPhi and the GPU platform, respectively. These results translate to over 93% of the performance delivered by a theoretically perfect predictor. ###### Index Terms: Heterogeneous computing; Parallelism; Performance Tuning; Machine learning ## 1 Introduction Heterogeneous many-cores, as represented by GPGPUs and Intel's XeonPhi, are widely used for accelerating parallel applications [1, 2, 3]. As users demand higher performance, many-core accelerators have become more powerful by providing more and more processing units. While the abundant computing resources offer the potential for higher performance, it becomes harder for a parallel application to utilize all the available computing resources [4, 5]. As a result, many parallel applications fail to fully unlock the performance potential of a many-core accelerator. One way of improving heterogeneous many-core utilization is to exploit spatial and temporal sharing of processing resources. This strategy is also known as heterogeneous streaming [6]. The idea is to exploit the computation and communication independence of task parallelism to improve hardware utilization.
It works by partitioning the processor cores to allow independent communication and computation tasks (i.e. streams) to run concurrently on different hardware resources, which effectively overlaps the concurrent kernel execution with data movements. Representative heterogeneous streaming implementations include CUDA Streams [7], OpenCL Command Queues [8], and the Intel heterogeneous streams library (hStreams) [9, 6]. These implementations allow a parallel program to spawn more than one stream (or pipeline) so that the data movement stage of one pipeline overlaps the kernel execution stage of another. Prior work on heterogeneous streaming mainly targets GPUs [10, 11, 12]. Compared to GPU implementations, OS-enabled coprocessors, like the Intel XeonPhi, provide some unique features that are currently unavailable on the GPU. For example, besides specifying the number of streams, developers can explicitly map streams to different groups of cores on XeonPhi to control the number of cores of each hardware partition. This parameter is not exposed to programmers on GPUs, making previous work on GPU-based parallel streaming optimization unable to fully exploit XeonPhi-like many-core accelerators (see also Section 6.3). On the other hand, ample evidence shows that choosing the right stream configuration, i.e., the number of processor core partitions and the number of concurrent tasks of a multi-stream application, has a significant impact on the application's performance on many-core architectures [13, 14, 15]. However, attempting to find the optimal values through exhaustive profiling would be ineffective, because the range of possible values for the two parameters is huge. What we need is a technique that automatically determines the optimal stream configuration for any streamed application in a fast manner. This article presents a novel approach to determine the right number of processor core partitions and tasks for heterogeneous streams, targeting heterogeneous many-core architectures. Our key insight is to use a performance model to quickly search for the optimal stream configuration. The performance model estimates the resulting performance of the target streamed application when it runs under a given stream configuration. If the prediction can be performed with low overhead, we can then quickly explore a large configuration space. Instead of hand-crafting a performance model that requires human modification whenever the architecture evolves (i.e., when the number and types of cores change), we employ machine learning techniques to automatically construct a predictive model. Our predictor is first trained _off-line_. Then, using code and dynamic runtime features of the program, the model predicts performance for a _new_, _unseen_ program under a given stream configuration. Our prior work [16] develops a machine learning based classifier to predict the optimal stream configuration. However, this approach can only choose from a limited set of configurations seen during the training phase. Unlike a classification-based approach, the approach presented in this article allows us to explore a larger number of stream configurations (including those that are not seen during the training phase) with negligible runtime overhead. This advantage significantly improves the generalization ability of the proposed approach (Section 3). Due to the newness of the heterogeneous streaming execution model, there are very few multi-stream benchmarks available.
To evaluate our approach on a wide range of applications, we have developed a compiler-based tool to automatically translate standard OpenMP benchmarks into their streamed variants for the backends of XeonPhi and GPU architectures (Section 4). With the help of this code generator, we can apply our approach to 39 parallel benchmarks. We argue that this tool can help generate more streamed code and thus is an added value to the community. We evaluate our approach on two representative heterogeneous many-core platforms: a 57-core Intel XeonPhi platform and an NVIDIA 1080Ti GPU platform. We achieve, on average, a 1.6x and 1.1x speedup over the single-stream execution on the XeonPhi and the GPU platforms, respectively. This translates to over 93% of the best available performance.

The core contribution of this paper is a novel machine-learning-guided approach for automatically determining the optimal stream configuration on heterogeneous many-cores. We show that our approach delivers good performance across benchmarks and heterogeneous many-core platforms. While we do not seek to advance the machine learning algorithm itself, our work shows how machine learning can be used to address the challenging problem of tuning fine-grained streaming parallelism on heterogeneous many-core architectures. In this work, we demonstrate the usefulness of our approach on XeonPhi and an NVIDIA GPU, but our approach is equally applicable to other heterogeneous platforms like AMD GPUs.

## 2 Background and Overview

In this section, we first give a brief introduction of heterogeneous streaming; we then define the scope of this work, before motivating the need for our scheme and providing an overview of our approach.

### 2.1 Heterogeneous Streaming

The idea of heterogeneous streaming is to exploit spatial and temporal sharing of computing resources to improve hardware utilization and application performance.

Spatial Sharing. Modern many-core accelerators offer a large number of processing units. Since many applications cannot fully utilize all the cores at a time, we can partition the computing units into multiple groups to concurrently execute multiple tasks. In this way, the computing resource is spatially shared across concurrently-running application tasks. The key to spatial sharing is to determine the right number of partitions, because over-provisioning of processing units would waste computing resources while under-provisioning would slow down performance.

Temporal Sharing. Code written for heterogeneous computing devices typically consists of several stages, such as host-device communication and computation. Using temporal sharing, one can overlap some of these stages, e.g., the host-device communication and kernel execution, to exploit pipeline parallelism and improve performance.

```c
 1  //setting the partition-size and task granularity
 2  hStreams_app_init(partition_size, streams_p_part);
 3
 4  //stream queue id
 5  stream_id = 0;
 6  for(...){
 7    //enqueue host-device transfer to current stream
 8    hStreams_app_xfer_memory(,,, stream_id, HSTR_SRC_TO_SINK,...);
 9    ...
10    //enqueue computation to the current stream
11    hStreams_EnqueueCompute(stream_id, "kernel1", ...);
12    ...
13    //move to the next stream
14    stream_id = (stream_id + 1) % MAX_STR;
15  }
16  //transfer data back to host
17  hStreams_app_xfer_memory(,,, HSTR_SINK_TO_SRC,...);
```

Figure 1: Heterogeneous streaming using hStreams as an example.
(a) binomial (b) prefixsum

Figure 2: Heatmaps show the resultant speedup (over single-stream) of binomial and prefixsum under different stream configurations. The #partitions and #tasks have a significant impact on the resultant performance, and the sweet spots are sparse and vary across programs.

### 2.2 Problem Scope

Our work aims to improve the performance of a data parallel application by exploiting spatial and temporal sharing of heterogeneous streams. We do so by determining at runtime how many partitions should be used to group the cores (_#partitions_) and how many data parallel tasks (_#tasks_) should be used to run the application. Our current implementation is applicable to XeonPhi and GPUs by using different runtime back-ends (hStreams for XeonPhi, and CUDA or OpenCL for GPUs).

Code Example. Figure 1 gives a simplified code example written with Intel's hStreams APIs that can run on the XeonPhi many-core. At line 2 we initialize the stream execution by setting the number of partitions and tasks/streams per partition. This initialization process essentially creates multiple processor domains and determines how many logical streams can run on a partition. In the _for_ loop (lines 7-14) we enqueue the communication and computation tasks to a number of streams identified by the stream_id variable. In this way, communication and computation of different streams can be overlapped during execution (temporal sharing); and streams on different processor domains (or partitions) can run concurrently (spatial sharing). Our predictive model determines the #partitions and the #tasks before invoking the hStreams initialization routine, hStreams_app_init().

Figure 3: Color table showing the speedups of best-performing configurations across inputs for dct. Each cell shows the performance for one of the 16 best-performing configurations, $Cn$, on a given input, $Dn$. The best configuration varies across inputs and a good configuration on one input can give poor performance on another dataset.

### 2.3 Motivating Examples

Consider Figure 2, which shows the resultant performance improvement given by multi-stream parallelism over the single-stream version of the code for two applications on a 57-core Intel XeonPhi system. We use two streamed programs from prior work [13]: binomial computes the price evolution over a given period and prefixsum calculates the prefix sum for a sequence of numbers. This example shows that not all multi-stream configurations give improved performance. As can be seen from the diagrams, the search space of multi-stream configurations is huge, but good configurations are sparse. The performance varies significantly over stream configurations (#partitions, #tasks). The optimal #tasks for binomial ranges from 1 to 30, and the best #partitions is between 1 and 40. In contrast to binomial, prefixsum benefits from fine-grained parallelism when using a larger #tasks (220 to 224) and #partitions (60 to 80). However, the stream configurations that are effective for prefixsum give no speedup over the single-stream version for binomial. Now consider Figure 3, which shows the speedups of dct under 16 multi-stream configurations over the single-stream version, where each configuration is found to give the best performance for one of the 16 inputs.
In the color table, each cell shows the performance of a stream configuration ($C1,...,C16$) on a specific input dataset ($D1,...,D16$), and the values along the diagonal line represent the best-available performance (found through profiling) for an input. As can be seen from the figure, the best stream configuration can vary across inputs for the same benchmark. For example, while $C4$ gives a speedup of 1.33x over the baseline for dataset $D4$, it delivers a poor performance for dataset $D14$ by doubling the execution time over the single-stream version. This diagram also suggests that no single configuration can give improved performance for all inputs.

Lesson Learned. These two examples show that choosing the stream configuration has a great impact on performance, and the best configuration must be determined on a per-program and per-dataset basis. Attempting to find the optimal configuration by means of an exhaustive search would be ineffective, and the overhead involved would be far bigger than the potential benefits. Online search algorithms can speed up the search process, but their overhead can still outweigh the benefit. For example, when applying simulated annealing to binomial, the best-found configuration only reaches 84% of the best-available performance after 310,728 iterations (in Section 6.1, we show that our approach achieves 93% of the best-available performance for binomial on XeonPhi). Classical hand-written heuristics are not ideal either, as they are not only complex to develop, but are likely to fail due to the variety of programs and the ever-changing hardware architecture. An alternative approach, and the one we chose, is to use machine learning to automatically construct a performance model to estimate the benefit of any candidate configuration, providing minimal runtime overhead for searching for a good configuration, and having little development cost when targeting new architectures.

### 2.4 Overview of Our Approach

Our library-based approach, depicted in Figure 4, is completely automated. To determine the best streaming configuration, our approach follows a number of steps described as follows. We use a set of information, or _features_, to capture the characteristics of the program. We develop an LLVM [17] compiler pass to extract static code features at compile time, and a low-overhead profiling pass to collect runtime information at execution time (i.e., during the first few loop iterations). Because profiling also contributes to the final program output, no computation cycle is wasted. At runtime, we search for a good configuration by using an offline-trained performance model to estimate the resulting performance of all candidate configurations. The performance model takes in the feature values and a given configuration of resource partition and task granularity, and estimates the potential speedup for the given configuration over the single-stream version. The overhead of runtime feature collection and search is a few milliseconds, which is included in all our experimental results. Since our training process can be performed automatically, we can easily retarget our performance model for different architectures.

Figure 4: Our machine learning based performance model (trained _offline_) predicts the speedup based on the extracted feature values of the code and a given stream configuration.
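As a concrete illustration of this ranking step, the minimal Python sketch below queries an offline-trained scikit-learn regressor to score every candidate configuration and picks the top one. This is our own simplified illustration rather than the released implementation; model, program_features and candidate_configs are hypothetical names.

```python
import numpy as np

def choose_config(model, program_features, candidate_configs):
    # Build one model input per candidate: the program feature vector
    # concatenated with the configuration, e.g. (#partitions, #tasks).
    inputs = np.array([np.concatenate([program_features,
                                       np.asarray(cfg, dtype=float)])
                       for cfg in candidate_configs])
    predicted = model.predict(inputs)      # one predicted speedup per candidate
    return candidate_configs[int(np.argmax(predicted))]

# Example candidate space for XeonPhi (ranges as in Section 3.1.2):
# candidate_configs = [(p, t) for p in range(1, 225) for t in range(1, 257)]
```

On GPUs, where only the #tasks is tunable, the candidate list would simply enumerate task counts.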
We use the predictions to quickly rank candidate configurations at runtime to choose the one with the best predicted performance.

## 3 Performance Modeling

At the core of our approach is a machine learned performance model built upon the Multi-layer Perceptron (MLP) artificial neural network (ANN). Our prototype is implemented using the Python scikit-learn machine learning package [18]. Note that our prior work [16] uses a Support Vector Machine (SVM) based classifier. However, such an approach can only make predictions on a limited set of configurations seen at the training time. Unlike a classification-based approach, the new approach presented in this article is a _regression-based_ model which can make predictions on any stream configuration. This new approach thus has a better generalization ability for various heterogeneous architectures. We have also evaluated a number of alternative modeling techniques, including MLP, SVM, and decision trees. We chose MLP because it gives the best performance and has modest training overhead (see Section 6.6.1). Our performance model takes as input the feature values and a given configuration (e.g., #partitions and #tasks for XeonPhi, and #tasks for GPUs). It predicts the speedup for the given configuration. Building and using such a model follows a 3-step process for supervised learning: (i) generate training data; (ii) train a performance model; (iii) use the performance model. These steps are described as follows.

Figure 5: The training process of our performance model.

### 3.1 Training the Performance Model

Our method for model training is shown in Figure 5. To learn a regression model, we first need to profile the execution time (in order to calculate the speedup over the single-stream version) of all candidate configurations for each training program, and extract the feature values from the program. We then use the feature values, configuration settings and speedups to train a model.

#### 3.1.1 Generating Training Data

To generate training data, we apply _cross-validation_ to 39 benchmarks, i.e., by excluding the testing benchmarks from the training dataset (see also Section 5.3.1). We execute each training program and benchmark a number of times until the gap between the upper and lower confidence bounds is smaller than 5% under a 95% confidence interval setting. We then calculate the average speedup for a given stream configuration over the single-stream version. We exhaustively execute each training program across a wide range of stream configurations, and record the performance of each. Next, we calculate the speedup for each configuration, program and dataset. Finally, we extract the values of our selected set of features from each program and dataset. We stress that the trained model can be applied to stream configurations that are not seen in the training phase.

#### 3.1.2 Profiling Configurations

During the training phase, we exhaustively execute each training program across a set of streamed configurations. On XeonPhi, we profile each training program using the _#partitions_ ranging from 1 to 224 (the maximum number of physical threads on XeonPhi) and the _#tasks_ ranging from 1 to 256 (we chose these values because configuration settings beyond them gave poor performance during our initial evaluation). On GPUs, since the number of partitions currently cannot be configured, we set the _#partitions_ to be the same as the _#tasks_ to be consistent with XeonPhi.
On this platform, we also set the _#tasks_ to range between $2^{0}$ and $2^{10}$, which is big enough to include the optimal values according to our experiments. Note that these parameter ranges can be configured by the user.

#### 3.1.3 Building The Model

Each evaluated configuration is appended to the feature value vector of a training program to form a model input. The model inputs and the corresponding speedups (i.e., ground truths) for all training programs are passed to a learning algorithm. The algorithm finds a correlation between the input vector and the desired prediction. The output of our learning algorithm is an MLP model where the weights of the model are determined from the training data. Model parameter tuning is performed on the training dataset for each target hardware architecture, using cross-validation (see also Section 6.6.3). In our case, the overall training process for all the 39 training programs (which is dominated by training data generation) takes less than a week on a single machine. Since training is performed only once "at the factory", this is a _one-off_ cost.

### 3.2 Features

Our performance models are based exclusively on code and dynamic features of the target programs. Code features are extracted from the program source code, and dynamic features are collected using hardware performance counters during the initial profiling run of the target application. We restrict ourselves to hardware performance counters that are commonly available on modern processors, such as data cache misses, to ensure that our approach can be applied to a wide range of many-core architectures. We considered 38 candidate raw features in this work. Some features were chosen from our intuition based on factors that can affect performance, such as dts (host-device data transfer size) and #xfer_mem, while other features were chosen based on previous work [19, 20].

#### 3.2.1 Feature Selection

To build an accurate model through supervised learning, the training sample size typically needs to be at least one order of magnitude greater than the number of features. In this work, we start from 311 training samples and 38 raw features, so we would like to reduce the number of features in use. Our process for feature selection is fully automatic, described as follows (a code sketch is given at the end of this subsection). We first combine several raw features to form a set of combined normalized features, which are able to carry more information than the individual parts. For example, instead of reporting raw branch hit and miss counts, we use the branch miss rate. Next, we remove raw features that carry similar information to that already captured by the chosen features. To find which features are closely correlated, we construct a correlation coefficient matrix using the Pearson correlation coefficient [21]. The closer a coefficient between two features is to +/-1, the stronger the correlation between the two input features. We remove any feature which has a correlation coefficient (taking the absolute value) greater than 0.7 with an already-kept feature. Similar features include the number of executed instructions and the number of E-stage cycles that were successfully completed. Our feature selection process reduces the number of features to 10 for XeonPhi (see Table I) and 10 for the NVIDIA 1080Ti GPU (see Table II), where some features are shared. Since our approach for feature selection is automatic, it can be applied to other sets of candidate features. Note that feature selection is also performed using cross-validation (see also Section 5.2).
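The following minimal Python sketch captures the correlation-based filtering step described above; it is our illustration (not the authors' released code) and assumes the candidate feature values are held in a pandas DataFrame with one row per training sample.

```python
import pandas as pd

def filter_correlated_features(X: pd.DataFrame, threshold: float = 0.7):
    # Pairwise Pearson correlations, in absolute value.
    corr = X.corr(method="pearson").abs()
    kept = []
    for col in corr.columns:
        # Greedily keep a feature only if it is not too strongly
        # correlated with any feature we have already kept.
        if all(corr.loc[col, k] <= threshold for k in kept):
            kept.append(col)
    return kept
```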
It is to note that feature selection is also performed using cross-validation (see also Section 5.2). Table I: Chosen features for XeonPhi performance model Feature | Description ---|--- loop nest | at which level the outermost parallelizable loop lies on loop count | # of the parallel loop iterations #xfer_mem | # of host-device transfer API calls dts | total host-device transfer size redundant transfer size | host-device transfer size among overlapping tasks max blocks | the maximum number of tasks of the application min task unit | the minimum task granularity for a partition # instructions | the total number of instructions of the kernel branch miss | branch miss rate L1 DCR | L1 Data cache miss rate Table II: Chosen features for GPU programs Feature | Description ---|--- Access type 1 | # array access, whose fastest varying index is an affine function of the block id Access type 2 | #array accesses, whose second or higher dimensional index is an affine function of the block id #xfer_mem | # of host-device transfer API calls host to device transfer size | total host to device transfer size device to host transfer size | total device to host transfer size redundant transfer size | host-device transfer size among overlapping tasks max blocks | the maximum number of tasks # instructions | the total number of instructions of the kernel divergent branches | # divergent branches L2 read miss rate | L2 cache read miss rate #### 3.2.2 Feature Standardization Supervised learning typically requires the feature values to lie in a certain range. Therefore, we scaled the value for each of our features between the range of 0 and 1. We record the maximum and minimum value of each feature found at the training phase, and use these values to scale features extracted from a new application after deployment. We truncate a value during deployment if the value is outside the minimum/maximum value range seen during training. It is to note that we also use the same approach to normalize the model predictions (speedups) to the range of 0 and 1. In this work, we choose Z-score to standardize the training data, and the details of quantifying the impact of feature engineering methods can be found in Section 6.6.2. (a) XeonPhi (b) NVIDIA GPU Figure 6: Feature importance on (a) XeonPhi and (b) NVIDIA GPU. #### 3.2.3 Feature Importance To understand the usefulness333In Section 6.6.4, we give a further breakdown of the impact of individual feature to the model performance on a per benchmark basis. of each feature, we apply a factor analysis technique called Varimax rotation [22] to the feature space transformed by the principal component analysis (PCA). This technique quantifies the contribution of each feature to the overall variance in each of the PCA dimensions. Intuitively, the more variances a feature brings to the space, the more useful information the feature carries. As an example, Figure 6 shows the top features chosen for XeonPhi and NVIDIA GPU architectures. For the XeonPhi platform, features that capture the parallelism degree (e.g. max blocks), host-device communication (e.g. redundant transfer size), and computation (e.g. #instructions) are found to be important. Other features such as L1 DCR and loop nest are useful, but are less important compared to others. On the NVIDIA GPU platform, we note that the parallelism degree is important, and the other features are equally useful (Figure 6b). This figure shows that prediction can accurately draw upon a subset of aggregated feature values. 
### 3.3 Runtime Deployment

Once we have built and trained our performance model as described above, we can use it as a cost function to search for the best stream configuration for any _new_, _unseen_ program. Feature values are extracted from the single-stream version of the code. Static code features (such as loop count) are extracted from the program source at compile time. Dynamic features (such as branch miss) are extracted by profiling the program without partitioning for a few loop iterations (which typically translates to several microseconds). After feature collection, we feed the feature values to the search engine to rank all candidate configurations using the performance model. The top-ranked stream configuration is then used for the target program. In Section 4.4, we provide further details on how the performance model is integrated with the host code generation process.

#### 3.3.1 Adapt to Changing Program Phases

Our current implementation chooses a configuration for each kernel and does not change the configuration throughout the kernel execution. It can therefore adapt to different behaviors across kernels, because predictions are performed on a per-kernel basis. We found that this strategy is sufficient for many data-parallel kernels targeted in this work. Our approach can be extended to adapt to phase or program behavior changes within a kernel. One way of doing this is to first partition the input data into groups and then perform configuration selection before launching the kernel on each input data group. To reduce the prediction and configuration overhead, we can periodically sample the performance counters and trigger re-configuration if the readings differ significantly from those used for the current prediction. Dynamic re-configuration of a running kernel would require extending the underlying runtime (e.g., hStreams or CUDA) to adjust thread mapping, and hardware support to stop and resume the execution contexts. We leave this as future work.

## 4 OpenMP to Streamed Code Generator

Figure 7: Workflow for translating OpenMP programs to streamed programs using our automatic code generator.

Currently, there are very few publicly available benchmarks for utilizing the streaming capability of heterogeneous many-core architectures, in particular XeonPhi. To evaluate our approach on a diverse set of benchmarks, we have developed a compiler-based code generator, autostreamer, to automatically translate OpenMP programs into streamed code for the target architecture. Our code generator is open-sourced (available at: https://github.com/wisdom-moon/autostreamer). Our implementation currently supports converting OpenMP code to hStreams, CUDA and OpenCL programs. While we do not claim novelty here, as several works on source-to-source translation from OpenMP to CUDA [23, 24, 25, 26] or OpenCL [20, 27] exist, we believe the tool can serve as a useful utility for translating OpenMP programs to exploit multi-stream performance on heterogeneous many-core architectures.

### 4.1 Code Generator Overview

Figure 7 depicts our source-to-source code generator for translating OpenMP code to streamed programs. We use LLVM's Clang front-end to convert OpenMP code into the abstract syntax tree (AST). We then traverse the AST to obtain the information needed to generate candidate streamed kernels and host-device management code. The generated kernel and host code make use of existing programming models for kernel launching and communication management.
We use hStreams for XeonPhi and CUDA or OpenCL for GPUs. Our current implementation supports the translation of OpenMP parallel loops, i.e., loops annotated with omp for or omp for reduction constructs. For each parallel loop, we outline the loop body and translate it into an individual kernel function. We then replace the original loop body with a function call (running on the host CPU) to launch the generated kernel. We also generate management code for streaming context initialization, data partitioning, data movements between the host and the accelerator, etc. Our code generator relies on the native host/device compiler to optimize the generated code. We have also compared our automatically generated code against the manually translated code used in our prior work [16] and found that there is little difference in performance for the set of OpenMP benchmarks used in this work.

### 4.2 Preprocessing

As an example, Figure 8 illustrates how an OpenMP parallel loop can be translated into hStreams code for XeonPhi. Note that a similar code generation process is implemented for GPUs, using CUDA for NVIDIA GPU architectures and OpenCL for other GPU platforms. For each OpenMP parallel loop, we extract information about the loop iterations from the loop header. In this work, partitioning is achieved by splitting the loop iteration space. Furthermore, we collect all the variables needed by the hStreams kernel. Because hStreams requires kernel parameters to be passed as uint64_t values (lines 1-2 of Figure 8b), the kernel parameters are cast into this type. The kernel parameters also need to be packed into an array (line 21 in Figure 8c); the hStreams library then unpacks the kernel parameters from the array and passes them to the kernel function.

During the preprocessing stage, we also extract the static code feature values of each target parallel loop. The code feature values are encoded into the source code during host code generation. Note that our approach can easily be applied to existing hStreams programs, by first gathering feature values from an hStreams kernel and then storing the extracted information in an auxiliary file or source code through a compiler front-end pass.

```
 1  // An OpenMP C code for vector addition
 2  float * hostOutput = (float *) malloc(inputLength*sizeof(float));
 3  ...
 4  #pragma omp parallel for
 5  for(int i=0; i<inputLength; i++)
 6  {
 7    hostOutput[i] = hostInput1[i] + hostInput2[i];
 8  }
 9  ...
```
(a) OpenMP code.

```
 1  COINATIVELIBEXPORT
 2  void kernel (uint64_t arg0, uint64_t arg1, ... uint64_t arg5)
 3  {
 4    int _start = (int) arg0;
 5    ...
 6    float *hostInput2 = (float *) arg5;
 7
 8    #pragma omp parallel for
 9    for(int i= _start; i< _end; i++)
10      hostOutput[i] = hostInput1[i] + hostInput2[i];
11  }
```
(b) hStreams kernel code.

```
 1  //output buffer
 2  float * hostOutput = (float *) malloc(inputLength*sizeof(float));
 3
 4  //Feature update and prediction
 5  Stream config;
 6
 7  conf_search(&config, &kernel_1_features, kernel_1_profile_runs);
 8  int partitions = config.partitions;
 9  int tasks = config.tasks;
10
11  //hStreams Initialization
12  hStreams_app_init(partitions, 1);
13  ...
14  hStreams_app_create_buf((float *)hostInput1, ...);
15  ...
16
17  //Work partition
18  int sub_blocks = inputLength / tasks;
19  int remain_index = inputLength % tasks;
20
21  //Initialize kernel arguments
22  uint64_t args[6]; args[2] = (uint64_t) inputLength;
23  ...
24  for (int idx = 0; idx < tasks; idx++) {
25    args[0] = (uint64_t) _start;
26    _end = _start + sub_blocks;
27
28    if (idx < remain_index)
29      _end ++;
30
31    args[1] = (uint64_t) _end;
32    hStreams_app_xfer_memory(&hostInput1[_start], &hostInput1[_start], (_end-_start)*sizeof(float), idx % partitions, HSTR_SRC_TO_SINK, NULL);
33    hStreams_app_xfer_memory(&hostInput2[_start], ...);
34
35    //Kernel launch
36    hStreams_EnqueueCompute(idx % partitions, "kernel_1", 3, 3, args, ...);
37
38    //Read back results
39    hStreams_app_xfer_memory(&hostOutput[_start], ...);
40    _start = _end;
41  }
42  ...
43  //hStreams cleanup code
44  hStreams_app_fini();
```
(c) hStreams host code.

Figure 8: A running example of translating (a) an OpenMP parallel loop to (b) an hStreams kernel and (c) host management code.

### 4.3 Kernel Code Generation

Generating a streamed kernel function is straightforward, as much of the OpenMP code can be re-used. Figure 8b gives an example of the automatically generated hStreams kernel for the OpenMP loop given in Figure 8a. For the example given in Figure 8, an hStreams kernel starts with the pre-processor macro COINATIVELIBEXPORT (lines 1-2 in Figure 8b). The number and the types of the kernel parameters are loop-specific and are automatically determined by our code generator. Within the generated kernel, all the function parameters are cast from uint64_t into an appropriate type before they are used. Note that the OpenMP parallel for pragmas are kept in the generated kernel code, per the hStreams requirement (line 8 in Figure 8b). With our code generator, the original outermost loop iteration space is partitioned among parallel streams. The amount of work given to a specific stream is determined by the _start and _end variables, which define the part of the loop iteration space a stream instance will work on. A similar kernel code generation approach is implemented for GPUs using CUDA or OpenCL.

### 4.4 Host Code Generation

To generate host code, we replace the original OpenMP parallel loop with a function call to invoke the generated kernel (e.g., hStreams_EnqueueCompute in Figure 8c), together with additional code to initialize the host context and to manage data transfers.

#### 4.4.1 Feature Value Collection

Static code features, extracted by our code generator, are encoded as a feature vector of real values. The feature vector is passed to our configuration search engine to find the optimal stream configuration at runtime. Dynamic feature values are automatically collected by running the generated streamed kernel for 5 iterations under the single-stream configuration. As some loop bounds depend on the input, we might be unable to determine certain feature values at compile time. These features are represented as static symbolic pre-computations of loop bound variables, which are updated with concrete values at runtime.

#### 4.4.2 Setting Stream Configurations

To partition tasks among streams, we break the loop iterations into a number of equally sized chunks (subtasks). We then group the hardware processor cores into partitions, where each partition contains a fixed set of streams.
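The work-partitioning arithmetic of Figure 8c (lines 18-31) is simply an even split of the iteration space with the remainder spread over the first few tasks. A compact sketch of the same computation:

```python
def chunk_bounds(n_iters: int, n_tasks: int):
    # Even split; the first (n_iters % n_tasks) tasks get one extra
    # iteration, mirroring lines 18-31 of Figure 8c.
    base, rem = divmod(n_iters, n_tasks)
    start = 0
    for idx in range(n_tasks):
        end = start + base + (1 if idx < rem else 0)
        yield idx, start, end  # task idx covers iterations [start, end)
        start = end

# Example: 10 iterations over 4 tasks ->
# (0, 0, 3), (1, 3, 6), (2, 6, 8), (3, 8, 10)
print(list(chunk_bounds(10, 4)))
```

Each task idx is then placed on partition idx % #partitions (as in Figure 8c), which yields the round-robin placement described below.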
Processor partitioning and stream creation are achieved by calling hStreams_app_init (line 12 in Figure 8c) for XeonPhi (cudaStreamCreate and clCreateCommandQueue for CUDA and OpenCL programs, respectively), passing the stream configuration given by our search engine. To overlap host-device communications, we further split the input/output data arrays into multiple data blocks (lines 32-39 in Figure 8c), where each task operates on one block at a time while another data block is being transferred between the host and the accelerator. The number of data blocks is determined by the stream configuration chosen at program runtime. The amount of work per task and the size of the transferred data can be determined from the kernel parameters. For example, in the for-loop at line 24 of Figure 8c, we calculate them from the starting position (_start) and the block size (sub_blocks). Thereafter, we schedule tasks and transfer the corresponding data blocks onto streams in a round-robin fashion.

#### 4.4.3 Runtime Prediction

When a streamed (e.g., hStreams or CUDA) kernel is invoked, the configuration selection engine library chooses a stream configuration (line 7 in Figure 8c) for the kernel. It uses the performance model to rank the candidate stream configurations and returns the optimal configuration (_#partitions_ and _#tasks_ for the example shown in Figure 8). The returned values are then used to initialize the streamed context (lines 8-9 of Figure 8c). The overhead of prediction is negligible (a few milliseconds) and is included in the reported results.

#### 4.4.4 Supporting OpenMP Constructs

OpenMP variables may have additional type information specified by directives, including default, shared, private, firstprivate, lastprivate, copyin and threadprivate. Our generator uses these directives to map data onto the accelerator memory space. Each variable with the shared or default directive is translated into a global variable shared by all parallel threads. Variables declared as private and threadprivate are translated such that there is a private copy for each streamed kernel; no memory transfer between the host and the accelerator is needed. For each variable specified as copyin or firstprivate, we create a private copy for each streamed kernel but initialize each copy using explicit memory transfers before its first use. Similarly, we create a private copy of a lastprivate variable, and the original variable is updated by the stream that executes the last iteration.

Our implementation also supports a number of synchronization and thread constructs. Structured blocks identified with the master and single directives are executed by one thread on the host multi-core. barrier is implemented by splitting up the parallel loop into smaller tasks to create synchronization points among multiple streams. critical is implemented by using a mutex lock to restrict the execution of the associated structured blocks to a single thread at a time. The atomic and flush directives are already supported by hStreams, CUDA and OpenCL.

#### 4.4.5 Host-Accelerator Communication Optimization

For each buffer that is used by both the host and the accelerator, we manage two copies: one in host memory and the other in accelerator memory. Our runtime records the status of each variable and checks whether the copy in a given memory space is valid. No memory transfer is needed as long as the copy in the target memory space is valid.
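A minimal sketch of this bookkeeping (the class and function names are our own; the real runtime tracks hStreams/CUDA buffers rather than Python objects) keeps one validity flag per copy and skips transfers whenever the destination copy is still valid. The `write` hook implements the conservative whole-buffer invalidation policy described next:

```python
def transfer(buf, to):
    """Stand-in for a host<->device copy (e.g., hStreams_app_xfer_memory)."""
    pass

class BufferTracker:
    """Tracks which copy (host or device) of each buffer is valid."""

    def __init__(self):
        self.valid = {}  # buffer id -> {"host": bool, "device": bool}

    def register(self, buf):
        self.valid[buf] = {"host": True, "device": False}

    def write(self, buf, side):
        # Conservative policy: a write to any element marks the whole
        # copy on the other side as stale.
        other = "device" if side == "host" else "host"
        self.valid[buf][side] = True
        self.valid[buf][other] = False

    def ensure(self, buf, side):
        # Issue a transfer only if the copy on `side` is stale.
        if not self.valid[buf][side]:
            transfer(buf, to=side)
            self.valid[buf][side] = True
```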
We currently use a conservative approach: if an element of a buffer has been updated, the entire buffer needs to be synchronized before it can be used by threads running on a different device. We also avoid unnecessary device-to-host data transfers by tracking the data dependences between the kernels and the host program. For example, when there are data dependences between two kernels but the host does not access the data between the two kernel invocations, we directly pass the memory address of the buffer to the latter kernel (without moving the data back to the host).

## 5 Experimental Setup

### 5.1 Hardware, Systems Software and Benchmarks

Table III: Our evaluation platforms

| | CPU-XeonPhi | CPU-GPU |
|---|---|---|
| CPU | 8-core Xeon CPU @ 2.6 GHz | Core i7-8700K CPU @ 3.7 GHz |
| Accelerator | Intel Xeon Phi 31SP | NVIDIA GeForce GTX 1080 Ti GPU |

Platforms. We evaluate our approach on two heterogeneous many-core platforms: one is a CPU-XeonPhi platform and the other is a CPU-GPU platform. Table III gives details of our hardware platforms.

Systems software. On the CPU-XeonPhi platform, the host CPU and the accelerator are connected through PCIe. The host runs Redhat Linux v7.0 (with kernel v3.10). The coprocessor runs a customized uOS (v2.6.38.8). We use Intel's MPSS (v3.6) to communicate between the host and the coprocessor. We use the Intel hStreams library (v3.6) and Intel ICC (v16.0.3) for compilation (with -O3 as the compiler option). The CPU-GPU platform runs Ubuntu 16.04 (with kernel v4.15). We use CUDA v10.0 and gcc v5.3 as the host compiler with the "-O3" option.

Benchmarks. We use our code generator to translate 37 OpenMP applications from commonly used benchmark suites into hStreams and CUDA programs. We have excluded benchmarks where the data transfer cannot be overlapped with the kernel execution, as they do not benefit from streamed parallelization. Table IV gives the full list of these benchmarks. Among them, convolutionFFT2d and convolutionSeparable have algorithm-dependent parameters and are regarded as different benchmarks in the experiments. This setting gives us a total of 39 programs. We run the majority of the programs using over 25 different datasets, except for some applications where we used around 10 datasets because algorithmic constraints prevent us from using a larger number of inputs.

Table IV: Streamed benchmarks used in our experiments.

| Suite | Name | Acronym | Name | Acronym |
|---|---|---|---|---|
| NVIDIA SDK | convol.Separable | convsepr1(8) | dotProduct | dotprod |
| | convolutionFFT2d | fftx1y1(4y3) | fwt | fwt |
| | MonteCarlo | montecarlo | matVecMul | mvmult |
| | scalarProd | scalarprod | transpose | transpose |
| | vectorAdd | vecadd | | |
| AMD SDK | binomial | binomial | BlackScholes | blackscholes |
| | dct | dct | prefixSum | prefix |
| Parboil | bfs | bfs | histo | histo |
| | lbm | lbm | mri-q | mri-q |
| | mri-gridding | mri-gridding | sad | sad |
| | sgemm | sgemm | spmv | spmv |
| POLYBENCH | 2mm | 2mm | 3mm | 3mm |
| | adi | adi | correlation | correlation |
| | covariance | covariance | deriche | deriche |
| | gemm | gemm | gemver | gemver |
| | gesummv | gesummv | heat-3d | heat-3d |
| | jacobi-1d | jacobi-1d | jacobi-2d | jacobi-2d |
| | mvt | mvt | syr2k | syr2k |
| | syrk | syrk | | |

### 5.2 Competitive Approaches

We compare our regression-based approach against our preliminary work that employs an SVM-based classifier to predict the optimal stream configuration [16]; we denote this prior approach as SVM-classifier. We also compare our approach against two recent models for predicting the optimal stream configuration on GPUs.
As it is currently not possible to configure the number of processor partitions on GPUs, the relevant GPU models can only predict the number of tasks.

_Liu et al._ In [12], Liu _et al._ use linear regression models to search for the optimal number of tasks for GPU programs. The approach employs several analytical models, described as follows. For a task with an input data size of $m$, the transfer time between the CPU and the accelerator, $T_{t}$, is determined as $T_{t}=\alpha\cdot m+\beta$, and the computation time, $T_{c}$, is calculated as $T_{c}=\eta\cdot m+\gamma$, where the model coefficients, $\alpha$, $\beta$, $\eta$ and $\gamma$, are determined through empirical experiments. For a given kernel with $N$ input data elements running using $n$ streams, this approach partitions the computation into $n$ tasks, where the data size for each task, $m$, is equal to $N/n$. For kernel-dominated programs, the total execution time, $T_{total}$, is determined by:

$T_{total}=T_{t}+nT_{c}=\alpha\cdot m+\frac{N\gamma}{m}+N\eta+\beta$

For data-transfer-dominated programs:

$T_{total}=\alpha\cdot N+2\frac{N}{m}\beta$

By setting the first-order partial derivative of $T_{total}$ with respect to $m$ to zero (and checking the second-order derivative), we obtain the optimal task granularity as $m=\sqrt{\frac{N\gamma}{\alpha}}$ for kernel-dominated programs; the number of tasks then follows as $n=N/m$. Note that $m=N/2$ is the optimal parameter for data-transfer-dominated programs, i.e., the optimal number of tasks is 2. Another problem of this model is that it does not consider scenarios where communications in different directions (i.e., host to device and device to host) can overlap with each other. Note that we set the _#partitions_ to be the same as $n$ for XeonPhi.

_Werkhoven et al._ The work presented by Werkhoven _et al._ models the performance of data transfers between the CPU and the GPU [10]. They use the LogGP model to estimate the host-device data transfer time. Specifically, the model estimates the data transfer time using five parameters: the communication latency ($L$), the overhead ($o$), the gap ($g$), the number of processors ($P$), and the PCIe bandwidth ($G$). Let $B_{hd}$ denote the amount of data transferred from the host to the device, $B_{dh}$ the amount transferred in the opposite direction, and $T_{kernel}$ the kernel execution time. For the transfer-dominant scenario, the optimal number of tasks (i.e., _#tasks_), $N_{s}$, can be estimated by solving the following equation:

$B_{dh}\cdot G_{dh}+g\cdot(N_{s}-1)=\begin{cases}\frac{T_{kernel}}{N_{s}}+\frac{B_{dh}}{N_{s}}\cdot G_{dh},&\text{if }B_{dh}>B_{hd}\\ \frac{B_{hd}}{N_{s}}\cdot G_{hd}+\frac{T_{kernel}}{N_{s}},&\text{otherwise}\end{cases}$

This model does not consider the kernel-dominant scenario, as it assumes that the kernel execution time increases with the number of streams and does not model the kernel execution time itself. Here, we use the same equation to calculate the optimal number of tasks. For this model, we also set the _#partitions_ to be equal to the optimal $N_{s}$ value on XeonPhi.

### 5.3 Evaluation Methodology

#### 5.3.1 Model Evaluation

We use cross-validation to evaluate our machine learning models. To test the portability of our approach, we apply _leave-one-out_ cross-validation, described as follows. We exclude the target program from the training program set, learn a model using the _remaining_ programs, and then apply the learned model to the testing program. We repeat this process until each benchmark has been tested once.
This is a standard evaluation methodology, providing an estimate of the generalization ability of a machine-learned model in predicting _unseen_ data. Note that we exclude both convolutionFFT2d and convolutionSeparable from the training set when one of the two is evaluated, and we make sure all approaches are trained on the same benchmarks for fair comparisons.

#### 5.3.2 Performance Report

We run each program under a stream configuration multiple times and report the _geometric mean_ of the runtime. Compared to the arithmetic mean, the geometric mean is often considered a more suitable metric for reporting program performance, as it better minimizes the impact of outliers [28]. To determine how many runs are needed, we calculate the confidence range using a 95% confidence interval and make sure that the difference between the upper and lower confidence bounds is smaller than 5%.

## 6 Experimental Results

In this section, we first present the overall performance of our approach on both platforms. We then compare our approach against fixed stream configurations, two prior analytical models, and our previous work. We further discuss where the benefits of streaming parallelism come from and how our approach works. Finally, we describe the tuning process of our model.

### 6.1 Overall Performance

Figure 9: Overall performance of our approach over a single-stream version on (a) XeonPhi and (b) NVIDIA GPU. Our approach achieves, on average, 93.7% and 97.9% of the oracle performance on XeonPhi and the NVIDIA GPU, respectively. The min-max bars show the range of performance achieved across different inputs.

In this experiment, we exhaustively profiled each application with all possible stream configurations and report the best-found performance as the _Oracle_ performance. The Oracle gives an indication of how close our approach is to a _theoretically perfect_ solution. The baseline used to calculate the speedup is running the application using a single stream without processor core or task partitioning.

The overall result is shown in Figure 9. The min-max bar on the diagram shows the range of speedups per application across all evaluated inputs. Overall, our approach achieves an average speedup of 1.57$\times$ and 1.1$\times$ over the single-stream configuration on XeonPhi and the GPU, respectively. This translates to 93.7% and 97.9% of the Oracle performance on XeonPhi and the GPU, respectively.

On XeonPhi, the performance improvement of our approach comes from two factors. First, by predicting the right processor partition size, our approach allows effective overlapping of the host-device communication and computation. Second, by matching task parallelism to the number of available processor cores, our approach can reduce the overhead of thread management compared to the single-stream execution. When the host-device communication time dominates the streaming process, the performance improvement mainly comes from computation-communication overlapping, and the speedup from streaming is consistently less than 2$\times$. When the kernel execution time dominates the streaming process, the application can benefit from the reduced overhead of thread management. In this case, the speedup can be as large as 5$\times$. We provide a further discussion of this later in Section 6.5.1.

On the GPU, we can exploit bidirectional data transfers between the host and the device by using pinned memory, which is not supported by hStreams.
The support of bidirectional data transfers allows us to obtain further performance gains by overlapping host-device data transfers and computation. The theoretical upper-bound speedup on the GPU platform is 3$\times$, reached when data transfers in both directions are perfectly overlapped with computation. A representative example is fftx4y3 with the largest dataset, where the data transfer times in the two directions are equal and the kernel execution time is 1.5 times the data transfer time. The Oracle speedup is 2.3$\times$, and our approach achieves a speedup of 2.2$\times$. On the other hand, because the current GPU implementation does not support processor core partitioning, the kernel execution time benefits less from using multiple streams. Programs whose kernel execution time dominates, such as bfs and MonteCarlo, see no speedup from using multiple streams.

### 6.2 Comparison to Fixed Stream Configurations

Our approach predicts, from a wide range of stream configurations, which configuration is likely to give the best performance for a given program and dataset. A natural question to ask is: is there a fixed stream configuration that gives reasonably good performance across benchmarks and datasets? To answer this question, we compare our predictive modeling based approach to two specific configurations on each of our evaluation platforms. Our justification for selecting the fixed configurations is as follows. On XeonPhi, our initial results in Section 2 indicate that using the stream configuration of $(4,16)$, i.e., partitioning the cores into 4 groups and running 4 tasks on each partition (16 tasks in total), gives good performance. The statistics obtained from the training data suggest that the configuration of $(17,85)$ gives the best average performance across training samples. On the GPU, several programs support a maximum of 4 tasks, so we select the two configurations $(2,2)$ and $(4,4)$. The results are shown in Figure 10.

Figure 10: Comparing the performance with two fixed configurations on (a) XeonPhi and (b) NVIDIA GPU: config. $(4,16)$ of 4 partitions and 4 tasks per partition, config. $(17,85)$ of 17 partitions and 5 tasks per partition, config. $(2,2)$ of 2 partitions and 1 task per partition, and config. $(4,4)$ of 4 partitions and 1 task per partition.

Figure 11: Violin plot showing the distribution of speedups per scheme across benchmarks and datasets on (a) XeonPhi and (b) GPU. The shape of the violin corresponds to the distribution of speedups relative to the Oracle performance. The thick black line shows where 50% of the data lies.

#### 6.2.1 XeonPhi

On XeonPhi, we observe improved performance for several benchmarks, such as mri-gridding, transpose and sad, under both configurations, but slower performance for dotprod, vecadd, blackscholes, lbm, and mri-q (Figure 10a). For prefix, configuration $(17,85)$ delivers improved performance while configuration $(4,16)$ leads to a slowdown. Overall, neither of the two fixed configurations improves performance on average. On average, our approach outperforms the two fixed configurations by a factor of 1.4, and delivers consistently improved performance across benchmarks and datasets. The violin plot in Figure 11a shows how far each of the three schemes is from the Oracle performance across benchmarks and datasets. Our approach not only delivers the performance closest to the Oracle, but also has the largest number of samples whose performance is close to the Oracle.
By contrast, the performance given by the fixed configurations is, for many samples, farther from the Oracle performance.

#### 6.2.2 GPU

On the GPU, in most cases, the performance of configuration $(2,2)$ is moderate: not great, but not much worse than the single-stream version, leading to an average speedup of 1.03$\times$ (Figure 10b). By contrast, although configuration $(4,4)$ performs poorly on two programs, it delivers a slightly larger average speedup of 1.04$\times$. By choosing the stream configuration on a per-program basis, our approach outperforms the two fixed configurations, achieving an average speedup of 1.10$\times$. Only on four programs does our approach deliver slightly worse performance, and by a small margin.

The violin plot in Figure 11b also confirms the strength of our approach by presenting the distribution of performance improvements. The results in the diagram are normalized to the Oracle (best-available) performance. For most of the programs, the two fixed configurations deliver 80% to 100% of the Oracle performance. However, configuration $(4,4)$ can lead to rather poor performance (less than 40% of the best-available performance) on some programs. Compared to the fixed configurations, the performance distribution given by our approach is concentrated in the range of 90% to 100%, where most programs lie. Furthermore, compared to the fixed configurations, our approach has fewer performance outliers, and those outliers suffer less severe slowdowns. Therefore, our approach delivers consistently better performance than the fixed configurations.

#### 6.2.3 Summary

This experiment confirms that a fixed configuration fails to deliver improved performance across applications and datasets; selecting the right stream configuration on a per-program, per-dataset basis is thus required.

### 6.3 Comparison to Analytical Models

Figure 12: Comparing against _Liu et al._ and _Werkhoven et al._ on (a) XeonPhi and (b) NVIDIA GPU.

In this experiment, we compare our approach to the two recent analytical models described in Section 5.2. The results are shown in Figures 12 and 13. On XeonPhi, both competing models prefer using 2 tasks across benchmarks and datasets. This is because many programs are kernel-dominated, and the analytical models simply assume that task partitioning has no effect on the kernel's performance and do not consider the thread management overhead. On the GPU, the model proposed by _Liu et al._ tends to use 2 tasks across benchmarks and datasets. This is due to the fact that most programs are data-transfer-dominated and this model ignores the overlap of the bidirectional data transfers between the host and the device.

XeonPhi. Figure 12a demonstrates that our approach gives better performance for nearly all programs on XeonPhi. For the remaining handful of programs, all three approaches deliver comparable performance. Comparing against the results in Figure 10, we find that the performance of the analytical models is similar to that of the fixed stream configurations. This is because the performance of seven of the programs, such as binomial, changes dramatically under different stream configurations (see also Figure 2), while the performance of the remaining programs is not sensitive to the variation of stream configurations. From Figure 13a, we can further see that _Liu et al._ and _Werkhoven et al._ deliver performance within a range of 20% to 80% of the Oracle, while the performance of our approach is concentrated in the range of 80% to 100%.
Thus, our approach delivers consistently better performance than the alternative models.

GPU. Figure 12b shows that our approach delivers better performance for around 75% of the programs on the GPU. Since _Werkhoven et al._ and _Liu et al._ are manually tuned for GPUs, they give better performance than our approach on some benchmarks. However, our approach has the advantage of being automatically learned from training data, with little expert involvement. The performance of our approach can be further improved by using more training examples to better cover the program space. Figure 13b shows that _Liu et al._ and _Werkhoven et al._ deliver performance within ranges of 5% to 80% and 70% to 100% of the Oracle, respectively. By contrast, the performance of our approach is concentrated in the range of 90% to 100% for more programs. Therefore, overall, our approach delivers better average performance than the alternative models.

Figure 13: Violin plots showing the distribution of speedups across benchmarks and datasets on (a) XeonPhi and (b) GPU.

### 6.4 Comparison to Classification-based Approach

Figure 14: Comparing against a classification based approach on (a) XeonPhi and (b) NVIDIA GPU.

Our prior work uses an SVM classifier to predict the configurations [16]. Compared with it, the regression-based model presented in this article has several advantages. A classification model predicts which of a set of predefined labels the input belongs to. Using this strategy, we would need to label each unique stream configuration. This leads to a total of 175 labels for 311 profiling samples on XeonPhi, and 11 labels on the GPU. On XeonPhi, the ratio of samples to labels is too small to build an accurate model; as a result, we had to merge labels in our prior work [16] at the cost of accuracy. Classification is a constrained optimization problem where the model has to know all the possible configurations during training. Our new regression-based approach avoids this pitfall by directly modeling the impact of the stream configuration; it can thereby be used on any stream configuration, as the configuration is part of the model's input.

Figure 14a presents the results obtained on XeonPhi. Our regression-based approach outperforms the SVM-classifier on 21 of the 39 programs and achieves over 5% performance improvement for 13 programs. Note that the overhead of ranking stream configurations is included in the experimental results. Overall, our regression-based approach improves upon the SVM-classifier by, on average, 3% (up to 46%). Unlike on XeonPhi, on the GPU we were able to obtain sufficient training samples per label (because the optimization space is smaller) to build a more accurate classification model. As can be seen from Figure 14b, the average speedups of the SVM-classifier and the regression-based approach are comparable. Compared to a classifier, our regression-based approach has the advantage of being applicable to configurations that were not seen during the training phase; it therefore has better generalization ability.

### 6.5 Further Analysis of Performance Results

We now take a closer look at the performance results, using XeonPhi as a case study.

Figure 15: Reduction of kernel computation time over a single-stream execution on XeonPhi. The performance improvement comes from the reduction of the threading overhead. A stream configuration is annotated as (_#partitions_, _#tasks_).
#### 6.5.1 High Speedup Cases

On XeonPhi, bidirectional data transfers between the host and the accelerator cannot be overlapped; i.e., we can only issue a data transfer from the host to the device or vice versa at any one time, but not both simultaneously. As a result, the theoretical upper-bound speedup from overlapping computation and communication is 2$\times$, reached when the computation is perfectly overlapped with the data transfer time. It is interesting to observe that several benchmarks achieve a speedup of over 2$\times$ on XeonPhi (see Figure 9a). Upon closer investigation, we noticed that such performance is attributable to a reduction in the kernel execution time in addition to the overlapping of communication and computation. To quantify the benefit of the kernel time reduction, we measure the kernel execution time with and without multiple streams and calculate the speedup between them. Note that we _exclude the host-device communication time in this case_ to isolate the contributing factors. The kernel time improvements for transpose, binomial, and fftx1y1 are shown in Figure 15. As can be seen from the diagram, choosing a good stream configuration can lead to a more than 4$\times$ reduction in the kernel execution time. This is because these benchmarks are implemented by parallelizing the inner loop within a nested loop. At runtime, the parallel threads working on the inner loop are created, synchronized, or destroyed for each outer loop iteration. Such threading overhead can be significant when the outer loop iterates a large number of times. With multiple streams, we divide the whole outer loop iteration space into multiple smaller iteration ranges. This allows multiple groups of threads to be managed simultaneously, leading to a significant decrease in threading overhead and a faster kernel execution time. On the other hand, using too many streams and partitions leads to a performance decrease, because stream management also comes at a cost, which increases as the number of partitions grows. Nonetheless, for applications where the kernel computation dominates the program execution time, reducing the kernel time can lead to additional improvement, yielding more than 2$\times$ speedups.

Figure 16: Violin plot showing the distribution of speedups per benchmark across datasets on (a) XeonPhi and (b) NVIDIA GPU. The shape of the violin corresponds to the speedup distribution. The thick black line shows where 50% of the data lies.

#### 6.5.2 Speedup Distribution

Figure 16 gives the speedup per benchmark across datasets on XeonPhi and the GPU. The shape of the violin plot corresponds to the speedup distribution. On XeonPhi, we see that the speedups of montecarlo and prefix are distributed fairly uniformly, while the distributions for fftx1y1 and fftx4y3 are bimodal (i.e., they have two peaks). Further, the input datasets have little impact on the behavior of fwt and lbm, so their speedups remain constant across datasets. On the GPU, the speedups of dotprod, vecadd, blackscholes and mri-q are distributed fairly uniformly, while the distributions for convsepr1, convsepr8, fftx1y1, fftx4y3 and dct are unimodal (i.e., they have one peak). Furthermore, the input datasets have only a slight impact on the performance behaviors of montecarlo, scalarprod, transpose and binomial, so their speedups remain constant across datasets. To conclude, the streaming speedups of some applications are sensitive to their input datasets whereas others are not.
The distribution of speedups on the GPU is also more concentrated than on XeonPhi. This is because the current GPU implementation does not support processor core partitioning, so the kernel execution time benefits less from multiple streams than on XeonPhi.

Figure 17: The relation between the computation-communication ratio and the speedup. The computation-communication ratio is normalized using the natural logarithm function; thus, the kernel computation time equals the host-device communication time when $ratio=0$. In general, a higher computation-communication ratio leads to a better speedup.

#### 6.5.3 Correlation Analysis

Figure 17 shows the relation between the computation-communication ratio and the achieved speedup when using heterogeneous streams across all benchmarks and datasets on XeonPhi. We see that the computation-communication ratio varies over the benchmarks and that the speedup changes accordingly, but in general, a higher computation-communication ratio leads to a greater speedup. As explained in Section 6.5.1, in addition to overlapping computation and communication, our approach can also reduce the kernel computation time by choosing the right stream configuration. Therefore, benchmarks with a high computation-communication ratio also benefit from a reduction in the kernel computation time.

To quantify the relation between the computation-communication ratio and the speedup, we calculate the Pearson correlation coefficient of the two variables. The calculation gives a correlation coefficient of 0.7, indicating that the two variables have a strong linear correlation. By carefully selecting the stream configuration, our approach tries to maximize the overlap between communication and computation, which leads to favorable performance.

#### 6.5.4 Impact of Streaming Parallelism

Figure 18: Breakdown of program execution time ($T$), host-device data transfer time ($T_{m}$), kernel execution time ($T_{k}$), hStreams context initialization overhead ($T_{c}$) and communication-computation overlapping time ($T_{o}$) for single and best-performing multi-stream configurations.

Our earlier experiments show that by carefully exploiting streaming parallelism, we can significantly improve application performance. We now take a closer look at three representative benchmarks, fftx1y1, fwt and gesummv, to get a better understanding of streaming performance on XeonPhi. These benchmarks represent different degrees of benefit obtained from streamed parallelism (with speedups of 2$\times$, 1.5$\times$ and 1$\times$, respectively). We use the following analytical model to break down the execution time of a multi-stream program:

$T=T_{m}+T_{k}+T_{c}-T_{o}$ (1)

where $T_{m}$ is the host-device data transfer time, $T_{k}$ is the kernel execution time, $T_{c}$ is the overhead for initializing the context, and $T_{o}$ is the overlapping time between data transfer and kernel execution. We measure $T$, $T_{m}$, $T_{k}$, and $T_{c}$, and use the measurements to calculate $T_{o}$.

Figure 18 gives the breakdown of the five components in Equation 1. For each testing program, we compare the single-stream configuration against the best-performing multi-stream configuration. The host-device data transfer time, $T_{m}$, is nearly constant across single- and multi-stream configurations, but multi-streaming can reduce the kernel execution time, $T_{k}$, by exploiting the spatial sharing of processing resources among computation tasks.
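As an aside, $T_{o}$ is the only term in Equation (1) that cannot be measured directly; it is recovered by rearranging the equation. A trivial sketch with made-up numbers:

```python
def overlap_time(T, T_m, T_k, T_c):
    # Rearranging Equation (1): T_o = T_m + T_k + T_c - T.
    return T_m + T_k + T_c - T

# Illustrative (made-up) measurements: a 100 ms run with 40 ms of data
# transfer, 70 ms of kernel time and 5 ms of context setup implies
# 15 ms of communication-computation overlap.
assert overlap_time(100, 40, 70, 5) == 15
```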
The overhead of initializing the hStreams context, $T_{c}$, depends on the kernel execution time. For fftx1y1 and fwt, whose kernels run for a sufficiently long time, this one-off runtime overhead is negligible. However, for gesummv, this overhead cannot be ignored due to the relatively short kernel running time. The contribution of overlapping host-device communications with kernel execution, $T_{o}$, varies across programs. For fftx1y1 and fwt, it accounts for around 50% of $T_{m}$, suggesting that exploiting temporal sharing to overlap communication with kernel execution can amortize the host-device communication overhead. For gesummv, $T_{o}$ is small due to the little alignment between data transfer and kernel execution; as such, there is little benefit in exploiting temporal sharing for this program.

This experiment gives a more detailed analysis of the benefits of exploiting multiple streams. The results reinforce our claim that the benefit of streaming parallelism depends on the computation kernel, and hence an adaptive scheme for choosing the optimal stream configuration is needed. Our work aims to offer such a capability.

### 6.6 Analysis of Predictive Modeling Techniques

In this section, we analyze the working mechanism of our predictive model, using XeonPhi as the evaluation platform.

#### 6.6.1 Comparison to Alternative Modeling Techniques

We compare our MLP-based model against four widely used regression methods: DCT (Decision Tree), RF (Random Forest), XGB (eXtreme Gradient Boosting) and SVM (Support Vector Machine), as well as four classification models: SVM, DCT, MLP and KNN (K-Nearest Neighbors). We use the radial basis function kernel for the SVM models. For each technique, we follow the same training methodology and use the same features and training examples to build a model. For the classification models, we apply the label merging process described in our prior work [16] to improve the prediction accuracy.

Table V compares the training overhead, the average prediction time and the achieved average speedup for each model. We note that training a regression-based SVM model has the largest overhead. Although training a DCT has less overhead than our MLP-based regression model, the MLP gives better prediction performance. The RF and XGB models are based on decision trees, but they do not yield better performance. Compared to regression models, a classification model takes less time to train and make predictions. However, classification models give worse performance than regression models, as they require more training data to cover the optimization space. Overall, we choose a regression-based approach and employ an MLP because it gives the best overall prediction performance and has a modest training overhead.

Table V: Comparison to alternative modeling techniques

| Technique | Training time | Avg. pred. time | Avg. speedup |
|---|---|---|---|
| SVM (regression) | 100 hours | 2280 ms | 1.56 |
| DCT (regression) | 65.57 seconds | 0.74 ms | 1.51 |
| RF (regression) | 317.89 seconds | 11.94 ms | 1.51 |
| XGB (regression) | 28.46 seconds | 0.74 ms | 1.49 |
| MLP (regression, ours) | 245.8 seconds | 0.76 ms | 1.57 |
| SVM (classifier) | 1.28 seconds | 0.10 ms | 1.53 |
| DCT (classifier) | 0.79 seconds | 0.05 ms | 1.38 |
| MLP (classifier) | 46.45 seconds | 0.05 ms | 1.41 |
| KNN (classifier) | 0.22 seconds | 0.23 ms | 1.43 |

#### 6.6.2 Feature Engineering

Feature engineering has a significant impact on the performance of a machine learning model (Section 3.2). Here we quantify the impact of feature engineering methods.
In this work, we consider three standard feature engineering approaches: standardization, normalization and dimension reduction.

Standardization converts all feature values to a common range, e.g., between 0 and 1. The idea is to prevent a feature's value range from dominating the importance of that feature. In this work, we apply a commonly used standardization method called _Z-score_ [29] to standardize the raw feature values and the speedups (i.e., the prediction targets) in the training data. We found that feature standardization improves the achieved speedup by 3% on average, and speedup standardization improves the achieved speedup by 5% on average.

Normalization transforms the feature values so that they approximately follow a normal distribution. We tested a range of normalization methods, including the square root, the reciprocal of the square root, and the natural logarithm transformations. However, we found that normalization does not improve our model's prediction accuracy.

Dimension reduction reduces the number of features, which is often useful when the number of training examples is not proportional to the number of feature dimensions. In this work, we apply factor analysis (FA) [30] and principal component analysis (PCA) [31] to the raw features. Applying PCA and using 9 PCA components gives the best overall result, improving the average speedup by 17%. PCA outperforms FA, which gives only an average 3% improvement in the achieved speedup.

#### 6.6.3 MLP Parameter Tuning

We now discuss the impact of the MLP parameter choices. There are four configurable parameters for an MLP model: the activation function, the number of hidden layers, the number of neurons, and the learning algorithm (i.e., the solver). For the activation function, we consider identity, logistic, tanh and relu. For hidden layers and neurons, we vary the number of hidden layers from 1 to 5 and the number of neurons per layer from 3 to 100. For the solver, we consider three commonly used weight optimizers: lbfgs, sgd and adam. We use scikit-learn implementations of the activation functions and solvers. Our experimental results suggest that the best-performing activation function and solver are tanh and adam, respectively, and that using three hidden layers with 9 neurons per layer gives the best overall results on our training data. Overall, tuning the MLP model parameters improves the average speedup by 5% over the default parameter setting.

Figure 19: A Hinton diagram showing the impact of each feature used by the performance model on the resultant application performance. The larger the box, the more likely a feature has a greater impact on the performance of the respective benchmark.

#### 6.6.4 Impact of Individual Features

In this experiment, we consider the impact of each individual feature on the resulting performance. Figure 19 presents a Hinton diagram illustrating how much each feature contributes to the performance model's prediction accuracy (which in turn affects the resulting application performance). The larger the box, the more significant a feature is for a given program's performance. Here, the x-axis denotes the programs, and the y-axis denotes the features used by our performance model. The impact of a feature is quantified by measuring how much speedup improvement can be obtained if that feature is used by the performance model. Note that this is a post-hoc analysis and, in general, we cannot know in advance the importance of a feature for _unseen_ programs.
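Conceptually, the analysis behind Figure 19 boils down to a leave-one-feature-out loop. The sketch below assumes a hypothetical `speedup_with` helper that retrains the performance model on a given feature subset and reports the speedup achieved on the benchmark under study:

```python
def feature_impact(features, speedup_with):
    # Impact of feature f = speedup with the full feature set minus the
    # speedup achieved when f is left out.
    full = speedup_with(features)
    return {f: full - speedup_with([g for g in features if g != f])
            for f in features}

# Toy illustration with a made-up speedup function:
impact = feature_impact(
    ["max blocks", "dts"],
    lambda s: 1.0 + 0.3 * ("max blocks" in s) + 0.1 * ("dts" in s))
# -> {"max blocks": 0.3, "dts": 0.1}
```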
Figure 19 shows that all of the features are important for the set of benchmarks targeted in this work, but the importance of individual features varies across programs. This diagram illustrates how hard it would be to develop an analytical model that captures the diverse behaviors and characteristics of streaming programs.

## 7 Related Work

Our work builds upon the following past foundations, while qualitatively differing from each.

Task Scheduling. There is considerable work on distributing work across heterogeneous processors to improve application performance [32, 33, 34]. Prior work in the area typically assumes that the processor configuration is fixed and relies on the operating system to schedule parallel tasks across processing units. Recent studies show that by partitioning the processing units into groups it is possible to significantly improve application performance by overlapping host-device communication and computation on coprocessors like the Intel XeonPhi [14, 6]. However, existing approaches rely on static tuning to find the processor partition and the best number of streams to run within a partition. As a result, previous approaches cannot adapt to changes in program inputs. As a departure from prior work, we develop an automatic approach that dynamically adjusts the processor partition and task granularity at runtime, considering the characteristics of applications and input datasets; our approach can thereby adapt to changes in program inputs.

Domain-specific Optimizations. There is considerable work on domain-specific optimization on Intel XeonPhi. Cheng _et al._ [35] and Jha _et al._ [36] show that in-memory database applications suffer from under-utilization of processor resources and hence require a fine-grained tuning approach. MrPhi is a framework for optimizing MapReduce workloads on XeonPhi [37]. It employs a set of techniques to improve resource utilization and so obtain higher application performance. Other works look at performance optimization for numerical solvers [38], sparse matrix-vector multiplication [39, 40], and dynamic stochastic economic models [39]. Ferrão _et al._ [41] and Memeti _et al._ [42] develop stream processing frameworks for XeonPhi to increase programming productivity; their runtimes can automatically distribute workloads across CPUs and accelerating devices. These approaches improve processor utilization by adjusting the algorithmic design, which is complementary to our work on tuning multi-streaming parallelism for data-parallel applications.

Multiple Streams Modeling. Gomez-Luna _et al._ [11] develop a set of models to estimate the asynchronous data transfer overhead on different GPU architectures. The models can be used to estimate the optimal number of streams to use on a given GPU platform. Werkhoven _et al._ [10] present an analytical model to determine when to apply an overlapping method on GPUs. Liu _et al._ [12] also develop an analytical approach to determine the optimal number of streams to use on GPUs. However, none of these approaches considers processor partitioning. As we have shown in Section 6.3, ignoring the processor partitioning parameter can lead to poor performance on Intel XeonPhi. Furthermore, these hand-crafted models have the drawback of not being portable across architectures, as each model is tightly coupled to a specific GPU architecture. Our work advances prior work by employing machine learning to automatically learn the optimal processor partition and the number of streams/tasks to use.
Since our models are automatically learned from empirical observations, one can easily re-learn a model for a new architecture.

Predictive Modeling. Recent studies have shown that machine learning based predictive modeling is effective for code optimization [43, 44], performance prediction [45, 46], parallelism mapping [47, 48, 20, 49, 50], and task scheduling [51, 52, 53, 54, 55, 56]. Its great advantage is its ability to adapt to ever-changing platforms, as it makes no prior assumptions about their behavior. The work presented by Wen _et al._ [57] employs SVMs to develop a binary classifier to predict whether a given OpenCL kernel can achieve a large speedup or not. Our work differs from [57] in that it targets a different architecture and programming model, and it predicts from a larger number of configurations instead of making a binary prediction. Our prior work developed an SVM based classifier to predict the optimal stream configuration for Intel XeonPhi [16]. However, it requires sufficient training data samples to cover all possible stream configurations. Our approach improves upon the prior work by directly modeling the impact of the stream configuration; as a result, it can make predictions for any stream configuration (even those not seen in the training data).

Autotuning Parallel Programs. Our approach is closely related to autotuning, which searches for the best-performing optimization configuration [58, 59]. This technique has been demonstrated to be effective for making algorithmic choices [60], tuning GPU code [61, 62, 63], optimizing structured parallel programs [64, 65, 66] and non-uniform memory access (NUMA) architectures [67], and, more recently, for deep neural networks [68]. Many of the prior works in this area employ an evolutionary-based approach, applying and profiling candidate optimization options to choose a good option to use. One of the key challenges for autotuning is how to avoid the profiling overhead, which can be prohibitively expensive. We do so by using a performance model to quickly evaluate the profitability of a candidate optimization option. We show that our approach has low runtime overhead, which thus permits us to apply it at runtime to best match the optimization strategy to the program input. Furthermore, our work is the first for tuning heterogeneous streaming parallelism on heterogeneous many-cores (XeonPhis and GPUs).

Automatic Generation of Parallel Programs. The OpenMPC compiler [69] translates OpenMP to CUDA programs. Wang _et al._ [24, 20, 70] translate OpenMP to OpenCL programs and use machine learning to select the most suitable device from the host CPU and the GPU to run the code. Rawat _et al._ present an automatic approach to generate GPU code from a domain-specific language (DSL) for stencil programs [71]. All of the above approaches target GPUs and do not utilize the multi-streaming strategy.

## 8 Conclusion

This article has presented an automatic approach to exploit streaming parallelism on heterogeneous many-cores. Central to our approach is a machine learning-based model that predicts the resulting performance when running the target application under a given streamed configuration. The performance predictor is then used as a cost function to quickly rank candidate configurations at runtime, to determine which stream configuration should be used on a per-program, per-dataset basis. We have evaluated our approach on an Intel XeonPhi and an NVIDIA GTX 1080 Ti GPU, with 39 representative benchmarks.
Experimental results show that our approach delivers an average speedup of 1.6$\times$ and 1.1$\times$ on XeonPhi and the GPU, respectively. These results translate to over 93% of the best-available performance.

## Acknowledgment

This work was partially funded by the National Key Research and Development Program of China under Grant No. 2018YFB0204301, and the National Natural Science Foundation of China under Grant agreements 61972408, 61602501 and 61872294. For any correspondence, please contact Jianbin Fang (Email: [email protected]).

## References

* [1] J. D. Owens _et al._, "GPU computing," _Proceedings of the IEEE_, 2008.
* [2] A. Li _et al._, "Exploring and analyzing the real impact of modern on-package memory on HPC scientific kernels," in _SC_, 2017.
* [3] C. Chen _et al._, "LU factorization on heterogeneous systems: an energy-efficient approach towards high performance," _Computing_, 2017.
* [4] M. R. Meswani _et al._, "Modeling and predicting performance of high performance computing applications on hardware accelerators," _IJHPCA_, 2013.
* [5] J. Fang _et al._, "A comprehensive performance comparison of CUDA and OpenCL," in _ICPP_, 2011.
* [6] C. J. Newburn _et al._, "Heterogeneous streaming," in _IPDPSW_, 2016.
* [7] _CUDA C Best Practices Guide Version 7.0_, NVIDIA Inc., 2015.
* [8] The Khronos OpenCL Working Group, "OpenCL - The open standard for parallel programming of heterogeneous systems," http://www.khronos.org/opencl/, 2016.
* [9] _hStreams Architecture for MPSS 3.5_, Intel Inc., 2015.
* [10] B. Van Werkhoven _et al._, "Performance models for CPU-GPU data transfers," in _CCGrid_, 2014.
* [11] J. Gómez-Luna _et al._, "Performance models for asynchronous data transfers on consumer graphics processing units," _JPDC_, 2012.
* [12] B. Liu _et al._, "Software pipelining for graphic processing unit acceleration: Partition, scheduling and granularity," _IJHPCA_, 2016.
* [13] Z. Li _et al._, "Streaming applications on heterogeneous platforms," in _NPC_, 2016.
* [14] J. Fang _et al._, "Evaluating multiple streams on heterogeneous platforms," _Parallel Processing Letters_, 2016.
* [15] Z. Li _et al._, "Evaluating the performance impact of multiple streams on the MIC-based heterogeneous platform," in _IPDPSW_, 2016.
* [16] P. Zhang _et al._, "Auto-tuning streamed applications on Intel Xeon Phi," in _IPDPS_, 2018.
* [17] C. Lattner and V. Adve, "LLVM: A compilation framework for lifelong program analysis & transformation," in _CGO_, 2004.
* [18] F. Pedregosa _et al._, "Scikit-learn: Machine learning in Python," _Journal of Machine Learning Research_, 2011.
* [19] G. Fursin _et al._, "Milepost GCC: machine learning based research compiler," in _GCC Summit_, 2008.
* [20] Z. Wang _et al._, "Automatic and portable mapping of data parallel programs to OpenCL for GPU-based heterogeneous systems," _ACM TACO_, 2015.
* [21] S. Boslaugh, _Statistics in a Nutshell, 2nd Edition_. O'Reilly Media, 2012.
* [22] B. F. Manly, _Multivariate Statistical Methods: A Primer_. CRC Press, 2004.
* [23] S. Lee _et al._, "OpenMP to GPGPU: a compiler framework for automatic translation and optimization," _ACM SIGPLAN Notices_, 2009.
* [24] D. Grewe _et al._, "Portable mapping of data parallel programs to OpenCL for heterogeneous systems," in _CGO_, 2013.
* [25] D. Mikushin _et al._, "KernelGen: the design and implementation of a next generation compiler platform for accelerating numerical models on GPUs," in _IPDPSW_, 2014.
* [26] T. Grosser and T. Hoefler, "Polly-ACC: transparent compilation to heterogeneous hardware," in _Supercomputing_, 2016.
Hoefler, “Polly-acc transparent compilation to heterogeneous hardware,” in _Supercomputing_, 2016. * [27] R. Sotomayor _et al._, “Automatic cpu/gpu generation of multi-versioned opencl kernels for c++ scientific applications,” _International Journal of Parallel Programming_, 2017. * [28] W. Ertel, “On the definition of speedup,” in _International Conference on Parallel Architectures and Languages Europe_, 1994. * [29] E. Kreyszig, _Advanced Engineering Mathematics, 10th Edition_, 2009. * [30] R. L. Gorsuch, _Factor Analysis, 2nd Edition_. Routledge, 2014. * [31] H. Hotelling, “Analysis of a complex of statistical variables into principal components.” _Journal of educational psychology_, 1933. * [32] S. Mittal and J. S. Vetter, “A survey of CPU-GPU heterogeneous computing techniques,” _ACM Computing Surveys (CSUR)_, vol. 47, no. 4, p. 69, 2015. * [33] C.-K. Luk _et al._, “Qilin: exploiting parallelism on heterogeneous multiprocessors with adaptive mapping,” in _MICRO_, 2009. * [34] J. Shen _et al._, “Workload partitioning for accelerating applications on heterogeneous platforms,” _IEEE TPDS_, 2016. * [35] X. Cheng _et al._, “Many-core needs fine-grained scheduling: A case study of query processing on intel xeon phi processors,” _JPDC_, 2018. * [36] S. Jha _et al._, “Improving main memory hash joins on intel xeon phi processors: An experimental approach,” _PVLDB_, 2015. * [37] M. Lu _et al._, “Mrphi: An optimized mapreduce framework on intel xeon phi coprocessors,” _IEEE TPDS_, 2015. * [38] A. Lastovetsky _et al._, “Model-based optimization of eulag kernel on intel xeon phi through load imbalancing,” _IEEE TPDS_, 2017. * [39] W. T. Tang _et al._, “Optimizing and auto-tuning scale-free sparse matrix-vector multiplication on intel xeon phi,” in _CGO_, 2015. * [40] M. E. Guney _et al._, “Optimizing matrix multiplication on intel® xeon phi™ x200 architecture,” in _ARITH_, 2017. * [41] P. Ferrão _et al._, “Stream processing on hybrid cpu/intel® xeon phi systems,” in _European Conference on Parallel Processing_, 2018. * [42] S. Memeti and S. Pllana, “Hstream: A directive-based language extension for heterogeneous stream computing,” in _CSE_, 2018. * [43] C. Cummins _et al._, “End-to-end deep learning of optimization heuristics,” in _PACT_, 2017. * [44] Z. Wang and M. O’Boyle, “Machine learning in compiler optimisation,” _Proc. IEEE_, 2018. * [45] J. Zhao _et al._, “Predicting cross-core performance interference on multicore processors with regression analysis,” _IEEE TPDS_, 2016. * [46] Z. Wang and M. F. O’Boyle, “Using machine learning to partition streaming programs,” _ACM TACO_, 2013. * [47] G. Tournavitis _et al._, “Towards a holistic approach to auto-parallelization: integrating profile-driven parallelism detection and machine-learning based mapping,” _ACM Sigplan Notices_, 2009. * [48] Z. Wang and M. F. O’Boyle, “Partitioning streaming parallelism for multi-cores: a machine learning based approach,” in _PACT_, 2010. * [49] Z. Wang _et al._, “Integrating profile-driven parallelism detection and machine-learning-based mapping,” _ACM TACO_, 2014. * [50] B. Taylor _et al._, “Adaptive optimization for opencl programs on embedded heterogeneous systems,” in _LCTES_, 2017. * [51] M. K. Emani _et al._, “Smart, adaptive mapping of parallelism in the presence of external workload,” in _CGO_, 2013. * [52] V. S. Marco _et al._, “Improving spark application throughput via memory aware task co-location: A mixture of experts approach,” in _Middleware_, 2017.
* [53] J. Ren _et al._, “Optimise web browsing on heterogeneous mobile platforms: a machine learning based approach,” in _INFOCOM_, 2017. * [54] ——, “Proteus: Network-aware web browsing on heterogeneous mobile systems,” in _CoNEXT ’18_, 2018. * [55] L. Yuan _et al._, “Using machine learning to optimize web interactions on heterogeneous mobile systems,” _IEEE Access_, 2019. * [56] B. Taylor _et al._, “Adaptive deep learning model selection on embedded systems,” _ACM SIGPLAN Notices_, 2018. * [57] Y. Wen _et al._, “Smart multi-task scheduling for opencl programs on cpu/gpu heterogeneous platforms,” in _HiPC_, 2014. * [58] K. Datta _et al._, “Stencil computation optimization and auto-tuning on state-of-the-art multicore architectures,” in _Supercomputing_, 2008. * [59] J. Ansel _et al._, “Opentuner: An extensible framework for program autotuning,” in _PACT_, 2014. * [60] J. Ragan-Kelley _et al._, “Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines,” in _PLDI_, 2013. * [61] A. Nukada and S. Matsuoka, “Auto-tuning 3-d fft library for cuda gpus,” in _SC_, 2009. * [62] P. Tillet and D. Cox, “Input-aware auto-tuning of compute-bound hpc kernels,” in _SC_, 2017. * [63] T. T. Dao and J. Lee, “An auto-tuner for opencl work-group size on gpus,” _IEEE TPDS_, 2018. * [64] U. Dastgeer _et al._, “Auto-tuning skepu: a multi-backend skeleton programming framework for multi-gpu systems,” in _IWMSE_, 2011. * [65] J. J. Thiagarajan _et al._, “Bootstrapping parameter space exploration for fast tuning,” in _Supercomputing_, 2018. * [66] D. Chen _et al._, “Optimizing sparse matrix–vector multiplications on an armv8-based many-core architecture,” _International Journal of Parallel Programming_, 2019. * [67] T. Katagiri _et al._, “Auto-tuning on numa and many-core environments with an fdm code,” in _IPDPSW_, 2017. * [68] L. Liao _et al._, “Uhcl-darknet: An opencl-based deep neural network framework for heterogeneous multi-/many-core clusters,” in _ICPP_, 2018. * [69] S. Lee and R. Eigenmann, “Openmpc: Extended openmp programming and tuning for gpus,” in _SC_, 2010. * [70] Z. Wang _et al._, “Exploitation of gpus for the parallelisation of probably parallel legacy code,” in _CC ’14_, 2014. * [71] P. S. Rawat _et al._, “Domain-specific optimization and generation of high-performance gpu code for stencil computations,” _Proceedings of the IEEE_, 2018.
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN) CERN-EP-2020-020 LHCb-PAPER-2019-043 9 March 2020 Search for the lepton flavour violating decay ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$ using $B_{s2}^{*0}$ decays LHCb collaboration† (†Authors are listed at the end of this paper.) A search is presented for the lepton flavour violating decay ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$ using a sample of proton–proton collisions at centre-of-mass energies of 7, 8, and $13\,\mathrm{TeV}$, collected with the LHCb detector and corresponding to a total integrated luminosity of $9\,\mathrm{fb}^{-1}$. The $\tau$ leptons are selected inclusively, primarily via decays with a single charged particle. The four-momentum of the $\tau$ lepton is determined by using ${B}^{+}$ mesons from ${B_{s2}^{*0}}\rightarrow{{B}^{+}}{{K}^{-}}$ decays. No significant excess is observed, and an upper limit is set on the branching fraction, ${\mathcal{B}}\quantity({{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}})<3.9\times 10^{-5}$ at 90% confidence level. The obtained limit is comparable to the world-best limit. Submitted to JHEP © 2024 CERN for the benefit of the LHCb collaboration. CC BY 4.0 licence. ## 1 Introduction A number of experimental hints of lepton flavour universality violation in the semileptonic transitions $b\!\rightarrow s\ell^{+}\ell^{-}$ [1, 2, 3] and $b\!\rightarrow c\ell^{-}{\overline{\nu}}_{\ell}$ [4, 5, 6, 7, 8, 9] have recently been found (the inclusion of charge-conjugate processes is implied throughout). In general, physics beyond the Standard Model that generates lepton flavour non-universality is likely to also produce direct lepton flavour violation [10]. Theoretical models seeking to simultaneously explain all these anomalies, for example with a vector leptoquark, often lead to relatively large branching fractions for the decays ${B}\!\rightarrow{K}\mu^{\pm}\tau^{\mp}$ [11, 12, 13, 14, 15, 16]. The branching fractions for the two $\mu\tau$ charge combinations are not in general the same, as they depend on the details of the physics mechanism producing the decay. In this paper, we present a search for the decay ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$. From an experimental point of view, this combination is preferred over ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{+}}{\tau^{-}}$ as it has a lower background from semileptonic ${B}\!\rightarrow\overline{D}X{\mu^{+}}{{\nu}_{\mu}}$ decays, because Cabibbo-favoured decays of the charm meson are likely to lead to kaons of the same charge as the muon. An upper limit on the branching fraction for the signal decay has been previously set by the BaBar collaboration [17], ${\mathcal{B}}\quantity({{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}})<2.8\times 10^{-5}$ at 90% confidence level (CL). We reconstruct the full four-momentum of the $\tau$ lepton using ${B}^{+}$ mesons from the decay ${B_{s2}^{*0}}\!\rightarrow{{B}^{+}}{{K}^{-}}$, which amounts to about 1% of ${B}^{+}$ production. By reconstructing the decay vertex of the ${B}^{+}$ meson from the ${{K}^{+}}{\mu^{-}}$ pair and the momentum of the ${K}^{-}$ meson, it is possible to determine the momentum of the ${B}^{+}$ meson up to a quadratic ambiguity by imposing mass constraints on the $B_{s2}^{*0}$ and ${B}^{+}$ mesons [18].
This technique was first used to study relative branching fractions in ${{B}^{+}}\!\rightarrow{\overline{D}{}^{0}}X{\mu^{+}}\nu$ decays [19]. We then search for a peak in the missing-mass squared distribution corresponding to the $\tau$ mass squared, $m_{\tau}^{2}$. Even signal ${B}^{+}$ mesons not coming from a $B_{s2}^{*0}$ decay show a peak at $m_{\tau}^{2}$. We account for the contribution of these non-$B_{s2}^{*0}$ candidates in the analysis. The $\tau$ leptons are selected inclusively, as we only require one additional charged track near the ${{K}^{+}}{\mu^{-}}$ pair to help discriminate against background. To normalise the branching fraction, we use the decay ${{B}^{+}}\!\rightarrow{J/\psi}{{K}^{+}}$, with ${J/\psi}\!\rightarrow{\mu^{+}}{\mu^{-}}$. The normalisation channel is also used to quantify the contributions from $B_{s2}^{*0}$ decays, as well as non-$B_{s2}^{*0}$ candidates with nearby kaons. In addition to providing the missing-mass discriminating variable, this method allows us to study the control sample composed of same-sign ${{B}^{+}}{{K}^{+}}$ decays, which does not include any $B_{s2}^{*0}$ component. We use this sample to optimise the signal selection, and to motivate our description of the background missing-mass shape. ## 2 Detector, data samples, and simulation The LHCb detector [20, 21] is a single-arm forward spectrometer covering the pseudorapidity range $2<\eta<5$, designed for the study of particles containing $b$ or $c$ quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about $4\,\mathrm{Tm}$, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet. The tracking system provides a measurement of the momentum, $p$, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV (natural units with $c=1$ are used throughout). The minimum distance of a track to a primary $pp$ interaction vertex (PV), the impact parameter, is measured with a resolution of $(15+29/p_{\mathrm{T}})\,\upmu\text{m}$, where $p_{\mathrm{T}}$ is the component of the momentum transverse to the beam, in GeV. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The online event selection is performed by a trigger, which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. At the hardware trigger stage, events are required to have a muon with high $p_{\mathrm{T}}$ or a hadron, photon or electron with high transverse energy deposited in the calorimeters. The software trigger requires a two-, three- or four-track secondary vertex with a significant displacement from any primary vertex. We use data samples collected from 2011 to 2018, at centre-of-mass energies of 7, 8, and $13\,\mathrm{TeV}$, corresponding to an integrated luminosity of $9\,\mathrm{fb}^{-1}$. We model signal and normalisation decays using simulation.
In the simulation, $pp$ collisions are generated using Pythia [22, 23] with a specific LHCb configuration [24]. Decays of hadronic particles are described by EvtGen [25]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [26, 27] as described in Ref. [28]. For the signal, we consider both a phase-space model and variations of the decay kinematics with effective operators for the $b\!\rightarrow s{\mu^{+}}{\tau^{-}}$ interaction and their corresponding Wilson coefficients, using the distributions from Ref. [29] and the form factors from Ref. [30]. The branching fraction limit is determined for various hypotheses: for the phase-space decay, for a decay via the vector or axial-vector operators $\mathcal{O}_{9}^{(^{\prime})}$ or $\mathcal{O}_{10}^{(^{\prime})}$, and for a decay using the scalar or pseudoscalar operators $\mathcal{O}^{(^{\prime})}_{S}$ or $\mathcal{O}^{(^{\prime})}_{P}$ [29]. ## 3 Selection and missing mass calculation The selection of ${B}^{+}$ candidates begins with a ${{K}^{+}}{\mu^{-}}$ pair with an invariant mass $m_{{{K}^{+}}{\mu^{-}}}>1800\,\mathrm{MeV}$ to reduce background from semileptonic charm decays. The ${K}^{+}$ and $\mu^{-}$ candidates are formed from high-quality tracks consistent with kaon and muon hypotheses and inconsistent with being produced at any PV in the event. The ${{K}^{+}}{\mu^{-}}$ vertex must be of high quality and well separated from any PV. To better separate signal candidates with $\tau$ leptons from background, we require an additional track, labelled $t^{+}$, with charge opposite to that of the muon. By adding this third track, we also fully reconstruct the normalisation mode ${{B}^{+}}\!\rightarrow{J/\psi}{{K}^{+}}$, with ${J/\psi}\!\rightarrow{\mu^{+}}{\mu^{-}}$. Many background candidates are expected to come from $B$-meson decays of the form ${B}\!\rightarrow\overline{D}\quantity(\!\rightarrow{{K}^{+}}X{\mu^{-}}){{K}^{+}}Y$, where $X$ and $Y$ refer to any number of additional particles. In these cases the kaon originating from the $\overline{D}$ meson is assigned as the additional track. Since only approximately 2% of $\tau$ decays contain a charged kaon, we apply particle identification requirements so that the track is unlikely to be a charged kaon. Events in which a candidate ${\tau^{+}}\!\rightarrow{{\pi}^{+}}{{\pi}^{-}}{{\pi}^{+}}{{\overline{\nu}}_{\tau}}$ decay is found are not used in this search, to avoid overlap with ongoing searches at LHCb exclusively using this decay channel. In addition, events in which we find multiple candidates are not used in this analysis. These requirements do remove signal with multi-prong $\tau$ decays, with an overall loss of less than 3%; events with multiple candidates, however, are more likely to come from background. We split the data samples into signal and normalisation regions based on the invariant mass of the ${{K}^{+}}{\mu^{-}}t^{+}$ triple, using the muon hypothesis for the third track. Candidates with $m_{K\mu\mu}<4800\,\mathrm{MeV}$ fall into the signal region, while candidates with $5180<m_{K\mu\mu}<5380\,\mathrm{MeV}$ and $\absolutevalue{m_{\mu\mu}-m_{J/\psi}}<40\,\mathrm{MeV}$ fall into the normalisation region.
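For illustration, the signal/normalisation split quoted above can be written as simple mask logic; this is a hedged sketch with hypothetical array names, not analysis code:

```python
import numpy as np

M_JPSI = 3096.9  # MeV, known J/psi mass

def split_regions(m_kmumu, m_mumu):
    """Masks for the signal and normalisation regions (masses in MeV).

    m_kmumu: K+ mu- t+ mass with the muon hypothesis for the third track;
    m_mumu:  dimuon mass. Both are hypothetical NumPy arrays.
    """
    signal = m_kmumu < 4800.0
    normalisation = (
        (m_kmumu > 5180.0) & (m_kmumu < 5380.0)
        & (np.abs(m_mumu - M_JPSI) < 40.0)
    )
    return signal, normalisation
```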
The ${B}^{+}$ candidate direction is estimated using the PV and ${{K}^{+}}{\mu^{-}}$ vertex positions. We next consider prompt tracks, _i.e._ those that are consistent with being produced at that PV. Those tracks identified as kaons, with a charge opposite to that of the kaon in the ${{K}^{+}}{\mu^{-}}$ pair and a small perpendicular momentum relative to the ${B}^{+}$ candidate direction, are combined with the ${B}^{+}$ candidates to form $B_{s2}^{*0}$ candidates. We refer to this sample as the opposite-sign kaon (OS$K$) sample. Additionally, we select a control sample, referred to as the same-sign kaon (SS$K$) sample, by adding prompt kaons of the same sign as the kaon in the ${{K}^{+}}{\mu^{-}}$ pair. From Ref. [19], the two $B$-meson energy solutions are

$$\begin{aligned} E_{B} &= \frac{\Delta^{2}}{2E_{K}}\,\frac{1}{1-\quantity(p_{K}/E_{K})^{2}\cos^{2}\theta}\quantity[1\pm\sqrt{d}], && (1)\\ \text{where}\quad d &= \frac{p_{K}^{2}}{E_{K}^{2}}\cos^{2}\theta-\frac{4m_{B}^{2}p_{K}^{2}\cos^{2}\theta}{\Delta^{4}}\quantity(1-\frac{p_{K}^{2}}{E_{K}^{2}}\cos^{2}\theta), && (2)\\ \Delta^{2} &= m_{BK}^{2}-m_{B}^{2}-m_{K}^{2}, && (3) \end{aligned}$$

where $m_{BK}=m_{{B_{s2}^{*0}}}$ is the assumed ${{B}^{+}}{{K}^{-}}$ mass, $p_{K}$ and $E_{K}$ are the reconstructed prompt-kaon momentum and energy, and $\theta$ is the laboratory-frame angle between the prompt kaon and $B$-meson directions. The missing four-momentum of the $\tau$ lepton, $P_{\text{miss}}$, is then reconstructed as $P_{B}-P_{{{K}^{+}}{\mu^{-}}}$, where $P_{B}$ and $P_{{{K}^{+}}{\mu^{-}}}$ are the four-momenta of the $B$ meson and the ${{K}^{+}}{\mu^{-}}$ pair. The missing mass squared is calculated using the lowest-energy real solution for which the resulting missing energy is greater than the reconstructed energy of the third track under a pion mass hypothesis. With this choice, we correctly reconstruct the energy of signal decays in simulation in more than 75% of cases. About 9% of all signal decays have no such solution and are lost. Both signal and normalisation candidates, as well as the SS$K$ control-sample candidates, are required to pass this procedure. Candidates in the signal region are additionally required to have the residual missing mass squared, defined as the squared four-momentum difference of the $B$ meson and the ${{K}^{+}}{\mu^{-}}t^{+}$ triple, $\quantity(P_{B}-P_{{{K}^{+}}{\mu^{-}}}-P_{t})^{2}$, greater than $-0.5\,\mathrm{GeV}^{2}$. This requirement removes background and only poorly reconstructed signal candidates, which do not peak at the $\tau$ mass squared. The minimum mass difference, defined in Ref. [19] as

$$\Delta m_{\mathrm{min}}=\sqrt{m_{B}^{2}+m_{K}^{2}+2m_{B}\sqrt{p_{K}^{2}\sin^{2}\theta+m_{K}^{2}}}-m_{B}-m_{K}, \qquad (4)$$

is required to be greater than $30\,\mathrm{MeV}$. This removes contributions from $B_{s1}^{0}$ and ${B_{s2}^{*0}}\!\rightarrow{B}^{*+}{{K}^{-}}$ decays, as well as background in which a kaon from the $B$ decay is wrongly associated to the primary vertex. Missing-mass distributions for the signal simulation and the full data sample after the above selection are shown in Fig. 1. All signal decays, whether they come from a $B_{s2}^{*0}$ meson or not, peak at the known $m_{\tau}^{2}$; however, the non-$B_{s2}^{*0}$ candidates have a much wider peak than the $B_{s2}^{*0}$ ones. The data distributions are shown for both the OS$K$ and SS$K$ samples.
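As a numerical illustration of Eqs. (1)–(4) (not part of the analysis), the sketch below evaluates the two energy solutions and the minimum mass difference for hypothetical prompt-kaon kinematics, using the known $B^{+}$, $K^{-}$, and $B_{s2}^{*0}$ masses:

```python
import numpy as np

M_B, M_K = 5279.3, 493.7   # MeV: known B+ and K- masses
M_BS2 = 5839.9             # MeV: known B*s2 mass, used as the m_BK constraint

def b_energy_solutions(p_k, cos_theta, m_bk=M_BS2):
    """Two B-meson energy solutions of Eq. (1); inputs and outputs in MeV."""
    e_k = np.hypot(p_k, M_K)                 # prompt-kaon energy
    delta2 = m_bk**2 - M_B**2 - M_K**2       # Eq. (3)
    bc2 = (p_k / e_k) ** 2 * cos_theta**2
    d = bc2 - 4 * M_B**2 * p_k**2 * cos_theta**2 * (1 - bc2) / delta2**2  # Eq. (2)
    if d < 0:
        return None                          # no real solution: candidate is lost
    pref = delta2 / (2 * e_k) / (1 - bc2)
    return pref * (1 - np.sqrt(d)), pref * (1 + np.sqrt(d))

def delta_m_min(p_k, sin_theta):
    """Minimum mass difference of Eq. (4), in MeV."""
    s = np.sqrt((p_k * sin_theta) ** 2 + M_K**2)
    return np.sqrt(M_B**2 + M_K**2 + 2 * M_B * s) - M_B - M_K

# Hypothetical kinematics: a 5 GeV prompt kaon nearly collinear with the B.
print(b_energy_solutions(p_k=5000.0, cos_theta=0.999))  # ~36 GeV and ~66 GeV
print(delta_m_min(p_k=5000.0, sin_theta=0.0447))        # ~44 MeV, passes the 30 MeV cut
```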
The two data distributions have similar shapes, with a broad hump centred near $5\,\mathrm{GeV}^{2}$. We note that the OS$K$ sample has a higher yield than the SS$K$ one; this excess has been observed in both fully and partially reconstructed decays [31, 19]. Figure 1: Missing mass squared, $m_{\mathrm{miss}}^{2}$, distributions for (left) simulated signal ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$ decays and (right) all selected candidates in data before applying the signal optimisation described in Sect. 5. ## 4 Normalisation We determine the yield of the normalisation decay, as well as the relative efficiency of the signal modes with respect to the normalisation mode, separately for each data-taking year. For the normalisation mode, we determine the inclusive yield of ${{B}^{+}}\!\rightarrow{J/\psi}{{K}^{+}}$ decays, whether or not they originate from a $B_{s2}^{*0}$ meson, by a binned maximum-likelihood fit to the ${{K}^{+}}{\mu^{-}}t^{+}$ mass distribution, where we assign the muon mass hypothesis to the third track. The signal is described with a Gaussian distribution, and the background with a linear model. We determine the fraction of the normalisation candidates coming from $B_{s2}^{*0}$ decays using a ${{K}^{+}}{\mu^{-}}t^{+}$ mass fit for the combined-years data sample using the same model as for the separated-years samples, along with a binned maximum-likelihood fit to the measured mass-difference distribution $m_{{{B}^{+}}{{K}^{-}}}-m_{{{B}^{+}}}-m_{{{K}^{-}}}$ around the $B_{s2}^{*0}$ peak. For the latter fit, we describe the signal peak with a Gaussian core that transitions to an exponential tail on each side, and we model the background with a third-degree polynomial. The results of these fits are shown in Fig. 2. The total data sample contains $4240\pm 70$ ${{B}^{+}}\!\rightarrow{J/\psi}{{K}^{+}}$ decays; the fraction originating from $B_{s2}^{*0}$ decays is $f_{B_{s2}^{*0}}=(25.4\pm 1.8)\%$, where the uncertainty combines the statistical and systematic uncertainties from the choice of fit function. The year-to-year variation is not found to be statistically significant, so we use the value obtained from the combined dataset for all years. Figure 2: Distributions of normalisation candidates in (left) mass, $m_{{{K}^{+}}{\mu^{-}}{\mu^{+}}}$, and (right) the mass difference, $m_{{{B}^{+}}{{K}^{-}}}-m_{{{B}^{+}}}-m_{{{K}^{-}}}$. The result of each fit is shown as a solid line, with the background component as a dashed line. The relative efficiency of the signal and normalisation modes is determined using simulation with corrections from data. For $B_{s2}^{*0}$ decays the relative efficiencies in different years average around 30%, with an absolute year-to-year variation of less than 3%. Different signal decay models change the relative efficiency by approximately 10%, with the decays via scalar and pseudoscalar operators having a lower overall efficiency. Signal events in which the ${B}^{+}$ meson does not originate from a $B_{s2}^{*0}$ decay have a lower selection efficiency, primarily because fewer of these candidates pass the residual missing-mass requirement and fall into the missing-mass fit range. Using simulation, we derive an additional efficiency factor for this signal component of $r_{\text{non-}B_{s2}^{*0}}=0.849\pm 0.007$.
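As a back-of-the-envelope cross-check of these numbers (illustrative arithmetic only, not a result quoted by the paper), the $B_{s2}^{*0}$ component of the normalisation sample follows directly from the fitted yield and fraction:

```python
# Combine N(J/psi K+) = 4240 +- 70 with f_Bs2 = (25.4 +- 1.8)%,
# assuming uncorrelated uncertainties for this rough estimate.
n, sn = 4240.0, 70.0
f, sf = 0.254, 0.018
n_bs2 = f * n
s_bs2 = n_bs2 * ((sn / n) ** 2 + (sf / f) ** 2) ** 0.5
print(f"N_Bs2 = {n_bs2:.0f} +- {s_bs2:.0f}")   # ~1077 +- 78
```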
## 5 Multivariate signal selection We further improve the signal selection using a Boosted Decision Tree (BDT) classifier trained with the AdaBoost algorithm [32]. The BDT inputs are primarily chosen to distinguish additional tracks coming from signal $\tau$ lepton decays from various sources of background. Some examples are semileptonic $b$-hadron decays to charm where the charm hadron produces a kaon with charge opposite to that of the muon, or $b$-hadron decays where the muon is produced in the semileptonic decay of a daughter charm hadron. The background training sample is taken from the SS$K$ sample in the $m_{\mathrm{miss}}^{2}$ region around $m_{\tau}^{2}$. This focuses the training on the sources of background which fall near the signal peak. We describe the signal with simulation samples that include only $B_{s2}^{*0}$ decays; the effect of the BDT on non-$B_{s2}^{*0}$ signal simulation is then estimated separately. The training makes use of different topological reconstructions of the ${{K}^{+}}{\mu^{-}}t^{+}$ triple: in addition to the signal selection, we also first combine either the kaon and the track or the muon and the track into a pair before adding the third particle. The pair masses and the flight distance of the pair in each topology help to distinguish the signal from background, for instance when the pair comes from a charm hadron decay. We also include the flight distance of the $\tau$, which we reconstruct as the distance along the $\tau$ trajectory found in the missing-mass calculation from the ${{K}^{+}}{\mu^{-}}$ vertex to the point of closest approach of the third track. The output of a separate isolation discriminant is included to reduce background with additional charged tracks; this discriminant is trained to distinguish additional tracks belonging to the same $b$-hadron decay from other tracks in the event, based on kinematic and topological variables. We perform the rest of the analysis in four bins of the signal-optimisation BDT output, keeping about 70% of all simulated $B_{s2}^{*0}$ signal candidates and about 40% of non-$B_{s2}^{*0}$ signal candidates. The bins are chosen by optimising the expected upper limit using a number of background events derived from the OS$K$ and SS$K$ $m_{\mathrm{miss}}^{2}$ sidebands. ## 6 Background studies The background in this analysis is composed of a large number of different partially reconstructed $b$-hadron decays. None of them, however, produces a narrow peak in $m_{\mathrm{miss}}^{2}$. Only ${B}^{+}$ mesons produced from $B_{s2}^{*0}$ decays have a resolution comparable to the signal. Furthermore, if there is more than one missing particle, then the true missing-mass distribution will be much wider than the expected signal peak. Charm hadrons have masses close to the $\tau$ mass; however, there is no Standard Model decay ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{{D}^{+}}$. Because of the low branching fractions involved, we are not sensitive to decays such as ${{B}^{+}}\!\rightarrow{{K}^{+}}{{\pi}^{-}}{{D}^{+}}$, where the pion is misidentified as a muon. We expect that the missing-mass distribution, summed over many different background components, is smooth, and we model it as a polynomial. These assumptions are tested using simulation and data, and the modelling idea is illustrated by the sketch below.
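The sketch below is illustrative only: it generates a smooth hump with Poisson fluctuations as a stand-in for a control-sample $m_{\mathrm{miss}}^{2}$ distribution and compares polynomial descriptions of increasing degree with a simple least-squares figure of merit. The analysis itself uses maximum-likelihood fits to data and simulation, not this toy.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical smooth "hump" plus Poisson fluctuations, standing in for
# the m_miss^2 distribution of a control sample (GeV^2 on the x axis).
x = np.linspace(1.0, 6.0, 50)
truth = 400 * np.exp(-0.5 * ((x - 5.0) / 2.5) ** 2)
counts = rng.poisson(truth)

for degree in (3, 4, 5):
    coeffs = np.polynomial.polynomial.polyfit(x, counts, degree)
    model = np.polynomial.polynomial.polyval(x, coeffs)
    chi2 = np.sum((counts - model) ** 2 / np.maximum(model, 1.0))
    print(f"degree {degree}: chi2/ndf = {chi2 / (len(x) - degree - 1):.2f}")
```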
We produce fast-simulation samples with RapidSim [33] of a number of potential exclusive background sources from ${B}^{+}$, ${B}^{0}$, ${B}^{0}_{s}$, and $\Lambda^{0}_{b}$ hadrons; the true missing-mass distributions for these decays are smeared to estimate their shapes in data. No sign of any sharply peaking component is found. In data we consider a number of different control samples, namely all possible $K\mu t$ charge combinations in both the OS$K$ and SS$K$ samples, excluding the signal selection of ${{K}^{+}}{\mu^{-}}t^{+}$ in the OS$K$ sample. There is no sign of any narrow peak in any of the distributions, even after applying a tight requirement on the BDT output. Maximum-likelihood fits to the SS$K$ sample using polynomials of different degrees in the restricted $m_{\mathrm{miss}}^{2}$ range from 1 to $6\,\mathrm{GeV}^{2}$ are used to study the background shape in more detail. The optimal number of free polynomial parameters in the most signal-like BDT output bin, based on the best-fit value of $-2\log\mathcal{L}$ penalised by one for each additional parameter, is four. We further study the effect of background modelling by performing a large number of pseudoexperiments, both background-only and with injected signal at branching fractions of $1\times 10^{-5}$ and $2\times 10^{-5}$. In these studies, we first fit a background model of some polynomial degree to one of the control samples. From this background model we generate many pseudodatasets, which we fit with a model of a different degree. Based on these studies, we take into account the systematic uncertainty due to the background modelling by reporting the weakest limit obtained using background descriptions of third-, fourth-, or fifth-degree polynomials, all of which describe the background shapes in the pseudoexperiments well. ## 7 Fit description We search for the ${{K}^{+}}{\mu^{-}}{\tau^{+}}$ missing-mass peak with an unbinned maximum-likelihood fit performed simultaneously in four bins of BDT output in the OS$K$ ${{K}^{+}}{\mu^{-}}t^{+}$ signal channel. The fit is performed in the missing-mass range $1<m_{\mathrm{miss}}^{2}<6\,\mathrm{GeV}^{2}$. The parameter of interest is the branching fraction ${\mathcal{B}}\quantity({{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}})$. We describe the $m_{\mathrm{miss}}^{2}$ shape for the signal component with a generalised hyperbolic distribution with shape parameters obtained from simulation. Two signal shapes are used: one for $B_{s2}^{*0}$ decays, and one for the wider non-$B_{s2}^{*0}$ contribution. We determine the shapes separately in each bin of BDT response. The signal decay model does not significantly affect the signal missing-mass shape. The background is described by polynomial functions which vary independently in each BDT output bin. We base the normalisation of the signal components on the yields of the ${{B}^{+}}\!\rightarrow{J/\psi}{{K}^{+}}$ decays determined in data year by year. We combine this together with the relative efficiencies, $\varepsilon_{\text{rel}}$; the known ${{B}^{+}}\!\rightarrow{J/\psi}{{K}^{+}}$ with ${J/\psi}\!\rightarrow{\mu^{+}}{\mu^{-}}$ combined branching fraction, abbreviated as ${\mathcal{B}}\quantity({J/\psi}{{K}^{+}})$; and the parameter of interest to derive a total number of ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$ signal decays.
This total is divided between $B_{s2}^{*0}$ and non-$B_{s2}^{*0}$ decays based on the observed fraction in the normalisation channel, and then distributed across the four BDT bins. This gives yields in each BDT bin $j$ of

$$\begin{aligned} N_{j}\quantity({{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}\,|\,{B_{s2}^{*0}}) &= \varepsilon_{{B_{s2}^{*0}},j}\,\frac{{\mathcal{B}}\quantity({{K}^{+}}{\mu^{-}}{\tau^{+}})}{{\mathcal{B}}\quantity({J/\psi}{{K}^{+}})}\,f_{B_{s2}^{*0}}\sum_{i\in\text{years}}\varepsilon_{\text{rel},i}\,N_{i}\quantity({J/\psi}{{K}^{+}}), && (5)\\ N_{j}\quantity({{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}\,|\,\text{non-}{B_{s2}^{*0}}) &= \varepsilon_{\text{non-}{B_{s2}^{*0}},j}\,\frac{{\mathcal{B}}\quantity({{K}^{+}}{\mu^{-}}{\tau^{+}})}{{\mathcal{B}}\quantity({J/\psi}{{K}^{+}})}\,\quantity(1-f_{B_{s2}^{*0}})\sum_{i\in\text{years}}\varepsilon_{\text{rel},i}\,r_{\text{non-}{B_{s2}^{*0}}}\,N_{i}\quantity({J/\psi}{{K}^{+}}), && (6) \end{aligned}$$

where $\varepsilon_{{B_{s2}^{*0}},j}$ and $\varepsilon_{\text{non-}{B_{s2}^{*0}},j}$ are the separate efficiencies for each signal component to be found in BDT bin $j$. The main parameters of the fit are thus the ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$ branching fraction, four parameters for the background normalisation in each BDT bin, and up to five parameters describing the polynomial background shapes in each BDT bin. The largest systematic uncertainty comes from the choice of background model; the fifth-degree background description obtains the weakest limit among the tested background models. We include the effects of other systematic uncertainties using Gaussian-constrained nuisance parameters. These nuisance parameters modify the normalisation yield, the relative efficiency of the signal and normalisation channels, the signal yield in each BDT bin, and the signal shapes. The largest effects come from the modelling of the kinematics of $B_{s2}^{*0}$ decays in simulation, which results in 5% changes in the relative efficiency and in the signal fractions in each bin of BDT response. The relative statistical uncertainty of the $B_{s2}^{*0}$ fraction taken from the normalisation channel is also approximately 5%. Altogether, the total effect of these systematic uncertainties on the final limit is small, at the $10^{-6}$ level. ## 8 Results and conclusion The result at the best-fit point is shown in Fig. 3. The value obtained for the signal branching fraction from the maximum-likelihood fit is $\quantity(1.9\pm 1.5)\times 10^{-5}$. No significant excess is observed, and we set upper limits on the branching fraction using the CLs method [34]. We perform a scan in the signal branching fraction, obtaining the signal and background $p$-values from the distributions of a one-sided profile-likelihood-ratio test statistic obtained with pseudoexperiments in which we vary the constraints on the systematic uncertainties. The scan used to determine the observed limits, compared to the expected one, is shown in Fig. 4. The expected upper limit at 90% CL is $2.3\times 10^{-5}$.
The observed 90% and 95% CL limits, assuming a phase-space signal decay model, are

$$\begin{aligned} {\mathcal{B}}\quantity({{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}) &< 3.9\times 10^{-5} \text{ at 90\% CL},\\ &< 4.5\times 10^{-5} \text{ at 95\% CL}. \end{aligned}$$

An identical limit is obtained when the decay is generated from the effective operators $\mathcal{O}_{9}^{(^{\prime})}$ or $\mathcal{O}_{10}^{(^{\prime})}$. If instead it is produced from $\mathcal{O}^{(^{\prime})}_{S}$ or $\mathcal{O}^{(^{\prime})}_{P}$, the obtained limit is ${\mathcal{B}}\quantity({{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}})<4.4\times 10^{-5}$ at 90% CL and $<5.0\times 10^{-5}$ at 95% CL. Figure 3: Fits to the missing-mass-squared distribution of the OS$K$ signal sample in each bin of BDT output included in the final fit. The best fit is overlaid. BDT bin 1 is the most background-like. The fit is performed using a fifth-degree polynomial description of the background. Figure 4: Scan of the $p$-value as a function of the signal branching fraction, used to determine the CLs upper limits, compared to the expected one. The horizontal red line shows a $p$-value of 0.1, used to define the 90% CL upper limit. This is the first result from the LHCb experiment for the lepton-flavour violating decay ${{B}^{+}}\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$. By studying ${B}^{+}$ mesons from $B_{s2}^{*0}$ decays, we are able to perform the first analysis at LHCb of a $B$-hadron decay using inclusive $\tau$ decays. This provides complementary information to searches for lepton-flavour violation at LHCb with three-prong $\tau$ decays, for example $B_{(s)}^{0}\!\rightarrow\tau^{\pm}\mu^{\mp}$ decays [35]. We observe no significant signal, and set an upper limit slightly above that obtained by the BaBar collaboration [17]. ## Acknowledgements We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open-source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany); EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union); ANR, Labex P2IO and OCEVU, and Région Auvergne-Rhône-Alpes (France); Key Research Program of Frontier Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China); RFBR, RSF and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society and the Leverhulme Trust (United Kingdom). ## References * [1] LHCb collaboration, R. Aaij et al., _Test of lepton universality with ${{B}^{0}}\!\rightarrow{{K}^{*0}}\ell^{+}\ell^{-}$ decays_, JHEP 08 (2017) 055, arXiv:1705.05802 * [2] LHCb collaboration, R.
Aaij et al., _Search for lepton-universality violation in ${{{B}^{+}}}\\!\rightarrow{{K}^{+}}\ell^{+}\ell^{-}$ decays_, Phys. Rev. Lett. 122 (2019) 191801, arXiv:1903.09252 * [3] LHCb collaboration, R. Aaij et al., _Test of lepton universality using ${{\mathchar 28931\relax}^{0}_{b}}\\!\rightarrow p{{K}^{-}}\ell^{+}\ell^{-}$ decays_, arXiv:1912.08139, submitted to JHEP * [4] BaBar collaboration, J. P. Lees et al., _Measurement of an excess of $\bar{B}\rightarrow D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ decays and implications for charged Higgs bosons_, Phys. Rev. D88 (2013) 072012, arXiv:1303.0571 * [5] Belle collaboration, M. Huschle et al., _Measurement of the branching ratio of $\bar{B}\rightarrow D^{(\ast)}\tau^{-}\bar{\nu}_{\tau}$ relative to $\bar{B}\rightarrow D^{(\ast)}\ell^{-}\bar{\nu}_{\ell}$ decays with hadronic tagging at Belle_, Phys. Rev. D92 (2015) 072014, arXiv:1507.03233 * [6] LHCb collaboration, R. Aaij et al., _Measurement of the ratio of branching fractions $\mathcal{B}({{B}_{c}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{\tau^{+}}\nu_{\tau})/\mathcal{B}({{B}_{c}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{\mu^{+}}\nu_{\mu})$_, Phys. Rev. Lett. 120 (2018) 121801, arXiv:1711.05623 * [7] LHCb collaboration, R. Aaij et al., _Test of lepton flavor universality by the measurement of the ${{B}^{0}}\\!\rightarrow D^{\ast-}{\tau^{+}}\nu_{\tau}$ branching fraction using three-prong $\tau$ decays_, Phys. Rev. D97 (2018) 072013, arXiv:1711.02505 * [8] LHCb collaboration, R. Aaij et al., _Measurement of the ratio of the $\mathcal{B}({{B}^{0}}\\!\rightarrow D^{\ast-}{\tau^{+}}\nu_{\tau})$ and $\mathcal{B}({{B}^{0}}\\!\rightarrow D^{\ast-}{\mu^{+}}\nu_{\mu})$ branching fractions using three-prong $\tau$-lepton decays_, Phys. Rev. Lett. 120 (2018) 171802, arXiv:1708.08856 * [9] LHCb collaboration, R. Aaij et al., _Measurement of the ratio of branching fractions ${\mathcal{B}}({{\kern 1.79993pt\overline{\kern-1.79993ptB}}{}^{0}}\\!\rightarrow{{D}^{*+}}{\tau^{-}}{{\overline{\nu}}_{\tau}})/{\mathcal{B}}({{\kern 1.79993pt\overline{\kern-1.79993ptB}}{}^{0}}\\!\rightarrow{{D}^{*+}}{\mu^{-}}{{\overline{\nu}}_{\mu}})$_, Phys. Rev. Lett. 115 (2015) 111803, Publisher’s Note ibid. 115 (2015) 159901, arXiv:1506.08614 * [10] S. L. Glashow, D. Guadagnoli, and K. Lane, _Lepton flavor violation in $B$ decays?_, Phys. Rev. Lett. 114 (2015) 091801, arXiv:1411.0565 * [11] R. Barbieri, G. Isidori, A. Pattori, and F. Senia, _Anomalies in $B$-decays and $U(2)$ flavour symmetry_, Eur. Phys. J. C76 (2016) 67, arXiv:1512.01560 * [12] M. Bordone, C. Cornella, J. Fuentes-Martín, and G. Isidori, _A three-site gauge model for flavor hierarchies and flavor anomalies_ , Phys. Lett. B779 (2018) 317, arXiv:1712.01368 * [13] M. Bordone, C. Cornella, J. Fuentes-Martín, and G. Isidori, _Low-energy signatures of the $\mathrm{PS}^{3}$ model: from $B$-physics anomalies to LFV_, JHEP 10 (2018) 148, arXiv:1805.09328 * [14] M. Duraisamy, S. Sahoo, and R. Mohanta, _Rare semileptonic $B\rightarrow K(\pi)l_{i}^{-}l_{j}^{+}$ decay in a vector leptoquark model_, Phys. Rev. D95 (2017) 035022, arXiv:1610.00902 * [15] L. Di Luzio, A. Greljo, and M. Nardecchia, _Gauge leptoquark as the origin of B-physics anomalies_ , Phys. Rev. D96 (2017) 115011, arXiv:1708.08450 * [16] L. Di Luzio et al., _Maximal Flavour Violation: a Cabibbo mechanism for leptoquarks_ , JHEP 11 (2018) 081, arXiv:1808.00942 * [17] BaBar collaboration, J. P. Lees et al., _A search for the decay modes $B^{\pm}\rightarrow h^{\pm}\tau\ell$_, Phys. Rev. 
D86 (2012) 012004, arXiv:1204.2852 * [18] S. Stone and L. Zhang, _Method of studying ${{\mathchar 28931\relax}^{0}_{b}}$ decays with one missing particle_, Adv. High Energy Phys. 2014 (2014) 931257, arXiv:1402.4205 * [19] LHCb collaboration, R. Aaij et al., _Measurement of the relative ${{{B}^{-}}}\\!\rightarrow{{D}^{0}}/{{D}^{*0}}/D^{**0}{\mu^{-}}{{\overline{\nu}}_{\mu}}$ branching fractions using ${{B}^{-}}$ mesons from ${\kern 1.79993pt\overline{\kern-1.79993ptB}}{}_{s2}^{*0}$ decays_, Phys. Rev. D99 (2019) 092009, arXiv:1807.10722 * [20] LHCb collaboration, A. A. Alves Jr. et al., _The LHCb detector at the LHC_, JINST 3 (2008) S08005 * [21] LHCb collaboration, R. Aaij et al., _LHCb detector performance_ , Int. J. Mod. Phys. A30 (2015) 1530022, arXiv:1412.6352 * [22] T. Sjöstrand, S. Mrenna, and P. Skands, _PYTHIA 6.4 physics and manual_ , JHEP 05 (2006) 026, arXiv:hep-ph/0603175 * [23] T. Sjöstrand, S. Mrenna, and P. Skands, _A brief introduction to PYTHIA 8.1_ , Comput. Phys. Commun. 178 (2008) 852, arXiv:0710.3820 * [24] I. Belyaev et al., _Handling of the generation of primary events in Gauss, the LHCb simulation framework_ , J. Phys. Conf. Ser. 331 (2011) 032047 * [25] D. J. Lange, _The EvtGen particle decay simulation package_ , Nucl. Instrum. Meth. A462 (2001) 152 * [26] Geant4 collaboration, J. Allison et al., _Geant4 developments and applications_ , IEEE Trans. Nucl. Sci. 53 (2006) 270 * [27] Geant4 collaboration, S. Agostinelli et al., _Geant4: A simulation toolkit_ , Nucl. Instrum. Meth. A506 (2003) 250 * [28] M. Clemencic et al., _The LHCb simulation application, Gauss: Design, evolution and experience_, J. Phys. Conf. Ser. 331 (2011) 032023 * [29] D. Bečirević, O. Sumensari, and R. Zukanovich Funchal, _Lepton flavor violation in exclusive $b\rightarrow s$ decays_, Eur. Phys. J. C76 (2016) 134, arXiv:1602.00881 * [30] P. Ball and R. Zwicky, _New results on $B\rightarrow\pi,K,\eta$ decay form factors from light-cone sum rules_, Phys. Rev. D71 (2005) 014015, arXiv:hep-ph/0406232 * [31] LHCb collaboration, R. Aaij et al., _First observation of the decay $B_{s2}^{*}(5840)^{0}\\!\rightarrow B^{*+}{{K}^{-}}$ and studies of excited ${B}^{0}_{s}$ mesons_, Phys. Rev. Lett. 110 (2013) 151803, arXiv:1211.5994 * [32] Y. Freund and R. E. Schapire, _A decision-theoretic generalization of on-line learning and an application to boosting_ , J. Comput. Syst. Sci. 55 (1997) 119 * [33] G. A. Cowan, D. C. Craik, and M. D. Needham, _RapidSim: an application for the fast simulation of heavy-quark hadron decays_ , Comput. Phys. Commun. 214 (2017) 239, arXiv:1612.07489 * [34] A. L. Read, _Presentation of search results: The CL s technique_, J. Phys. G28 (2002) 2693 * [35] LHCb collaboration, R. Aaij et al., _Search for the lepton-flavour-violating decays ${{B}^{0}_{s}}\\!\rightarrow{\tau^{\pm}}{\mu^{\mp}}$ and ${{B}^{0}}\\!\rightarrow{\tau^{\pm}}{\mu^{\mp}}$_, Phys. Rev. Lett. 123 (2019) 211801, arXiv:1905.06614 LHCb collaboration R. Aaij31, C. Abellán Beteta49, T. Ackernley59, B. Adeva45, M. Adinolfi53, H. Afsharnia9, C.A. Aidala80, S. Aiola25, Z. Ajaltouni9, S. Akar66, P. Albicocco22, J. Albrecht14, F. Alessio47, M. Alexander58, A. Alfonso Albero44, G. Alkhazov37, P. Alvarez Cartelle60, A.A. Alves Jr45, S. Amato2, Y. Amhis11, L. An21, L. Anderlini21, G. Andreassi48, M. Andreotti20, F. Archilli16, A. Artamonov43, M. Artuso67, K. Arzymatov41, E. Aslanides10, M. Atzeni49, B. Audurier11, S. Bachmann16, J.J. Back55, S. Baker60, V. Balagura11,b, W. Baldini20,47, A. Baranov41, R.J. Barlow61, S. 
Barsuk11, W. Barter60, M. Bartolini23,47,h, F. Baryshnikov77, J.M. Basels13, G. Bassi28, V. Batozskaya35, B. Batsukh67, A. Battig14, A. Bay48, M. Becker14, F. Bedeschi28, I. Bediaga1, A. Beiter67, L.J. Bel31, V. Belavin41, S. Belin26, V. Bellee48, K. Belous43, I. Belyaev38, G. Bencivenni22, E. Ben-Haim12, S. Benson31, S. Beranek13, A. Berezhnoy39, R. Bernet49, D. Berninghoff16, H.C. Bernstein67, C. Bertella47, E. Bertholet12, A. Bertolin27, C. Betancourt49, F. Betti19,e, M.O. Bettler54, Ia. Bezshyiko49, S. Bhasin53, J. Bhom33, M.S. Bieker14, S. Bifani52, P. Billoir12, A. Bizzeti21,u, M. Bjørn62, M.P. Blago47, T. Blake55, F. Blanc48, S. Blusk67, D. Bobulska58, V. Bocci30, O. Boente Garcia45, T. Boettcher63, A. Boldyrev78, A. Bondar42,x, N. Bondar37, S. Borghi61,47, M. Borisyak41, M. Borsato16, J.T. Borsuk33, T.J.V. Bowcock59, C. Bozzi20, M.J. Bradley60, S. Braun16, A. Brea Rodriguez45, M. Brodski47, J. Brodzicka33, A. Brossa Gonzalo55, D. Brundu26, E. Buchanan53, A. Büchler-Germann49, A. Buonaura49, C. Burr47, A. Bursche26, A. Butkevich40, J.S. Butter31, J. Buytaert47, W. Byczynski47, S. Cadeddu26, H. Cai72, R. Calabrese20,g, L. Calero Diaz22, S. Cali22, R. Calladine52, M. Calvi24,i, M. Calvo Gomez44,m, P. Camargo Magalhaes53, A. Camboni44,m, P. Campana22, D.H. Campora Perez31, A.F. Campoverde Quezada5, L. Capriotti19,e, A. Carbone19,e, G. Carboni29, R. Cardinale23,h, A. Cardini26, I. Carli6, P. Carniti24,i, K. Carvalho Akiba31, A. Casais Vidal45, G. Casse59, M. Cattaneo47, G. Cavallero47, S. Celani48, R. Cenci28,p, J. Cerasoli10, M.G. Chapman53, M. Charles12,47, Ph. Charpentier47, G. Chatzikonstantinidis52, M. Chefdeville8, V. Chekalina41, C. Chen3, S. Chen26, A. Chernov33, S.-G. Chitic47, V. Chobanova45, S. Cholak48, M. Chrzaszcz33, A. Chubykin37, P. Ciambrone22, M.F. Cicala55, X. Cid Vidal45, G. Ciezarek47, F. Cindolo19, P.E.L. Clarke57, M. Clemencic47, H.V. Cliff54, J. Closier47, J.L. Cobbledick61, V. Coco47, J.A.B. Coelho11, J. Cogan10, E. Cogneras9, L. Cojocariu36, P. Collins47, T. Colombo47, A. Comerma-Montells16, A. Contu26, N. Cooke52, G. Coombs58, S. Coquereau44, G. Corti47, C.M. Costa Sobral55, B. Couturier47, D.C. Craik63, J. Crkovská66, A. Crocombe55, M. Cruz Torres1,ab, R. Currie57, C.L. Da Silva66, E. Dall’Occo14, J. Dalseno45,53, C. D’Ambrosio47, A. Danilina38, P. d’Argent47, A. Davis61, O. De Aguiar Francisco47, K. De Bruyn47, S. De Capua61, M. De Cian48, J.M. De Miranda1, L. De Paula2, M. De Serio18,d, P. De Simone22, J.A. de Vries31, C.T. Dean66, W. Dean80, D. Decamp8, L. Del Buono12, B. Delaney54, H.-P. Dembinski15, A. Dendek34, V. Denysenko49, D. Derkach78, O. Deschamps9, F. Desse11, F. Dettori26,f, B. Dey7, A. Di Canto47, P. Di Nezza22, S. Didenko77, H. Dijkstra47, V. Dobishuk51, F. Dordei26, M. Dorigo28,y, A.C. dos Reis1, L. Douglas58, A. Dovbnya50, K. Dreimanis59, M.W. Dudek33, L. Dufour47, G. Dujany12, P. Durante47, J.M. Durham66, D. Dutta61, M. Dziewiecki16, A. Dziurda33, A. Dzyuba37, S. Easo56, U. Egede69, V. Egorychev38, S. Eidelman42,x, S. Eisenhardt57, R. Ekelhof14, S. Ek-In48, L. Eklund58, S. Ely67, A. Ene36, E. Epple66, S. Escher13, S. Esen31, T. Evans47, A. Falabella19, J. Fan3, N. Farley52, S. Farry59, D. Fazzini11, P. Fedin38, M. Féo47, P. Fernandez Declara47, A. Fernandez Prieto45, F. Ferrari19,e, L. Ferreira Lopes48, F. Ferreira Rodrigues2, S. Ferreres Sole31, M. Ferrillo49, M. Ferro-Luzzi47, S. Filippov40, R.A. Fini18, M. Fiorini20,g, M. Firlej34, K.M. Fischer62, C. Fitzpatrick47, T. Fiutowski34, F. Fleuret11,b, M. Fontana47, F. Fontanelli23,h, R. 
Forty47, V. Franco Lima59, M. Franco Sevilla65, M. Frank47, C. Frei47, D.A. Friday58, J. Fu25,q, Q. Fuehring14, W. Funk47, E. Gabriel57, A. Gallas Torreira45, D. Galli19,e, S. Gallorini27, S. Gambetta57, Y. Gan3, M. Gandelman2, P. Gandini25, Y. Gao4, L.M. Garcia Martin46, J. García Pardiñas49, B. Garcia Plana45, F.A. Garcia Rosales11, L. Garrido44, D. Gascon44, C. Gaspar47, D. Gerick16, E. Gersabeck61, M. Gersabeck61, T. Gershon55, D. Gerstel10, Ph. Ghez8, V. Gibson54, A. Gioventù45, O.G. Girard48, P. Gironella Gironell44, L. Giubega36, C. Giugliano20, K. Gizdov57, V.V. Gligorov12, C. Göbel70, D. Golubkov38, A. Golutvin60,77, A. Gomes1,a, P. Gorbounov38,6, I.V. Gorelov39, C. Gotti24,i, E. Govorkova31, J.P. Grabowski16, R. Graciani Diaz44, T. Grammatico12, L.A. Granado Cardoso47, E. Graugés44, E. Graverini48, G. Graziani21, A. Grecu36, R. Greim31, P. Griffith20, L. Grillo61, L. Gruber47, B.R. Gruberg Cazon62, C. Gu3, E. Gushchin40, A. Guth13, Yu. Guz43,47, T. Gys47, P. A. Günther16, T. Hadavizadeh62, G. Haefeli48, C. Haen47, S.C. Haines54, P.M. Hamilton65, Q. Han7, X. Han16, T.H. Hancock62, S. Hansmann-Menzemer16, N. Harnew62, T. Harrison59, R. Hart31, C. Hasse14, M. Hatch47, J. He5, M. Hecker60, K. Heijhoff31, K. Heinicke14, A.M. Hennequin47, K. Hennessy59, L. Henry46, J. Heuel13, A. Hicheur68, D. Hill62, M. Hilton61, P.H. Hopchev48, J. Hu16, W. Hu7, W. Huang5, W. Hulsbergen31, T. Humair60, R.J. Hunter55, M. Hushchyn78, D. Hutchcroft59, D. Hynds31, P. Ibis14, M. Idzik34, P. Ilten52, A. Inglessi37, K. Ivshin37, R. Jacobsson47, S. Jakobsen47, E. Jans31, B.K. Jashal46, A. Jawahery65, V. Jevtic14, F. Jiang3, M. John62, D. Johnson47, C.R. Jones54, B. Jost47, N. Jurik62, S. Kandybei50, M. Karacson47, J.M. Kariuki53, N. Kazeev78, M. Kecke16, F. Keizer54,47, M. Kelsey67, M. Kenzie55, T. Ketel32, B. Khanji47, A. Kharisova79, K.E. Kim67, T. Kirn13, V.S. Kirsebom48, S. Klaver22, K. Klimaszewski35, S. Koliiev51, A. Kondybayeva77, A. Konoplyannikov38, P. Kopciewicz34, R. Kopecna16, P. Koppenburg31, M. Korolev39, I. Kostiuk31,51, O. Kot51, S. Kotriakhova37, L. Kravchuk40, R.D. Krawczyk47, M. Kreps55, F. Kress60, S. Kretzschmar13, P. Krokovny42,x, W. Krupa34, W. Krzemien35, W. Kucewicz33,l, M. Kucharczyk33, V. Kudryavtsev42,x, H.S. Kuindersma31, G.J. Kunde66, T. Kvaratskheliya38, D. Lacarrere47, G. Lafferty61, A. Lai26, D. Lancierini49, J.J. Lane61, G. Lanfranchi22, C. Langenbruch13, O. Lantwin49, T. Latham55, F. Lazzari28,v, C. Lazzeroni52, R. Le Gac10, R. Lefèvre9, A. Leflat39, O. Leroy10, T. Lesiak33, B. Leverington16, H. Li71, L. Li62, X. Li66, Y. Li6, Z. Li67, X. Liang67, R. Lindner47, V. Lisovskyi14, G. Liu71, X. Liu3, D. Loh55, A. Loi26, J. Lomba Castro45, I. Longstaff58, J.H. Lopes2, G. Loustau49, G.H. Lovell54, Y. Lu6, D. Lucchesi27,o, M. Lucio Martinez31, Y. Luo3, A. Lupato27, E. Luppi20,g, O. Lupton55, A. Lusiani28,t, X. Lyu5, S. Maccolini19,e, F. Machefert11, F. Maciuc36, V. Macko48, P. Mackowiak14, S. Maddrell-Mander53, L.R. Madhan Mohan53, O. Maev37,47, A. Maevskiy78, D. Maisuzenko37, M.W. Majewski34, S. Malde62, B. Malecki47, A. Malinin76, T. Maltsev42,x, H. Malygina16, G. Manca26,f, G. Mancinelli10, R. Manera Escalero44, D. Manuzzi19,e, D. Marangotto25,q, J. Maratas9,w, J.F. Marchand8, U. Marconi19, S. Mariani21, C. Marin Benito11, M. Marinangeli48, P. Marino48, J. Marks16, P.J. Marshall59, G. Martellotti30, L. Martinazzoli47, M. Martinelli24,i, D. Martinez Santos45, F. Martinez Vidal46, A. Massafferri1, M. Materok13, R. Matev47, A. Mathad49, Z. Mathe47, V. Matiunin38, C. 
Matteuzzi24, K.R. Mattioli80, A. Mauri49, E. Maurice11,b, M. McCann60, L. Mcconnell17, A. McNab61, R. McNulty17, J.V. Mead59, B. Meadows64, C. Meaux10, G. Meier14, N. Meinert74, D. Melnychuk35, S. Meloni24,i, M. Merk31, A. Merli25, M. Mikhasenko47, D.A. Milanes73, E. Millard55, M.-N. Minard8, O. Mineev38, L. Minzoni20,g, S.E. Mitchell57, B. Mitreska61, D.S. Mitzel47, A. Mödden14, A. Mogini12, R.D. Moise60, T. Mombächer14, I.A. Monroy73, S. Monteil9, M. Morandin27, G. Morello22, M.J. Morello28,t, J. Moron34, A.B. Morris10, A.G. Morris55, R. Mountain67, H. Mu3, F. Muheim57, M. Mukherjee7, M. Mulder47, D. Müller47, K. Müller49, C.H. Murphy62, D. Murray61, P. Muzzetto26, P. Naik53, T. Nakada48, R. Nandakumar56, T. Nanut48, I. Nasteva2, M. Needham57, N. Neri25,q, S. Neubert16, N. Neufeld47, R. Newcombe60, T.D. Nguyen48, C. Nguyen- Mau48,n, E.M. Niel11, S. Nieswand13, N. Nikitin39, N.S. Nolte47, C. Nunez80, A. Oblakowska-Mucha34, V. Obraztsov43, S. Ogilvy58, D.P. O’Hanlon53, R. Oldeman26,f, C.J.G. Onderwater75, J. D. Osborn80, A. Ossowska33, J.M. Otalora Goicochea2, T. Ovsiannikova38, P. Owen49, A. Oyanguren46, P.R. Pais48, T. Pajero28,t, A. Palano18, M. Palutan22, G. Panshin79, A. Papanestis56, M. Pappagallo57, L.L. Pappalardo20,g, C. Pappenheimer64, W. Parker65, C. Parkes61, G. Passaleva21,47, A. Pastore18, M. Patel60, C. Patrignani19,e, A. Pearce47, A. Pellegrino31, M. Pepe Altarelli47, S. Perazzini19, D. Pereima38, P. Perret9, L. Pescatore48, K. Petridis53, A. Petrolini23,h, A. Petrov76, S. Petrucci57, M. Petruzzo25,q, B. Pietrzyk8, G. Pietrzyk48, M. Pili62, D. Pinci30, J. Pinzino47, F. Pisani19, A. Piucci16, V. Placinta36, S. Playfer57, J. Plews52, M. Plo Casasus45, F. Polci12, M. Poli Lener22, M. Poliakova67, A. Poluektov10, N. Polukhina77,c, I. Polyakov67, E. Polycarpo2, G.J. Pomery53, S. Ponce47, A. Popov43, D. Popov52, S. Poslavskii43, K. Prasanth33, L. Promberger47, C. Prouve45, V. Pugatch51, A. Puig Navarro49, H. Pullen62, G. Punzi28,p, W. Qian5, J. Qin5, R. Quagliani12, B. Quintana8, N.V. Raab17, R.I. Rabadan Trejo10, B. Rachwal34, J.H. Rademacker53, M. Rama28, M. Ramos Pernas45, M.S. Rangel2, F. Ratnikov41,78, G. Raven32, M. Reboud8, F. Redi48, F. Reiss12, C. Remon Alepuz46, Z. Ren3, V. Renaudin62, S. Ricciardi56, D.S. Richards56, S. Richards53, K. Rinnert59, P. Robbe11, A. Robert12, A.B. Rodrigues48, E. Rodrigues64, J.A. Rodriguez Lopez73, M. Roehrken47, S. Roiser47, A. Rollings62, V. Romanovskiy43, M. Romero Lamas45, A. Romero Vidal45, J.D. Roth80, M. Rotondo22, M.S. Rudolph67, T. Ruf47, J. Ruiz Vidal46, A. Ryzhikov78, J. Ryzka34, J.J. Saborido Silva45, N. Sagidova37, N. Sahoo55, B. Saitta26,f, C. Sanchez Gras31, C. Sanchez Mayordomo46, R. Santacesaria30, C. Santamarina Rios45, M. Santimaria22, E. Santovetti29,j, G. Sarpis61, A. Sarti30, C. Satriano30,s, A. Satta29, M. Saur5, D. Savrina38,39, L.G. Scantlebury Smead62, S. Schael13, M. Schellenberg14, M. Schiller58, H. Schindler47, M. Schmelling15, T. Schmelzer14, B. Schmidt47, O. Schneider48, A. Schopper47, H.F. Schreiner64, M. Schubiger31, S. Schulte48, M.H. Schune11, R. Schwemmer47, B. Sciascia22, A. Sciubba30,k, S. Sellam68, A. Semennikov38, A. Sergi52,47, N. Serra49, J. Serrano10, L. Sestini27, A. Seuthe14, P. Seyfert47, D.M. Shangase80, M. Shapkin43, L. Shchutska48, T. Shears59, L. Shekhtman42,x, V. Shevchenko76,77, E. Shmanin77, J.D. Shupperd67, B.G. Siddi20, R. Silva Coutinho49, L. Silva de Oliveira2, G. Simi27,o, S. Simone18,d, I. Skiba20, N. Skidmore16, T. Skwarnicki67, M.W. Slater52, J.G. Smeaton54, A. Smetkina38, E. 
Smith13, I.T. Smith57, M. Smith60, A. Snoch31, M. Soares19, L. Soares Lavra9, M.D. Sokoloff64, F.J.P. Soler58, B. Souza De Paula2, B. Spaan14, E. Spadaro Norella25,q, P. Spradlin58, F. Stagni47, M. Stahl64, S. Stahl47, P. Stefko48, O. Steinkamp49, S. Stemmle16, O. Stenyakin43, M. Stepanova37, H. Stevens14, S. Stone67, S. Stracka28, M.E. Stramaglia48, M. Straticiuc36, S. Strokov79, J. Sun26, L. Sun72, Y. Sun65, P. Svihra61, K. Swientek34, A. Szabelski35, T. Szumlak34, M. Szymanski47, S. Taneja61, Z. Tang3, T. Tekampe14, F. Teubert47, E. Thomas47, K.A. Thomson59, M.J. Tilley60, V. Tisserand9, S. T’Jampens8, M. Tobin6, S. Tolk47, L. Tomassetti20,g, D. Tonelli28, D. Torres Machado1, D.Y. Tou12, E. Tournefier8, M. Traill58, M.T. Tran48, E. Trifonova77, C. Trippl48, A. Trisovic54, A. Tsaregorodtsev10, G. Tuci28,47,p, A. Tully48, N. Tuning31, A. Ukleja35, A. Usachov31, A. Ustyuzhanin41,78, U. Uwer16, A. Vagner79, V. Vagnoni19, A. Valassi47, G. Valenti19, M. van Beuzekom31, H. Van Hecke66, E. van Herwijnen47, C.B. Van Hulse17, M. van Veghel75, R. Vazquez Gomez44,22, P. Vazquez Regueiro45, C. Vázquez Sierra31, S. Vecchi20, J.J. Velthuis53, M. Veltri21,r, A. Venkateswaran67, M. Vernet9, M. Veronesi31, M. Vesterinen55, J.V. Viana Barbosa47, D. Vieira64, M. Vieites Diaz48, H. Viemann74, X. Vilasis-Cardona44,m, A. Vitkovskiy31, A. Vollhardt49, D. Vom Bruch12, A. Vorobyev37, V. Vorobyev42,x, N. Voropaev37, R. Waldi74, J. Walsh28, J. Wang3, J. Wang72, J. Wang6, M. Wang3, Y. Wang7, Z. Wang49, D.R. Ward54, H.M. Wark59, N.K. Watson52, D. Websdale60, A. Weiden49, C. Weisser63, B.D.C. Westhenry53, D.J. White61, M. Whitehead13, D. Wiedner14, G. Wilkinson62, M. Wilkinson67, I. Williams54, M. Williams63, M.R.J. Williams61, T. Williams52, F.F. Wilson56, W. Wislicki35, M. Witek33, L. Witola16, G. Wormser11, S.A. Wotton54, H. Wu67, K. Wyllie47, Z. Xiang5, D. Xiao7, Y. Xie7, H. Xing71, A. Xu4, L. Xu3, M. Xu7, Q. Xu5, Z. Xu4, Z. Yang3, Z. Yang65, Y. Yao67, L.E. Yeomans59, H. Yin7, J. Yu7,aa, X. Yuan67, O. Yushchenko43, K.A. Zarebski52, M. Zavertyaev15,c, M. Zdybal33, M. Zeng3, D. Zhang7, L. Zhang3, S. Zhang4, W.C. Zhang3,z, Y. Zhang47, A. Zhelezov16, Y. Zheng5, X. Zhou5, Y. Zhou5, X. Zhu3, V. Zhukov13,39, J.B. Zonneveld57, S. Zucchelli19,e. 1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil 2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil 3Center for High Energy Physics, Tsinghua University, Beijing, China 4School of Physics State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China 5University of Chinese Academy of Sciences, Beijing, China 6Institute Of High Energy Physics (IHEP), Beijing, China 7Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China 8Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy, France 9Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France 10Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France 11Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France , Orsay, France 12LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3, Paris, France 13I. 
Physikalisches Institut, RWTH Aachen University, Aachen, Germany 14Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany 15Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany 16Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany 17School of Physics, University College Dublin, Dublin, Ireland 18INFN Sezione di Bari, Bari, Italy 19INFN Sezione di Bologna, Bologna, Italy 20INFN Sezione di Ferrara, Ferrara, Italy 21INFN Sezione di Firenze, Firenze, Italy 22INFN Laboratori Nazionali di Frascati, Frascati, Italy 23INFN Sezione di Genova, Genova, Italy 24INFN Sezione di Milano-Bicocca, Milano, Italy 25INFN Sezione di Milano, Milano, Italy 26INFN Sezione di Cagliari, Monserrato, Italy 27INFN Sezione di Padova, Padova, Italy 28INFN Sezione di Pisa, Pisa, Italy 29INFN Sezione di Roma Tor Vergata, Roma, Italy 30INFN Sezione di Roma La Sapienza, Roma, Italy 31Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands 32Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, Netherlands 33Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland 34AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland 35National Center for Nuclear Research (NCBJ), Warsaw, Poland 36Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania 37Petersburg Nuclear Physics Institute NRC Kurchatov Institute (PNPI NRC KI), Gatchina, Russia 38Institute of Theoretical and Experimental Physics NRC Kurchatov Institute (ITEP NRC KI), Moscow, Russia, Moscow, Russia 39Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia 40Institute for Nuclear Research of the Russian Academy of Sciences (INR RAS), Moscow, Russia 41Yandex School of Data Analysis, Moscow, Russia 42Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia 43Institute for High Energy Physics NRC Kurchatov Institute (IHEP NRC KI), Protvino, Russia, Protvino, Russia 44ICCUB, Universitat de Barcelona, Barcelona, Spain 45Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, Santiago de Compostela, Spain 46Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain 47European Organization for Nuclear Research (CERN), Geneva, Switzerland 48Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland 49Physik-Institut, Universität Zürich, Zürich, Switzerland 50NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine 51Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine 52University of Birmingham, Birmingham, United Kingdom 53H.H. 
Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom 54Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom 55Department of Physics, University of Warwick, Coventry, United Kingdom 56STFC Rutherford Appleton Laboratory, Didcot, United Kingdom 57School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom 58School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom 59Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom 60Imperial College London, London, United Kingdom 61Department of Physics and Astronomy, University of Manchester, Manchester, United Kingdom 62Department of Physics, University of Oxford, Oxford, United Kingdom 63Massachusetts Institute of Technology, Cambridge, MA, United States 64University of Cincinnati, Cincinnati, OH, United States 65University of Maryland, College Park, MD, United States 66Los Alamos National Laboratory (LANL), Los Alamos, United States 67Syracuse University, Syracuse, NY, United States 68Laboratory of Mathematical and Subatomic Physics , Constantine, Algeria, associated to 2 69School of Physics and Astronomy, Monash University, Melbourne, Australia, associated to 55 70Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to 2 71Guangdong Provencial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou, China, associated to 3 72School of Physics and Technology, Wuhan University, Wuhan, China, associated to 3 73Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to 12 74Institut für Physik, Universität Rostock, Rostock, Germany, associated to 16 75Van Swinderen Institute, University of Groningen, Groningen, Netherlands, associated to 31 76National Research Centre Kurchatov Institute, Moscow, Russia, associated to 38 77National University of Science and Technology “MISIS”, Moscow, Russia, associated to 38 78National Research University Higher School of Economics, Moscow, Russia, associated to 41 79National Research Tomsk Polytechnic University, Tomsk, Russia, associated to 38 80University of Michigan, Ann Arbor, United States, associated to 67 aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil bLaboratoire Leprince-Ringuet, Palaiseau, France cP.N. 
Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia dUniversità di Bari, Bari, Italy eUniversità di Bologna, Bologna, Italy fUniversità di Cagliari, Cagliari, Italy gUniversità di Ferrara, Ferrara, Italy hUniversità di Genova, Genova, Italy iUniversità di Milano Bicocca, Milano, Italy jUniversità di Roma Tor Vergata, Roma, Italy kUniversità di Roma La Sapienza, Roma, Italy lAGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Kraków, Poland mDS4DS, La Salle, Universitat Ramon Llull, Barcelona, Spain nHanoi University of Science, Hanoi, Vietnam oUniversità di Padova, Padova, Italy pUniversità di Pisa, Pisa, Italy qUniversità degli Studi di Milano, Milano, Italy rUniversità di Urbino, Urbino, Italy sUniversità della Basilicata, Potenza, Italy tScuola Normale Superiore, Pisa, Italy uUniversità di Modena e Reggio Emilia, Modena, Italy vUniversità di Siena, Siena, Italy wMSU - Iligan Institute of Technology (MSU-IIT), Iligan, Philippines xNovosibirsk State University, Novosibirsk, Russia yINFN Sezione di Trieste, Trieste, Italy zSchool of Physics and Information Technology, Shaanxi Normal University (SNNU), Xi’an, China aaPhysics and Micro Electronic College, Hunan University, Changsha City, China abUniversidad Nacional Autonoma de Honduras, Tegucigalpa, Honduras
2024-09-04T02:54:58.521393
2020-03-03T20:35:36
2003.04360
{ "authors": "Vaishali Ingale, Pushpender Singh", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26124", "submitter": "Pushpender Singh", "url": "https://arxiv.org/abs/2003.04360" }
arxiv-papers
# GenNet: Reading Comprehension with Multiple Choice Questions using Generation and Selection model Vaishali Ingale Department of Information Technology Army Institute of Technology, Pune <EMAIL_ADDRESS> Pushpender Singh Department of Information Technology Army Institute of Technology, Pune <EMAIL_ADDRESS> ###### Abstract Multiple-choice machine reading comprehension is a difficult task because it requires a machine to select the correct option from a set of candidate options using the given passage and question. In the Reading Comprehension with Multiple Choice Questions task, a human (or machine) must read a given passage–question pair and select the best one of the n given options. There are two ways to arrive at the correct answer from the given passage: by selecting the best-matching option or by eliminating the worst-matching options. Here we propose GenNet, a neural network-based model. In this model we first generate an answer to the question from the passage and then match the generated answer against the given options; the best-matching option is selected as the answer. For answer generation we use the S-Net model (Tan et al., 2017) trained on SQuAD, and to evaluate our model we use the large-scale RACE dataset (ReAding Comprehension dataset from Examinations) (Lai et al., 2017). ## 1 Introduction Reading comprehension is one of the fundamental skills for humans, which one learns systematically from elementary school onward. It gives humans the ability to read texts, understand their meanings, and answer questions with the help of the given context. When machines are required to comprehend texts, they first need to understand the unstructured text and then reason over it (Chen et al., 2016) (Wang et al., 2018b). Answering questions based on a passage requires a unique skill set: the ability to perform basic mathematical operations and logical reasoning (e.g., to answer questions like “how many times did Amit visit the sweet shop?”), look-up ability, the ability to deduce, and the ability to gather information contained in multiple sentences and passages. This diverse skill set makes question answering a challenging task. There are several variants of this task. For example, given a passage and a question, the answer could either (i) be generated from the passage, (ii) match some span in the passage, or (iii) be one of n given candidate answers. The last variant is widely used in middle school, high school, quiz, and competitive examinations, and is generally referred to as Reading Comprehension with Multiple Choice Questions (RC-MCQ). In Figure 1 we have a passage, a question, and 4 candidate answers; the task is to find the most suitable answer to the question from the passage. While answering such Multiple Choice Questions (MCQs, Figure 1), humans typically use a combination of option elimination and option selection; sometimes they instead derive the answer from the passage, i.e., they generate the answer to the question from the passage, match the generated answer with the given options, and choose the closest candidate as the correct answer. Here we propose a model that mimics this human process of answer generation followed by matching. First, the span of the passage where a possible answer lies is computed: we compute a question-aware representation of the passage (which essentially tries to retain the portions of the passage that are relevant to the question). 
Then we generate the answer using the state-of-the-art S-Net model (Tan et al., 2017), which extracts and generates the answer (Figure 2). Once an answer has been generated from the passage, we score every given candidate option against it and select the best-matching option; that option is our answer (Figure 3). Figure 1: An example multiple-choice reading comprehension question. Figure 2: Overview of S-Net (Tan et al., 2017). Figure 3: Overview of option matching and selection. ## 2 Related Work Datasets have played an important role in machine reading comprehension; different types of datasets were designed to address different variants of the task. The SQuAD dataset (Rajpurkar et al., 2016) was designed for simple question answering that aims to answer a question with an exact text span in a passage. Later, the MS MARCO dataset (Nguyen et al., 2016) was designed for multi-passage reading comprehension. The CNN/Daily Mail (Chen et al., 2016) and Who Did What (Onishi et al., 2016) datasets were designed for the cloze variant. MCTest (Richardson et al., 2013) and the RACE dataset (Lai et al., 2017) were released for the multiple-choice variant. Related work on the multiple-choice variant includes the Hierarchical Attention Flow model (Zhu et al., 2018), which leverages the candidate options to model the interaction among the question, the options, and the passage; it is an option-selection model that selects the correct option from the given candidates. Other related work includes the option-elimination model (Parikh et al., 2019), which eliminates wrong answers from the candidate set. The Multi-Matching Network (Tang et al., 2019) models the interactions among passage, question, and candidate answers, taking different matching paradigms into account. The Option Comparison Network (Ran et al., 2019) compares options at the word level and identifies their correlations to support logic and reasoning. The co-matching model (Wang et al., 2018a) matches the question–answer pair against the passage. A bidirectional co-matching model (Zhang et al., 2019) matches the passage with the question and answer bidirectionally. The Convolutional Spatial Attention (CSA) model (Chen et al., 2019) forms an enriched representation by fully extracting the mutual information among the passage, the question, and the candidates. Several models exist for answer generation, such as QANet (Yu et al., 2018), which combines local convolution with global self-attention and whose encoder consists exclusively of convolution and self-attention, and the Bidirectional Attention Flow (BiDAF) model (Seo et al., 2016), which attends over large spans of the passage; BiDAF is a multi-stage hierarchical architecture that uses bidirectional attention flow to obtain a query-aware context representation. We chose S-Net as our answer generation model because S-Net not only finds the answer in the passage but can also synthesize it when required. Some questions are tricky, with answers spread across different spans of the passage; S-Net is useful in such situations because its GRU-based architecture retains past context over longer ranges. ## 3 Proposed model Two tasks need to be performed in this model: first, answer extraction and answer synthesis/generation, and then option selection. A high-level sketch of this pipeline is given below, followed by the details of each step.
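To fix ideas, the end-to-end flow can be summarized in a few lines. This is a minimal illustrative sketch, not the authors' code: the function name `gennet_predict`, the stand-in generator, and the word-overlap scorer are all our assumptions.

```python
def gennet_predict(passage, question, options, generator, scorer):
    """Generate an answer from (passage, question), score each candidate
    option against it, and return the index of the best-matching option."""
    answer = generator(passage, question)               # S-Net-style extraction + synthesis
    scores = [scorer(answer, opt) for opt in options]   # similarity of answer vs. each option
    return max(range(len(options)), key=lambda i: scores[i])

# Toy usage with stand-in components (the real model uses neural encoders):
generate = lambda passage, question: "blue whale"
overlap = lambda answer, option: len(set(answer.split()) & set(option.split()))
print(gennet_predict("The blue whale is the largest animal.",
                     "What is the largest animal?",
                     ["elephant", "blue whale", "giant squid", "ostrich"],
                     generate, overlap))  # -> 1
```

In the full model, the generator is S-Net and the scorer is the learned bilinear similarity described in what follows.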
Answer extraction and generation are done using the state-of-the-art S-Net model (Tan et al., 2017). S-Net first extracts evidence snippets by matching the question and passage, and then generates the answer by synthesizing the question, passage, and evidence snippets. Consider a passage $P=[p_{1},p_{2},p_{3},\ldots,p_{P}]$ of word length $P$, a question $Q=[q_{1},q_{2},q_{3},\ldots,q_{Q}]$ of word length $Q$, and $n$ options $Z_{n}=[z_{1},z_{2},z_{3},\ldots,z_{k}]$, where $n>1$ and each option has word length $k$. We first convert the words to their word-level and character-level embeddings using GloVe (Pennington et al., 2014). The encoding and embedding layers take in a series of tokens and represent them as a series of vectors. The character-level embeddings are obtained by taking the final hidden states of a bi-directional GRU applied to the embeddings of the characters in the token. A bi-directional Gated Recurrent Unit then produces new representations $u_{1}^{p},u_{2}^{p},u_{3}^{p},\ldots,u_{P}^{p}$ for the passage, $u_{1}^{q},u_{2}^{q},u_{3}^{q},\ldots,u_{Q}^{q}$ for the question, and $u_{1}^{z},u_{2}^{z},u_{3}^{z},\ldots,u_{k}^{z}$ for the options. The embedding matrix is built only once and is not trained during the learning process. As shown in Figure 4, S-Net uses a sequence-to-sequence model to incorporate the extracted evidence as features when generating the answer. It first produces the representations $h_{t}^{p}$ and $h_{t}^{q}$ of all words in the passage and question, respectively. When producing the answer representation, it combines the basic word embedding $e_{t}^{p}$ with additional features $f_{t}^{s}$ and $f_{t}^{e}$ that indicate the start and end positions of the evidence snippet produced by the evidence extraction model: $f_{t}^{s}=1$ and $f_{t}^{e}=1$ mean that position $t$ is the start and the end of the evidence span, respectively. $h_{t}^{p}=\text{BiGRU}(h_{t-1}^{p},[e_{t}^{p},f_{t}^{s},f_{t}^{e}])$ (1) $h_{t}^{q}=\text{BiGRU}(h_{t-1}^{q},e_{t}^{q})$ (2) On top of the encoder, S-Net uses a GRU with attention as the decoder to produce the answer. At each decoding time step $t$, the GRU reads the previous word embedding $w_{t-1}$ and the previous context vector $c_{t-1}$; the decoded sequence forms the answer. Figure 4: Answer Synthesis/Generation Model (Tan et al., 2017) The generated answer is stored in an answer vector $A_{n}=[a_{1},a_{2},a_{3},\ldots,a_{a}]$, where $a$ is the length of the answer. Figure 3 shows an overview of the selection module. The selection module takes the refined answer representation $a_{t}$ and computes its bilinear similarity with each option representation: $score(i)=a_{t}W_{att}z_{t_{i}}$ (3) where $i$ indexes the options, $a_{t}$ is the generated answer vector, $z_{t_{i}}$ is the option vector, and $W_{att}$ is a matrix that needs to be learned. We select the option that receives the highest score. We train the model using the cross-entropy loss, first normalizing the above scores with a softmax to obtain a probability distribution.
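As a concrete illustration of this selection module, the following is a minimal sketch, assuming PyTorch; the class name, tensor shapes, and dummy inputs are ours, not the authors' implementation. Only the bilinear form of Eq. (3) and the softmax cross-entropy objective are taken from the text.

```python
import torch
import torch.nn as nn

class OptionSelector(nn.Module):
    """Encode the generated answer and each candidate option with BiGRUs,
    then score each option with the bilinear form a_t W_att z_i of Eq. (3)."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.ans_gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.opt_gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.W_att = nn.Parameter(torch.randn(2 * hidden, 2 * hidden) * 0.01)

    def forward(self, answer_emb, option_embs):
        # answer_emb: (batch, ans_len, emb_dim); option_embs: (batch, n_opts, opt_len, emb_dim)
        _, h_a = self.ans_gru(answer_emb)              # final states: (2, batch, hidden)
        a_t = torch.cat([h_a[0], h_a[1]], dim=-1)      # (batch, 2*hidden)
        scores = []
        for i in range(option_embs.size(1)):
            _, h_z = self.opt_gru(option_embs[:, i])
            z_i = torch.cat([h_z[0], h_z[1]], dim=-1)  # (batch, 2*hidden)
            scores.append(((a_t @ self.W_att) * z_i).sum(-1))  # bilinear score
        return torch.stack(scores, dim=1)              # (batch, n_opts) logits

# Softmax + cross-entropy against the gold option index, as described above:
model = OptionSelector()
logits = model(torch.randn(4, 12, 300), torch.randn(4, 4, 8, 300))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 0, 3, 2]))
```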
## 4 Experimental Setup Here we discuss the dataset used to evaluate our model, the training procedure, results, and future work. ### 4.1 Dataset We evaluate our model on the RACE dataset (Lai et al., 2017). RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions, collected from English examinations in China designed for middle school and high school students. Each passage is a JSON file containing the fields (i) article: a string, which is the passage; (ii) questions: a list of strings, each a query — there are two types of questions, interrogative sentences and sentences with a placeholder, represented by _; (iii) options: a list of option lists, each containing 4 candidate strings; (iv) answers: a list containing the gold label of each query; and (v) id: a unique identifier for each passage in the dataset. RACE has a wide variety of question types, such as summarization, inference, deduction, and context matching. Figure 5: Statistics on reasoning types in the RACE dataset ### 4.2 Training Procedures and Hyper-parameters We integrate two different models into one. First we train on S-Net; for this stage we process the dataset differently, using only the passage, the question, and the correct option. We then pass the result to the next stage of our model, which is trained using the generated answer and all candidate options. To train the model, we used stochastic gradient descent with the Adam optimizer (Kingma and Ba, 2014). We initialize the learning rate to 0.005, and gradients are clipped to an L2-norm of no more than 10. Model parameters are updated with mini-batches of 32 samples per step. We created a vocabulary from the top 65k words in the passages and questions; any out-of-vocabulary (OOV) word encountered is replaced by a special token UNK. The same vocabulary is used for the passage, question, and option embeddings. We tune all our models based on the accuracy achieved on the validation set. We use 300-dimensional GloVe embeddings (Pennington et al., 2014) for word embedding and for word and character encoding, experimenting both with and without fine-tuning these embeddings. We train all our models for up to 80 epochs, as we saw no benefit from training further since results plateaued. The hidden state size of all GRU networks is 128. We apply dropout (Srivastava et al., 2014) to word embeddings and BiGRU outputs with a drop rate of 0.45. ### 4.3 Results and Future Work

Model | RACE-Mid | RACE-High | RACE
---|---|---|---
Random* | 24.6 | 25.0 | 24.9
Sliding Window* | 37.3 | 30.4 | 32.2
GA Reader (100D)* | 43.7 | 44.2 | 44.1
Stanford AR (100D)* | 44.2 | 43.0 | 43.3
GenNet | 79.6 | 75.4 | 77.3

Table 1: Accuracy on the test sets of RACE-M, RACE-H, and RACE. * indicates results from (Lai et al., 2017), trained with 100D pre-trained GloVe word embeddings. The human ceiling performance reported by CMU on the RACE dataset is 94.2. Our model achieves an accuracy of 79.6% on RACE-M, 75.4% on RACE-H, and 77.3% on RACE full, outperforming several other models. Since this model first generates an answer and then selects an option, it can also be applied to multiple-choice questions whose correct answer is not present among the options, such as MCQs with "none of the above" or "no answer" choices. ## 5 Conclusion In this paper, we presented the GenNet model for multiple-choice reading comprehension. The model uses a combination of generation and selection to arrive at the correct option. 
This is achieved by first generating an answer to the question from the passage and then matching the generated answer with the options. The proposed model achieves overall state-of-the-art accuracy on RACE and significantly outperforms neural network baselines on RACE-M, RACE-H, and RACE full. As future work, we would like to address unanswerable questions and questions where no option matches. ## References * Chen et al., (2016) Chen, D., Bolton, J., and Manning, C. D. (2016). A thorough examination of the CNN/Daily Mail reading comprehension task. arXiv preprint arXiv:1606.02858. * Chen et al., (2019) Chen, Z., Cui, Y., Ma, W., Wang, S., and Hu, G. (2019). Convolutional spatial attention model for reading comprehension with multiple-choice questions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6276–6283. * Kingma and Ba, (2014) Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. * Lai et al., (2017) Lai, G., Xie, Q., Liu, H., Yang, Y., and Hovy, E. (2017). RACE: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. * Nguyen et al., (2016) Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R., and Deng, L. (2016). MS MARCO: A human-generated machine reading comprehension dataset. * Onishi et al., (2016) Onishi, T., Wang, H., Bansal, M., Gimpel, K., and McAllester, D. (2016). Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457. * Parikh et al., (2019) Parikh, S., Sai, A. B., Nema, P., and Khapra, M. M. (2019). ElimiNet: A model for eliminating options for reading comprehension with multiple choice questions. arXiv preprint arXiv:1904.02651. * Pennington et al., (2014) Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. * Rajpurkar et al., (2016) Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. * Ran et al., (2019) Ran, Q., Li, P., Hu, W., and Zhou, J. (2019). Option comparison network for multiple-choice reading comprehension. arXiv preprint arXiv:1903.03033. * Richardson et al., (2013) Richardson, M., Burges, C. J., and Renshaw, E. (2013). MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203. * Seo et al., (2016) Seo, M., Kembhavi, A., Farhadi, A., and Hajishirzi, H. (2016). Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. * Srivastava et al., (2014) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. * Tan et al., (2017) Tan, C., Wei, F., Yang, N., Du, B., Lv, W., and Zhou, M. (2017). S-Net: From answer extraction to answer generation for machine reading comprehension. arXiv preprint arXiv:1706.04815. * Tang et al., (2019) Tang, M., Cai, J., and Zhuo, H. H. (2019). Multi-matching network for multiple choice reading comprehension. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7088–7095. * (16) Wang, S., Yu, M., Chang, S., and Jiang, J. (2018a). 
A co-matching model for multi-choice reading comprehension. arXiv preprint arXiv:1806.04068. * (17) Wang, Y., Li, R., Zhang, H., Tan, H., and Chai, Q. (2018b). Using sentence-level neural network models for multiple-choice reading comprehension tasks. Wireless Communications and Mobile Computing, 2018. * Yu et al., (2018) Yu, A. W., Dohan, D., Luong, M.-T., Zhao, R., Chen, K., Norouzi, M., and Le, Q. V. (2018). QANet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541. * Zhang et al., (2019) Zhang, S., Zhao, H., Wu, Y., Zhang, Z., Zhou, X., and Zhou, X. (2019). Dual co-matching network for multi-choice reading comprehension. arXiv preprint arXiv:1901.09381. * Zhu et al., (2018) Zhu, H., Wei, F., Qin, B., and Liu, T. (2018). Hierarchical attention flow for multiple-choice reading comprehension. In Thirty-Second AAAI Conference on Artificial Intelligence.
2024-09-04T02:54:58.531454
2020-03-09T19:28:33
2003.04376
{ "authors": "Kyle M. Whitcomb and Chandralekha Singh", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26125", "submitter": "Kyle Whitcomb", "url": "https://arxiv.org/abs/2003.04376" }
arxiv-papers
# Not all disadvantages are equal: Racial/ethnic minority students have largest disadvantage of all demographic groups in both STEM and non-STEM GPA Kyle M. Whitcomb Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA, 15260 Chandralekha Singh Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA, 15260 ###### Abstract An analysis of institutional data to understand the outcome of the many obstacles faced by students from historically disadvantaged backgrounds is important in order to work towards promoting equity and inclusion for all students. We use 10 years of institutional data at a large public research university to investigate the grades earned (both overall and in STEM courses only) by students categorized on four demographic characteristics: gender, race/ethnicity, low-income status, and first-generation college student status. We find that on average across all years of study and for all clusters of majors, underrepresented minority students experience a larger penalty to their mean overall and STEM GPA than even the most disadvantaged non-URM students. Moreover, the underrepresented minority students with additional disadvantages due to socioeconomic status or parental education level were even further penalized in their average GPA. Furthermore, we find that while women in all demographic groups had a higher average overall GPA, these gender differences are almost completely non-existent in STEM GPA except among the most privileged students. These findings suggest that there is a need to provide support to bridge the gaps that emanate from historical disadvantages to certain groups. ## I Introduction and Theoretical Framework The importance of evidence-based approaches to improving student learning and ensuring that all students have the opportunity to excel regardless of their background is becoming increasingly recognized by Science, Technology, Engineering, and Mathematics (STEM) departments across the US Johnson (2012); Johnson _et al._ (2017); Maltese and Tai (2011); Borrego _et al._ (2008); Borrego and Bernhard (2011); Borrego and Henderson (2014); Henderson and Dancy (2008); Dancy and Henderson (2010); Henderson _et al._ (2012). With advances in digital technology in the past few decades, institutions have been keeping increasingly large digital databases of student records. We have now reached the point where there is sufficient data available for robust statistical analyses using data analytics that can provide valuable information useful for transforming learning for all students Baker and Inventado (2014); Papamitsiou and Economides (2014). This has led to many recent studies utilizing many years of institutional data to perform analyses that were previously limited by statistical power Lord _et al._ (2009, 2015); Ohland and Long (2016); Matz _et al._ (2017); Witherspoon and Schunn (2019). Therefore, here we focus on harnessing institutional data to investigate the outcomes for students from various disadvantaged backgrounds who must overcome obstacles in their pursuit of higher education. The theoretical framework for this study has two main foundations: critical theory and intersectionality. Critical theories of race, gender, etc. 
identify historical sources of inequity within society, that is, societal norms that perpetuate obstacles to the success of certain groups of disadvantaged people Crenshaw _et al._ (1995); Kellner (2003); Yosso (2005); Gutiérrez (2009); Taylor _et al._ (2009); Tolbert _et al._ (2018); Schenkel and Calabrese Barton (2020). Critical theory tells us that the dominant group in a society perpetuates these norms, which are born out of their interests, and pushes back against support systems that seek to subvert these norms Crenshaw _et al._ (1995); Kellner (2003); Yosso (2005). These highly problematic societal norms are founded in the historical oppression of various groups of people, and manifest today in many ways including economic disadvantages, stereotypes about who can succeed in certain career paths, and racist and/or sexist barriers to opportunity, including educational advancement. While these norms are, by definition, specific to a particular culture or even country, they are nonetheless pervasive and oppressive and demand attention to rectify these historical wrongs. Much important work has been done on building critical race and/or gender theories of STEM education Johnson (2012); Johnson _et al._ (2017); Solorzano _et al._ (2000); Lewis _et al._ (2009); Bang and Medin (2010); Estrada _et al._ (2018); Ong _et al._ (2018); Tolbert _et al._ (2018); Green _et al._ (2019); Mutegi _et al._ (2019); Sheth (2019); Schenkel and Calabrese Barton (2020). In one study, Bancroft (2018) lays out a “critical capital theory,” using varying forms of capital (economic, social, and cultural) to examine persistence through graduation in STEM doctoral programs and to contextualize the mechanisms behind racial inequities in STEM education Bancroft (2018). The idea that race, gender, or another demographic characteristic alone cannot fully explain the intricacies of the obstacles that students face is rooted in the framework of intersectionality Crenshaw (1990); Cho _et al._ (2013); Mitchell _et al._ (2014); Charleston _et al._ (2014); Morton and Parsons (2018). In particular, the combination of different aspects of an individual’s social identity (e.g., gender, race, first-generation college status, and socioeconomic status) leads to unique levels of disadvantages that cannot be explained by simply adding together the effects of the individual components of identity Crenshaw (1990). For example, according to the framework of intersectionality, in many STEM disciplines where the societal norm expects that students are white men, the experience of a black woman is not a simple sum of the experiences of white women and black men Charleston _et al._ (2014); Morton and Parsons (2018). With an eye toward this intersectional approach to critical theory, we seek to understand the relationship between four different aspects of student identity that can lead to obstacles in STEM education: race/ethnicity, gender, low- income status, and first-generation college student status. The students disadvantaged by low-income or first-generation status are likely to experience a lack of resources relative to their more privileged peers Lam _et al._ (2005); Dika and D’Amico (2016); Katrevich and Aruguete (2017). 
Women and underrepresented minority students are susceptible to additional stress and anxiety from stereotype threat (i.e., the fear of confirming stereotypes pertaining to their identity) which is not experienced by their majority group peers Lewis _et al._ (2009); Johnson (2012); Green _et al._ (2019); Mutegi _et al._ (2019); Sheth (2019); Astin (1993); Cross (1993); Felder _et al._ (1995, 1998); Bianchini _et al._ (2002); Britner and Pajares (2006); Bianchini (2013); Basile and Lopez (2015); Cheryan _et al._ (2017); Hilts _et al._ (2018). In summary, the different mechanisms by which students belonging to each demographic characteristic can be disadvantaged are as follows. * • Race/Ethnicity: Students belonging to underrepresented minority (URM) groups may experience stereotype threat that causes anxiety and robs the students of their cognitive resources, particularly during high-stakes testing. * • Gender: There are pervasive societal biases against women succeeding in many STEM disciplines which can result in stereotype threat. * • Low-Income Status: Low-Income (LI) students are more likely to need to work to support themselves, reducing their time and energy available to devote to their studies, in addition to anxiety due to the financial burden of attending college. These burdens are in addition to other factors that low-income students may be more likely to face, such as lower quality preparation for college. * • First-Generation Status: First-Generation (FG) students may lack the resources of encouragement, advice, and support that are available more readily to students with degree-holding parents. This lack of resources can make FG students more susceptible to the stress of the unknown in college. All of these mechanisms can produce an inequitable learning environment wherein students belonging to any of these groups are forced to work against obstacles that their peers do not have. The framework of intersectionality asserts that for students that belong to more than one of these groups, complex interactions between these different obstacles can result in compounded disadvantages that are not a simple sum of the individual effects Crenshaw (1990); Cho _et al._ (2013); Mitchell _et al._ (2014); Charleston _et al._ (2014); Morton and Parsons (2018). In order to measure the long-term effects of these systemic disadvantages, we will investigate the academic achievement of students belonging to these various demographic groups over the course of their studies at one large public research university using 10 years of institutional data. By grouping students according to their demographic background, we will be able to investigate how different combinations of obstacles affect student grade point averages. ## II Research Questions Our research questions regarding the intersectional relationships between demographic characteristics and academic achievement are as follows. 1. RQ1. Are there differences in the overall or STEM grades earned by students belonging to different demographic groups (i.e., underrepresented minority, low-income status, and first-generation college student status)? 2. RQ2. Do any patterns observed in RQ1 differ for men and women? 3. RQ3. Do grades earned in STEM courses alone exhibit similar demographic patterns as grades earned in all courses? 4. RQ4. 
What are the trends over time in the mean GPA of these different demographic groups among different clusters of majors (i.e., computer science, engineering, mathematics, and physical science majors, other STEM majors, and non-STEM majors)? ## III Methodology ### III.1 Sample Using the Carnegie classification system, the university at which this study was conducted is a public, high-research doctoral university, with balanced arts and sciences and professional schools, and a large, primarily residential undergraduate population that is full-time and reasonably selective with low transfer-in from other institutions Indiana University Center for Postsecondary Research (2018). The university provided for analysis the de-identified institutional data records of students with Institutional Review Board approval. In this study, we examined these records for $N=24,567$ undergraduate students enrolled in three colleges within the university: the colleges of Arts and Sciences, Computing and Information, and Engineering. This sample of students includes all of those from ten cohorts who met several selection criteria, namely that the student had first enrolled at the university in a Fall semester from Fall 2005 to Fall 2014, inclusive, and the institutional data on the student was not missing or unspecified for any of the following measures: gender, race/ethnicity, parental education level, and family income. This sample of students is $50\%$ female and had the following race/ethnicities: 79% White, 9% Asian, 7% Black, 3% Hispanic, and 2% other or multiracial. Further, this sample is $16\%$ first-generation college students and $21\%$ “low-income” students (to be defined in the following section). We acknowledge that gender is not a binary construct; however, in self-reporting their gender to the university, students were given the options of “male” or “female,” and so those are the two self-reported genders that we are able to analyze. There were $39$ students who had met all other selection criteria but who had not indicated any gender on the survey; these students were removed from the sample and are not included in the reported sample size or any analyses. ### III.2 Measures #### III.2.1 Demographic Characteristics The four primary measures are the demographic characteristics mentioned in the previous section, namely gender, race/ethnicity, parental education level, and family income. All of these were converted into binary categories intended to distinguish between the most and least privileged students on each measure. * • Gender. Gender was reported as a binary category to begin with (either “male” or “female”), therefore no further steps were required. * • First-generation. Students for whom both parents had a highest completed level of education of high school or lower were grouped together as “first-generation” (FG) college students, and correspondingly students for whom at least one parent had earned a college degree were labeled non-FG. * • Low-income. Students whose reported family Adjusted Gross Income (AGI) was at or below 200% of the federal U.S. poverty line were categorized as “low-income” (LI), and those above 200% of the poverty line as non-LI Cauthen and Fass (2007); Jiang _et al._ * • Underrepresented minority. All students who identified as any race or ethnicity other than White or Asian were grouped together as “underrepresented minority” (URM) students, including multiracial students who selected White and/or Asian in addition to another demographic option. 
Students who only identified as White and/or Asian students were categorized as non-URM students. #### III.2.2 Academic Performance Measures of student academic performance were also included in the provided data. High school GPA was provided by the university on a weighted scale from 0-5 that includes adjustments to the standard 0-4 scale for Advanced Placement and International Baccalaureate courses. The data also include the grade points earned by students in each course taken at the university. Grade points are on a 0-4 scale with $\text{A}=4$, $\text{B}=3$, $\text{C}=2$, $\text{D}=1$, $\text{F}=0$, where the suffixes “$+$” and “$-$” add or subtract, respectively, $0.25$ grade points (e.g. $\text{B}-=2.75$), with the exception of $\text{A}+$ which is reported as the maximum 4 grade points. The courses were categorized as either STEM or non-STEM courses, with STEM courses being those courses taken from any of the following departments: biological sciences, chemistry, computer science, economics, any engineering department, geology and environmental science, mathematics, neuroscience, physics and astronomy, and statistics. We note that for the purposes of this paper, “STEM” does not include the social sciences other than economics, which has been included due to its mathematics-intensive content. #### III.2.3 Year of Study Finally, the year in which the students took each course was calculated from the students’ starting term and the term in which the course was taken. Since the sample only includes students who started in fall semesters, each “year” contains courses taken in the fall and subsequent spring semesters, with courses taken over the summer omitted from this analysis. For example, if a student first enrolled in Fall 2007, then their “first year” occurred during Fall 2007 and Spring 2008, their “second year” during Fall 2008 and Spring 2009, and so on in that fashion. If a student is missing both a fall and spring semester during a given year but subsequently returns to the university, the numbering of those post-hiatus years is reduced accordingly. If instead a student is only missing one semester during a given year, no corrections are made to the year numbering. In this study we consider up through the students’ sixth year of study or the end of their enrollment at the studied institution, whichever comes first. ### III.3 Analysis The primary method by which we grouped students in this analysis was by their set of binary demographic categories. This grouping was performed in two different ways. First, use of all four binary categories (gender, FG, LI, URM) resulted in sixteen mutually exclusive groups (e.g., “female, FG+URM” or “male, LI”). Second, use of all categories except gender resulted in eight mutually exclusive categories. We calculated each student’s yearly (i.e., not cumulative) grade point average (GPA) across courses taken in each year of study from the first to sixth years. In addition, we calculated the student’s yearly STEM GPA, that is, the GPA in STEM courses alone. Then, using the aforementioned grouping schemes, we computed the mean GPA in each demographic group as well as the standard error of the mean separately for each year of study Freedman _et al._ (2007). 
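Concretely, the grade handling and per-group aggregation described in this section amount to the following computation. This is a minimal sketch in Python with hypothetical column names; the authors' own analysis (noted below) was carried out in R with the tidyverse, so this only mirrors the logic, and any credit-hour weighting of GPA is omitted.

```python
import pandas as pd

BASE = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def grade_points(letter: str) -> float:
    """Letter grade -> grade points: '+' adds and '-' subtracts 0.25,
    except A+, which is capped at 4.0 (the scale described above)."""
    pts = BASE[letter[0]]
    if letter.endswith("+"):
        pts = min(pts + 0.25, 4.0)
    elif letter.endswith("-"):
        pts -= 0.25
    return pts

# One row per (student, course); 'group' is one of the 8 or 16 demographic groups.
courses = pd.DataFrame({
    "student_id":    [1, 1, 2, 2],
    "group":         ["None", "None", "LI+URM", "LI+URM"],
    "year_of_study": [1, 1, 1, 1],
    "letter_grade":  ["A-", "B+", "B", "C+"],
})
courses["points"] = courses["letter_grade"].map(grade_points)

# Yearly (non-cumulative) GPA per student, then group mean and standard error.
yearly = (courses.groupby(["student_id", "group", "year_of_study"])["points"]
                 .mean().rename("gpa").reset_index())
summary = yearly.groupby(["group", "year_of_study"])["gpa"].agg(["mean", "sem", "count"])
print(summary)
```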
Further, in the case of grouping by gender, we computed the effect size of the gender differences within each demographic group using Cohen’s $d$, which is typically interpreted using minimum cutoff values for “small” ($d=0.20$), “medium” ($d=0.50$), and “large” ($d=0.80$) effect sizes Cohen (1988); Neter _et al._ (2004); Montgomery _et al._ (2012). All analyses were conducted using R R Core Team (2019), making use of the package tidyverse Wickham (2017) for data manipulation and plotting. ## IV Results ### IV.1 GPA Trends by Demographic Group: “Dinosaur Plots” In order to answer RQ1, we plotted in Fig. 1 the mean GPA earned by students in each demographic group, including gender as a grouping characteristic. We start with overall GPA, rather than STEM GPA alone, in order to provide context for the results in STEM GPA and identify trends that may or may not be present when viewing STEM grades alone. Groups are ordered from left to right first by the ascending number of selected characteristics and then alphabetically. Mean GPA is plotted separately (i.e., not cumulatively) for each year of study from the first to sixth year. Setting aside the gender differences for a moment, we note that the general GPA trends by demographic group in Fig. 1 follow a shape resembling the neck, back, and tail of a sauropod, and so accordingly we refer to the plots in Fig. 1 as “dinosaur plots.” This shape is clearest in the plots for the first through fourth years, as the sample size drops significantly in the fifth year as the majority of students graduate. Looking more closely at Fig. 1, particularly the first four years, we see that the “neck” is consistently comprised of the group of students with the most privileges, namely those students that are non-FG, non-LI, and non-URM. Following this, the “back” is relatively flat across the next four groups, namely students that are FG only, LI only, URM only, or FG and LI. Notably, the URM group of students typically have the lowest mean GPA within this set of demographic groups. Finally, the “tail” consists of the final three groups, FG+URM, LI+URM, and FG+LI+URM. The mean GPA in this set of groups tends to decrease from left to right in the plots. Notably, the four groups that contain URM students are consistently in the lowest four or five mean GPAs. Figure 1: Average GPA of each demographic group. Students are binned into separate demographic groups based on their status as first-generation (FG), low-income (LI), and/or underrepresented minority (URM) students. The men and women in each demographic group are plotted separately. The mean GPA in all courses taken by students in each demographic group is plotted along with the standard error on the mean, with a separate plot for each of the (a) first, (b) second, (c) third, (d) fourth, (e) fifth, and (f) sixth years. The sample size is reported by each point, and Cohen’s $d$ Cohen (1988) measuring the effect size of the gender difference in each group is reported. ### IV.2 Intersectionality with Gender We now turn our attention to the differences between men and women in Fig. 1 in order to answer RQ2. We note in particular that across all demographic groups women’s mean GPA is roughly 0.2 grade points higher than men’s. The effect sizes (Cohen’s $d$) of this difference range from small to medium Cohen (1988). 
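For reference, the effect size used throughout is the standardized mean difference; a minimal sketch with toy numbers follows (we assume the standard pooled-standard-deviation form of Cohen's d with no small-sample correction, since the paper does not state one):

```python
import numpy as np

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's d with pooled standard deviation (Cohen, 1988)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# e.g., women vs. men within one demographic group (toy GPAs):
women = np.array([3.4, 3.1, 3.6, 3.2])
men   = np.array([3.1, 2.9, 3.3, 3.0])
print(round(cohens_d(women, men), 2))  # a positive d favors women
```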
This difference in mean GPA earned is substantial enough to indicate a change in letter grade, given that the grading system at the studied university uses increments of 0.25 grade points for letter grades containing “$+$” or “$-$.” Further, this trend holds in the fifth year (Fig. 1e) and sixth year (Fig. 1f), with some exceptions in demographic groups with particularly low sample sizes after the fourth year. ### IV.3 STEM GPA Trends In order to answer RQ3, Figure 2 plots students’ mean STEM GPA in a similar manner to Fig. 1. We note that the general “dinosaur” pattern discussed in Fig. 1 also holds at least for the first and second years (Figs. 2a and 2b, respectively). In the third year and beyond, the general features of the trend continue to hold, with the most privileged students having the highest mean GPA, followed by those with one disadvantage as well as the first-generation and low-income group, followed by the remaining groups of URM students with one or more additional disadvantages. However, in these later years, the finer details of the plots noted before fall away in favor of a sharper mean GPA decrease for URM students with at least one additional disadvantage in the third year (Fig. 2c) and a more gradual decrease across all groups in the fourth year (Fig. 2d) and fifth year (Fig. 2e). When restricting the GPA calculations to STEM courses, the sample size becomes too small in the sixth year (Fig. 2f) to draw meaningful conclusions. Figure 2: Average STEM GPA of each demographic group. Students are binned into separate demographic groups based on their status as first-generation (FG), low-income (LI), and/or underrepresented minority (URM) students. The men and women in each demographic group are plotted separately. The mean GPA in all courses taken by students in each demographic group is plotted along with the standard error on the mean, with a separate plot for each of the (a) first, (b) second, (c) third, and (d) fourth, (e) fifth, and (f) sixth years. The sample size is reported by each point, and Cohen’s $d$ Cohen (1988) measuring the effect size of the gender difference in each group is reported. We further observe a trend of students earning higher grades on average in later years, although the rise from the first to the fourth year is somewhat lower in STEM GPA than in overall GPA. Notably, while in overall GPA this trend seemed to be somewhat universal across demographic groups, in Fig. 2 we see a quicker rise in mean STEM GPA over time for the more privileged students than the less privileged students, particularly comparing the leftmost and rightmost groups. Regarding gender differences, Fig. 2 shows smaller gender differences in STEM GPA than those observed in overall GPA in Fig. 1. While in overall GPA women earned roughly 0.2 grade points more than men on average, in STEM GPA that difference is much less consistent and typically ranges from 0 to 0.1 grade points. For many demographic groups we see no significant differences between men and women’s mean STEM GPA. We do see that there is still a consistent STEM GPA gender difference, albeit smaller than in Fig. 1, among the group of the most privileged students (i.e., those with “None” of the disadvantages). There is also a STEM GPA gender difference among first-generation low-income but non-URM students, however this difference is less consistent and in fact briefly vanishes in the third year. 
### IV.4 GPA Trends By Major Over Time

In order to better understand the trends over time in both overall and STEM GPA and answer RQ4, we plotted the mean GPA by year in Fig. 3 and mean STEM GPA by year in Fig. 4. In these plots, we have not separated men and women and instead focus on the other demographic characteristics while further grouping students into three different groups of majors in order to understand if these trends differ for students in different areas of study. Further, since the sample size becomes quite small in years five and six for many of the demographic groups of interest, we plot only the mean GPA over the first four years. In Figs. 3a and 4a, we plot the mean overall and STEM GPA, respectively, of all students. In the other subfigures, we plot the mean GPA earned by students majoring in different clusters of majors. In particular, we plot the mean GPA of engineering (including computer science), mathematics, and physical science (i.e., chemistry and physics) majors in Figs. 3b and 4b, the remaining STEM majors in Figs. 3c and 4c, and non-STEM majors in Figs. 3d and 4d.

Figure 3: Students are binned into separate demographic groups as in Fig. 1, but not separated by gender. The mean GPA in all courses of each group is plotted over time from year one to four, along with the standard error of the mean. The plots show this for four subpopulations: (a) all students; (b) chemistry, computer science, engineering, mathematics, and physics students; (c) biology, economics, geology, neuroscience, and statistics students; and (d) non-STEM students including psychology.

Figure 4: Students are binned into separate demographic groups as in Fig. 2, but not separated by gender. The mean GPA in STEM courses of each group is plotted over time from year one to four along with the standard error of the mean. The plots show this for four subpopulations: (a) all students; (b) chemistry, computer science, engineering, mathematics, and physics students; (c) biology, economics, geology, neuroscience, and statistics students; and (d) non-STEM students including psychology.

These plots make clearer some of the trends noted earlier, especially the rise in mean GPA over time from the first to the fourth year. However, we can now see that this is not universally true since the first-generation URM students have a drop in mean GPA in the second year for physical science majors (Fig. 3b), and in the third year for other STEM majors (Fig. 3c). This trend is even more noticeable in STEM GPA (Fig. 4), where the mean STEM GPA of the group of first-generation URM students drops in the third year for every subpopulation by major.

## V Discussion

To start, we consider how much the current system disadvantages students who are first-generation, low-income, or underrepresented minority but not a combination of these. Discussing these groups first is helpful in setting the stage for a more complex discussion of the intersectionality of these various demographic characteristics. We find in Figs. 1 and 2 that not all of these disadvantages are equal. In particular, non-URM students who have one disadvantage, namely the first-generation (but not low-income) and low-income (but not first-generation) students, still earn slightly higher grades than even the URM students who are not low-income or first-generation. Notably, this trend (the “back” of the dinosaur plots) is similar in both overall grades (Fig. 1) and in STEM grades alone (Fig. 2).
The size of this mean grade difference varies from year to year, but in STEM grades it can reach as high as about 0.25 grade points, which at the studied institution is the difference between, for example, a B and B$+$ or B$-$ grade. The group with the grades most similar to these non-first-generation, non-low-income URM students is the first-generation, low-income non-URM students, who earn both overall (Fig. 1) and STEM (Fig. 2) grades similar to or very slightly higher than the URM students. One explanation could be that the lack of resources available due to being first-generation or low-income is not as severe an obstacle as the stereotype threat experienced by URM students.

Turning then to the “tail” in the dinosaur plots, we find that consistently the most disadvantaged students in both overall grades (Fig. 1) and STEM grades (Fig. 2) are the URM students with at least one additional obstacle. In this case, it appears that the intersection of being low-income and URM is the most disadvantageous combination, with no notable difference in either Fig. 1 or Fig. 2 among these students whether or not they are also first-generation. Meanwhile, the first-generation URM students who are not low-income sometimes have a slightly higher mean GPA than the low-income URM students (Fig. 1).

Another avenue to investigate intersectionality is how gender interacts with the other demographic groups. Interestingly, in overall GPA (Fig. 1), gender appears to have about the same effect across all demographic groups. That is, there does not appear to be an intersectional effect of gender identity with other identities as measured by overall GPA. However, Fig. 2 shows that this is a context-dependent effect, with the gender gap substantially and unevenly reduced across all groups in mean STEM GPA. For most demographic groups in Fig. 2, the higher overall GPA earned by women in Fig. 1 has vanished completely in STEM GPA. This is consistent with stereotype threat being the mechanism of disadvantage for women, where stereotypes surrounding STEM disciplines unfairly cause stress and anxiety for women Astin (1993); Cross (1993); Felder _et al._ (1995, 1998); Britner and Pajares (2006); Basile and Lopez (2015); Cheryan _et al._ (2017); Hilts _et al._ (2018). Notably, while the gender gap is reduced nearly to zero for most groups in Fig. 2, there does remain a small consistent gender gap favoring women in the most privileged group of students. In other groups the gender gap in Fig. 2 is inconsistent across years. One explanation could be that the wealth of resources available to them may help to alleviate the stereotype threat.

Taking a more temporal view of these GPA trends, Fig. 3 (overall GPA) and Fig. 4 (STEM GPA) have grouped men and women together in order to focus on the other demographic characteristics more closely. In these plots, the most noteworthy trend is again that, with the sole exception of the first year in Fig. 3b, the four groups with the lowest mean GPA (Fig. 3) and STEM GPA (Fig. 4) across the first four years are always the four groups containing URM students. Notably, this trend is true regardless of which group of majors we investigate. The consistency of this result is particularly striking, showing that the most otherwise disadvantaged non-URM students have fewer obstacles to success than even the most privileged URM students. Focusing further on the STEM GPA of STEM majors in Figs.
4b and 4c, we see that while non-URM students consistently rise in mean GPA over time, the same is not true for all URM students. In particular, the first-generation URM students who major in chemistry, computer science, engineering, mathematics, or physics (Fig. 4b) experience a steady decline in mean STEM GPA from year one to two and year two to three. While the standard error of those means is quite large due to a relatively small sample size, that lack of representation for these students could itself be what is hindering their coursework by causing a stereotype threat. Based upon the frameworks of critical theory and intersectionality, the main implication of these findings is that many students who come from less privileged backgrounds are not being adequately supported in college in order to catch up with the privileged students Crenshaw _et al._ (1995); Kellner (2003); Yosso (2005); Gutiérrez (2009); Taylor _et al._ (2009); Tolbert _et al._ (2018); Schenkel and Calabrese Barton (2020); Johnson (2012); Johnson _et al._ (2017); Crenshaw (1990); Cho _et al._ (2013); Mitchell _et al._ (2014); Charleston _et al._ (2014); Morton and Parsons (2018). The disadvantages of these less privileged students manifest as lower mean overall and STEM GPA for those demographic groups. In order to promote equity and inclusion, it is crucial that these students are provided appropriate mentoring, guidance, scaffolding, and support in college so that these obstacles can be cleared for students who have been put at a disadvantage relative to their peers through no fault of their own Birt _et al._ (2019). We note that these demographic groups with more disadvantages are likely to consist of students who had K-12 education from schools with fewer resources and less well-prepared teachers than those of the more privileged students, with high school being an especially important time for disadvantages related to STEM learning increasing Bianchini _et al._ (2003); Maltese and Tai (2011); Means _et al._ (2017); Bottia _et al._ (2018); Daley (2019); Dou _et al._ (2019). Analyses such as those discussed here can help inform the allocation of resources to support these students, with efforts to reduce the classroom stereotype threat of URM students and creating a low-anxiety environment in which all students have a high sense of belonging and can participate fully without fear of being judged being clear priorities. Additional resources to support low-income and/or first-generation students, e.g., financial support and timely advising pertaining to various academic and co-curricular opportunities, are also important in order to level the playing field and work towards a goal of all students succeeding in college, regardless of their race/ethnicity, socioeconomic status, and parental education history. ## VI Acknowledgments This research is supported by the National Science Foundation Grant DUE-1524575 and the Sloan Foundation Grant G-2018-11183. ## References * Johnson (2012) A. Johnson, Science Education 96, 960 (2012). * Johnson _et al._ (2017) A. Johnson, M. Ong, L. T. Ko, J. Smith, and A. Hodari, The Physics Teacher 55, 356 (2017). * Maltese and Tai (2011) A. V. Maltese and R. H. Tai, Science Education 95, 877 (2011). * Borrego _et al._ (2008) M. Borrego, R. A. Streveler, R. L. Miller, and K. A. Smith, Journal of Engineering Education 97, 147 (2008). * Borrego and Bernhard (2011) M. Borrego and J. Bernhard, Journal of Engineering Education 100, 14 (2011). * Borrego and Henderson (2014) M. Borrego and C. 
Henderson, Journal of Engineering Education 103, 220 (2014). * Henderson and Dancy (2008) C. Henderson and M. H. Dancy, American Journal of Physics 76, 79 (2008). * Dancy and Henderson (2010) M. Dancy and C. Henderson, American Journal of Physics 78, 1056 (2010). * Henderson _et al._ (2012) C. Henderson, M. Dancy, and M. Niewiadomska-Bugaj, Phys. Rev. ST Phys. Educ. Res. 8, 020104 (2012). * Baker and Inventado (2014) R. S. Baker and P. S. Inventado, in _Learning Analytics_ (Springer, 2014) pp. 61–75. * Papamitsiou and Economides (2014) Z. Papamitsiou and A. A. Economides, Journal of Educational Technology & Society 17, 49 (2014). * Lord _et al._ (2009) S. M. Lord, M. M. Camacho, R. A. Layton, R. A. Long, M. W. Ohland, and M. H. Wasburn, Journal of Women and Minorities in Science and Engineering 15, 167 (2009). * Lord _et al._ (2015) S. M. Lord, R. A. Layton, and M. W. Ohland, IEEE Transactions on Education 58, 141 (2015). * Ohland and Long (2016) M. W. Ohland and R. A. Long, Advances in Engineering Education 5, 1 (2016). * Matz _et al._ (2017) R. L. Matz, B. P. Koester, S. Fiorini, G. Grom, L. Shepard, C. G. Stangor, B. Weiner, and T. A. McKay, AERA Open 3, 1 (2017). * Witherspoon and Schunn (2019) E. B. Witherspoon and C. D. Schunn, Science Education 104, 144 (2019). * Crenshaw _et al._ (1995) K. Crenshaw, N. Gotanda, G. Peller, and K. Thomas, _Critical Race Theory: The Key Writings that Formed the Movement_ (The New Press, 1995). * Kellner (2003) D. Kellner, Democracy & Nature 9, 51 (2003). * Yosso (2005) T. J. Yosso, Race Ethnicity and Education 8, 69 (2005). * Gutiérrez (2009) R. Gutiérrez, Teaching for Excellence and Equity in Mathematics 1, 4 (2009). * Taylor _et al._ (2009) E. Taylor, D. Gillborn, and G. Ladson-Billings, _Foundations of critical race theory in education_ (Routledge, 2009). * Tolbert _et al._ (2018) S. Tolbert, A. Schindel, and A. J. Rodriguez, Science Education 102, 796 (2018). * Schenkel and Calabrese Barton (2020) K. Schenkel and A. Calabrese Barton, Science Education (2020). * Solorzano _et al._ (2000) D. Solorzano, M. Ceja, and T. Yosso, Journal of Negro Education 69, 60 (2000). * Lewis _et al._ (2009) J. L. Lewis, H. Menzies, E. I. Nájera, and R. N. Page, Science Education 93, 961 (2009). * Bang and Medin (2010) M. Bang and D. Medin, Science Education 94, 1008 (2010). * Estrada _et al._ (2018) M. Estrada, A. Eroy-Reveles, and J. Matsui, Social Issues and Policy Review 12, 258 (2018). * Ong _et al._ (2018) M. Ong, J. M. Smith, and L. T. Ko, Journal of Research in Science Teaching 55, 206 (2018). * Green _et al._ (2019) A. M. Green, B. R. Brand, and G. E. Glasson, Science Education 103, 241 (2019). * Mutegi _et al._ (2019) J. W. Mutegi, B. Sorge, G. A. Fore, and G. S. Gibau, Science Education 103, 1456 (2019). * Sheth (2019) M. J. Sheth, Science Education 103, 37 (2019). * Bancroft (2018) S. F. Bancroft, Science Education 102, 1319 (2018). * Crenshaw (1990) K. Crenshaw, Stan. L. Rev. 43, 1241 (1990). * Cho _et al._ (2013) S. Cho, K. W. Crenshaw, and L. McCall, Signs: Journal of Women in Culture and Society 38, 785 (2013). * Mitchell _et al._ (2014) J. D. Mitchell, C. Y. Simmons, and L. A. Greyerbiehl, _Intersectionality & Higher Education_ (Peter Lang, 2014). * Charleston _et al._ (2014) L. Charleston, R. P. Adserias, N. M. Lang, and J. F. Jackson, Journal of Progressive Policy & Practice 2, 273 (2014). * Morton and Parsons (2018) T. R. Morton and E. C. Parsons, Science Education 102, 1363 (2018). * Lam _et al._ (2005) P. C. Lam, T. Srivatsan, D. Doverspike, J. 
Vesalo, and P. R. Mawasha, Journal of STEM Education: Innovations and Research 6, 14 (2005). * Dika and D’Amico (2016) S. L. Dika and M. M. D’Amico, Journal of Research in Science Teaching 53, 368 (2016). * Katrevich and Aruguete (2017) A. V. Katrevich and M. S. Aruguete, Journal of STEM Education: Innovations and Research 18, 40 (2017). * Astin (1993) A. W. Astin, _What Matters in College_ , Vol. 9 (Jossey-Bass, 1993). * Cross (1993) K. P. Cross, Journal of Engineering Education 82, 9 (1993). * Felder _et al._ (1995) R. M. Felder, G. N. Felder, M. Mauney, C. E. Hamrin Jr., and E. J. Dietz, Journal of Engineering Education 84, 151 (1995). * Felder _et al._ (1998) R. M. Felder, G. N. Felder, and E. J. Dietz, Journal of Engineering Education 87, 469 (1998). * Bianchini _et al._ (2002) J. A. Bianchini, D. J. Whitney, T. D. Breton, and B. A. Hilton-Brown, Science Education 86, 42 (2002). * Britner and Pajares (2006) S. L. Britner and F. Pajares, Journal of Research in Science Teaching: The Official Journal of the National Association for Research in Science Teaching 43, 485 (2006). * Bianchini (2013) J. A. Bianchini, Science Education 97, 163 (2013). * Basile and Lopez (2015) V. Basile and E. Lopez, Science Education 99, 519 (2015). * Cheryan _et al._ (2017) S. Cheryan, S. A. Ziegler, A. K. Montoya, and L. Jiang, Psychological Bulletin 143, 1 (2017). * Hilts _et al._ (2018) A. Hilts, R. Part, and M. L. Bernacki, Science Education 102, 744 (2018). * Indiana University Center for Postsecondary Research (2018) Indiana University Center for Postsecondary Research, _The Carnegie Classification of Institutions of Higher Education_, Tech. Rep. (Indiana University Center for Postsecondary Research, Bloomington, IN, 2018). * Cauthen and Fass (2007) N. K. Cauthen and S. Fass, _Measuring Income and Poverty in the United States_ , Tech. Rep. (National Center for Children in Poverty, 2007). * (53) Y. Jiang, M. Ekono, and C. Skinner, _Basic facts about low-income children_ , Tech. Rep. (National Center for Children in Poverty). * Freedman _et al._ (2007) D. Freedman, R. Pisani, and R. Purves, _Statistics_ , 4th ed. (W. W. Norton & Co., 2007). * Cohen (1988) J. Cohen, _Statistical Power Analysis for the Behavioral Sciences_ , 2nd ed. (Lawrence Erlbaum Associates, 1988). * Neter _et al._ (2004) J. Neter, M. H. Kutner, C. J. Nachtsheim, and W. Wasserman, _Applied Linear Statistical Models_ , 5th ed. (McGraw-Hill/Irwin, 2004). * Montgomery _et al._ (2012) D. C. Montgomery, E. A. Peck, and G. G. Vining, _Introduction to Linear Regression Analysis_ , 4th ed. (John Wiley & Sons, 2012). * R Core Team (2019) R Core Team, _R: A Language and Environment for Statistical Computing_, R Foundation for Statistical Computing, Vienna, Austria (2019). * Wickham (2017) H. Wickham, _tidyverse: Easily Install and Load the ‘tidyverse’_ (2017), R package version 1.2.1. * Birt _et al._ (2019) J. A. Birt, M. Khajeloo, C. C. Rega-Brodsky, M. A. Siegel, T. S. Hancock, K. Cummings, and P. D. Nguyen, Science Education 103, 770 (2019). * Bianchini _et al._ (2003) J. A. Bianchini, C. C. Johnston, S. Y. Oram, and L. M. Cavazos, Science Education 87, 419 (2003). * Means _et al._ (2017) B. Means, H. Wang, X. Wei, S. Lynch, V. Peters, V. Young, and C. Allen, Science Education 101, 681 (2017). * Bottia _et al._ (2018) M. C. Bottia, E. Stearns, R. A. Mickelson, and S. Moller, Science Education 102, 85 (2018). * Daley (2019) S. G. Daley, Science Education 103, 1306 (2019). * Dou _et al._ (2019) R. Dou, Z. Hazari, K. Dabney, G. Sonnert, and P. 
Sadler, Science Education 103, 623 (2019).
2024-09-04T02:54:58.547629
2020-03-09T20:06:20
2003.04389
{ "authors": "Adam Gaier, Alexander Asteroth, Jean-Baptiste Mouret", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26126", "submitter": "Adam Gaier", "url": "https://arxiv.org/abs/2003.04389" }
arxiv-papers
# Discovering Representations for Black-box Optimization

Adam Gaier (Inria, CNRS, Université de Lorraine; Bonn-Rhein-Sieg University of Applied Sciences), Alexander Asteroth (Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany), and Jean-Baptiste Mouret (Inria, CNRS, Université de Lorraine, Nancy, France)

Genetic and Evolutionary Computation Conference (GECCO ’20), July 8–12, 2020, Cancún, Mexico. DOI: 10.1145/3377930.3390221

###### Abstract.

The encoding of solutions in black-box optimization is a delicate, handcrafted balance between expressiveness and domain knowledge — between exploring a wide variety of solutions, and ensuring that those solutions are useful. Our main insight is that this process can be automated by generating a dataset of high-performing solutions with a quality diversity algorithm (here, MAP-Elites), then learning a representation with a generative model (here, a Variational Autoencoder) from that dataset. Our second insight is that this representation can be used to scale quality diversity optimization to higher dimensions — but only if we carefully mix solutions generated with the learned representation and those generated with traditional variation operators. We demonstrate these capabilities by learning a low-dimensional encoding for the inverse kinematics of a thousand-joint planar arm. The results show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites, and that, once solved, the produced encoding can be used for rapid optimization of novel, but similar, tasks. The presented techniques not only scale up quality diversity algorithms to high dimensions, but show that black-box optimization encodings can be automatically learned, rather than hand designed.

Figure 1. Data-Driven Encoding MAP-Elites (DDE-Elites) searches the space of representations to search for solutions. A data-driven encoding (DDE) is learned by training a VAE on the MAP-Elites archive. High fitness solutions, which increase the bias of the DDE toward performance, are found using the DDE. Novel solutions, which increase the range of solutions which can be expressed, are found using mutation operators. UCB1, a bandit algorithm, balances the mix of these explorative and exploitative operators.

## 1\. Introduction

The method of encoding solutions is one of the most critical design decisions in optimization, as the representation defines the way an algorithm can move in the search space (Rothlauf, 2006).
Work on representations tends to focus on encoding priors or innate biases: aerodynamic designs evolved with splines to encourage smooth forms (Olhofer et al., 2001), Compositional Pattern Producing Networks (CPPNs) with biases for symmetry and repetition in images and neural network weight patterns (Stanley, 2007; Stanley et al., 2009), modularity induced in evolved neural networks (Mouret and Doncieux, 2008; Durr et al., 2010; Doncieux and Meyer, 2004), or neural network structures which encode strong enough biases to perform without training (Gaier and Ha, 2019). The best representations balance a bias for high performing solutions, so they can easily be discovered, and the ability to express a diversity of potential solutions, so the search space can be widely explored. At the one extreme, a representation which only encodes the global optimum is easy to search but useless for finding any other solution. At the other, a representation which can encode anything presents a difficult and dauntingly vast search space. Given a large set of example solutions, representations could be learned from data instead of being hand-tailored by trial-and-error: a learned representation would replicate the same biases toward performance and the same range of expressivity as the source data set. For instance, given a dataset of face images, a Variational Autoencoder (VAE) (Kingma and Welling, 2014) or a Generative Adversarial Network (GAN) (Goodfellow et al., 2014) can learn a low-dimensional latent space, or encoding, that makes it possible to explore the space of face images. In essence, the decoder which maps the latent space to the phenotypic space learns the “recipe” of faces. Importantly, the existence of such a low-dimensional latent space is possible because _the dataset is a very small part of the set of all possible images_. However, using a dataset of preselected high-performing solutions “traps” the search within the distribution of solutions that are already known: a VAE trained on white faces will never generate a black face. This limits the usefulness of such data-driven representations for discovering _novel_ solutions to hard problems. In this paper, we propose the use of the MAP-Elites algorithm (Mouret and Clune, 2015) to automatically generate a dataset for representations using only a performance function and a diversity space. Quality diversity (QD) algorithms (Cully and Demiris, 2018; Pugh et al., 2016) like MAP-Elites are a good fit for representation discovery: creating archives of diverse high- performing solutions is precisely their purpose. Using the MAP-Elites archive as a source of example solutions, we can capture the genetic distribution of the highest performing solutions, or elites, by training a VAE and obtaining a latent representation. As the VAE is only trained on elites, this learned representation, or Data-Driven Encoding (DDE), has a strong bias towards solutions with high fitness; and because the elites have varying phenotypes, the DDE is able to express a range of solutions. Though the elites vary along a phenotypic continuum, they commonly have many genotypic similarities (Vassiliades and Mouret, 2018), making it more likely to find a well- structured latent space. Nonetheless, MAP-Elites will struggle to find high-performing solutions without an adequate representation. 
Fortunately, the archive is produced by MAP-Elites in an iterative, any-time fashion, so there is no “end state” to wait for before a DDE can be trained — a DDE can be trained during optimization. The DDE can then be used to enhance optimization. By improving the quality of the archive the DDE improves the quality of its own source data, establishing a virtuous cycle of archive and encoding improvement. A DDE based on an archive will encounter the same difficulty as any learned encoding: the DDE can only represent solutions that are already in the dataset. How then, can we discover new solutions? Fundamentally, to search for an encoding we need to both _exploit the best known representation_ , that is, create better solutions according to the current best “recipes”, and also _explore new representations_ — solutions which do not follow any “recipe”. In this paper, we address this challenge by mixing solutions generated with the DDE with solutions obtained using standard evolutionary operators. Our algorithm applies classic operators, such as Gaussian mutation, to create candidates which could not be captured by the current DDE. At the same time we leverage the DDE to generalize common patterns across the map and create new solutions that are likely to be high-performing. To avoid introducing new hyper-parameters, we tune this exploration/exploitation trade-off optimally using a multi-armed bandit algorithm (Garivier and Moulines, 2011). This new algorithm, DDE-Elites, reframes optimization as a search for representations (Figure 1). Integrating MAP-Elites with a VAE makes it possible to apply quality diversity to high-dimensional search spaces, and to find effective representations for future uses. We envision application to domains that have straightforward but expansive low-level representations, for instance: joints positions at 20Hz for a walking robot ($12\times 100=1200$ joint positions for a 5-second gait of a robot with $12$ degrees of freedom), 3D shapes in which each voxel is encoded individually (1000-dimensional for a $10\times 10\times 10$ grid), images encoded in the pixel-space, etc. Ideally, the generated DDE will capture the main regularities of the domain. In robot locomotion, this could correspond to periodic functions, since we already know that a $36$-dimensional controller based on periodic functions can produce the numerous joint commands required every second to effectively drive a 12-joint walking robot in many different ways (Cully et al., 2015). In many domains the space of possible solutions can be vast, while the inherent dimensionality of interesting solutions is still compact. By purposefully seeking out a space of solutions, rather than the solutions themselves, we can solve high-dimensional problems in a lower dimensional space. ## 2\. Background ### 2.1. Optimization of Representations In his 30 year perspective on adaptation in evolutionary algorithms, Kenneth De Jong identified representation adaptation as ”perhaps the most difficult and least understood area of EA design.” (De Jong, 2007) Despite the difficulty of creating adaptive encodings, the potential rewards have lured researchers for decades. Directly evolving genotypes to increase in complexity has a tradition going back to the eighties (Goldberg et al., 1989; Altenberg, 1994). 
The strategy of optimizing a solution at low complexity and then adding degrees of freedom has proved effective on problems from optimal control (Gaier and Asteroth, 2014), to aerodynamic design (Olhofer et al., 2001), to neural networks (Stanley and Miikkulainen, 2002). Evolving the genome’s structure is particularly important when the structure itself is the solution, such as in genetic programming (Koza, [n. d.]) or neural architecture search (Elsken et al., 2019; Miikkulainen et al., 2019; Gaier and Ha, 2019). Recent approaches toward representation evolution have focused on genotype- phenotype mappings (Bongard and Pfeifer, 2003). Neural networks, which map between inputs and outputs, are a natural choice for such ‘meta- representations’. These mappings can evolve with the genome (Scott and Bassett, 2015; Simões et al., 2014), or fix the genome and evolve only the mapping (Stanley et al., 2009; Stanley, 2007). Supervised methods have been previously applied to learn encodings. These approaches require a set of example solutions for training. Where large, well- curated data sets are available this strategy has proven effective at creating representations well suited to optimization (Volz et al., 2018; Bontrager et al., 2018b; Bontrager et al., 2018a), but where a corpus of solutions does not exist it must be created. In (Scott and De Jong, 2018; Moreno et al., 2018) these solutions were collected by saving the champion solutions found after repeatedly running an optimizer on the problem, with the hope that the learned representation would then be effective in similar classes of problems. ### 2.2. MAP-Elites MAP-Elites (Mouret and Clune, 2015) is a QD algorithm which uses a niching approach to produce high-performing solutions which span a continuum of user- defined phenotypic dimensions. These phenotypic dimensions, or behavior descriptors, describe the way the problem is solved, and are often orthogonal to performance. MAP-Elites has been used in such diverse cases as optimizing the distance traveled by a walking robot using different legs (Cully et al., 2015), the drag of aerodynamic designs with varied volumes and curvatures (Gaier et al., 2017), and the win rate of decks composed of different cards in deck-building games (Fontaine et al., 2019). MAP-Elites is a steady-state evolutionary algorithm which maintains a population in a discretized grid or ‘archive’. This grid divides the continuous space of possible behaviors into bins, or ‘niches’ with each bin holding a single individual, or ‘elite’. These elites act as parents, and are mutated to form new individuals. These child individuals are evaluated and assigned a niche based on their behavior. If the niche is empty the child is placed inside; if the niche is already occupied, the individual with higher fitness is stored in the niche and the other discarded. By repeating this process, increasingly optimal solutions which cover the range of phenotype space are found. The MAP-Elites algorithm is summarized in Algorithm 1. 
Algorithm 1 MAP-Elites

function MAP-Elites($fitness()$, $variation()$, $\mathcal{X}_{initial}$)
  $\mathcal{X}\leftarrow\emptyset$, $\mathcal{F}\leftarrow\emptyset$  $\triangleright$ Map of genomes $\mathcal{X}$, and fitnesses $\mathcal{F}$
  $\mathcal{X}\leftarrow\mathcal{X}_{initial}$  $\triangleright$ Place initial solutions in map
  $\mathcal{F}\leftarrow fitness(\mathcal{X}_{initial})$
  for iter = $1\to I$ do
    $\mathbf{x^{\prime}}\leftarrow variation(\mathcal{X})$  $\triangleright$ Create new solution from elites
    $\mathbf{f^{\prime}},\mathbf{b^{\prime}}\leftarrow fitness(\mathbf{x^{\prime}})$  $\triangleright$ Get performance and behavior
    if $\mathcal{F}(\mathbf{b^{\prime}})=\emptyset$ or $\mathcal{F}(\mathbf{b^{\prime}})<\mathbf{f^{\prime}}$ then  $\triangleright$ Replace if better
      $\mathcal{F}(\mathbf{b^{\prime}})\leftarrow\mathbf{f^{\prime}}$
      $\mathcal{X}(\mathbf{b^{\prime}})\leftarrow\mathbf{x^{\prime}}$
    end if
  end for
  return $(\mathcal{X},\mathcal{F})$  $\triangleright$ Return illuminated map
end function
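To make Algorithm 1 concrete, below is a minimal Python sketch of the loop. The scalar behavior descriptor, the uniform niche discretization, and the `fitness(x) -> (f, b)` signature are illustrative simplifications, not the exact implementation used here (the arm task uses a 2D descriptor and Voronoi niches).

```python
import numpy as np

def map_elites(fitness, x_init, n_iters=100000, n_niches=100, sigma=0.1):
    """Minimal MAP-Elites loop (cf. Algorithm 1).

    `fitness(x)` is assumed to return a (fitness, behavior) pair with the
    behavior descriptor a scalar in [0, 1].
    """
    X, F = {}, {}  # archive: niche index -> genome / fitness
    for x in x_init:  # place initial solutions in the map
        f, b = fitness(x)
        niche = min(int(b * n_niches), n_niches - 1)
        if niche not in F or F[niche] < f:
            X[niche], F[niche] = x, f
    for _ in range(n_iters):
        parent = X[np.random.choice(list(X))]                  # random elite
        child = parent + sigma * np.random.randn(len(parent))  # isometric mutation
        f, b = fitness(child)
        niche = min(int(b * n_niches), n_niches - 1)
        if niche not in F or F[niche] < f:                     # replace if better
            X[niche], F[niche] = child, f
    return X, F  # the illuminated map
```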
Though phenotypically diverse the elites are often genotypically similar, existing in an “elite hypervolume”, a high performing region of genotype space (Vassiliades and Mouret, 2018). Just as in nature, where species as diverse as fruit flies and humans share nearly 60 percent of their genome (Adams et al., 2000), the “recipe” for high performance is often composed of many of the same ingredients. This insight was leveraged in (Vassiliades and Mouret, 2018) to create a new variation operator which considers the correlation among elites. Genes which vary little across the elites, and so are likely common factors that produce high performance, are also subject to the smallest amount of perturbation — lowering the chance their children stray from the elite hypervolume. Biasing mutation in this way ensures that exploration is focused on factors which induce phenotypic variation without drifting into regions of poor performance.

### 2.3. Variational Autoencoders

Autoencoders (AEs) (Hinton and Salakhutdinov, 2006) are neural networks designed to perform dimensionality reduction. AEs are composed of two components: an encoder, which maps the input to a lower dimensional latent space; and a decoder, which maps the latent space back to the original space. The decoder is trained to reconstruct the input through this lower dimensional latent “bottleneck”. The encoder component can be viewed as a generalization of Principal Component Analysis (Wold et al., 1987), with the latent space approximating principal components.

Though the AE is able to represent the data at a lower dimensionality, and reproduce it with minimal loss, it can still be a poor representation for optimization. An important quality of representations is ‘locality’: that a small change in the genotype induces a small change in the phenotype (Rothlauf, 2006). When AEs are trained only to minimize reconstruction error they may overfit the distribution of the training data and create an irregular latent space. The low locality of such latent spaces limits their usefulness in optimization: nearby points in latent space may decode to very different solutions, meaning even a small mutation could have a large effect.

Variational autoencoders (VAEs) (Kingma and Welling, 2014) are AEs whose training is regularized to ensure a high-locality latent space. The architecture is broadly the same: an encoder and decoder mediated by a bottleneck, but rather than encoding the input as a single point it is encoded as a normal distribution in the latent space. When training the model a point from this input distribution is sampled, decoded, and the reconstruction error computed. By encoding the input as a normal distribution we induce the distributions produced by the encoder to be closer to normal. VAEs are trained by minimizing two terms: (1) the reconstruction error, and (2) the Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951) of the latent space to a unit Gaussian distribution, giving the loss function:

(1) $loss=\|x-\hat{x}\|^{2}+KL\left[N\left(\mu_{x},\sigma\right),N(0,1)\right]$

Inducing solutions to be encoded in the form of a normal distribution structures the latent space in a continuous and overlapping way, creating a local encoding better suited to optimization.
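As a concrete reference for Eq. (1), below is a minimal VAE sketch in PyTorch (the framework and layer sizes are assumptions for illustration): the encoder outputs the mean and log-variance of $q(z|x)$, a latent point is sampled with the reparameterization trick, and the loss sums the reconstruction and KL terms.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim, latent_dim=32, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    """Eq. (1): reconstruction error plus KL divergence to a unit Gaussian."""
    recon = ((x - x_hat) ** 2).sum(dim=1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return (recon + kl).mean()
```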
## 3\. DDE-Elites

Figure 2. DDE-Elites Algorithm: (1) A VAE is trained on the archive, and used to create a ‘reconstructive crossover’ operator which creates new solutions by averaging the parameters of an individual with its own reconstruction; (2) the mix of exploitative and explorative variation operators predicted to have the most success is chosen by the multi-armed bandit algorithm UCB1 and used to create new solutions; (3) the new solutions are added to the archive and the success rate of the applied variation operator is updated.

Every representation biases optimization in some way, improving optimization by limiting the range of solutions that can be expressed to those which are valid or high-performing (Rothlauf, 2006). But finding a balance between expressivity and bias is an arduous task requiring considerable domain expertise. Our method, DDE-Elites, automates the process of representation design and learns new encodings in tandem with search — allowing optimization and representation learning to improve each other in a self-reinforcing cycle.

DDE-Elites learns an encoding from examples of high performing solutions. To create these examples we use MAP-Elites, which produces a variety of high performing solutions rather than converging to a single optimum. The variety produced by MAP-Elites is critical — the expressivity of any learned encoding is limited by the variety of examples. That MAP-Elites not only produces a variety of solutions, but allows us to define the nature of that variety, makes it particularly powerful for crafting useful representations. By defining the type of variety we want to explore we are defining the biases and expressivity we encode in our representation.

DDE-Elites is a variant of the MAP-Elites algorithm. The core component of competition within a niched archive is maintained, but novel methods of producing child solutions are introduced. Child solutions are created using an encoding learned from the archive. This encoding is refined as the archive improves, which in turn improves the optimization process. DDE-Elites optimizes an archive of varied solutions by reframing optimization as a search for the best representation, rather than the best solution.

The DDE-Elites algorithm proceeds as follows (see Figure 2 and Algorithm 2): (1) a DDE and reconstructive crossover operator is created by training a VAE on the archive; (2) the probability of using each variation operator is determined by the UCB1 bandit algorithm; (3) MAP-Elites is run with the chosen variation operator probabilities. The success rate of the variation operators in creating solutions is used to update the bandit, and the improved archive is used to create a new DDE and reconstructive crossover operator.

#### Data-Driven Encoding

The MAP-Elites archive is a record of the highest-performing solutions yet found in each bin. When the archive is updated the VAE is trained to reconstruct the individuals in the archive. Reconstruction is a mapping from one phenotype to another, mediated through latent space; and the mapping from latent space to phenotype space is analogous to a genotype-phenotype mapping, which we refer to as a Data-Driven Encoding (DDE). Features common in high performing solutions will be the most successfully compressed and reconstructed — and features widely shared by high performing solutions are likely to lead to high performance. Critically, by training the encoding only on high-performing solutions we bias the space of solutions the DDE can express to those with high performance.

#### Reconstructive Crossover

By limiting the range of solutions which can be expressed by a representation, we are able to bias the solutions found during search. When a solution is reconstructed with the VAE it is mapped onto the restricted space of solutions expressible by the DDE — a space characterized by high performance. Reconstructing individuals with the VAE can create new solutions with higher fitness than the originals, but cannot create novel solutions. Solutions created by the DDE are based on those already in the archive, so cannot reach solutions which lie outside of the encoded distribution. At early stages of optimization when there are few example solutions, using only reconstruction to create new solutions would doom our encoding to a small region of expression. Rather than completely replacing individuals with their reconstructions we instead shift them closer to forms expressible by the DDE with a new variation operator, reconstructive crossover. Child solutions are created by performing crossover with two parents: a parent chosen from the archive and its reconstruction. Crossover takes the form of an element-wise mean of the parameter vectors:

(2) $\mathbf{x}_{i}^{(t+1)}=\frac{1}{2}\left(\mathbf{x}_{i}^{(t)}+VAE.Decode(VAE.Encode(\mathbf{x}_{i}^{(t)}))\right)$

The reconstructive crossover operator slows the loss of diversity by only moving an individual toward the distribution of solutions encoded by the DDE, not directly into it. By only shifting solutions rather than replacing them, we allow exploration outside of the distribution to continue. Even when there is little gain in fitness, solutions that are the result of reconstructive crossover have a lower inherent dimensionality, on account of having parents pass through the compressive bottleneck of the VAE. In this way the reconstructive crossover operator not only spreads globally advantageous genes throughout the archive, but also pulls the archive towards more easily compressed solutions.
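Eq. (2) translates directly into code. A minimal sketch, assuming the hypothetical `VAE` class sketched above and NumPy genomes:

```python
import torch

def reconstructive_crossover(x, vae):
    """Average a genome with its own VAE reconstruction (Eq. 2)."""
    with torch.no_grad():
        x_t = torch.as_tensor(x, dtype=torch.float32).unsqueeze(0)
        x_hat, _, _ = vae(x_t)  # encode, then decode
    return 0.5 * (x + x_hat.squeeze(0).numpy())
```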
#### Line Mutation

Reconstructive crossover enables effective optimization within the range of solutions that the DDE can express, but explorative operators are required to widen the pool of example solutions and improve the DDE. So when creating new solutions we choose to either produce them through reconstructive crossover, or through random mutation. In addition to isometric Gaussian mutation commonly used in MAP-Elites, we apply the line mutation operator proposed in (Vassiliades and Mouret, 2018). Line mutation imposes a directional component on the Gaussian perturbations. During mutation the parent genome is compared to a random genome from the archive. The variance of mutation in each dimension is then scaled by the difference in each gene:

(3) $\mathbf{x}_{i}^{(t+1)}=\mathbf{x}_{i}^{(t)}+\sigma_{1}\mathcal{N}(0,\mathbf{I})+\sigma_{2}\left(\mathbf{x}_{j}^{(t)}-\mathbf{x}_{i}^{(t)}\right)\mathcal{N}(0,1)$

where $\sigma_{1}$ and $\sigma_{2}$ are hyperparameters which define the relative strength of the isometric and directional mutations. Intuitively, when two genes have similar values the spread of mutation will be small; when the values are very different, the spread will be large.

In many cases certain parameter values will be correlated to high fitness, regardless of the individual’s place in behavior space. The line operator is a simple way of exploiting this similarity, but in contrast to reconstructive crossover does not limit expressivity – allowing it to be used as a method of exploring new solutions. Though both the reconstructive crossover and line mutation operators take advantage of the similarities between high performing individuals, their differing approaches allow them to be effectively combined as explorative and exploitative operators.
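A minimal sketch of Eq. (3); the default values of `sigma1` and `sigma2` are placeholders, as the settings used here are not restated:

```python
import numpy as np

def line_mutation(x_i, x_j, sigma1=0.01, sigma2=0.2):
    """Directional 'line' mutation (Eq. 3): isometric Gaussian noise plus
    noise scaled along the direction between two elites x_i and x_j."""
    iso = sigma1 * np.random.randn(len(x_i))
    directional = sigma2 * (x_j - x_i) * np.random.randn()  # one shared scalar draw
    return x_i + iso + directional
```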
#### Parameter Control

DDE-Elites explores the space of representations with the exploitative operator of reconstructive crossover, which finds high performing solutions similar to those already encoded by the DDE, and explorative operators of mutation, which expand the space of solutions beyond the range of the DDE. The optimal ratio to use these operators is not only domain dependent, but dependent on the stage of the algorithm. When the archive is nearly empty, it makes little sense to base a representation on a few randomly initialized solutions; once the behavior space has been explored, it is beneficial to continue optimization through the lens of the DDE; and when the archive is full of solutions produced by the DDE it is more useful to expand the range of possible solutions with mutation. These stages are neither predictable nor clear-cut, complicating the decision of when to use each operator.

Faced with a trade-off between exploration and exploitation we frame the choice of operators as a multi-armed bandit problem (Auer et al., 2002). Multi-armed bandits imagine sets of actions as levers on a slot machine, each with their own probability of reward. The goal of a bandit algorithm is to balance exploration, trying new actions, and exploitation, repeating actions that yield good rewards. Bandit approaches are straightforward to implement and have been previously used successfully to select genetic operators (DaCosta et al., 2008).

We define a set of possible actions as usage ratios between reconstructive crossover, line mutation, and isometric mutation. The ratio of $[\frac{1}{4},\frac{3}{4},0]$, for example, would have solutions created by reconstructive crossover with a probability of $\frac{1}{4}$, line mutation with a probability of $\frac{3}{4}$, and never with isometric mutation. Each action is used to create a batch of child solutions and a reward is assigned in proportion to the number of children who earned a place in the archive. At each generation a new action is chosen, and the reward earned for that action recorded. Actions are chosen based on UCB1 (Auer et al., 2002), a simple and effective bandit algorithm which minimizes regret. Actions with the greatest potential reward are chosen, calculated as:

(4) $Q(a)+\sqrt{(2\log t)/(N_{t}(a))}$

where $Q(a)$ is the reward for an action $a$, $t$ is the total number of actions that have been performed, and $N_{t}(a)$ the number of times that action has been performed. UCB1 is an optimistic algorithm which rewards uncertainty — given two actions with the same mean reward, the action which has been tried fewer times will be chosen. Our archive is in constant flux, and so the true reward of each mix of operators changes from generation to generation. To handle the non-stationary nature of the problem we use a sliding window (Garivier and Moulines, 2011), basing our predictions only on the most recent generations.

Algorithm 2 DDE-Elites

function DDE-Elites($fitness()$, $\mathcal{X}_{initial}$)
  $\mathcal{X}\leftarrow\mathcal{X}_{initial}$
  $\mathcal{V}$: Possible Variation Operator Probabilities (vector)
    (e.g., [0, 0.5, 0.5], [0.8, 0.0, 0.2], [1.0, 0.0, 0.0] for [xover, line, iso])
  successes $\leftarrow zeros(len(\mathcal{V}))$  $\triangleright$ # successes for each option
  selected $\leftarrow zeros(len(\mathcal{V}))$  $\triangleright$ # selections for each option
  for iter = $1\to I$ do
    — Train VAE on Current Archive —
    VAE.Train($\mathcal{X}$)
    — Choose Variation Based on UCB1 —
    $i\leftarrow\arg\max_{s}\left(\frac{\text{successes}[s]}{\text{selected}[s]}+\sqrt{\frac{2\ln(\text{sum}(\text{selected}))}{\text{selected}[s]}}\right)$
    — Run MAP-Elites Using Chosen Variation —
    $variation()\leftarrow\mathcal{V}[i]$
    $\mathcal{X^{\prime}}\leftarrow$ MAP-Elites($fitness()$, $variation()$, $\mathcal{X}$)
    — Track Performance of Chosen Variation —
    $selected[i]\leftarrow selected[i]+1$
    $successes[i]\leftarrow successes[i]+nImproved(\mathcal{X^{\prime}},\mathcal{X})$
  end for
  DDE $\leftarrow$ VAE.Decode()
  return $\mathcal{X}$, DDE
end function

function Isometric Mutation($\mathcal{X}$)
  $\mathbf{x}\leftarrow random\_selection(\mathcal{X})$
  return $\mathbf{x}+\sigma\mathcal{N}(0,\mathbf{I})$
end function

function Line Mutation($\mathcal{X}$)
  $\mathbf{x},\mathbf{y}\leftarrow random\_selection(\mathcal{X})$
  return $\mathbf{x}+\sigma_{1}\mathcal{N}(0,\mathbf{I})+\sigma_{2}(\mathbf{x}-\mathbf{y})\mathcal{N}(0,1)$
end function

function Reconstructive Crossover($\mathcal{X}$)
  $\mathbf{x}\leftarrow random\_selection(\mathcal{X})$
  $\mathbf{y}\leftarrow VAE.Decode(VAE.Encode(\mathbf{x}))$  $\triangleright$ VAE Reconstruction
  return $(\mathbf{x}+\mathbf{y})/2$
end function
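The operator-selection step of Algorithm 2 can be sketched as follows; here the sliding window of (Garivier and Moulines, 2011) is approximated by simply discarding outcomes older than a fixed window length, which is an assumed simplification:

```python
import numpy as np
from collections import deque

class SlidingWindowUCB1:
    """UCB1 (Eq. 4) computed over a sliding window of recent outcomes."""
    def __init__(self, n_actions, window=100):
        self.n_actions = n_actions
        self.history = deque(maxlen=window)  # recent (action, reward) pairs

    def select(self):
        t = max(len(self.history), 1)        # total recent plays
        scores = []
        for a in range(self.n_actions):
            rewards = [r for act, r in self.history if act == a]
            if not rewards:
                return a                     # try untested actions first
            q = np.mean(rewards)             # Q(a): mean recent reward
            scores.append(q + np.sqrt(2 * np.log(t) / len(rewards)))
        return int(np.argmax(scores))

    def update(self, action, reward):
        # reward: e.g., the number of children that earned a place in the archive
        self.history.append((action, reward))
```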
Figure 3. Archive Illumination: Archive illumination performance of MAP-Elites with different variation operators: standard isometric mutation (MAP-Elites), line mutation (ME-Line), reconstructive crossover (DDE-XOver) and DDE-Elites, which uses the UCB1 bandit algorithm to choose between the three at every generation. We measure fitness as the mean fitness of all solutions in the archive; coverage as the fraction of behavior space bins which contain solutions. Results over 20 replicates with lines indicating medians and quartile bounds shaded. The median of DDE-Elites, our approach, is additionally noted with black dots. All final results are significantly different ($p<0.01$ Mann-Whitney U) in fitness and coverage. Progress is shown in evaluations (0 to 1 million); a batch size of 100 evaluations per generation was used, so this scale corresponds to generations from 0 to 10,000.

## 4\. Experiments

#### Planar Arm Inverse Kinematics

We demonstrate the effectiveness of DDEs and DDE-Elites on the inverse kinematics (IK) problem of a 2D robot arm, a common QD benchmark problem (Cully and Demiris, 2018; Vassiliades and Mouret, 2018) (see Figure 5 for a visualization of this domain). Given target coordinates, a configuration of joint angles should be found to place the end effector at the target. To solve this task, a discretized behavior space is defined over the x,y plane and MAP-Elites finds a configuration of joint angles which places the end effector in each bin. The location of the end effector is derived for an arm with $n$ joints with angles $\mathbf{y}$ using the forward kinematics equation:

$\mathbf{b}(\mathbf{y})=\left[\begin{array}{c}l_{1}\cos(y_{1})+l_{2}\cos(y_{1}+y_{2})+\cdots+l_{n}\cos(y_{1}+\cdots+y_{n})\\ l_{1}\sin(y_{1})+l_{2}\sin(y_{1}+y_{2})+\cdots+l_{n}\sin(y_{1}+\cdots+y_{n})\end{array}\right]$

There are many solutions to this IK problem, but solutions with lower joint variance are preferred to allow for smoother transitions between configurations. We define fitness as the negative joint variance: $-\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\mu)^{2}$ where $\mu=\frac{1}{n}\sum_{i=1}^{n}y_{i}$. To summarize: the phenotype is the angle of each joint, the behavior is the x,y coordinates of the end effector, and the fitness the negative variance of the joint angles. The difficulty of the problem can be easily scaled up by increasing the number of joints in the arm: we solve this task with 20, 200, and 1000 joints.

When a DDE is used, 10 latent dimensions are used for the 20D arm, and 32 dimensions for the 200D and 1000D arms. The same archive structure is used for all domains: a unit circle is divided into 1950 bins, with each bin defined by the Voronoi cell (Vassiliades et al., 2017) with centers placed in a ring formation (see supplementary material for a visualization of this structure).
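The forward kinematics and fitness function above fit in a few lines; in this sketch the link lengths are scaled to sum to 1, an assumption consistent with the unit-circle behavior space:

```python
import numpy as np

def arm_fitness(y, lengths=None):
    """Planar-arm evaluation: returns (fitness, behavior).

    y: vector of joint angles. The behavior descriptor is the (x, y)
    end-effector position from the forward kinematics; the fitness is
    the negative variance of the joint angles.
    """
    y = np.asarray(y)
    if lengths is None:
        lengths = np.ones(len(y)) / len(y)  # assumed: links sum to 1 (unit reach)
    cum = np.cumsum(y)  # cumulative joint angles
    b = np.array([np.sum(lengths * np.cos(cum)),
                  np.sum(lengths * np.sin(cum))])
    return -np.var(y), b  # np.var(y) is exactly (1/n) * sum((y - mean)^2)
```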
Figure 4. Archive Recreation with Data-Driven Encoding: Performance of the MAP-Elites algorithm when run with a direct or data-driven encoding. When using the direct encoding, MAP-Elites was given one order of magnitude more evaluations (note logarithmic scale of evaluations). Fitness is measured as the mean fitness of all solutions in the archive, coverage as the fraction of behavior space bins which contain solutions. Results over 50 replicates with dotted lines indicating medians and quartile bounds shaded.

### 4.1. Archive Illumination

We first demonstrate the ability of DDE-Elites to scale up illumination to high-dimensional problems. The performance of DDE-Elites is compared to three algorithmic variants: the canonical MAP-Elites algorithm using isometric mutation (MAP-Elites); MAP-Elites using line, or directional, mutation (ME-Line); and MAP-Elites using the reconstructive crossover (DDE-XOver). Our proposed approach DDE-Elites uses all operators at a ratio determined by the UCB1 bandit algorithm. These treatments are summarized in Table 1.

| | Isometric Mutation | Line Mutation | Reconstructive Crossover |
| --- | --- | --- | --- |
| MAP-Elites | X | | |
| ME-Line | | X | |
| DDE-XOver | | | X |
| DDE-Elites | X | X | X |

Table 1. Algorithm variants. DDE-Elites is our approach.

These variants are compared based on the quality of the archive at each generation (Figure 3). Archives are judged based on two metrics: (1) coverage, the number of bins filled, and (2) performance, the mean fitness of solutions. (Sixty-four core machines were used to evaluate 100 individuals in parallel, requiring $\sim$0.2s, $\sim$0.8s, and $\sim$1.6s for the 20D, 200D, and 1000D arms respectively. In every case the VAE required $\sim$2.4s to train on a single CPU core.)

In the 20-dimensional case ME-Line quickly fills the map with high performing solutions. In only one hundred thousand evaluations ME-Line creates an archive unmatched by MAP-Elites even after one million evaluations. When only the reconstructive crossover operator is used, despite promising early progress, a chronic lack of exploration results in archives which are worse than the standard MAP-Elites. DDE-Elites, with access to all operators, explores as quickly as ME-Line and creates archives of similar quality.

When the dimensionality of the arm is scaled up to 200D, we see the convergence rate of ME-Line slow down considerably. While still reaching high levels of performance, it does so only after one million evaluations, ten times the evaluations required in the 20D case — suggesting that the effectiveness of ME-Line scales linearly with the dimensionality of the problem. In contrast DDE-Elites is barely affected by a ten-fold increase in parameters — exploration is only slightly slowed, and high-performing solutions are found from the very earliest iterations. The effects of scaling can be observed even more clearly in the 1000D case: ME-Line illuminates the archive only very slowly, while the performance of DDE-Elites is marked by the same burst of exploration and consistently high fitness solutions that characterized its performance in lower dimensions.

The line mutation operator is clearly able to leverage the similarities in high performing solutions across the archive — in every case performing far better than the isometric mutation operator. The mechanism for doing this, adjusting the range of parameter mutations, does not appear to scale well enough to handle very high dimensional problems. The reconstructive crossover operator is able to rapidly find high-performing solutions even in high-dimensional spaces, but is poor at exploring. Search with reconstructive crossover is confined to the distribution of genes that already exist in the archive; if used exclusively, that distribution of genes is limited to the initial population. By combining these operators — expanding the range of genes in the archive with mutation, and spreading high performing genes with reconstructive crossover — DDE-Elites is able to create high-performing archives even in high-dimensional problems.

Figure 5. Optimization with Direct and Data-Driven Encodings: CMA-ES is given a set budget to find a solution with a target behavior, and searches with either a direct encoding or a DDE. Left: Example solutions for target matching with the direct and data-driven encodings. End effectors in yellow, targets in red. Top: Optimization over time of median distance (dotted line) to the 18 targets over 50 replicates (quartiles shaded). Bottom: The final distance to the targets, and a characteristic of the solution. These characteristics were not optimized by CMA-ES, but optimized during the creation of the DDE, biasing the solutions produced.

### 4.2. Archive Recreation

DDE-Elites is as much a method of optimizing representations as solutions.
By learning a representation from the archive, we create an encoding that is biased towards high performance and has a range of expression matching the defined behavior space. In these experiments, our DDE encodes smooth joint configurations which place an arm’s end effector anywhere in its reach. To demonstrate that DDE-Elites does not merely guide search but learns a representation, we search the space again, using the found DDE in place of the direct encoding. We run the standard MAP-Elites algorithm, with isometric mutation only, using a learned DDE (the decoder network of the VAE found in the highest-coverage replicate of DDE-Elites) acting as our genome. In the 20D arm this DDE has 10 parameters; in the 200D and 1000D arms the DDE has 32 parameters. No previous solutions are maintained, only the trained DDE. For reference we compare to the MAP-Elites algorithm using the direct encoding. An order of magnitude fewer evaluations were budgeted when using the DDE.

In every case the DDE far outperforms the direct encoding, reaching the same levels of fitness and coverage with several orders of magnitude fewer evaluations (Figure 4). The DDE can express the same range of solutions as were found in the original archive, and finds them rapidly. Archives were recreated after only 10,000 evaluations — a rate of about 5 evaluations per bin (10,000 individuals / 1950 bins $\approx$ 5 evaluations per bin discovered). The found solutions are also high performing. Such improvement cannot be explained away by the decrease in dimensionality of the search. In both low and high dimensional cases the bias toward high performance is also apparent: the mean fitness curve is nearly flat at the optima, indicating that when new solutions are added to the map they are already near optimal. The contrast with the direct encoding is stark: with the direct encoding considerable effort is spent searching for good solutions; the DDE finds little else. DDE-Elites not only produces solutions, but learns a domain-specific representation.

### 4.3. Optimization with Learned Encodings

Beyond its place in the DDE-Elites optimization loop, the produced DDE is a powerful representation with high expressivity and built-in biases. Though created by MAP-Elites, the DDE is not tied to it. Once discovered, a DDE can be used as a representation for any black-box optimization algorithm. We illustrate this generality by again solving the arm inverse kinematics problem with the black-box optimizer CMA-ES (Hansen and Ostermeier, 2001). A set of target positions for the end effector is defined (Figure 5, left), and CMA-ES is used to find a joint configuration which reaches each target. In one case optimization is performed using the DDE; in the other the direct encoding is used.

When optimizing with the DDE, CMA-ES quickly finds solutions to the target-hitting problems with a precision never matched with the direct encoding (Figure 5, top). Moreover, a bias for how the problem is solved is built into the representation (Figure 5, bottom). As the DDE was trained only on solutions with low joint variance, this same property is found in the solutions produced by CMA-ES with the DDE — even without searching for them. With the DDE, CMA-ES not only finds solutions to the IK problem; the built-in priors of the DDE ensure we find the kind of solutions we want.
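As a sketch of this kind of reuse, the snippet below searches the DDE’s latent space with the `pycma` implementation of CMA-ES; the initial point, step size, and the `decoder` callable (the trained VAE decoder) are assumptions, and `arm_fitness` refers to the sketch from Section 4:

```python
import numpy as np
import torch
import cma  # pycma; any black-box optimizer could be swapped in

def reach_target(target, decoder, latent_dim=32):
    """Search the DDE's latent space for joint angles whose end effector
    reaches `target`, using CMA-ES in place of MAP-Elites."""
    def cost(z):
        with torch.no_grad():
            y = decoder(torch.as_tensor(z, dtype=torch.float32)).numpy()
        _, b = arm_fitness(y)                      # behavior = end-effector position
        return float(np.linalg.norm(b - target))   # distance to the target
    es = cma.CMAEvolutionStrategy(np.zeros(latent_dim), 0.5)
    while not es.stop():
        zs = es.ask()                              # sample candidate latent vectors
        es.tell(zs, [cost(z) for z in zs])         # update the search distribution
    return es.result.xbest                         # best latent vector found
```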
## 5. Discussion

Learning representations by combining quality diversity (here, MAP-Elites) and generative models (here, a VAE) opens promising research avenues for domains in which optimizations of the same cost function are launched continuously. This is, for example, the case of Model Predictive Control (Mayne et al., 2000), in which the sequence of actions for the next seconds is optimized at every time-step of the control loop, or the case of shape optimization in interactive design tools (Hoyer et al., 2019; Bendsøe and Sigmund, 1995), in which each modification by the user requires a novel optimization.

In preliminary experiments, we searched for an encoding to describe action sequences for a walking robot. The results show that using MAP-Elites to generate a diversity of sequences, then using a VAE to learn a representation, leads to an encoding that can accelerate future optimizations by several orders of magnitude. Nevertheless, using the representation during optimization, as described in this paper, did not accelerate the quality diversity optimization as much as in the high-dimensional arm used here. One hypothesis is that the regularities in action sequences are harder to recognize than in the arm experiments, especially at the beginning of the process. For instance, it might help to use an autoencoder that is specially designed for sequences (Vaswani et al., 2017; Co-Reyes et al., 2018). For other tasks, appropriate generative models could be explored, for example convolutional models for tasks with spatial correlations (Salimans et al., 2015). In addition, though the latent spaces created by VAEs are easier to navigate than those created by normal autoencoders, even better models offer opportunities for further improvement. Much work has been done to create VAEs which have even better-organized latent spaces (Higgins et al., 2017; Burgess et al., 2018; Chen et al., 2018; Kim and Mnih, 2018), ideally with each dimension responsible for a single phenotypic feature such as the lighting or color of an image.

A second research avenue is to improve the bandit algorithm used to balance between operators. In theory, it should ensure that adding new operators can only aid optimization, since useless or detrimental operators would rarely be selected. However, we observed that it is not always effective: in some cases, using only the line mutation outperformed DDE-Elites, whereas DDE-Elites could have reverted to using only line mutation with a perfect bandit. Our hypothesis is that this is a sign that counting “successes” — child solutions which discover new bins or improve on existing solutions — is not the perfect measure of utility for a QD algorithm. In the case of our experiments, it may be that reconstructive crossover consistently improves solutions, but only does so slightly. According to the “success” metric, a tiny improvement is worth the same as a large one. To best utilize the bandit, other methods of judging performance in QD algorithms should be explored.
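For reference, one plausible realization of such a bandit is sketched below: a sliding-window UCB1 over a fixed set of operator mixes, using the window length of 1000 from our hyperparameter table and the success criterion described above. This is an illustration; the exact bookkeeping in our implementation may differ.

```python
import numpy as np
from collections import deque

class SlidingWindowUCB:
    """UCB1 over a fixed set of operator mixes, with a sliding window so
    that the estimated utility of each mix can change over time."""
    def __init__(self, n_options, window=1000):
        self.n_options = n_options
        self.history = deque(maxlen=window)  # (option, success) pairs

    def select(self):
        counts = np.ones(self.n_options)   # one pseudo-trial avoids /0
        rewards = np.zeros(self.n_options)
        for option, success in self.history:
            counts[option] += 1
            rewards[option] += success
        t = len(self.history) + self.n_options
        ucb = rewards / counts + np.sqrt(2.0 * np.log(t) / counts)
        return int(np.argmax(ucb))

    def update(self, option, success):
        # success = 1.0 if the child filled a new bin or improved an
        # existing elite, else 0.0
        self.history.append((option, float(success)))
```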
Beyond performance advantages, for both the current and future optimizations, these “disentangled” representations offer even more interesting opportunities. Reducing the dimensionality of the search space into meaningful components would allow rapid model-based optimization of single solutions (Shahriari et al., 2015), or entire archives (Gaier et al., 2018). Engineers could interactively explore and understand such encodings, laying bare the underlying properties responsible for performance and variation — and so from encodings receive, rather than provide, insight and domain knowledge.

## Acknowledgements

This work received funding from the European Research Council (ERC) under the EU Horizon 2020 research and innovation programme (grant agreement number 637972, project “ResiBots”) and the German Federal Ministry of Education and Research (BMBF) under the Forschung an Fachhochschulen mit Unternehmen programme (grant agreement number 03FH012PX5, project “Aeromat”).

## Source Code

The source code used to produce the results in this paper is available at https://github.com/resibots/2020_gaier_gecco

## References

* Adams et al. (2000) Mark D Adams, Susan E Celniker, Robert A Holt, Cheryl A Evans, Jeannine D Gocayne, Peter G Amanatides, Steven E Scherer, Peter W Li, Roger A Hoskins, Richard F Galle, et al. 2000. The genome sequence of Drosophila melanogaster. Science.
* Altenberg (1994) Lee Altenberg. 1994. Evolving better representations through selective genome growth. In First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence. IEEE.
* Auer et al. (2002) Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. 2002. Finite-time analysis of the multiarmed bandit problem. Machine learning.
* Bendsøe and Sigmund (1995) Martin P Bendsøe and Ole Sigmund. 1995. Optimization of structural topology, shape, and material. Vol. 414. Springer.
* Bongard and Pfeifer (2003) Josh C Bongard and Rolf Pfeifer. 2003. Evolving complete agents using artificial ontogeny. In Morpho-functional Machines: The new species. Springer, 237–258.
* Bontrager et al. (2018a) Philip Bontrager, Wending Lin, Julian Togelius, and Sebastian Risi. 2018a. Deep interactive evolution. In International Conference on Computational Intelligence in Music, Sound, Art and Design. Springer.
* Bontrager et al. (2018b) Philip Bontrager, Aditi Roy, Julian Togelius, Nasir Memon, and Arun Ross. 2018b. Deepmasterprints: Generating masterprints for dictionary attacks via latent variable evolution. In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS). IEEE, 1–9.
* Burgess et al. (2018) Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. 2018. Understanding disentangling in beta-VAE. arXiv preprint arXiv:1804.03599.
* Chen et al. (2018) Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. 2018. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems.
* Co-Reyes et al. (2018) John D Co-Reyes, YuXuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, and Sergey Levine. 2018. Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings. In Proceedings of the International Conference on Machine Learning (ICML).
* Cully et al. (2015) Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret. 2015. Robots that can adapt like animals. Nature.
* Cully and Demiris (2018) Antoine Cully and Yiannis Demiris. 2018. Quality and diversity optimization: A unifying modular framework. IEEE Trans. on Evolutionary Computation.
* DaCosta et al. (2008) Luis DaCosta, Alvaro Fialho, Marc Schoenauer, and Michèle Sebag. 2008. Adaptive operator selection with dynamic multi-armed bandits. In 10th annual conference on Genetic and evolutionary computation.
* De Jong (2007) Kenneth De Jong. 2007. Parameter setting in EAs: a 30 year perspective. In Parameter setting in evolutionary algorithms. Springer.
* Doncieux and Meyer (2004) Stephane Doncieux and Jean-Arcady Meyer. 2004. Evolving modular neural networks to solve challenging control problems. In Fourth International ICSC Symposium on engineering of intelligent systems (EIS 2004). ICSC Academic Press Canada.
* Durr et al. (2010) Peter Durr, Dario Floreano, and Claudio Mattiussi. 2010. Genetic representation and evolvability of modular neural controllers. IEEE Computational Intelligence Magazine.
* Elsken et al. (2019) Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. 2019. Neural Architecture Search: A Survey. Journal of Machine Learning Research.
* Fontaine et al. (2019) Matthew C Fontaine, Scott Lee, LB Soros, Fernando De Mesentier Silva, Julian Togelius, and Amy K Hoover. 2019. Mapping hearthstone deck spaces through MAP-elites with sliding boundaries. In Proceedings of The Genetic and Evolutionary Computation Conference.
* Gaier and Asteroth (2014) Adam Gaier and Alexander Asteroth. 2014. Evolution of optimal control for energy-efficient transport. In IEEE Intelligent Vehicles Symposium Proceedings.
* Gaier et al. (2017) Adam Gaier, Alexander Asteroth, and Jean-Baptiste Mouret. 2017. Aerodynamic design exploration through surrogate-assisted illumination. In 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference.
* Gaier et al. (2018) Adam Gaier, Alexander Asteroth, and Jean-Baptiste Mouret. 2018. Data-efficient design exploration through surrogate-assisted illumination. Evolutionary computation.
* Gaier and Ha (2019) Adam Gaier and David Ha. 2019. Weight agnostic neural networks. In Advances in Neural Information Processing Systems.
* Garivier and Moulines (2011) Aurélien Garivier and Eric Moulines. 2011. On upper-confidence bound policies for switching bandit problems. In International Conference on Algorithmic Learning Theory. Springer.
* Goldberg et al. (1989) David E Goldberg, Bradley Korb, Kalyanmoy Deb, et al. 1989. Messy genetic algorithms: Motivation, analysis, and first results. Complex systems.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems.
* Hansen and Ostermeier (2001) Nikolaus Hansen and Andreas Ostermeier. 2001. Completely derandomized self-adaptation in evolution strategies. Evolutionary computation.
* Higgins et al. (2017) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. International Conference on Machine Learning.
* Hinton and Salakhutdinov (2006) Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science.
* Hoyer et al. (2019) Stephan Hoyer, Jascha Sohl-Dickstein, and Sam Greydanus. 2019. Neural reparameterization improves structural optimization. arXiv preprint arXiv:1909.04240.
* Kim and Mnih (2018) Hyunjik Kim and Andriy Mnih. 2018. Disentangling by Factorising. In International Conference on Machine Learning.
* Kingma and Welling (2014) Diederik P. Kingma and Max Welling. 2014. Auto-Encoding Variational Bayes. In International Conference on Learning Representation (ICLR), Yoshua Bengio and Yann LeCun (Eds.).
* Koza ([n. d.]) John R Koza. [n. d.]. Genetic programming: A paradigm for genetically breeding populations of computer programs to solve problems. Stanford University, Department of Computer Science, Stanford, CA.
* Kullback and Leibler (1951) Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics.
* Mayne et al. (2000) David Q Mayne, James B Rawlings, Christopher V Rao, and Pierre OM Scokaert. 2000. Constrained model predictive control: Stability and optimality. Automatica.
* Miikkulainen et al. (2019) Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, et al. 2019. Evolving deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain Computing. Elsevier.
* Moreno et al. (2018) Matthew Andres Moreno, Wolfgang Banzhaf, and Charles Ofria. 2018. Learning an evolvable genotype-phenotype mapping. In Genetic and Evolutionary Computation Conference. ACM.
* Mouret and Clune (2015) Jean-Baptiste Mouret and Jeff Clune. 2015. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909.
* Mouret and Doncieux (2008) Jean-Baptiste Mouret and Stéphane Doncieux. 2008. MENNAG: a modular, regular and hierarchical encoding for neural-networks based on attribute grammars. Evolutionary Intelligence.
* Olhofer et al. (2001) Markus Olhofer, Yaochu Jin, and Bernhard Sendhoff. 2001. Adaptive encoding for aerodynamic shape optimization using evolution strategies. In 2001 Congress on Evolutionary Computation. IEEE.
* Pugh et al. (2016) Justin K Pugh, Lisa B Soros, and Kenneth O Stanley. 2016. Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI.
* Rothlauf (2006) Franz Rothlauf. 2006. Representations for genetic and evolutionary algorithms. In Representations for Genetic and Evolutionary Algorithms. Springer.
* Salimans et al. (2015) Tim Salimans, Diederik Kingma, and Max Welling. 2015. Markov chain monte carlo and variational inference: Bridging the gap. In International Conference on Machine Learning. 1218–1226.
* Scott and Bassett (2015) Eric O Scott and Jeffrey K Bassett. 2015. Learning genetic representations for classes of real-valued optimization problems. In Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation. ACM.
* Scott and De Jong (2018) Eric O Scott and Kenneth A De Jong. 2018. Toward learning neural network encodings for continuous optimization problems. In Genetic and Evolutionary Computation Conference Companion. ACM.
* Shahriari et al. (2015) Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. 2015. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE (2015).
* Simões et al. (2014) Luís F Simões, Dario Izzo, Evert Haasdijk, and Agoston Endre Eiben. 2014. Self-adaptive genotype-phenotype maps: neural networks as a meta-representation. In International Conference on Parallel Problem Solving from Nature. Springer.
* Stanley (2007) Kenneth O Stanley. 2007. Compositional pattern producing networks: A novel abstraction of development. Genetic programming and evolvable machines.
* Stanley et al. (2009) Kenneth O Stanley, David B D’Ambrosio, and Jason Gauci. 2009. A hypercube-based encoding for evolving large-scale neural networks. Artificial life.
* Stanley and Miikkulainen (2002) Kenneth O Stanley and R Miikkulainen. 2002. Evolving neural networks through augmenting topologies. Evolutionary computation.
* Vassiliades et al. (2017) Vassilis Vassiliades, Konstantinos Chatzilygeroudis, and Jean-Baptiste Mouret. 2017. Using centroidal voronoi tessellations to scale up the multidimensional archive of phenotypic elites algorithm. IEEE Transactions on Evolutionary Computation 22, 4, 623–630.
* Vassiliades and Mouret (2018) Vassilis Vassiliades and Jean-Baptiste Mouret. 2018. Discovering the elite hypervolume by leveraging interspecies correlation. In Genetic and Evolutionary Computation Conference.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998–6008.
* Volz et al. (2018) Vanessa Volz, Jacob Schrum, Jialin Liu, Simon M Lucas, Adam Smith, and Sebastian Risi. 2018. Evolving mario levels in the latent space of a deep convolutional generative adversarial network. In Genetic and Evolutionary Computation Conference. ACM.
* Wold et al. (1987) Svante Wold, Kim Esbensen, and Paul Geladi. 1987. Principal component analysis. Chemometrics and intelligent laboratory systems.

## Supplemental Material

### A. Example Maps

Table 2. Example Maps. Final archives colored by fitness value for each cell for each domain (Arm20, Arm200, Arm1000; archive images omitted). In the 20D Arm both MAP-Elites and DDE-Elites converge on similar optimal solutions. In the 200D and 1000D Arm MAP-Elites is unable to reach the levels of performance of DDE-Elites in any region.

### B. Hyperparameters of DDE Experiments

Hyperparameter | Value
---|---
Isometric Mutation Strength | 0.003
Line Mutation Strength | 0.1
Batch Size | 100
Bandit Options | [0.00:0.00:1.00], [0.25:0.00:0.75], [0.50:0.00:0.50], [0.75:0.00:0.25], [1.00:0.00:0.00], [0.00:0.25:0.75], [0.00:0.50:0.50], [0.00:0.75:0.25], [0.00:1.00:0.00]
Bandit Window Length | 1000
Generations per VAE Training | 1
Epochs per VAE Training | 5
Mutation Strength when Searching DDE | 0.15
Latent Vector Length [Arm20] | 10
Latent Vector Length [Arm200] | 32
Latent Vector Length [Arm1000] | 32
Vuong M. Ngo (E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>), Nhien-An Le-Khac (E-mail: <EMAIL_ADDRESS>) and M-Tahar Kechadi (E-mail: <EMAIL_ADDRESS>)

Ho Chi Minh City Open University, HCMC, Vietnam

University College Dublin, Belfield, Dublin 4, Ireland

# Data Warehouse and Decision Support on Integrated Crop Big Data

###### Abstract

In recent years, precision agriculture has become very popular. The introduction of modern information and communication technologies for collecting and processing agricultural data revolutionises agricultural practices. This started a while ago (early 20th century) and is driven by the low cost of collecting data about everything: from information on fields, such as seed, soil, fertiliser and pests, to weather data and drone and satellite images. In particular, agricultural data mining today is considered a Big Data application in terms of volume, variety, velocity and veracity. Hence it leads to challenges in processing vast amounts of complex and diverse information to extract useful knowledge for the farmer, agronomist, and other businesses. It is a key foundation for establishing a crop intelligence platform, which will enable efficient resource management and high-quality agronomy decision making and recommendations. In this paper, we designed and implemented a continental-level agricultural data warehouse (ADW). ADW is characterised by its (1) flexible schema; (2) data integration from real agricultural multi-datasets; (3) data science and business intelligence support; (4) high performance; (5) high storage; (6) security; (7) governance and monitoring; (8) consistency, availability and partition tolerance; (9) cloud compatibility. We also evaluate the performance of ADW and present some complex queries to extract and return necessary knowledge about crop management.

Data warehouse, decision support, crop Big Data, smart agriculture.

Reference to this paper should be made as follows: Ngo, V.M., Le-Khac, N.A. and Kechadi, M.T. (2020) ‘Data Warehouse and Decision Support on Integrated Crop Big Data’, Int. J. Business Process Integration and Management, Vol. 10, No. 1, pp. 17–28.

Vuong M. Ngo received the B.E., M.E. and PhD degrees in computer science at HCMC University of Technology in 2004, 2007 and 2013, respectively. He is currently a Senior Researcher at UCD and HCMC Open University. His research interests include information retrieval, sentiment analysis, data mining, graph matching and data warehousing.

Nhien-An Le-Khac is currently a Lecturer at the School of Computer Science, UCD, and a Programme Director of the MSc programme in forensic computing and cybercrime investigation. He obtained his PhD in computer science in 2006 at the Institut National Polytechnique Grenoble, France. His research interest spans the areas of cybersecurity and digital forensics, data mining/distributed data mining for security, and grid and high performance computing.

M-Tahar Kechadi was awarded PhD and Master degrees in computer science from University of Lille 1, France. He joined the UCD School of Computer Science in 1999. He is currently Professor of Computer Science at UCD. His research interests span the areas of data mining, data analytics, distributed data mining, heterogeneous distributed systems, grid and cloud computing, cybersecurity, and digital forensics. He is a Principal Investigator at the Insight Centre for Data Analytics and the CONSUS project.
He is a member of IEEE and ACM.

## 1 Introduction

Annual world cereal production was $2,608$ million tons in $2017$ and $2,595$ million tons in $2018$ (USDA report, 2018; FAO-CSDB report, 2018). However, around $124$ million people in $51$ countries also faced food crises and food insecurity (FAO-FSIN report, 2018). According to the United Nations (UN document, 2017), cereal production needs to increase by $60\%$ to meet the needs of $9.8$ billion people by $2050$. To satisfy the huge increase in demand for food, crop yields must be significantly increased using modern farming approaches, such as smart farming, also called precision agriculture. As highlighted in the European Commission report (EC report, 2016), precision agriculture is vitally important for the future and can make a significant contribution to food security and safety.

The current mission of precision agriculture is to use decision-support systems (DSS) based on Big Data approaches to provide precise information for better control of waste and farming efficiency, in areas such as soil nutrients (Rogovska and et al., 2019), early warning (Rembold and et al., 2019), forecasting (Bendre and et al., 2015), irrigation systems (Huang and et al., 2013), evapotranspiration prediction (Paredes and et al., 2014), soil, herbicide and insecticide optimisation (Ngo and Kechadi, 2020), awareness (Lokers and et al., 2016), supply chains (Protopop and Shanoyan, 2016) and financial services (Ruan and et al., 2019). Normally, DSSs implement a knowledge discovery process, also called a data mining process, which consists of data collection and data modelling, data warehousing, data analysis (using machine learning or statistical techniques), and knowledge deployment (Dicks and et al., 2014). Hence, designing and implementing an efficient agricultural data warehouse (ADW) is one of the key steps of this process, as it defines a uniform data representation through its schema model and stores the derived datasets so that they can be analysed to extract useful knowledge. However, this step has not been given much attention so far. Therefore, there are very few reports in the literature that focus on the design of efficient ADWs with a view to enabling agricultural Big Data analytics and mining.

The design of large-scale ADWs is very challenging, because agricultural data is spatial, temporal, complex, heterogeneous, non-standardised, high dimensional, collected from multiple sources, and very large. In particular, it has all the features of Big Data: volume, variety, velocity and veracity. Moreover, a precision agriculture system can be used by different kinds of users at the same time, for instance by farmers, policymakers, agronomists, and so on. Every type of user needs to analyse different information sets, thus requiring specific analytics. Unlike in other domains (health-care, finance, etc.), the data and its warehousing in precision agriculture are unique. This is because there are very complex relationships between the agricultural data dimensions. The data sources are very diversified and of varying levels of quality. Precision agriculture (PA) warehousing supports many decision-making processes, each needing different levels of data access and different kinds of analysis. Finally, there are many stakeholders involved in data ownership and exploitation. So, the data carries a significant number of uncertainties.
For example, the quality of data collected by farmers depends directly on their knowledge, routines, frequency of information recording, support tools, etc. All these issues make PA data unique when it comes to its storage, access, and analysis. These issues may exist in other domains, but not at the same scale as in agricultural practice.

In this research, we first analyse real-world agricultural Big Data to build an effective constellation schema. From this schema, some simple questions can be answered directly from the modelled data. These questions include: (1) For a given field, what kinds of crops are suitable to grow? (2) Which companies purchased a specific crop at the highest price in the past season? (3) List the history of soil texture and applied fertilisers for a given field; (4) List the costs of production for wheat and barley in the last 5 years; and so on. Secondly, the proposed ADW possesses the main features and characteristics of a Big Data Warehouse (BDW). These are (1) high storage capacity, high performance and cloud computing compatibility; (2) flexible schema and integrated storage structure; (3) data ingestion, monitoring, and security to deal with the data veracity. Besides, an experimental evaluation is conducted to study the performance of ADW storage.

The rest of this paper is organised as follows: in the next section, we review the related work on decision support systems and data warehouses in agriculture. In Sections 3, 4 and 5, we present the Big Data aspects of PA, our ADW architecture and its modules. In Sections 6, 7, 8 and 9, the quality criteria, implementation, performance analysis and decision-making applications of the proposed ADW are presented, respectively. Section 10 gives some concluding remarks and future research directions. Finally, a concrete example of the ADW and its operational average runtimes is given in the appendix.

## 2 Related Work

In precision agriculture, DSSs are designed to support different stakeholders, such as farmers, advisers and policymakers, to optimise resources, farm management and business practices (Gutierreza and et al., 2019). For instance, DSSs were built to 1) manage microbial pollution risks in dairy farming (Oliver and et al., 2017); 2) analyse nitrogen fertilisation from satellite images (Lundstrom and Lindblom, 2018); 3) control pests and diseases under uncertain climate conditions (Devitt and et al., 2017); 4) manage drip irrigation and its schedule (Friedman and et al., 2016); and 5) predict and adapt to climate risks (Han and et al., 2017). However, the datasets used in these studies are small. Besides, they focused on using visualisation techniques to help end-users understand and interpret their data.

Recently, many papers have been published on how to exploit intelligent algorithms on sensor data to improve agricultural economics (Pantazi, 2016; Park and et al., 2016; Hafezalkotob and et al., 2018; Udiasa and et al., 2018; Rupnik and et al., 2019). In Pantazi (2016), the authors predicted crop yield by using self-organising maps, namely supervised Kohonen networks, counter-propagation artificial networks and XY-fusion. In Park and et al. (2016), the authors predicted drought conditions by using three rule-based machine learning methods, namely random forest, boosted regression trees, and Cubist. To select the best olive harvesting machine, the authors in Hafezalkotob and et al. (2018)
applied target-based techniques to the main criteria, which are cost, vibration, efficiency, suitability, damage, automation, work capacity, ergonomics, and safety. To provide optimal management of nutrients and water, the authors of Udiasa and et al. (2018) exploited a multi-objective genetic algorithm to implement the E-Water system, which enhanced food crop production at the river basin level. Finally, in Rupnik and et al. (2019) the authors predicted pest population dynamics by using time series clustering and structural change detection to detect groups of different pest species. However, the proposed solutions are not scalable enough to handle agricultural Big Data; they present weaknesses in one or more of the following aspects: data integration, data schema, storage capacity, security and performance.

From a Big Data point of view, the papers Kamilaris and et al. (2018) and Schnase and et al. (2017) proposed “smart agricultural frameworks”. In Kamilaris and et al. (2018), the authors used Hive to store and analyse sensor data about land, water and biodiversity, which can help increase food production with less environmental impact. In Schnase and et al. (2017), the authors moved toward a notion of climate analytics-as-a-service by building a high-performance analytics and scalable data management platform, which is based on modern cloud infrastructures, such as Amazon Web Services, Hadoop, and Cloudera. However, these two papers did not discuss how to build and implement a DW for precision agriculture.

The proposed approach, inspired by Schulze and et al. (2007), Schuetz and et al. (2018), Nilakanta and et al. (2008) and Ngo and et al. (2018), introduces ways of building an agricultural data warehouse (ADW). In Schulze and et al. (2007), the authors extended the entity-relationship concept to model operational and analytical data, called the multi-dimensional entity-relationship model. They also introduced new representation elements and showed how the model can be extended to an analytical schema. In Schuetz and et al. (2018), a relational database and an RDF triple store were proposed to model the overall datasets. The data is loaded into the DW in RDF format and cached in the RDF triple store before being transformed into relational format. The actual data used for analysis is contained in the relational database. However, as the schemas used in Schulze and et al. (2007) and Schuetz and et al. (2018) are based on entity-relationship models, they cannot deliver the high performance that is the key feature of a data warehouse. In Nilakanta and et al. (2008), a star schema model was used. All data marts created by the star schemas are connected via some common dimension tables. However, a star schema is not sufficient to represent complex agricultural information, and it is difficult to create new data marts for data analytics. The number of dimensions of the DW proposed in Nilakanta and et al. (2008) is very small: only three dimensions (Species, Location, and Time). Moreover, that DW concerns livestock farming. Overcoming the disadvantages of the star schema, the authors of Ngo and et al. (2018) and Ngo and Kechadi (2020) proposed a constellation schema for an agricultural DW architecture in order to satisfy the quality criteria. However, they did not describe how to design and implement their DW.

## 3 Crop Big Data

### 3.1 Crop Datasets

The datasets were primarily obtained from an agronomy company, which extracted them from its operational data storage systems, research results, and field trials.
In particular, we were given real-world agricultural datasets on iFarms, Business-to-Business (B2B) sites, technology centres and demonstration farms. These datasets were collected from several European countries and are presented in Figures 1 and 2 (Origin report, 2018). They describe more than $112$ distribution points, $73$ demonstration farms, $32$ formulation and processing facilities, $12.7$ million hectares of direct farm customer footprint and $60,000$ trial units.

Figure 1: Data from UK and Ireland.

Figure 2: Data in Continental Europe.

There is a total of 29 datasets. On average, each dataset contains $18$ tables and is about $1.4$ GB in size. Each dataset focuses on a few factors that impact the crop. For instance, the weather dataset includes information on the location of weather stations, temperature, rainfall and wind speed over time. Meanwhile, soil component information for farm sites, such as minerals, organic matter, air, water and micro-organisms, is stored in the soil dataset. The fertiliser dataset contains information about field area and geographic position, crop name, crop yield, season, fertiliser name and quantity.

### 3.2 Big Data Challenges

Raw and semi-processed agricultural datasets are usually collected through various sources: Internet of Things (IoT) devices, sensors, satellites, weather stations, robots, farm equipment, farmers and agronomists, etc. Besides, agricultural datasets are very large, complex, unstructured, heterogeneous, non-standardised, and inconsistent. Hence, they have all the features of Big Data:

1. Volume: The amount of agricultural data is increasing rapidly and is intensively produced by endogenous and exogenous sources. The endogenous data is collected from operational systems, experimental results, sensors, weather stations, satellites, and farming equipment. The systems and devices in the agricultural ecosystem can be connected through IoT. The exogenous data concerns external sources, such as government agencies, retail agronomists, and seed companies. They can help with information about local pest and disease outbreak tracking, crop monitoring, food security, products, prices, and knowledge.
2. Variety: Agricultural data comes in many different forms and formats: structured and unstructured data, video, imagery, charts, metrics, geo-spatial data, multimedia, models, equations, text, etc.
3. Velocity: The collected data grows at a very high rate, as sensing and mobile devices are becoming more efficient and cheaper. The datasets must be cleaned, aggregated and harmonised in real time.
4. Veracity: Agronomic data tends to be uncertain, inconsistent, ambiguous and error-prone, because it is gathered from heterogeneous sources, sensors and manual processes.

### 3.3 ADW Schema

Figure 3: A part of ADW schema for Precision Agriculture

A DW uses a schema to logically describe the entire dataset. A schema is a collection of objects, including tables, views, indexes, and synonyms, which consist of fact and dimension tables (Oracle document, 2017). The DW schema can be designed based on the model of the source data and the user requirements. There are three kinds of models, namely star, snowflake and fact constellation. Given its various uses, the ADW schema needs to have more than one fact table and should be flexible. So, the constellation schema, also known as the galaxy schema, should be used to design the ADW schema.
Figure 4: Field and Crop dimension tables

Figure 5: Soil and Pest dimension tables

We developed a constellation schema for ADW; it is partially described in Figure 3. It includes a few fact tables and many dimension tables. The FieldFact fact table contains data about agricultural operations on fields. The Order and Sale fact tables contain data about farmers' trading operations. The key dimension tables are connected to their fact tables, and some dimension tables, such as Crop and Farmer, are connected to more than one fact table. Besides, the CropState, Inspection, Site, and WeatherReading dimension tables are not connected to any fact table: the CropState and Inspection tables support the Crop table, while the Site and WeatherReading tables support the Field and WeatherStation tables. The FieldFact fact table stores the most important facts about the field: yield, water volume, fertiliser quantity, nutrient quantity, spray quantity and pest number. In the Order and Sale tables, the important facts needed by farm management are quantity and price.

Table 1: Descriptions of other dimension tables

No. | Dim. table | Particular attributes
---|---|---
1 | Business | BusinessID, Name, Address, Phone, Mobile, Email
2 | CropState | CropStateID, CropID, StageScale, Height, MajorStage, MinStage, MaxStage, Diameter, MinHeight, MaxHeight, CropCoveragePercent
3 | Farmer | FarmerID, Name, Address, Phone, Mobile, Email
4 | Fertiliser | FertiliserID, Name, Unit, Status, Description, GroupName
5 | Inspection | InspectionID, CropID, Description, ProblemType, Severity, ProblemNotes, AreaValue, AreaUnit, Order, Date, Notes, GrowthStage
6 | Nutrient | NutrientID, NutrientName, Date, Quantity
7 | OperationTime | OperationTimeID, StartDate, EndDate, Season
8 | Plan | PlanID, PName, RegisNo, ProductName, ProductRate, Date, WaterVolume
9 | Product | ProductID, ProductName, GroupName
10 | Site | SiteID, FarmerID, SiteName, Reference, Country, Address, GPS, CreatedBy
11 | Spray | SprayID, SprayProductName, ProductRate, Area, Date, WaterVol, ConfDuration, ConfWindSPeed, ConfDirection, ConfHumidity, ConfTemp, ActivityType
12 | Supplier | SupplierID, Name, ContactName, Address, Phone, Mobile, Email
13 | Task | TaskID, Desc, Status, TaskDate, TaskInterval, CompDate, AppCode
14 | TransTime | TransTimeID, OrderDate, DeliverDate, ReceivedDate, Season
15 | Treatment | TreatmentID, TreatmentName, FormType, LotCode, Rate, ApplCode, LevlNo, Type, Description, ApplDesc, TreatmentComment
16 | WeatherReading | WeatherReadingID, WeatherStationID, ReadingDate, ReadingTime, AirTemperature, Rainfall, SPLite, RelativeHumidity, WindSpeed, WindDirection, SoilTemperature, LeafWetness
17 | WeatherStation | WeatherStationID, StationName, Latitude, Longitude, Region

The dimension tables contain details on each instance of an object involved in a crop yield or farm management. Figure 4 describes the attributes of the Field and Crop dimension tables. The Field table contains the name, area, co-ordinates (the longitude and latitude of the centre point of the field), geometry (a collection of points describing the shape of the field), and the site the field belongs to. The Crop table contains the name, estimated yield of the crop (estYield), BBCH Growth Stage Index (BbchScale), harvest equipment and its weight; these provide useful information for crop harvesting. Figure 5 describes the attributes of the Soil and Pest dimension tables.
The Soil table contains the PH value (a measure of acidity and alkalinity), minerals (nitrogen, phosphorus, potassium, magnesium and calcium), texture (texture label and the percentages of silt, clay and sand), cation exchange capacity (CEC) and organic matter. Besides, information about recommended nutrients and testing dates is also included in this table. The Pest table contains the name, type, density, coverage and detection dates of pests. For the remaining dimension tables, their main attributes are described in Table 1.
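To make the schema concrete, the sketch below declares an abridged version of the FieldFact fact table in HiveQL through the PyHive client. Only the column names come from the schema above (including the spellings SoildID and WaterVolumn used by the queries in Section 9); the column types, ORC storage format and connection endpoint are illustrative assumptions, not the production definitions.

```python
from pyhive import hive  # HiveServer2 client; endpoint below is an assumption

# Abridged HiveQL declaration of the FieldFact fact table: foreign keys to
# the dimension tables plus the numeric facts listed above.
FIELDFACT_DDL = """
CREATE TABLE IF NOT EXISTS FieldFact (
  FieldID BIGINT, CropID BIGINT, SoildID BIGINT, PestID BIGINT,
  FertiliserID BIGINT, NutrientID BIGINT, SprayID BIGINT,
  TreatmentID BIGINT, OperationTimeID BIGINT,
  Yield DOUBLE, WaterVolumn DOUBLE, FertiliserQuantity DOUBLE,
  NutrientQuantity DOUBLE, SprayQuantity DOUBLE, PestNumber INT
)
STORED AS ORC
"""

cursor = hive.connect(host="localhost", port=10000).cursor()
cursor.execute(FIELDFACT_DDL)
```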
The ETL implementation is complex, and consuming significant amount of time and resources. Most DW projects usually use existing ETL tools, which are classified into two groups. The first is a commercial and well-known group and includes tools such as Oracle Data Integrator, SAP Data Integrator and IBM InfoSphere DataStage. The second group is famous for it open source tools, such as Talend, Pentaho and Apatar. OLAP is a category of software technology that provides the insight and understanding of data in multiple dimensions through fast, consistent, interactive access, management and analysis of the data. By using roll-up (consolidation), drill-down, slice-dice and pivot (rotation) operations, OLAP performs multidimensional analysis in a wide variety of possible views of information that provides complex calculations, trend analysis and sophisticated data modelling quickly. The OLAP systems are divided into three categories: 1) Relational OLAP (ROLAP), which uses relational or extended- relational database management system to store and manage the data warehouse; 2) Multidimensional OLAP (MOLAP), which uses array-based multidimensional storage engines for multidimensional views of data, rather than in a relational database. It often requires pre-processing to create data cubes. 3) Hybrid OLAP (HOLAP), which is a combination of both ROLAP and MOLAP. It uses both relational and multidimensional techniques to inherit the higher scalability of ROLAP and the faster computation of MOLAP. In the context of agricultural Big Data, HOLAP is more suitable than both ROLAP and MOLAP because: 1) ROLAP has quite slow performance and does not meet all the users’ needs, especially when performing complex calculations; 2) MOLAP is not capable of handling detailed data and requires all calculations to be performed during the data cube construction; 3) HOLAP inherits advantages of both ROLAP and MOLAP, which allow the user to store large data volumes of detailed information and perform complex calculations within reasonable response time. ## 6 Quality Criteria The accuracy of data mining and analysis techniques depends on the quality of the DW. As mentioned in Adelman and Moss (2000) and Kimball and Ross (2013), to build an efficient ADW, the quality of the DW should meet the following important criteria: 1. 1. Making information easily accessible. 2. 2. Presenting consistent information. 3. 3. Integrating data correctly and completely. 4. 4. Adapting to change. 5. 5. Presenting and providing right information at the right time. 6. 6. Being a secure bastion that protects the information assets. 7. 7. Serving as the authoritative and trustworthy foundation for improved decision making. The analytics tools need to provide right information at the right time. 8. 8. Achieving benefits, both tangible and intangible. 9. 9. Being accepted by DW users. The above criteria must be formulated in a form of measurements. For example, with the 8th criterion, it needs to determine quality indicators about benefits, such as improved fertiliser management, cost containment, risk reduction, better or faster decision, and efficient information transaction. In the last criterion, a user satisfaction survey should be used to find out how a given DW satisfies its user’s expectations. ## 7 ADW Implementation Currently, there are many popular large-scale database types that can implement DWs. 
Redshift (Amazon document, 2018), Mesa (Gupta and et al., 2016), Cassandra (Hewitt and Carpenter, 2016; Neeraj, 2015), MongoDB (Chodorow, 2013; Hows and et al., 2015) and Hive (Du, 2018; Lam and et al., 2016). In Ngo and et al. (2019), the authors analysed the most popular no-sql databases, which fulfil most of the aforementioned criteria. The advantages, disadvantages, as well as similarities and differences between Cassandra, MongoDB and Hive were investigated carefully in the context of ADW. It was reported that Hive is a better choice as it can be paired with MongoDB to implement the proposed ADW for the following reasons: 1. 1. Hive is based on Hadoop which is the most powerful cloud computing platform for Big Data. Besides, HQL is similar to SQL which is popular for the majority of users. Hive supports well high storage capacity, business intelligent and data science more than MongoDB or Cassandra. These Hive features are useful to implement ADW. 2. 2. Hive does not have real-time performance so it needs to be combined with MongoDB or Cassandra to improve its performance. 3. 3. MongoDB is more suitable than Cassandra to complement Hive because: 1) MongoDB supports joint operation, full text search, ad-hoc query and second index which are helpful to interact with the users. Cassandra does not support these features; 2) MongoDB has the same master – slave structure with Hive that is easy to combine. While the structure of Cassandra is peer - to - peer; 3) Hive and MongoDB are more reliable and consistent. So the combination of both Hive and MongoDB adheres to the CAP theorem. Figure 7: Agricultural Data Warehouse Implementation The ADW implementation is illustrated in Figure 7 which contains three modules, namely Integrated Information, Products and Raw Data. The Integrated Information module includes two components; MongoDB and Hive. MongoDB receives real-time data; as user data, logs, sensor data or queries from Products module, such as web application, web portal or mobile app. Besides, some results which need to be obtained in real-time will be transferred from the MongoDB to Products. Hive stores the online data and sends the processed data to MongoDB. Some kinds of queries having complex calculations will be sent directly to Hive. In the Raw Data module, almost data in Operational Databases or External Data components, is loaded into Cassandra. It means that we use Cassandra to represent raw data storage. Hence, with the diverse formats of raw data; image, video, natural language and sql data, Cassandra is better to store them than SQL databases. In the idle times of the system, the updated raw data in Cassandra will be imported into Hive through the ELT tool. This improves the performance of ETL and helps us deploy ADW on cloud or distributed systems. ## 8 Performance Analysis The performance analysis was conducted using MySQL 5.7.22, JDK 1.8.0_171, Hadoop 2.6.5 and Hive 2.3.3 which run on Bash, on Ubuntu 16.04.2, and on Windows 10. All experiments were run on a desktop with an Intel Core i7 CPU (2.40 GHz) and 16 GB memory. We only evaluate the performance of reading operation as ADW is used for reporting and data analysis. The database of ADW is duplicated into MySQL to compare performance. By combining popular HQL/SQL commands, namely Where, Group by, Having, Left (right) Join, Union and Order by, we created 10 groups for testing. Every group has 5 queries and uses one, two or more commands (see Table 2). 
Moreover, every query uses operators; And, Or, $\geq$, Like, Max, Sum and Count, to express complex queries. Table 2: Command combinations of queries Group | Commands ---|--- $G_{1}$ | Where $G_{2}$ | Where, Group by $G_{3}$ | Where, Left (right) Join $G_{4}$ | Where, Union $G_{5}$ | Where, Order by $G_{6}$ | Where, Left (right) Join, Order by $G_{7}$ | Where, Group by, Having $G_{8}$ | Where, Group by, Having, Order by $G_{9}$ | Where, Group by, Having, Left (right) Join, | Order by $G_{10}$ | Where, Group by, Having, Union, Order by $0$$10$$20$$30$$40$$50$$0$$10$$20$$30$1Queries ($q_{i}$)Different times ($Times_{q_{i}}$)Group 1Group 2Group 3Group 4Group 5Group 6Group 7Group 8Group 9Group 10 Figure 8: Different times between MySQL and ADW in runtime of every Query All queries were executed three times and we took the average value of the their execution timess. The difference in runtime between MySQL and ADW for a query $q_{i}$ is calculated as $Times_{q_{i}}=RT^{mysql}_{q_{i}}/RT^{ADW}_{q_{i}}$. Where, $RT^{mysql}_{q_{i}}$ and $RT^{ADW}_{q_{i}}$ are average runtimes of query $q_{i}$ on MySQL and ADW, respectively. Moreover, with each group $G_{i}$, the difference in runtime between MySQL and ADW is $Times_{G_{i}}=RT^{mysql}_{G_{i}}/RT^{ADW}_{G_{i}}$. Where, $RT_{G_{i}}=Average(RT_{q_{i}})$ is average runtime of group $G_{i}$ on MySQL or ADW. Figure 8 describes the time difference between MySQL and ADW for every query. Although running on one computer, but with large data volume, ADW is faster than MySQL on 46 out of 50 queries. MySQL is faster for three queries $12^{th}$, $13^{th}$ and $18^{th}$ belonging to groups $3^{rd}$ and $4^{th}$. The two systems returned the same time for query $24^{th}$ from group $5^{th}$. Within each query group, for fair performance comparison, the queries combine randomly fact tables and dimensional tables. This makes complex queries taking more time and the time difference is significant. When varying the sizes and structures of the tables, the difference is very significant; see Figure 8. $0$$2$$4$$6$$8$$10$$2$$4$$6$Mean$6.24$$2.92$$1.22$$2.86$$2.27$$4.66$$3.36$$4.63$$3.16$$1.56$$3.19$Groups ($G_{i}$)Different times ($Times_{G_{i}}$) Figure 9: Different times between MySQL and ADW in runtime of every group Beside comparing runtime in every query, we aslo compare runtime of every group presented in Figure 9. Comparing to MySQL, ADW is more than at most (6.24 times) at group $1^{st}$ which uses only Where command, and at least (1.22 times) at group $3^{rd}$ which uses Where and Joint commands. 12345678910Mean$0$$500$$1{,}000$$1{,}081.5$$599.7$$111.7$$790.4$$776.6$$1{,}109.2$$483$$1{,}057.3$$297.9$$571.1$$687.8$$173.4$$205.2$$91.2$$276.4$$342.8$$238$$143.7$$228.3$$94.2$$366.4$$216.1$Groups ($G_{i}$)Average runtimes (seconds)MySQLADW Figure 10: Average Runtimes of MySQL and ADW in every Groups Figure 10 presents the average runtime of the 10 query groups on MySQL and ADW. Mean, the run time of a reading query on MySQL and ADW is 687.8 seconds and 216.1 seconds, respectively. It means that ADW is faster 3.19 times. In the future, by deploying ADW solution on cloud or distributed systems, we believe that the performance will be even much better than MySQL. ## 9 Application for Decision Making The proposed ADW and study its performance on real agricultural data, we illustrated some queries examples to show how to extract information from ADW. 
## 9 Application for Decision Making

Having proposed ADW and studied its performance on real agricultural data, we now illustrate some example queries to show how information can be extracted from ADW. These queries incorporate inputs on crop, yield, pest, soil, fertiliser, inspection, farmer, businessman and operation time to reduce labour and fertiliser inputs, improve farmer services and disease treatment, and increase yields. This information could not be extracted if Origin's 29 separate datasets had not been integrated into ADW. Data integration through ADW actually improves the value of crop management data over time for better decision-making.

Example 1: List the fields, the crops in those fields, and the corresponding yields and pests, under the conditions that: (1) the fields did not use 'urea' fertiliser; (2) the crops had 'yellow rust' or 'brown rust' diseases; (3) the crops were grown in 2015.

select CR.CropName, FI.FieldName, FF.Yield, PE.CommonName, FF.PestNumber, PE.Description
from FieldFact FF, Crop CR, Field FI, Pest PE, Fertiliser FE, Inspection INS, OperationTime OP
where FF.CropID = CR.CropID and FF.FieldID = FI.FieldID and FF.PestID = PE.PestID
  and FF.FertiliserID = FE.FertiliserID and CR.CropID = INS.CropID
  and FF.OperationTimeID = OP.OperationTimeID
  and FE.FertiliserName <> 'urea'
  and (INS.Description = 'Yellow Rust' or INS.Description = 'Brown Rust')
  and Year(INS.Date) = '2015' and Year(OP.StartDate) = '2015' and Year(OP.EndDate) = '2015'

Example 2: List the farmers and the crop quantities they sold to the Ori Agro company in 08/2016.

select FA.FarmerID, FA.FarmerName, CR.CropName, SF.Unit, SUM(SF.Quantity)
from SaleFact SF, Business BU, Farmer FA, Crop CR
where SF.BusinessID = BU.BusinessID and SF.FarmerID = FA.FarmerID and SF.CropID = CR.CropID
  and Month(SF.SaleDate) = '08' and Year(SF.SaleDate) = '2016'
  and BU.BusinessName = 'Ori Agro'
group by FA.FarmerID, FA.FarmerName, CR.CropName, SF.Unit

Example 3: List crops and their fertiliser and treatment information, where the crops were cultivated and harvested in 2017 with yield > 10 tons/ha and were attacked by the 'black twitch' pest. Besides, the soil in the field has PH > 6 and Silt <= 50 mg/l.

Select CR.CropName, FE.FertiliserName, FF.FertiliserQuantity, TR.TreatmentName, TR.Rate, TR.TreatmentComment
From FieldFact FF, Crop CR, OperationTime OT, Soil SO, Pest PE, Fertiliser FE, Treatment TR
Where FF.CropID = CR.CropID and FF.OperationTimeID = OT.OperationTimeID
  and FF.SoildID = SO.SoilID and FF.PestID = PE.PestID
  and FF.FertiliserID = FE.FertiliserID and FF.TreatmentID = TR.TreatmentID
  and Year(OT.StartDate) = '2017' and Year(OT.EndDate) = '2017'
  and FF.Yield > 10 and SO.PH > 6 and SO.Silt <= 50
  and PE.CommonName = 'Black twitch'

Example 4: List crops, fertilisers and the corresponding fertiliser quantities in spring 2017 for every field and site of the 10 farmers (crop companies) who used the largest amount of SO3 in spring 2016. To execute this request, the query needs to exploit data in the FieldFact fact table and the six dimension tables Crop, Field, Site, Farmer, Fertiliser and OperationTime. The query consists of two nested subqueries which return the 10 farmers (crop companies) that used the largest amount of SO3 in spring 2016.
Select FI.FieldName, SI.SiteName, FA.FarmerName, CR.CropName, FE.FertiliserName, FF.FertiliserQuantity, FE.Unit, OT.StartDate
From FieldFact FF, Crop CR, Field FI, Site SI, Farmer FA, Fertiliser FE, OperationTime OT
Where FF.CropID = CR.CropID and FF.FieldID = FI.FieldID
  and FF.FertiliserID = FE.FertiliserID and FF.OperationTimeID = OT.OperationTimeID
  and FI.SiteID = SI.SiteID and SI.FarmerID = FA.FarmerID
  and OT.Season = 'Spring' and YEAR(OT.StartDate) = '2017'
  and FA.FarmerID IN (
    Select FarmerID From (
      Select SI.FarmerID as FarmerID, SUM(FF.FertiliserQuantity) as SumFertiliser
      From FieldFact FF, Field FI, Site SI, Fertiliser FE, OperationTime OT
      Where FF.FieldID = FI.FieldID and FF.FertiliserID = FE.FertiliserID
        and FF.OperationTimeID = OT.OperationTimeID and SI.SiteID = FI.SiteID
        and FE.FertiliserName = 'SO3'
        and OT.Season = 'Spring' and YEAR(OT.StartDate) = '2016'
      Group by SI.FarmerID
      Order by SumFertiliser DESC
      Limit 10
    ) AS Table1
  )

## 10 Conclusion and Future Work

In this paper, we presented a schema optimised for the real agricultural datasets that were made available to us. The schema has been designed as a constellation, so it is flexible enough to adapt to other agricultural datasets and to the quality criteria of agricultural Big Data. Based on existing popular open-source DWs, we designed and implemented the agricultural DW by combining Hive, MongoDB and Cassandra to exploit their advantages and overcome their limitations. ADW includes the modules necessary for large-scale and efficient analytics of agricultural Big Data. Moreover, on representative read queries using popular HQL/SQL commands, ADW storage outperforms MySQL by far. Finally, we outlined some complex HQL queries that enabled knowledge extraction from ADW to optimise agricultural operations.

In future work, we shall pursue the deployment of ADW on a cloud system and implement more functionalities to exploit this DW. The future developments will include: (1) experimenting with and analysing the performance of MongoDB and the interaction between MongoDB and Hive; (2) sophisticated data mining and spreading activation algorithms (Ngo, 2014) to determine crop data characteristics, combined with expected outputs, to extract useful knowledge; (3) predictive models based on machine learning algorithms; (4) an intelligent interface and graph representation (Helmer and et al., 2015) for data access; (5) combination with ontologies to extract knowledge (Ngo and et al., 2011; Cao and et al., 2012).

## Appendix

The following are the HQL/SQL scripts of 10 queries, one representative of each of the 10 query groups. The average runtimes of these queries on MySQL and ADW are shown in Figure 11.
1) The $5^{th}$ query, belonging to the $1^{st}$ group:

SELECT fieldfact.FieldID, crop.cropname, fieldfact.yield
FROM fieldfact, crop
WHERE fieldfact.cropid = crop.cropid and SprayQuantity = 7
and (crop.CropName like 'P%' or crop.CropName like 'R%' or crop.CropName like 'G%');

2) The $10^{th}$ query, belonging to the $2^{nd}$ group:

SELECT soil.PH, count(*)
FROM fieldfact, soil
WHERE fieldfact.SoildID = soil.SoilID and fieldfact.sprayquantity = 2
GROUP by soil.PH;

Figure 11: Average runtimes (in seconds) of MySQL and ADW on the 10 typical queries $q_{i}$:

Query $q_{i}$ | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50
MySQL | 97.9 | 754.8 | 52.7 | 2,297 | 1,192 | 2,188.4 | 95.4 | 265.9 | 439.5 | 892.4
ADW | 3 | 233.2 | 3.6 | 479 | 422.6 | 226.7 | 5.2 | 7.6 | 212.3 | 472.1

3) The $15^{th}$ query, belonging to the $3^{rd}$ group:

SELECT fieldfact.yield, fertiliser.fertiliserName, fertiliser.fertiliserGroupName
FROM fieldfact RIGHT JOIN fertiliser on fieldfact.fertiliserID = fertiliser.fertiliserID
WHERE fieldfact.fertiliserQuantity = 10 and fertiliser.fertiliserName like '%slurry%';

4) The $20^{th}$ query, belonging to the $4^{th}$ group:

SELECT sprayproductname
FROM fieldfact, spray
WHERE fieldfact.sprayid = spray.sprayid and fieldfact.watervolumn > 5 and fieldfact.watervolumn < 20
UNION
SELECT productname
FROM product, orderfact
WHERE product.ProductID = orderfact.ProductID and (orderfact.Quantity = 5 or orderfact.Quantity = 6);

5) The $25^{th}$ query, belonging to the $5^{th}$ group:

SELECT fieldfact.fieldID, field.FieldName, field.FieldGPS, spray.SprayProductName
FROM fieldfact, field, spray
WHERE fieldfact.FieldID = field.FieldID and fieldfact.SprayID = spray.SprayID and fieldfact.PestNumber = 6
ORDER BY field.FieldName;

6) The $30^{th}$ query, belonging to the $6^{th}$ group:

SELECT fieldfact.FieldID, nutrient.NutrientName, nutrient.Quantity, nutrient.`Year`
FROM fieldfact RIGHT JOIN nutrient on fieldfact.NutrientID = nutrient.NutrientID
WHERE fieldfact.NutrientQuantity = 3 and fieldfact.fertiliserquantity = 3
ORDER BY nutrient.NutrientName LIMIT 10000;

7) The $35^{th}$ query, belonging to the $7^{th}$ group:

SELECT crop.cropname, sum(fieldfact.watervolumn) as sum1
FROM fieldfact, crop
WHERE fieldfact.cropid = crop.cropid and fieldfact.sprayquantity = 8 and crop.EstYield >= 1 and crop.EstYield <= 10
GROUP BY crop.cropname HAVING sum1 > 100;

8) The $40^{th}$ query, belonging to the $8^{th}$ group:

SELECT crop.cropname, sum(fieldfact.fertiliserquantity) as sum1
FROM fieldfact, crop
WHERE fieldfact.cropid = crop.cropid and fieldfact.nutrientquantity = 5 and crop.EstYield <= 1
GROUP by crop.cropname HAVING sum1 > 30
ORDER BY crop.cropname;

9) The $45^{th}$ query, belonging to the $9^{th}$ group:

SELECT nutrient.NutrientName, sum(nutrient.Quantity) as sum1
FROM fieldfact LEFT JOIN nutrient on fieldfact.NutrientID = nutrient.NutrientID
WHERE nutrient.nutrientName like '%tr%' and (fieldfact.pestnumber = 16 or fieldfact.pestnumber = 15)
GROUP by nutrient.NutrientName HAVING sum1 < 300
ORDER BY nutrient.NutrientName;

10) The $50^{th}$ query, belonging to the $10^{th}$ group:

SELECT sprayproductname as name1, sum(fieldfact.watervolumn) as sum1
FROM fieldfact, spray
WHERE fieldfact.sprayid = spray.sprayid and fieldfact.Yield > 4 and fieldfact.Yield < 8
GROUP by sprayproductname HAVING sum1 > 210
UNION
SELECT productname as name1, sum(orderfact.Quantity) as sum2
FROM product, orderfact
WHERE product.ProductID = orderfact.ProductID and (orderfact.Quantity = 5 or orderfact.Quantity = 6)
GROUP by productname HAVING sum2 > 50
ORDER BY name1;
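For reference, the average runtimes reported in Figure 11 can be reproduced with a simple timing harness. The sketch below is our illustration (not from the paper): it assumes a reachable MySQL instance and the mysql-connector-python package, and the connection parameters and the `queries` dictionary are hypothetical placeholders.

```python
import time
import mysql.connector  # pip install mysql-connector-python

queries = {
    "q5": ("SELECT fieldfact.FieldID, crop.cropname, fieldfact.yield "
           "FROM fieldfact, crop WHERE fieldfact.cropid = crop.cropid "
           "AND SprayQuantity = 7"),
    # ... the remaining nine representative queries
}

# hypothetical connection parameters
conn = mysql.connector.connect(host="localhost", user="adw",
                               password="secret", database="adw")
cur = conn.cursor()

REPEATS = 5
for name, sql in queries.items():
    elapsed = 0.0
    for _ in range(REPEATS):
        start = time.perf_counter()
        cur.execute(sql)
        cur.fetchall()               # force full result retrieval
        elapsed += time.perf_counter() - start
    print(f"{name}: {elapsed / REPEATS:.1f} s on average")

cur.close()
conn.close()
```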
## Acknowledgment

This research is an extension of the work of Ngo et al. (2019) and is part of the CONSUS research programme. It is funded under the SFI Strategic Partnerships Programme (16/SPP/3296) and is co-funded by Origin Enterprises Plc.

## References

* Adelman, S. and Moss, L. (2000). Data warehouse project management, 1st edition. Addison-Wesley Professional.
* Amazon document (2018). Amazon Redshift database developer guide. Samurai ML.
* Bendre, M. R. et al. (2015). Big data in precision agriculture: Weather forecasting for future farming. In International Conference on Next Generation Computing Technologies (NGCT). IEEE.
* Cao, T. et al. (2012). Semantic search by latent ontological features. International Journal of New Generation Computing, Springer, SCI, 30(1):53–71.
* Chodorow, K. (2013). MongoDB: The definitive guide, 2nd edition (powerful and scalable data storage). O'Reilly Media.
* Devitt, S. K. et al. (2017). A cognitive decision tool to optimise integrated weed management. In Proceedings of International Tri-Conference for Precision Agriculture.
* Dicks, L. V. et al. (2014). Organising evidence for environmental management decisions: a '4s' hierarchy. Trends in Ecology & Evolution, 29(11):607–613.
* Du, D. (2018). Apache Hive essentials, 2nd edition. Packt Publishing.
* EC report (2016). Europeans, agriculture and the common agricultural policy. Special Eurobarometer 440, The European Commission.
* FAO-CSDB report (2018). Global cereal production and inventories to decline but overall supplies remain adequate, release date: December 06, 2018. Cereal Supply and Demand Brief, FAO.
* FAO-FSIN report (2018). Global report on food crises 2018. Food Security Information Network, FAO.
* Friedman, S. P. et al. (2016). Didas – user-friendly software package for assisting drip irrigation design and scheduling. Computers and Electronics in Agriculture, 120:36–52.
* Golfarelli, M. and Rizzi, S. (2009). Data warehouse design: modern principles and methodologies. McGraw-Hill Education.
* Gupta, A. et al. (2016). Mesa: a geo-replicated online data warehouse for google's advertising system. Communications of the ACM, 59(7):117–125.
* Gutierreza, F. et al. (2019). A review of visualisations in agricultural decision support systems: An HCI perspective. Computers and Electronics in Agriculture, 163.
* Hafezalkotob, A. et al. (2018). A decision support system for agricultural machines and equipment selection: A case study on olive harvester machines. Computers and Electronics in Agriculture, 148:207–216.
* Han, E. et al. (2017). Climate-agriculture-modeling and decision tool (camdt): a software framework for climate risk management in agriculture. Environmental Modelling & Software, 95:102–114.
* Helmer, S. et al. (2015). A similarity measure for weaving patterns in textiles. In the 38th ACM SIGIR Conference on Research and Development in Information Retrieval, pages 163–172.
* Hewitt, E. and Carpenter, J. (2016). Cassandra: the definitive guide, 2nd edition (distributed data at web scale). O'Reilly Media.
* Hows, D. et al. (2015). The definitive guide to MongoDB, 3rd edition (a complete guide to dealing with big data using MongoDB). Apress.
* Huang, Y. et al. (2013). Estimation of cotton yield with varied irrigation and nitrogen treatments using aerial multispectral imagery. International Journal of Agricultural and Biological Engineering, 6(2):37–41.
* Inmon, W. H. (2005). Building the data warehouse. Wiley.
* Kamilaris, A. et al. (2018). Estimating the environmental impact of agriculture by means of geospatial and big data analysis: the case of Catalonia, pages 39–48. Springer.
* Kimball, R. and Ross, M. (2013). The data warehouse toolkit: the definitive guide to dimensional modeling (3rd edition). Wiley.
* Lam, C. P. et al. (2016). Hadoop in action, 2nd edition. Manning.
* Lokers, R. et al. (2016). Analysis of big data technologies for use in agro-environmental science. Environmental Modelling & Software, 48:494–504.
* Lundstrom, C. and Lindblom, J. (2018). Considering farmers' situated knowledge of using agricultural decision support systems (agridss) to foster farming practices: the case of cropsat. Agricultural Systems, 159:9–20.
* Neeraj, N. (2015). Mastering Apache Cassandra, 2nd edition. Packt Publishing.
* Ngo, V. (2014). Discovering latent information by spreading activation algorithm for document retrieval. International Journal of Artificial Intelligence & Applications, 5(1):23–34.
* Ngo, V. et al. (2011). Discovering latent concepts and exploiting ontological features for semantic text search. In the 5th Int. Joint Conference on Natural Language Processing, ACL, pages 571–579.
* Ngo, V. et al. (2018). An efficient data warehouse for crop yield prediction. In The 14th International Conference on Precision Agriculture (ICPA-2018), pages 3:1–3:12.
* Ngo, V. M. et al. (2019). Designing and implementing data warehouse for agricultural big data. In The 8th International Congress on BigData (BigData-2019), pages 1–17. Springer-LNCS, Vol. 11514.
* Ngo, V. M. and Kechadi, M. T. (2020). Crop knowledge discovery based on agricultural big data integration. In The 4th International Conference on Machine Learning and Soft Computing (ICMLSC), pages 1–5. ACM.
* Nilakanta, S. et al. (2008). Dimensional issues in agricultural data warehouse designs. Computers and Electronics in Agriculture, 60(2):263–278.
* Oliver, D. M. et al. (2017). Design of a decision support tool for visualising e. coli risk on agricultural land using a stakeholder-driven approach. Land Use Policy, 66:227–234.
* Oracle document (2017). Database data warehousing guide. Oracle12c doc release 1.
* Origin report (2018). Annual report and accounts. Origin Enterprises plc.
* Pantazi, X. E. (2016). Wheat yield prediction using machine learning and advanced sensing techniques. Computers and Electronics in Agriculture, 121:57–65.
* Paredes, P. et al. (2014). Partitioning evapotranspiration, yield prediction and economic returns of maize under various irrigation management strategies. Agricultural Water Management, 135:27–39.
* Park, S. et al. (2016). Drought assessment and monitoring through blending of multi-sensor indices using machine learning approaches for different climate regions. Agricultural and Forest Meteorology, 216:157–169.
* Protopop, I. and Shanoyan, A. (2016). Big data and smallholder farmers: Big data applications in the agri-food supply chain in developing countries. International Food and Agribusiness Management Review, IFAMA, 19(A):1–18.
* Rembold, F. et al. (2019). Asap: A new global early warning system to detect anomaly hot spots of agricultural production for food security analysis. Agricultural Systems, 168:247–257.
* Rogovska, N. et al. (2019). Development of field mobile soil nitrate sensor technology to facilitate precision fertilizer management. Precision Agriculture, 20(1):40–55.
* Ruan, J. et al. (2019). A life cycle framework of green iot-based agriculture and its finance, operation, and management issues. IEEE Communications Magazine, 57(3):90–96.
* Rupnik, R. et al. (2019). Agrodss: a decision support system for agriculture and farming. Computers and Electronics in Agriculture, 161:260–271.
* Schnase, J. et al. (2017). Merra analytic services: meeting the big data challenges of climate science through cloud-enabled climate analytics-as-a-service. Computers, Environment and Urban Systems, 161:198–211.
* Schuetz, C. G. et al. (2018). Building an active semantic data warehouse for precision dairy farming. Organizational Computing and Electronic Commerce, 28(2):122–141.
* Schulze, C. et al. (2007). Data modelling for precision dairy farming within the competitive field of operational and analytical tasks. Computers and Electronics in Agriculture, 59(1-2):39–55.
* Udiasa, A. et al. (2018). A decision support tool to enhance agricultural growth in the mékrou river basin (west africa). Computers and Electronics in Agriculture, 154:467–481.
* UN document (2017). World population projected to reach 9.8 billion in 2050, and 11.2 billion in 2100. Department of Economic and Social Affairs, United Nations.
* USDA report (2018). World agricultural supply and demand estimates 08/2018. United States Department of Agriculture.
2024-09-04T02:54:58.587189
2020-03-10T00:25:25
2003.04472
{ "authors": "Jie Ren, Wen-Long You, and Xiaoqun Wang", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26128", "submitter": "Jie Ren", "url": "https://arxiv.org/abs/2003.04472" }
arxiv-papers
# Entanglements and correlations of one-dimensional quantum spin-1/2 chain with anisotropic power-law long range interactions

Jie Ren <EMAIL_ADDRESS>Department of Physics, Changshu Institute of Technology, Changshu 215500, China

Wen-Long You College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China; School of Physical Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China

Xiaoqun Wang <EMAIL_ADDRESS>Key Laboratory of Artificial Structures and Quantum Control of MOE, Shenyang National Laboratory for Materials Science, School of Physics and Astronomy, Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai 200240, China; Collaborative Innovation Center for Advanced Microstructures, Nanjing University, Nanjing 210093, China; Beijing Computational Science Research Center, Beijing 100084, China

###### Abstract

The correlations, entanglement entropy, and fidelity susceptibility are calculated for a one-dimensional spin-1/2 XXZ chain with anisotropic power-law long-range interactions by employing the density matrix renormalization group method. In particular, the long-range interaction is taken to be ferromagnetic for the transverse components, while it can be either ferro- or antiferromagnetic for the longitudinal spin component. Two ground-state phase diagrams are established versus the anisotropy of the interactions, which not only shifts the phase boundaries of the short-range counterparts, but also leads to the emergence of exotic phases. We find that the long-range interaction of the $z$-component results in a Wigner crystal phase, whereas the transverse one may break a continuous symmetry, resulting in a continuous symmetry breaking phase.

###### pacs: 03.67.-a,05.30.Jp

## I Introduction

Quantum phase transitions (QPTs) and quantum critical phenomena are of general importance for understanding the novel properties of strongly correlated systems, such as quantum magnetic materials. Usually, short-range interactions, e.g., nearest-neighbor and next-nearest-neighbor interactions, are considered sufficient for appropriate descriptions of the major magnetic properties of those systems Sachdev ; XWang2000 ; Luo2017 ; You19 ; WN19 ; Luo2019 . However, there actually exist several types of long-range interactions, such as the Coulomb interaction $1/r$ Saffman , the dipole-dipole interaction $1/r^{3}$ Lahaye ; Deng ; Yan , and the van der Waals interaction $1/r^{6}$ Saffman , in some complicated compounds, where the relevant electrons occupy higher orbitals of atoms with lower symmetries, subject to crystal-field effects. Moreover, in recent years, long-range interactions have been engineered in ultracold atomic systems with optical lattices or trapped ions. For instance, a power-law Ising interaction $1/r^{\alpha}$ with an adjustable exponent $0<\alpha<3$ has been realized in trapped ions Britton ; Islam ; Gorshkov ; Jurcevic . This experimental progress has greatly stimulated theoretical studies of possible novel effects resulting from long-range interactions W ; Koffel ; Sun01 ; Zhu ; gong16 ; gong17 ; gong17L ; gong16R ; Frerot ; Vanderstraeten . In particular, a transition was revealed by the calculation of the entanglement for a long-range ($\sim r^{-\alpha}$) antiferromagnetic Ising chain Koffel , and was further confirmed by the fidelity susceptibility to be of second order for all $\alpha$ Sun01 ; Zhu .
Moreover, by combining linear spin-wave theory, field-theory approaches and the density-matrix renormalization group (DMRG) white ; KWHP ; U01 ; U02 ; McCulloch , the effects of long-range interactions on local correlation functions, the entanglement entropy and the central charge were investigated for both spin-1/2 gong17 and spin-1 gong16 chains, awaiting experimental observation. In addition, one finds that long-range interactions and long-range hopping may lead to drastic effects on many-body localization in a one-dimensional (1D) spinless fermion system Nag2019 , which essentially corresponds to an $XY$ type of long-range spin interaction. In this regard, anisotropic long-range spin interactions can be anticipated to give rise to even richer effects on quantum transitions.

In this paper, we study a 1D spin-1/2 XXZ system with anisotropic power-law long-range interactions in terms of the entanglement entropy, the fidelity susceptibility, and correlation functions by performing DMRG calculations. Phase diagrams are established with respect to the power exponents and the anisotropy of the interactions. In the following, Sec. II presents the Hamiltonian of our study. The details of the DMRG calculations and the definitions of the calculated quantities are discussed in Sec. III. Numerical results are shown in Sec. IV, with further discussions given in the last section.

## II Hamiltonian

In this paper, we consider the following spin-$1/2$ chain with anisotropic long-range interactions, whose Hamiltonian is given by:

$\displaystyle H=\sum_{j>i}\\{\frac{J_{xy}}{|i-j|^{\alpha}}(S^{x}_{i}S^{x}_{j}+S^{y}_{i}S^{y}_{j})+\frac{J_{z}}{|i-j|^{\beta}}S^{z}_{i}S^{z}_{j}\\},$ (1)

where $i$ and $j$ label the sites of a one-dimensional lattice, and $S^{\gamma}=\sigma^{\gamma}/2$ with $\gamma=x,y$, or $z$, setting $\hbar=1$ and $\sigma^{\gamma}$ being the Pauli matrices. Interactions between two spins separated by a distance $r=|i-j|$ decay as $r^{-\alpha}$ for both the $x$ and $y$ components of the spins, but as $r^{-\beta}$ for the $z$ direction. As usual, the parameters $\alpha,\beta$ are both taken positive, while $J_{xy}=-1$ is set for simplicity, so that $J_{z}$ readily serves as the anisotropy parameter in the establishment of the phase diagram.

For this system, in the limit of $\alpha,\beta\rightarrow+\infty$, the Hamiltonian reduces to that of a spin-1/2 anisotropic chain with nearest-neighbor interactions. It turns out that the system involves a ferromagnetic (FM) phase for $J_{z}<-1$, whereas a gapped antiferromagnetic (AFM) phase appears for $J_{z}>1$. Furthermore, in the region $-1<J_{z}\leq 1$, the system displays an $XY$ phase, where quantum fluctuations exclude the existence of any long-range order but the correlation functions decay as a power law of the distance, as in a Luttinger liquid. For more general values of $\alpha$ and $\beta$, the long-range interactions may result in different features of those phases, which are also expected to be properly characterized by long-distance correlation functions, as exploited below.

## III Measurements and Method

Thanks to the DMRG method white ; KWHP ; U01 , the ground-state properties of quasi-one-dimensional systems can be calculated with very high accuracy. For the present studies of Hamiltonian (1), we adopt both infinite-size DMRG (iDMRG) McCulloch and finite-size DMRG, which are based on matrix product states U02 .
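For cross-checks of the DMRG data on very small chains, Hamiltonian (1) can also be diagonalized exactly. The following is a minimal numpy sketch (our illustration, not the authors' code; all function names are ours) that builds Eq. (1) with $J_{xy}=-1$ and tunable exponents $\alpha$ and $\beta$:

```python
import numpy as np
from functools import reduce

# spin-1/2 operators, hbar = 1
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
id2 = np.eye(2)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    ops = [id2] * L
    ops[i] = op
    return reduce(np.kron, ops)

def hamiltonian(L, Jz, alpha, beta, Jxy=-1.0):
    """Dense matrix of Eq. (1) for a small open chain of L sites."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L):
        for j in range(i + 1, L):
            r = j - i
            H += (Jxy / r ** alpha) * (site_op(sx, i, L) @ site_op(sx, j, L)
                                       + site_op(sy, i, L) @ site_op(sy, j, L))
            H += (Jz / r ** beta) * site_op(sz, i, L) @ site_op(sz, j, L)
    return H

# example: 8 sites, Jz = 1, beta = 2; a large alpha mimics alpha = infinity
H = hamiltonian(8, Jz=1.0, alpha=50.0, beta=2.0)
evals, evecs = np.linalg.eigh(H)
print("ground-state energy:", evals[0].real)
```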
The number of eigenstates kept for the reduced density matrix in the truncation of bases is up to $m=400$, which allows the truncation error to be smaller than $10^{-9}$. In our calculations with the finite-size DMRG algorithm, we handle the long-range interaction directly as a summation over matrix product operators (MPOs), rather than approximating it by a summation of finitely many exponential terms in the MPO Vidal , which would inevitably introduce additional systematic error. Our codes are mainly based on the iTensor C++ library tesnor . Since the $z$-component of the total spin of the present system commutes with the Hamiltonian (1), the ground-state energy is obtained by comparing the lowest energies of each subspace of $S^{z}_{t}=\sum_{i=1}^{L}\langle S^{z}_{i}\rangle$. We found that the ground state resides in the sector of either $S^{z}_{t}=0$ or $S^{z}_{t}=L/2$. To examine the reliability of our numerics, we also perform the finite-size DMRG varying the number of states in the truncated bases. Once the ground-state energy and the corresponding ground state are identified accurately, the first excited state and the corresponding energy (gap) can be determined similarly, by orthogonalizing against the ground state.

For a quantum many-body system, the entanglement entropy (EE) can be extracted from the ground-state wavefunction $|\psi_{0}\rangle$ to characterize the quantum phase transitions induced by interactions or external fields. Usually, one may separate a given system into two subsystems $A$ and $B$, and compute the reduced density matrix of part $A$ by partially tracing over the degrees of freedom of subsystem $B$, which can be written formally as $\rho_{A}=\textrm{Tr}_{B}(|\psi_{0}\rangle\langle\psi_{0}|).$ Then, the entanglement entropy measuring the entanglement between parts $A$ and $B$ is given by

$\displaystyle S_{A}=-\textrm{Tr}(\rho_{A}\ln\rho_{A}),$ (2)

which can be readily evaluated from the eigenvalues of $\rho_{A}$ in DMRG calculations. For a one-dimensional short-range interacting system with an open boundary condition (OBC), conformal field theory (CFT) suggests that the entanglement entropy of a subsystem $A$ with size $l$ possesses the following finite-size $L$ scaling behavior Cardy

$\displaystyle S_{l}=\frac{c}{6}\ln[\frac{L}{\pi}\sin(\frac{\pi l}{L})]+S_{0},$ (3)

where $c$ is the central charge, which usually takes different values in different phases, and $S_{0}$ is a non-universal constant. This scaling behavior has been employed to explore the critical entanglement of defects Zhao2006 and Gaussian transitions Hu2011 . In this paper, we will show that this scaling behavior is also applicable to a case associated with long-range interactions.

## IV Results

### IV.1 $1/\alpha=0$

We first consider the case of $\alpha=\infty$, which implies that only the nearest-neighbor term of the $xy$-interaction survives. It turns out that the long-range interaction of the $z$-component, governed by $\beta$, may result in novel properties in competition with the $xy$-components.
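To make Eqs. (2) and (3) concrete, the following sketch (again our illustration, not the authors' DMRG code) evaluates the block entropy of a pure state from its Schmidt spectrum and fits the effective central charge from the OBC scaling form; the state `psi0` is assumed to come, e.g., from the small-chain diagonalization sketched in Sec. III:

```python
import numpy as np

def block_entropy(psi0, l, L):
    """Von Neumann entropy, Eq. (2), of the first l sites of an L-site pure state."""
    m = psi0.reshape(2 ** l, 2 ** (L - l))
    s = np.linalg.svd(m, compute_uv=False)
    p = s ** 2                        # eigenvalues of the reduced density matrix
    p = p[p > 1e-12]                  # drop numerically vanishing weights
    return float(-np.sum(p * np.log(p)))

def fit_central_charge(psi0, L):
    """Least-squares fit of Eq. (3): S_l = (c/6) ln[(L/pi) sin(pi l/L)] + S0."""
    ls = np.arange(1, L)
    x = np.log(L / np.pi * np.sin(np.pi * ls / L))
    y = np.array([block_entropy(psi0, l, L) for l in ls])
    slope, s0 = np.polyfit(x, y, 1)
    return 6 * slope, s0              # effective central charge and S0

# usage: c_eff, S0 = fit_central_charge(evecs[:, 0], L=8)
```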
In this case, Hamiltonian (1) can be recast into a one-dimensional interacting spinless fermionic chain via the Jordan-Wigner transformation:

$\displaystyle S^{z}_{i}=\frac{1}{2}-c_{i}^{\dagger}c_{i},\qquad S^{+}_{i}=e^{i\pi\sum_{j=1}^{i-1}c_{j}^{\dagger}c_{j}}c_{i},\qquad S^{-}_{i}=e^{i\pi\sum_{j=1}^{i-1}c_{j}^{\dagger}c_{j}}c_{i}^{\dagger},$

where $S^{\pm}_{i}$=$S^{x}_{i}$ $\pm$ $iS^{y}_{i}$ are the raising and lowering spin operators. The ferromagnetic $J_{xy}$-term then simply represents the hopping of fermions, while the $J_{z}$-term stands for the density-density interaction of fermions, which can be either attractive for $J_{z}<0$ or repulsive for $J_{z}>0$. One may expect that this density-density interaction results in quantum transitions for different $\alpha$ and $\beta$. To explore this, we compute the correlation functions between two spins at sites $i$ and $j$ with distance $r=|i-j|$ for $\beta=2$, using the iDMRG algorithm. Figure 1 shows the results for $r=99$. One can see that when $J_{z}<-0.636$, the transverse correlation $\langle S^{+}_{i}S^{-}_{i+99}\rangle=0$ and the longitudinal correlation $\langle S^{z}_{i}S^{z}_{i+99}\rangle=1/4$, implying that the system is in the FM phase; then $\langle S^{+}_{i}S^{-}_{i+99}\rangle$ suddenly jumps to a positive value at $J_{z}=-0.636$ and $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ drops to zero simultaneously. This discontinuity indicates that the ground state undergoes a first-order transition from the FM phase into the $XY$ phase. This discontinuous feature is thus utilized here to determine the critical values of $\beta$ and $J_{z}$ for the quantum phase transition between the $XY$ and FM phases.

Figure 1: (Color online) Correlation functions $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ and $\langle S^{z}_{i}S^{z}_{i+r}\rangle$ are plotted as a function of the $z$-component interaction $J_{z}$ for $\alpha=\infty$, $\beta=2$ and $r=99$. Inset: a log-log plot of $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ as a function of $r$ for $J_{z}=\pm 0.5$.

Figure 2: (Color online) (a) Entanglement entropies are plotted as a function of the $z$-component interaction $J_{z}$ for various system sizes with $\alpha=\infty$ and $\beta=2$. (b) The peak positions of $S_{L/2}$ versus the system size $L$.

Moreover, as $J_{z}$ further increases, the transverse correlation $\langle S^{+}_{i}S^{-}_{i+99}\rangle$ gradually reduces to zero, while the longitudinal correlation $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ turns negative for $J_{z}\gtrsim 3/2$, which signals that the system is driven into an AFM phase. A little scrutiny reveals that the transverse correlation $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ satisfies a power-law decay with the distance $r$ gong17 , as manifested in the inset of Fig. 1. To determine the critical point of the transition between the $XY$ phase and the AFM phase more precisely, we also calculate the von Neumann entropy, i.e., the entanglement entropy, between the two halves of the chain, using the finite-size DMRG algorithm. The entanglement entropy is shown in Fig. 2 as a function of $J_{z}$ with $\beta=2$ for different sizes of the chain. With increasing $J_{z}$, the EE increases first and then declines.
The peak becomes more pronounced for a larger size $L$, and the location of the peak moves to a lower value of $J_{z}$, characterizing a transition between the $XY$ phase and the AFM phase Wang . According to finite-size scaling theory Fisher ; Barber83 , the position of the pseudo-critical point of a finite-size system is expected to approach the true critical point as $L$ $\to$ $\infty$. For relevant operators in the driving Hamiltonian on sufficiently large systems, i.e., $\nu$$d$$<$2, where $\nu$ is the critical exponent of the correlation length and $d$ the dimensionality of the system, the leading term in the expansion of the pseudo-critical point obeys

$\displaystyle|J_{z}^{c}(L)-J_{z}^{c}(\infty)|\propto L^{-1/\nu},$ (4)

where $J_{z}^{c}(\infty)$ is the critical value in the thermodynamic limit. Such algebraic convergence can be accelerated considerably by some elaborate strategies Roncaglia . We obtain $J_{z}^{c}=1.520$ and $\nu=1.695$ for the present case, consistent with the inflection point of the correlations shown in Fig. 1. We note that the scaling behavior of Eq. (4) with $L$ is also valid for the maximum of the fidelity susceptibility defined in Eq. (6) You2011 (see below).

Figure 3: (Color online) Finite-size scaling of the energy gap $\Delta$ for various $\beta$ and $J_{z}$. Symbols show numerical results obtained by DMRG calculations and solid lines are fits of the data by quadratic polynomials in $1/L$. The result for $J_{z}=0$ is also plotted for comparison.

Low-lying excitation energies often reveal distinctive features of the different phases of quantum many-body interacting systems. As mentioned previously, the system involves the gapless $XY$ phase for $-1<J_{z}\leq 1$ in the limit of $\beta=\infty$, which has the central charge $c_{\rm eff}=1$ owing to the conformal symmetry Vidal03 . In the Jordan-Wigner representation of the Hamiltonian (1), the finite-size energy gap of the interacting spinless fermions has a linear $1/L$-dependence, reflecting the relativistic spectrum at the Fermi point, i.e., the low-lying spectrum of a Luttinger liquid. When $\beta\neq\infty$, however, it is clearly of great interest whether such an $XY$ phase can be robust against a strong long-range repulsive interaction. For $J_{z}=1$ and $\beta=1$ Schulz ; Li , it was suggested that the ground state would be a quasi-Wigner crystal (WC), which results from the dominance of the long-range repulsive interaction over the kinetic energy. We calculated the finite-size energy gap $\Delta(L)$ between the ground state and the first excited state as a function of system size for various cases, as illustrated in Fig. 3. One can see that the energy gap $\Delta(\beta,J_{z})$ can be either zero, including the case of $J_{z}=1$ and $\beta=1$, or finite in the thermodynamic limit, corresponding to the $XY$ and gapped quasi-WC phases, respectively. However, for given $J_{z}$, when $\beta$ approaches its critical value $\beta_{c}$ from either the $XY$ phase or the WC phase, where $\Delta(L)=\Delta(\beta,J_{z})+A_{1}/L+O(1/L^{2})$ You14 , it becomes rather difficult to determine the phase boundary between these two phases accurately, owing to the limited precision for tiny values of $\Delta(L)$. Instead, we adopt the effective central charge $c_{\rm eff}$ deduced from the scaling behavior of the entanglement entropy given in Eq. (3), which enables us to locate the phase boundary more accurately.
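The extrapolation in Eq. (4) amounts to a three-parameter fit of the pseudo-critical points. A minimal scipy sketch (the system sizes and peak positions below are hypothetical placeholders, not numbers from this work):

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_critical(L, Jc_inf, a, nu):
    """Leading finite-size behavior of Eq. (4)."""
    return Jc_inf + a * L ** (-1.0 / nu)

Ls = np.array([48, 64, 96, 128, 192])            # hypothetical sizes
Jc_L = np.array([1.72, 1.68, 1.63, 1.60, 1.57])  # hypothetical peak positions

popt, _ = curve_fit(pseudo_critical, Ls, Jc_L, p0=[1.5, 1.0, 1.5])
Jc_inf, a, nu = popt
print(f"J_c(inf) = {Jc_inf:.3f}, nu = {nu:.3f}")
```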
We note that this scaling behavior is valid in the presence of the long-range interaction, as demonstrated numerically in Fig. 4, although it was originally derived for short-range interacting cases with conformal symmetries Cardy ; Laflorencie .

Figure 4: (Color online) The scaling behavior of the entanglement entropy versus $\ln(x)=\ln[L/\pi\sin(\pi l/L)]$ for different values of $\beta^{-1}$ with $L=300$. Inset shows the fitted coefficients as a function of $\beta^{-1}$ for system sizes $L=200$ (square) and $L=300$ (circle).

Figure 4 shows the entanglement entropy as a function of $\ln[L/\pi\sin(\pi l/L)]$ for various values of $\beta$ and positive $J_{z}$. It is instructive that the entanglement entropy still follows the scaling behavior of Eq. (3), although conformal symmetries are not yet known here in general. The slope of the linear behavior then yields an effective central charge $c_{\rm eff}$, which varies with $\beta$ as illustrated for system sizes $L=200$ and $300$ at $J_{z}=1$ in the inset of Fig. 4. One can see that the finite-size effects for small $1/\beta$ are small but still visible, resulting in a correction to $c^{0}_{\rm eff}=1$ in the thermodynamic limit, and they diminish with increasing $1/\beta$. The curves for these two sizes cross a horizontal line corresponding to $c_{\rm eff}=c^{0}_{\rm eff}$ at $1/\beta_{c}=0.756$, where the irrelevant corrections to Eq. (3) vanish. The finite-size effect then becomes negligible for $\beta\leq\beta_{c}$. This provides an alternative, more accurate way to determine transition points between the $XY$ (critical) and WC (noncritical) phases Alet ; gong16 ; gong17 .

Figure 5: (Color online) Phase diagram of Hamiltonian (1) as a function of the interaction $J_{z}$ and $1/\beta$ with $\alpha\rightarrow+\infty$.

In addition, we note that the FM phase is formed owing to the instability of the effectively attractive density-density interaction for $J_{z}\leq 0$ upon changing $1/\beta$. Accordingly, the central charge is zero in the FM phase, but it takes the value 3/2 on the boundary with the $XY$ phase in the thermodynamic limit Chen ; Olalla ; Alba . Putting these results together, the phase diagram for $\alpha=\infty$ is depicted in Fig. 5. One can see that the critical points between the $XY$ phase and the FM phase asymptotically approach $J_{z}=0$, while the critical points between the AFM phase and the $XY$ phase move up with increasing $1/\beta$. Moreover, it is worthwhile to mention that at $\beta=0$ with $J_{z}>0$, the $J_{z}$ term effectively results in a kind of long-range frustration with the same strength for all sites, whose diagonal elements cancel each other in the ground state, which lies in the $S^{z}_{total}=0$ subspace Zerobeta . In this case, the ground state again becomes gapless and the central charge equals one. In particular, the energy gap $\Delta(L)$ scales to zero in the limit of $L\rightarrow\infty$ independently of $J_{z}$, as illustrated for both $J_{z}=1$ and $J_{z}=2$ in Fig. 3. Moreover, the entanglement entropy behaves the same for $J_{z}=1$ and $2$, resulting in $c_{\rm eff}\simeq 1.02$, as seen in Fig. 4. As this regime is connected to $J_{z}=0$, it is natural to conclude that the system is indeed in the $XY$ phase, i.e., the transition between the FM and $XY$ phases takes place at $J_{z}=0$ for $1/\beta=\infty$.

### IV.2 $1/\beta=0$

In this section, we turn to the case of $\beta\rightarrow+\infty$, in which only the nearest-neighbor interaction survives in the $J_{z}-$term of the Hamiltonian Eq. (1).
The exponent $\alpha$ of the long-range $XY$ interaction can be considered a tunable parameter to explore the quantum phase transition for various values of $J_{z}$.

Figure 6: (Color online) Correlation functions $\langle S^{+}_{i}S^{-}_{i+99}\rangle$ and $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ are plotted as a function of the interaction $J_{z}$ for (a) $\alpha=4$ and (b) $\alpha=2$. Inset: A log-log plot of $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ as a function of the distance $r$ with $J_{z}=\pm 0.5$.

Figure 6 shows the dependence of the two-spin correlations on $J_{z}$ at a distance of $|i-j|=99$ for different $\alpha$, calculated by using the iDMRG algorithm. When $J_{z}$ is sufficiently negative, $\langle S^{+}_{i}S^{-}_{i+99}\rangle=0$ and $\langle S^{z}_{i}S^{z}_{i+99}\rangle=1/4$, suggesting that the system is in the FM phase. When $J_{z}$ is sufficiently large, the transverse correlations remain zero, whereas $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ becomes negative, so that the ground state is an AFM state. Analogous to the case of $\alpha=\infty$, here we again utilize the discontinuity of the correlation functions to locate the critical points of $\alpha$ and $J_{z}$ at the boundary of the FM phase, while the boundary of the AFM phase is also determined in terms of the entanglement entropy (see below). In an intermediate range of $J_{z}$, one can further see that the transverse correlations $\langle S^{+}_{i}S^{-}_{i+99}\rangle$ are positive but the longitudinal correlations $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ vanish. Interestingly, we find that $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ is a concave function of $J_{z}$ for $\alpha=2$, but becomes a convex one for $\alpha=4$. Moreover, when $J_{z}=\pm 0.5$, $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ behaves as a power law of $1/r$, vanishing in the limit of $r\rightarrow\infty$, as illustrated for $\alpha=4$ in the inset of Fig. 6(a), whereas ${\lim_{r\to+\infty}}\langle S^{+}_{i}S^{-}_{i+r}\rangle$ approaches a finite constant for $\alpha=2$, as seen from the inset of Fig. 6(b). Therefore, the ground state for $\alpha=2$ in the intermediate range of $J_{z}$ is different from that for $\alpha=4$. In this range of $J_{z}$, it is natural to assign the large-$\alpha$ phase to the $XY$ phase, since this phase contains the special case of $\alpha=\infty$ and $J_{z}=0$, for which the Hamiltonian (1) reduces to that of a standard $XY$ chain, as already shown in Fig. 5. Moreover, when $\alpha$ is small or even not too large, one can show that a $U(1)$ symmetry of the ground state is spontaneously broken at $J_{z}=0$ by using conformal field analysis and perturbative calculations gong17 . It turns out that one can expect the emergence of a continuous symmetry breaking (CSB) phase with gapless excitations for small $\alpha$. It has been shown that a Berezinskii-Kosterlitz-Thouless-like transition happens between the CSB phase and the $XY$ phase at $1/\alpha_{c}\simeq 0.34$, at which the central charge is numerically increased by $4\%$ from unity. However, the criterion of a $4\%$ addition to the central charge might be invalid for the determination of the critical points at general values of $J_{z}$. To address this issue, we calculate the fidelity susceptibility, which has been proposed for the identification of the critical points of continuous quantum phase transitions Gu2010 and even deconfined quantum critical points Sun19 , and has been successfully applied to various strongly correlated systems You15 ; You17 ; Ren18 ; Luo18 .
As a quantum information metric Gu2010 ; You , the fidelity measures the similarity between the two closest ground states when the parameter $\alpha$ in the Hamiltonian (1) is slightly tuned; it is defined as

$F=|\langle\psi_{0}(\alpha)|\psi_{0}(\alpha+\delta\alpha)\rangle|,$ (5)

where $\delta\alpha$ denotes a tiny deviation. Subsequently, we obtain the derivatives of the interactions, $\delta J_{i,j}=-\frac{J_{xy}}{|i-j|^{\alpha}}\ln|i-j|\delta\alpha$, where $J_{i,j}$ is the interaction strength between two spins at sites $i$ and $j$. The average derivative of the interactions per site is practically considered as an effective tuning parameter $\delta J=\frac{\sum_{i<j}\delta J_{i,j}}{L}$. Therefore, the fidelity susceptibility per site can be calculated numerically by

$\chi=\lim_{\delta J\rightarrow 0}\frac{-2\ln F}{L(\delta J)^{2}},$ (6)

whose peak is thus used to identify the critical value of $\alpha$ and to separate the CSB phase from the $XY$ phase for each $J_{z}$. In our numerical calculations, we take $\delta\alpha=0.005$. For the case of $L=100$ and $\alpha=3$, the effective tuning parameter is $\delta J\simeq 0.001$. The ground-state fidelity susceptibility per site $\chi$ is shown for $J_{z}=0,1$ as a function of the parameter $\alpha$ for different sizes in Figs. 7(a) and (b), respectively. For each $J_{z}$, one can see that the peak of $\chi$ grows with increasing system size, so that a divergent peak is expected in the $L\rightarrow\infty$ limit, signaling the appearance of a quantum phase transition. In order to locate the quantum critical point $\alpha_{c}$ in the thermodynamic limit, we use a finite-size scaling analysis and obtain $\alpha_{c}=2.83$ and $\nu=1$ at $J_{z}=0$, as seen in the inset of Fig. 7(a). This value of $\alpha_{c}$ is in good agreement with that determined by the central charge and the perturbation-theory calculation gong17 . Similarly, we can determine critical points at other values of $J_{z}$ on the boundary between the CSB and $XY$ phases. In particular, the critical value $\alpha_{c}=2.45$ for $J_{z}=1.0$ is obtained from the results shown in Fig. 7(b).

Figure 7: (Color online) Fidelity susceptibility per site is plotted as a function of the parameter $\alpha$ for various system sizes with (a) $J_{z}=0$ and (b) $J_{z}=1.0$. Inset: Scaling behavior of the fidelity susceptibility peak positions with respect to $1/L$.

Now we turn to the quantum phase transitions between the intermediate and AFM phases, which are characterized by the peaks of the entanglement entropies, as demonstrated for $\alpha=2,4$ in Fig. 8. One can see that the peaks for both cases in (a) and (c) of Fig. 8 move to lower values of $J_{z}$ when $L$ increases. Fitting the locations of the peaks with the formula (4), as shown in (b) and (d) of Fig. 8, one obtains $J_{z}^{c}=1.35$ and $2.21$, respectively. These fitted results agree very well with the inflexion points of the correlations shown in Fig. 6. In the same manner, we locate more critical values of $J_{z}$ and $\alpha$ on the boundary of the AFM phase with both the $XY$ and CSB phases.

Figure 8: (Color online) Entanglement entropy is plotted as a function of the interaction $J_{z}$ for different system sizes for (a) $\alpha=4$ and (c) $\alpha=2$. The peak positions of $S_{L/2}$ versus the system size $L$ for (b) $\alpha=4$ and (d) $\alpha=2$.
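For completeness, Eqs. (5) and (6) can be evaluated directly from two ground-state vectors; a minimal sketch (ours, not the authors' DMRG implementation, with the effective tuning parameter $\delta J$ computed by the averaging described above):

```python
import numpy as np

def fidelity_susceptibility(psi_a, psi_b, dJ, L):
    """chi per site, Eq. (6), from ground states at alpha and alpha + d_alpha."""
    F = abs(np.vdot(psi_a, psi_b))            # fidelity, Eq. (5)
    return -2.0 * np.log(F) / (L * dJ ** 2)

def effective_dJ(L, alpha, d_alpha, Jxy=-1.0):
    """dJ = (1/L) sum_{i<j} dJ_{i,j}, with dJ_{i,j} = -Jxy ln|i-j| d_alpha / |i-j|**alpha."""
    total = 0.0
    for i in range(L):
        for j in range(i + 1, L):
            r = j - i
            total += -Jxy / r ** alpha * np.log(r) * d_alpha
    return total / L
```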
Based on the above analysis of the properties of the correlation functions, the fidelity susceptibility, and the entanglement entropy, we establish the ground-state phase diagram of the Hamiltonian (1) with $\beta=\infty$, as shown in Fig. 9.

Figure 9: (Color online) Phase diagram of Hamiltonian (1) as a function of the interaction $J_{z}$ and $\alpha$ with $\beta\rightarrow+\infty$.

## V Discussion

In this paper, we studied quantum phase transitions of a quantum spin-$1/2$ chain with anisotropic power-law-decaying long-range interactions, characterized by the exponent $\alpha$ for the $xy$-term and $\beta$ for the $z$-term, by employing the density-matrix renormalization-group method. By numerically analyzing the effects of $\alpha$ and $\beta$ on the spin-spin correlation functions, the entanglement entropy, the central charge, and the fidelity susceptibility, we established two phase diagrams, for $\alpha=\infty$ and $\beta=\infty$, respectively.

Both cases involve a ferromagnetic phase and an antiferromagnetic phase, corresponding to sufficiently negative and positive $J_{z}$, respectively. However, in the intermediate regime of $J_{z}$, the former involves not only a usual $XY$ phase, effectively equivalent to that of a short-range repulsive density-density interaction, but also a Wigner-crystal phase, which essentially results from a sufficiently strong long-range $J_{z}$ term; for the latter, the gapped Wigner-crystal phase is replaced by a continuous $U(1)$ symmetry breaking phase. Moreover, it is interesting to notice that the WC and CSB phases actually reveal two different mechanisms, resulting from either the two-body processes of the strong long-range repulsive interaction or the one-body kinetic processes of the long-range hopping in the fermion representation.

From this study, we found that the entanglement entropy and the central charge can be used efficiently to extract the critical values of a quantum phase transition between two phases when one of them possesses a well-defined central charge but the other is gapped Luo2019 . However, when one encounters a quantum phase transition between two gapless phases, the fidelity susceptibility alternatively provides a more feasible way to locate the critical points, as applied here to the transition between the $XY$ and continuous $U(1)$ symmetry breaking phases.

We have so far focused on the ground-state phase diagrams only for $\alpha=\infty$ and $\beta=\infty$. There are actually a couple of important aspects beyond these two cases of the Hamiltonian (1), such as ground-state phase diagrams with $\alpha=\beta$ and $J_{xy}>0$, extensions to two-leg ladders and even two dimensions, etc. The emergence of any non-trivial gapless phase, the corresponding novel low-lying excitation spectra or exotic collective excitations with special symmetries, and the thermodynamic and dynamic properties would be very interesting questions in the presence of long-range interactions, but they are certainly open for further studies in the future.

###### Acknowledgements.

This work is supported by the National Program on Key Research Project (Grant No. 2016YFA0300501) and the National Natural Science Foundation of China under Grants No. 11104021, 11474211, 61674110 and 11974244. W.L.Y. is grateful for support from the start-up fund of Nanjing University of Aeronautics and Astronautics. X.W. also acknowledges additional support from a Shanghai talent program.

## References

* (1) S.
Sachdev, Quantum Phase Transitions (Cambridge University Press, Cambridge, England, 1999). * (2) Xiaoqun Wang, Mod. Phys. Lett. B 14, 327 (2000). * (3) Qiang Luo, Shijie Hu, Bin Xi, Jize Zhao, and Xiaoqun Wang, Phys. Rev. B 95, 165110 (2017). * (4) Qiang Luo, Jize Zhao and Xiaoqun Wang, Phys. Rev. B 100, 121111(R) (2019). * (5) T. C. Yi, W. L. You, N. Wu and A. M. Oleś, Phys. Rev. B 100, 024423 (2019). * (6) N. Wu and W. L. You, Phys. Rev. B 100, 085130 (2019). * (7) M. Saffman, T. G. Walker, and K. Mølmer, Rev. Mod. Phys. 82, 2313 (2010). * (8) T. Lahaye, C. Menotti, L. Santos, M. Lewenstein, and T. Pfau, Rep. Prog. Phys. 72, 126401 (2009). * (9) X. L. Deng, D. Porras, and J. I. Cirac, Phys. Rev. A 72, 063407 (2005). * (10) Bo Yan, Steven A. Moses, Bryce Gadway, Jacob P. Covey, Kaden R. A. Hazzard, Ana Maria Rey, Deborah S. Jin, and Jun Ye, Nature 501, 521 (2013). * (11) J. W. Britton, B. C. Sawyer, A. C. Keith, C.-C. Joseph Wang, J. K. Freericks, H. Uys, M. J. Biercuk, and J. J. Bollinger, Nature(London) 484, 489 (2012). * (12) R. Islam, C. Senkol, W. C. Campbell, S. Korenblit, J. Smith, A. Lee, E. E. Edwards, C. C. J. Wang, J. K. Freericks, and C. Monroe, Science 340, 583 (2013). * (13) P. Richerme, Z.-X. Gong, A. Lee, C. Senko, J. Smith, M. FossFeig, S. Michalakis, A. V. Gorshkov, and C. Monroe, Nature (London) 511, 198 (2014). * (14) P. Jurcevic, B. P. Lanyon, P. Hauke, C. Hempel, P. Zoller, R. Blatt, and C. F. Roos, Nature 511, 202 (2014). * (15) W. Dür, L. Hartmann, M. Hein, M. Lewenstein, and H. J. Briegel, Phys. Rev. Lett. 94, 097203 (2005). * (16) T. Koffel, M. Lewenstein, and L. Tagliacozzo, Phys. Rev. Lett. 109, 267203 (2012). * (17) G. Sun, Phys. Rev. A 96, 043621 (2017). * (18) Z. Zhu, G. Sun, W. L. You, D. N. Shi, Phys. Rev. A 98 023607 (2018). * (19) Z. X. Gong, M. F. Maghrebi, A. Hu, M. L. Wall, M. Foss-Feig, and A. V. Gorshkov, Phys. Rev. B 93, 041102(R) (2016). * (20) M. F. Maghrebi, Z. X. Gong, and Alexey V. Gorshkov, Phys. Rev. Lett. 119, 023001 (2017). * (21) Z. X. Gong, M. F. Maghrebi, A. Hu, M. Foss-Feig, P. Richerme, C. Monroe, and A. V. Gorshkov, Phys. Rev. B 93, 205115 (2016). * (22) Z. X. Gong, Michael Foss-Feig, Fernando G. S. L. Brandão, and Alexey V. Gorshkov, Phys. Rev. Lett. 119, 050501 (2017). * (23) Irénée Frérot, Piero Naldesi, and Tommaso Roscilde, Phys. Rev. B 95, 245111 (2017). * (24) Laurens Vanderstraeten, Maarten Van Damme, Hans Peter Büchler, and Frank Verstraete, Phys. Rev. Lett. 121, 090603(2018). * (25) S. R. White, Phys. Rev. B 48, 10345 (1993). * (26) I. Peschel, X. Q. Wang, M. Kaulke, and K. Hallberg, Density Matrix Renormalization, Lecture Notes in Physics Vol. 528 (Springer, Berlin, 1999). * (27) U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005). * (28) I. P. McCulloch, arXiv:0804.2509. * (29) U. Schollwöck, Ann. Phys. (NY) 326, 96 (2011). * (30) Sabyasachi Nag and Arti Garg, Phys. Rev. B 99, 224203 (2019). * (31) G. M. Crosswhite, A. C. Doherty, and G. Vidal, Phys. Rev. B 78, 035116 (2008). * (32) ITensor library, http://itensor.org/. * (33) P. Calabrese and J. Cardy, J. Stat. Mech., P06002 (2004). https://doi.org/10.1088/1742-5468/2004/06/P06002. * (34) Jize Zhao, Ingo Peschel and Xiaoqun Wang, Phys. Rev. B 73, 024417 (2006). * (35) Shijie Hu, Bruce Normand, Xiaoqun Wang, Lu Yu, Phys. Rev. B 84, 220402(R) (2011). * (36) B. Wang, M. Feng, Z. Q. Chen, Phys. Rev. A 81, 064301 (2010). * (37) M. E. Fisher and M. N. Barber, Phys. Rev. Lett. 28, 1516 (1972). * (38) M. N. Barber, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. 
L. Lebowitz (Academic, London, 1983), pp. 146-259. * (39) M Roncaglia, L Campos Venuti and C Degli Esposti Boschi, J. Stat. Mech. (2015) P04005. http://dx.doi.org/10.1088/1742-5468/2015/04/P04005. * (40) Wen-Long You and Yu-Li Dong, Phys. Rev. B 84, 174426 (2011). * (41) G. Vidal, J. I. Latorre, E. Rico, A. Kitaev, Rev. Lett. 90, 227902 (2003). * (42) H. J. Schulz, Phys. Rev. Lett. 71, 1864 (1993). * (43) Zhi-Hua Li, J. Phys.: Condens. Matter 31, 255601 (2019). * (44) W. L. You, G. H. Liu, P. Horsch, and A. M. Oleś, Phys. Rev. B 90, 094413 (2014). * (45) N. Laflorencie, E. S. Sørensen, M. S. Chang and I. Affleck, Phys. Rev. Lett. 96, 100603 (2006). * (46) F. Alet, I.P. McCulloch, S. Capponi, M. Mambrini, Phys. Rev. B 82, 094452 (2010). * (47) Pochung Chen, Zhi-long Xue, I. P. McCulloch, Ming-Chiang Chung, Miguel Cazalilla, S.-K. Yip, J. Stat. Mech., P10007 (2013). https://doi.org/10.1088/1742-5468/2013/10/P10007. * (48) Olalla. A Castro-Alvaredo and Benjamin Doyon., J. Stat. Mech., P02001 (2011). https://doi.org/10.1088/1742-5468/2011/02/P02001. * (49) Vincenzo Alba, Masudul Haque, and Andreas. M Läuchli., J. Stat. Mech., P08011 (2012) . https://doi.org/10.1088/1742-5468/2012/08/P08011. * (50) When $\beta=0$, $J_{z}$ term in Eq. (1) can be written as $J_{z}[(S^{z}_{total})^{2}-L/4]$ under the periodic boundary condition. For small values of $\beta$, whether effective cancelling of those diagonal elements remains is to be further explored. * (51) Shi-Jian Gu, Int. J. Mod. Phys. B 24, 4371 (2010) and more References therein. * (52) G. Sun, B. B. Wei, and S. P. Kou, Phys. Rev. B 100, 064427 (2019). * (53) W. L. You and L. He, J. Phys.: Condens. Matter 27, 205601 (2015). * (54) W. L. You, C. J. Zhang, W. Ni, M. Gong, and A. M. Oleś, Phys. Rev. B 95, 224404 (2017). * (55) J. Ren, Y. Wang, and W. L. You, Phys. Rev. A 97, 042318 (2018). * (56) Qiang Luo, Jize Zhao, and Xiaoqun Wang, Phys. Rev. E 98, 022106 (2018). * (57) W. L. You, Y. W. Li, and S. J. Gu, Phys. Rev. E 76, 022101 (2007).
2024-09-04T02:54:58.599700
2020-03-10T03:43:55
2003.04520
{ "authors": "Yongtao Li", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26129", "submitter": "Yongtao Li", "url": "https://arxiv.org/abs/2003.04520" }
arxiv-papers
# Extensions of some matrix inequalities related to trace and partial traces††thanks: This paper was first announced in March 2020, and was later published in Linear Algebra and its Applications 639 (2022) 205–224. See https://doi.org/10.1016/j.laa.2022.01.006. E-mail addresses: <EMAIL_ADDRESS>(Yǒngtāo Lǐ).

Yongtao Li∗ School of Mathematics, Hunan University, Changsha, Hunan, 410082, P.R. China

###### Abstract

We first present a determinant inequality related to partial traces for positive semidefinite block matrices. Our result extends a result of Lin [Czech. Math. J. 66 (2016)] and improves a result of Kuai [Linear Multilinear Algebra 66 (2018)]. Moreover, we provide a unified treatment of a result of Ando [ILAS Conference (2014)] and a recent result of Li, Liu and Huang [Operators and Matrices 15 (2021)]. Furthermore, we also extend some determinant inequalities involving partial traces to a larger class of matrices whose numerical ranges are contained in a sector. In addition, some extensions on trace inequalities for positive semidefinite $2\times 2$ block matrices are also included.

Dedicated to Prof. Weijun Liu on his 60th birthday

Key words: Partial traces; Trace inequalities; Fiedler and Markham; Numerical range in a sector;

2010 Mathematics Subject Classification. 15A45, 15A60, 47B65.

## 1 Introduction

Throughout the paper, we use the following standard notation. The set of $n\times n$ complex matrices is denoted by $\mathbb{M}_{n}(\mathbb{C})$, or simply by $\mathbb{M}_{n}$, and the identity matrix of order $n$ by $I_{n}$, or $I$ for short. We write $\lambda_{i}(A)$ and $\sigma_{i}(A)$ for the $i$-th largest eigenvalue and singular value of $A$, respectively. By convention, if $A\in\mathbb{M}_{n}$ is positive semidefinite, we write $A\geq 0$. For Hermitian matrices $A$ and $B$ of the same size, $A\geq B$ means that $A-B$ is positive semidefinite, i.e., $A-B\geq 0$. If $A=[a_{i,j}]$ is of order $m\times n$ and $B$ is of order $s\times t$, the tensor product of $A$ with $B$, denoted by $A\otimes B$, is an $ms\times nt$ matrix that is partitioned into $m\times n$ block matrices with the $(i,j)$-block being the $s\times t$ matrix $a_{i,j}B$.

In this paper, we are interested in complex block matrices. Let $\mathbb{M}_{n}(\mathbb{M}_{k})$ be the set of complex matrices partitioned into $n\times n$ blocks with each block being $k\times k$. The element of $\mathbb{M}_{n}(\mathbb{M}_{k})$ is usually written as ${H}=[H_{i,j}]_{i,j=1}^{n}$, where $H_{i,j}\in\mathbb{M}_{k}$ for all $i,j$. Now we introduce the definition of partial traces, which comes from Quantum Information Theory [32, p. 12]. For $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$, the first partial trace (map) $H\mapsto\mathrm{tr}_{1}H\in\mathbb{M}_{k}$ is defined as the adjoint map of the embedding map $X\mapsto I_{n}\otimes X\in\mathbb{M}_{n}\otimes\mathbb{M}_{k}$. Correspondingly, the second partial trace (map) $H\mapsto\mathrm{tr}_{2}H\in\mathbb{M}_{n}$ is defined as the adjoint map of the embedding map $Y\mapsto Y\otimes I_{k}\in\mathbb{M}_{n}\otimes\mathbb{M}_{k}$. Therefore, we have

$\langle I_{n}\otimes X,H\rangle=\langle X,\mathrm{tr}_{1}H\rangle,\quad\forall X\in\mathbb{M}_{k},$ (1)

and

$\langle Y\otimes I_{k},H\rangle=\langle Y,\mathrm{tr}_{2}H\rangle,\quad\forall Y\in\mathbb{M}_{n},$

where $\langle\cdot,\cdot\rangle$ stands for the Hilbert-Schmidt inner product, i.e., $\langle A,B\rangle={\rm tr}(A^{*}B)$. The above definition of partial traces is implicit.
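Although implicit, the adjoint relations are easy to verify numerically. Anticipating the explicit block formulas recalled in the next paragraph, the following numpy sketch (our illustration; the helper names are ours) computes both partial traces of a random positive semidefinite $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ and checks the defining identity (1):

```python
import numpy as np

def tr1(H, n, k):
    """First partial trace: the sum of the n diagonal k-by-k blocks."""
    return sum(H[i*k:(i+1)*k, i*k:(i+1)*k] for i in range(n))

def tr2(H, n, k):
    """Second partial trace: the n-by-n matrix of block traces."""
    return np.array([[np.trace(H[i*k:(i+1)*k, j*k:(j+1)*k])
                      for j in range(n)] for i in range(n)])

n, k = 3, 2
G = np.random.randn(n*k, n*k) + 1j * np.random.randn(n*k, n*k)
H = G @ G.conj().T                    # a random positive semidefinite H
X = np.random.randn(k, k) + 1j * np.random.randn(k, k)

lhs = np.trace(np.kron(np.eye(n), X).conj().T @ H)  # <I_n (x) X, H>
rhs = np.trace(X.conj().T @ tr1(H, n, k))           # <X, tr_1 H>
assert np.isclose(lhs, rhs)
```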
Assume that $H=[H_{i,j}]_{i,j=1}^{n}$ is an $n\times n$ block matrix with $H_{i,j}\in\mathbb{M}_{k}$; the visualized version of the partial traces is equivalently given in [4, pp. 120–123] as

$\mathrm{tr}_{1}{H}=\sum\limits_{i=1}^{n}H_{i,i},$ (2)

and

$\mathrm{tr}_{2}{H}=\bigl{[}\mathrm{tr}H_{i,j}\bigr{]}_{i,j=1}^{n}.$

It is easy to see that both ${\rm tr}_{1}H$ and ${\rm tr}_{2}H$ are positive semidefinite whenever ${H}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ is positive semidefinite; see, e.g., [36, p. 237] or [37] for more details. The first and second partial traces are a source of matrix inequalities and have been extensively studied in recent years; see [2, 7, 11, 22, 29] for related topics.

Let $A=[A_{i,j}]_{i,j=1}^{n}$ be an $n\times n$ block matrix with each block being a $k\times k$ matrix. The usual transpose of $A$ is defined as $A^{T}=[A_{j,i}^{T}]_{i,j=1}^{n}$. We define the partial transpose of $A$ by $A^{\tau}=[A_{j,i}]_{i,j=1}^{n}$; that is, the partial transpose of $A$ is the matrix obtained by transposing the blocks of $A$ independently. More precisely,

$A^{T}=\begin{bmatrix}A_{1,1}^{T}&\cdots&A_{n,1}^{T}\\\ \vdots&\ddots&\vdots\\\ A_{1,n}^{T}&\cdots&A_{n,n}^{T}\end{bmatrix}~{}~{}\text{and}~{}~{}A^{\tau}=\begin{bmatrix}A_{1,1}&\cdots&A_{n,1}\\\ \vdots&\ddots&\vdots\\\ A_{1,n}&\cdots&A_{n,n}\end{bmatrix}.$

Although $A$ and $A^{\tau}$ have the same trace, they may have different eigenvalues, so they are not necessarily similar. Moreover, it is known that $A\geq 0$ does not necessarily imply $A^{\tau}\geq 0$. For example, take

$A=\begin{bmatrix}A_{1,1}&A_{1,2}\\\ A_{2,1}&A_{2,2}\end{bmatrix}=\left[\begin{array}[]{cc;{2pt/2pt}cc}1&0&0&1\\\ 0&0&0&0\\\ \hdashline[2pt/2pt]0&0&0&0\\\ 1&0&0&1\end{array}\right].$ (3)

We can see from the definition that

$A^{\tau}=\begin{bmatrix}A_{1,1}&A_{2,1}\\\ A_{1,2}&A_{2,2}\end{bmatrix}=\left[\begin{array}[]{cc;{2pt/2pt}cc}1&0&0&0\\\ 0&0&1&0\\\ \hdashline[2pt/2pt]0&1&0&0\\\ 0&0&0&1\end{array}\right].$

One can easily observe that $A$ is positive semidefinite, but $A^{\tau}$ is not positive semidefinite, since it contains the principal submatrix $\left[\begin{smallmatrix}0&1\\\ 1&0\end{smallmatrix}\right]\ngeq 0$. Moreover, the eigenvalues of $A$ are $2,0,0,0$, and the eigenvalues of $A^{\tau}$ are $1,1,1,-1$, so $A$ and $A^{\tau}$ are not similar. In addition, replacing $A_{1,1}$ in the above matrix by $\left[\begin{smallmatrix}1&0\\\ 0&1\end{smallmatrix}\right]$ also yields such an example. Following this discussion, we say that $A$ is positive partial transpose (or PPT for short) if both $A$ and $A^{\tau}$ are positive semidefinite. We recommend [10, 19, 26, 27] for recent progress.

The paper is organized as follows. In Section 2, we shall review some preliminaries for a class of matrices whose numerical ranges are contained in a sector (known as the sector matrices). This is a natural extension of the class of positive definite matrices. In Section 3, we shall study recent results involving the Fiedler–Markham inequality. We provide an extension of a result of Lin [29], and our result is also an improvement of a result of Kuai [17]; see Theorem 3.5. Moreover, we shall extend a result of Choi [6] to the so-called sector matrices; see Theorem 3.7. In Section 4, we give a unified treatment of a result of Ando [2] (or see [30]) as well as a recent result of Li, Liu and Huang [22]. Our new treatment is more concise than the original proofs.
Moreover, we also present some Ando type determinant inequalities for partial traces, and then we extend these inequalities to sector matrices; see Theorems 4.7 and 4.8. In Section 5, we shall prove some inequalities for positive semidefinite $2\times 2$ block matrices; see Theorems 5.2, 5.3 and 5.4. Our results slightly extend the recent elegant work on trace inequalities proved by Kittaneh and Lin [18] and by Lin [26]. ## 2 Preliminaries Recall that $\sigma_{i}(A)$ denotes the $i$-th largest singular value of $A$. When $A$ is Hermitian, we know that all eigenvalues of $A$ are real numbers, and we write $\lambda_{i}(A)$ for the $i$-th largest eigenvalue. The numerical range of $A\in\mathbb{M}_{n}$ is defined by $W(A)=\\{x^{*}Ax:x\in\mathbb{C}^{n},x^{*}x=1\\}.$ For $\alpha\in[0,{\pi}/{2})$, let $S_{\alpha}$ be the sector on the complex plane defined as $S_{\alpha}=\\{z\in\mathbb{C}:\Re z>0,|\Im z|\leq(\Re z)\tan\alpha\\}=\\{re^{i\theta}:r>0,|\theta|\leq\alpha\\}.$ For $A\in\mathbb{M}_{n}$, the Cartesian (Toeplitz) decomposition is given as $A=\Re A+i\cdot\Im A$, where $\Re A=\frac{1}{2}(A+A^{*})$ and $\Im A=\frac{1}{2i}(A-A^{*})$. We know from the definition that if $W(A)\subseteq S_{0}$, then $A$ is positive definite. Moreover, it is easy to verify that if $W(A)\subseteq S_{\alpha}$ for some $\alpha\in[0,{\pi}/{2})$, then $\Re(A)$ is positive definite. Matrices whose numerical ranges are contained in a sector are called sector matrices. Clearly, the concept of sector matrices is an extension of positive definite matrices. Over the past few years, various studies on sector matrices have appeared in the literature; see, e.g., [8, 16, 17, 28, 34, 38]. Before stating our results, we summarise the following lemmas. ###### Lemma 2.1 [28] Let $0\leq\alpha<{\pi}/{2}$ and $A\in\mathbb{M}_{n}$ with $W(A)\subseteq S_{\alpha}$. Then $|\det A|\leq(\sec\alpha)^{n}\det(\Re A).$ ###### Lemma 2.2 [14, p. 510] Let $X$ be an $n$-square complex matrix. Then $\lambda_{i}(\Re X)\leq\sigma_{i}(X),\quad i=1,2,\ldots,n.$ Moreover, if $\Re X$ is positive definite, then $\det\Re X+|\det\Im X|\leq|\det X|.$ The following lemma is called the Fischer inequality, which gives an upper bound for the determinant of a positive semidefinite block matrix in terms of the determinants of its principal diagonal blocks. In particular, when all blocks have order $1\times 1$, this inequality is also known as the Hadamard inequality; see, e.g., [14, p. 506] and [36, p. 217]. ###### Lemma 2.3 Let $H=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive semidefinite. Then $\det H\leq\prod_{i=1}^{n}\det H_{i,i}.$ ###### Lemma 2.4 If $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ satisfies $W(H)\\!\subseteq S_{\alpha}$, then $W({\rm tr}_{1}H)\\!\subseteq S_{\alpha}$ and $W({\rm tr}_{2}H)\\!\subseteq S_{\alpha}$, i.e., if $H$ is a sector matrix with angle $\alpha\in[0,\pi/2)$, then so are ${\rm tr}_{1}H$ and ${\rm tr}_{2}H$. We remark that this lemma was partially proved in [17, Proposition 3.2] for the case ${\rm tr}_{2}H$. Motivated by [17], we here include a detailed proof for the remaining case ${\rm tr}_{1}H$. Proof.
Consider the Cartesian decomposition $H=\Re H+i\cdot\Im H$; then ${\rm tr}_{1}H={\rm tr}_{1}(\Re H)+i\cdot{\rm tr}_{1}(\Im H).$ For every $x\in\mathbb{C}^{k}$ with $x^{*}x=1$, as $\Re H$ is positive definite, we get $\Re\bigl{(}x^{*}({\rm tr}_{1}H)x\bigr{)}=x^{*}\bigl{(}\Re({\rm tr}_{1}H)\bigr{)}x=x^{*}\bigl{(}{\rm tr}_{1}(\Re H)\bigr{)}x>0.$ On the other hand, by a direct computation, $\frac{\left|\Im\bigl{(}x^{*}({\rm tr}_{1}H)x\bigr{)}\right|}{\Re\bigl{(}x^{*}({\rm tr}_{1}H)x\bigr{)}}=\frac{\left|x^{*}({\rm tr}_{1}(\Im H))x\right|}{x^{*}({\rm tr}_{1}(\Re H))x}=\frac{\left|\langle xx^{*},{\rm tr}_{1}(\Im H)\rangle\right|}{\langle xx^{*},{\rm tr}_{1}(\Re H)\rangle}.$ Note that $I_{n}\otimes(xx^{*})$ is positive semidefinite. We consider the spectral decomposition $I_{n}\otimes(xx^{*})=\sum_{i=1}^{nk}\lambda_{i}u_{i}u_{i}^{*},$ where $\lambda_{i}\geq 0$ and $u_{i}$ are unit vectors in $\mathbb{C}^{nk}$. By the definition in (1), it follows that $\displaystyle\frac{\left|\langle xx^{*},{\rm tr}_{1}(\Im H)\rangle\right|}{\langle xx^{*},{\rm tr}_{1}(\Re H)\rangle}$ $\displaystyle=\frac{\left|\langle I_{n}\otimes(xx^{*}),\Im H\rangle\right|}{\langle I_{n}\otimes(xx^{*}),\Re H\rangle}=\frac{\left|\sum_{i=1}^{nk}\lambda_{i}\langle u_{i}u_{i}^{*},\Im H\rangle\right|}{\sum_{i=1}^{nk}\lambda_{i}\langle u_{i}u_{i}^{*},\Re H\rangle}$ $\displaystyle\leq\frac{\sum_{i=1}^{nk}\lambda_{i}\left|u_{i}^{*}(\Im H)u_{i}\right|}{\sum_{i=1}^{nk}\lambda_{i}u_{i}^{*}(\Re H)u_{i}}\leq\max_{1\leq i\leq nk}\frac{\left|u_{i}^{*}(\Im H)u_{i}\right|}{u_{i}^{*}(\Re H)u_{i}}=\max_{1\leq i\leq nk}\frac{\left|\Im(u_{i}^{*}Hu_{i})\right|}{\Re(u_{i}^{*}Hu_{i})}.$ Since each $u_{i}^{*}Hu_{i}$ lies in $W(H)\subseteq S_{\alpha}$, the last maximum is at most $\tan\alpha$, and hence $x^{*}({\rm tr}_{1}H)x\in S_{\alpha}$. This completes the proof. Remark. Based on the second equivalent definition (2), one could also give other ways to prove Lemma 2.4. We leave the details to the interested reader. ## 3 Extensions on Fiedler–Markham’s inequality Let ${H}=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive semidefinite. Recall that both ${\rm tr}_{1}H$ and ${\rm tr}_{2}H$ are positive semidefinite; see, e.g., [37]. In 1994, Fiedler and Markham [9, Corollary 1] proved a celebrated determinant inequality involving the second partial trace. ###### Theorem 3.1 [9] Let ${H}=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive semidefinite. Then $\left(\frac{\det\bigl{(}{\rm tr}_{2}H\bigr{)}}{k}\right)^{k}\geq\det{H}.$ In 2016, Lin [29] revisited this inequality using some terminology from quantum information theory, and gave an alternative proof of Theorem 3.1 by applying an important identity connecting ${\rm tr}_{2}H$ and $H$. Moreover, a natural question is whether an analogous result corresponding to the Fiedler–Markham inequality holds for ${\rm tr}_{1}H$. Lin [29] answered this question and proved the following counterpart. ###### Theorem 3.2 [29] Let ${H}=[H_{ij}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive semidefinite. Then $\left(\frac{\det({\rm tr}_{1}H)}{n}\right)^{n}\geq\det H.$ It is clear that in the proofs of both Theorem 3.1 and Theorem 3.2, Fiedler and Markham, and Lin used the superadditivity of the determinant functional, which states that $\det\left(\sum_{i=1}^{n}H_{i,i}\right)\geq\sum_{i=1}^{n}\det H_{i,i}\geq n\left(\prod_{i=1}^{n}\det H_{i,i}\right)^{1/n}.$ This inequality can be improved by the Fan-Ky determinant inequality (see [14, p.
488]), i.e., the log-concavity of the determinant over the cone of positive semidefinite matrices: $\det\left(\frac{1}{n}\sum_{i=1}^{n}H_{i,i}\right)\geq\left(\prod_{i=1}^{n}\det H_{i,i}\right)^{1/n}.$ (4) In addition, we mention here that a careful examination of the new proof of Theorem 3.1 in [29] also reveals this improvement. This improvement was also pointed out in [6, 31]. Next, we state strengthened versions of Theorem 3.1 and Theorem 3.2. ###### Theorem 3.3 Let ${H}=[H_{ij}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive semidefinite. Then $\left(\frac{\det\bigl{(}{\rm tr}_{2}H\bigr{)}}{k^{n}}\right)^{k}\geq\det{H},$ and $\left(\frac{\det({\rm tr}_{1}H)}{n^{k}}\right)^{n}\geq\det H.$ We observe in Theorem 3.3 that the second inequality seems easier to prove than the first one because it is more convenient to build inequalities on ${\rm tr}_{1}H=\sum_{i=1}^{n}H_{i,i}$. In [20], the authors showed that the two inequalities can be deduced from each other. In 2018, Kuai [17] (or see [34]) further extended Theorem 3.3 to sector matrices and showed that if $0\leq\alpha<{\pi}/{2}$ and $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ satisfies $W(H)\subseteq S_{\alpha}$, then $\left|\frac{\det({\rm tr}_{2}H)}{k^{n}}\right|^{k}\geq(\cos\alpha)^{nk}|\det H|,$ (5) and $\left|\frac{\det({\rm tr}_{1}H)}{n}\right|^{n}\geq(\cos\alpha)^{(3n-2)k}|\det H|.$ (6) Our first goal in this section is to improve Kuai’s result (6). The key step in our improvement is the following identity connecting ${\rm tr}_{1}(H)$ and $H$, which has found applications in quantum information theory, for instance in proving the subadditivity of $q$-entropies. This identity can be found in [15, eq.(26)] or [5, Lemma 2]. ###### Lemma 3.4 Let $X$ and $Y$ be generalized Pauli matrices on $\mathbb{C}^{n}$; these operators act as $Xe_{j}=e_{j+1}$ and $Ye_{j}=e^{2\pi j\sqrt{-1}/n}e_{j}$, where $e_{j}$ is the $j$-th column of the identity matrix $I_{n}$ and $e_{n+1}=e_{1}$. Then $\frac{1}{n}\sum_{l,j=1}^{n}(X^{l}Y^{j}\otimes I_{k})H(X^{l}Y^{j}\otimes I_{k})^{*}=I_{n}\otimes({\rm tr}_{1}H).$ Remark. The identity in this lemma can yield an alternative proof of Lemma 2.4. Moreover, the analogous identity for ${\rm tr}_{2}H$ can be seen in [15] or [33, eq.(14)]. Now, we are ready to present an improvement on inequality (6). ###### Theorem 3.5 Let $0\leq\alpha<{\pi}/{2}$ and $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be such that $W(H)\subseteq S_{\alpha}$. Then $\left|\frac{\det({\rm tr}_{1}H)}{n^{k}}\right|^{n}\geq(\cos\alpha)^{nk}|\det H|.$ Proof. Note that both $X$ and $Y$ in Lemma 3.4 are unitary, and so are $X^{l}Y^{j}\otimes I_{k}$ for all $l,j$. Moreover, we have $\Re(UHU^{*})=U(\Re H)U^{*}$ for every unitary $U$.
Thus, $\displaystyle|\det H|\\!\\!\\!\\!\\!\\!$ $\displaystyle=$ $\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\prod_{l,j=1}^{n}\left|\det(X^{l}Y^{j}\otimes I_{k})H(X^{l}Y^{j}\otimes I_{k})^{*}\right|^{1/n^{2}}$ (7) $\displaystyle\overset{\text{Lemma 2.1}}{\leq}$ $\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!(\sec\alpha)^{nk}\prod_{l,j=1}^{n}\left(\det(X^{l}Y^{j}\otimes I_{k})(\Re H)(X^{l}Y^{j}\otimes I_{k})^{*}\right)^{1/n^{2}}$ $\displaystyle\overset{\text{Fan-Ky ineq. (4)}}{\leq}$ $\displaystyle\\!\\!\\!\\!\\!\\!\\!(\sec\alpha)^{nk}\det\Bigg{(}\frac{1}{n^{2}}\sum_{l,j=1}^{n}(X^{l}Y^{j}\otimes I_{k})(\Re H)(X^{l}Y^{j}\otimes I_{k})^{*}\Bigg{)}$ $\displaystyle\overset{\text{Lemma 3.4}}{=}$ $\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!(\sec\alpha)^{nk}\det\left(\frac{1}{n}\Bigl{(}I_{n}\otimes{\rm tr}_{1}(\Re H)\Bigr{)}\right)$ $\displaystyle=$ $\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\frac{(\sec\alpha)^{nk}}{n^{nk}}\det\Bigl{(}I_{n}\otimes{\rm tr}_{1}(\Re H)\Bigr{)}.$ Clearly, we have ${\rm tr}_{1}(\Re H)=\Re({\rm tr}_{1}H)$. For $X\in\mathbb{M}_{n}$ and $Y\in\mathbb{M}_{k}$, it is well-known that $\det(X\otimes Y)=(\det X)^{k}(\det Y)^{n}$; see, e.g., [35, Chapter 2]. It follows that $\det\Bigl{(}I_{n}\otimes{\rm tr}_{1}(\Re H)\Bigr{)}=(\det I_{n})^{k}\bigl{(}\det({\rm tr}_{1}\Re H)\bigr{)}^{n}=\bigl{(}\det\Re({\rm tr}_{1}H)\bigr{)}^{n}.$ By Lemma 2.4, we have $W({\rm tr}_{1}H)\subseteq S_{\alpha}$, which implies that $\Re({\rm tr}_{1}H)$ is positive definite. Therefore, by Lemma 2.2, we get $\bigl{(}\det\Re({\rm tr}_{1}H)\bigr{)}^{n}\leq\bigl{(}|\det({\rm tr}_{1}H)|-|\det\Im({\rm tr}_{1}H)|\bigr{)}^{n}\leq|\det({\rm tr}_{1}H)|^{n},$ which together with (7) yields the desired result. Remark. By applying the techniques from [20], we know that Kuai’s inequality (5) can also be deduced from the inequality in Theorem 3.5 and vice versa. In the sequel, we shall focus our attention on some recent results which are similar to the Fiedler–Markham inequality. Let ${H}=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be a block matrix with $H_{i,j}=[h_{l,m}^{i,j}]_{l,m=1}^{k}$. We define an $n\times n$ matrix $G_{l,m}$ as follows: $G_{l,m}:=\bigl{[}h_{l,m}^{i,j}\bigr{]}_{i,j=1}^{n}\in\mathbb{M}_{n}.$ A direct computation yields ${\rm tr}_{1}H=\sum_{i=1}^{n}H_{i,i}=\sum_{i=1}^{n}\bigl{[}h_{l,m}^{i,i}\bigr{]}_{l,m=1}^{k}=\left[\begin{matrix}\sum\limits_{i=1}^{n}h_{l,m}^{i,i}\end{matrix}\right]_{l,m=1}^{k}=\bigl{[}{\rm tr}\,G_{l,m}\bigr{]}_{l,m=1}^{k}.$ For notational convenience, we denote $\widetilde{H}=\bigl{[}G_{l,m}\bigr{]}_{l,m=1}^{k}\in\mathbb{M}_{k}(\mathbb{M}_{n}).$ We can see that $\widetilde{H}$ is obtained from $H$ by rearranging the entries in an appropriate order. The above observation yields ${\rm tr}_{1}H={\rm tr}_{2}\widetilde{H}$. Moreover, it is not hard to check that $\widetilde{H}$ and $H$ are unitarily similar; see, e.g., [6, Theorem 7] or [20, Theorem 4]. Motivated by these relations, Choi [6] recently introduced the definition of partial determinants corresponding to partial traces. For $H=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$, the partial determinants are defined as $\mathrm{det}_{1}H:=\bigl{[}\det G_{l,m}\bigr{]}_{l,m=1}^{k},$ and $\mathrm{det}_{2}H:=\bigl{[}\det H_{i,j}\bigr{]}_{i,j=1}^{n}.$ To some extent, the partial determinants share several properties analogous to those of the partial traces, as the short numerical sketch below also illustrates.
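The following sketch (Python with NumPy, restricted to a random real symmetric positive semidefinite $H$ for simplicity; the reshaping trick used to build $\widetilde{H}$ is ours) checks that $H$ and $\widetilde{H}$ have the same spectrum, and that $\mathrm{det}_{1}H$ and $\mathrm{det}_{2}H$ come out positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 3, 2
M = rng.standard_normal((n * k, n * k))
H = M.T @ M                        # positive semidefinite H in M_n(M_k)

T = H.reshape(n, k, n, k)          # T[i, l, j, m] = h^{i,j}_{l,m}
G = T.transpose(1, 3, 0, 2)        # G[l, m] is the n x n matrix G_{l,m}

# Rearranged matrix H~ = [G_{l,m}]_{l,m} in M_k(M_n).
H_tilde = G.transpose(0, 2, 1, 3).reshape(k * n, k * n)

# H~ and H are unitarily similar, hence have the same spectrum.
assert np.allclose(np.linalg.eigvalsh(H_tilde), np.linalg.eigvalsh(H))

# Partial determinants det_1 H = [det G_{l,m}] and det_2 H = [det H_{i,j}].
det1H = np.array([[np.linalg.det(G[l, m]) for m in range(k)] for l in range(k)])
det2H = np.array([[np.linalg.det(T[i, :, j, :]) for j in range(n)] for i in range(n)])

# Both partial determinants are again positive semidefinite (up to round-off).
assert np.linalg.eigvalsh(det1H).min() >= -1e-8
assert np.linalg.eigvalsh(det2H).min() >= -1e-8
```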
For instance, it is easy to see that if ${H}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ is positive semidefinite, then both $\mathrm{det}_{1}H$ and $\mathrm{det}_{2}H$ are positive semidefinite; see, e.g., [36, p. 221]. Moreover, it was proved in [6] that $\mathrm{det}({\rm tr}_{1}H)\geq{\rm tr}(\mathrm{det}_{2}H),$ and $\mathrm{det}({\rm tr}_{2}H)\geq{\rm tr}(\mathrm{det}_{1}H).$ Additionally, Choi [6] proved two analogues of Theorem 3.1 and Theorem 3.2 for partial determinants. ###### Theorem 3.6 [6] Let ${H}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive semidefinite. Then $\left(\frac{{\rm tr}(\mathrm{det}_{1}H)}{k}\right)^{k}\geq\det H,$ and $\left(\frac{{\rm tr}(\mathrm{det}_{2}H)}{n}\right)^{n}\geq\det H.$ Next, we will extend Theorem 3.6 to sector matrices. We write $|A|$ for the nonnegative matrix whose entries are the absolute values of the entries of $A$. This notation is only used in the following theorem. ###### Theorem 3.7 Let $0\leq\alpha<{\pi}/{2}$ and $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be such that $W(H)\subseteq S_{\alpha}$. Then $\left(\frac{{\rm tr}|\mathrm{det}_{1}H|}{k}\right)^{k}\geq(\cos\alpha)^{nk}|\det H|,$ and $\left(\frac{{\rm tr}|\mathrm{det}_{2}H|}{n}\right)^{n}\geq(\cos\alpha)^{nk}|\det H|.$ Proof. First of all, we shall prove the second inequality. We observe that $\Re H_{1,1}$, $\ldots,\Re H_{n,n}$ are the diagonal blocks of $\Re H$. By Lemma 2.1 and Lemma 2.3, we obtain $\displaystyle|\det H|$ $\displaystyle\leq(\sec\alpha)^{nk}\det(\Re H)\leq(\sec\alpha)^{nk}\prod_{i=1}^{n}\det(\Re H_{i,i})$ $\displaystyle\leq(\sec\alpha)^{nk}\prod_{i=1}^{n}|\det H_{i,i}|\leq(\sec\alpha)^{nk}\left(\frac{1}{n}\sum_{i=1}^{n}|\det H_{i,i}|\right)^{n},$ where the third inequality follows from Lemma 2.2 and the last one follows from the arithmetic mean-geometric mean inequality. We now prove the first desired inequality by employing the relations between $\mathrm{det}_{1}$ and $\mathrm{det}_{2}$. Recall that $\widetilde{H}=[G_{l,m}]_{l,m=1}^{k}\in\mathbb{M}_{k}(\mathbb{M}_{n})$ and $\mathrm{det}_{1}H=\mathrm{det}_{2}\widetilde{H}$. Since $\widetilde{H}$ and $H$ are unitarily similar, we get $\det\widetilde{H}=\det H$ and $W(\widetilde{H})\subseteq S_{\alpha}$. By applying the second inequality to $\widetilde{H}$, we get $\left(\frac{{\rm tr}|\mathrm{det}_{1}H|}{k}\right)^{k}=\left(\frac{{\rm tr}|\mathrm{det}_{2}\widetilde{H}|}{k}\right)^{k}\geq(\cos\alpha)^{kn}|\det\widetilde{H}|=(\cos\alpha)^{kn}|\det H|.$ This completes the proof. ## 4 Extensions on Ando’s inequality To make our statements more transparent and compatible with previous work in the literature, in this section we assume that $A$ is an $m\times m$ block matrix with each block being an $n\times n$ matrix. Let ${A}=[A_{i,j}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. We know that both ${\rm tr}_{1}A$ and ${\rm tr}_{2}A$ are positive semidefinite; see, e.g., [36, p. 237] and [37, Theorem 2.1]. To some degree, these two partial traces are closely related and mutually affect each other. We write $\lVert A\rVert_{q}=\left(\sum_{i}\sigma_{i}(A)^{q}\right)^{1/q}$ for the Schatten $q$-norm of $A$.
In 2007, Audenaert [1] proved the following norm inequality, ${\rm tr}\,A+\lVert A\rVert_{q}\geq\lVert{\rm tr}_{1}A\rVert_{q}+\lVert{\rm tr}_{2}A\rVert_{q}.$ (8) A straightforward argument exploiting Audenaert’s result leads to a proof of the subadditivity of $q$-entropies (Tsallis entropies) for finite-dimensional bipartite quantum states; see [1, 5] and references therein. In 2014, Ando [2] (or see [30, Proposition 2.2] for an alternative proof) established the following remarkable inequality in the sense of the Löwner ordering. ###### Theorem 4.1 [2, 30] Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then $({\rm tr}A)I_{mn}+A\geq I_{m}\otimes(\mathrm{tr}_{1}A)+(\mathrm{tr}_{2}A)\otimes I_{n}.$ Ando’s result reveals the close interplay between the first and second partial traces. Equivalently, this inequality can be rewritten as $({\rm tr}A)I_{mn}-(\mathrm{tr}_{2}A)\otimes I_{n}\geq I_{m}\otimes(\mathrm{tr}_{1}A)-A.$ (9) We observe that the positivity of $A$, together with the identity ${\rm tr}\,A=\sum_{i=1}^{m}{\rm tr}A_{i,i}={\rm tr}({\rm tr}_{2}A)$, leads to $({\rm tr}A)I_{m}\geq\lambda_{\max}({\rm tr}_{2}A)I_{m}\geq{\rm tr}_{2}A$, which guarantees that the left hand side $({\rm tr}A)I_{mn}-(\mathrm{tr}_{2}A)\otimes I_{n}$ in (9) is positive semidefinite. However, the two matrices on the right hand side of (9) might be incomparable; the matrix $A$ in (3) gives such an example. Motivated by this observation, Li, Liu and Huang [22] presented a further generalization. ###### Theorem 4.2 [22] Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then $({\rm tr}A)I_{mn}-({\rm tr}_{2}A)\otimes I_{n}\geq A-I_{m}\otimes({\rm tr}_{1}A),$ and $({\rm tr}A)I_{mn}+({\rm tr}_{2}A)\otimes I_{n}\geq A+I_{m}\otimes({\rm tr}_{1}A).$ A map (not necessarily linear) $\Phi:\mathbb{M}_{n}\to\mathbb{M}_{k}$ is called positive if it maps positive semidefinite matrices to positive semidefinite matrices. A map $\Phi:\mathbb{M}_{n}\to\mathbb{M}_{k}$ is said to be $m$-positive if for every $m\times m$ block matrix $[A_{i,j}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$, $[A_{i,j}]_{i,j=1}^{m}\geq 0\Rightarrow[\Phi(A_{i,j})]_{i,j=1}^{m}\geq 0.$ Clearly, being $1$-positive is equivalent to being positive. The map $\Phi$ is said to be completely positive if it is $m$-positive for every integer $m\geq 1$. It is well-known that both the trace map and the determinant map are completely positive; see, e.g., [36, p. 221, p. 237] or [37]. On the other hand, a map $\Phi$ is said to be $m$-copositive if for every $[A_{i,j}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$, $[A_{i,j}]_{i,j=1}^{m}\geq 0\Rightarrow[\Phi(A_{j,i})]_{i,j=1}^{m}\geq 0,$ and $\Phi$ is said to be completely copositive if it is $m$-copositive for every integer $m\geq 1$. Furthermore, a map $\Phi$ is called completely PPT if it is both completely positive and completely copositive; see [26, 10, 39] for related topics. Both Theorem 4.1 and Theorem 4.2 illustrate the implicit interaction and connection between the first and second partial traces. The proof of Theorem 4.1 depends mainly on the 2-copositivity of $\Psi(X)=({\rm tr}X)I-X$; see, e.g., [2] and [30] for more details. Correspondingly, the proof of Theorem 4.2 relies similarly on the 2-copositivity of $\Phi(X)=({\rm tr}X)I+X$; see [22]. For more applications of these two maps, we refer readers to [26, 21]. In this section, we give a unified treatment of both Theorem 4.1 and Theorem 4.2. Our treatment is more concise than the original proofs.
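Before giving the unified proof, it is instructive to test the two theorems numerically. The sketch below (Python with NumPy, real symmetric case for simplicity; the helper names are ours) checks the three Löwner orderings of Theorems 4.1 and 4.2 on a random positive semidefinite $A$.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 2
M = rng.standard_normal((m * n, m * n))
A = M.T @ M                                  # positive semidefinite in M_m(M_n)

T = A.reshape(m, n, m, n)                    # T[i, l, j, m] = (A_{i,j})_{l,m}
tr1A = np.einsum('ilim->lm', T)              # sum of diagonal blocks, in M_n
tr2A = np.einsum('iljl->ij', T)              # blockwise traces, in M_m

L = np.kron(np.eye(m), tr1A)                 # I_m (x) tr_1 A
R = np.kron(tr2A, np.eye(n))                 # (tr_2 A) (x) I_n
tI = np.trace(A) * np.eye(m * n)             # (tr A) I_{mn}

psd = lambda X: np.linalg.eigvalsh(X).min() >= -1e-9

assert psd(tI + A - L - R)                   # Theorem 4.1
assert psd(tI - R - (A - L))                 # Theorem 4.2, first inequality
assert psd(tI + R - (A + L))                 # Theorem 4.2, second inequality
```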
We need to use a recent result of Choi [6, 7], which investigates more relations between the partial traces and the partial transpose. ###### Lemma 4.3 [6, 7] Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then $({\rm tr}_{2}A^{\tau})\otimes I_{n}\geq\pm A^{\tau}$, and $I_{m}\otimes{\rm tr}_{1}A^{\tau}\geq\pm A^{\tau}.$ Now, we present a unified treatment of Theorems 4.1 and 4.2. New proof of Theorem 4.1. We define the map $\Phi_{2}^{-}:\mathbb{M}_{m}(\mathbb{M}_{n})\to\mathbb{M}_{m}(\mathbb{M}_{n})$ as $\Phi_{2}^{-}(X):=({\rm tr}_{2}X^{\tau})\otimes I_{n}-X^{\tau}.$ On the other hand, we define $\Phi_{1}^{-}(X):=I_{m}\otimes{\rm tr}_{1}X^{\tau}-X^{\tau}.$ Lemma 4.3 implies that both $\Phi_{2}^{-}$ and $\Phi_{1}^{-}$ are positive linear maps on $\mathbb{M}_{m}(\mathbb{M}_{n})$. Let $A$ be a positive semidefinite block matrix. Thus, we have $\Phi_{2}^{-}(A)=({\rm tr}_{2}A^{\tau})\otimes I_{n}-A^{\tau}\geq 0.$ Applying the map $\Phi_{1}^{-}$ to the matrix $\Phi_{2}^{-}(A)$, we obtain $\Phi_{1}^{-}\bigl{(}\Phi_{2}^{-}(A)\bigr{)}=I_{m}\otimes{\rm tr}_{1}{\Phi_{2}^{-}(A)}^{\tau}-{\Phi_{2}^{-}(A)}^{\tau}\geq 0.$ (10) By a direct computation, we get ${\Phi_{2}^{-}(A)}^{\tau}=({\rm tr}_{2}A)\otimes I_{n}-A$ and ${\rm tr}_{1}{\Phi_{2}^{-}(A)}^{\tau}={\rm tr}_{1}\bigl{(}({\rm tr}_{2}A)\otimes I_{n}-A\bigr{)}=\sum_{i=1}^{m}({\rm tr}A_{i,i})I_{n}-{\rm tr}_{1}A=({\rm tr}A)I_{n}-{\rm tr}_{1}A.$ Therefore, inequality (10) yields the desired result in Theorem 4.1. $\blacksquare$ Remarks. In the above proof, we can see that Theorem 4.1 is just a direct consequence of Lemma 4.3. To our surprise, Theorem 4.1 can also be proved by using the positivity of $\Phi_{1}^{-}$ first, and then applying the positivity of $\Phi_{2}^{-}$ later. More precisely, we first derive ${\Phi_{1}^{-}(A)}\geq 0$, and then we have $\Phi_{2}^{-}(\Phi_{1}^{-}(A))\geq 0$. Upon simplification, one can immediately get Theorem 4.1 again. We summarize this observation as the following proposition. ###### Proposition 4.4 For every $X\in\mathbb{M}_{m}(\mathbb{M}_{n})$, we have $\Phi_{1}^{-}(\Phi_{2}^{-}(X))=\Phi_{2}^{-}(\Phi_{1}^{-}(X))$. Correspondingly, we can present an alternative proof of Theorem 4.2 in a similar way. New proof of Theorem 4.2. We define the maps $\Phi_{2}^{+}$ and $\Phi_{1}^{+}$ on $\mathbb{M}_{m}(\mathbb{M}_{n})$ as $\Phi_{2}^{+}(X):=({\rm tr}_{2}X^{\tau})\otimes I_{n}+X^{\tau},$ and $\Phi_{1}^{+}(X):=I_{m}\otimes{\rm tr}_{1}X^{\tau}+X^{\tau}.$ We can see from Lemma 4.3 that both $\Phi_{2}^{+}$ and $\Phi_{1}^{+}$ are positive linear maps. Following the lines of the previous proof, we get $\Phi_{1}^{-}(\Phi_{2}^{+}(A))=\Phi_{2}^{+}(\Phi_{1}^{-}(A))\geq 0$, which leads to $({\rm tr}A)I_{mn}-({\rm tr}_{2}A)\otimes I_{n}\geq A-I_{m}\otimes({\rm tr}_{1}A).$ Moreover, we have $\Phi_{1}^{+}(\Phi_{2}^{-}(A))=\Phi_{2}^{-}(\Phi_{1}^{+}(A))\geq 0$. It follows that $({\rm tr}A)I_{mn}+({\rm tr}_{2}A)\otimes I_{n}\geq A+I_{m}\otimes({\rm tr}_{1}A).$ We mention that the positivity of $\Phi_{1}^{+}(\Phi_{2}^{+}(A))$ yields a trivial result. $\blacksquare$ In the remainder of this section, we shall turn our attention to determinant inequalities of sector matrices involving partial traces.
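Before turning to those determinant inequalities, we note that Proposition 4.4 is also easy to test numerically. The sketch below (Python with NumPy; the helper `ptranspose` implementing $X\mapsto X^{\tau}$ and all names are ours) checks the commutation relation on an arbitrary, not necessarily positive semidefinite, matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 2

def ptranspose(X):
    """Partial transpose: swap the blocks, (X^tau)_{i,j} = X_{j,i}."""
    return X.reshape(m, n, m, n).transpose(2, 1, 0, 3).reshape(m * n, m * n)

def tr1(X):
    return np.einsum('ilim->lm', X.reshape(m, n, m, n))

def tr2(X):
    return np.einsum('iljl->ij', X.reshape(m, n, m, n))

def phi1_minus(X):
    Xt = ptranspose(X)
    return np.kron(np.eye(m), tr1(Xt)) - Xt

def phi2_minus(X):
    Xt = ptranspose(X)
    return np.kron(tr2(Xt), np.eye(n)) - Xt

X = rng.standard_normal((m * n, m * n))      # arbitrary, not positive
assert np.allclose(phi1_minus(phi2_minus(X)), phi2_minus(phi1_minus(X)))
```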
Motivated by Audenaert’s result (8), Lin [29] recently obtained a determinantal inequality for partial traces, which states that if $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ is positive semidefinite, then $({\rm tr}A)^{mn}+\det A\geq\det({\rm tr}_{1}A)^{m}+\det({\rm tr}_{2}A)^{n}.$ (11) We remark here that Fu, Lau and Tam [11, Corollary 2.2] recently improved (11) when $A$ is a density matrix, i.e., a positive semidefinite matrix with trace equal to $1$. The key step in the proof of (11) relies on Theorem 4.1 together with the following interesting lemma. It is worth noting that Lemma 4.5 is elegant and useful in deriving matrix inequalities; see, e.g., [23, 24, 25] for applications on Oppenheim type inequalities. ###### Lemma 4.5 [30] Let $X,Y,W$ and $Z$ be positive semidefinite matrices of the same order. If $X\geq W,X\geq Z$ and $X+Y\geq W+Z$, then $\det X+\det Y\geq\det W+\det Z.$ Remark. We observe that Lemma 4.5 implies the determinant inequality: $\det(A+B+C)+\det C\geq\det(A+C)+\det(B+C),$ where $A,B$ and $C$ are positive semidefinite matrices. With the help of Lemma 4.5, we can easily present two analogues of (11). ###### Proposition 4.6 Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then $({\rm tr}A)^{mn}+\det({\rm tr}_{1}A)^{m}\geq\det A+\det({\rm tr}_{2}A)^{n},$ and $({\rm tr}A)^{mn}+\det({\rm tr}_{2}A)^{n}\geq\det A+\det({\rm tr}_{1}A)^{m}.$ Proof. We prove the first inequality only, since the second one can be proved in exactly the same way. Let $X=({\rm tr}A)I_{mn},Y=I_{m}\otimes({\rm tr}_{1}A),W=A$ and $Z=({\rm tr}_{2}A)\otimes I_{n}$. It is easy to see that $({\rm tr}A)I_{m}=\sum_{i=1}^{m}({\rm tr}A_{i,i})I_{m}=\bigl{(}{\rm tr}({\rm tr}_{2}A)\bigr{)}I_{m}\geq\lambda_{\max}({\rm tr}_{2}A)I_{m}\geq{\rm tr}_{2}A,$ which implies that $X\geq Z\geq 0$, and clearly $X\geq W\geq 0$. Moreover, Theorem 4.2 says that $X+Y\geq W+Z$. That is, all conditions in Lemma 4.5 are satisfied. Therefore, $\displaystyle({\rm tr}A)^{mn}+\det\bigl{(}I_{m}\otimes({\rm tr}_{1}A)\bigr{)}\geq\det A+\det\bigl{(}({\rm tr}_{2}A)\otimes I_{n}\bigr{)}.$ It is well-known [35, p. 37] that for every $X\in\mathbb{M}_{m}$ and $Y\in\mathbb{M}_{n}$, $\det(X\otimes Y)=(\det X)^{n}(\det Y)^{m}.$ Thus, we complete the proof of the required result. We next give an improvement on Proposition 4.6. ###### Theorem 4.7 Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then $({\rm tr}A)^{mn}+\det({\rm tr}_{1}A)^{m}\geq m^{nm}\bigl{(}\det A+\det({\rm tr}_{2}A)^{n}\bigr{)},$ and $({\rm tr}A)^{mn}+\det({\rm tr}_{2}A)^{n}\geq n^{mn}\bigl{(}\det A+\det({\rm tr}_{1}A)^{m}\bigr{)}.$ Proof. We only prove the second inequality. Invoking Theorem 3.3, we get $\left(\frac{\det({\rm tr}_{2}A)}{n^{m}}\right)^{n}\geq\det A.$ Equivalently, we have $\det({\rm tr}_{2}A)^{n}\geq n^{mn}\det A$. It suffices to show that $({\rm tr}A)^{n}\geq n^{n}\det({\rm tr}_{1}A).$ Note that ${\rm tr}A=\sum_{i=1}^{m}{\rm tr}(A_{i,i})={\rm tr}\left(\sum_{i=1}^{m}A_{i,i}\right)={\rm tr}({\rm tr}_{1}A).$ We denote $X:={\rm tr}_{1}A$, which is a positive semidefinite matrix of order $n$. So we need to prove that $({\rm tr}X)^{n}\geq n^{n}\det X$. This is equivalent to showing $\left(\sum_{i=1}^{n}\lambda_{i}(X)\right)^{n}\geq n^{n}\prod_{i=1}^{n}\lambda_{i}(X),$ which is a direct consequence of the AM-GM inequality. Surprisingly, the proof of Theorem 4.7 seems simpler than that of Proposition 4.6 since it does not rely on Theorem 4.2 and Lemma 4.5.
However, it provides a substantial improvement on Proposition 4.6 whenever $m$ and $n$ are large integers. In the sequel, we shall denote $|A|=(A^{*}A)^{1/2}$, which is called the modulus of $A$. We remark that this notation is different from that in Theorem 3.7. Note that $|A|$ is positive semidefinite, and the eigenvalues of $|A|$ are called the singular values of $A$. In 2019, Yang, Lu and Chen [34] extended (11) to sector matrices, showing that $({\rm tr}|A|)^{mn}+\det|A|\geq(\cos\alpha)^{mn}|\det({\rm tr}_{1}A)|^{m}+(\cos\alpha)^{mn}|\det({\rm tr}_{2}A)|^{n}.$ Now, we are ready to present an extension of Theorem 4.7. ###### Theorem 4.8 Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be such that $W(A)\subseteq S_{\alpha}$. Then $({\rm tr}|A|)^{mn}+\left|{\det({\rm tr}_{1}A)}\right|^{m}\geq(m\cos\alpha)^{mn}\bigl{(}\det|A|+|\det({\rm tr}_{2}A)|^{n}\bigr{)},$ and $({\rm tr}|A|)^{mn}+\left|{\det({\rm tr}_{2}A)}\right|^{n}\geq(n\cos\alpha)^{mn}\bigl{(}\det|A|+|\det({\rm tr}_{1}A)|^{m}\bigr{)}.$ Proof. We only prove the first inequality. According to the definition of $S_{\alpha}$, if $W(A)\subseteq S_{\alpha}$, then $\Re A$ is positive definite and its trace is positive. By Lemma 2.2, we have ${\rm tr}|A|=\sum_{i=1}^{mn}\sigma_{i}(A)\geq\sum_{i=1}^{mn}\lambda_{i}(\Re A)={\rm tr}(\Re A)\geq 0.$ It is noteworthy, by Lemma 2.4, that $W({\rm tr}_{1}A)\subseteq S_{\alpha}$ and $W({\rm tr}_{2}A)\subseteq S_{\alpha}$. Clearly, we have $\Re({\rm tr}_{1}A)={\rm tr}_{1}(\Re A)$ and $\Re({\rm tr}_{2}A)={\rm tr}_{2}(\Re A)$. By setting $X={\rm tr}_{1}A$ in Lemma 2.2, we get $|\det({\rm tr}_{1}A)|\geq\det\bigl{(}\Re({\rm tr}_{1}A)\bigr{)}=\det\bigl{(}{\rm tr}_{1}(\Re A)\bigr{)}.$ Note that $\Re A$ is positive semidefinite. By applying Theorem 4.7, we can obtain $\displaystyle({\rm tr}|A|)^{mn}+\left|{\det({\rm tr}_{1}A)}\right|^{m}$ $\displaystyle\geq({\rm tr}\,\Re A)^{mn}+\bigl{(}{\det{\rm tr}_{1}(\Re A)}\bigr{)}^{m}$ $\displaystyle\geq m^{nm}\bigl{(}\det(\Re A)+\bigl{(}\det\Re({\rm tr}_{2}A)\bigr{)}^{n}\bigr{)}$ $\displaystyle\geq(m\cos\alpha)^{mn}|\det A|+(m\cos\alpha)^{mn}|\det({\rm tr}_{2}A)|^{n},$ where the last inequality follows from Lemma 2.1 applied to $A$ and ${\rm tr}_{2}A$ respectively. ## 5 Trace inequalities for two by two block matrices Positive semidefinite $2\times 2$ block matrices are extensively studied, since such a partition yields a great deal of versatile and elegant matrix inequalities; see, e.g., [13, 18, 21, 12] for details. Recently, Kittaneh and Lin [18] (or see [26]) proved the following trace inequalities. ###### Theorem 5.1 [18, 26] Let $\begin{bmatrix}A&B\\\ B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive semidefinite. Then ${\rm tr}A\,{\rm tr}C-{\rm tr}B^{*}\,{\rm tr}B\geq\bigl{|}{\rm tr}AC-{\rm tr}B^{*}B\bigr{|},$ and ${\rm tr}A\,{\rm tr}C+{\rm tr}B^{*}\,{\rm tr}B\geq{\rm tr}AC+{\rm tr}B^{*}B.$ In this section, we present some inequalities related to the trace for $2\times 2$ block matrices, which are slight extensions of the results of Kittaneh and Lin. We first introduce some notation. Let $\otimes^{r}A:=A\otimes\cdots\otimes A$ be the $r$-fold tensor power of $A$. ###### Theorem 5.2 Let $\begin{bmatrix}A&B\\\ B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive semidefinite.
Then for $r\in\mathbb{N}^{*}$, $(\mathrm{tr}A\,\mathrm{tr}C)^{r}-({\rm tr}B^{*}\,\mathrm{tr}B)^{r}\geq\bigl{|}(\mathrm{tr}AC)^{r}-(\mathrm{tr}B^{*}B)^{r}\bigr{|},$ and $(\mathrm{tr}A\,\mathrm{tr}C)^{r}+({\rm tr}B^{*}\,\mathrm{tr}B)^{r}\geq(\mathrm{tr}AC)^{r}+(\mathrm{tr}B^{*}B)^{r}.$ Proof. Note that $\begin{bmatrix}\\!\\!\\!\otimes^{r}A&\otimes^{r}B\\\ \otimes^{r}B^{*}&\otimes^{r}C\end{bmatrix}$ is a principal submatrix of $\otimes^{r}\begin{bmatrix}A&B\\\ B^{*}&C\end{bmatrix}$. Thus $\begin{bmatrix}\\!\\!\\!\otimes^{r}A&\otimes^{r}B\\\ \otimes^{r}B^{*}&\otimes^{r}C\end{bmatrix}$ is again positive semidefinite. By applying Theorem 5.1 to this block matrix, we get $\bigl{|}{\rm tr}(\otimes^{r}A)(\otimes^{r}C)-{\rm tr}(\otimes^{r}B^{*})(\otimes^{r}B)\bigr{|}\leq{\rm tr}\otimes^{r}\\!\\!A\,{\rm tr}\otimes^{r}\\!\\!C-{\rm tr}\otimes^{r}\\!B^{*}{\rm tr}\otimes^{r}\\!\\!B,$ and ${\rm tr}(\otimes^{r}A)(\otimes^{r}C)+{\rm tr}(\otimes^{r}B^{*})(\otimes^{r}B)\leq{\rm tr}\otimes^{r}\\!\\!A\,{\rm tr}\otimes^{r}\\!\\!C+{\rm tr}\otimes^{r}\\!B^{*}{\rm tr}\otimes^{r}\\!\\!B.$ Invoking the well-known facts [35, Chapter 2]: $(\otimes^{r}X)(\otimes^{r}Y)=\otimes^{r}(XY)$ and ${\rm tr}(\otimes^{r}X)=({\rm tr}\,X)^{r}$, the desired inequalities follow immediately. Remark. Theorem 5.2 was proved in the first version of our manuscript (announced on March 10, 2020, arXiv: 2003.04520v1). We remark that this result was recently and independently rediscovered by Fu and Gumus in [12] using a quite different method. Let $e_{t}(X)$ denote the $t$-th elementary symmetric function of the eigenvalues of the square matrix $X$. $e_{t}(X):=\sum_{1\leq i_{1}<i_{2}<\cdots<i_{t}\leq n}\prod_{j=1}^{t}\lambda_{i_{j}}(X).$ In particular, we know that $e_{1}(X)={\rm tr}(X)$. We can get the following theorem. ###### Theorem 5.3 Let $\begin{bmatrix}A&B\\\ B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive semidefinite. Then for $t\in\\{1,2,\ldots,k\\}$, $e_{t}(A)e_{t}(C)-e_{t}(B^{*})e_{t}(B)\geq|e_{t}(AC)-e_{t}(B^{*}B)|,$ and $e_{t}(A)e_{t}(C)+e_{t}(B^{*})e_{t}(B)\geq e_{t}(AC)+e_{t}(B^{*}B).$ Proof. The first inequality can be found in [18, Corollary 2.7]. We next give the outline of the proof of the second one. Note that $\begin{bmatrix}\\!\\!\\!\otimes^{t}A&\otimes^{t}B\\\ \otimes^{t}B^{*}&\otimes^{t}C\end{bmatrix}$ is positive semidefinite. By restricting this block matrix to the symmetric class of tensor product (see, e.g., [3, pp. 16–20]), we know that $\begin{bmatrix}\\!\\!\\!\wedge^{t}A&\wedge^{t}B\\\ \wedge^{t}B^{*}&\wedge^{t}C\end{bmatrix}$ is still positive semidefinite. Note that $e_{t}(X)={\rm tr}(\wedge^{t}X)$ and $(\wedge^{t}X)(\wedge^{t}Y)=\wedge^{t}(XY)$. Applying Theorem 5.1 to this block matrix yields the required result. Let $s_{t}(X)$ be the $t$-th complete symmetric polynomial of eigenvalues of $X$, i.e., $s_{t}(X):=\sum_{1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{t}\leq n}\prod_{j=1}^{t}\lambda_{i_{j}}(X).$ Clearly, we have $s_{1}(X)={\rm tr}(X)$. We can get the following slight extension similarly. ###### Theorem 5.4 Let $\begin{bmatrix}A&B\\\ B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive semidefinite. Then for $t\in\\{1,2,\ldots,k\\}$, $s_{t}(A)s_{t}(C)-s_{t}(B^{*})s_{t}(B)\geq|s_{t}(AC)-s_{t}(B^{*}B)|,$ and $s_{t}(A)s_{t}(C)+s_{t}(B^{*})s_{t}(B)\geq s_{t}(AC)+s_{t}(B^{*}B).$ Proof. Note that $\begin{bmatrix}\\!\\!\\!\otimes^{t}A&\otimes^{t}B\\\ \otimes^{t}B^{*}&\otimes^{t}C\end{bmatrix}$ is positive semidefinite. 
By restricting this block matrix to the symmetric class of tensor product (see, e.g., [3, pp. 16–20]), we know that $\begin{bmatrix}\\!\\!\\!\vee^{t}A&\vee^{t}B\\\ \vee^{t}B^{*}&\vee^{t}C\end{bmatrix}$ is still positive semidefinite. Similarly, we know that ${\rm tr}(\vee^{t}X)=s_{t}(X)$ and $(\vee^{t}X)(\vee^{t}Y)=\vee^{t}(XY)$. Applying Theorem 5.1 to this block matrix leads to the desired result. ## Acknowledgments This paper is dedicated to Prof. Weijun Liu (Central South University) on his 60th birthday, October 22 of the lunar calendar in 2021. I would like to thank Prof. Yuejian Peng for reading carefully through an earlier version of this paper. This work was supported by NSFC (Grant No. 11931002). ## References * [1] K.M.R. Audenaert, Subadditivity of $q$-entropies for $q>1$, J. Math. Phys. 48 (2007), no. 8, 083507. * [2] T. Ando, Matrix inequalities involving partial traces, ILAS Conference, 2014. * [3] R. Bhatia, Matrix Analysis, GTM 169, Springer-Verlag, New York, 1997. * [4] R. Bhatia, Positive Definite Matrices, Princeton University Press, Princeton, 2007. * [5] A. Desenyei, D. Petz, Partial subadditivity of entropies, Linear Algebra Appl. 439 (2013) 3297–3305. * [6] D. Choi, Inequalities related to trace and determinant of positive semidefinite block matrices, Linear Algebra Appl. 532 (2017) 1–7. * [7] D. Choi, Inequalities about partial transpose and partial traces, Linear Multilinear Algebra 66 (2018) 1619–1625. * [8] D. Choi, T.-Y. Tam, P. Zhang, Extension of Fischer’s inequality, Linear Algebra Appl. 569 (2019) 311–322. * [9] M. Fiedler, T.L. Markham, On a theorem of Everitt, Thompson and de Pillis, Math. Slovaca 44 (1994) 441–444. * [10] X. Fu, P.-S. Lau, T.-Y. Tam, Linear maps of positive partial transpose matrices and singular value inequalities, Math. Inequal. Appl. 23 (4) (2020) 1459–1468. * [11] X. Fu, P.-S. Lau, T.-Y. Tam, Inequalities on partial traces of positive semidefinite block matrices, Canad. Math. Bull. 64 (4) (2021) 964–969. * [12] X. Fu, M. Gumus, Trace inequalities involving positive semi-definite block matrices, Linear Multilinear Algebra (2021) https://doi.org/10.1080/03081087.2021.1942418. * [13] M. Gumus, J. Liu, S. Raouafi, T.-Y. Tam, Positive semi-definite $2\times 2$ block matrices and norm inequalities, Linear Algebra Appl. 551 (2018) 83–91. * [14] R.A. Horn, C.R. Johnson, Matrix Analysis, 2nd ed., Cambridge University Press, Cambridge, 2013. * [15] A. Jenčová, M.B. Ruskai, A unified treatment of convexity of relative entropy and related trace functions, with conditions for equality, Rev. Math. Phys. 22 (2010) 1099–1121. * [16] X. Jiang, Y. Zheng, X. Chen, Extending a refinement of Kotelianskii’s inequality, Linear Algebra Appl. 574 (2019) 252–261. * [17] L. Kuai, An extension of the Fiedler–Markham determinant inequality, Linear Multilinear Algebra 66 (2018) 547–553. * [18] F. Kittaneh, M. Lin, Trace inequalities for positive semidefinite block matrices, Linear Algebra Appl. 524 (2017) 153–158. * [19] E.-Y. Lee, The off-diagonal block of a PPT matrix, Linear Algebra Appl. 486 (2015), 449–453. * [20] Y. Li, L. Feng, Z. Huang, W. Liu, Inequalities regarding partial trace and partial determinant, Math. Inequal. Appl. 23 (2020) 477–485. * [21] Y. Li, Y. Huang, L. Feng, W. Liu, Some applications of two completely copositive maps, Linear Algebra Appl. 590 (2020) 124–132. * [22] Y. Li, W. Liu, Y. Huang, A new matrix inequality involving partial traces, Operators and Matrices 15 (2021), no. 3, 1189–1199. * [23] Y. Li, L. 
Feng, An Oppenheim type determinantal inequality for the Khatri–Rao product, Operators and Matrices 15 (2021), no. 2, 693–701. * [24] Y. Li, Y. Peng, An Oppenheim type inequality for positive definite block matrices, Linear Multilinear Algebra (2021) https://doi.org/10.1080/03081087.2021.1882370. * [25] M. Lin, An Oppenheim type inequality for a block Hadamard product, Linear Algebra Appl. 452 (2014) 1–6. * [26] M. Lin, A completely PPT map, Linear Algebra Appl. 459 (2014) 404–410. * [27] M. Lin, Inequalities related to $2\times 2$ block PPT matrices, Operators and Matrices 9 (2015), no.4, 917–924. * [28] M. Lin, Extension of a result of Haynsworth and Hartfiel, Arch. Math. 104 (2015) 93–100. * [29] M. Lin, A treatment of a determinant inequality of Fiedler and Markham, Czech. Math. J. 66 (2016) 737–742. * [30] M. Lin, A determinantal inequality involving partial traces, Canad. Math. Bull. 59 (2016) 585–591. * [31] M. Lin, P. Zhang, Unifying a result of Thompson and a result of Fiedler and Markham on block positive definite matrices, Linear Algebra Appl. 533 (2017) 380–385. * [32] D. Petz, Quantum Information Theory and Quantum Statistics. Theoretical and Mathematical Physics, Springer, Berlin, 2008. * [33] A.E. Rastegin, Relations for symmetric norms and anti-norms before and after partial trace, J. Stat. Phys. 148 (2012) 1040–1053. * [34] J. Yang, L. Lu, Z. Chen, Schatten $q$-norms and determinantal inequalities for matrices with numerical ranges in a sector, Linear Multilinear Algebra 67 (2019) 221–227. * [35] X. Zhan, Matrix Theory, Graduate Studies in Mathematics, vol. 147, Amer. Math. Soc., Providence, RI, 2013. * [36] F. Zhang, Matrix Theory: Basic Results and Techniques, 2nd ed., Springer, New York, 2011. * [37] F. Zhang, Positivity of matrices with generalized matrix functions. Acta Math. Sin. (Engl. Ser.) 28 (2012) 1779–1786. * [38] P. Zhang, Extension of Matic’s results, Linear Algebra Appl. 486 (2015) 328–334. * [39] P. Zhang, On some inequalities related to positive block matrices, Linear Algebra Appl. 576 (2019) 258–267.
2024-09-04T02:54:58.633886
2020-03-10T10:46:12
2003.04628
{ "authors": "Ygor Gallina, Florian Boudin, B\\'eatrice Daille", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26130", "submitter": "Ygor Gallina", "url": "https://arxiv.org/abs/2003.04628" }
arxiv-papers
# Large-Scale Evaluation of Keyphrase Extraction Models Ygor Gallina<EMAIL_ADDRESS>Florian Boudin<EMAIL_ADDRESS>and Béatrice Daille beatrice.daille@univ-nantes.fr LS2N, Université de Nantes, Nantes, France (2020) ###### Abstract. Keyphrase extraction models are usually evaluated under different, not directly comparable, experimental setups. As a result, it remains unclear how well proposed models actually perform, and how they compare to each other. In this work, we address this issue by presenting a systematic large-scale analysis of state-of-the-art keyphrase extraction models involving multiple benchmark datasets from various sources and domains. Our main results reveal that state-of-the-art models are in fact still challenged by simple baselines on some datasets. We also present new insights about the impact of using author- or reader-assigned keyphrases as a proxy for gold standard, and give recommendations for strong baselines and reliable benchmark datasets. Keyphrase generation, natural language processing, evaluation ## 1\. Introduction Keyphrases are single or multi-word lexical units that represent the main concepts in a document (Evans and Zhai, 1996). They are particularly useful for indexing, searching and browsing digital libraries (Barker et al., 1972; Zhai, 1997; Gutwin et al., 1999; Witten et al., 2009), and have proven themselves as effective features in many downstream natural language processing tasks (Hulth and Megyesi, 2006; Litvak and Last, 2008; Berend, 2011). Still, most documents do not have assigned keyphrases, and manual annotation is simply not a feasible option (Mao and Lu, 2017). There is therefore a great need for automated methods to assign relevant keyphrases to documents. Automatic keyphrase extraction (also referred to as keyphrase generation or keyphrase annotation) – that is, the task of extracting keyphrases either from the content of the document or from a controlled vocabulary – has received much attention from the research community (Kim et al., 2010; Gollapalli et al., 2015; Augenstein et al., 2017). Thus, many keyphrase extraction models have been proposed over the last years, ranging from early statistics-based models (Witten et al., 1999), to popular graph-based ranking models (Mihalcea and Tarau, 2004), and recent neural models (Meng et al., 2017). However, because of the great discrepancies in experimental setups among past studies, it is very difficult to compare and contrast the effectiveness of these models, and even more so to assess the progress of the field as a whole. More specifically, we observe striking differences in how models are parameterized, evaluated and compared in previous work. To name just a few examples, experiments are most often conducted on different benchmark datasets, all of which differ in domain, size, language or quality of the gold standard (that is, reference keyphrases supplied by authors, readers or professional indexers).
This not only makes the reported results hard to contrast, but also has a profound impact on trained model performance (Gallina et al., 2019). In addition, since there is no consensus as to which evaluation metric is most reliable for keyphrase extraction (Zesch and Gurevych, 2009; Hussey et al., 2012; Hasan and Ng, 2014), diverse measures are commonly seen in the literature, thus preventing any further direct comparisons. Moreover, the evaluation of missing keyphrases – that is, gold keyphrases that do not occur in the content of the document – is still an open question and there is little agreement on whether they should be included or not (Kim et al., 2010). We strongly believe that this lack of empirical rigor is a real hindrance to progress on keyphrase extraction, and that a systematic comparison of existing models under the same conditions is needed to fully understand how they actually perform. In this work, we resolve this issue by conducting the first large-scale study on automatic keyphrase extraction. More precisely, we present an extensive comparative analysis of state-of-the-art keyphrase extraction models involving 9 benchmark datasets from various domains. To ensure controlled, fair and reliable experiments, we embarked upon the difficult process of re-implementing all of the models presented in this paper (link to the code will appear here after the review period) and pre-processing the datasets in a unified and systematic way (link to the datasets will appear here after the review period). Using these new large-scale experimental results, we seek to better understand how well state-of-the-art models perform across sources, domains and languages. We also go further than prior work and investigate the following research questions:

1. How much progress have we made on keyphrase extraction since early models?
2. What is the impact of using non-expert gold standards, that is, author- or reader-assigned keyphrases, when training and evaluating keyphrase extraction models?
3. Which baselines and benchmark datasets should be included in future work for a better understanding of the pros and cons of a newly proposed model?

## 2\. Benchmark Datasets Benchmark datasets for evaluating automatic keyphrase extraction cover a wide range of sources ranging from scientific articles and web pages to Twitter and email messages. We collected 9 of the most widely used datasets which we believe are representative of the different sources and domains found in previous work. Detailed statistics for each selected dataset are shown in Table 1. They are grouped into three categories that are outlined below: Scientific articles: Among the selected datasets, three are composed of full-text scientific publications: ACM (Krapivin et al., 2009) and SemEval (Kim et al., 2010) about computer science, and PubMed (Schutz, 2008) from the medical domain. Not surprisingly, they contain only a small number of documents due to copyright reasons. These datasets provide author-assigned keyphrases which serve as a reasonable, but far from perfect, proxy for expert annotations. In the case of SemEval, student annotators were hired to extend gold annotation labels. Paper abstracts: Scientific abstracts, often referred to as bibliographic records, are arguably the most prevalent documents for benchmarking keyphrase extraction. They are readily available in great quantities and come with author-assigned keyphrases that can be used as gold standard.
We gathered three datasets, all dealing with the computer science domain: Inspec (Hulth, 2003), WWW (Caragea et al., 2014) and KP20k (Meng et al., 2017). It is worth noting that with more than half a million documents, KP20k is the largest dataset to date and one of the few that is large enough to train neural models. News articles: News texts are the last source of documents present among the collected datasets. Similar to paper abstracts, online news are available in large quantities and can be easily mined from the internet. We selected the following three datasets: DUC-2001 (Wan and Xiao, 2008), 500N-KPCrowd (Marujo et al., 2012) and KPTimes (Gallina et al., 2019). The first two datasets provide reader-assigned keyphrases, while KPTimes supplies indexer-assigned keyphrases extracted from metadata and initially intended for search engines. It is interesting to observe that only two datasets in our study, namely Inspec and KPTimes, provide gold keyphrases annotated by professional indexers.

| Dataset | Ann. | Train | Test | #words | #kp | %abs |
|---|---|---|---|---|---|---|
| PubMed (Schutz, 2008) | $A$ | - | 1 320 | 5 323 | 5.4 | 16.9 |
| ACM (Krapivin et al., 2009) | $A$ | - | 2 304 | 9 198 | 5.3 | 16.3 |
| SemEval (Kim et al., 2010) | $A\cup R$ | 144 | 100 | 7 961 | 14.7 | 19.7 |
| Scientific articles (avg.) | | | | 7 494 | 8.5 | 17.6 |
| Inspec (Hulth, 2003) | $I$ | 1 000 | 500 | 135 | 9.8 | 22.4 |
| WWW (Caragea et al., 2014) | $A$ | - | 1 330 | 164 | 4.8 | 52.0 |
| KP20k (Meng et al., 2017) | $A$ | 530K | 20K | 176 | 5.3 | 42.6 |
| Paper abstracts (avg.) | | | | 158 | 6.6 | 39.0 |
| DUC-2001 (Wan and Xiao, 2008) | $R$ | - | 308 | 847 | 8.1 | 3.7 |
| KPCrowd (Marujo et al., 2012) | $R$ | 450 | 50 | 465 | 46.2 | 11.2 |
| KPTimes (Gallina et al., 2019) | $I$ | 260K | 10K | 921 | 5.0 | 54.7 |
| News articles (avg.) | | | | 744 | 19.8 | 23.2 |

Table 1. Statistics of the datasets. Gold annotation is supplied by authors ($A$), readers ($R$) or professional indexers ($I$). The numbers of documents in the training and testing splits are shown. The average number of keyphrases (#kp) and words (#words) per document, and the ratio of missing keyphrases (%abs) are computed on the test set.

Datasets containing scientific articles or abstracts rely primarily on author-assigned keyphrases as gold standard. They therefore exhibit similar properties for the average number of ground truth keyphrases per document ($\approx 5$). On the other hand, articles are on average significantly longer than abstracts ($\approx 7500$ words vs. $\approx 160$ words respectively) and consequently reveal a much smaller fraction of missing keyphrases ($\approx 18\%$ vs. $\approx 39\%$ respectively). Datasets with reader-assigned keyphrases exhibit the lowest numbers of missing keyphrases, which can be explained by the fact that readers appear to produce gold-standard annotations in an extractive fashion (Wang et al., 2015). We also confirmed this empirically by computing the ratio of missing keyphrases in the author-assigned ($24\%$) and reader-assigned ($17.5\%$) gold annotations of the SemEval dataset. In contrast, the opposite trend is observed for KPTimes, which comes with gold standards annotated by professional indexers and shows the highest percentage of missing keyphrases ($54.7\%$). This indicates the more abstractive nature of indexer-assigned keyphrases.
Put differently, it is known that non-expert annotations are less constrained and may include seldom-used variants or misspellings (Sood et al., 2007), whereas indexers strive to rely on a consistent terminology and assign the same keyphrase to all documents for a given topic, even when it does not occur in these documents. To investigate this further, we looked at how many variants of an index term, in this case “artificial neural network”, could be found in the author-assigned keyphrases of KP20k. All in all, we found dozens of variants for this term, including “neural network”, “neural network (nns)”, “neural net”, “artificial neural net” or “nn”. This apparent lack of annotation consistency intuitively has two consequences: 1) it makes it harder for supervised approaches to learn a good model, 2) it makes automatic evaluation much less reliable as it is based on exact string matching. It is important to stress that datasets containing scientific articles may contain noisy texts. Indeed, most articles were automatically converted from PDF format to plain text and thus are likely to contain irrelevant pieces of text (e.g. muddled sentences, equations). Previous work shows that noisy inputs undermine the overall performance of keyphrase extraction models (Boudin et al., 2016). In this study, we do not insist on a perfect input and we are aware that reported results may be improved with an increase in pre-processing effort. ## 3\. Models Roughly speaking, previous works on keyphrase extraction can be divided into two groups depending on whether they adopt a supervised learning procedure or not. This section starts by introducing the baselines we will use in our experiments, and then proceeds to describe the state-of-the-art keyphrase extraction models we re-implemented sorted into the aforementioned two groups. ### 3.1. Baselines Having strong baselines to compare with is a prerequisite for contrasting the results of proposed models. In previous studies, various baselines were considered, complicating the analysis and interpretation of the reported results. Our stance here is to establish three baselines, each associated with a particular feature that is commonly used in keyphrase extraction models. All baselines are also unsupervised, allowing their use and performance analysis on any of the benchmark datasets. Keyphrase position is a strong signal for both unsupervised and supervised models, simply because texts are usually written so that the most important ideas go first (Marcu, 1997). In single document summarization for example, the lead baseline (that is, the first sentences from the document), while incredibly simple, is still a competitive baseline (Kedzie et al., 2018). Similar to the lead baseline, we propose the FirstPhrases baseline that extracts the first $N$ keyphrase candidates from a document. We are not aware of any previous work reporting that baseline, yet, as we will see in §5, it achieves remarkably good results. Graph-based ranking models for keyphrase extraction are, perhaps, the most popular models in the literature. Therefore, as a second baseline, we use TextRank (Mihalcea and Tarau, 2004), which weights keyphrase candidates using a random walk over a word-graph representation of the document. In a nutshell, TextRank defines the importance of a word in terms of how it relates to other words in the document, and ranks candidates according to the words they contain.
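To make the graph-based ranking idea concrete, here is a self-contained sketch of TextRank-style scoring (Python with NumPy). It is an illustration only, not our actual re-implementation: the toy token list, the window size and the candidate phrases are ours, and a real system would obtain them from a POS tagger and a candidate selection heuristic (cf. §4.1).

```python
import numpy as np

# Toy input: tokens already filtered to nouns and adjectives, as TextRank
# prescribes (a real system would use a POS tagger here).
words = ["compatibility", "systems", "linear", "constraints", "natural",
         "numbers", "criteria", "compatibility", "systems", "linear",
         "diophantine", "equations", "strict", "inequations", "upper",
         "bounds", "components", "minimal", "set", "solutions"]

vocab = sorted(set(words))
idx = {w: i for i, w in enumerate(vocab)}

# Undirected co-occurrence graph: edge between words appearing within a
# window of 2 tokens (i.e. adjacent filtered tokens).
W = np.zeros((len(vocab), len(vocab)))
window = 2
for i, w in enumerate(words):
    for j in range(i + 1, min(i + window, len(words))):
        if w != words[j]:
            W[idx[w], idx[words[j]]] = W[idx[words[j]], idx[w]] = 1.0

# PageRank by power iteration (damping factor 0.85, as in TextRank).
d = 0.85
scores = np.full(len(vocab), 1.0 / len(vocab))
out_deg = W.sum(axis=1)
for _ in range(50):
    scores = (1 - d) / len(vocab) + d * W.T @ (scores / np.maximum(out_deg, 1e-12))

# A candidate phrase is scored by the sum of the scores of its words.
candidates = [("linear", "constraints"), ("linear", "diophantine", "equations"),
              ("minimal", "set"), ("natural", "numbers")]
ranking = sorted(candidates, key=lambda c: -sum(scores[idx[w]] for w in c))
print(ranking)
```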
The third baseline, TF$\times$IDF (Salton and Buckley, 1988), has been repeatedly used in previous comparative studies (Kim et al., 2010; Meng et al., 2017, inter alia). In contrast with the other two baselines, which do not require any resources whatsoever (beyond the document itself), TF$\times$IDF makes use of statistics collected from unlabelled data to weight keyphrase candidates. As such, it often gives better results, in some cases even on par with state-of-the-art models (Ye and Wang, 2018). ### 3.2. Unsupervised models Annotated data are not always available or easy to obtain, which motivates the further development of unsupervised models for keyphrase extraction. Besides, looking back at previous work, most attempts to address this problem employ unsupervised approaches. In this study, we selected three recent state-of-the-art models based on their reported performance. The first model we investigate is PositionRank (Florescu and Caragea, 2017), a graph-based model that incorporates two features (position and frequency) into a biased PageRank algorithm. This model operates at the word level, and assigns a score to each candidate using the sum of its individual word scores. As such, it suffers from over-generation errors (these occur when a model correctly outputs a keyphrase because it contains an important word, but at the same time erroneously predicts other keyphrases because they contain the same word) (Hasan and Ng, 2014), but still achieves good performance on short texts. The second model we consider, MPRank (Boudin, 2018), relies on a multipartite graph representation to enforce topical diversity while ranking keyphrase candidates. It includes a mechanism to incorporate keyphrase selection preferences in order to introduce a bias towards candidates occurring first in the document. MultipartiteRank was shown to consistently outperform other unsupervised graph-based ranking models. Both aforementioned models only exploit the document itself to extract keyphrases. The third model we include, EmbedRank (Bennani-Smires et al., 2018), leverages sentence embeddings for ranking keyphrase candidates. Candidates are weighted according to their cosine distance to the document embedding, while diversity in the selected keyphrases is promoted using Maximal Marginal Relevance (MMR) (Goldstein and Carbonell, 1998). Despite its simplicity, this model was shown to outperform other unsupervised models on short texts (abstracts and news). ### 3.3. Supervised models Supervised models can be further divided into two categories, depending on whether they rely on a neural network or not. Traditional supervised models treat the keyphrase extraction problem as a binary classification task. Here, we include such a model, namely Kea (Witten et al., 1999), in order to precisely quantify the performance gap with recent neural-based models. Kea uses a Naive Bayes classifier trained on a set of only two handcrafted features, which we have elected as baseline features: the TF$\times$IDF score of the candidate and the normalized position of its first occurrence in the document. Previous work has reported confusing and conflicting results for Kea (on SemEval, Meng et al. (2017) report an F@10 score of $2.6$ while Boudin (2016) reports a score of $19.3$), raising questions about how it actually performs. Neural models for keyphrase extraction rely on an encoder-decoder architecture (Cho et al., 2014; Sutskever et al., 2014) with an attention mechanism (Bahdanau et al., 2014; Luong et al., 2015).
Training these models require large amounts of annotated training data, and is therefore only possible on the KP20k and KPTimes datasets. The second supervised model we include in this study is CopyRNN (Meng et al., 2017), an encoder-decoder model that incorporates a copying mechanism (Gu et al., 2016) in order to be able to predict phrases that rarely occur. When properly trained, this model was shown to be very effective in extracting keyphrases from scientific abstracts. The third supervised model we use, CorrRNN (Chen et al., 2018), extends the aforementioned model by introducing correlation constraints. It employs a coverage mechanism (Tu et al., 2016) that diversifies attention distributions to increase topic coverage, and a review mechanism to avoid generating duplicates. As such, it produces more diverse and less redundant keyphrases. Note that only neural models have the ability to generate missing keyphrases, which in theory gives them a clear advantage over the other models. ## 4\. Experimental settings In addition to the variation in the choice of benchmark datasets and baselines, there are also major discrepancies in parameter settings and evaluation metrics between previous studies. For example, there is no point in contrasting the results in (Meng et al., 2017), (Florescu and Caragea, 2017) and (Teneva and Cheng, 2017), three papers about keyphrase extraction published in the same year at ACL, since neither benchmark datasets, parameter settings nor evaluation metrics are comparable. To address this problem, we use the same pre-processing tools, parameter settings and evaluation procedure across all our experiments. | Scientific articles | Paper abstracts | News articles ---|---|---|--- | PubMed | ACM | SemEval | Inspec | WWW | KP20k | DUC-2001 | KPCrowd | KPTimes Model | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP FirstPhrases | 15.4 | 14.7 | 13.6 | 13.5 | 13.8 | 10.5 | 29.3 | 27.9 | 10.2 | 09.8 | 13.5 | 12.6 | 24.6 | 22.3 | 17.1 | 16.5 | 09.2 | 08.4 TextRank | 01.8 | 01.8 | 02.5 | 02.4 | 03.5 | 02.3 | 35.8 | 31.4 | 08.4 | 05.6 | 10.2 | 07.4 | 21.5 | 19.4 | 07.1 | 09.5 | 02.7 | 02.5 TF$\times$IDF | 16.7 | 16.9 | 12.1 | 11.4 | 17.7 | 12.7 | 36.5 | 34.4 | 09.3 | 10.1 | 11.6 | 12.3 | 23.3 | 21.6 | 16.9 | 15.8 | 09.6 | 09.4 PositionRank | 04.9 | 04.6 | 05.7 | 04.9 | 06.8 | 04.1 | 34.2 | 32.2 | 11.6† | 08.4 | 14.1† | 11.2 | 28.6† | 28.0† | 13.4 | 12.7 | 08.5 | 06.6 MPRank | 15.8 | 15.0 | 11.6 | 11.0 | 14.3 | 10.6 | 30.5 | 29.0 | 10.8† | 10.4 | 13.6† | 13.3† | 25.6 | 24.9† | 18.2 | 17.0 | 11.2† | 10.1† EmbedRank | 03.7 | 03.2 | 02.1 | 02.1 | 02.5 | 02.0 | 35.6 | 32.5 | 10.7† | 07.7 | 12.4 | 10.0 | 29.5† | 27.5† | 12.4 | 12.4 | 04.0 | 03.3 Kea | 18.6† | 18.6† | 14.2† | 13.3 | 19.5† | 14.7† | 34.5 | 33.2 | 11.0† | 10.9† | 14.0† | 13.8† | 26.5† | 24.5† | 17.3 | 16.7 | 11.0† | 10.8† CopyRNN | 24.2† | 25.4† | 24.4† | 26.3† | 20.3† | 13.8 | 28.2 | 26.4 | 22.2† | 24.9† | 25.4† | 28.7† | 10.5 | 07.2 | 08.4 | 04.2 | 39.3† | 50.9† CorrRNN | 20.8† | 19.4† | 21.1† | 20.5† | 19.4 | 10.9 | 27.9 | 23.6 | 19.9† | 20.3† | 21.8† | 22.7 | 10.5 | 06.5 | 07.8 | 03.2 | 20.5† | 20.3† Table 2. Performance of keyphrase extraction models. † indicates significance over the baselines. ### 4.1. 
## 4. Experimental settings

In addition to the variation in the choice of benchmark datasets and baselines, there are also major discrepancies in parameter settings and evaluation metrics between previous studies. For example, there is no point in contrasting the results of (Meng et al., 2017), (Florescu and Caragea, 2017) and (Teneva and Cheng, 2017), three papers about keyphrase extraction published in the same year at ACL, since none of their benchmark datasets, parameter settings or evaluation metrics are comparable. To address this problem, we use the same pre-processing tools, parameter settings and evaluation procedure across all our experiments.

| Scientific articles | Paper abstracts | News articles
---|---|---|---
| PubMed | ACM | SemEval | Inspec | WWW | KP20k | DUC-2001 | KPCrowd | KPTimes
Model | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP
FirstPhrases | 15.4 | 14.7 | 13.6 | 13.5 | 13.8 | 10.5 | 29.3 | 27.9 | 10.2 | 09.8 | 13.5 | 12.6 | 24.6 | 22.3 | 17.1 | 16.5 | 09.2 | 08.4
TextRank | 01.8 | 01.8 | 02.5 | 02.4 | 03.5 | 02.3 | 35.8 | 31.4 | 08.4 | 05.6 | 10.2 | 07.4 | 21.5 | 19.4 | 07.1 | 09.5 | 02.7 | 02.5
TF$\times$IDF | 16.7 | 16.9 | 12.1 | 11.4 | 17.7 | 12.7 | 36.5 | 34.4 | 09.3 | 10.1 | 11.6 | 12.3 | 23.3 | 21.6 | 16.9 | 15.8 | 09.6 | 09.4
PositionRank | 04.9 | 04.6 | 05.7 | 04.9 | 06.8 | 04.1 | 34.2 | 32.2 | 11.6† | 08.4 | 14.1† | 11.2 | 28.6† | 28.0† | 13.4 | 12.7 | 08.5 | 06.6
MPRank | 15.8 | 15.0 | 11.6 | 11.0 | 14.3 | 10.6 | 30.5 | 29.0 | 10.8† | 10.4 | 13.6† | 13.3† | 25.6 | 24.9† | 18.2 | 17.0 | 11.2† | 10.1†
EmbedRank | 03.7 | 03.2 | 02.1 | 02.1 | 02.5 | 02.0 | 35.6 | 32.5 | 10.7† | 07.7 | 12.4 | 10.0 | 29.5† | 27.5† | 12.4 | 12.4 | 04.0 | 03.3
Kea | 18.6† | 18.6† | 14.2† | 13.3 | 19.5† | 14.7† | 34.5 | 33.2 | 11.0† | 10.9† | 14.0† | 13.8† | 26.5† | 24.5† | 17.3 | 16.7 | 11.0† | 10.8†
CopyRNN | 24.2† | 25.4† | 24.4† | 26.3† | 20.3† | 13.8 | 28.2 | 26.4 | 22.2† | 24.9† | 25.4† | 28.7† | 10.5 | 07.2 | 08.4 | 04.2 | 39.3† | 50.9†
CorrRNN | 20.8† | 19.4† | 21.1† | 20.5† | 19.4 | 10.9 | 27.9 | 23.6 | 19.9† | 20.3† | 21.8† | 22.7 | 10.5 | 06.5 | 07.8 | 03.2 | 20.5† | 20.3†

Table 2. Performance of keyphrase extraction models. † indicates significance over the baselines.

### 4.1. Parameter settings

We pre-process all the texts using the Stanford CoreNLP suite (Manning et al., 2014) for tokenization, sentence splitting and part-of-speech (POS) tagging. All non-neural models operate on a set of keyphrase candidates extracted from the input document. Selecting appropriate candidates is particularly important since it determines the upper bound on recall, and the amount of irrelevant candidates that models will have to deal with. For a fair and meaningful comparison, we use the same candidate selection heuristic across models. We follow the recommendation of Wang et al. (2014) and select sequences of adjacent nouns with one or more preceding adjectives, of length up to five words. Candidates are further filtered by removing those shorter than 3 characters or containing non-alphanumeric symbols.

We implemented the neural models in PyTorch (Paszke et al., 2017) using AllenNLP (Gardner et al., 2018), and the non-neural models using the pke toolkit (Boudin, 2016). As neural models require large amounts of annotated data to be trained, we trained our models on the KP20k dataset for both scientific papers and abstracts, and on KPTimes for news texts. We compute Document Frequency (DF) counts and learn Kea models on training sets. For datasets without training splits, we apply a leave-one-out cross-validation procedure on the test sets for calculating DF counts and training models. We use the optimal parameters suggested by the authors for each model, and leverage pre-trained sentence embeddings (https://github.com/epfml/sent2vec) for EmbedRank. We also found that the training set of KP20k contains a non-negligible number of documents from the test sets of other datasets. We removed those documents prior to training.

### 4.2. Evaluation metrics

Although there is no consensus as to which metric is the most reliable for keyphrase extraction, a popular evaluation strategy is to compare the top $k$ extracted keyphrases against the gold standard. We adopt this strategy and report the f-measure at the top 10 extracted keyphrases. In previous work, we often see differences in how gold standards are handled during evaluation. For example, some studies evaluate their models on the present and missing portions of the gold standard separately (Meng et al., 2017; Ye and Wang, 2018; Chen et al., 2018, inter alia), whereas other work uses the entire gold standard (Florescu and Caragea, 2017; Boudin, 2018, inter alia). We chose the latter because recent models, in addition to extracting keyphrases from the content of the document, are able to generate missing keyphrases. Following common practice, gold standard and output keyphrases are stemmed to reduce the number of mismatches. One issue with the f-measure is that the ranks of the correct keyphrases are not taken into account. To evaluate the overall ranking performance of the models, we also report the Mean Average Precision (MAP) scores of the ranked lists of keyphrases. We use the Student's paired t-test to assess statistical significance at the $0.05$ level. Both metrics are simple enough to state in a few lines of code, as sketched below.
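A per-document sketch of both metrics, with made-up toy data of ours; MAP is then the mean of these average-precision values over all documents, and we assume predictions and gold keyphrases have already been stemmed.

```python
def f_at_k(predicted, gold, k=10):
    """F-measure between the top-k predicted keyphrases and the gold standard."""
    topk = predicted[:k]
    correct = len(set(topk) & set(gold))
    if correct == 0:
        return 0.0
    p, r = correct / len(topk), correct / len(gold)
    return 2 * p * r / (p + r)

def average_precision(predicted, gold):
    """Precision averaged at the rank of each correctly predicted keyphrase."""
    hits, precisions = 0, []
    for rank, keyphrase in enumerate(predicted, start=1):
        if keyphrase in gold:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(gold) if gold else 0.0

predicted = ["neural network", "keyphras extract", "deep learn", "corpus"]
gold = {"keyphras extract", "deep learn", "evalu"}
print(f_at_k(predicted, gold), average_precision(predicted, gold))
```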
### 4.3. Replicability of results

In Table 3, we compare the results of our re-implementations against those reported in the original papers. We note that all models show comparable results. We observe the largest differences with the original scores for CopyRNN ($+2$) and CorrRNN ($-4.3$), which can easily be explained by minor differences in training parameters.

Model | Dataset (metric) | Orig. | Ours
---|---|---|---
PositionRank | WWW (F$@$8) | 12.3 | 11.7
MPRank | SemEval-2010 (F$@$10) | 14.5 | 14.3
EmbedRank | Inspec (F$@$10) | 37.1 | 35.6
CopyRNN | KP20k (F$@$10 on present) | 26.2 | 28.2
CorrRNN | ACM (F$@$10 on present) | 27.8 | 23.5

Table 3. Original vs. re-implementation scores.

## 5. Results

Results are presented in Table 2. First of all, we notice that no model significantly outperforms the baselines on all datasets. This is rather surprising, as one would expect that neural models would be consistently better than a simple TF$\times$IDF model, for example. Rather, we see that the TF$\times$IDF baseline is very competitive on long documents, while the FirstPhrases baseline performs remarkably well, especially on news texts. Still, overall, CopyRNN achieves the best performance with, in the case of KPTimes, MAP scores exceeding 50%. When we look only at unsupervised models, MPRank achieves the best results across datasets. Also, it comes as no surprise that Kea exhibits strong performance across datasets because it combines two effective features, as demonstrated by the results of the TF$\times$IDF and FirstPhrases baselines. Conversely, despite the addition of mechanisms for promoting diversity in the output, CorrRNN is almost always outperformed by CopyRNN, suggesting that the added correlation constraints are not effective at filtering out spurious keyphrases.

In light of the above, we can now answer the following question: "How much progress have we made since early models?". It is clear that neural-based models are the new state-of-the-art for keyphrase extraction, achieving F@10 scores up to three times those of previous models. That being said, CopyRNN, which is the best overall model, fails to consistently outperform the baselines on all datasets. One reason for that is the limited generalization ability of neural-based models (Meng et al., 2017; Chen et al., 2018; Gallina et al., 2019), which means that their performance degrades on documents that differ from the ones encountered during training. This is further confirmed by the extremely low performance of these models on DUC-2001 and KPCrowd. Much more work needs to be done in tackling this issue if neural models are to substitute for older supervised models. Perhaps most disappointing is the fact that state-of-the-art unsupervised models are still challenged by the TF$\times$IDF baseline. Here, we suspect the reasons are twofold. First, the models we have investigated do not use in-domain data, which may not only limit their performance but also, as in the case of EmbedRank which uses out-of-domain (Wikipedia) data, be detrimental to it. Second, unlike neural generative models, they are not able to produce keyphrases that do not occur in the source document, further limiting their potential effectiveness.

As outlined in §2, gold standards provided by lay annotators, such as authors and readers, exhibit strong inconsistency issues. One might therefore wonder "What is the impact of non-expert annotations on training and evaluating keyphrase extraction models?". Intuitively, models evaluated against these annotations are likely to receive lower scores because they make training more difficult (that is, assigning different keyphrases to documents about the same topic may confuse the model) while increasing the number of false negatives during evaluation.
This is exactly what we observe in Table 2, where the best scores for Inspec and KPTimes, whose gold standards are provided by professional indexers, are higher in magnitude than those of the other datasets. Precisely quantifying how much impact lay annotations have on performance is no easy task, as it implies a double-annotation process by both expert and non-expert annotators. Luckily enough, a small sample of documents from Inspec is also found in KP20k, allowing us to compare the performance of keyphrase models between both annotation types. Results are shown in Table 4. First, we see that overall performance is nearly cut in half when evaluating against the author-provided gold standard, suggesting that reported scores in previous studies are arguably underestimated. Second, neural models again do not show their superiority against indexer-assigned keyphrases, which advocates the need for more experiments on datasets that include expert annotations.

| $\text{F}@10$ | MAP
---|---|---
Model | I | A | I | A
FirstPhrases | 25.8 | 13.7 | 26.1 | 13.2
TextRank | 33.4 | 12.2 | 29.6 | 09.3
TF$\times$IDF | 34.6 | 14.2 | 33.3 | 16.1
PositionRank | 32.9 | 15.9 | 31.0 | 13.0
MPRank | 26.4 | 13.8 | 27.6 | 13.6
EmbedRank | 34.3 | 15.3 | 31.3 | 11.5
Kea | 32.5 | 15.2 | 31.9 | 15.9
CopyRNN | 33.7 | 28.9‡ | 29.8 | 33.8‡
CorrRNN | 28.6 | 25.3 | 24.2 | 28.2
Avg. | 31.3 | 17.2 | 29.4 | 17.2

Table 4. Results on a subset of 55 documents from Inspec for indexer (I) and author (A) gold annotations. ‡ indicates significance over every other model.

Figure 1. Average number of keyphrases in common between model outputs.

The third question we want to address in this study is "Which baselines and benchmark datasets should be included in future work for a better understanding of the pros and cons of a newly proposed model?". Having strong baselines to compare with is of utmost importance, and our results give an indication of which models are relevant. When properly trained, neural models drastically outperform all other models and represent the state-of-the-art. Since CopyRNN achieves the best results, it should be included in future work for comparison. In an unsupervised setting, or in a data-sparse scenario where neural models cannot be applied, the picture is less clear. To help us understand which model is worth investigating, we conducted an additional set of experiments aimed at comparing the outputs of all models in a pairwise manner. The motivation behind these experiments is that including multiple models that behave similarly is of limited interest. Similarities between model outputs, viewed in terms of the number of keyphrases in common, are graphed as a heatmap in Figure 1 (a sketch of this pairwise comparison follows below). Overall, we observe different patterns for each source of documents. The shorter the document is, the more similar the outputs are, which is mostly due to a smaller search space (that is, a smaller number of keyphrase candidates). We note that the three best unsupervised models, namely FirstPhrases, MPRank and TF$\times$IDF, generate very similar keyphrases (up to 42% identical). Considering this, and given their reported performances (Table 2), we argue that TF$\times$IDF (or Kea if seed training data is available) should be considered as a strong baseline in subsequent work.
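The pairwise counts behind Figure 1 amount to simple set intersections over the (stemmed) top-k outputs; a toy sketch of ours, with invented outputs standing in for the real model predictions:

```python
from itertools import combinations

# Toy top-k outputs (already stemmed); in practice these come from the models.
outputs = {
    "FirstPhrases": {"neural network", "keyphras extract", "benchmark"},
    "TFxIDF":       {"keyphras extract", "benchmark", "evalu metric"},
    "MPRank":       {"keyphras extract", "topic model", "benchmark"},
}

for m1, m2 in combinations(outputs, 2):
    shared = len(outputs[m1] & outputs[m2])   # keyphrases the two lists share
    print(f"{m1} / {m2}: {shared} keyphrases in common")
```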
These recommendations of baselines also affect the choice of which benchmark datasets one has to use. As neural models are data-hungry, KP20k and KPTimes are the default options for paper abstracts and news articles. For scientific articles, we recommend using SemEval for two reasons: 1) it is widely used by existing studies; and 2) it provides a double-annotated gold standard (author- and reader-assigned keyphrases) that alleviates annotation inconsistencies to some extent.

Our experiments highlight several issues in evaluating keyphrase extraction models with existing benchmark datasets. Another way of assessing the effectiveness of these models would be to explore their impact on other tasks as an extrinsic evaluation. To the best of our knowledge, there is no previously published research on that matter, despite the many downstream tasks that already benefit from keyphrase information, such as article recommendation (Collins and Beel, 2019) or browsing interfaces (Gutwin et al., 1999) in digital libraries. This points to an interesting future direction that allows for a deeper understanding of the limitations of current models.

## 6. Conclusion

This paper presents a large-scale evaluation of keyphrase extraction models conducted on multiple benchmark datasets from different sources and domains. Results indicate that keyphrase extraction is still an open research question, with state-of-the-art neural-based models still challenged by simple baselines on some datasets. We hope that this work will serve as a point of departure for more rigorous analysis and evaluation of proposed keyphrase extraction models. We provide all the code and data in a public repository (the link will appear here after the review period), as well as a public leaderboard to facilitate the comparison between models.

## References

* Augenstein et al. (2017) Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications. In _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_. Association for Computational Linguistics, Vancouver, Canada, 546–555. http://www.aclweb.org/anthology/S17-2091
* Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_ (2014).
* Barker et al. (1972) Frances H Barker, Douglas C Veal, and Barry K Wyatt. 1972. Comparative efficiency of searching titles, abstracts, and index terms in a free-text data base. _Journal of Documentation_ 28, 1 (1972), 22–36.
* Bennani-Smires et al. (2018) Kamil Bennani-Smires, Claudiu Musat, Andreea Hossmann, Michael Baeriswyl, and Martin Jaggi. 2018. Simple Unsupervised Keyphrase Extraction using Sentence Embeddings. In _Proceedings of the 22nd Conference on Computational Natural Language Learning_. Association for Computational Linguistics, Brussels, Belgium, 221–229. http://www.aclweb.org/anthology/K18-1022
* Berend (2011) Gábor Berend. 2011. Opinion Expression Mining by Exploiting Keyphrase Extraction. In _Proceedings of 5th International Joint Conference on Natural Language Processing_. Asian Federation of Natural Language Processing, Chiang Mai, Thailand, 1162–1170. http://www.aclweb.org/anthology/I11-1130
* Boudin (2016) Florian Boudin. 2016. pke: an open source python-based keyphrase extraction toolkit. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations_. The COLING 2016 Organizing Committee, Osaka, Japan, 69–73. http://aclweb.org/anthology/C16-2015
* Boudin (2018) Florian Boudin. 2018. Unsupervised Keyphrase Extraction with Multipartite Graphs. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_. Association for Computational Linguistics, New Orleans, Louisiana, 667–672. http://www.aclweb.org/anthology/N18-2105
* Boudin et al. (2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016. How Document Pre-processing affects Keyphrase Extraction Performance. In _Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)_. The COLING 2016 Organizing Committee, Osaka, Japan, 121–128. http://aclweb.org/anthology/W16-3917
* Caragea et al. (2014) Cornelia Caragea, Florin Adrian Bulgarov, Andreea Godea, and Sujatha Das Gollapalli. 2014. Citation-Enhanced Keyphrase Extraction from Research Papers: A Supervised Approach. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. Association for Computational Linguistics, Doha, Qatar, 1435–1446. http://www.aclweb.org/anthology/D14-1150
* Chen et al. (2018) Jun Chen, Xiaoming Zhang, Yu Wu, Zhao Yan, and Zhoujun Li. 2018. Keyphrase Generation with Correlation Constraints. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Brussels, Belgium, 4057–4066. http://www.aclweb.org/anthology/D18-1439
* Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. Association for Computational Linguistics, Doha, Qatar, 1724–1734. http://www.aclweb.org/anthology/D14-1179
* Collins and Beel (2019) Andrew Collins and Jöran Beel. 2019. Document Embeddings vs. Keyphrases vs. Terms for Recommender Systems: A Large-Scale Online Evaluation. In _19th ACM/IEEE Joint Conference on Digital Libraries, JCDL 2019, Champaign, IL, USA, June 2-6, 2019_. 130–133. https://doi.org/10.1109/JCDL.2019.00027
* Evans and Zhai (1996) David A. Evans and Chengxiang Zhai. 1996. Noun Phrase Analysis in Large Unrestricted Text for Information Retrieval. In _Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Santa Cruz, California, USA, 17–24. https://doi.org/10.3115/981863.981866
* Florescu and Caragea (2017) Corina Florescu and Cornelia Caragea. 2017. PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Vancouver, Canada, 1105–1115. http://aclweb.org/anthology/P17-1102
* Gallina et al. (2019) Ygor Gallina, Florian Boudin, and Beatrice Daille. 2019. KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents. In _Proceedings of the 12th International Conference on Natural Language Generation_. Association for Computational Linguistics, Tokyo, Japan, 130–135. https://doi.org/10.18653/v1/W19-8617
* Gardner et al. (2018) Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A Deep Semantic Natural Language Processing Platform.
In _Proceedings of Workshop for NLP Open Source Software (NLP-OSS)_. Association for Computational Linguistics, Melbourne, Australia, 1–6. https://doi.org/10.18653/v1/W18-2501
* Goldstein and Carbonell (1998) Jade Goldstein and Jaime Carbonell. 1998. SUMMARIZATION: (1) USING MMR FOR DIVERSITY-BASED RERANKING AND (2) EVALUATING SUMMARIES. In _Proceedings of the TIPSTER Text Program: Phase III_. Association for Computational Linguistics, Baltimore, Maryland, USA, 181–195. https://doi.org/10.3115/1119089.1119120
* Gollapalli et al. (2015) Sujatha Das Gollapalli, Cornelia Caragea, Xiaoli Li, and C. Lee Giles (Eds.). 2015. _Proceedings of the ACL 2015 Workshop on Novel Computational Approaches to Keyphrase Extraction_. Association for Computational Linguistics, Beijing, China. http://www.aclweb.org/anthology/W15-36
* Gu et al. (2016) Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Berlin, Germany, 1631–1640. http://www.aclweb.org/anthology/P16-1154
* Gutwin et al. (1999) Carl Gutwin, Gordon Paynter, Ian Witten, Craig Nevill-Manning, and Eibe Frank. 1999. Improving Browsing in Digital Libraries with Keyphrase Indexes. _Decis. Support Syst._ 27, 1-2 (Nov. 1999), 81–104. https://doi.org/10.1016/S0167-9236(99)00038-X
* Hasan and Ng (2014) Kazi Saidul Hasan and Vincent Ng. 2014. Automatic Keyphrase Extraction: A Survey of the State of the Art. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Baltimore, Maryland, 1262–1273. http://www.aclweb.org/anthology/P14-1119
* Hulth (2003) Anette Hulth. 2003. Improved Automatic Keyword Extraction Given More Linguistic Knowledge. In _Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing_, Michael Collins and Mark Steedman (Eds.). 216–223. http://www.aclweb.org/anthology/W03-1028.pdf
* Hulth and Megyesi (2006) Anette Hulth and Beáta B. Megyesi. 2006. A Study on Automatically Extracted Keywords in Text Categorization. In _Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Sydney, Australia, 537–544. https://doi.org/10.3115/1220175.1220243
* Hussey et al. (2012) Richard Hussey, Shirley Williams, Richard Mitchell, and Ian Field. 2012. A comparison of automated keyphrase extraction techniques and of automatic evaluation vs. human evaluation. _International Journal on Advances in Life Sciences_ 4, 3 and 4 (2012), 136–153. http://centaur.reading.ac.uk/32266/
* Kedzie et al. (2018) Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. Content Selection in Deep Learning Models of Summarization. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Brussels, Belgium, 1818–1828. http://www.aclweb.org/anthology/D18-1208
* Kim et al. (2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles. In _Proceedings of the 5th International Workshop on Semantic Evaluation_. Association for Computational Linguistics, Uppsala, Sweden, 21–26.
http://www.aclweb.org/anthology/S10-1004
* Krapivin et al. (2009) Mikalai Krapivin, Aliaksandr Autaeu, and Maurizio Marchese. 2009. _Large dataset for keyphrases extraction_. Technical Report. University of Trento.
* Litvak and Last (2008) Marina Litvak and Mark Last. 2008. Graph-Based Keyword Extraction for Single-Document Summarization. In _Coling 2008: Proceedings of the workshop Multi-source Multilingual Information Extraction and Summarization_. Coling 2008 Organizing Committee, Manchester, UK, 17–24. http://www.aclweb.org/anthology/W08-1404
* Luong et al. (2015) Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Lisbon, Portugal, 1412–1421. http://aclweb.org/anthology/D15-1166
* Manning et al. (2014) Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In _Association for Computational Linguistics (ACL) System Demonstrations_. 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010
* Mao and Lu (2017) Yuqing Mao and Zhiyong Lu. 2017. MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank. _Journal of Biomedical Semantics_ 8, 1 (17 Apr 2017), 15. https://doi.org/10.1186/s13326-017-0123-3
* Marcu (1997) Daniel Marcu. 1997. The Rhetorical Parsing of Unrestricted Natural Language Texts. In _Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Madrid, Spain, 96–103. https://doi.org/10.3115/976909.979630
* Marujo et al. (2012) Luís Marujo, Anatole Gershman, Jaime Carbonell, Robert Frederking, and João P. Neto. 2012. Supervised Topical Key Phrase Extraction of News Stories using Crowdsourcing, Light Filtering and Co-reference Normalization. In _Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)_ (23-25), Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (Eds.). European Language Resources Association (ELRA), Istanbul, Turkey.
* Meng et al. (2017) Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep Keyphrase Generation. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, 582–592. https://doi.org/10.18653/v1/P17-1054
* Mihalcea and Tarau (2004) Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing Order into Texts. In _Proceedings of EMNLP 2004_, Dekang Lin and Dekai Wu (Eds.). Association for Computational Linguistics, Barcelona, Spain, 404–411.
* Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In _NIPS 2017 Workshop Autodiff_.
* Salton and Buckley (1988) Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. _Information Processing & Management_ 24, 5 (1988), 513–523. https://doi.org/10.1016/0306-4573(88)90021-0
* Schutz (2008) Alexander Thorsten Schutz. 2008.
Keyphrase extraction from single documents in the open domain exploiting linguistic and statistical methods. _Master's thesis, National University of Ireland_ (2008).
* Sood et al. (2007) Sanjay Sood, Sara Owsley, Kristian J. Hammond, and Larry Birnbaum. 2007. TagAssist: Automatic Tag Suggestion for Blog Posts. In _Proceedings of the First International Conference on Weblogs and Social Media, ICWSM 2007, Boulder, Colorado, USA, March 26-28, 2007_. http://www.icwsm.org/papers/paper10.html
* Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In _Advances in Neural Information Processing Systems 27_, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 3104–3112. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf
* Teneva and Cheng (2017) Nedelina Teneva and Weiwei Cheng. 2017. Salience Rank: Efficient Keyphrase Extraction with Topic Modeling. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_. Association for Computational Linguistics, Vancouver, Canada, 530–535. http://aclweb.org/anthology/P17-2084
* Tu et al. (2016) Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling Coverage for Neural Machine Translation. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Berlin, Germany, 76–85. https://doi.org/10.18653/v1/P16-1008
* Wan and Xiao (2008) Xiaojun Wan and Jianguo Xiao. 2008. Single Document Keyphrase Extraction Using Neighborhood Knowledge. In _Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2_ _(AAAI'08)_. AAAI Press, 855–860. http://dl.acm.org/citation.cfm?id=1620163.1620205
* Wang et al. (2014) Rui Wang, Wei Liu, and Chris McDonald. 2014. How Preprocessing Affects Unsupervised Keyphrase Extraction. In _Computational Linguistics and Intelligent Text Processing_, Alexander Gelbukh (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 163–176.
* Wang et al. (2015) Rui Wang, Wei Liu, and Chris McDonald. 2015. Using Word Embeddings to Enhance Keyword Identification for Scientific Publications. In _Databases Theory and Applications_, Mohamed A. Sharaf, Muhammad Aamir Cheema, and Jianzhong Qi (Eds.). Springer International Publishing, Cham, 257–268.
* Witten et al. (2009) Ian H Witten, David Bainbridge, and David M Nichols. 2009. _How to build a digital library_. Morgan Kaufmann.
* Witten et al. (1999) Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning. 1999. KEA: Practical Automatic Keyphrase Extraction. In _Proceedings of the Fourth ACM Conference on Digital Libraries_ _(DL '99)_. ACM, New York, NY, USA, 254–255. https://doi.org/10.1145/313238.313437
* Ye and Wang (2018) Hai Ye and Lu Wang. 2018. Semi-Supervised Learning for Neural Keyphrase Generation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Brussels, Belgium, 4142–4153. http://www.aclweb.org/anthology/D18-1447
* Zesch and Gurevych (2009) Torsten Zesch and Iryna Gurevych. 2009. Approximate Matching for Evaluating Keyphrase Extraction. In _Proceedings of the International Conference RANLP-2009_. Association for Computational Linguistics, Borovets, Bulgaria, 484–489.
http://www.aclweb.org/anthology/R09-1086
* Zhai (1997) Chengxiang Zhai. 1997. Fast Statistical Parsing of Noun Phrases for Document Indexing. In _Fifth Conference on Applied Natural Language Processing_. Association for Computational Linguistics, Washington, DC, USA, 312–319. https://doi.org/10.3115/974557.974603
Parton models and frame independence of high-energy cross-sections

O.V. Kancheli (Email: <EMAIL_ADDRESS>)

Institute for Theoretical and Experimental Physics, 117218, Moscow, Russia

###### Abstract

We describe some ambiguities that arise when one calculates cross-sections in parton models at high energies, and the associated limitations on the asymptotics of high-energy amplitudes that follow from the condition of boost-invariance of cross-sections. It turns out that the resulting constraints are of the same type as those following from the t-channel unitarity conditions, so one can suppose that this similarity, by its nature, has much more general grounds.

## 1 Introduction

There are two main theoretical approaches to the study of the behavior of high-energy amplitudes and cross-sections. In one approach, we directly calculate the amplitudes by summing the contributions of the Feynman diagrams of the corresponding field theory, or use some effective theory like reggeon diagrams or various string-like dual models. In the other, parton-like approach to high-energy collisions (see [1]-[7] for a few useful reviews), we usually consider separately three main stages of the evolution of the system in the process of particle collision.

Firstly, one constructs the quantum state $\Psi(P)$ of a high-energy particle with momentum $P\gg m$ in terms of a superposition

$\Psi(P)=\sum_{n}\int_{\{k_{i}\}}f_{n}(P,\{k_{i}\})\,|n,\{k_{i}\}>$ (1)

of the n-particle states $|n,\{k_{i}\}>$ of some "primary" constituents, the partons, with 3-momenta $\{k_{i}\}$. The "choice" of these partons is not unique: partons can be bare point-like particles, particles with varying virtuality, QCD color dipoles, fast string configurations, distributions of Coulomb-like fields, etc. The state $\Psi(\vec{P})$ must fulfill the Schroedinger equation

$\hat{H}\,\Psi(\vec{P})=\sqrt{\vec{P}^{2}+m^{2}}\;\Psi(\vec{P})~,$

where the Hamiltonian $\hat{H}$ is a function of the parton fields, so that $\Psi(\vec{P})$ is the eigenfunction of $\hat{H}$ with the eigenvalue determined by the particle's physical mass $m$. After that, one can use such a state $\Psi(P_{1})$ to calculate its interaction with some low-energy target, or with another fast particle in the state $\Psi(P_{2})$, in terms of "simple" amplitudes of parton interaction. There is also a third stage, corresponding to the evolution of the final state, when the outgoing partons transform and combine into physical particles (hadronization, ...). But this stage is often not very restrictive, especially when we calculate various integrated cross-sections, and we will not consider it in this article.

It is essential that, with growing energy, the structure of the parton state becomes more and more complicated in all theories containing vector (like QCD) or tensor (gravity) fields: the mean parton number in the state $\Psi(P)$ and the average transverse size of the region the partons occupy grow with $P$.

When we consider the collision of two fast particles in the parton states $\Psi(P_{1})$ and $\Psi(P_{2})$ at some large $s=(P_{1}+P_{2})^{2}\gg m^{2}$, we can choose for this any longitudinal Lorentz system. But the resulting values of the cross-sections of the various processes must not depend on this choice of frame.
And this is a nontrivial condition in the parton approach, because in different longitudinal systems (that is, for various $P_{1}$ and $P_{2}$ at the same value of $s$) different parton configurations first meet one another at the moment of collision. Moreover, by choosing a different system we can also shift the dynamics from stage one to stage two and vice versa. If we carried out all calculations precisely, with a hermitian Hamiltonian, we could probably be sure that all restrictions coming from Lorentz invariance and the unitarity conditions would be satisfied. But if we make approximations, especially ones dictated by phenomenological or pictorial arguments, then the unitarity conditions themselves are probably the only general way to check that the results are not contradictory.

Various restrictions from t-channel unitarity are very essential for the amplitudes describing high-energy hadron interactions, and they are directly taken into account in reggeon amplitudes [10]. But in parton approaches it is not evident how to take them into account. In reggeon field theory and in the dual (string) models the t-unitarity conditions are automatically fulfilled, but at high reggeon (pomeron) density such an approach can become unreliable. The parton approach has no problems with high parton density, but there is no direct way to control the possible restrictions coming from t-unitarity.

One can hope that the longitudinal Lorentz (boost) invariance of all cross-sections calculated in a parton approach is in some sense equivalent to the mean form of t-unitarity for multiparticle amplitudes. So, if we calculate any cross-section using the partonic wave functions $\Psi(P_{a})$ and $\Psi(P_{b})$ of fast colliding hadrons with momenta $P_{a}$, $P_{b}$, then we expect this cross-section to be the same in all longitudinal Lorentz frames, that is, if we calculate the cross-sections using $\Psi(L(\vartheta)P_{a})$ and $\Psi(L^{-1}(\vartheta)P_{b})$, where $L(\vartheta)$ is a longitudinal boost. It is essential that in the parton picture such boosts $L(\vartheta)$ act on the hadron's Fock state in a very nontrivial way, changing the number of partons, etc.

No precise arguments for such a general proposition (boost invariance of parton cross-sections $\simeq$ t-unitarity) are known, although it is by itself natural that calculations of cross-sections in the parton picture must give a frame-independent answer. This is also confirmed, in particular, if we give a partonic interpretation to reggeon diagrams by t-cutting them at various intermediate rapidities, as when calculating various multiparticle inclusive cross-sections.

In this article (whose material partially overlaps with the author's paper [8]) we consider some examples illustrating how the requirement of boost-invariance essentially restricts the structure of high-energy collision dynamics. We will see that it restricts it in the same way as the conditions of t-unitarity do.

## 2 Restrictions on parton states from the boost invariance of high-energy collision cross-sections. Simple examples

In this section we illustrate how the requirement of frame independence (boost-invariance, BI) restricts the behavior of high-energy cross-sections calculated in the parton approach. We suppose that partons are point-like particles with perturbative interaction and consider some examples which show how the BI condition works.
We also choose very high energy interactions, where the mean number of partons in the state is large, so that one can at first consider only states with the mean number of partons and only afterwards take into account corrections from the other components of the Fock wave function of a fast particle. The picture of the interaction is then almost quasiclassical.

We consider the behavior under a boost transformation of the inelastic cross-section $\sigma_{in}$, or of the connected quantity, the transparency $T=1-\sigma_{in}=|S|^{2}$, which is often more sensitive to the breaking of BI. We choose some frame where the colliding particles have rapidities $y_{1}=y$ and $y_{2}=Y-y$, where $Y=\ln(s/m^{2})$, and require that the calculated cross-sections do not depend on $y$. We begin with the simplest parton models of a fast hadron: the rare parton gas state and the black disk state.

### 2.1 Collision of rare-gas-like parton states

Let us consider the collision of two particles which can be represented as partonic clouds in the state of a very rare gas. This is the case usually described by reggeon diagrams which, by their construction, include the t-unitarity requirements. Let the mean numbers of partons in the colliding hadrons be $n(y)$, $n(Y-y)$, and the mean transverse radii of the regions they occupy be $R(y)$, $R(Y-y)$, respectively. Then the total inelastic cross-section can be expressed as

$\sigma_{in}(Y)=\sigma_{0}\,n(y)\,n(Y-y)-c_{1}\,\sigma_{0}\,n(y)n(Y-y)\Big(\frac{\sigma_{0}\,n(y)}{R^{2}(y)}+\frac{\sigma_{0}\,n(Y-y)}{R^{2}(Y-y)}+\frac{\sigma_{0}\,n(y)\,n(Y-y)}{R^{2}(y)+R^{2}(Y-y)}\Big)+\ldots~,$ (2.1)

where $\sigma_{0}$ is the parton-parton cross-section and $c_{1}\sim 1$. The first term in (2.1) corresponds to the collision of at least one pair of partons. The next terms describe corrections from screening and multiple collisions. (The cross-sections of local interactions of point-like particles decrease as a function of their relative energy as $\sigma_{0}(s)\sim 1/s$. As a result, only the numbers of low-energy partons $n(y)$, $n(Y-y)$ of the colliding particles in the chosen coordinate system in fact enter (2.1).)

For a rare parton gas one can, in a first approximation, neglect multiple collisions and screening, that is, keep only the first term in (2.1). Then, from the requirement of the independence of $\sigma_{0}\,n(y)n(Y-y)$ on $y$, follows the unique solution

$n(y)=n_{0}e^{y\Delta_{0}}$ (3)

with some real constants $n_{0}$, $\Delta_{0}$. The resulting behavior

$\sigma_{in}(Y)=\sigma_{0}\,n_{0}^{2}\,e^{Y\Delta_{0}}$ (4)

corresponds, in the elastic amplitude, to a regge pole in the complex angular momentum plane (and not to a cut or some more complicated regge singularity). In the relativistic Regge approach this condition follows [9] solely from the 2-particle t-unitarity of the elastic amplitude. Note that the coefficient in (4) is in fact factorized: for the collision of different particles a+b one must replace $n^{2}\rightarrow n_{a}n_{b}$. This factorization in the regge approach also follows from t-unitarity.
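As a quick numerical illustration (our own sketch, with arbitrary constants), one can verify directly that the leading term of (2.1) is frame independent for the exponential form (3), and for no other growth law:

```python
import numpy as np

n0, delta0, sigma0, Y = 1.3, 0.1, 0.5, 20.0

def n(y):
    """Parton number of Eq. (3): the only y-dependence that works."""
    return n0 * np.exp(delta0 * y)

def n_bad(y):
    """Any non-exponential growth law, for comparison."""
    return n0 * (1.0 + y) ** 2

for y in (0.0, 5.0, 10.0, 20.0):              # different longitudinal frames
    good = sigma0 * n(y) * n(Y - y)           # constant: sigma0*n0**2*exp(delta0*Y)
    bad = sigma0 * n_bad(y) * n_bad(Y - y)    # varies with the frame choice
    print(f"y = {y:5.1f}   invariant: {good:.4f}   frame-dependent: {bad:.1f}")
```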
Moreover, it is interesting to consider [6] the behavior of the cross-section $\sigma_{in}$ at a definite impact parameter $B$, normalized so that

$\sigma_{in}(Y)=\int d^{2}B~\sigma_{in}(Y,y,B)~.$ (5)

In this case the analog of the first term in (2.1) can be represented as

$\sigma_{in}(Y,y,B)=\sigma_{0}\int d^{2}x_{\bot}~\rho(y,|x_{\bot}|)~\rho(Y-y,|B-x_{\bot}|)~,$ (6)

where $\rho(y,x_{\bot})$ is the transverse parton density ($n(y)=\int d^{2}x_{\bot}\rho(y,x_{\bot})$). Then, from the frame independence of $\sigma_{in}(Y,y,B)$, the form of the transverse parton density $\rho(y,x_{\perp})$ can be essentially restricted. The condition of y-independence can be written as

$\frac{\partial}{\partial y}~\sigma_{in}(Y,y,B)=0~.$ (7)

Passing to the variable conjugate to $x_{\bot}$, $\rho(y,x_{\bot})=\int d^{2}k\cdot e^{ikx_{\bot}}~\tilde{\rho}(y,k)$, we come from (7) to the equation

$\frac{\partial}{\partial y}\big(\tilde{\rho}(y,k)~\tilde{\rho}(Y-y,k)\big)=0~,$

which has the solution $\tilde{\rho}(y,k)=f_{1}(k)\cdot e^{yf_{2}(k)}$, and then as a result

$\rho(y,x_{\bot})\sim\int d^{2}k~e^{ikx_{\bot}}~f_{1}(k)~e^{yf_{2}(k)}~,$ (8)

where $f_{1}$, $f_{2}$ are arbitrary functions of $k$. For $y\rightarrow\infty$ the integral in (8) can be taken by the steepest descent method, so that only the neighborhoods of the zeros of $\partial f_{2}(k)/\partial k$ are essential. From the positivity of the parton density $\rho$ it follows that $f_{2}$ is positive, and so the dominant contribution must come from the region $k\sim 0$; otherwise $\rho(y,x_{\bot})$ would oscillate in $x_{\bot}$. So in the essential region $f_{2}(k)\simeq c_{1}-c_{2}k^{2}$, $c_{2}>0$, and estimating the integral (8) we come to the expression for the density of low-energy partons

$\rho(y,x_{\bot})\sim y^{-1}~e^{\big(c_{1}y-x_{\bot}^{2}/4c_{2}r_{0}^{2}y\big)}~,~~~c_{2}>0~.$ (9)

The expression (9) corresponds to the Gaussian form of the parton distribution in $x_{\bot}$, which usually results from the diffusion of partons in the $x_{\bot}$ plane during the parton cascading. The mean radius of the low-energy parton cloud, $R(y)\sim r_{0}\sqrt{y}$, is also fixed here only by the condition of frame independence. In the elastic amplitude, Eq. (9) corresponds to the contribution of a regge pole with the trajectory $\alpha(t)=1+\Delta+\alpha^{\prime}t$, where $\Delta=c_{1}$, $\alpha^{\prime}=c_{2}$.

If we make the next step and impose the condition of y-independence on the sum of the two terms on the right side of (2.1), assuming that the correction to (3) is small, we obtain instead of (3) the corrected expression

$n(y)=n_{0}~e^{\Delta_{0}y}-a_{2}~n_{0}^{2}\frac{\sigma_{0}}{R^{2}(y)}~e^{2\Delta_{0}y}+\ldots$ (10)

From here it is simple to conclude that

$\sigma_{in}(Y)=n_{0}^{2}~\sigma_{0}e^{\Delta_{0}Y}-n_{0}^{4}\frac{a_{2}\sigma_{0}^{2}}{R^{2}(Y)}~e^{2\Delta_{0}Y}~.$ (11)

The second term in (11) corresponds to the contribution of the two-reggeon cut, whose structure is almost completely fixed here by the boost-invariance. The arbitrary coefficient $a_{2}>1$ depends on the weight of the diffractive amplitudes entering the two-reggeon emission vertex. The possible next terms in (10), corresponding to higher regge cuts, can be found in the same way by iteratively applying the boost-invariance condition to the combinations of screening terms in the expression (2.1) for $\sigma_{in}$.
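The statement that only the Gaussian profile survives can also be checked numerically: the convolution (6) of two diffusive Gaussians whose widths grow as $y$ and $Y-y$ depends only on $Y$. A small grid-integration sketch of ours, with arbitrary values for $c_{1}$, $c_{2}$, $r_{0}$:

```python
import numpy as np

c1, c2, r0, Y, B = 0.2, 0.25, 1.0, 12.0, 3.0   # arbitrary intercept, slope, scale

def rho(y, b2):
    """Profile of Eq. (9): a Gaussian widening diffusively with rapidity y."""
    return np.exp(c1 * y - b2 / (4.0 * c2 * r0**2 * y)) / y

x = np.linspace(-40.0, 40.0, 801)              # transverse-plane grid
X, Z = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

for y in (2.0, 4.0, 6.0):                      # three different longitudinal frames
    integrand = rho(y, X**2 + Z**2) * rho(Y - y, (X - B)**2 + Z**2)
    print(y, integrand.sum() * dA)             # the same value in every frame
```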
Thus, it can be seen that for rare parton states we come to restrictions on their structure of the same kind that arise from reggeon diagrams and are defined by t-unitarity. At the end of this section, note that at all currently available energies the dominant high-energy hadron interactions are well described by the regge approach with soft pomeron exchange and the respective cuts. This directly corresponds to a Gauss-like parton distribution, consistent with parton frame independence.

### 2.2 Collision of black disks

Now let us consider the opposite limiting case of colliding parton clouds, when the mean parton density is very high and the partons fill a transverse disk with a radius $R(y)$ that depends on the particle energy $E=me^{y}$. Then the total inelastic cross-section can be determined from purely geometrical considerations: it is defined by the area of the impact parameter space corresponding to the overlap of the colliding black disks:

$\sigma_{in}(Y)=\pi\Big(R(y)+R(Y-y)\Big)^{2}~.$ (12)

From the condition of the independence of the right side of Eq. (12) on $y$, there evidently follows the unique solution

$R(y)=r_{0}\cdot y+r_{1}~.$ (13)

It is interesting that in this case we come directly to asymptotically constant cross-sections (when $r_{0}=0$), or to the Froissart-type behavior of cross-sections

$\sigma_{in}(Y)\simeq\pi r_{0}^{2}Y^{2}+4\pi r_{0}r_{1}Y+4\pi r_{1}^{2}~.$ (14)

In the Froissart case the elastic cross-section is diffractive and $\sigma_{el}=\sigma_{in}$. The term $\sim r_{0}r_{1}Y$ in (14) corresponds to diffraction generation, as is natural in the Froissart case.

### 2.3 Collision of grey parton disks

The real parton disk (even at $Y\gg 1$) cannot be absolutely black, because the parton density at any finite particle energy is finite. Besides, local fluctuations of the parton density can lower it in individual events, and this leads to a growth of the local transparency of such disks. For such parton disks the conditions of BI can lead to rather strong restrictions on the structure of "grey" parton states and their interactions.

Firstly, consider the collision of grey disks of constant greyness, when the mean transverse parton density is stabilized at some fixed value and does not grow with energy, i.e. the local disk transparency also does not change with energy. (One can expect this type of behavior in (2+1)D QCD, which is soft, if parton saturation takes place there [8].) It is then easy to see that the condition of boost invariance cannot be fulfilled at all for such models. In the lab frame of one particle the transparency

$T_{lab}(Y,B)=const(Y)~~~\mathrm{at}~~~Y\rightarrow\infty~,$

because at every $B$ only a finite number ($\sim 1$) of partons must penetrate through the grey parton disk of the other fast particle. But in the center-of-mass system, at the same impact parameter, a large number of partons $N_{12}$ must penetrate. For a grey disk $N_{12}\sim S_{12}(y,Y,B)$, the transverse area of the overlap region of the two disks. For a disk radius growing with $Y$, $S_{12}$ also grows; for the Froissart-type growth we have $S_{12}\sim Y^{2}$ at $B\ll R(Y)$. Then in systems close to the center of mass

$T_{scm}(Y,B)\sim e^{-cN_{12}}\sim\exp{(-cY^{2})}\rightarrow 0~,~~~~~c\sim 1~.$

Therefore, the case of a grey disk with a constant (or slowly ($\sim 1$) varying) parton density should probably be excluded. The sketch below makes this frame dependence explicit.
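A toy estimate of ours, with unit constants: take a constant transverse density $\rho$ inside hard-edged disks of radius $R(y)$, count $N_{12}=\sigma_{0}\rho^{2}S_{12}$ using the standard circle-intersection (lens) area, and compare frames.

```python
import numpy as np

def lens_area(R1, R2, B):
    """Intersection area of two disks of radii R1, R2 at impact parameter B."""
    if B >= R1 + R2:
        return 0.0
    if B <= abs(R1 - R2):
        return np.pi * min(R1, R2) ** 2
    d1 = (B**2 + R1**2 - R2**2) / (2.0 * B)
    d2 = B - d1
    return (R1**2 * np.arccos(d1 / R1) - d1 * np.sqrt(R1**2 - d1**2)
            + R2**2 * np.arccos(d2 / R2) - d2 * np.sqrt(R2**2 - d2**2))

sigma0, rho, r0, Y, B = 1.0, 1.0, 1.0, 30.0, 0.0

def R(y):
    """Froissart-like disk radius, cut off at r0 for small rapidity."""
    return r0 * max(y, 1.0)

for y in (1.0, Y / 2):                   # lab frame vs centre-of-mass frame
    N12 = sigma0 * rho**2 * lens_area(R(y), R(Y - y), B)
    print(f"y = {y:5.1f}   N12 = {N12:8.1f}   T = {np.exp(-N12):.2e}")
```

$N_{12}$ stays of order one in the lab frame but grows like $Y^{2}$ near the center of mass, so T cannot be the same in both frames.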
In all more or less realistic situations, such as one can expect in QCD, the parton disk can have a grey parton border, even when the inner parts of the disk become almost black. In this case the average parton density can be roughly represented as

$\rho(y,x_{\bot})\simeq\rho_{d}(x_{\bot})~\theta(R(y)-x_{\bot})+\rho_{0}~\theta(x_{\bot}-R(y))f(x_{\bot})~,$ (15)

where $\rho_{d}(x_{\bot})$ describes the behavior of the parton density in the inner part of the disk, and the grey border has a width $\lambda(y)\ll R(y)$. In this border the parton density varies from a high (almost black) value to a small one. For example, it can have the form

$f(x_{\bot})\simeq e^{-(x_{\bot}-R(y))/\lambda(y)}~.$ (16)

For collisions with an impact parameter $B<R(Y)$, when the colliding disks overlap with their almost black parts, we can possibly have a boost-invariant behavior of $\sigma_{in}$. But for collisions with $B-R(Y)\sim\lambda(Y)$, when the disks collide with their grey edges, the situation is different. In the lab frame of one of the particles the transparency is $T_{lab}=const\sim 1$, because here only a few ($\sim 1$) partons must penetrate without interaction through the grey edge of the large disk. And in an arbitrary system at the same impact parameter the transparency is

$T(y,Y-y,B)\sim e^{-N_{12}(y,Y-y,B)}\sim e^{-S_{12}(y,Y-y,B)/r_{0}^{2}}~,$

where $N_{12}(y,Y-y,B)$ is the average number of parton interactions during the collision and $S_{12}(y,Y-y,B)$ is the area of the intersection region of the two disks. This region has the form of an elongated figure whose width is $\sim\lambda(y)$ and whose length is $l(y)\sim\sqrt{R(y)\lambda(y)}$ for $y\lesssim Y-y$. So, for such $B$, the intersection area of the two disks is

$S_{12}(y,Y-y,B\simeq R(Y)+\lambda)~\sim~R(y)^{1/2}\cdot\lambda(y)^{3/2}~.$

In the center-of-mass system this gives, for $Y\gg 1$,

$T_{scm}\sim\exp{(-c(R(y)/r_{0})^{1/2})}\rightarrow 0$

even for parton disks with $\lambda\sim const(Y)$, although the width of the grey border can also grow together with the disk size. (One can expect [8] that for a realistic parton disk, due to border shape fluctuations, the mean width of the grey zone grows with Y as $\lambda\sim\sqrt{Y}$.) Therefore, for border collisions at such $B$ and $Y$ we have no boost-invariance of $T$, and this conclusion in fact almost does not depend on the explicit form of the border distribution $f(x_{\bot})$. Probably the only exception is the Gauss-type distribution of the parton density, when the whole disk has the "structure of a border". The scaling of $S_{12}$ used above can be checked directly, as in the sketch below.
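A quick check of ours of the quoted scaling $S_{12}\sim R^{1/2}\lambda^{3/2}$ for two equal disks grazing each other with overlap depth $\lambda$, using the exact lens area (constants are arbitrary):

```python
import numpy as np

def lens_area(R1, R2, B):
    """Exact intersection area of two overlapping disks (|R1-R2| < B < R1+R2)."""
    d1 = (B**2 + R1**2 - R2**2) / (2.0 * B)
    d2 = B - d1
    return (R1**2 * np.arccos(d1 / R1) - d1 * np.sqrt(R1**2 - d1**2)
            + R2**2 * np.arccos(d2 / R2) - d2 * np.sqrt(R2**2 - d2**2))

lam = 1.0                                    # overlap depth ("border width")
for R in (10.0, 100.0, 1000.0, 10000.0):     # equal disks grazing at B = 2R - lam
    S12 = lens_area(R, R, 2.0 * R - lam)
    print(R, S12, S12 / (np.sqrt(R) * lam**1.5))   # ratio tends to 4/3
```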
### 2.4 Particle to heavy nucleus interaction

A slightly different type of restriction on the parton structure follows from boost invariance if we consider the high-energy collision of a particle p (for example a proton, a pion, or any test color dipole) with a heavy nucleus ($A\gg 1$). To see this, we compare the estimates of the transparency T in the lab frame of the nucleus and in the lab frame of p. We choose Y not too large, but such that $Y\gg\ln A$, and consider a collision at $B=0$. In fact, in this case we have a collision of p with a long ($\sim A^{1/3}$) tube of nucleons, and we want to calculate the probability of the passage of p through A without interaction.

First, consider the p $\bigotimes$ A collision in the lab frame of p. For such a Y, due to the Lorentz contraction of the moving nucleus, all soft partons of the fast nucleus are placed in a thin slab of longitudinal size $\sim 1/m$. And if parton saturation takes place, the number of soft partons interacting with p should be almost independent of A, because all "additional" soft partons coming from different nucleons in the A-tube are absorbed by one another. Therefore, one can expect that the transparency in the p lab frame is

$T_{p}\sim e^{-N_{p}(Y)}~.$ (17)

On the other hand, to calculate T in the lab frame of A at the same B and Y, we must find the probability that the fast particle p penetrates without interaction through the $A^{1/3}$-long tube of nucleons. Here one can expect that

$T_{A}\sim e^{-c(Y)A^{1/3}}~.$ (18)

Because in such a "thought experiment" we can arbitrarily choose Y and A and the distance between the nucleons in the tube, we come to an apparent contradiction with frame independence. This means that some constraints must be imposed on the parton dynamics. The simplest way out is to suppose that there is almost no parton saturation in the A-tube; or, on the contrary, that some mechanism works which otherwise makes the interaction of a fast particle p with the nucleus dependent on A.

Possibly some indication of the causes of this inconsistency can be found if we consider the regge description of this reaction, where we can calculate $\sigma_{in}(Y,B)=1-T$ for large A and not too large Y. If we take for the single pomeron exchange in the p $\bigotimes$ A reaction the amplitude

$v(y,b)\sim ig^{2}A^{1/3}\exp{(\Delta y-b^{2}/4\alpha^{\prime}y)}$

and consider firstly the simple eikonal case, which corresponds to a situation without parton saturation, we obtain for the corresponding S-matrix $S(Y,B)=\exp{(iv(Y,B))}$, which gives for the transparency

$T(Y,B=0)=|S(Y,B=0)|^{2}\sim\exp{\Big(-2g^{2}A^{1/3}e^{\Delta Y}\Big)}~.$ (19)

The simplest way to take into account something similar to parton saturation is to include in the single pomeron exchange amplitude the pomeron cascading from the side of the A-vertices, so that from the p side the pomeron line joins p, and from the A side the pomeron line branches and joins many nucleons. This corresponds to the new amplitude

$v~\rightarrow~\tilde{v}=\frac{v}{1-i\frac{r}{g\Delta}v}~,$ (20)

where $r$ is the 3-pomeron vertex. In this case, for large $A^{1/3}$ and $B=0$, the amplitude $\tilde{v}$ is stabilized at the value $|\tilde{v}|=g\Delta/r$, and therefore the corresponding transparency approaches

$T(Y,B=0)=\exp{\big(-2|\tilde{v}|\big)}=\exp{\Big(-2g\Delta/r\Big)}~.$ (21)

Comparing the expressions (17) with (21) and (18) with (19), we see their similarity; but this unfortunately does not help to find the right answer, because a simple expression like (20) does not take into account the various pomeron interactions in the $\tilde{v}$-cascade, nor the other pomeron interactions in eikonal multipomeron diagrams. (Note that approximately the same inconsistency appears if we consider the heavy A $\bigotimes$ A interaction and compare the estimates of T in the lab frame and in the CM system.)
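A numerical side-by-side of (19) and (21), a sketch of ours with invented values of the couplings, shows how the fan resummation (20) removes the strong A-dependence of the eikonal transparency:

```python
import numpy as np

g, Delta, r, Y = 0.3, 0.12, 0.05, 10.0   # pomeron coupling g, intercept Delta,
                                          # triple-pomeron vertex r (all arbitrary)
for A in (1, 27, 208):
    v = g * g * A ** (1 / 3) * np.exp(Delta * Y)   # |v(Y, B=0)|, single exchange
    T_eikonal   = np.exp(-2 * v)                   # Eq. (19): falls as exp(-A**(1/3))
    v_saturated = v / (1 + (r / (g * Delta)) * v)  # Eq. (20) with v -> i|v|
    T_saturated = np.exp(-2 * v_saturated)         # Eq. (21): tends to exp(-2*g*Delta/r)
    print(A, T_eikonal, T_saturated)
```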
### 2.5 Possible boost-invariant parton density distributions in a grey disk

In fact, in the case of asymptotically growing cross-sections, all parton distributions corresponding to real theories like QCD will probably lead to a grey disk. It is therefore interesting to find sensible examples of parton distributions that correspond to a boost-invariant T.

Let us consider collisions of particles with some parton distribution $\rho(y,x_{\bot})$ and try to find the minimal conditions on the form of $\rho(y,x_{\bot})$ for which the cross-sections are boost-invariant. With exponential precision the transparency can be expressed as

$T(Y,y,B)\sim\exp\Big(-N(y,Y-y,B)\Big)~,$ (22)

where

$N(y,Y-y,B)=\sigma_{0}\cdot\int d^{2}b\cdot\rho(y,|b|)\cdot\rho(Y-y,|B-b|)$ (23)

is proportional to the mean number of parton scatterings when the two parton disks penetrate through one another during their collision at the impact parameter B. Because the expression (23) has the same structure as (6), one can repeat the calculation given above. Then we find that the expression (23) can be boost invariant only for a very special, Gaussian form of the parton density inside the disk:

$\rho(y,x_{\bot})\sim\rho_{0}~\frac{1}{y}~e^{\Delta y-x_{\bot}^{2}/yr_{0}^{2}}~.$ (24)

This corresponds to the distribution arising in a parton cascade in which partons only split and do not join. The same answer (Eq. (9)) for $\rho(y,x_{\bot})$ was found above for rare parton systems, but here the density can be arbitrarily high. In the corresponding elastic amplitude it gives a regge pole exchange with intercept $\Delta$. In fact, the expression (24) for $\Delta>0$ again corresponds to an almost black disk (but without parton saturation!) of radius $r_{0}\sqrt{\Delta}\,y$ with a thin grey border, because the parton density changes very fast, from small to big values, over distances $\delta x_{\bot}\sim r_{0}/\sqrt{\Delta}$.

In the general case one must take into account that partons in the colliding disks can have different virtualities $u\sim\ln k^{2}_{\bot}/m^{2}$, so that the parton density $\rho(y,b,u)$ has a nontrivial dependence on $u$. Partons with large $u$ are more strongly localized in transverse coordinates, and their interaction cross-sections $\sigma(u_{1},u_{2})$ usually decrease for large $u_{i}$. The expression for the transparency in the collision of two parton disks again has the form (22), where the mean number of parton interactions during the collision is given by the following generalization of (23):

$N(y,Y-y,B)=\int d^{2}b\int du_{1}du_{2}~\sigma(u_{1},u_{2})\cdot\rho(y,|b|,u_{1})\rho(Y-y,|B-b|,u_{2})~.$ (25)

In this case the restrictions on the form of $\rho(y,b,u)$ coming from the frame independence condition $(\partial/\partial y)N(y,Y-y,B)=0$ are not as strong as for (23)-(24). If the parton cross-sections entering (25) can be approximately factorized as $\sigma(u_{1},u_{2})\sim\ell(u_{1})\cdot\ell(u_{2})$, then the condition for the boost invariance of $N$ can be reduced to the simpler equation

$\int du~\ell(u)\rho(y,b,u)=\rho_{0}\frac{1}{y}~e^{\Delta y-b^{2}/yr_{0}^{2}}~.$ (26)

In this case the form of $\rho(y,b,u)$ for some interesting models is again almost completely fixed.
For example, such is the superposition of grey saturated disks of different virtualities,

$\rho(y,b,u)\sim\varphi(y,u)~\theta(r_{1}\chi(y)-bu^{a})~,$

so that the mean radii of these disks, $r_{1}\chi(y)/u^{a}$, decrease with the growth of u. (In QCD the radii of hard subdisks can grow as $\sim y/\sqrt{u}$, and this corresponds to $a=2$.) Here, from equation (26), one can find the explicit expression

$\varphi(y,u)=\varphi_{0}~\frac{e^{\Delta y}}{\ell(u)~u^{2a}~y}~\exp{\Big(-c_{2}\frac{\chi^{2}(y)}{y~u^{2a}}\Big)}~,$ (27)

where $\varphi_{0}$, $a$, $\Delta$, $c_{2}$ and the functions $\ell(u)$, $\chi(y)$ can be chosen arbitrarily. If we choose $\chi(y)=\chi_{0}y$, so as to have the Froissart type of growth of the disk radius, we obtain from (27) for the disk density

$\varphi(y,u)\sim\frac{e^{\Delta y}}{\ell(u)u^{2a}y}~\exp{\Big(-\tilde{c}_{2}\frac{y}{u^{2a}}\Big)}~.$ (28)

For large $u$ it is natural to expect that $\ell(u)\sim e^{-cu}$, and therefore the mean density of hard subdisks will grow with u and y.

### 2.6 Corrections to the mean picture from big fluctuations in the colliding states

To discuss whether boost-invariance can be somehow restored when the mean parton density $\rho(y,x_{\bot})$ is not of the Gaussian form (24), one must take into account all essential parton configurations, including those that are very far from the mean one. In this case one can hope that in different frames the main contribution to the cross-sections comes from different parton components, so as to compensate the variation of the contribution of the mean states. Especially interesting here can be the rare components of $\Psi(P)$. In the Fock state of a fast particle such rare parton configurations contain a relatively small number of partons; therefore they can give a large contribution to the transparency and compensate the boost non-invariance of T and of other quantities in the mean-density states. Such configurations mainly arise from large fluctuations in the initial stages of the parton cascade. One can, in particular, ask for a parton component $|~bare>$ of a fast hadron that does not contain a black disk at all and interacts slowly (or does not interact at all). We can schematically represent such a state of a fast particle as

$\Psi(P\rightarrow\infty)\simeq f_{d}~|disk>+~f_{b}~|~bare>~,~~~f_{d}\gg f_{b}~,$

where $f_{b}$ is the amplitude of the rare component $|~bare>$ and $|disk>$ is the superposition of the "big" parton components that give the main contributions to various cross-sections. The probability for a fast hadron to be in the rare state is $w(y)\sim|f_{b}|^{2}$. In this case the expression for the transparency can be generalized to

$T(y,Y-y)\simeq T_{mean}(y,Y-y)+\tau_{bd}\cdot\big(w(y)+w(Y-y)\big)+\tau_{bb}\cdot w(Y-y)\cdot w(y)~,$ (2.6)

where the transparencies of the rare component, $\tau_{bd}$ and $\tau_{bb}$, can be finite and need not decrease with growing $y$. The term $T_{mean}(y,Y-y)$, coming from the $|disk>\bigotimes|disk>$ interaction, is not boost-invariant: it can be $const(Y\rightarrow\infty)$ in the lab frame for a saturated (grey) disk, and very small in the CM system. The two last terms in (2.6), coming from the $|bare>\cdot~|disk>$ and $|bare>\cdot~|bare>$ components, can dominate and could thus make T boost invariant; but this is possible only if $w(y)$ is approximately constant at high $y$. Various estimates of $w(y)$, however, lead to a decreasing function of the type $w(y)\sim\exp(-\gamma\cdot y)$ in the case of a growing total cross-section. (Such a behavior of $w(y)$ can also be found from the boost-invariance condition applied to the behavior of some hard cross-sections; see Eqs. (31)-(32).) It corresponds to choosing, at every rapidity stage of the evolution, the direction that does not increase the parton number. Such a behavior of $w$ leads to the expression

$T(y,Y-y)\sim\tau_{bd}\cdot\big(e^{-\gamma(Y-y)}+e^{-\gamma y}\big)+\tau_{bb}\cdot e^{-\gamma Y}~,$ (30)

corresponding to the collision of the rare state $|~bare>$ with the other particle. Such a contribution to T is y-dependent, and therefore frame independence cannot be restored in this way either, as the small sketch below illustrates.
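A one-glance numerical check of ours, with arbitrary constants, that the rare-component contribution (30) indeed varies with the frame choice:

```python
import numpy as np

gamma, Y = 0.3, 20.0
tau_bd, tau_bb = 0.5, 1.0        # transparencies of bare-disk / bare-bare collisions

def w(y):
    """Probability of the rare |bare> component, Eq. (29)-style exponential."""
    return np.exp(-gamma * y)

for y in (0.0, 5.0, 10.0):        # lab frame ... centre-of-mass frame
    T = tau_bd * (w(y) + w(Y - y)) + tau_bb * w(y) * w(Y - y)
    print(y, T)                   # varies with y: frame independence is not restored
```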
Various estimates of $w(y)$ lead to a decreasing function of the type $w(y)\sim\exp(-\gamma\cdot y)$ for the case of a growing total cross-section. It corresponds to choosing, at every rapidity stage, an evolution direction that does not increase the parton number (such a behavior of $w(y)$ can also be found from the boost-invariance condition applied to the behavior of some hard cross-sections; see Eqs. (31)–(32)). Such a behavior of $w$ leads to the expression $T(y,Y-y)~{}\sim~{}\tau_{bd}\cdot\big{(}~{}e^{-\gamma~{}(Y-y)}~{}+~{}e^{-\gamma~{}y}~{}\big{)}+\tau_{bb}\cdot e^{-\gamma~{}Y}~{},$ (30) corresponding to the collision of the rare state $|~{}bare>$ with the other particle. Such a contribution to $T$ is $y$-dependent, and therefore the frame independence cannot be restored in this way either. ### 2.7 Collision of parton disks in the case of particles moving in the same direction When we impose the condition of independence of the cross-sections $\sigma_{in}(Y,y,b)$ of the choice of system (i.e. of $y$), we can choose the values of $y$ not only in the interval $0<y<Y$, i.e. between the lab and center-of-mass (CM) systems. Let us also consider systems with $y<0$ and $y>Y$, when both parton disks move in one direction. This, in principle, can lead to additional constraints on the amplitudes. But in this case, at first glance, paradoxes may also arise when estimating the probability of the interaction. This is especially seen in the case of a growing cross-section. To illustrate this, let us consider the case of colliding Froissart-type disks, when their radii $R(y)$ and $R(Y-y)$ grow with the particle rapidity as $R(y)=r_{0}y$ and $R(Y-y)=r_{0}(Y-y)$, and estimate the behavior of the inelastic cross-section at a definite impact parameter, $\sigma_{in}(Y,y,B)$. We choose $B>r_{0}Y$. In this case, when $0<y<Y$, the parton disks pass by one another without interaction. But this is only if they move towards each other, because here $B>R(y)+R(Y-y)$ and therefore $\sigma_{in}=0$. But if, at the same $B$, we choose the system so that the disks move in the same direction and so that $y\gg Y\gg 1$, then the disks will overlap when one disk goes through the other. And therefore partons from one disk can interact with partons from the other disk. But it is essential that in such a disk interaction no new particles can be created. Indeed, if a particle could be created in this case, its momentum would be small ($\sim m$) in this system. And the creation of such a particle in the CM system would correspond to the creation of a particle with energy $\sim me^{y}$, where $y\gg Y$, and this is forbidden by energy-momentum conservation; so $\sigma_{in}=0$. On the other hand, the exchange of particles between these disks with an approximate momentum conservation (or, with the exchange of small transverse momenta) can give a contribution to their elastic scattering, which here also comes from large transverse distances ($B>R(Y)$). The parton wave functions of these “intersecting disks” can be entangled with one another by such a mechanism, and the conversion of a pure state to a mixed one for every disk can in principle take place. There is probably no contradiction with the parton picture here, since there is no way to distinguish between low energy partons in the wave function (1) and the close energy partons from vacuum fluctuations. The entanglement between the states of disks in such collisions is proportional to their area.
This suggests that these disks have entropy $\sim$ their area ($\sim$ the number of low energy partons), i.e. $\sim y^{2}$ in this case of the Froissart-type growth of cross-sections. ### 2.8 Limitations on the dynamics of hard elastic scattering In field theory the high energy hard elastic scattering of point-like particles usually leads to a power behavior of elastic cross-sections $d\sigma_{1}^{el}(s,t\simeq-s/2)/dt\sim 1/s^{a}~{}.~{}~{}~{}~{}~{}$ For the scattering of particles composed of $n$ constituents with approximately equal momenta we have $d\sigma_{n}^{el}(s,t\sim-s/2)/dt\sim\mu^{-4(n-1)}(d\sigma_{1}(s/n^{2},t\sim-s/n^{2})/dt)^{n}~{}~{}.$ But the mean state can contain a growing number of partons, and the direct application of this expression leads to a small contribution. In this case the main contribution to $d\sigma/dt$ can come from the rare parton configurations containing the minimal number of partons (when both particles are in a “bare” state). Then the cross-section of particles in the system where $~{}s=m^{2}e^{Y}$ and the energies of the colliding particles are $me^{y},~{}me^{Y-y}$ can be represented as $d\sigma^{el}(s,t\sim-s/2)/dt\sim\big{(}~{}d\sigma_{0}(s,t\sim-s/2)/dt~{}\big{)}^{n_{0}}~{}w(y)w(Y-y)~{}~{},$ (31) where $w(y)$ is the probability that a particle with energy $me^{y}$ is in the bare state, and $n_{0}$ is the number of “valent” components in the bare state ($n_{0}\simeq 2\div 3$ for meson $\div$ baryon). It follows from the boost-invariance of (31) that $w(y)\sim e^{-2cy}$ (32) This condition essentially restricts the asymptotic behavior of hard scattering and, in particular, gives information about the amplitude ($\sim\sqrt{w(y)}$) of the bare component of $\Psi(P)$. A similar limitation follows from the consideration of the asymptotic cross-sections of two-particle reactions with exchange of quantum numbers (such as $\pi^{-}+p~{}\rightarrow~{}\pi^{0}+n$). Here again, the dominant parton configuration contributing to such reactions must contain the minimum number of partons. So again, we have the factor $w(y)w(Y-y)$ in the cross-section. Additionally, there is a factor of the type $e^{-2gy}$ connected with the probability that this parton configuration also contains the small energy parton with the “needed” quantum numbers. Therefore, from the frame independence of amplitudes of such reactions we also come to the condition (32). And, interpreting this in terms of the exchange of some nonvacuum reggeon, we can estimate its intercept as $\alpha(0)\simeq 1-c-g$. ## 3 Summary The main aim of this note was to illustrate that the condition of boost-invariance essentially restricts the behavior of high energy cross-sections calculated in parton approaches. And the form of the resulting constraints is of the same type as that coming from the t-channel unitarity condition, so one can suppose that this similarity, by its nature, has much more general grounds. Such a condition works especially effectively in the case of cross-sections growing with energy, that is, just when the t-unitarity conditions for amplitudes are complicated to apply, because here the multiparticle exchange becomes important. In this case the resulting restrictions on the asymptotic behavior are rather strong and can, in principle, exclude some popular models. ## References * [1] V.N.
Gribov, arXiv:hep-ph/0006158 * [2] S. Brodsky, H.-C. Pauli, S. Pinsky, arXiv:hep-ph/9705477, Phys. Rept. 301 (1998) 299 * [3] M. Perry, arXiv:hep-ph/9612244 * [4] T. Heinzl, arXiv:hep-th/0008096 * [5] A. Harindranath, arXiv:hep-ph/9612244 * [6] A.B. Kaidalov, ITEP School of Physics, 1983 * [7] Y. Kovchegov, E. Levin, Quantum Chromodynamics at High Energy, Cambridge University Press, 2012 * [8] O.V. Kancheli, arXiv:1609.07657 * [9] V.N. Gribov, I.Ya. Pomeranchuk, Phys. Rev. Lett. 8, 343, 412 * [10] V.N. Gribov, I.Ya. Pomeranchuk and K.A. Ter-Martirosyan, Phys. Rev. 139B (1965) 184; V.N. Gribov, Soviet Phys. JETP 26, 414 (1968).
2024-09-04T02:54:58.663665
2020-03-10T12:35:38
2003.04662
{ "authors": "Isaac Alonso Asensio, Claudio Dalla Vecchia, Yannick M. Bah\\'e, David\n J. Barnes and Scott T. Kay", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26132", "submitter": "Isaac Alonso Asensio", "url": "https://arxiv.org/abs/2003.04662" }
arxiv-papers
# The intra-cluster light as a tracer of the total matter density distribution: a view from simulations Isaac Alonso Asensio,1,2 Claudio Dalla Vecchia,1,2 Yannick M. Bahé,3 David J. Barnes4 and Scott T. Kay5 1Instituto de Astrofísica de Canarias, C/Vía Láctea s/n, E-38205 La Laguna, Tenerife, Spain 2Departamento de Astrofísica, Universidad de La Laguna, Av. Astrofísico Francisco Sánchez s/n, E-38206 La Laguna, Tenerife, Spain 3Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, The Netherlands 4Department of Physics, Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA 5Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester M13 9PL, UK E-mail:<EMAIL_ADDRESS>(IAA) (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract By using deep observations of clusters of galaxies, it has recently been found that the projected stellar mass density closely follows the projected total (dark and baryonic) mass density within the innermost $\sim 140$ kpc. In this work, we aim to test these observations using the Cluster-EAGLE simulations, comparing the projected densities inferred directly from the simulations. We compare the iso-density contours using the procedure of Montes & Trujillo (2019), and find that the shape of the stellar mass distribution follows that of the total matter even more closely than observed, although their radial profiles differ substantially. The ratio between the stellar and total matter density profiles in circular apertures shows a slope close to $-1$, with a small dependence on the cluster’s total mass. We propose an indirect method to calculate the halo mass and mass density profile from the radial profile of the intra-cluster stellar mass density. ###### keywords: galaxies: clusters: general – methods: numerical ## 1 Introduction Ten to more than thirty percent of the stellar light of clusters of galaxies comes from a diffuse distribution of stars emitting the so-called intra-cluster light (ICL), the inferred fraction depending on the definition of the border between the brightest central galaxy and the diffuse stellar component, the radial extent to which the stellar mass distribution is integrated, and the relaxation state of the clusters (e.g., Krick & Bernstein, 2007; Gonzalez et al., 2013; Mihos et al., 2017; Jiménez-Teja et al., 2018; Zhang et al., 2019). This distribution is produced by the stripping of stars from galaxies undergoing mergers and tidal interactions during their evolution in the cluster environment (see Mihos, 2015, for a review). Due to its low surface brightness, the observational study of the stellar population producing the intra-cluster light has been challenging. An increasing effort towards deep imaging of clusters of galaxies in recent years, both through individual cluster imaging, up to $z\simeq 1.5$ (e.g., Mihos et al., 2005; Montes & Trujillo, 2014; Burke et al., 2015; Morishita et al., 2017; Ko & Jee, 2018; Jiménez-Teja et al., 2018; Montes & Trujillo, 2018; DeMaio et al., 2018; DeMaio et al., 2020), and by stacking observations of multiple clusters (e.g., Zibetti et al., 2005; Zhang et al., 2019), has allowed new insights into the ICL.
Figure 1: Projected density of stars (top) and matter (bottom) for three different clusters of the C-EAGLE simulations at $z=0.352$. The secondary density peak in the cluster CE-21 (middle panel) can be due to a recent major merger event which has stripped stars from the interacting galaxies, or to numerical artifacts produced by SUBFIND failing to correctly assign stellar particles to substructures. One recent, remarkable result achieved with deep imaging is the tight correlation between the distribution of the stellar surface density, inferred from its surface brightness, and the surface density of the total mass, measured by modelling the gravitational lensing signal (Montes & Trujillo, 2019, hereafter MT19). MT19 proposed that the surface density of the stellar mass not bound to galaxies should settle in the potential well of the cluster similarly to the dark matter. This could be used to trace the total matter distribution of clusters within a cluster-centric distance set by the depth of the observations. They also compared their result with total mass surface densities inferred from the X-ray emission of the intra-cluster medium, and concluded that this method is limited by the misalignment of the gaseous component with respect to the dark matter and stellar mass in non-relaxed clusters. Their quantitative analysis made use of the Modified Hausdorff Distance (MHD) (Dubuisson & Jain, 1994) to quantify the deviation between iso-density contours of stars and total matter. They found that, in general, the stellar surface density has smaller MHD values than that of the intra-cluster medium when both are compared with the iso-density contours of total mass. In this Letter, we test this observational result with state-of-the-art cosmological, hydrodynamic simulations of the Cluster-EAGLE project (C-EAGLE, Barnes et al., 2017; Bahé et al., 2017). We give a brief description of the simulations in the next section and a description of the analysis in section 3. The main results of this work are shown in section 4 and discussed in section 5, along with some concluding remarks. ## 2 Simulations We have used the set of 30 zoom-in cluster simulations performed within the C-EAGLE project. The simulated clusters are uniformly distributed in the mass range $10^{14}<M_{200}/\mathrm{M}_{\odot}<10^{15.4}$, where $M_{200}$ is the halo mass, i.e. the mass enclosed in a sphere of radius $r_{200}$ whose mean density equals 200 times the critical density of the Universe. The simulations were performed with the EAGLE model for galaxy formation and evolution, with the AGNdT9 calibration (Schaye et al., 2015). They provide a physical spatial resolution of $\epsilon=0.7~{}\mathrm{kpc}$ (at $z<2.8$) and a baryonic mass resolution of $m_{\mathrm{gas}}\approx 1.81\times 10^{6}~{}\mathrm{M}_{\odot}$. For more information on the EAGLE model and its comparison with global relations of the observed galaxy population, the reader is referred to Schaye et al. (2015) and Crain et al. (2015). For more details on the numerical algorithms describing photo-ionization equilibrium cooling, star formation, stellar evolution, stellar feedback, black hole growth and feedback, and the hydrodynamic scheme we refer the reader to Wiersma et al. (2009a), Schaye & Dalla Vecchia (2007), Wiersma et al. (2009b), Dalla Vecchia & Schaye (2012), Rosas-Guevara et al. (2015), and Schaller et al. (2015), respectively.
Figure 2: Fraction of stellar mass contributing to the ICL, $f_{\mathrm{ICL}}$, as a function of halo mass, $M_{200}$, for all clusters in the sample. There is no evidence for a correlation with halo mass. The solid line marks the average value of $f_{\mathrm{ICL}}$, and the dashed lines the spread around it. For the results presented here, we have used the particle data, friends-of-friends and SUBFIND (Dolag et al., 2009) groups at $z=0.352$ to match the average redshift of the Hubble Frontier-Fields clusters (Lotz et al., 2017). Furthermore, the same analysis was performed at $z=0$, and we found no significant difference. Throughout the paper we assume the cosmological parameters of the C-EAGLE simulations, $(\Omega_{0},\Omega_{\Lambda},h,n_{\mathrm{s}},\sigma_{8})=(0.307,0.693,0.6777,0.961,0.8288)$ (Planck Collaboration et al., 2014), where $\Omega_{0}$ and $\Omega_{\Lambda}$ are the matter and dark energy fractions, $h$ is the Hubble constant in units of $100~{}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$, and $n_{\mathrm{s}}$ and $\sigma_{8}$ are the spectral index and the power spectrum normalisation used to generate the initial conditions. In the analysis, we have used all particles belonging to the main halo of the largest friends-of-friends group in each simulation, i.e., we excluded all particles bound to satellite galaxies and substructures within the same friends-of-friends group. Maps of projected stellar and total matter density were produced with a spatial resolution of $5~{}\mathrm{kpc}$, in order to mimic the spatial resolution employed in the analysis of the observational data ($3\times 3~{}\mathrm{arcsec}^{2}$ at $z\simeq 0.35$). We have repeated the analysis with higher ($3.75~{}\mathrm{kpc}$) and lower ($7.5~{}\mathrm{kpc}$) resolution without finding any remarkable difference. The main advantage with respect to observations is that there is no need to mask the light of satellite galaxies. However, debris from tidal interactions between galaxies will be included in the projected matter density. Furthermore, there are biases due to SUBFIND failing to assign stellar particles to satellites (Bahé et al., in prep). Figure 3: Isodensity contours of the inner ($R\leq 140~{}\mathrm{kpc}$) and outer ($R>140~{}\mathrm{kpc}$) regions (top and bottom, respectively) of total matter (blue dotted lines) and stars (red dashed lines) for three different clusters. Lighter colours indicate larger distances (lower densities) from the centre. Examples of the projected stellar and total mass density are shown in Fig. 1 for three simulated clusters of increasing virial mass. The top row corresponds to the projected density of stars, while the bottom row shows the density of total matter (dark and baryonic). Uncertainties on the amount of ICL mass produced and its radial distribution may arise from the modelling of the star formation rate and the spatial and mass resolution of numerical simulations. The EAGLE model matches quite accurately the observed stellar mass and luminosity functions (Schaye et al., 2015; Trayford et al., 2015). Moreover, it reproduces the evolution of the stellar mass function and the observationally inferred density of stars in the universe up to high redshift ($z=7$) (Furlong et al., 2015). However, while the reference simulation matches the observed sizes of galaxies over several decades in stellar mass, the AGNdT9 calibration yields an offset in the relation towards more compact galaxies (Schaye et al., 2015). This last point seems to be relevant for the interpretation of the ICL mass fractions described in the next section, where the inferred values are on the low side of the distribution of those derived from observations (see references in section 1): compact galaxies are less prone to stripping. On the other hand, Henden et al. (2019) noted that having galaxies that are too large in their simulations boosts the effect of tidal stripping, increasing the fraction of stellar mass in the ICL, and that uncertainties in galaxy sizes are the major contributors to the uncertainty in determining the fraction of mass in the ICL in simulations.
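To make the map-making step above concrete, here is a minimal sketch (ours, not from the paper) of projecting particle data onto a surface-density grid with the $5~{}\mathrm{kpc}$ pixels used in the analysis; the mock inputs, grid half-size and function names are placeholder assumptions.

```python
# Project cluster-centred particle positions (kpc) onto a surface-density map.
import numpy as np

def surface_density_map(x, y, mass, half_size=2000.0, pixel=5.0):
    """Surface mass density (Msun kpc^-2) on a square, cluster-centred grid."""
    edges = np.arange(-half_size, half_size + pixel, pixel)
    mass_map, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=mass)
    return mass_map / pixel**2

rng = np.random.default_rng(0)
pos = rng.normal(scale=300.0, size=(100_000, 2))   # mock positions, kpc
m_star = np.full(len(pos), 1.81e6)                 # Msun, EAGLE-like particle mass
sigma_star = surface_density_map(pos[:, 0], pos[:, 1], m_star)
```

Down-sampling to $20~{}\mathrm{kpc}$ pixels for the outer contours (see the next section) then amounts to summing the map over $4\times 4$ pixel blocks.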
## 3 Analysis Before describing the methodology used in the analysis of the simulation data, we briefly discuss a consistency check for the simulated clusters. We computed the fraction of stellar mass in the ICL, $f_{\mathrm{ICL}}$, and compared it with expected observational and theoretical values. For the sake of ease, we adopted the methodology of Rudick et al. (2011). The mass fraction has been computed as the stellar mass with projected stellar density below some threshold surface brightness, $\mu$, with respect to the total stellar mass within $r_{200}$. As in Rudick et al. (2011), we have converted the stellar surface density into surface brightness assuming a constant mass-to-light ratio of $5~{}\mathrm{M}_{\odot}\,\mathrm{L}^{-1}_{\odot}$, and set $\mu=26.5~{}\mathrm{mag}\,\mathrm{arcsec}^{-2}$ as the threshold. We show in figure 2 the computed $f_{\mathrm{ICL}}$ as a function of halo mass. We find that $f_{\mathrm{ICL}}=0.091\pm 0.013$ (solid and dashed lines), with no significant correlation with the total mass of the clusters (the Pearson correlation coefficient is $0.0063$). The result is consistent with that of Rudick et al. (2011). Although the range of halo masses in our sample is rather narrow, similar fractions and the lack of correlation have also been reported by Pillepich et al. (2018), when using a definition of the ICL related to the size of the central galaxy. The result is consistent with previous simulations that applied semi-analytical models to N-body simulations (Rudick et al., 2011; Contini et al., 2014) and with hydrodynamical cosmological simulations (Pillepich et al., 2018; Henden et al., 2019). Finally, observations using similar thresholds have reported similar mass fractions as well (Krick & Bernstein, 2007; Montes & Trujillo, 2014). We have followed a methodology similar to that of MT19 to extract iso-density contours. We computed circularly averaged radial profiles of the density of the stellar and total mass. For this, we take the position of the minimum of the potential energy as the centre of the cluster (McAlpine et al., 2016). The projected densities for drawing the contours (computed with the contour function of matplotlib) were selected by interpolating the profiles at radii of 50, 75, 100, 125, $140~{}\mathrm{kpc}$ (the distances used by MT19) for the inner part, and of 170, 220, 300, 460, 620, 780, 940, $1100~{}\mathrm{kpc}$ for the outer regions, and only up to $r_{200}$. At large distances from the centre of the clusters ($r>140~{}\mathrm{kpc}$), we down-sample the images by merging $4\times 4$ pixels, thus degrading the spatial resolution to $20~{}\mathrm{kpc}$, to smooth the otherwise very noisy contours. The contours of the projected densities are shown in Fig. 3, for the same three clusters as depicted in Fig. 1.
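The contour selection just described, together with the Modified Hausdorff distance introduced in equations (1)–(2) below, can be sketched as follows (our illustration; the square, cluster-centred input map and all function names are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

def radial_profile(img, pixel=5.0):
    """Circularly averaged profile around the centre of a square map."""
    n = img.shape[0]
    yy, xx = np.indices(img.shape)
    r_bin = np.hypot(xx - n / 2 + 0.5, yy - n / 2 + 0.5).astype(int)
    counts = np.bincount(r_bin.ravel())
    sums = np.bincount(r_bin.ravel(), weights=img.ravel())
    return (np.arange(counts.size) + 0.5) * pixel, sums / np.maximum(counts, 1)

def contour_at_radius(img, radius, pixel=5.0):
    """Iso-density contour at the level the profile takes at `radius`."""
    radii, prof = radial_profile(img, pixel)
    level = np.interp(radius, radii, prof)
    segs = plt.contour(img, levels=[level]).allsegs[0]
    return max(segs, key=len) * pixel   # keep the contour with most segments

def mhd(X, Y):
    """Modified Hausdorff distance between two contours, Eqs. (1)-(2)."""
    d = lambda A, B: np.mean([np.min(np.hypot(*(B - a).T)) for a in A])
    return max(d(X, Y), d(Y, X))
```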
The projected total mass density contours are drawn with blue dotted lines, and the projected stellar density contours with red dashed lines, where a darker colour indicates a smaller radius. The top row is a close-up view of the contours near the centre of the clusters, out to $140~{}\mathrm{kpc}$, whereas the contours at larger distances are shown in the bottom row. We measured projected radial distances from the centre of the cluster instead of elliptical distances to the centre of the brightest central galaxy, as usually done in observations. This simplification is not crucial to derive the iso-density contours, as it only changes the values of density at which the contours will be drawn. In practice, this means that the distances we use are systematically different from those of MT19, the difference depending on the eccentricity of the brightest central galaxy, or the presence of more than one central galaxy (which we excluded from the analysis), or both. As this is only an exploratory analysis we ignore these differences. As in MT19, to compare the shape of the contours, we estimated the Modified Hausdorff distance (MHD) defined by Dubuisson & Jain (1994): $d_{\mathrm{MH}}(X,Y)=\max\left(d(X,Y),d(Y,X)\right),$ (1) where $d(X,Y)=\frac{1}{N_{X}}\sum_{\textbf{x}\in X}\min_{\textbf{y}\in Y}\|\textbf{x}-\textbf{y}\|.$ (2) The two samples, $X\equiv\\{\textbf{x}_{1},\textbf{x}_{2},\dots,\textbf{x}_{N_{x}}\\}$ and $Y\equiv\\{\textbf{y}_{1},\textbf{y}_{2},\dots,\textbf{y}_{N_{y}}\\}$, contain the points defining two contours, and $\|\cdot\|$ is the Euclidean norm. As we may have different closed contours for the same density value, we select for each distance the contour composed of the largest number of segments. The selected contours are shown in Fig. 3. Figure 4: Left panel. Comparison of the MHD of MT19 (in green, single measurements with error bars) and that from the C-EAGLE simulations (in blue, solid line). The shaded areas indicate the 1-$\sigma$ region for each method. A small scatter in the radial distance of the MT19 data has been added for clarity. Right panel. Histogram of $\zeta$ computed from all the contours taken inside the virial radius of each C-EAGLE cluster. The vertical (green) solid line represents the mean value of $\zeta$ obtained by MT19, embedded in its 1-$\sigma$ region. The dotted, vertical line indicates their lowest value. When measuring the MHD close to the virial radius of the clusters, we would expect an increase of its value, as the outskirts of clusters are not dynamically relaxed and fewer stellar particles populate them, producing noisier contours. In order to compare the MHD across different distances, we define the relative MHD as $\zeta=\frac{d_{\mathrm{MH}}(r)}{r}\,,$ (3) where $r$ is the distance at which the iso-density contours have been computed. This way, we are measuring deviations as a fraction of the distance. We find that this definition removes almost entirely the correlation with distance. ## 4 Results In the left panel of Fig. 4, we show with the blue, solid line the mean value of $d_{\mathrm{MH}}$; the shaded area depicts the 1-$\sigma$ confidence interval. We overplotted the MHDs calculated by MT19, as well as their 1-$\sigma$ area, in green. For the sake of clarity, observational points for individual clusters are slightly displaced along the x-axis. From that panel, we can highlight that: 1. the $d_{\mathrm{MH}}$ from both simulations and observations are of the same order of magnitude; 2. they show the same trends with radius; 3.
and simulations have a $\sim 50\%$ lower $d_{\mathrm{MH}}$ than observations, with smaller scatter. As $d_{\mathrm{MH}}$ increases monotonically with the distance at which it is computed, we introduced the relative MHD, $\zeta$, to obtain a distance-free similarity measurement. We show in Fig. 4 (right panel) the distribution of $\zeta$ for all contours and clusters, in blue, and the $\zeta$ extracted from MT19’s data, in green. Most of the values of $\zeta$ are lower than those observed: 96 percent of the relative MHDs are below the mean observed value. The shape of the distribution is remarkably close to a Gaussian distribution in logarithmic space, with mean $\langle\zeta\rangle=0.107$ and dispersion $\sigma_{\zeta}=0.080$, indicating that $\zeta$ is a solid, scale-free estimate of the similarity of contours at any cluster-centric distance. Figure 5: Left panel. Stellar (solid lines) and total matter (dashed lines) surface density profiles from the particles of the main halo of the cluster. We consider only the ICL mass (see text), i.e. the particles not bound to any substructure. The dashed line is the threshold used for computing the ICL mass fraction (i.e., $\mu=26.5~{}\mathrm{mag}\,\mathrm{arcsec}^{-2}$, or $\Sigma_{*}\approx 1.4\times 10^{6}~{}\mathrm{M}_{\odot}\,\mathrm{kpc}^{-2}$). Right panel. The ratio between the stellar and total matter density profiles for all the clusters. The red, dashed line is the best-fit power law given in equation 4. We would like to highlight two relevant issues with the definition of $d_{\mathrm{MH}}$ that can bias the observational values towards higher values. First, the $d_{\mathrm{MH}}$ is defined based on points and not continuous segments. This obviously simplifies the computation, but it has to be taken into account when dealing with coarse datasets, as two similar shapes can have a non-negligible $d_{\mathrm{MH}}$. Second, each point’s contribution is defined positive and with respect to the other set of points. This provides a distance that increases monotonically with noise (Dubuisson & Jain, 1994), thus special care must be taken when dealing with data with low signal-to-noise or large uncertainties. Both these points could be driving the observed $d_{\mathrm{MH}}$ towards higher values, as masking galaxies introduces non-continuous contours and the spatial resolution of the lensing models is limited. In addition to the study of the similarity between the total matter and stellar mass distribution, we have also compared the density profiles of the stellar component. In Fig. 5 (left panel) we show circularly averaged density profiles of the stellar particles. They follow a power-law behaviour up to $\sim 500~{}\mathrm{kpc}$ for the lightest halos, and $\sim 1~{}\mathrm{Mpc}$ for the more massive ones. At such distances, the interactions between substructures are weaker, and fewer particles get ejected to the intra-cluster medium, thus they can no longer successfully trace the potential well. In the right panel of Fig. 5 we show the ratio between the stellar and total matter density profiles. This ratio is close to a power law with a scatter of $0.1~{}\mathrm{dex}$ and a slope of about $-1$.
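For illustration, a fit of this kind reduces to ordinary least squares in log space. The sketch below is ours and uses mock samples drawn around the best-fit relation rather than the actual C-EAGLE profiles:

```python
# Least-squares fit of log10(Sigma_tot/Sigma_*) = a log10(r) + b; cf. Eq. (4).
import numpy as np

rng = np.random.default_rng(1)
r = np.geomspace(20.0, 1000.0, 200)                                   # kpc
log_ratio = 1.1 * np.log10(r) - 0.25 + rng.normal(0.0, 0.15, r.size)  # mock data

a, b = np.polyfit(np.log10(r), log_ratio, 1)
scatter = np.std(log_ratio - (a * np.log10(r) + b))                   # in dex
print(f"slope = {a:.3f}, intercept = {b:.3f}, scatter = {scatter:.3f} dex")
```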
We have performed a fit to all the profiles at once, with and without normalising the radial distance using $r_{200}$, yielding the relations: $\displaystyle\log_{10}\Sigma_{\mathrm{tot}}=$ $\displaystyle\log_{10}\Sigma_{*}+$ $\displaystyle(1.115\pm 0.005)\log_{10}r-(0.25\pm 0.01)\,,$ (4) $\displaystyle\log_{10}\Sigma_{\mathrm{tot}}=$ $\displaystyle\log_{10}\Sigma_{*}+$ $\displaystyle(1.085\pm 0.004)\log_{10}(r/r_{200})+(3.144\pm 0.005)\,.$ (5) The residuals of both fits have a similar scatter: $0.147$ and $0.127$ dex for equations 4 and 5, respectively. We recall that the AGNdT9 feedback calibration, used in the C-EAGLE simulations, yields more compact galaxies than the reference model for stellar masses $M_{\star}>10^{10}~{}\mathrm{M}_{\odot}$. The less efficient tidal stripping may therefore deposit more stellar mass closer to the centre of the cluster, resulting in a steeper density profile. However, this bias may be of secondary importance, at least within the central $100~{}\mathrm{kpc}$ (Bahé et al., in prep.). We propose a new, indirect way of measuring a cluster’s mass knowing its stellar density profile in the innermost region. First, via deep imaging such as that performed by MT19, the stellar density profile can be obtained and extrapolated up to $r_{200}$ assuming a power law. Then, using equation 4 or 5, the total mass density profile can be computed. This profile can be integrated to obtain an estimate of the cluster’s total mass. This procedure would be similar to that proposed by Pillepich et al. (2018). In that case, however, only the power-law slope of the 3D stellar mass density profile was used to infer the total halo mass; in our case we use more information (the 2D stellar density profile and equation 4 or 5), expecting less scatter in the mass estimate. ## 5 Discussion & Conclusions We have studied the similarity of the projected stellar and total matter distributions in the halos of massive galaxy clusters using the C-EAGLE set of 30 zoom-in simulations of clusters of galaxies. In the analysis, we considered as constituents of the diffuse distribution of stellar mass only particles in the friends-of-friends group that were not assigned to any substructure by the SUBFIND algorithm. We can summarise our results as follows: 1. we confirm the finding of MT19: the projected distribution of stars closely follows the projected distribution of the total mass, although their radial profiles differ substantially; 2. the ICL, approximated as those stars in the region where $\mu>26.5~{}\mathrm{mag}\,\mathrm{arcsec}^{-2}$ ($\Sigma_{*}\approx 1.4\times 10^{6}~{}\mathrm{M}_{\odot}\,\mathrm{kpc}^{-2}$), accounts for $\sim 10$ percent of the stellar content of the cluster within $r_{200}$; this fraction does not show any correlation with the mass of the cluster; 3. the ratio between the surface density profiles of the stellar and the total matter follows a simple power law up to the virial radius, equations 4 and 5; as the slope and amplitude of the stellar surface density profile can be extracted from observations, we proposed a method to estimate the total mass surface density profile, and thus the mass of the halo; 4. the similarity between the stellar and total matter distributions in the cluster halo is even higher in the simulations than that observed by MT19 (Fig. 4); this indicates that stars closely trace the underlying gravitational potential; 5.
in order to obtain a scale-free measure of similarity, we have introduced the relative measure $\zeta=d_{\mathrm{MH}}/r$, whose distribution resembles a log-normal when using all the cluster and contour pairs; the parameter $\zeta$ could be used to study the relaxation state of a cluster; the maximum of this distribution is located at $\zeta\sim 0.1$, thus the typical $d_{\mathrm{MH}}$ is about $10\%$ of the distance at which it is computed. The study of the spatial distribution of the ICL can be used to infer, in high detail, the distribution of the underlying dark matter in clusters of galaxies. Moreover, the average density profile of total matter can be extracted, and extrapolated up to the virial radius, only by measuring the slope of the stellar mass density profile and its normalisation close to the centre of the cluster. This is complementary to the study of Pillepich et al. (2018), where only the total halo mass was given as a function of the slope of the 3D stellar density profile, with larger uncertainty. ## Acknowledgements We are very grateful to Ignacio Trujillo and Mireia Montes for supporting this work with useful ideas and discussions. CDV acknowledges the support of the Spanish Ministry of Science, Innovation and Universities (MCIU) through grants RYC-2015-18078 and PGC2018-094975-B-C22. YMB acknowledges funding from the EU Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement 747645 (ClusterGal) and the Netherlands Organisation for Scientific Research (NWO) through VENI grant 639.041.751. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. ## References * Bahé et al. (2017) Bahé Y. M., et al., 2017, MNRAS, 470, 4186 * Barnes et al. (2017) Barnes D. J., et al., 2017, MNRAS, 471, 1088 * Burke et al. (2015) Burke C., Hilton M., Collins C., 2015, MNRAS, 449, 2353 * Contini et al. (2014) Contini E., De Lucia G., Villalobos Á., Borgani S., 2014, MNRAS, 437, 3787 * Crain et al. (2015) Crain R. A., et al., 2015, MNRAS, 450, 1937 * Dalla Vecchia & Schaye (2012) Dalla Vecchia C., Schaye J., 2012, MNRAS, 426, 140 * DeMaio et al. (2018) DeMaio T., Gonzalez A. H., Zabludoff A., Zaritsky D., Connor T., Donahue M., Mulchaey J. S., 2018, MNRAS, 474, 3009 * DeMaio et al. (2020) DeMaio T., et al., 2020, MNRAS, 491, 3751 * Dolag et al. (2009) Dolag K., Borgani S., Murante G., Springel V., 2009, MNRAS, 399, 497 * Dubuisson & Jain (1994) Dubuisson M.-P., Jain A., 1994, in Proceedings of 12th International Conference on Pattern Recognition. IEEE Comput. Soc. Press, pp 566–568, doi:10.1109/ICPR.1994.576361, http://ieeexplore.ieee.org/document/576361/ * Furlong et al. (2015) Furlong M., et al., 2015, MNRAS, 450, 4486 * Gonzalez et al. (2013) Gonzalez A. H., Sivanandam S., Zabludoff A. I., Zaritsky D., 2013, ApJ, 778, 14 * Henden et al. (2019) Henden N. A., Puchwein E., Sijacki D., 2019, arXiv e-prints, p. arXiv:1911.12367 * Jiménez-Teja et al. (2018) Jiménez-Teja Y., et al., 2018, ApJ, 857, 79 * Ko & Jee (2018) Ko J., Jee M. J., 2018, ApJ, 862, 95 * Krick & Bernstein (2007) Krick J. E., Bernstein R. A., 2007, AJ, 134, 466 * Lotz et al. (2017) Lotz J.
M., et al., 2017, The Astrophysical Journal, 837, 97 * McAlpine et al. (2016) McAlpine S., et al., 2016, Astronomy and Computing, 15, 72 * Mihos (2015) Mihos J. C., 2015, in Proceedings of the International Astronomical Union. pp 27–34 (arXiv:1312.5380), doi:10.1017/S1743921315006857 * Mihos et al. (2005) Mihos J. C., Harding P., Feldmeier J., Morrison H., 2005, ApJ, 631, L41 * Mihos et al. (2017) Mihos J. C., Harding P., Feldmeier J. J., Rudick C., Janowiecki S., Morrison H., Slater C., Watkins A., 2017, ApJ, 834, 16 * Montes & Trujillo (2014) Montes M., Trujillo I., 2014, ApJ, 794 * Montes & Trujillo (2018) Montes M., Trujillo I., 2018, MNRAS, 474, 917 * Montes & Trujillo (2019) Montes M., Trujillo I., 2019, MNRAS, 482, 2838 * Morishita et al. (2017) Morishita T., Abramson L. E., Treu T., Schmidt K. B., Vulcani B., Wang X., 2017, ApJ, 846, 139 * Pillepich et al. (2018) Pillepich A., et al., 2018, MNRAS, 475, 648 * Planck Collaboration et al. (2014) Planck Collaboration et al., 2014, A&A, 571, A16 * Rosas-Guevara et al. (2015) Rosas-Guevara Y. M., et al., 2015, MNRAS, 454, 1038 * Rudick et al. (2011) Rudick C. S., Mihos J. C., McBride C. K., 2011, ApJ, 732 * Schaller et al. (2015) Schaller M., Dalla Vecchia C., Schaye J., Bower R. G., Theuns T., Crain R. A., Furlong M., McCarthy I. G., 2015, MNRAS, 454, 2277 * Schaye & Dalla Vecchia (2007) Schaye J., Dalla Vecchia C., 2007, MNRAS, 383, 1210 * Schaye et al. (2015) Schaye J., et al., 2015, MNRAS, 446, 521 * Trayford et al. (2015) Trayford J. W., et al., 2015, MNRAS, 452, 2879 * Wiersma et al. (2009a) Wiersma R. P. C., Schaye J., Smith B. D., 2009a, MNRAS, 393, 99 * Wiersma et al. (2009b) Wiersma R. P. C., Schaye J., Theuns T., Dalla Vecchia C., Tornatore L., 2009b, MNRAS, 399, 574 * Zhang et al. (2019) Zhang Y., et al., 2019, ApJ, 874, 165 * Zibetti et al. (2005) Zibetti S., White S. D. M., Schneider D. P., Brinkmann J., 2005, MNRAS, 358, 949
2024-09-04T02:54:58.674905
2020-03-08T06:14:13
2003.04720
{ "authors": "Rohit Pandey", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26133", "submitter": "Rohit Pandey", "url": "https://arxiv.org/abs/2003.04720" }
arxiv-papers
# The mean and variance in coupons required to complete a collection Rohit Pandey ###### Abstract This paper is about the coupon collector’s problem. There are some coupons, or baseball cards, or other plastic knick-knacks that are put into bags of chips or under soda bottles, etc. A collector starts collecting these trinkets and wants to form a complete collection of all possible ones. Every time they buy the product, however, they don’t know which coupon they will “collect” until they open the product. How many coupons do they need to collect before they complete the collection? In this paper, we explore the mean and variance of this random variable, $N$, using various methods. Some of them work only for the special case of the coupons having equal probabilities of being collected, while others generalize to the case where the coupons are collected with unequal probabilities (which is closer to a real world scenario). ## Problems and expressions ### Problems There are $n$ coupons in a collection. A collector has the ability to purchase a coupon, but can’t choose the coupon he purchases. Instead, the coupon is revealed to be coupon $i$ with probability $p_{i}$ (with $p_{i}=\frac{1}{n}$ in the equal-probability case). Let $N$ be the number of coupons he’ll need to collect before he has at least one coupon of each type. Now, we want to solve the following problems: P1 The expected value of $N$ when the coupons have equal probabilities of being collected. P2 The expected value of $N$ when the coupons have unequal probabilities of being collected. P3 The variance of $N$ when the coupons have equal probabilities of being collected. P4 The variance of $N$ when the coupons have unequal probabilities of being collected. P5 The density function of $N$ (meaning the entire distribution) when the coupons have equal probabilities. P6 The density function of $N$ (meaning the entire distribution) when the coupons have unequal probabilities. This paper will go over various solutions, some more powerful (able to answer more of the above questions) than others. It’s also clear that if we can solve the even numbered problems (2, 4, 6) we can simply substitute $p_{i}=\frac{1}{n}\;\;\forall i$ and solve the corresponding odd numbered problems (1, 3, 5) respectively. ### Expressions In this section, we provide the solutions to problems P1 through P4 and devote the rest of the paper to their derivations. ###### Theorem 1 (Expression for P1). The expected number of coupons a collector will need to complete the collection when the probabilities of collecting each of the $n$ coupons is $\frac{1}{n}$ is: $E(N)=n\sum\limits_{m=1}^{n}\frac{1}{m}$ ###### Theorem 2 (Expression for P2). The expected number of coupons a collector will need to complete the collection when the probability of collecting coupon $i$ is $p_{i}$ ($\sum\limits_{i=1}^{n}p_{i}=1$) is: $E(N)=\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{n-1}\frac{1}{p_{1}+\dots+p_{n}}$ ###### Theorem 3 (Expression for P3). The variance in the number of coupons a collector will need to complete the collection when the probabilities of collecting each of the $n$ coupons is $\frac{1}{n}$ is: $V(N)=n^{2}\sum\limits_{i=1}^{n}\frac{1}{i^{2}}-n\sum\limits_{k=1}^{n}\frac{1}{k}$ ###### Theorem 4 (Expression for P4). The variance in the number of coupons a collector will need to complete the collection when the probability of collecting coupon $i$ is $p_{i}$ ($\sum\limits_{i=1}^{n}p_{i}=1$) is: $V(N)=2\left(\sum\frac{1}{p_{j}^{2}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})^{2}}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})^{2}}\right)-\\\ \left(\sum\frac{1}{p_{j}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})}\right)^{2}-\\\ \left(\sum\frac{1}{p_{j}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})}\right)$
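Before turning to the derivations, it may help to evaluate these closed forms directly. The sketch below is ours, not part of the original text; note that the inclusion-exclusion sums visit all $2^{n}-1$ nonempty subsets, so it is only practical for small $n$.

```python
# Evaluate the closed forms of Theorems 1-4.
from fractions import Fraction
from itertools import combinations

def mean_var_equal(n):
    """Theorems 1 and 3: equal probabilities 1/n, in exact arithmetic."""
    h1 = sum(Fraction(1, k) for k in range(1, n + 1))
    h2 = sum(Fraction(1, k * k) for k in range(1, n + 1))
    return n * h1, n * n * h2 - n * h1

def mean_var_unequal(p):
    """Theorems 2 and 4: inclusion-exclusion over nonempty subsets."""
    e = s2 = 0.0
    for k in range(1, len(p) + 1):
        for subset in combinations(p, k):
            s, sign = sum(subset), (-1) ** (k - 1)
            e += sign / s
            s2 += sign / s**2
    return e, 2 * s2 - e - e * e

print(mean_var_equal(3))                   # (Fraction(11, 2), Fraction(27, 4))
print(mean_var_unequal([1/3, 1/3, 1/3]))   # (5.5, 6.75): matches the equal case
```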
## 1 A sum of geometric random variables ### 1.1 Proof 1 of theorem 1 Consider a state where the collector has already collected $m$ coupons. How many coupons does he need to collect to get to $m+1$? Let this be represented by the random variable $N_{m}$. Then, if the total number of coupons needed is $N$, we have: $N=\sum\limits_{m=0}^{n-1}N_{m}$ Every coupon collected from here is like a coin toss where, with probability $\frac{m}{n}$, the collector hits a coupon he already has and makes no progress, and with probability $\frac{n-m}{n}$ he collects a new coupon. So, $N_{m}$ is a geometric random variable with $p=\frac{n-m}{n}$. We know that a geometric random variable has mean $\frac{1}{p}$ and variance $\frac{1-p}{p^{2}}$. Hence, $E(N_{m})=\frac{n}{n-m}$ Taking expectations in the sum above and substituting, we have: $E(N)=\sum\limits_{m=0}^{n-1}E(N_{m})=\sum\limits_{m=0}^{n-1}\frac{n}{n-m}=n\sum\limits_{m=0}^{n-1}\frac{1}{n-m}$ Substituting $k=n-m$ we get: $E(N)=n\sum\limits_{k=1}^{n}\frac{1}{k}$ ### 1.2 Proof 1 of theorem 3 Since the random variables $N_{m}$ are independent, the variance of their sum is equal to the sum of their variances. So, proceeding similarly to section 1.1, the variance $V(N)$ can be calculated: $V(N)=n^{2}\sum\limits_{i=1}^{n}\frac{1}{i^{2}}-n\sum\limits_{k=1}^{n}\frac{1}{k}$ ## 2 Maximum of minimums identity With this approach, we can prove theorems 1 and 2. ### 2.1 Proof 1 of theorem 2 Let $N_{j}$ be the number of coupons to be collected before we see the first coupon of type $j$, and $N$ the number of coupons until all are collected. We have: $N=\max_{1\leq j\leq n}N_{j}$ In conjunction with the maximum of minimums identity we get: $N=\sum_{j}N_{j}-\sum_{1\leq j<k\leq n}\min(N_{j},N_{k})+\sum_{1\leq j<k<i\leq n}\min(N_{j},N_{k},N_{i})-\dots$ (1) This, and the fact that the minimum over any collection of the $N_{j}$ is a geometric random variable with parameter equal to the sum of the corresponding probabilities (e.g., $\min_{1\leq j\leq m}N_{j}$ has parameter $p=\sum\limits_{j=1}^{m}p_{j}$), leads to the result of theorem 2, and from there we can substitute $p_{j}=\frac{1}{n}\;\forall j$ to get the result of theorem 1: $E(N)=n\sum\limits_{k=1}^{n}\frac{1}{k}$ Note that it’s not easy to get the variance $V(N)$ with this approach because the terms in equation 1 are not independent. ## 3 A recurrence With this approach, we can prove theorems 1 and 3. Consider a state where the collector has $m$ coupons in his collection. Let $T_{m}$ be the number of coupons needed to complete the collection from this state. If the total number of coupons he needs to collect to complete the collection is $N$, we then have: $N=T_{0}$ Now, we could observe that (the $N_{m}$ being the variables defined in section 1): $N_{m}=T_{m}-T_{m+1}$ and summing over all $m$ (and noting that $T_{n}=0$) leads us to: $T_{0}=\sum_{m}N_{m}$ and this leads to the approach in section 1, which makes the problem much easier to solve. Alternately, we can continue working with the $T_{m}$’s and construct a recurrence. Consider what happens when the collector has $m$ coupons and he collects one more.
With probability $\frac{m}{n}$, he fails to add a new coupon and is back to where he started, making no progress. Let $I\left(\frac{m}{n}\right)$ be a Bernoulli random variable with $p=\frac{m}{n}$. We then have the expression: $T_{m}=1+I\left(\frac{m}{n}\right)T_{m}^{\prime}+\left(1-I\left(\frac{m}{n}\right)\right)T_{m+1}$ (2) where $T_{m}^{\prime}$ is an independent copy of $T_{m}$. ### 3.1 Proof 2 of theorem 1 Taking expectations on both sides, $E(T_{m})=1+\frac{m}{n}E(T_{m})+\frac{n-m}{n}E(T_{m+1})$ $E(T_{m})\left(1-\frac{m}{n}\right)=1+\left(1-\frac{m}{n}\right)E(T_{m+1})$ $E(T_{m})-E(T_{m+1})=\frac{n}{n-m}$ As noted before, the L.H.S. is simply $E(N_{m})$ as defined in section 1. In general we have, $\sum\limits_{m=k}^{n-1}E(T_{m})-\sum\limits_{m=k}^{n-1}E(T_{m+1})=\sum\limits_{m=k}^{n-1}\frac{n}{n-m}$ Noting that $T_{n}=0$ we have, $E(T_{k})=\sum\limits_{m=k}^{n-1}\frac{n}{n-m}$ Changing the summation variable and then setting $k=n-m$, we get $E(T_{n-m})=n\sum\limits_{k=1}^{m}\frac{1}{k}$ We’re interested in $T_{0}$, so let’s substitute $m=n$ in this last expression: $E(T_{0})=n\sum\limits_{k=1}^{n}\frac{1}{k}$ ### 3.2 Proof 2 of theorem 3 Now, let’s try to find the variance, $V(N)=V(T_{0})$. Let’s square both sides of equation (2). To make the algebra easier, let’s rearrange and note that $I(\frac{m}{n})(1-I(\frac{m}{n}))=I(\frac{m}{n})-I(\frac{m}{n})^{2}=0$. $=>(T_{m}-1)^{2}=I\left(\frac{m}{n}\right)^{2}T_{m}^{\prime 2}+(1+I\left(\frac{m}{n}\right)^{2}-2I\left(\frac{m}{n}\right))T_{m+1}^{2}$ Now, note the following property of Bernoulli random variables: $I(\frac{m}{n})^{2}=I(\frac{m}{n})$. This means: $T_{m}^{2}-2T_{m}+1=I\left(\frac{m}{n}\right)T_{m}^{\prime 2}+(1-I\left(\frac{m}{n}\right))T_{m+1}^{2}$ We have to be careful here to note which random variables are i.i.d. and which are merely identically distributed. Taking expectations and doing some algebra gives us, $\left(1-\frac{m}{n}\right)E(T_{m}^{2})=2E(T_{m})+\left(1-\frac{m}{n}\right)E(T_{m+1}^{2})-1$ $=>E(T_{m}^{2})-E(T_{m+1}^{2})=2E(T_{m})\frac{n}{n-m}-\frac{n}{n-m}$ $=>\sum\limits_{m=0}^{n-1}E(T_{m}^{2})-\sum\limits_{m=0}^{n-1}E(T_{m+1}^{2})=\sum\limits_{m=0}^{n-1}2E(T_{m})\frac{n}{n-m}-\sum\limits_{m=0}^{n-1}\frac{n}{n-m}$ $=>E(T_{0}^{2})-E(T_{n}^{2})=\sum\limits_{m=0}^{n-1}2E(T_{m})\frac{n}{n-m}-\sum\limits_{m=0}^{n-1}\frac{n}{n-m}$ But $T_{n}=0$ and, from the expression derived in section 3.1, $E(T_{m})=n\sum\limits_{k=1}^{n-m}\frac{1}{k}$. So we get: $E(T_{0}^{2})=\sum\limits_{m=0}^{n-1}2E(T_{m})\frac{n}{n-m}-\sum\limits_{m=0}^{n-1}\frac{n}{n-m}$ $=>E(T_{0}^{2})=2n^{2}\sum\limits_{m=0}^{n-1}\frac{1}{n-m}\sum\limits_{k=1}^{n-m}\frac{1}{k}-n\sum\limits_{m=0}^{n-1}\frac{1}{n-m}$ Now, change variables with $j=n-m$: $=>E(T_{0}^{2})=2n^{2}\sum\limits_{j=1}^{n}\frac{1}{j}\sum\limits_{k=1}^{j}\frac{1}{k}-n\sum\limits_{j=1}^{n}\frac{1}{j}$ $=>E(T_{0}^{2})=2n^{2}\sum\limits_{1\leq k\leq j\leq n}\frac{1}{jk}-E(T_{0})$ This can be used in conjunction with the result of theorem 1 to get the variance. $V(T_{0})=2n^{2}\sum\limits_{1\leq k\leq j\leq n}\frac{1}{jk}-E(T_{0})-E(T_{0})^{2}$ Substituting the result of theorem 1, $V(T_{0})=2n^{2}\sum\limits_{1\leq k\leq j\leq n}\frac{1}{jk}-n\sum\limits_{i=1}^{n}\frac{1}{i}-\left(n\sum\limits_{i=1}^{n}\frac{1}{i}\right)^{2}$ (3) Comparing equation 3 above with the result of theorem 3 we get the easily verifiable identity: $2\sum_{1\leq j\leq k\leq n}\frac{1}{jk}=\sum\limits_{i=1}^{n}\frac{1}{i^{2}}+\left(\sum\limits_{i=1}^{n}\frac{1}{i}\right)^{2}$ ## 4 Using a Poisson process to make dependence disappear The Poisson process can be used to magically concoct independent random variables.
This is the most powerful of all the approaches, since it’s the only one that allows us to solve for both the mean and the variance of the coupon collector’s problem in the general case of coupons having unequal probabilities (and for higher moments as well). It is hence able to solve problems P1 through P4. In example 5.17 of [1], the coupon collector’s problem is tackled for the general case where the probability of drawing coupon $j$ is given by $p_{j}$ and, of course, $\sum\limits_{j}p_{j}=1$. Now, he imagines that the collector collects the coupons in accordance with a Poisson process with rate $\lambda=1$. Furthermore, every coupon that arrives is of type $j$ with probability $p_{j}$. Now, he defines $X_{j}$ as the first time a coupon of type $j$ is observed; by the thinning property, the coupons of type $j$ arrive in accordance with a Poisson process with rate $p_{j}$. We’re interested in the time it takes to collect all coupons, $X$ (for now; eventually, we’re interested in the number of coupons to be collected, $N$). So we get: $X=\max_{1\leq j\leq m}X_{j}$ Note that if we denote by $N_{j}$ the number of coupons to be collected before the first coupon of type $j$ is seen, we also have, for the number needed to collect all coupons, $N$: $N=\max_{1\leq j\leq m}N_{j}$ This equation is less useful since the $N_{j}$ are not independent. It can still be used to get the mean (see section 2), but trying to get the variance with this approach is considerably more challenging due to this lack of independence of the underlying random variables (they are positively correlated). But the incredible fact that the $X_{j}$ are independent allows us to get: $F_{X}(t)=P(X<t)=P(X_{j}<t\;\forall\;j)=\prod\limits_{j=1}^{m}(1-e^{-p_{j}t})$ (4) ### 4.1 Proof 2 of theorem 2 Now, Ross uses the expression $E(X)=\int\limits_{0}^{\infty}S_{X}(t)dt$, where $S_{X}(t)$ is the survival function, to get: $E(X)=\int\limits_{0}^{\infty}\left(1-\prod\limits_{j=1}^{m}(1-e^{-p_{j}t})\right)dt$ $=\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{m-1}\frac{1}{p_{1}+\dots+p_{m}}$ and this proves the result of theorem 2. ### 4.2 Proof 4 of theorem 1 In the special case of all coupons having equal probabilities of being collected we have: $p_{j}=\frac{1}{n}\;\forall\;j$ Substituting in the equation above we get: $E(X)=n\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k}$ (5) Let’s solve a general version of the binomial sum in equation 5. ###### Proposition 5. We have the following binomial sum: $\sum_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{r}}=\sum_{1\leq i_{1}\leq i_{2}\leq\dots\leq i_{r}\leq n}\frac{1}{i_{1}i_{2}\dots i_{r}}$ ###### Proof. Using the binomial theorem: $\frac{1-(1-t)^{n}}{t}=\sum\limits_{k=1}^{n}(-1)^{k-1}{{n\choose k}}{t^{k-1}}$ Integrate both sides from $0$ to $x$.
$\int\limits_{0}^{x}\frac{1-(1-t)^{n}}{t}dt=\sum\limits_{k=1}^{n}(-1)^{k-1}{{n\choose k}}\frac{x^{k}}{k}$ For the LHS, let $1-t=u$: $\int\limits_{1}^{1-x}\frac{1-u^{n}}{1-u}(-du)=\sum\limits_{k=1}^{n}(-1)^{k-1}{{n\choose k}}\frac{x^{k}}{k}$ Expanding $\frac{1-u^{n}}{1-u}=\sum_{j=0}^{n-1}u^{j}$, integrating term by term and dividing both sides by $x$ gives: $\frac{\sum\limits_{k=1}^{n}\frac{1-(1-x)^{k}}{k}}{x}=\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k}x^{k-1}$ Integrating both sides from $0$ to $1$, we get: $\sum\limits_{k=1}^{n}\frac{1}{k}\int\limits_{0}^{1}\frac{1-(1-x)^{k}}{x}dx=\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{2}}$ Substituting $1-x=t$ in the integral and expanding the geometric series, we get: $\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{2}}=\sum\limits_{k=1}^{n}\frac{1}{k}\sum\limits_{j=1}^{k}\frac{1}{j}=\sum\limits_{1\leq j\leq k\leq n}\frac{1}{jk}$ This can very easily be extended to $k^{r}$ in the denominator: $\sum_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{r}}=\sum_{1\leq i_{1}\leq i_{2}\leq\dots\leq i_{r}\leq n}\frac{1}{i_{1}i_{2}\dots i_{r}}$ (6) ∎ Substituting $r=1$ in equation 6 and using it in equation 5, we have $E(X)=n\sum\limits_{k=1}^{n}\frac{1}{k}$ Further, Ross shows that $E(N)=E(X)$ using the law of total expectation. First, he notes that $E(X|N)=NE(T_{i})$, where the $T_{i}$ are the inter-arrival times of the coupon arrivals. Since these are assumed to be exponential with rate 1, $E(X|N)=N$ Taking expectations on both sides and using the law of total expectation we get: $E(X)=E(N)$ ### 4.3 Proof 1 of theorem 4 This approach can easily be extended to find the variance, $V(N)$ (not covered by Ross). We can use the following expression to get $E(X^{2})$: $E(X^{2})=\int\limits_{0}^{\infty}2tP(X>t)dt=\int\limits_{0}^{\infty}2t\left(1-\prod\limits_{j=1}^{n}(1-e^{-p_{j}t})\right)dt$ Using the fact that $\int\limits_{0}^{\infty}te^{-pt}dt=\frac{1}{p^{2}}$ and the same algebra as for $E(X)$ we get: $\frac{E(X^{2})}{2}=\sum\frac{1}{p_{j}^{2}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})^{2}}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})^{2}}$ (7) Equation 7 has given us $E(X^{2})$, but remember that we’re interested in finding $E(N^{2})$ and, from there, $V(N)$. So, we need to relate the variances of the two random variables. Using the law of total variance we get: $V(X)=E(V(X|N))+V(E(X|N))$ Since $E(X|N)=N$, as shown above, we have: $V(X)=E(V(X|N))+V(N)$ Now, $V(X|N)=NV(T_{i})$ And since $T_{i}\sim Exp(1)$, we have $V(T_{i})=1$, meaning $V(X|N)=N$. Substituting into the law of total variance, $V(X)=E(N)+V(N)$ So, $V(N)=E(X^{2})-E(N)-E(N)^{2}$ (8) Substituting equation 7 and the result of theorem 2 into equation 8 we get: $V(N)=2\left(\sum\frac{1}{p_{j}^{2}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})^{2}}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})^{2}}\right)-\\\ \left(\sum\frac{1}{p_{j}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})}\right)^{2}-\\\ \left(\sum\frac{1}{p_{j}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})}\right)$ (9) ### 4.4 Proof 3 of theorem 3 Now, let’s consider the special case where all coupons have an equal probability of being selected, in other words $p_{j}=\frac{1}{n}\;\forall\;j$. We get: $\frac{E(X^{2})}{2}=n^{2}\left(\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{2}}\right)$ (10) Using equation 6 with $r=2$ together with equation 10 we get: $E(X^{2})=2n^{2}\left(\sum_{j=1}^{n}\sum_{k=1}^{j}\frac{1}{jk}\right)$ (11) Using equations 11 and 8, we get the same result we got from the recurrence in section 3, equation 3.
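As a cross-check (ours) of the formulas above, a short Monte Carlo simulation of the collection process agrees with Theorems 2 and 4; for the weights below the closed forms give $E(N)\approx 6.65$ and $V(N)\approx 16.07$.

```python
# Monte Carlo estimate of the mean and variance of the coupon collector's N.
import random

def collect(p):
    """Number of draws until every coupon type has been seen at least once."""
    need, draws = set(range(len(p))), 0
    while need:
        draws += 1
        need.discard(random.choices(range(len(p)), weights=p)[0])
    return draws

def mc_mean_var(p, runs=200_000):
    xs = [collect(p) for _ in range(runs)]
    m = sum(xs) / runs
    v = sum((x - m) ** 2 for x in xs) / (runs - 1)
    return m, v

random.seed(0)
print(mc_mean_var([0.5, 0.3, 0.2]))   # close to the exact (6.65, 16.07)
```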
## Acknowledgements I’d like to thank Mathematics Stack Exchange user Simon for encouraging me to convert the Q&A page on this topic into a paper. ## References * [1] Ross, S. (2010). Introduction to Probability Models, 10th ed. Elsevier.
2024-09-04T02:54:58.689254
2020-03-07T13:23:01
2003.04730
{ "authors": "Rapha\\\"el Berthon, Bastien Maubert, Aniello Murano, Sasha Rubin, Moshe\n Vardi", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26134", "submitter": "Bastien Maubert", "url": "https://arxiv.org/abs/2003.04730" }
arxiv-papers
# Strategy Logic with Imperfect Information Raphaël Berthon, École Normale Supérieure de Rennes, Computer Science and Telecommunication, Rennes, France<EMAIL_ADDRESS>, Bastien Maubert, Università degli Studi di Napoli “Federico II”, DIETI, Naples, Italy<EMAIL_ADDRESS>, Aniello Murano, Università degli Studi di Napoli “Federico II”, DIETI, Naples, Italy<EMAIL_ADDRESS>, Sasha Rubin, Università degli Studi di Napoli “Federico II”, DIETI, Naples, Italy<EMAIL_ADDRESS>and Moshe Y. Vardi, Rice University, Houston, Texas, USA<EMAIL_ADDRESS>(September 2018) ###### Abstract We introduce an extension of Strategy Logic for the imperfect-information setting, called $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, and study its model-checking problem. As this logic naturally captures multi-player games with imperfect information, this problem is undecidable; but we introduce a syntactical class of “hierarchical instances” for which, intuitively, as one goes down the syntactic tree of the formula, strategy quantifications are concerned with finer observations of the model, and we prove that model-checking $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ restricted to hierarchical instances is decidable. This result, because it allows for complex patterns of existential and universal quantification on strategies, greatly generalises the decidability of distributed synthesis for systems with hierarchical information. It allows us to easily derive new decidability results concerning strategic problems under imperfect information such as the existence of Nash equilibria, or rational synthesis. To establish this result we go through an intermediary, “low-level” logic much better suited to automata techniques. $\textnormal{{QCTL}}^{*}$ is an extension of $\textnormal{{CTL}}^{*}$ with second-order quantification over atomic propositions that has been used to study strategic logics with perfect information. We extend it to the imperfect-information setting by parameterising second-order quantifiers with observations. The simple syntax of the resulting logic, $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, allows us to provide a conceptually neat reduction of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ that separates concerns, allowing one to forget about strategies and players and focus solely on second-order quantification. While the model-checking problem of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is, in general, undecidable, we identify a syntactic fragment of hierarchical formulas and prove, using an automata-theoretic approach, that it is decidable. We apply our result to solve complex strategic problems in the imperfect-information setting. We first show that the existence of Nash equilibria for deterministic strategies is decidable in games with hierarchical information. We also introduce distributed rational synthesis, a generalisation of rational synthesis to the imperfect-information setting. Because it can easily be expressed in our logic, our main result provides a solution to this problem in the case of hierarchical information. Keywords: strategic reasoning, imperfect information, perfect recall, distributed synthesis, hierarchical information, Nash equilibria, rational synthesis. CCS Concepts: Theory of computation – Logic and verification; Modal and temporal logics; Automata over infinite objects. ## 1. Introduction
Temporal logics such as LTL (Pnueli, 1977) or $\textnormal{{CTL}}^{*}$ (Emerson and Halpern, 1986) are extremely successful logics that have been studied in great detail and extended in many directions over the past decades, notably in relation with the development of the model-checking approach to program verification (Clarke et al., 1999). When considering systems with multiple components such as multi-agent systems or distributed programs, popular extensions of temporal logics are the family of so-called _logics for strategic reasoning_, or _strategic logics_, which introduce operators that can express the existence of strategies for components to ensure that the system’s executions satisfy certain temporal properties. A foundational logic in this family is Alternating-time Temporal Logic (ATL) (Alur et al., 2002). It extends $\textnormal{{CTL}}^{*}$ with a coalition operator $\langle A\rangle\varphi$, where $A$ is a subset of components/agents of the system, which reads as “coalition $A$ has a strategy to enforce property $\varphi$ no matter what the other components/agents do”. This logic is thus quite expressive, as it allows one, for instance, to express the existence of winning strategies in games played on graphs. However it is not well suited to reason about other important solution concepts in game theory, such as Nash equilibria. To address this problem, Strategy Logic (SL) was introduced (Chatterjee et al., 2010a; Mogavero et al., 2014). In SL strategies are treated as first-order objects, thanks to strategy variables $x$ that can be quantified upon and bound to players: $\langle\\!\langle x\rangle\\!\rangle$ reads as “there exists a strategy $x$”, and $(a,x)$ reads as “strategy $x$ is assigned to player $a$”. This leads to a very expressive logic that can express many solution concepts from game theory such as best response, existence of Nash equilibria or subgame-perfect equilibria. Imperfect information. An essential property of realistic multi-player games is that players often have a limited view of the system. Such imperfect information, or partial observation, is usually captured by equipping the models with equivalence relations $o$ (called _observations_) over the state space, that specify indistinguishable states. Strategies are then required to be _uniform_, i.e., they cannot assign different moves to indistinguishable situations. Imperfect information is known to make games computationally harder to solve. For two-player reachability games, Reif showed in (Reif, 1984) that deciding the existence of winning strategies is Exptime-complete for imperfect information, while it is in Ptime for perfect information. This result has later been generalised to omega-regular objectives (Berwanger et al., 2010; Doyen and Raskin, 2011), and adapted to the setting of program synthesis from temporal specifications (Pnueli and Rosner, 1989; Kupferman and Vardi, 1999). In the case of multiple players/components/agents, which interests us here, the situation is even worse: the existence of distributed winning strategies is undecidable already for two players with incomparable observations trying to enforce some reachability objective in the presence of an adversarial third player (Peterson and Reif, 1979), and a similar result was also proved in the framework of distributed synthesis (Pnueli and Rosner, 1990).
Since then, the formal-methods community has spent much effort finding restrictions and variations that ensure decidability (Kupferman and Vardi, 2001; Pnueli and Rosner, 1990; Gastin et al., 2009; Peterson et al., 2002; Finkbeiner and Schewe, 2005; Pinchinat and Riedweg, 2005; Schewe and Finkbeiner, 2007; Berwanger et al., 2018). The common thread in these approaches is hierarchical information: players can be totally ordered according to how well they observe the game. Another line of work establishes that decidability can be retained by forbidding private communication, i.e., by considering variants around the idea that all new information should be public (van der Meyden and Vardi, 1998; van der Meyden and Wilke, 2005; Ramanujam and Simon, 2010; Belardinelli et al., 2017b, a; Bouyer, 2018). Strategy Logic with imperfect information. We propose an extension of Strategy Logic to the imperfect-information setting, which we call $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. The first step is to choose how to introduce imperfect information in the logic. In the formal-methods literature it is typical to associate observations with players. In $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, instead, we associate observations with strategies: the strategy quantifier $\langle\\!\langle x\rangle\\!\rangle{}$ from SL is now parameterised by observation $o$, written $\langle\\!\langle x\rangle\\!\rangle^{o}$. This novelty allows one to express, in the logic, that a player’s observation changes over time, capturing for instance the loss of a sensor resulting in diminished observation power. We also add to our logic $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ the outcome quantifier ${\bf E}$ from Branching-time Strategy Logic (BSL) (Knight and Maubert, 2019), which quantifies on outcomes of strategies currently used by the agents, and the unbinding operator $(a,\operatorname{?})$, which frees an agent from her current strategy. This does not increase the expressivity of the logic but presents advantages that we discuss in Section 2.3. For instance, it allows us to naturally consider nondeterministic strategies (Strategy Logic only considers deterministic ones), which in turn allows us to capture module checking, the extension of model checking to open systems (Kupferman et al., 2001; Jamroga and Murano, 2014, 2015). The logic $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is very powerful: it is an extension of SL (which considers perfect information), and of the imperfect-information strategic logics $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ (Bulling and Jamroga, 2014) and $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$ (Laroussinie et al., 2015). As already mentioned, $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express the distributed synthesis problem (Pnueli and Rosner, 1990). This problem asks whether there are strategies for components $a_{1},\dots,a_{n}$ of a distributed system to enforce some property given as an LTL formula $\psi$ against all behaviours of the environment. This can be expressed by the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula $\Phi_{\textsc{Synth}}:=\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\dots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}(a_{1},x_{1})\dots(a_{n},x_{n}){\bf A}\psi$, where $o_{i}$ represents the local view of component $a_{i}$.
Also, $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express more complicated specifications by alternating quantifiers, binding the same strategy to different agents and rebinding (these are inherited from SL), as well as changing observations. For instance, it can express the existence of Nash equilibria. Main result. Of course, the high expressivity of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ comes at a cost from a computational complexity point of view. Its satisfiability problem is undecidable (this is already true of SL), and so is its model-checking problem (this is already true of $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ even for the single formula $\langle\\{a,b\\}\rangle{\bf F}p$ (Dima and Tiplea, 2011), which means that agents $a$ and $b$ have a strategy profile to reach a situation where $p$ holds). We mentioned that the two main settings in which decidability is retrieved for distributed synthesis are hierarchical information and public actions. We extend the first approach to the setting of strategic logics by introducing a syntactic class of “hierarchical instances” of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, i.e., formula/model pairs, and proving that the model-checking problem on this class of instances is decidable. Intuitively, an instance of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is hierarchical if, as one goes down the syntactic tree of the formula, the observations annotating strategy quantifications can only become finer. Although the class of hierarchical instances refers not only to the syntax of the logic but also to the model, the class is syntactical in the sense that it depends only on the structure of the formula and the observations in the model. Moreover, it is straightforward to check (in linear time) whether an instance is hierarchical or not. Applications. Because the syntax of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ allows for arbitrary alternations of quantifiers in the formulas, our decidability result for hierarchical instances allows one to decide strategic problems more involved than module checking and distributed synthesis. For instance, we show in Section 7 how one can apply our result to establish that the existence of Nash equilibria is decidable in games with imperfect information, in the case of hierarchical observations and deterministic strategies. This problem is relevant as Nash equilibria do not always exist in games with imperfect information (Filiot et al., 2018). We then consider the problem of rational synthesis (Fisman et al., 2010; Kupferman et al., 2016; Condurache et al., 2016; Filiot et al., 2018), both in its cooperative and non-cooperative variants. We introduce the generalisations of these problems to the case of imperfect information, and call them cooperative and non-cooperative _rational distributed synthesis_. We then apply again our main result to establish that they are decidable in hierarchical systems for deterministic strategies. For the non-cooperative variant, we need the additional assumption that the environment is at least as informed as the system. This is the case for example when one ignores the actual observation power of the environment, and considers that it plays with perfect information. Doing so yields systems that are robust to any observation power the environment may have. 
As Reif puts it, this amounts to synthesising strategies that are winning even if the opponent “cheats” and uses information it is not supposed to have access to (Reif, 1984). Approach. In order to solve the model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ we introduce an intermediate logic $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, an extension to the imperfect-information setting of $\textnormal{{QCTL}}^{*}$ (Laroussinie and Markey, 2014), itself an extension of $\textnormal{{CTL}}^{*}$ by second-order quantifiers over atomic propositions. This is a low-level logic that does not mention strategies and into which one can effectively compile instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. States of the models of the logic $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ have internal structure, much like the multi-player game structures from (Peterson et al., 2001) and distributed systems (Halpern and Vardi, 1989). Model-checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is also undecidable (indeed, we show how to reduce from the MSO-theory of the binary tree extended with the equal-length predicate, known to be undecidable (Läuchli and Savioz, 1987)). We introduce the syntactical class $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ of hierarchical formulas as those in which innermost quantifiers observe more than outermost quantifiers, and prove that model-checking is decidable using an extension of the automata-theoretic approach for branching-time logics. We provide a reduction from model checking $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to model checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ that preserves being hierarchical, thus establishing our main contribution, i.e., that model checking the hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is decidable. Complexity. To establish the precise complexity of the problems we solve, we introduce a new measure on formulas called _simulation depth_. This measure resembles the notion of alternation depth (see, e.g., (Mogavero et al., 2014)), which counts alternations between existential and universal strategy (or second-order) quantifications. But instead of merely counting alternations between such operators, simulation depth reflects the underlying automata operations required to treat formulas, while remaining a purely syntactical notion. We prove that the model-checking problems for the hierarchical fragments of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ and $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ are both $(k+1)$-Exptime-complete for formulas of simulation depth at most $k$. Already for the perfect-information fragment, this result is more precise than what was previously known. Indeed, precise upper bounds based on alternation depth were known for syntactic fragments of SL but not for the full logic (Mogavero et al., 2014). Related work. The literature on imperfect information in formal methods and artificial intelligence is vast.
Imperfect information has been considered in two-player games (Reif, 1984; Doyen and Raskin, 2011; Berwanger et al., 2010), module checking (Kupferman et al., 2001; Jamroga and Murano, 2015), distributed synthesis of reactive systems (Pnueli and Rosner, 1990; Kupferman and Vardi, 2001; Finkbeiner and Schewe, 2005), strategies in multiplayer games (Peterson and Reif, 1979; Peterson et al., 2002; Berwanger et al., 2018), Nash equilibria (Ramanujam and Simon, 2010; Bouyer et al., 2017; Bouyer, 2018), rational synthesis (Filiot et al., 2018; Gutierrez et al., 2018), doomsday equilibria (Chatterjee et al., 2017), admissible strategies (Brenguier et al., 2017), quantitative objectives (Degorre et al., 2010; Pérez, 2017), and more, some of which we detail below. Limited alternation of strategy quantification was studied in (Chatterjee and Doyen, 2014a), in which several decidability results are proved for two and three alternations of existential and universal quantifiers. Except for one where the first player has perfect information, all the problems solved in this work are hierarchical instances, and are thus particular cases of our main result. Quantified $\mu$-Calculus with partial observation is studied in (Pinchinat and Riedweg, 2005), where the model-checking problem is solved by considering a syntactic constraint based on hierarchical information, as we do for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. However, they consider asynchronous perfect recall, and the automata techniques they use to deal with imperfect information cannot be used in the synchronous perfect-recall setting that we consider in this work. Similarly, the narrowing operation on tree automata (see Section 4.1), which is crucial in our model-checking procedure, considers synchronous perfect recall and does not seem easy to adapt to the asynchronous setting. A number of works have considered strategic logics with imperfect information. Various semantics for ATL with imperfect information have been studied in, e.g., (Jamroga and Bulling, 2011; Jamroga and van der Hoek, 2004). The model-checking problem for these logics, which is undecidable for agents with perfect recall (Dima and Tiplea, 2011), has been studied for agents with bounded memory, for which decidability is recovered (Schobbens, 2004; Lomuscio and Raimondi, 2006). An epistemic strategic logic with original operators different from those of ATL and SL is proposed in (Huang and Van Der Meyden, 2014). It considers imperfect-information strategies, but only for agents without memory. Concerning perfect recall, which interests us in this work, decidability results have also been obtained for ATL (Guelev et al., 2011) and ATL with strategy context (Laroussinie et al., 2015) when agents have the same information. In (Knight and Maubert, 2019), a branching-time variant of SL is extended with epistemic operators and agents with perfect recall. Strategies are not required to be uniform in the semantics, but this requirement can be expressed in the language. However, no decidability result is provided. Another variant of SL extended with epistemic operators and imperfect-information, perfect-recall strategies is presented in (Belardinelli, 2015), but model checking is not studied. The latter logic is extended in (Belardinelli et al., 2017a), in which its model-checking problem is solved on the class of systems where all agents’ actions are public, which is an assumption orthogonal to hierarchical information.
The work closest to ours is (Finkbeiner and Schewe, 2010), which introduces a logic CL in which one can encode many distributed synthesis problems. In this logic, hierarchical information is a necessary consequence of the syntax and semantics, and as a result its model-checking problem is decidable. However, CL is close in spirit to our $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$, and its semantics is less intuitive than that of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. Furthermore, by means of a natural translation we derive that CL is strictly included in the hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ (Section 6.2). In particular, hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express non-observable goals, while CL cannot. When considering players that choose their own goals it may be natural to assume that they can observe the facts that define whether their objectives are satisfied or not. But when synthesising programs for instance, it may be enough that their behaviours enforce the desired properties, without them having the knowledge that it is enforced. Such non-observable winning conditions have been studied in, e.g., (Chatterjee and Doyen, 2010; Degorre et al., 2010; Berwanger et al., 2018). Outline. In Section 2 we define $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ and hierarchical instances, and present some examples. In Section 3 we define $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ and its hierarchical fragment $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$. The proof that model checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ is decidable, including the required automata preliminaries, is in Section 4. The hierarchy-preserving translation of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ into $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is in Section 5. In Section 6 we compare $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ with related logics, and in Section 7 we apply our main result to obtain decidability results for various strategic problems under imperfect information. Finally, we conclude and discuss future work in Section 8.

## 2. SL with imperfect information

In this section we introduce $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, an extension of SL to the imperfect-information setting with synchronous perfect recall. Our logic presents several original features compared to SL, which we discuss in detail in Section 2.3: we introduce an _outcome quantifier_ akin to the path quantifier in branching-time temporal logics, we allow for nondeterministic strategies and unbinding agents from their strategies, and we annotate strategy quantifiers with observation symbols which denote the information available to strategies. We first fix some basic notations.

### 2.1. Notations

Let $\Sigma$ be an alphabet. A _finite_ (resp. _infinite_) _word_ over $\Sigma$ is an element of $\Sigma^{*}$ (resp. $\Sigma^{\omega}$). Words are written $w=w_{0}w_{1}w_{2}\ldots$, i.e., indexing begins with $0$. The _length_ of a finite word $w=w_{0}w_{1}\ldots w_{n}$ is $|w|:=n+1$, and $\mbox{last}(w):=w_{n}$ is its last letter. Given a finite (resp. infinite) word $w$ and $0\leq i<|w|$ (resp. $i\in\mathbb{N}$), we let $w_{i}$ denote the letter at position $i$ in $w$, $w_{\leq i}$ the prefix of $w$ that ends at position $i$, and $w_{\geq i}$ the suffix of $w$ that starts at position $i$.
We write $w\preccurlyeq w^{\prime}$ if $w$ is a prefix of $w^{\prime}$, and $\textit{pref}\,(w)$ is the set of finite prefixes of word $w$. Finally, the domain of a mapping $f$ is written $\textit{dom}(f)$, its codomain $\textit{codom}(f)$, and for $n\in\mathbb{N}$ we let $[n]:=\\{i\in\mathbb{N}:1\leq i\leq n\\}$.

### 2.2. Syntax

For the rest of the paper, for convenience we fix a number of parameters for our logics and models: AP is a finite non-empty set of _atomic propositions_, Ag is a finite non-empty set of _agents_ or _players_, and Var is a finite non-empty set of _variables_. The main novelty of our logic is that we specify which information is available to a strategy, by annotating strategy quantifiers $\langle\\!\langle x\rangle\\!\rangle$ with _observation symbols_ $o$ from a finite set Obs, that we also fix for the rest of the paper. When we consider model-checking problems, these data are implicitly part of the input. ###### Definition 2.1 ($\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ Syntax). The syntax of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is defined by the following grammar: $\displaystyle\varphi:=$ $\displaystyle\;p\mid\neg\varphi\mid\varphi\vee\varphi\mid\langle\\!\langle x\rangle\\!\rangle^{o}\varphi\mid(a,x)\varphi\mid(a,\operatorname{?})\varphi\mid{\bf E}\psi$ $\displaystyle\psi:=$ $\displaystyle\;\varphi\mid\neg\psi\mid\psi\vee\psi\mid{\bf X}\psi\mid\psi{\bf U}\psi$ where $p\in\textnormal{AP}$, $x\in\textnormal{Var}$, $o\in\textnormal{Obs}$ and $a\in\textnormal{Ag}$. Formulas of type $\varphi$ are called _state formulas_, those of type $\psi$ are called _path formulas_, and $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ consists of all the state formulas defined by the grammar. Boolean operators and temporal operators, ${\bf X}$ (read “next”) and ${\bf U}$ (read “until”), have the usual meaning. The _strategy quantifier_ $\langle\\!\langle x\rangle\\!\rangle^{o}$ is a first-order-like quantification on strategies: $\langle\\!\langle x\rangle\\!\rangle^{o}\varphi$ reads as “there exists a strategy $x$ that takes decisions based on observation $o$ such that $\varphi$ holds”, where $x$ is a strategy variable. The _binding operator_ $(a,x)$ assigns a strategy to an agent, and $(a,x)\varphi$ reads as “when agent $a$ plays strategy $x$, $\varphi$ holds”. The _unbinding operator_ $(a,\operatorname{?})$ instead releases agent $a$ from her current strategy, if she has one, and $(a,\operatorname{?})\varphi$ reads as “when agent $a$ is not assigned any strategy, $\varphi$ holds”. Finally, the _outcome quantifier_ ${\bf E}$ quantifies on outcomes of strategies currently in use: ${\bf E}\psi$ reads as “$\psi$ holds in some outcome of the strategies currently used by the players”. We use abbreviations $\top:=p\vee\neg p$, $\perp:=\neg\top$, $\varphi\to\varphi^{\prime}:=\neg\varphi\vee\varphi^{\prime}$, $\varphi\leftrightarrow\varphi^{\prime}:=(\varphi\to\varphi^{\prime})\wedge(\varphi^{\prime}\to\varphi)$ for boolean connectives, ${\bf F}\varphi:=\top{\bf U}\varphi$ (read “eventually $\varphi$”), ${\bf G}\varphi:=\neg{\bf F}\neg\varphi$ (read “globally $\varphi$”) for temporal operators, $[\\![x]\\!]^{o}\varphi:=\neg\langle\\!\langle x\rangle\\!\rangle^{o}\neg\varphi$ (read “for all strategies $x$ based on observation $o$, $\varphi$ holds”) and ${\bf A}\psi:=\neg{\bf E}\neg\psi$ (read “all outcomes of the current strategies satisfy $\psi$”).
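To make the grammar concrete, the following sketch renders $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formulas as an algebraic datatype in Python. It is illustrative only: the class names, the untyped mixing of state and path formulas, and the encoding of $\top$ as an atom are our choices, not part of the logic's definition.

```python
from dataclasses import dataclass

# One constructor per production of Definition 2.1. State and path
# formulas are mixed in a single untyped AST for brevity.
@dataclass(frozen=True)
class Atom:        # p
    p: str

@dataclass(frozen=True)
class Not:         # negation of a state or path formula
    sub: object

@dataclass(frozen=True)
class Or:          # disjunction of state or path formulas
    left: object
    right: object

@dataclass(frozen=True)
class Exists:      # <<x>>^o phi : strategy quantifier with observation o
    var: str
    obs: str
    sub: object

@dataclass(frozen=True)
class Bind:        # (a, x) phi : bind strategy x to agent a
    agent: str
    var: str
    sub: object

@dataclass(frozen=True)
class Unbind:      # (a, ?) phi : release agent a from her strategy
    agent: str
    sub: object

@dataclass(frozen=True)
class E:           # E psi : outcome quantifier
    sub: object

@dataclass(frozen=True)
class X:           # X psi
    sub: object

@dataclass(frozen=True)
class U:           # psi U psi'
    left: object
    right: object

# Derived operators, following the abbreviations above.
def A(psi): return Not(E(Not(psi)))        # A psi := not E not psi
def F(psi): return U(Atom("true"), psi)    # F psi := true U psi

# The distributed-synthesis formula Phi_Synth for two components:
phi_synth = Exists("x1", "o1", Exists("x2", "o2",
            Bind("a1", "x1", Bind("a2", "x2", A(F(Atom("win")))))))
```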
For every formula $\varphi\in\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, we let $\textit{free}\,(\varphi)$ be the set of variables that appear free in $\varphi$, i.e., that appear out of the scope of a strategy quantifier. A formula $\varphi$ is a _sentence_ if $\textit{free}\,(\varphi)$ is empty. Finally, we let the _size_ $|\varphi|$ of a formula $\varphi$ be the number of symbols in $\varphi$.

### 2.3. Discussion on the syntax

We discuss the syntactic differences between our logic and usual Strategy Logic. Outcome quantifier. This quantifier was introduced in Branching-time Strategy Logic (BSL) (Knight and Maubert, 2019), which corresponds to the perfect-information fragment of the logic we define here. It removes a quirk of previous definitions, in which temporal operators could only be evaluated in contexts where all agents were assigned a strategy. The outcome quantifier, instead, allows for evaluation of temporal properties on partial assignments. As a result, the notions of free agents and agent-complete assignments from previous definitions of Strategy Logic are no longer needed (see, e.g., (Mogavero et al., 2014)). In addition, the outcome quantifier highlights the inherent branching-time nature of Strategy Logic: indeed, in SL, branching-time properties can be expressed by resorting to artificial strategy quantifications for all agents. It will also make the correspondence with $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ tighter, which will allow us to establish the precise complexity of the problem we solve, while the exact complexity of model checking classic SL with perfect information is still not known. Finally, since the usual definition of SL requires that the current strategies define a unique outcome on which linear-time temporal operators are evaluated, only deterministic strategies were considered. The introduction of the outcome quantifier allows us to consider nondeterministic strategies. Unbinding. With the possibility to evaluate temporal operators even when some agents are not bound to any strategy, it becomes interesting to include the unbinding operator $(a,\operatorname{?})$, introduced in (Laroussinie and Markey, 2015) for ATL with strategy context and also present in BSL. Note that the outcome quantifier and unbinding operator do not increase the expressivity of SL, at the level of sentences (Knight and Maubert, 2019). Observations. In games with imperfect information and ATL-like logics with imperfect information, a strategy is always bound to some player, and thus it is clear with regard to which observation it should be defined. In SL on the other hand, strategy quantification and binding are separate. This adds expressive power compared to ATL by allowing one, for instance, to assign the same strategy to two different players, but it also entails that when a quantification is made on a strategy, one does not know with regard to which observation this strategy should be defined. We know of three ways to solve this. One is the approach followed here, which consists in associating with strategy quantifiers an observation power. The second solution is to abandon the separation between quantification and binding and to use instead quantifiers of the form $\exists_{a}$, meaning “there exists a strategy for player $a$”, as in (Chatterjee et al., 2010b; Belardinelli, 2014): with this operator, the strategy is immediately bound to player $a$, which indicates with regard to which observation the strategy should be compatible.
The third one, adopted in (Belardinelli et al., 2017a), consists in requiring that a strategy be uniform for all agents to whom it will be bound in the formula. We chose to adopt the first solution for its simplicity and expressiveness. Indeed, the second solution limits expressiveness by disallowing, for instance, binding the same strategy to different agents. The third solution leads to a logic that is more expressive than the second one, but less than the first one. Indeed, the logic that we study here can capture the logic from (Belardinelli et al., 2017a) (assuming that models contain observations corresponding to unions of individual observations), and in addition $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express changes of agents’ observation power.

### 2.4. Semantics

The models of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ are classic concurrent game structures extended by an interpretation for observation symbols in Obs.

###### Definition 2.2 ($\textrm{CGS}_{\textnormal{ii}}$).

A _concurrent game structure with imperfect information_ (or $\textrm{CGS}_{\textnormal{ii}}$ for short) is a tuple $\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$ where

* • Ac is a finite non-empty set of _actions_,
* • $V$ is a finite non-empty set of _positions_,
* • $E:V\times\textnormal{Ac}^{\textnormal{Ag}}\to V$ is a _transition function_,
* • $\ell:V\to 2^{\textnormal{AP}}$ is a _labelling function_,
* • $v_{\iota}\in V$ is an _initial position_, and
* • $\mathcal{O}:\textnormal{Obs}\to 2^{V\times V}$ is an _observation interpretation_.

For $o\in\textnormal{Obs}$, $\mathcal{O}(o)$ is an equivalence relation on positions, that we may write $\sim_{o}$. It represents what a strategy with observation $o$ can see: $\mathcal{O}(o)$-equivalent positions are indistinguishable to such a strategy. Also, $\ell(v)$ is the set of atomic propositions that hold in position $v$. We define the size $|\mathcal{G}|$ of a $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$ as the size of an explicit encoding of the transition function: $|\mathcal{G}|:=|V|\times|\textnormal{Ac}|^{|\textnormal{Ag}|}\times\lceil\log(|V|)\rceil$. We may write $v\in\mathcal{G}$ for $v\in V$. We now introduce a number of notions involved in the semantics of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. Consider a $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$. Joint actions. In a position $v\in V$, each player $a$ chooses an action $c_{a}\in\textnormal{Ac}$, and the game proceeds to position $E(v,\bm{c})$, where $\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$ stands for the _joint action_ $(c_{a})_{a\in\textnormal{Ag}}$. Given a joint action $\bm{c}=(c_{a})_{a\in\textnormal{Ag}}$ and $a\in\textnormal{Ag}$, we let $\bm{c}_{a}$ denote $c_{a}$. Plays. A _finite_ (resp. _infinite_) _play_ is a finite (resp. infinite) word $\rho=v_{0}\ldots v_{n}$ (resp. $\pi=v_{0}v_{1}\ldots$) such that $v_{0}=v_{\iota}$ and for every $i$ such that $0\leq i<|\rho|-1$ (resp. $i\geq 0$), there exists a joint action $\bm{c}$ such that $E(v_{i},\bm{c})=v_{i+1}$. Strategies. A (nondeterministic) _strategy_ is a function $\sigma:V^{+}\to 2^{\textnormal{Ac}}\setminus\\{\emptyset\\}$ that maps each finite play to a nonempty finite set of actions that the player may play. A strategy $\sigma$ is _deterministic_ if for all $\rho$, $\sigma(\rho)$ is a singleton. We let Str denote the set of all strategies.
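For intuition, here is a small sketch of a $\textrm{CGS}_{\textnormal{ii}}$ as a concrete data structure, with a helper checking that a word of positions is a play in the sense just defined. The sketch is ours and purely illustrative; the field names and the encoding choices are assumptions, not the paper's notation.

```python
from itertools import product

class CGS:
    """A concurrent game structure with imperfect information (Definition 2.2).

    `trans` maps (position, joint action) to a position, where a joint action
    is a tuple with one action per agent, in the fixed order of `agents`.
    `obs` maps each observation symbol to its equivalence relation, encoded
    here as a set of frozensets of positions (the equivalence classes).
    """
    def __init__(self, agents, actions, positions, trans, label, init, obs):
        self.agents, self.actions, self.positions = agents, actions, positions
        self.trans, self.label, self.init, self.obs = trans, label, init, obs

    def successors(self, v):
        """All positions reachable from v in one step, over all joint actions."""
        return {self.trans[(v, c)]
                for c in product(self.actions, repeat=len(self.agents))}

    def is_play(self, word):
        """Check that the nonempty `word` is a finite play: it starts in the
        initial position and each step is allowed by the transition function."""
        return word[0] == self.init and all(
            w2 in self.successors(w1) for w1, w2 in zip(word, word[1:]))
```

In this representation a nondeterministic strategy is simply a function from finite plays to nonempty sets of actions; the uniformity constraint for $o$-strategies, defined next, additionally requires equal values on $\sim_{o}$-equivalent plays.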
Assignments. An _assignment_ is a partial function $\chi:\textnormal{Ag}\cup\textnormal{Var}\rightharpoonup\mbox{\emph{Str}}$, assigning to each player and variable in its domain a strategy. For an assignment $\chi$, a player $a$ and a strategy $\sigma$, $\chi[a\mapsto\sigma]$ is the assignment of domain $\textit{dom}(\chi)\cup\\{a\\}$ that maps $a$ to $\sigma$ and is equal to $\chi$ on the rest of its domain, and $\chi[x\mapsto\sigma]$ is defined similarly, where $x$ is a variable; also, $\chi[a\mapsto\operatorname{?}]$ is the restriction of $\chi$ to domain $\textit{dom}(\chi)\setminus\\{a\\}$. In addition, given a formula $\varphi\in\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, an assignment is _variable-complete for $\varphi$_ if its domain contains all free variables of $\varphi$. Outcomes. For an assignment $\chi$ and a finite play $\rho$, we let $\textnormal{Out}(\chi,\rho)$ be the set of infinite plays that start with $\rho$ and are then extended by letting players follow the strategies assigned by $\chi$. Formally, $\textnormal{Out}(\chi,\rho)$ is the set of plays of the form $\rho\cdot v_{1}v_{2}\ldots$ such that for all $i\geq 0$, there exists $\bm{c}$ such that for all $a\in\textit{dom}(\chi)\cap\textnormal{Ag}$, $\bm{c}_{a}\in\chi(a)(\rho\cdot v_{1}\ldots v_{i})$ and $v_{i+1}=E(v_{i},\bm{c})$, with $v_{0}=\mbox{last}(\rho)$. Synchronous perfect recall. In this work we consider players with _synchronous perfect recall_, meaning that each player remembers the whole history of a play, a classic assumption in games with imperfect information and logics of knowledge and time. Each observation relation is thus extended to finite plays as follows: $\rho\sim_{o}\rho^{\prime}$ if $|\rho|=|\rho^{\prime}|$ and $\rho_{i}\sim_{o}\rho^{\prime}_{i}$ for every $i\in\\{0,\ldots,|\rho|-1\\}$. Imperfect-information strategies. For $o\in\textnormal{Obs}$, a strategy $\sigma$ is an _$o$ -strategy_ if $\sigma(\rho)=\sigma(\rho^{\prime})$ whenever $\rho\sim_{o}\rho^{\prime}$. The latter constraint captures the essence of imperfect information, which is that players can base their strategic choices only on the information available to them. For $o\in\textnormal{Obs}$ we let $\mbox{\emph{Str}}_{o}$ be the set of all $o$-strategies. ###### Definition 2.3 ($\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ semantics). The semantics of a state formula is defined on a $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$, an assignment $\chi$ that is variable-complete for $\varphi$, and a finite play $\rho$. For a path formula $\psi$, the finite play is replaced with an infinite play $\pi$ and an index $i\in\mathbb{N}$. The definition by mutual induction is as follows: $\begin{array}[]{lcl}\mathcal{G},\chi,\rho\models p&\text{ if }&p\in\ell(\mbox{last}(\rho))\\\\[1.0pt] \mathcal{G},\chi,\rho\models\neg\varphi&\text{ if }&\mathcal{G},\chi,\rho\not\models\varphi\\\\[1.0pt] \mathcal{G},\chi,\rho\models\varphi\vee\varphi^{\prime}&\text{ if }&\mathcal{G},\chi,\rho\models\varphi\;\text{ or }\;\mathcal{G},\chi,\rho\models\varphi^{\prime}\\\\[1.0pt] \mathcal{G},\chi,\rho\models\langle\\!\langle x\rangle\\!\rangle^{o}\varphi&\text{ if }&\exists\,\sigma\in\mbox{\emph{Str}}_{o}\;\text{ s.t.
}\;\mathcal{G},\chi[x\mapsto\sigma],\rho\models\varphi\\\\[1.0pt] \mathcal{G},\chi,\rho\models(a,x)\varphi&\text{ if }&\mathcal{G},\chi[a\mapsto\chi(x)],\rho\models\varphi\\\\[1.0pt] \mathcal{G},\chi,\rho\models(a,\operatorname{?})\varphi&\text{ if }&\mathcal{G},\chi[a\mapsto\operatorname{?}],\rho\models\varphi\\\\[1.0pt] \mathcal{G},\chi,\rho\models{\bf E}\psi&\text{ if }&\text{there exists }\pi\in\textnormal{Out}(\chi,\rho)\text{ such that }\mathcal{G},\chi,\pi,|\rho|-1\models\psi\\\\[5.0pt] \mathcal{G},\chi,\pi,i\models\varphi&\text{ if }&\mathcal{G},\chi,\pi_{\leq i}\models\varphi\\\\[1.0pt] \mathcal{G},\chi,\pi,i\models\neg\psi&\text{ if }&\mathcal{G},\chi,\pi,i\not\models\psi\\\\[1.0pt] \mathcal{G},\chi,\pi,i\models\psi\vee\psi^{\prime}&\text{ if }&\mathcal{G},\chi,\pi,i\models\psi\;\text{ or }\;\mathcal{G},\chi,\pi,i\models\psi^{\prime}\\\\[1.0pt] \mathcal{G},\chi,\pi,i\models{\bf X}\psi&\text{ if }&\mathcal{G},\chi,\pi,i+1\models\psi\\\\[1.0pt] \mathcal{G},\chi,\pi,i\models\psi{\bf U}\psi^{\prime}&\text{ if }&\exists\,j\geq i\mbox{ s.t. }\mathcal{G},\chi,\pi,j\models\psi^{\prime}\\\ &&\text{ and }\forall\,k\text{ s.t. }i\leq k<j,\;\mathcal{G},\chi,\pi,k\models\psi\end{array}$ ###### Remark 1. Observe that because of the semantics of the outcome quantifier, and unlike usual definitions of SL, the meaning of an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ sentence depends on the assignment in which it is evaluated. For instance the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula ${\bf A}{\bf F}p$ is clearly a sentence, but whether $\mathcal{G},\chi,\rho\models{\bf A}{\bf F}p$ holds or not depends on which agents are bound to a strategy in $\chi$ and what these strategies are. However, as usual, a sentence does not require an assignment to be evaluated, and for an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ sentence $\varphi$ we let $\mathcal{G},\rho\models\varphi$ if $\mathcal{G},\emptyset,\rho\models\varphi$ for the empty assignment $\emptyset$, and we write $\mathcal{G}\models\varphi$ if $\mathcal{G},v_{\iota}\models\varphi$. SL is the fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ obtained by interpreting all observation symbols as the identity relation (which models perfect information), restricting to deterministic strategies, and considering only assignments in which each agent has a strategy (in this case the outcome of an assignment consists of a single play; one can thus get rid of the outcome quantifier and evaluate temporal operators in the unique outcome of the current assignment, as usually done in SL). Also, $\textnormal{{CTL}}^{*}$ is the fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ which uses no binding, unbinding or strategy quantification. ### 2.5. Discussion on the semantics We now discuss some aspects of the semantics. Evaluation on finite plays. Unlike previous definitions of Strategy Logic, we evaluate formulas on finite plays (instead of positions), where the finite play represents the whole history starting from the initial position of the $\textrm{CGS}_{\textnormal{ii}}$ in which the formula is evaluated. There are several reasons to do so. First, it allows us to define the semantics more simply without having to resort to the notion of assignment translations. Second, it makes it easier to see the correctness of the reduction to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, that we present in Section 5. 
In SL, a strategy only has access to the history of the game starting from the point where the strategy quantifier from which it arises has been evaluated. In contrast, in $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ strategies have access to the whole history, starting from the initial position. However this does not affect the semantics, in the sense that the perfect-information fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ with deterministic strategies corresponds to SL. Indeed, when agents have perfect information, having access to the past or not does not affect the existence of strategies to enforce temporal properties that only concern the future. Players not remembering their actions. Our definition of synchronous perfect recall only considers the sequence of positions in finite plays, and forgets about actions taken by players. In particular, it is possible in this definition that a player cannot distinguish between two finite plays in which she plays different actions. This definition is standard in games with imperfect information (van der Meyden and Wilke, 2005; Berwanger et al., 2010; Doyen and Raskin, 2011; Berwanger et al., 2018), since remembering one’s actions or not is indifferent for the existence of distributed winning strategies or Nash equilibria. However it makes a difference for some more involved solution concepts that are expressible in strategic logics such as $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. For instance it is observed in (Bouyer, 2017, Appendix A) that some games admit subgame-perfect equilibria only if agents remember their own past actions. Nonetheless we consider the setting where agents do not remember their actions, as it is the most general. Indeed, as noted in (Chatterjee and Doyen, 2014b, Remark 2.1, p.8), one can simulate agents that remember their own actions by storing in positions of the game the information of the last joint move played (this may create $|\textnormal{Ac}|^{|\textnormal{Ag}|}$ copies of each position, but the branching degree is unchanged). One can then adapt indistinguishability relations to take actions into account. For instance, for an observation symbol $o$ and an agent $a$, one could consider a new observation symbol $o_{a}$ that would be interpreted in the enriched game structure as the refinement of $\sim_{o}$ that considers two positions indistinguishable if they are indistinguishable for $\sim_{o}$ and contain the same last action for agent $a$. Binding agent $a$ only to strategies that use observation of the form $o_{a}$ for some $o$ captures the fact that agent $a$ remembers her actions. Agents changing observation. In $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ observations are not bound to agents but to strategies. And because agents can change their strategy thanks to the binding operator, it follows that they can change observation, or more precisely they can successively play with strategies that have different observations. For instance consider a controller that observes a system through a set of $n$ sensors $S=\\{s_{1},\ldots,s_{n}\\}$ as in, e.g., (Bittner et al., 2012). Let $o_{i}$ be the observation power provided by the set of sensors $S\setminus\\{s_{i}\\}$ (one can think of a system where states are tuples of local states, each sensor observing one component). Also let $o$ be the observation power provided by the full set $S$ of sensors, and let atom $\text{fault}_{i}$ represent the fact that a fault occurs on sensor $s_{i}$. 
The formula $\varphi:=\langle\\!\langle x\rangle\\!\rangle^{o}(a,x){\bf A}{\bf G}\left(\text{safe}\wedge\bigwedge_{i=1}^{n}\text{fault}_{i}\to\langle\\!\langle x\rangle\\!\rangle^{o_{i}}(a,x){\bf A}{\bf G}\text{\,safe}_{i}\right)$ expresses that the controller $a$ has a strategy (which uses all sensors in $S$) to maintain the system safe, and if a sensor is lost, it can respond by switching to a strategy using the remaining sensors to maintain some alternative, possibly weaker, security requirement $\text{safe}_{i}$.

### 2.6. Model checking and hierarchical instances

We now introduce the main decision problem of this paper, which is the model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. An _$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ -instance_ is a model together with a formula, i.e., it is a pair $(\mathcal{G},\Phi)$ where $\mathcal{G}$ is a $\textrm{CGS}_{\textnormal{ii}}$ and $\Phi\in\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. ###### Definition 2.4 (Model checking $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$). The _model-checking problem_ for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is the decision problem that, given an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance $(\mathcal{G},\Phi)$, returns ‘Yes’ if $\mathcal{G}\models\Phi$, and ‘No’ otherwise. It is well known that deciding the existence of winning strategies in multi-player games with imperfect information is undecidable for reachability objectives (Peterson et al., 2001). Since this problem is easily reduced to the model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, we get the following result. ###### Theorem 2.5. The model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is undecidable. Hierarchical instances. We now isolate a sub-problem obtained by restricting attention to _hierarchical instances_. Intuitively, an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance $(\mathcal{G},\Phi)$ is hierarchical if, as one goes down a path in the syntactic tree of $\Phi$, the observations tied to quantifications become finer. ###### Definition 2.6 (Hierarchical instances). An $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance $(\mathcal{G},\Phi)$ is _hierarchical_ if for every subformula $\varphi_{1}=\langle\\!\langle y\rangle\\!\rangle^{o_{1}}\varphi^{\prime}_{1}$ of $\Phi$ and subformula $\varphi_{2}=\langle\\!\langle x\rangle\\!\rangle^{o_{2}}\varphi^{\prime}_{2}$ of $\varphi^{\prime}_{1}$, it holds that $\mathcal{O}(o_{2})\subseteq\mathcal{O}(o_{1})$. If $\mathcal{O}(o_{2})\subseteq\mathcal{O}(o_{1})$ we say that $o_{2}$ is _finer_ than $o_{1}$ in $\mathcal{G}$, and that $o_{1}$ is _coarser_ than $o_{2}$ in $\mathcal{G}$. Intuitively, this means that a player with observation $o_{2}$ observes game $\mathcal{G}$ at least as well as, i.e., knows at least as much as, a player with observation $o_{1}$. ###### Remark 2. If one uses the trick described in Section 2.5 to model agents that remember their own actions, then for an agent $a$ to know at least as much as another agent $b$ it needs to be the case that, in particular, agent $a$ observes all actions played by agent $b$.
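As noted in the introduction, hierarchy is a purely syntactic property that can be checked in linear time. The following sketch is ours (reusing the illustrative AST from Section 2.2) and exploits the transitivity of $\subseteq$: it suffices to compare each quantifier's observation with that of the innermost enclosing quantifier.

```python
def is_hierarchical(phi, O, enclosing=None):
    """Check Definition 2.6 along the syntax tree of `phi`.

    `O` maps each observation symbol to its relation O(o), encoded as a
    set of pairs of positions.  Since subset inclusion is transitive,
    comparing each quantifier with the innermost enclosing one suffices
    to guarantee O(o2) <= O(o1) for every nested pair of quantifiers."""
    if isinstance(phi, Exists):
        if enclosing is not None and not (O[phi.obs] <= O[enclosing]):
            return False
        return is_hierarchical(phi.sub, O, phi.obs)
    # All other constructors: recurse into subformulas, same enclosing symbol.
    for child in ("sub", "left", "right"):
        sub = getattr(phi, child, None)
        if sub is not None and not is_hierarchical(sub, O, enclosing):
            return False
    return True
```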
###### Example 2.7 (Fault-tolerant diagnosability).

Consider the following formula from Section 2.5: $\varphi:=\langle\\!\langle x\rangle\\!\rangle^{o}(a,x){\bf A}{\bf G}\left(\text{safe}\wedge\bigwedge_{i=1}^{n}\text{fault}_{i}\to\langle\\!\langle x\rangle\\!\rangle^{o_{i}}(a,x){\bf A}{\bf G}\text{\,safe}_{i}\right)$ As already discussed, it expresses that the controller can react to the loss of a sensor to keep ensuring some property of the system. Clearly, the controller’s observation $o_{i}$ after the loss of sensor $i$ is coarser than its original observation $o$, and thus formula $\varphi$ in such a system does not form a hierarchical instance. We now give an example of a scenario where hierarchical instances occur naturally. ###### Example 2.8 (Security levels). Consider a system with different “security levels”, where higher levels have access to more data (i.e., can observe more). Assume that the $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ is such that $\mathcal{O}(o_{n})\subseteq\mathcal{O}(o_{n-1})\subseteq\ldots\subseteq\mathcal{O}(o_{1})$: in other words, level $n$ has the highest security clearance, while level $1$ has the lowest. Consider that agent $a$ wants to reach some objective marked by atom “goal”, that she starts with the lowest clearance $o_{1}$, and that atomic formula “$\text{promote}_{i}$” means that the agent is granted access to level $i$ (observe that whenever we have $\text{promote}_{i}$, we should also have $\text{promote}_{j}$ for all $j<i$). For every $i$ we let $\varphi_{i}(\varphi^{\prime}):=\text{goal}\vee(\text{promote}_{i}\wedge\langle\\!\langle x\rangle\\!\rangle^{o_{i}}(a,x){\bf A}{\bf F}\varphi^{\prime})$. Now the formula $\varphi:=\varphi_{1}(\varphi_{2}(\ldots\varphi_{n-1}(\varphi_{n}(\text{goal}))\ldots))$ means that agent $a$ can enforce her goal, possibly by first getting access to higher security levels and using this additional observation power to reach the goal. Because the strategy quantifications that are deeper in the formula have access to more information, this formula forms a hierarchical instance in $\mathcal{G}$. Here is the main contribution of this work: ###### Theorem 2.9. The model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ restricted to the class of hierarchical instances is decidable. We prove this result in Section 5 by reducing it to the model-checking problem for the hierarchical fragment of a logic called $\textnormal{{QCTL}}^{*}$ with imperfect information, which we now introduce and study in order to use it as an intermediate, “low-level” logic between tree automata and $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. We then discuss some applications of this theorem in Section 7.

## 3. $\textnormal{{QCTL}}^{*}$ with imperfect information

In this section we introduce an imperfect-information extension of $\textnormal{{QCTL}}^{*}$ (Sistla, 1983; Kupferman, 1999; Kupferman et al., 2000a; French, 2001; Laroussinie and Markey, 2014), which is an extension of $\textnormal{{CTL}}^{*}$ with second-order quantification on atomic propositions. In order to introduce imperfect information, instead of considering equivalence relations between states as in concurrent game structures, we will enrich Kripke structures by giving internal structure to their states, i.e., we see states as $n$-tuples of local states.
This way of modelling imperfect information is inspired by Reif’s multi-player game structures (Peterson et al., 2001) and distributed systems (Halpern and Vardi, 1989), and we find it well suited to the application of automata techniques, as discussed in Section 3.3. The syntax of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is similar to that of $\textnormal{{QCTL}}^{*}$, except that we annotate second-order quantifiers by subsets $\textnormal{{o}}\subseteq[n]$. The idea is that quantifiers annotated by o can only “observe” the local states indexed by $i\in\textnormal{{o}}$. We define the tree-semantics of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$: this means that we interpret formulas on trees that are the unfoldings of Kripke structures (this will capture the fact that players in $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ have synchronous perfect recall). We then define the syntactic class of _hierarchical formulas_ and prove, using an automata-theoretic approach, that model checking this class of formulas is decidable. For the rest of the section we fix some natural number $n\in\mathbb{N}$ which parameterises the logic $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, and which is the number of components in states of the models.

### 3.1. $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ Syntax

The syntax of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is very similar to that of $\textnormal{{QCTL}}^{*}$: the only difference is that we annotate quantifiers by a set of indices that defines the “observation” of that quantifier. Concrete observations. A set $\textnormal{{o}}\subseteq[n]$ is called a _concrete observation_ (to distinguish it from observations $o$ in the definitions of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$). ###### Definition 3.1 ($\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ Syntax). The syntax of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is defined by the following grammar: $\displaystyle\varphi:=$ $\displaystyle\;p\mid\neg\varphi\mid\varphi\vee\varphi\mid{\bf E}\psi\mid\exists^{\textnormal{{o}}}p.\,\varphi$ $\displaystyle\psi:=$ $\displaystyle\;\varphi\mid\neg\psi\mid\psi\vee\psi\mid{\bf X}\psi\mid\psi{\bf U}\psi$ where $p\in\textnormal{AP}$ and $\textnormal{{o}}\subseteq[n]$. Formulas of type $\varphi$ are called _state formulas_, those of type $\psi$ are called _path formulas_, and $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ consists of all the state formulas defined by the grammar. We use the standard abbreviation ${\bf A}\psi:=\neg{\bf E}\neg\psi$. We also use $\exists p.\,\varphi$ as a shorthand for $\exists^{[n]}p.\,\varphi$, and we let $\forall p.\,\varphi:=\neg\exists p.\,\neg\varphi$. Given a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\varphi$, we define the set of _quantified propositions_ ${\textnormal{AP}_{\exists}}(\varphi)\subseteq\textnormal{AP}$ as the set of atomic propositions $p$ such that $\varphi$ has a subformula of the form $\exists^{\textnormal{{o}}}p.\,\varphi$. We also define the set of _free propositions_ $\textnormal{AP}_{f}(\varphi)\subseteq\textnormal{AP}$ as the set of atomic propositions that have an occurrence which is not under the scope of any quantifier of the form $\exists^{\textnormal{{o}}}p$. Observe that ${\textnormal{AP}_{\exists}}(\varphi)\cap\textnormal{AP}_{f}(\varphi)$ may not be empty, i.e., a proposition may appear both free and quantified in (different places of) a formula.
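Both sets are computed by a single traversal of the syntax tree. The sketch below is ours: it assumes an illustrative constructor `ExistsP` for $\exists^{\textnormal{{o}}}p.\,\varphi$, with the remaining constructors (and the generic child walk) as in the Section 2.2 sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExistsP:       # the second-order quantifier of Definition 3.1
    p: str
    obs: frozenset   # a concrete observation, a subset of [n]
    sub: object

def quantified_and_free(phi, bound=frozenset()):
    """Return the pair (AP_exists(phi), AP_f(phi)).

    `bound` collects the propositions quantified above the current
    subformula; an occurrence of p is free iff p is not bound there.
    A proposition may well end up in both returned sets."""
    if isinstance(phi, Atom):
        return set(), (set() if phi.p in bound else {phi.p})
    if isinstance(phi, ExistsP):
        q, f = quantified_and_free(phi.sub, bound | {phi.p})
        return q | {phi.p}, f
    q, f = set(), set()
    for child in ("sub", "left", "right"):  # walk over the other constructors
        sub = getattr(phi, child, None)
        if sub is not None:
            q2, f2 = quantified_and_free(sub, bound)
            q, f = q | q2, f | f2
    return q, f
```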
### 3.2. $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ semantics

Several semantics have been considered for $\textnormal{{QCTL}}^{*}$, the two most studied being the _structure semantics_ and the _tree semantics_ (see (Laroussinie and Markey, 2014) for more details). For the semantics of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ we adapt the tree semantics, and we explain the reasons for doing so in Section 3.3. As already mentioned, for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ we consider structures whose states are tuples of local states. We now define these structures and related notions.

###### Definition 3.2 (Compound Kripke structures).

A _compound Kripke structure_, or CKS, over AP is a tuple $\mathcal{S}=(S,R,\ell,s_{\iota})$ where

* • $S\subseteq\prod_{i\in[n]}L_{i}$ is a set of _states_, with $\\{L_{i}\\}_{i\in[n]}$ a family of $n$ disjoint finite sets of _local states_,
* • $R\subseteq S\times S$ is a left-total (i.e., for all $s\in S$, there exists $s^{\prime}$ such that $(s,s^{\prime})\in R$) _transition relation_,
* • $\ell:S\to 2^{\textnormal{AP}}$ is a _labelling function_, and
* • $s_{\iota}\in S$ is an _initial state_.

A _path_ in $\mathcal{S}$ is an infinite sequence of states $\lambda=s_{0}s_{1}\ldots$ such that for all $i\in\mathbb{N}$, $(s_{i},s_{i+1})\in R$. A _finite path_ is a finite non-empty prefix of a path. We may write $s\in\mathcal{S}$ for $s\in S$, and we define the _size_ $|\mathcal{S}|$ of a CKS $\mathcal{S}=(S,R,\ell,s_{\iota})$ as its number of states: $|\mathcal{S}|:=|S|$. Since we will interpret $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ on unfoldings of CKS, we now define infinite trees. Trees. In many works, trees are defined as prefix-closed sets of words with the empty word $\epsilon$ as root. Here trees represent unfoldings of Kripke structures, and we find it more convenient to see a node $u$ as a sequence of states and the root as the initial state. Let $X$ be a finite set of _directions_ (typically a set of states). An _$X$ -tree_ $\tau$ is a nonempty set of words $\tau\subseteq X^{+}$ such that:

* • there exists $r\in X$, called the _root_ of $\tau$, such that each $u\in\tau$ starts with $r$ ($r\preccurlyeq u$);
* • if $u\cdot x\in\tau$ and $u\cdot x\neq r$, then $u\in\tau$;
* • if $u\in\tau$ then there exists $x\in X$ such that $u\cdot x\in\tau$.

The elements of a tree $\tau$ are called _nodes_. If $u\cdot x\in\tau$, we say that $u\cdot x$ is a _child_ of $u$. The _depth_ of a node $u$ is $|u|$. An $X$-tree $\tau$ is _complete_ if for every $u\in\tau$ and $x\in X$, $u\cdot x\in\tau$. A _path_ in $\tau$ is an infinite sequence of nodes $\lambda=u_{0}u_{1}\ldots$ such that for all $i\in\mathbb{N}$, $u_{i+1}$ is a child of $u_{i}$, and $Paths(u)$ is the set of paths that start in node $u$. Labellings. An _AP -labelled $X$-tree_, or _$(\textnormal{AP},X)$ -tree_ for short, is a pair $t=(\tau,\ell)$, where $\tau$ is an $X$-tree called the _domain_ of $t$ and $\ell:\tau\rightarrow 2^{\textnormal{AP}}$ is a _labelling_, which maps each node to the set of propositions that hold there. For $p\in\textnormal{AP}$, a _$p$ -labelling_ for a tree is a mapping $\ell_{p}:\tau\to\\{0,1\\}$ that indicates in which nodes $p$ holds, and for a labelled tree $t=(\tau,\ell)$, the $p$-labelling of $t$ is the $p$-labelling $u\mapsto 1$ if $p\in\ell(u)$, 0 otherwise.
The composition of a labelled tree $t=(\tau,\ell)$ with a $p$-labelling $\ell_{p}$ for $\tau$ is defined as $t\otimes\ell_{p}:=(\tau,\ell^{\prime})$, where $\ell^{\prime}(u)=\ell(u)\cup\\{p\\}$ if $\ell_{p}(u)=1$, and $\ell(u)\setminus\\{p\\}$ otherwise. A $p$-labelling for a labelled tree $t=(\tau,\ell)$ is a $p$-labelling for its domain $\tau$. A _pointed labelled tree_ is a pair $(t,u)$ where $u$ is a node of $t$. If $u=w\cdot x$, the _subtree_ $t_{u}$ of $t=(\tau,\ell)$ is defined as $t_{u}:=(\tau_{u},\ell_{u})$ with $\tau_{u}=\\{x\cdot w^{\prime}\mid w\cdot x\cdot w^{\prime}\in\tau\\}$, and $\ell_{u}(x\cdot w^{\prime})=\ell(w\cdot x\cdot w^{\prime})$. A labelled tree is _regular_ if it has finitely many distinct subtrees. In the tree semantics of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ that we consider here, formulas are evaluated on tree unfoldings of CKS, which we now define. Tree unfoldings. Let $\mathcal{S}=(S,R,\ell,s_{\iota})$ be a compound Kripke structure over AP. The _tree-unfolding of $\mathcal{S}$_ is the $(\textnormal{AP},S)$-tree $t_{\mathcal{S}}:=(\tau,\ell^{\prime})$, where $\tau$ is the set of all finite paths that start in $s_{\iota}$, and for every $u\in\tau$, $\ell^{\prime}(u):=\ell(\mbox{last}(u))$. Note that a labelled tree is regular if and only if it is the unfolding of some finite Kripke structure. Narrowing. Let $X$ and $Y$ be two finite sets, and let $(x,y)\in X\times Y$. The _$X$ -narrowing_ of $(x,y)$ is ${(x,y)\\!\downarrow_{X}}:=x$. This definition extends naturally to words and trees over $X\times Y$ (point-wise). Given a family of (disjoint) sets of local states $\\{L_{i}\\}_{i\in[n]}$ and a subset $I\subseteq[n]$, we let $L_{I}:=\prod_{i\in I}L_{i}$ if $I\neq\emptyset$ and $L_{\emptyset}:=\\{\mathbf{0}\\}$, where $\mathbf{0}$ is a special symbol. For $I,J\subseteq[n]$ and $z\in L_{I}$, we also define ${z\\!\downarrow_{J}}:=z\\!\downarrow_{L_{I\cap J}}$, where $z$ is seen as a pair $z=(x,y)\in L_{I\cap J}\times L_{I\setminus J}$, i.e., we apply the above definition with $X=L_{I\cap J}$ and $Y=L_{I\setminus J}$. This is well defined because, the sets $L_{i}$ being disjoint, the ordering of local states in $z$ is indifferent. We also extend this definition to words and trees. In particular, for every $L_{I}$-tree $\tau$, $\tau\\!\downarrow_{\emptyset}$ is the only $L_{\emptyset}$-tree, $\mathbf{0}^{\omega}$. Quantification and uniformity. In $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, $\exists^{\textnormal{{o}}}p.\,\varphi$ holds in a tree $t$ if there is some o-uniform $p$-labelling of $t$ such that $t$ with this $p$-labelling satisfies $\varphi$. Intuitively, a $p$-labelling of a tree is o-uniform if every two nodes that are indistinguishable for observation o agree on $p$. ###### Definition 3.3 (o-indistinguishability and o-uniformity in $p$). Fix $\textnormal{{o}}\subseteq[n]$ and $I\subseteq[n]$. * • Two tuples $x,x^{\prime}\in L_{I}$ are _o -indistinguishable_, written $x\approx_{\textnormal{{o}}}x^{\prime}$, if $x\\!\downarrow_{\textnormal{{o}}}=x^{\prime}\\!\downarrow_{\textnormal{{o}}}$. * • Two words $u=u_{0}\ldots u_{i}$ and $u^{\prime}=u^{\prime}_{0}\ldots u^{\prime}_{j}$ over alphabet $L_{I}$ are _o -indistinguishable_, written $u\approx_{\textnormal{{o}}}u^{\prime}$, if $i=j$ and for all $k\in\\{0,\ldots,i\\}$ we have $u_{k}\approx_{\textnormal{{o}}}u^{\prime}_{k}$.
* • A $p$-labelling for a tree $\tau$ is _o-uniform_ if for all $u,u^{\prime}\in\tau$, $u\approx_{\textnormal{{o}}}u^{\prime}$ implies $\ell_{p}(u)=\ell_{p}(u^{\prime})$.

###### Definition 3.4 ($\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ semantics).

We define by induction the satisfaction relation $\models$ of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. Let $t=(\tau,\ell)$ be an AP-labelled $L_{I}$-tree, $u$ a node and $\lambda$ a path in $\tau$:

* • $t,u\models p$ if $p\in\ell(u)$
* • $t,u\models\neg\varphi$ if $t,u\not\models\varphi$
* • $t,u\models\varphi\vee\varphi^{\prime}$ if $t,u\models\varphi$ or $t,u\models\varphi^{\prime}$
* • $t,u\models{\bf E}\psi$ if there exists $\lambda\in Paths(u)$ such that $t,\lambda\models\psi$
* • $t,u\models\exists^{\textnormal{{o}}}p.\,\varphi$ if there exists an $\textnormal{{o}}$-uniform $p$-labelling $\ell_{p}$ for $t$ such that $t\otimes\ell_{p},u\models\varphi$
* • $t,\lambda\models\varphi$ if $t,\lambda_{0}\models\varphi$
* • $t,\lambda\models\neg\psi$ if $t,\lambda\not\models\psi$
* • $t,\lambda\models\psi\vee\psi^{\prime}$ if $t,\lambda\models\psi$ or $t,\lambda\models\psi^{\prime}$
* • $t,\lambda\models{\bf X}\psi$ if $t,\lambda_{\geq 1}\models\psi$
* • $t,\lambda\models\psi{\bf U}\psi^{\prime}$ if there exists $i\geq 0$ such that $t,\lambda_{\geq i}\models\psi^{\prime}$ and for all $j$ with $0\leq j<i$, $t,\lambda_{\geq j}\models\psi$

We write $t\models\varphi$ for $t,r\models\varphi$, where $r$ is the root of $t$. Given a CKS $\mathcal{S}$ and a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\varphi$, we also write $\mathcal{S}\models\varphi$ if $\mathcal{S},s_{\iota}\models\varphi$.

###### Example 3.5.

Consider the following CTL formula: $\mathbf{border}(p):={\bf A}{\bf F}p\wedge{\bf A}{\bf G}(p\rightarrow{\bf A}{\bf X}{\bf A}{\bf G}\neg p).$ This formula holds in a labelled tree if and only if each path contains exactly one node labelled with $p$. Now, consider the following $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula: $\mathbf{level}(p):=\exists^{\emptyset}p.\,\mathbf{border}(p).$ For a blind quantifier, two nodes of a tree are indistinguishable if and only if they have the same depth. Therefore, this formula holds on a tree iff the $p$’s label all and only the nodes at some fixed depth. This formula can thus be used to capture the equal-level predicate on trees. Actually, just as $\textnormal{{QCTL}}^{*}$ captures MSO, one can prove that $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ with tree semantics subsumes MSO with equal level (Elgot and Rabin, 1966; Läuchli and Savioz, 1987; Thomas, 1992). In Theorem 3.7 we make use of a similar observation to prove that model-checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is undecidable.

### 3.3. Discussion on the definition of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$

We now motivate in detail some aspects of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$.

Modelling of imperfect information.
We model imperfect information by means of local states (rather than equivalence relations) because this greatly facilitates the use of automata techniques. More precisely, in our decision procedure of Section 4 we use an operation on tree automata called _narrowing_, which was introduced in (Kupferman and Vardi, 1999) to deal with imperfect information in the context of distributed synthesis for temporal specifications. Given an automaton $\mathcal{A}$ that works on $X\times Y$-trees, where $X$ and $Y$ are two finite sets, and assuming that we want to model an operation performed on trees while observing only the $X$ component of each node, this narrowing operation allows one to build from $\mathcal{A}$ an automaton $\mathcal{A}^{\prime}$ that works on $X$-trees, such that $\mathcal{A}^{\prime}$ accepts an $X$-tree if and only if $\mathcal{A}$ accepts its widening to $X\times Y$ (intuitively, this widening is the $X\times Y$-tree in which each node is labelled as its projection on the original $X$-tree; see Section 4 for details).

With our definition of compound Kripke structures, their unfoldings are trees over the Cartesian product $L_{[n]}$. To model a quantification $\exists^{\textnormal{{o}}}p$ with observation $\textnormal{{o}}\subseteq[n]$, we can thus use the narrowing operation to forget about components $L_{i}$, for $i\in[n]\setminus\textnormal{{o}}$. We then use the classic projection of nondeterministic tree automata to perform existential quantification on atomic proposition $p$. Since the choice of the $p$-labelling is made directly on $L_{\textnormal{{o}}}$-trees, it is necessarily o-uniform.

Choice of the tree semantics. The two most studied semantics for $\textnormal{{QCTL}}^{*}$ are the _structure semantics_, in which formulas are evaluated directly on Kripke structures, and the _tree semantics_, in which Kripke structures are first unfolded into infinite trees. Tree semantics thus allows quantifiers to choose the value of a quantified atomic proposition in each _finite path_ of the model, while in structure semantics the choice is only made in each state. When $\textnormal{{QCTL}}^{*}$ is used to express existence of strategies, existential quantification on atomic propositions labels the structure with strategic choices; in this kind of application, structure semantics reflects so-called _positional_ or _memoryless_ strategies, while tree semantics captures _perfect-recall_ or _memoryful_ strategies. Since in this work we are interested in perfect-recall strategies, we only consider the tree semantics.

### 3.4. Model checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$

We now define the model-checking problem studied in the rest of this section.

###### Definition 3.6 (Model checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$).

The _model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$_ is the following decision problem: given an instance $(\mathcal{S},\Phi)$ where $\mathcal{S}$ is a CKS, and $\Phi$ is a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula, return ‘Yes’ if $\mathcal{S}\models\Phi$ and ‘No’ otherwise.

We now prove that the model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is undecidable. This comes as no surprise since, as we will show in Section 5, $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ can express the existence of distributed winning strategies in imperfect-information games.
However we propose a proof that shows the connection between $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ and MSO with equal-level predicate (Elgot and Rabin, 1966; Läuchli and Savioz, 1987; Thomas, 1992). This proof also has the benefit of showing that $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is undecidable already for formulas that involve only propositional quantifiers that observe either everything or nothing.

###### Theorem 3.7.

The model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is undecidable.

###### Proof.

Let $\textnormal{{MSO}}_{\textnormal{eq}}$ denote the extension of the logic MSO (without unary predicates) by a binary predicate symbol eq. $\textnormal{{MSO}}_{\textnormal{eq}}$ is interpreted on the full binary tree, and the semantics of $\text{eq}(x,y)$ is that $x$ and $y$ have the same depth in the tree. We show how to effectively translate $\textnormal{{MSO}}_{\textnormal{eq}}$ into $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, and our result follows since the $\textnormal{{MSO}}_{\textnormal{eq}}$-theory of the binary tree is undecidable (Läuchli and Savioz, 1987). The translation from $\textnormal{{MSO}}_{\textnormal{eq}}$ to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is obtained by extending that from MSO to QCTL (Laroussinie and Markey, 2014), using the formula $\mathbf{level}(\cdot)$ from Example 3.5 to help capture the equal-level predicate.

We define a translation $\widehat{\quad}$ from $\textnormal{{MSO}}_{\textnormal{eq}}$ to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ such that for every tree $t$ with root $r$, nodes $u_{1},\ldots,u_{i}\in t$ and sets of nodes $U_{1},\ldots,U_{j}\subseteq t$, and every $\textnormal{{MSO}}_{\textnormal{eq}}$ formula ${\varphi(x,x_{1},\ldots,x_{i},X_{1},\ldots,X_{j})}$, we have that

(1) $t,r,u_{1},\ldots,u_{i},U_{1},\ldots,U_{j}\models\varphi(x,x_{1},\ldots,x_{i},X_{1},\ldots,X_{j})\text{\quad if and only if \quad}\widehat{t},r\models\widehat{\varphi}$

where $\widehat{t}$ is obtained from $t$ by defining the labelling for fresh atomic propositions $p_{x_{k}}$, with $k\in[i]$, and $p_{X_{k}}$, with $k\in[j]$, as follows: $p_{x_{k}}\in\widehat{\ell}(u)$ if $u=u_{k}$, and $p_{X_{k}}\in\widehat{\ell}(u)$ if $u\in U_{k}$. The translation of MSO to $\textnormal{{QCTL}}^{*}$ from (Laroussinie and Markey, 2014) can be extended to one from $\textnormal{{MSO}}_{\textnormal{eq}}$ to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ by adding rules for the equal-level predicate.
Indeed, for $\varphi(x,x_{1},\ldots,x_{i},X_{1},\ldots,X_{j})\in\textnormal{{MSO}}_{\textnormal{eq}}$, we inductively define the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\widehat{\varphi}$ as follows, where $k\in[i]$:

$\begin{array}{rcl}\widehat{x=x_{k}}&:=&p_{x_{k}}\\\ \widehat{x_{k}=x_{l}}&:=&{\bf E}{\bf F}(p_{x_{k}}\wedge p_{x_{l}})\\\ \widehat{x\in X_{k}}&:=&p_{X_{k}}\\\ \widehat{x_{k}\in X_{l}}&:=&{\bf E}{\bf F}(p_{x_{k}}\wedge p_{X_{l}})\\\ \widehat{\neg\varphi^{\prime}}&:=&\neg\widehat{\varphi^{\prime}}\\\ \widehat{\varphi_{1}\vee\varphi_{2}}&:=&\widehat{\varphi_{1}}\vee\widehat{\varphi_{2}}\\\ \widehat{\exists x_{k}.\varphi^{\prime}}&:=&\exists^{[n]}p_{x_{k}}.\,\big{(}\mathrm{uniq}(p_{x_{k}})\wedge\widehat{\varphi^{\prime}}\big{)}\\\ \widehat{\exists X_{k}.\varphi^{\prime}}&:=&\exists^{[n]}p_{X_{k}}.\,\widehat{\varphi^{\prime}}\\\ \widehat{S(x,x_{k})}&:=&{\bf E}{\bf X}p_{x_{k}}\\\ \widehat{S(x_{k},x)}&:=&\perp\\\ \widehat{S(x_{k},x_{l})}&:=&{\bf E}{\bf F}(p_{x_{k}}\wedge{\bf E}{\bf X}p_{x_{l}})\end{array}$

where $\mathrm{uniq}(p):={\bf E}{\bf F}p\wedge\forall^{[n]}q.\;\left({\bf E}{\bf F}(p\wedge q)\rightarrow{\bf A}{\bf G}(p\rightarrow q)\right)$ holds in a tree iff it has exactly one node labelled with $p$. To understand the $x=x_{k}$ and $x\in X_{k}$ cases, consider that $x$ will be interpreted as the root. For the $S(x_{k},x)$ case, observe that $x$ has no incoming edge since it is interpreted as the root. Second-order quantification $\exists X_{k}$ is translated into quantification on atomic proposition $p_{X_{k}}$, and first-order quantification $\exists x_{k}$ is treated similarly, with the additional constraint that quantification is limited to $p_{x_{k}}$-labellings that set $p_{x_{k}}$ to true in one and only one node of the tree.

The rules for eq are as follows:

$\displaystyle\widehat{\text{eq}(x,x_{k})}:=p_{x_{k}}\qquad\qquad\widehat{\text{eq}(x_{k},x_{l})}:=\exists^{\emptyset}p.\,\mathbf{border}(p)\wedge{\bf A}{\bf G}\big{(}(p_{x_{k}}\rightarrow p)\wedge(p_{x_{l}}\rightarrow p)\big{)}$

To understand the first case, observe that since $x$ is interpreted as the root, $x_{k}$ is on the same level as $x$ if and only if it is also assigned the root. For the second case, recall from Example 3.5 that the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\exists^{\emptyset}p.\,\mathbf{border}(p)$ places one unique horizontal line of $p$’s in the tree, and thus requiring that $x_{k}$ and $x_{l}$ both lie on this line ensures that they are on the same level. The correctness of the translation follows from (1), which is proven by induction.

Now take an instance $(t,\varphi(x))$ of the model-checking problem for $\textnormal{{MSO}}_{\textnormal{eq}}$ on the full binary tree $t$. Let $\mathcal{S}$ be a CKS with two states $s_{0}$ and $s_{1}$ (local states are irrelevant here), whose transition relation is the complete relation, and with empty labelling function. Clearly, $t_{\mathcal{S}}=t$, and applying (1) we get: $t,s_{0}\models\varphi(x)\text{\quad iff\quad}\widehat{t},s_{0}\models\widehat{\varphi}.$ Observe that in the previous line, because there are no free variables besides $x$, which stands for the root, we have that $\widehat{t}=t=t_{\mathcal{S}}$, hence we have indeed produced an instance of the model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. ∎
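As a small worked illustration of the translation (our example, not part of the proof above), consider the $\textnormal{{MSO}}_{\textnormal{eq}}$ sentence $\exists x_{1}.\exists x_{2}.\,\text{eq}(x_{1},x_{2})$. Unravelling the rules yields

$\exists^{[n]}p_{x_{1}}.\,\Big{(}\mathrm{uniq}(p_{x_{1}})\wedge\exists^{[n]}p_{x_{2}}.\,\big{(}\mathrm{uniq}(p_{x_{2}})\wedge\exists^{\emptyset}p.\,\mathbf{border}(p)\wedge{\bf A}{\bf G}((p_{x_{1}}\rightarrow p)\wedge(p_{x_{2}}\rightarrow p))\big{)}\Big{)},$

which indeed holds in the full binary tree: place the two singleton labellings at any two nodes of equal depth, and let the blind quantifier draw the $p$-border at exactly that depth.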
## 4. A decidable fragment of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$: hierarchy on observations

The main result of this section is the identification of an important decidable fragment of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$.

###### Definition 4.1 (Hierarchical formulas).

A $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\varphi$ is _hierarchical_ if for all subformulas $\varphi_{1}=\exists^{\textnormal{{o}}_{1}}p_{1}.\,\varphi^{\prime}_{1}$ and $\varphi_{2}=\exists^{\textnormal{{o}}_{2}}p_{2}.\,\varphi^{\prime}_{2}$ of $\varphi$ where $\varphi_{2}$ is a subformula of $\varphi^{\prime}_{1}$, we have $\textnormal{{o}}_{1}\subseteq\textnormal{{o}}_{2}$.

In other words, a formula is hierarchical if innermore propositional quantifiers observe at least as much as outermore ones.

###### Example 4.2.

Formula $\exists^{\\{1,2\\}}p.\,\exists^{\\{1,2,4\\}}q.\,{\bf A}{\bf G}(p\vee q)$ is hierarchical because $\\{1,2\\}\subseteq\\{1,2,4\\}$. On the other hand, formula $\exists^{\\{1,2\\}}p.\,\big{(}\exists^{\\{1,2,4\\}}q.\,{\bf A}{\bf G}(p\vee q)\wedge\exists^{\\{3\\}}q^{\prime}.\,{\bf E}{\bf F}(p\wedge q^{\prime})\big{)}$ is not, because $\\{1,2\\}\not\subseteq\\{3\\}$. Note that neither is it the case that $\\{3\\}\subseteq\\{1,2\\}$: the observation powers of the quantifiers $\exists^{\\{1,2\\}}p.\,$ and $\exists^{\\{3\\}}q^{\prime}.\,$ are incomparable. Finally, formula $\forall^{\\{1,2,3\\}}p.\,\exists^{\\{1,2\\}}q.\,{\bf A}{\bf G}(p\vee q)$ is not hierarchical even though $\\{1,2\\}\subseteq\\{1,2,3\\}$, as the quantifier that observes best is _higher_ in the syntactic tree.

We let $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ be the set of hierarchical $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formulas.

###### Theorem 4.3.

Model checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ is non-elementary decidable.

Since our decision procedure for the hierarchical fragment of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is based on an automata-theoretic approach, we recall some definitions and results for alternating tree automata.

### 4.1. Alternating parity tree automata

We recall alternating parity tree automata. Because their semantics is defined via acceptance games, we start with basic definitions for two-player turn-based parity games, or simply parity games.

Parity games. A _parity game_ is a structure $\mathcal{G}=(V,E,v_{\iota},C)$, where $V=V_{E}\uplus V_{A}$ is a set of _positions_ partitioned between positions of Eve ($V_{E}$) and those of Adam ($V_{A}$), $E\subseteq V\times V$ is a set of _moves_, $v_{\iota}$ is an initial position and $C:V\to\mathbb{N}$ is a colouring function of finite codomain. In positions $V_{E}$, Eve chooses the next position, while Adam chooses in positions $V_{A}$. A play is an infinite sequence of positions $v_{0}v_{1}v_{2}\ldots$ such that $v_{0}=v_{\iota}$ and for all $i\geq 0$, $(v_{i},v_{i+1})\in E$ (written $v_{i}\to v_{i+1}$). We assume that for every $v\in V$ there exists $v^{\prime}\in V$ such that $v\to v^{\prime}$. A strategy for Eve is a partial function $V^{*}\rightharpoonup V$ that maps each finite prefix of a play ending in a position $v\in V_{E}$ to a next position $v^{\prime}$ such that $v\to v^{\prime}$. A play $v_{0}v_{1}v_{2}\ldots$ _follows_ a strategy $\sigma$ of Eve if for every $i\geq 0$ such that $v_{i}\in V_{E}$, $v_{i+1}=\sigma(v_{0}\ldots v_{i})$.
A strategy $\sigma$ is winning if every play that follows it satisfies the parity condition, i.e., the least colour seen infinitely often along the play is even.

Parity tree automata. Because it is sufficient for our needs and simplifies definitions, we assume that all input trees are complete trees. For a set $Z$, $\mathbb{B}^{+}(Z)$ is the set of formulas built from the elements of $Z$ as atomic propositions using the connectives $\vee$ and $\wedge$, and with $\top,\perp\in\mathbb{B}^{+}(Z)$. An _alternating tree automaton (ATA) on $(\textnormal{AP},X)$-trees_ is a structure $\mathcal{A}=(Q,\delta,q_{{\iota}},C)$ where $Q$ is a finite set of states, $q_{{\iota}}\in Q$ is an initial state, $\delta:Q\times 2^{\textnormal{AP}}\rightarrow\mathbb{B}^{+}(X\times Q)$ is a transition function, and $C:Q\to\mathbb{N}$ is a colouring function. To ease reading we shall write atoms in $\mathbb{B}^{+}(X\times Q)$ between brackets, such as $[x,q]$. A _nondeterministic tree automaton (NTA) on $(\textnormal{AP},X)$-trees_ is an ATA $\mathcal{A}=(Q,\delta,q_{{\iota}},C)$ such that for every $q\in Q$ and $a\in 2^{\textnormal{AP}}$, $\delta(q,a)$ is written in disjunctive normal form and for every direction $x\in X$ each disjunct contains exactly one element of $\\{x\\}\times Q$. An NTA is _deterministic_ if for each $q\in Q$ and $a\in 2^{\textnormal{AP}}$, $\delta(q,a)$ consists of a single disjunct.

Acceptance of a pointed labelled tree $(t,u_{\iota})$, where $t=(\tau,\ell)$, by an ATA $\mathcal{A}=(Q,\delta,q_{\iota},C)$ is defined via the parity game $\mathcal{G}(\mathcal{A},t,u_{\iota})=(V,E,v_{\iota},C^{\prime})$ where $V=\tau\times Q\times\mathbb{B}^{+}(X\times Q)$, position $(u,q,\alpha)$ belongs to Eve if $\alpha$ is of the form $\alpha_{1}\vee\alpha_{2}$ or $[x,q^{\prime}]$, and to Adam otherwise, $v_{{\iota}}=(u_{\iota},q_{\iota},\delta(q_{\iota},\ell(u_{\iota})))$, and $C^{\prime}(u,q,\alpha)=C(q)$. Moves in $\mathcal{G}(\mathcal{A},t,u_{\iota})$ are defined by the following rules:

$\begin{array}{l}(u,q,\alpha_{1}\;\mbox{$\dagger$}\;\alpha_{2})\rightarrow(u,q,\alpha_{i})\quad\mbox{where }\mbox{$\dagger$}\in\\{\vee,\wedge\\}\mbox{ and }i\in\\{1,2\\},\\\ (u,q,[x,q^{\prime}])\rightarrow(u\cdot x,q^{\prime},\delta(q^{\prime},\ell(u\cdot x))).\end{array}$

Positions of the form $(u,q,\top)$ and $(u,q,\perp)$ are sinks, winning for Eve and Adam respectively. A pointed labelled tree $(t,u)$ is _accepted_ by $\mathcal{A}$ if Eve has a winning strategy in $\mathcal{G}(\mathcal{A},t,u)$, and the _language_ of $\mathcal{A}$ is the set of pointed labelled trees accepted by $\mathcal{A}$, written $\mathcal{L}(\mathcal{A})$. We write $t\in\mathcal{L}(\mathcal{A})$ if $(t,r)\in\mathcal{L}(\mathcal{A})$, where $r$ is the root of $t$. Finally, the _size_ $|\mathcal{A}|$ of an ATA $\mathcal{A}$ is its number of states plus the sum of the sizes of all formulas appearing in the transition function.

Word automata. When the set of directions $X$ is a singleton, directions can be forgotten and infinite trees can be identified with infinite words. We thus call _parity word automaton_ a parity tree automaton on $(\textnormal{AP},X)$-trees where $X$ is a singleton.
In the case of a nondeterministic parity word automaton, transitions can be represented as usual as a mapping $\Delta:Q\times 2^{\textnormal{AP}}\to 2^{Q}$ which, in a state $q\in Q$, reading the label $a\in 2^{\textnormal{AP}}$ of the current position in the word, indicates a set of states $\Delta(q,a)$ from which Eve can choose the state to send to the next position of the word.

We recall four classic operations on tree automata.

Complementation. Given an ATA $\mathcal{A}=(Q,\delta,q_{{\iota}},C)$, we define its _dual_ $\overline{\mathcal{A}}=(Q,\overline{\delta},q_{{\iota}},\overline{C})$ where, for each $q\in Q$ and $a\in 2^{\textnormal{AP}}$, $\overline{\delta}(q,a)$ is the dual of $\delta(q,a)$, i.e., conjunctions become disjunctions and vice versa, and $\overline{C}(q):=C(q)+1$.

###### Theorem 4.4 (Complementation (Muller and Schupp, 1995)).

For every labelled tree $t$ and node $u$ in $t$, $(t,u)\in\mathcal{L}(\overline{\mathcal{A}})\mbox{ if, and only if, }(t,u)\notin\mathcal{L}(\mathcal{A}).$

Projection. The second construction is a projection operation, used by Rabin to deal with second-order monadic quantification:

###### Theorem 4.5 (Projection (Rabin, 1969)).

Given an NTA $\mathcal{N}$ on $(\textnormal{AP},X)$-trees and an atomic proposition $p\in\textnormal{AP}$, one can build in linear time an NTA $\mathcal{N}\\!\Downarrow_{p}$ on $(\textnormal{AP}\setminus\\{p\\},X)$-trees such that $(t,u)\in\mathcal{L}(\mathcal{N}\\!\Downarrow_{p})\mbox{\;\;\;iff\;\;\;}\mbox{ there exists a $p$-labelling $\ell_{p}$ for $t$ s.t. }(t\otimes\ell_{p},u)\in\mathcal{L}(\mathcal{N}).$

Intuitively, ${\mathcal{N}\\!\Downarrow_{p}}$ is automaton $\mathcal{N}$ with the only difference that when it reads the label of a node, it can choose to run as if $p$ were either true or false: if $\delta$ is the transition function of $\mathcal{N}$, that of ${\mathcal{N}\\!\Downarrow_{p}}$ is $\delta^{\prime}(q,a)=\delta(q,a\cup\\{p\\})\vee\delta(q,a\setminus\\{p\\})$, for any state $q$ and label $a\in 2^{\textnormal{AP}}$. Another way of seeing it is that $\mathcal{N}\\!\Downarrow_{p}$ guesses a $p$-labelling for the input tree, and simulates $\mathcal{N}$ on this modified input.

Simulation. To prevent $\mathcal{N}\\!\Downarrow_{p}$ from guessing different labels for the same node in different executions, it is crucial that $\mathcal{N}$ be nondeterministic, which is the reason why we need the following result:

###### Theorem 4.6 (Simulation (Muller and Schupp, 1995)).

Given an ATA $\mathcal{A}$, one can build in exponential time an NTA $\mathcal{N}$ such that $\mathcal{L}(\mathcal{N})=\mathcal{L}(\mathcal{A})$.

The last construction was introduced by Kupferman and Vardi to deal with imperfect-information aspects in distributed synthesis. To describe it we need to define a widening operation on trees which expands the directions in a tree.

Tree widening. We generalise the widening operation defined in (Kupferman and Vardi, 1999). In the following definitions we fix a CKS $\mathcal{S}=(S,R,\ell,s_{\iota})$, and for $I\subseteq[n]$ we let $S_{I}:=\\{s\\!\downarrow_{I}\,\mid s\in S\\}\subseteq L_{I}$ (recall that $L_{I}=\prod_{i\in I}L_{i}$). Let $J\subseteq I\subseteq[n]$.
For every $S_{J}$-tree $\tau$ rooted in $s_{J}$ and $s_{I}\in S_{I}$ such that $s_{I}\\!\downarrow_{J}=s_{J}$, we define the _$I$ -widening_ of $\tau$ as the $S_{I}$-tree $\tau\\!\uparrow^{I}_{s_{I}}:=\\{u\in s_{I}\cdot S_{I}^{*}\mid u\\!\downarrow_{J}\in\tau\\}.$ For an $(\textnormal{AP},S_{J})$-tree $t=(\tau,\ell)$ rooted in $s_{J}$ and $s_{I}\in S_{I}$ such that $s_{I}\\!\downarrow_{J}=s_{J}$, we let $t\\!\uparrow^{I}_{s_{I}}:=(\tau\\!\uparrow^{I}_{s_{I}},\ell^{\prime}),\mbox{ where }\ell^{\prime}(u):=\ell(u\\!\downarrow_{J}).$ When clear from the context we may omit the subscript $s_{I}$. This is the case in particular when referring to _pointed_ widenings of trees: $(t\\!\uparrow^{I},u)$ stands for $(t\\!\uparrow^{I}_{u_{0}},u)$.

Narrowing. We now state a result from (Kupferman and Vardi, 1999) in our slightly more general setting (the proof can be adapted straightforwardly). The rough idea of this narrowing operation on ATA is that, if one just observes $S_{J}$, uniform $p$-labellings on $S_{I}$-trees can be obtained by choosing the labellings directly on $S_{J}$-trees, and then lifting them to $S_{I}$.

###### Theorem 4.7 (Narrowing (Kupferman and Vardi, 1999)).

Given an ATA $\mathcal{A}$ on $S_{I}$-trees one can build in linear time an ATA ${\mathcal{A}\\!\downarrow_{J}}$ on $S_{J}$-trees such that for every pointed $(\textnormal{AP},S_{J})$-tree $(t,u)$ and every $u^{\prime}\in S_{I}^{+}$ such that $u^{\prime}\\!\downarrow_{J}=u$, $(t,u)\in\mathcal{L}(\mathcal{A}\\!\downarrow_{J})\mbox{ iff }(t\\!\uparrow^{I},u^{\prime})\in\mathcal{L}(\mathcal{A}).$

### 4.2. Translating $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ to ATA

In order to prove Theorem 4.3 we need some more notations and a technical lemma that contains the automata construction.

###### Definition 4.8.

For every $\varphi\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, we let $I_{\varphi}:=\bigcap_{\textnormal{{o}}\in\textnormal{Obs}(\varphi)}\textnormal{{o}}\subseteq[n],$ where $\textnormal{Obs}(\varphi)$ is the set of concrete observations that occur in $\varphi$, with the intersection over the empty set defined as $[n]$. For a CKS $\mathcal{S}$ with state set $S\subseteq\prod_{i\in[n]}L_{i}$ we also let $S_{\varphi}:=\\{s\\!\downarrow_{I_{\varphi}}\mid s\in S\\}$.

Elements of $S_{\varphi}$ will be the possible directions used by the automaton we build for $\varphi$. In other words, the automaton for $\varphi$ will work on $S_{\varphi}$-trees. The intuition is that the observations in $\varphi$ determine which components of the model’s states can be observed by the automaton.

Our construction, which transforms a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ formula $\varphi$ and a CKS $\mathcal{S}$ into an ATA, builds upon the classic construction from (Kupferman et al., 2000b), which builds ATA for $\textnormal{{CTL}}^{*}$ formulas. In addition, we use projection of automata to treat second-order quantification, and to deal with imperfect information we resort to automata narrowing. Moreover, we use tree automata in an original way that allows us to deal with non-observable atomic propositions, which in turn makes it possible to consider non-observable winning conditions in our decidable fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$.
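As a concrete reading of Definition 4.8, the following small Python sketch (ours; the AST is a deliberately simplified stand-in, with temporal operators omitted since they carry no observations) computes $I_{\varphi}$ by intersecting all observations occurring in a formula, with the empty intersection defaulting to $[n]$. The same traversal extends straightforwardly to checking the hierarchy condition of Definition 4.1.

```python
# Miniature AST for QCTL*_ii state formulas: Exists(obs, p, sub) stands for
# the quantifier "exists^obs p. sub".
from dataclasses import dataclass

@dataclass
class Prop:
    name: str

@dataclass
class Not:
    sub: object

@dataclass
class Or:
    left: object
    right: object

@dataclass
class Exists:
    obs: frozenset
    prop: str
    sub: object

N = frozenset({1, 2, 3, 4})  # the set [n], here with n = 4

def obs_of(phi):
    """Obs(phi): the observations occurring in phi."""
    if isinstance(phi, Prop):
        return []
    if isinstance(phi, Not):
        return obs_of(phi.sub)
    if isinstance(phi, Or):
        return obs_of(phi.left) + obs_of(phi.right)
    return [phi.obs] + obs_of(phi.sub)   # Exists node

def I(phi):
    """I_phi: intersection of Obs(phi), defined as [n] when Obs(phi) is empty."""
    result = N
    for o in obs_of(phi):
        result &= o
    return result

# Propositional skeleton of the hierarchical formula from Example 4.2:
# exists^{1,2} p. exists^{1,2,4} q. (p or q)
phi = Exists(frozenset({1, 2}), "p",
             Exists(frozenset({1, 2, 4}), "q", Or(Prop("p"), Prop("q"))))
print(sorted(I(phi)))  # -> [1, 2]
```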
The classical approach to model checking via tree automata is to build an automaton that accepts all tree models of the input formula, and check whether it accepts the unfolding of the model (Kupferman et al., 2000b). We instead encode the model in the automata, using the input tree only to guess labellings for quantified propositions.

Encoding the model in the automaton. Quantification on atomic propositions is classically performed by means of automata projection (see Theorem 4.5). But in order to obtain a labelling that is uniform with regards to the observation of the quantifier, we need to make use of the narrowing operation (see Theorem 4.7). Intuitively, to check that a formula $\exists^{\textnormal{{o}}}p.\,\varphi$ holds in a tree $t$, we would like to work on its narrowing $t^{\prime}:=t\\!\downarrow_{\textnormal{{o}}}$, guess a labelling for $p$ on this tree thanks to automata projection, thus obtaining a tree $t^{\prime}_{p}$, take its widening $t_{p}^{\prime\prime}:=t^{\prime}_{p}\\!\uparrow^{[n]}$, obtaining a tree with an o-uniform labelling for $p$, and then check that $\varphi$ holds on $t_{p}^{\prime\prime}$. The problem is that unless $t=(\tau,\ell)$ is o-uniform in every atomic proposition in AP, there is no way to define the labelling of $\tau\\!\downarrow_{\textnormal{{o}}}$ without losing information. This implies that, unless we restrict to models where all atomic propositions are observable for all observations o, we cannot pass the model as input to our automata, which will work on narrowings of trees.

Therefore, to model check a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\varphi$ on a CKS $\mathcal{S}$, each state of the automaton that we build for $\varphi$ will contain a state of $\mathcal{S}$. The automaton can thus guess paths in $\mathcal{S}$, and evaluate free occurrences of atomic propositions in $\mathcal{S}$ without reading the input tree. The input tree no longer represents the model, but we use it to carry labellings for quantified atomic propositions in ${\textnormal{AP}_{\exists}}(\varphi)$: we provide the automaton with an input tree whose labelling is initially empty, and the automaton, through successive narrowing and projection operations, decorates it with uniform labellings for quantified atomic propositions.

We remark that this technique allows one to go beyond Coordination Logic (Finkbeiner and Schewe, 2010): by separating quantified atomic propositions (which need to be uniform and are carried by the input tree) from free atomic propositions (which state facts about the model and are encoded in the automaton), we manage to remove the restriction present in CL that requires all facts about the model to be known to every strategy (see Proposition 6.3 in Section 6.2). To do this we assume without loss of generality that propositions that are quantified in $\varphi$ do not appear free in $\varphi$, i.e., ${\textnormal{AP}_{\exists}}(\varphi)\cap\textnormal{AP}_{f}(\varphi)=\emptyset$. Finally, given a formula $\varphi$, a CKS $\mathcal{S}$ and a state $s\in\mathcal{S}$, the truth value of $\varphi$ in $(\mathcal{S},s)$ does not depend on the labelling of $\mathcal{S}$ for atoms in ${\textnormal{AP}_{\exists}}(\varphi)$, which can thus be forgotten.
Thus, from now on we will assume that an instance $(\mathcal{S},\Phi)$ of the model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is such that ${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\Phi)=\emptyset$ and $\mathcal{S}$ is a CKS over $\textnormal{AP}_{f}(\Phi)$.

Merging the decorated input tree and the model. To state the correctness of our construction, we will need to merge the labels for quantified propositions, carried by the input tree, with those for free propositions, carried by CKS $\mathcal{S}$. Because, through successive widenings, the input tree (represented by $t$ in the definition below) will necessarily be a complete tree, its domain will always contain the domain of the unfolding of $\mathcal{S}$ (represented by $t^{\prime}$ below), hence the following definition.

###### Definition 4.9 (Merge).

Let $t=(\tau,\ell)$ be a complete $(\textnormal{AP},X)$-tree and $t^{\prime}=(\tau^{\prime},\ell^{\prime})$ an $(\textnormal{AP}\,^{\prime},X)$-tree with same root as $t$, where $\textnormal{AP}\cap\textnormal{AP}\,^{\prime}=\emptyset$. We define the _merge_ of $t$ and $t^{\prime}$ as the $(\textnormal{AP}\cup\textnormal{AP}\,^{\prime},X)$-tree $t\merge t^{\prime}:=(\tau\cap\tau^{\prime},\ell^{\prime\prime}),$ where $\ell^{\prime\prime}(u)=\ell(u)\cup\ell^{\prime}(u)$ (note that $\tau\cap\tau^{\prime}=\tau^{\prime}$ since $\tau$ is complete).

We now describe our automata construction. Let $(\mathcal{S},\Phi)$ be an instance of the model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$, where $\mathcal{S}=(S,R,\ell_{\mathcal{S}},s_{\iota})$.

###### Lemma 4.10 (Translation).

For every subformula $\varphi$ of $\Phi$ and state $s$ of $\mathcal{S}$, one can build an ATA $\mathcal{A}_{s}^{\varphi}$ on $({\textnormal{AP}_{\exists}}(\Phi),S_{\varphi})$-trees such that for every $({\textnormal{AP}_{\exists}}(\Phi),S_{\varphi})$-tree $t$ rooted in $s_{\iota}\\!\downarrow_{I_{\varphi}}$ and every $u\in t_{\mathcal{S}}$ ending in $s$, it holds that $(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})\mbox{\;\;\;iff\;\;\;}t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi.$

###### Proof.

Let ${\textnormal{AP}_{\exists}}={\textnormal{AP}_{\exists}}(\Phi)$ and $\textnormal{AP}_{f}=\textnormal{AP}_{f}(\Phi)$, and recall that $\mathcal{S}$ is labelled over $\textnormal{AP}_{f}$. For each state $s\in S$ and each subformula $\varphi$ of $\Phi$ (note that all subformulas of $\Phi$ are also hierarchical), we define by induction on $\varphi$ the ATA $\mathcal{A}_{s}^{\varphi}$ on $({\textnormal{AP}_{\exists}},S_{\varphi})$-trees.

$\bm{\varphi=p:}$ First, by Definition 4.8, $S_{\varphi}=S_{[n]}=S$. We let $\mathcal{A}_{s}^{p}$ be the ATA over $S$-trees with one unique state $q_{\iota}$, with transition function defined as follows:

$\delta(q_{\iota},a)=\begin{cases}\top&\mbox{if }p\in\textnormal{AP}_{f}\mbox{ and }p\in\ell_{\mathcal{S}}(s),\mbox{ or }p\in{\textnormal{AP}_{\exists}}\mbox{ and }p\in a\\\ \perp&\mbox{if }p\in\textnormal{AP}_{f}\mbox{ and }p\notin\ell_{\mathcal{S}}(s),\mbox{ or }p\in{\textnormal{AP}_{\exists}}\mbox{ and }p\notin a\end{cases}$

$\bm{\varphi=\neg\varphi^{\prime}:}$ We let $\mathcal{A}_{s}^{\varphi}:=\overline{\mathcal{A}_{s}^{\varphi^{\prime}}}$.
$\bm{\varphi=\varphi_{1}\vee\varphi_{2}:}$ Because $I_{\varphi}=I_{\varphi_{1}}\cap I_{\varphi_{2}}$, and each $\mathcal{A}_{s}^{\varphi_{i}}$ for $i\in\\{1,2\\}$ works on $S_{\varphi_{i}}$-trees, we first narrow them so that they work on $S_{\varphi}$-trees: for $i\in\\{1,2\\}$, we let $\mathcal{A}_{i}:={\mathcal{A}_{s}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}}=(Q^{i},\delta^{i},q_{\iota}^{i},C^{i})$. Letting $q_{\iota}$ be a fresh initial state we define $\mathcal{A}_{s}^{\varphi}:=(\\{q_{\iota}\\}\cup Q^{1}\cup Q^{2},\delta,q_{\iota},C)$, where $\delta$ and $C$ agree with $\delta^{i}$ and $C^{i}$, respectively, on states from $Q^{i}$, and $\delta(q_{\iota},a)=\delta^{1}(q_{\iota}^{1},a)\vee\delta^{2}(q_{\iota}^{2},a)$. The colour of $q_{\iota}$ does not matter.

$\bm{\varphi={\bf E}\psi:}$ Let $\max(\psi)=\\{\varphi_{1},\ldots,\varphi_{k}\\}$ be the set of maximal state subformulas of $\psi$. In a first step we see these maximal state subformulas as atomic propositions, we see $\psi$ as an LTL formula over $\max(\psi)$, and we build a nondeterministic parity word automaton $\mathcal{W}^{\psi}=(Q^{\psi},\Delta^{\psi},q^{\psi}_{\iota},C^{\psi})$ over alphabet $2^{\max(\psi)}$ that accepts exactly the models of $\psi$ (and uses two colours) (Vardi and Wolper, 1994). We define the ATA $\mathcal{A}$ that, given as input a $(\max(\psi),S_{\varphi})$-tree $t$, nondeterministically guesses a path $\lambda$ in $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$, or equivalently a path in $\mathcal{S}$ starting from $s$, and simulates $\mathcal{W}^{\psi}$ on it, assuming that the labels it reads while following $\lambda\\!\downarrow_{I_{\varphi}}$ in its input $t$ correctly represent the truth value of formulas in $\max(\psi)$ along $\lambda$. Recall that $\mathcal{S}=(S,R,\ell_{\mathcal{S}},s_{\iota})$; we define $\mathcal{A}:=(Q,\delta,q_{{\iota}},C)$, where

* • $Q=Q^{\psi}\times S$,
* • $q_{{\iota}}=(q^{\psi}_{{\iota}},s)$,
* • for each $(q^{\psi},s^{\prime})\in Q$, $C(q^{\psi},s^{\prime})=C^{\psi}(q^{\psi})$, and
* • for each $(q^{\psi},s^{\prime})\in Q$ and $a\in 2^{\max(\psi)}$, $\delta((q^{\psi},s^{\prime}),a)=\bigvee_{q^{\prime}\in\Delta^{\psi}(q^{\psi},a)}\bigvee_{s^{\prime\prime}\in R(s^{\prime})}[s^{\prime\prime}\\!\downarrow_{I_{\varphi}},\left(q^{\prime},s^{\prime\prime}\right)].$

The intuition is that $\mathcal{A}$ reads the current label in $2^{\max(\psi)}$, chooses nondeterministically a transition in $\mathcal{W}^{\psi}$, chooses a next state $s^{\prime\prime}$ in $S$ and proceeds in the corresponding direction $s^{\prime\prime}\\!\downarrow_{I_{\varphi}}\in S_{\varphi}$.

Now from $\mathcal{A}$ we build the automaton $\mathcal{A}_{s}^{\varphi}$ over $S_{\varphi}$-trees labelled with “real” atomic propositions in ${\textnormal{AP}_{\exists}}$. Intuitively, in each node it visits, $\mathcal{A}_{s}^{\varphi}$ guesses what should be its labelling over $\max(\psi)$, it simulates $\mathcal{A}$ accordingly, and checks that the guess it made is correct. If, after having guessed a finite path $u\in t_{\mathcal{S}}$ ending in state $s^{\prime}$, $\mathcal{A}_{s}^{\varphi}$ guesses that $\varphi_{i}$ holds, it checks this guess by starting a copy of automaton $\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$ from node $v=u\\!\downarrow_{I_{\varphi}}$ in its input $t$. Formally, for each $s^{\prime}\in\mathcal{S}$ and each $\varphi_{i}\in\max(\psi)$ we first build $\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$, which works on $S_{\varphi_{i}}$-trees.
Observe that $I_{\varphi}=\cap_{i=1}^{k}I_{\varphi_{i}}$, so that we need to narrow down these automata (in the conference version of this work (Berthon et al., 2017) we mistakenly wrote that $I_{\varphi}=I_{\varphi_{i}}$, which is not the case in general; as a consequence we do need to narrow down the automata here, unlike what was written there): we let $\mathcal{A}^{i}_{s^{\prime}}:=\mathcal{A}_{s^{\prime}}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}=(Q^{i}_{s^{\prime}},\delta^{i}_{s^{\prime}},q^{i}_{s^{\prime}},C^{i}_{s^{\prime}})$. We also let $\overline{\mathcal{A}^{i}_{s^{\prime}}}=(\overline{Q^{i}_{s^{\prime}}},\overline{\delta^{i}_{s^{\prime}}},\overline{q^{i}_{s^{\prime}}},\overline{C^{i}_{s^{\prime}}})$ be the dualisation of $\mathcal{A}^{i}_{s^{\prime}}$, and we assume without loss of generality that all the state sets are pairwise disjoint.

We define the ATA $\mathcal{A}_{s}^{\varphi}=(Q\cup\bigcup_{i,s^{\prime}}Q^{i}_{s^{\prime}}\cup\overline{Q^{i}_{s^{\prime}}},\delta^{\prime},q_{{\iota}},C^{\prime}),$ where the colours of states are left as they were in their original automaton, and $\delta^{\prime}$ is defined as follows. For states in $Q^{i}_{s^{\prime}}$ (resp. $\overline{Q^{i}_{s^{\prime}}}$), $\delta^{\prime}$ agrees with $\delta^{i}_{s^{\prime}}$ (resp. $\overline{\delta^{i}_{s^{\prime}}}$), and for $(q^{\psi},s^{\prime})\in Q$ and $a\in 2^{{\textnormal{AP}_{\exists}}}$ we let $\delta^{\prime}((q^{\psi},s^{\prime}),a)$ be the disjunction over ${a^{\prime}\in 2^{\max(\psi)}}$ of

(2) $\displaystyle\Bigg{(}\delta\left((q^{\psi},s^{\prime}),a^{\prime}\right)\wedge\bigwedge_{\varphi_{i}\in a^{\prime}}\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)\;\wedge\bigwedge_{\varphi_{i}\notin a^{\prime}}\overline{\delta^{i}_{s^{\prime}}}(\overline{q^{i}_{s^{\prime}}},a)\Bigg{)}.$

Note that in general it is not possible to define a $\max(\psi)$-labelling of $t$ that faithfully represents the truth values of formulas in $\max(\psi)$ for all nodes in $t_{\mathcal{S}}$, because a node in $t$ may correspond to different nodes in $t_{\mathcal{S}}$ that have the same projection on $S_{\varphi}$ but satisfy different formulas of $\max(\psi)$. However this is not a problem because different copies of $\mathcal{A}_{s}^{\varphi}$ that visit the same node can guess different labellings, depending on the actual state of $\mathcal{S}$ (which is part of the state of $\mathcal{A}_{s}^{\varphi}$).

$\bm{\varphi=\exists^{\textnormal{{o}}}p.\,\varphi^{\prime}:}$ We build automaton $\mathcal{A}_{s}^{\varphi^{\prime}}$ that works on $S_{\varphi^{\prime}}$-trees; because $\varphi$ is hierarchical, we have that $\textnormal{{o}}\subseteq I_{\varphi^{\prime}}$ and we can narrow down $\mathcal{A}_{s}^{\varphi^{\prime}}$ to work on $S_{\textnormal{{o}}}$-trees and obtain $\mathcal{A}_{1}:={\mathcal{A}_{s}^{\varphi^{\prime}}\\!\downarrow_{\textnormal{{o}}}}$. By Theorem 4.6 we can nondeterminise it to get $\mathcal{A}_{2}$, which by Theorem 4.5 we can project with respect to $p$, finally obtaining $\mathcal{A}_{s}^{\varphi}:=\mathcal{A}_{2}\\!\Downarrow_{p}$.

Correctness. We now prove by induction on $\varphi$ that the construction is correct. In each case, we let $t=(\tau,\ell)$ be a complete $({\textnormal{AP}_{\exists}},S_{\varphi})$-tree rooted in $s_{\iota}\\!\downarrow_{I_{\varphi}}$.

$\bm{\varphi=p:}$ First, note that $I_{p}=[n]$, so that $t$ is rooted in $s_{\iota}\\!\downarrow_{I_{\varphi}}=s_{\iota}$, and $u\\!\downarrow_{I_{\varphi}}=u$. Also recall that $u$ ends in $s$.
Let us consider first the case where $p\in\textnormal{AP}_{f}$. By definition of $\mathcal{A}_{s}^{p}$, we have that $(t,u)\in\mathcal{L}(\mathcal{A}_{s}^{p})$ if and only if $p\in\ell_{\mathcal{S}}(s)$. We also have $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models p$ if and only if $p\in\ell^{\prime}(u)$, where $\ell^{\prime}$ is the labelling of tree $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$. By definition of unfolding and merge, we have that $\ell^{\prime}(u)=\ell_{\mathcal{S}}(s)$, which concludes this direction. Now if $p\in{\textnormal{AP}_{\exists}}$: by definition of $\mathcal{A}_{s}^{p}$, we have $(t,u)\in\mathcal{L}(\mathcal{A}_{s}^{p})$ if and only if $p\in\ell(u)$; also, by definition of the merge and unfolding, we have that $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models p$ if and only if $p\in\ell(u)$, and we are done.

$\bm{\varphi=\neg\varphi^{\prime}:}$ Correctness follows from the induction hypothesis and Theorem 4.4.

$\bm{\varphi=\varphi_{1}\vee\varphi_{2}:}$ We have $\mathcal{A}_{i}=\mathcal{A}_{s}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}$, so by Theorem 4.7 we have $(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{i})$ if and only if $(t\\!\uparrow^{I_{\varphi_{i}}},u\\!\downarrow_{I_{\varphi_{i}}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi_{i}})$, which by induction hypothesis holds if and only if $(t\\!\uparrow^{I_{\varphi_{i}}})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi_{i}$, i.e., $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi_{i}$. We conclude by observing that $\mathcal{L}(\mathcal{A}_{s}^{\varphi})=\mathcal{L}(\mathcal{A}_{1})\cup\mathcal{L}(\mathcal{A}_{2})$.

$\bm{\varphi={\bf E}\psi:}$ Suppose that $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models{\bf E}\psi$. There exists an infinite path $\lambda$ in $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$ starting at $u$ such that $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda\models\psi$. Again, let $\max(\psi)$ be the set of maximal state subformulas of $\psi$, and let $w$ be the infinite word over $2^{\max(\psi)}$ that agrees with $\lambda$ on the state formulas in $\max(\psi)$, i.e., for each node $\lambda_{k}$ of $\lambda$ and formula $\varphi_{i}\in\max(\psi)$, it holds that $\varphi_{i}\in w_{k}$ if and only if $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda_{k}\models\varphi_{i}$.

To show that $(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})$ we show that Eve can win the acceptance game $\mathcal{G}(\mathcal{A}_{s}^{\varphi},t,u\\!\downarrow_{I_{\varphi}})$. In this game, Eve can guess the path $\lambda$ while the automaton follows $\lambda\\!\downarrow_{I_{\varphi}}$ in its input $t$, and she can also guess the corresponding word $w$ on $2^{\max(\psi)}$. By construction of $\mathcal{W}^{\psi}$, Eve has a winning strategy $\sigma_{\psi}$ in the acceptance game of $\mathcal{W}^{\psi}$ on $w$. From $\lambda$, $w$ and $\sigma_{\psi}$ we can easily define a strategy for Eve in $\mathcal{G}(\mathcal{A}_{s}^{\varphi},t,u\\!\downarrow_{I_{\varphi}})$ on all positions that can be reached while Adam does not choose to challenge her on a guess she made for the truth value of some maximal state subformula, and on such plays this strategy is winning because $\sigma_{\psi}$ is winning.

Now suppose Adam challenges her on one of these guesses. Let $\lambda_{k}\in t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$ be a node along $\lambda$, let $s^{\prime}$ be its last direction and let $\lambda_{k}^{\prime}=\lambda_{k}\\!\downarrow_{I_{\varphi}}\in t$.
Assume that in node $\lambda^{\prime}_{k}$ of the input tree, in a state $(q^{\psi},s^{\prime})\in Q$, Adam challenges Eve on some $\varphi_{i}\in\max(\psi)$ that she assumes to be true in $\lambda^{\prime}_{k}$, i.e., such that $\varphi_{i}\in w_{k}$. Formally, in the evaluation game this means that Adam chooses the conjunct $\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)$ in transition formula (2), where $a=\ell(\lambda^{\prime}_{k})$, thus moving to position $(\lambda^{\prime}_{k},(q^{\psi},s^{\prime}),\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$. We want to show that Eve wins from this position. To do so we first show that $(t,\lambda^{\prime}_{k})\in\mathcal{L}(\mathcal{A}^{i}_{s^{\prime}})$. First, since $\mathcal{A}^{i}_{s^{\prime}}=\mathcal{A}_{s^{\prime}}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}$, by Theorem 4.7, $(t,\lambda^{\prime}_{k})\in\mathcal{L}(\mathcal{A}^{i}_{s^{\prime}})$ if and only if $(t\\!\uparrow^{I_{\varphi_{i}}},\lambda_{k}\\!\downarrow_{I_{\varphi_{i}}})\in\mathcal{L}(\mathcal{A}_{s^{\prime}}^{\varphi_{i}})$. Next, by applying the induction hypothesis we get that $(t\\!\uparrow^{I_{\varphi_{i}}},\lambda_{k}\\!\downarrow_{I_{\varphi_{i}}})\in\mathcal{L}(\mathcal{A}_{s^{\prime}}^{\varphi_{i}})$ if and only if $t\\!\uparrow^{I_{\varphi_{i}}}\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda_{k}\models\varphi_{i}$, i.e., $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda_{k}\models\varphi_{i}$. The latter holds because $\varphi_{i}\in w_{k}$, and by assumption $w_{k}$ agrees with $\lambda_{k}$ on $\varphi_{i}$. Thus $(t,\lambda^{\prime}_{k})\in\mathcal{L}(\mathcal{A}^{i}_{s^{\prime}})$.

This means that Eve has a winning strategy from the initial position $(\lambda^{\prime}_{k},q^{i}_{s^{\prime}},\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$ of the acceptance game of $\mathcal{A}^{i}_{s^{\prime}}$ on $(t,\lambda^{\prime}_{k})$. Since $(\lambda^{\prime}_{k},q^{i}_{s^{\prime}},\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$ and $(\lambda^{\prime}_{k},(q^{\psi},s^{\prime}),\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$ contain the same node $\lambda^{\prime}_{k}$ and transition formula $\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)$, the subgames that start in these positions are isomorphic and a winning strategy in one of these positions induces a winning strategy in the other; therefore Eve wins Adam’s challenge (recall that positional strategies are sufficient in parity games (Zielonka, 1998)). With a similar argument, we get that Eve also wins the challenge when Adam challenges her on some $\varphi_{i}\in\max(\psi)$ assumed not to be true in node $\lambda_{k}$. Finally, Eve wins the acceptance game of $\mathcal{A}_{s}^{\varphi}$ on $(t,u\\!\downarrow_{I_{\varphi}})$, and thus $(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})$.

For the other direction, assume that $(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})$, i.e., Eve wins the evaluation game of $\mathcal{A}_{s}^{\varphi}$ on $(t,u\\!\downarrow_{I_{\varphi}})$. A winning strategy for Eve describes a path $\lambda$ in $t_{\mathcal{S}}$ from $s$, which is also a path in $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$ from $u$. This winning strategy also defines an infinite word $w$ over $2^{\max(\psi)}$ such that $w$ agrees with $\lambda$ on the formulas in $\max(\psi)$, and it also describes a winning strategy for Eve in the acceptance game of $\mathcal{W}^{\psi}$ on $w$.
Hence $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda\models\psi$, and $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi$.

$\bm{\varphi=\exists^{\textnormal{{o}}}p.\,\varphi^{\prime}:}$ First, by definition we have $I_{\varphi}=\textnormal{{o}}\cap I_{\varphi^{\prime}}$. Because $\varphi$ is hierarchical, $\textnormal{{o}}\subseteq\textnormal{{o}}^{\prime}$ for every $\textnormal{{o}}^{\prime}$ that occurs in $\varphi^{\prime}$, and thus $\textnormal{{o}}\subseteq I_{\varphi^{\prime}}$. It follows that $I_{\varphi}=\textnormal{{o}}$. Next, by Theorem 4.5 we have that

(3) $(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})\mbox{\;\;\;iff\;\;\;}\exists\,\ell_{p}\mbox{ a $p$-labelling for $t$ such that }(t\otimes\ell_{p},u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{2}).$

By Theorem 4.6, $\mathcal{L}(\mathcal{A}_{2})=\mathcal{L}(\mathcal{A}_{1})$, and since $\mathcal{A}_{1}=\mathcal{A}_{s}^{\varphi^{\prime}}\\!\downarrow_{\textnormal{{o}}}=\mathcal{A}_{s}^{\varphi^{\prime}}\\!\downarrow_{I_{\varphi}}$ we get by Theorem 4.7 that

(4) $(t\otimes\ell_{p},u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{2})\mbox{\;\;\;iff\;\;\;}((t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}},u\\!\downarrow_{I_{\varphi^{\prime}}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi^{\prime}}).$

By induction hypothesis,

(5) $((t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}},u\\!\downarrow_{I_{\varphi^{\prime}}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi^{\prime}})\mbox{\;\;\;iff\;\;\;}(t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}}\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}.$

Now, by points (3), (4) and (5) and the fact that $(t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}}\\!\uparrow^{[n]}=(t\otimes\ell_{p})\\!\uparrow^{[n]}$, we get that

(6) $(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})\mbox{\;\;\;iff\;\;\;}\exists\,\ell_{p}\mbox{ a $p$-labelling for $t$ such that }(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}.$

We now prove the following equation which, together with point (6), concludes the proof:

(7) $\begin{array}[]{c}\exists\,\ell_{p}\mbox{ a $p$-labelling for $t$ such that }(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}\\\ \mbox{\;\;\;iff\;\;\;}\\\ t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\exists^{\textnormal{{o}}}p.\,\varphi^{\prime}\end{array}$

Assume that there exists a $p$-labelling $\ell_{p}$ for $t$ such that $(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}$. Let $\ell_{p}^{\prime}$ be the $p$-labelling of $(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$. By definition of the merge, $\ell_{p}^{\prime}$ is equal to the $p$-labelling of $(t\otimes\ell_{p})\\!\uparrow^{[n]}$, which by definition of the widening is $I_{\varphi}$-uniform, i.e., it is o-uniform. In addition, it is clear that $(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=(t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}})\otimes\ell_{p}^{\prime}$, which concludes this direction.

For the other direction, assume that $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\exists^{\textnormal{{o}}}p.\,\varphi^{\prime}$: there exists an o-uniform $p$-labelling $\ell_{p}^{\prime}$ for $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$ such that $(t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}})\otimes\ell_{p}^{\prime},u\models\varphi^{\prime}$.
We define a $p$-labelling $\ell_{p}$ for $t$ such that $(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}$. First, let us write $t^{\prime}=t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=(\tau^{\prime},\ell^{\prime})$. For each node $u$ of $t$, let

$\ell_{p}(u)=\begin{cases}\ell_{p}^{\prime}(u^{\prime})&\mbox{if there exists }u^{\prime}\in\tau^{\prime}\mbox{ such that }u^{\prime}\\!\downarrow_{\textnormal{{o}}}=u,\\\ 0&\mbox{otherwise.}\end{cases}$

This is well defined because $\ell_{p}^{\prime}$ is o-uniform in $p$, so that if two nodes $u^{\prime},v^{\prime}$ project on $u$, we have $u^{\prime}\approx_{\textnormal{{o}}}v^{\prime}$ and thus $\ell_{p}^{\prime}(u^{\prime})=\ell_{p}^{\prime}(v^{\prime})$. In case there is no $u^{\prime}\in\tau^{\prime}$ such that $u^{\prime}\\!\downarrow_{I_{\varphi}}=u$, the value of $\ell_{p}(u)$ has no impact on $(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$. Finally, $(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=(t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}})\otimes\ell_{p}^{\prime}$, hence the result. ∎

### 4.3. Proof of Theorem 4.3

We now prove Theorem 4.3. Let $\mathcal{S}$ be a CKS with initial state $s_{\iota}$, and let $\Phi\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$. By Lemma 4.10 one can build an ATA $\mathcal{A}_{s_{\iota}}^{\Phi}$ such that for every labelled $S_{\Phi}$-tree $t$ rooted in $s_{\iota}\\!\downarrow_{I_{\Phi}}$, and every node $u\in t_{\mathcal{S}}$, $(t,u\\!\downarrow_{I_{\Phi}})\in\mathcal{L}(\mathcal{A}_{s_{\iota}}^{\Phi})$ if, and only if, $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\Phi$. Let $\tau$ be the full $S_{\Phi}$-tree rooted in $s_{\iota}\\!\downarrow_{I_{\Phi}}$, and let $t=(\tau,\ell_{\emptyset})$, where $\ell_{\emptyset}$ is the empty labelling. Clearly, $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=t_{\mathcal{S}}$, and because $t$ is rooted in $s_{\iota}\\!\downarrow_{I_{\Phi}}$, we get that $t\in\mathcal{L}(\mathcal{A}_{s_{\iota}}^{\Phi})$ if, and only if, $t_{\mathcal{S}}\models\Phi$, i.e., $\mathcal{S}\models\Phi$. It remains to check whether tree $t$, which is regular, is accepted by $\mathcal{A}_{s_{\iota}}^{\Phi}$. This can be done by solving a parity game built from the product of $\mathcal{A}_{s_{\iota}}^{\Phi}$ with a finite Kripke structure representing $t$ (Löding, 2011).

### 4.4. Complexity

To state a precise upper bound on the complexity of our procedure, we first introduce a syntactic notion of _simulation depth_ for formulas of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. While alternation depth (see, e.g., (Mogavero et al., 2014)) simply counts the number of alternations between existential and universal quantifications, simulation depth reflects the automata operations required to treat a formula, and counts the maximum number of nested simulations of alternating tree automata that need to be performed when applying our automata construction. However, like alternation depth, it is a purely syntactic notion. Formally we define a function $\mbox{sd}:\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}\to\mathbb{N}\times\\{\mbox{nd},\mbox{alt}\\}$ which returns, for each formula $\varphi$, a pair $\mbox{sd}(\varphi)=(k,x)$ where $k$ is the simulation depth of $\varphi$, and $x\in\\{\mbox{nd},\mbox{alt}\\}$ indicates whether the automaton $\mathcal{A}_{s}^{\varphi}$ built from $\varphi$ and a state $s$ of a CKS $\mathcal{S}$ is nondeterministic (nd) or alternating (alt).
If $\mbox{sd}(\varphi)=(k,x)$ we shall denote $k$ by $\mbox{sd}_{k}(\varphi)$ and $x$ by $\mbox{sd}_{x}(\varphi)$. The inductive definition for state formulas is as follows:

$\begin{array}{l}\mbox{sd}(p):=(0,\mbox{nd})\\\ \mbox{sd}(\neg\varphi):=(\mbox{sd}_{k}(\varphi),\mbox{alt})\\\ \mbox{sd}(\varphi_{1}\vee\varphi_{2}):=\left(\max_{i\in\\{1,2\\}}\mbox{sd}_{k}(\varphi_{i}),x\right),\mbox{ where }x=\begin{cases}\mbox{nd}&\mbox{if }\mbox{sd}_{x}(\varphi_{1})=\mbox{sd}_{x}(\varphi_{2})=\mbox{nd}\\\ \mbox{alt}&\mbox{otherwise}\end{cases}\\\ \mbox{sd}({\bf E}\psi):=\begin{cases}(0,\mbox{nd})&\mbox{if }\psi\in\textnormal{LTL}\\\ (\max_{\varphi\in\max(\psi)}\mbox{sd}_{k}(\varphi),\mbox{alt})&\mbox{otherwise}\end{cases}\\\ \mbox{sd}(\exists^{\textnormal{{o}}}p.\,\varphi):=(k,\mbox{nd}),\mbox{ where }k=\begin{cases}\mbox{sd}_{k}(\varphi)&\mbox{if }\mbox{sd}_{x}(\varphi)=\mbox{nd}\mbox{ and }\textnormal{{o}}=I_{\varphi}\mbox{ (recall Definition 4.8)}\\\ \mbox{sd}_{k}(\varphi)+1&\mbox{otherwise}\end{cases}\end{array}$

We explain each case. For an atomic proposition $p$, the automaton $\mathcal{A}_{s}^{p}$ is clearly nondeterministic and no simulation is involved in its construction. For a formula $\neg\varphi$, the automaton $\mathcal{A}_{s}^{\neg\varphi}$ is obtained by dualising $\mathcal{A}_{s}^{\varphi}$, an operation that in general does not return a nondeterministic automaton but an alternating one; also, this dualisation does not involve any simulation, hence the definition of the first component. Now for the disjunction, the first component should be clear; for the second one, observe that by construction of $\mathcal{A}_{s}^{\varphi_{1}\vee\varphi_{2}}$, if both $\mathcal{A}_{s}^{\varphi_{1}}$ and $\mathcal{A}_{s}^{\varphi_{2}}$ are nondeterministic, then so is $\mathcal{A}_{s}^{\varphi_{1}\vee\varphi_{2}}$; otherwise, it is alternating.

For the path quantifier, by construction $\mathcal{A}_{s}^{{\bf E}\psi}$ is alternating in the general case, as it starts copies of automata for each maximal state subformula in $\psi$; for the first component, we recall that $\max(\psi)$ denotes the set of these maximal state subformulas and we observe that no additional simulation is performed to build $\mathcal{A}_{s}^{{\bf E}\psi}$ besides those needed to construct the automata for the maximal state subformulas. If $\psi$ is an LTL formula, then one can build the nondeterministic word automaton $\mathcal{W}^{\psi}$ directly working on “real” atomic propositions in ${\textnormal{AP}_{\exists}}\cup\textnormal{AP}_{f}$. The automaton $\mathcal{A}$ can then be built working directly on ${\textnormal{AP}_{\exists}}$, with $\mathcal{W}^{\psi}$ reading valuations for ${\textnormal{AP}_{\exists}}$ in the input tree and those for atoms in $\textnormal{AP}_{f}$ in the current state of $\mathcal{S}$. Because we do not need to guess valuations of maximal state subformulas and launch additional automata to check that these guesses are correct, we obtain a nondeterministic automaton.

Finally, for a formula of the form $\exists^{\textnormal{{o}}}p.\,\varphi$, to build automaton $\mathcal{A}_{s}^{\exists^{\textnormal{{o}}}p.\,\varphi}$ we first build $\mathcal{A}_{s}^{\varphi}$, which we then narrow down to work on $S_{\textnormal{{o}}}$-trees. Since the narrowing operation introduces alternation, we need to nondeterminise the resulting automaton before projecting it with respect to $p$.
Now observe that if $I_{\varphi}=\textnormal{{o}}$ we do not need to perform this narrowing, and thus if $\mathcal{A}_{s}^{\varphi}$ is a nondeterministic automaton we can directly perform the projection. This justifies the definition of the first component; for the second one, observe that the projection of a nondeterministic automaton is also nondeterministic.

###### Example 4.11.

Assume that $n=3$, i.e., states of the CKS have three components (recall that $[3]=\\{1,2,3\\}$). Let us consider formula $\varphi=\forall^{\\{1,3\\}}p.\,\forall^{[3]}q.\,\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r)$. We describe how its simulation depth is computed. First, let us rewrite $\varphi=\neg\exists^{\\{1,3\\}}p.\,\exists^{[3]}q.\,\neg\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r)$. Since ${\bf G}(p\wedge q\vee r)$ is an LTL formula, $\mbox{sd}({\bf E}{\bf G}(p\wedge q\vee r))=(0,\mbox{nd})$. Next, because $I_{{\bf E}{\bf G}(p\wedge q\vee r)}=[3]$, it follows that $\mbox{sd}(\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r))=(0,\mbox{nd})$, and $\mbox{sd}(\neg\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r))=(0,\mbox{alt})$. Next we have that $\mbox{sd}(\exists^{[3]}q.\,\neg\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r))=(1,\mbox{nd})$. This reflects the fact that the automaton obtained for formula $\neg\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r)$, which is alternating because of complementation, needs to be simulated before projecting it over $q$. Then, because $\\{1,3\\}\neq[3]$, it holds that $\mbox{sd}(\exists^{\\{1,3\\}}p.\,\exists^{[3]}q.\,\neg\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r))=(2,\mbox{nd})$: to project over $p$ we first need to narrow down the previous automaton to make it see only components 1 and 3, and because the narrowing operation introduces alternation, the resulting automaton needs to be simulated before projecting it. Finally, we get that $\mbox{sd}(\varphi)=(2,\mbox{alt})$.

We now introduce two additional depth measures on $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formulas, which help us establish more precise upper bounds on the sizes of the automata we build. For every $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\varphi$, we let ${\bf E}\mathrm{d}(\varphi)$ be the maximum number of nested path quantifiers ${\bf E}$ in $\varphi$, and $\exists\mathrm{d}(\varphi)$ be the maximum number of nested second-order quantifiers $\exists$ in $\varphi$. We also inductively define the function $\mathrm{exp}\big{(}k\mid n\big{)}$, for $k,n\in\mathbb{N}$, as follows: $\mathrm{exp}\big{(}0\mid n\big{)}:=n$ and $\mathrm{exp}\big{(}k+1\mid n\big{)}:=2^{\mathrm{exp}\big{(}k\mid n\big{)}}$.

###### Proposition 4.12.

Let $\Phi$ be a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ formula, $\mathcal{S}$ a CKS and $s\in\mathcal{S}$ a state.

* • If $\mbox{sd}_{k}(\Phi)=0$, $\mathcal{A}_{s}^{\Phi}$ has at most $f_{\mathcal{S}}^{\Phi}$ states and 2 colours, and

* • if $\mbox{sd}_{k}(\Phi)\geq 1$, $\mathcal{A}_{s}^{\Phi}$ has at most $\mathrm{exp}\big{(}\mbox{sd}_{k}(\Phi)\mid f_{\mathcal{S}}^{\Phi}\log f_{\mathcal{S}}^{\Phi}\big{)}$ states and its number of colours is at most $\mathrm{exp}\big{(}\mbox{sd}_{k}(\Phi)-1\mid f_{\mathcal{S}}^{\Phi}\log f_{\mathcal{S}}^{\Phi}\big{)}$,

where $f_{\mathcal{S}}^{\Phi}=m_{1}^{\exists\mathrm{d}(\Phi)}|\Phi||\mathcal{S}|^{{\bf E}\mathrm{d}(\Phi)}2^{m_{2}|\Phi|{\bf E}\mathrm{d}(\Phi)}$, with $m_{1},m_{2}\in\mathbb{N}$ constants.
Also, if $\mathcal{A}_{s}^{\Phi}$ has state set $Q$ then for each $q\in Q$ and $a\in 2^{{\textnormal{AP}_{\exists}}(\Phi)}$ we have $|\delta(q,a)|\leq|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H|\Phi|}$, where $H=1+{\bf E}\mathrm{d}(\Phi)$. Constants $m_{1}$ and $m_{2}$ are derived from constants in the complexity of, respectively, the simulation procedure, and the procedure that builds a nondeterministic word automaton for an LTL formula. For more detail, see the proof of Proposition 4.12 in Appendix A. From this we get the following complexity result.

###### Proposition 4.13.

The model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ formulas of simulation depth at most $k$ is $(k+1)$-Exptime -complete.

###### Proof.

We start with the upper bounds. For an instance $(\Phi,\mathcal{S})$, our decision procedure in Section 4.3 first builds automaton $\mathcal{A}_{s_{\iota}}^{\Phi}$, and concludes by testing whether the full $S_{\Phi}$-tree with empty labelling $t$ is accepted by $\mathcal{A}_{s_{\iota}}^{\Phi}$. This can be done in time $O((|\mathcal{A}_{s_{\iota}}^{\Phi}|\cdot|t|)^{l})$, where $|t|$ is the size of a smallest Kripke structure representing the regular tree $t$, $|\mathcal{A}_{s_{\iota}}^{\Phi}|$ is the sum of the number of states and sizes of formulas in the transition function of $\mathcal{A}_{s_{\iota}}^{\Phi}$, and $l$ the number of colours it uses (Löding, 2011). Clearly $t$ can be represented by a Kripke structure of size $|S_{\Phi}|$, so that $|t|\leq|S_{\Phi}|\leq|\mathcal{S}|$. By Proposition 4.12, each formula in the transition function of $\mathcal{A}_{s_{\iota}}^{\Phi}$ is of size at most $|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H|\Phi|}$, where $Q$ is the set of states in $\mathcal{A}_{s_{\iota}}^{\Phi}$ and $H=1+{\bf E}\mathrm{d}(\Phi)$. There are at most $|Q|2^{|{\textnormal{AP}_{\exists}}(\Phi)|}$ such formulas (in fact the final automaton $\mathcal{A}_{s_{\iota}}^{\Phi}$ does not read anything in its input, hence the alphabet could be considered to be a singleton, leaving at most $|Q|$ different formulas in the transition function) and $|{\textnormal{AP}_{\exists}}(\Phi)|\leq|\Phi|$, so that $|\mathcal{A}_{s_{\iota}}^{\Phi}|\leq|Q|+|Q|2^{|{\textnormal{AP}_{\exists}}(\Phi)|}|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H|\Phi|}\leq 2|\mathcal{S}||Q|^{|\mathcal{S}|+1}2^{(H+1)|\Phi|}$. Also $H+1\leq|\Phi|$, so we finally have $|\mathcal{A}_{s_{\iota}}^{\Phi}|\leq 2|\mathcal{S}||Q|^{|\mathcal{S}|+1}2^{|\Phi|^{2}}$. If $k=0$, by Proposition 4.12 $\mathcal{A}_{s_{\iota}}^{\Phi}$ has at most $f_{\mathcal{S}}^{\Phi}$ states and 2 colours, and $f_{\mathcal{S}}^{\Phi}$ is polynomial in $|\mathcal{S}|$ but exponential in $|\Phi|$. Therefore $|\mathcal{A}_{s_{\iota}}^{\Phi}|$ is exponential in $|\Phi|$ and in $|\mathcal{S}|$, and so is the complexity of checking that $t$ is accepted by $\mathcal{A}_{s_{\iota}}^{\Phi}$. If $k\geq 1$, by Proposition 4.12, $|Q|$ is $k$-exponential in $f_{\mathcal{S}}^{\Phi}\log f_{\mathcal{S}}^{\Phi}$, and $f_{\mathcal{S}}^{\Phi}\log f_{\mathcal{S}}^{\Phi}$ itself is polynomial in $|\mathcal{S}|$ but exponential in $|\Phi|$. As a result, $|\mathcal{A}_{s_{\iota}}^{\Phi}|$ is $(k+1)$-exponential in $|\Phi|$ and $k$-exponential in $|\mathcal{S}|$. Finally, still by Proposition 4.12, the number of colours $l$ is $(k-1)$-exponential in $f_{\mathcal{S}}^{\Phi}\log f_{\mathcal{S}}^{\Phi}$, hence $k$-exponential in $|\Phi|$.
Checking that $t$ is accepted by $\mathcal{A}_{s_{\iota}}^{\Phi}$ can thus be done in time $(k+1)$-exponential in $|\Phi|$, and $k$-exponential in $|\mathcal{S}|$, which establishes the upper bounds. For the lower bounds, consider the fragment $\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ of $\textnormal{{QCTL}}^{*}$ (with perfect information) which consists of formulas in prenex normal form, i.e., with all second-order quantifications at the beginning, with at most $k$ alternations between existential and universal quantifiers, counting the first quantifier as one alternation (see (Laroussinie and Markey, 2014, p.8) for a formal definition). Clearly, $\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ is a fragment of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ (with $n=1$), and formulas of $\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ have simulation depth at most $k$. It is proved in (Laroussinie and Markey, 2014) that model checking $\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ is $(k+1)$-Exptime -hard. ∎

###### Remark 3.

One may wonder why we do not get our lower bounds from the distributed synthesis problem in systems with hierarchical information. The reason is that this problem is $k$-Exptime -complete for LTL or $\textnormal{{CTL}}^{*}$ specifications (Pnueli and Rosner, 1990; Kupferman and Vardi, 2001) and can be expressed with formulas of simulation depth $k$, and thus would only provide $k$-Exptime lower bounds for simulation depth $k$, while our problem is $(k+1)$-Exptime -complete. This may seem surprising, but we point out that thanks to the alternation of existential and universal quantifiers, $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formulas with simulation depth $k$ can express more complex problems than classic distributed synthesis, such as the existence of Nash equilibria (see Section 7.1).

Improved upper bound.

We now refine the previous result by observing that some subformulas can be model-checked independently in a bottom-up labelling algorithm which uses the above model-checking procedure as a subroutine. The height of the tower of exponentials in the overall procedure for a formula $\Phi$ is thus determined by the maximal simulation depth of the successive independent subformulas $\varphi$ treated by the labelling algorithm, instead of the simulation depth of the full formula $\Phi$. To make this precise we define the _simulation number_ of a sentence, akin to the alternation number introduced in (Mogavero et al., 2014). Let $\Phi\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, and assume without loss of generality that ${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\Phi)=\emptyset$. A state subformula $\varphi$ of $\Phi$ is a _subsentence_ if no atom quantified in $\Phi$ appears free in $\varphi$, i.e., $\varphi$ is a subsentence of $\Phi$ if ${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\varphi)=\emptyset$ (observe that since we always assume that ${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\Phi)=\emptyset$, $\Phi$ is a subsentence of itself). The _simulation number_ $\mbox{sn}(\Phi)$ of a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\Phi$ is the maximal simulation depth of $\Phi$’s subsentences, where the simulation depth is computed by considering strict subsentences as atoms.
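Since $\mbox{sd}$ is purely syntactic, it is straightforward to mechanise. The following minimal sketch (an illustration of ours, not part of the formal development) computes $\mbox{sd}$ over a toy AST for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ state formulas, together with the tower function $\mathrm{exp}\big{(}k\mid n\big{)}$. Two simplifying assumptions are ours: path formulas are abstracted by an `E` node carrying an `is_ltl` flag and its maximal state subformulas, and $I_{\varphi}$ is approximated as the intersection of the observations occurring in $\varphi$ (by analogy with $\mathcal{O}_{\varphi}$ of Section 5.2); a faithful implementation should substitute the paper's own definition of $I_{\varphi}$.

```python
# Minimal sketch (ours): simulation depth for a toy QCTL*_ii AST.
# Assumptions: path formulas abstracted by an E node; I_phi approximated
# as the intersection of the observations occurring in phi (empty
# intersection = [n]), by analogy with O_phi of Section 5.2.
from dataclasses import dataclass
from typing import Tuple

ND, ALT = "nd", "alt"
N = 3                                 # n = 3, as in Example 4.11
FULL = frozenset(range(1, N + 1))     # [n]

@dataclass
class Atom:
    p: str

@dataclass
class Not:
    phi: object

@dataclass
class Or:
    lhs: object
    rhs: object

@dataclass
class E:                              # E psi, abstracted
    is_ltl: bool
    max_state_subs: tuple = ()

@dataclass
class Exists:                         # ∃^obs p. phi
    p: str
    obs: frozenset
    phi: object

def I(phi) -> frozenset:
    """Assumed reading of I_phi: intersection of the observations of the
    quantifiers occurring in phi; the empty intersection is [n]."""
    if isinstance(phi, Atom):
        return FULL
    if isinstance(phi, Not):
        return I(phi.phi)
    if isinstance(phi, Or):
        return I(phi.lhs) & I(phi.rhs)
    if isinstance(phi, E):
        out = FULL
        for f in phi.max_state_subs:
            out &= I(f)
        return out
    return phi.obs & I(phi.phi)       # Exists

def sd(phi) -> Tuple[int, str]:
    """Follows the inductive clauses of Section 4.4."""
    if isinstance(phi, Atom):
        return (0, ND)
    if isinstance(phi, Not):
        return (sd(phi.phi)[0], ALT)
    if isinstance(phi, Or):
        (k1, x1), (k2, x2) = sd(phi.lhs), sd(phi.rhs)
        return (max(k1, k2), ND if (x1, x2) == (ND, ND) else ALT)
    if isinstance(phi, E):
        if phi.is_ltl:
            return (0, ND)
        return (max(sd(f)[0] for f in phi.max_state_subs), ALT)
    k, x = sd(phi.phi)                # Exists: narrow + simulate, unless the
    if x == ND and phi.obs == I(phi.phi):  # automaton is nondeterministic and
        return (k, ND)                     # already works on L_o-trees
    return (k + 1, ND)

def exp_tower(k: int, n: int) -> int:
    """exp(k | n): a tower of k exponentials on top of n."""
    return n if k == 0 else 2 ** exp_tower(k - 1, n)

# Example 4.11: ¬∃^{1,3}p. ∃^{[3]}q. ¬∃^{[3]}r. E G(p ∧ q ∨ r)
phi = Not(Exists("p", frozenset({1, 3}),
          Exists("q", FULL,
          Not(Exists("r", FULL, E(is_ltl=True))))))
assert sd(phi) == (2, ALT) and exp_tower(2, 1) == 4
```

On the formula of Example 4.11 the sketch returns $(2,\mbox{alt})$, matching the value computed above.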
Note that because temporal operators of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ can only talk about the future, the truth value of a subsentence in a node $u$ of an unfolding $t_{\mathcal{S}}$ only depends on the current state $\mbox{last}(u)$. The bottom-up labelling algorithm for an instance $(\Phi,\mathcal{S})$ thus consists in iteratively model checking innermost subsentences of $\Phi$ in all states of $\mathcal{S}$, marking the states where they hold with fresh atomic propositions with which the corresponding subsentences are replaced in $\Phi$.

###### Proposition 4.14.

The model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ formulas of simulation number at most $k$ is $(k+1)$-Exptime -complete.

## 5\. Model-checking hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$

In this section we establish that the model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ restricted to the class of hierarchical instances is decidable (Theorem 2.9).

### 5.1. Reduction to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$

We build upon the proof in (Laroussinie and Markey, 2015) that establishes the decidability of the model-checking problem for $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ by reduction to the model-checking problem for $\textnormal{{QCTL}}^{*}$. The main difference is that we reduce to the model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ instead, using quantifiers on atomic propositions parameterised with observations that reflect the ones used in the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ model-checking instance. Let $(\mathcal{G},\Phi)$ be a hierarchical instance of the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ model-checking problem, and assume without loss of generality that each strategy variable is quantified at most once in $\Phi$. We define an equivalent instance of the model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$.

Constructing the CKS $\mathcal{S}_{\mathcal{G}}$.

We define $\mathcal{S}_{\mathcal{G}}$ so that (indistinguishable) nodes in its tree-unfolding correspond to (indistinguishable) finite plays in $\mathcal{G}$. The CKS will make use of atomic propositions $\textnormal{AP}_{v}:=\\{p_{v}\mid v\in V\\}$ (that we assume to be disjoint from AP). The idea is that $p_{v}$ allows the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $(\Phi)_{s}^{\,\emptyset}$ to refer to the current position $v$ in $\mathcal{G}$. Later we will see that $(\Phi)_{s}^{\,\emptyset}$ will also make use of atomic propositions $\textnormal{AP}_{c}:=\\{p_{c}^{x}\mid c\in\textnormal{Ac}\mbox{ and }x\in\textnormal{Var}\\}$ that we assume, again, are disjoint from $\textnormal{AP}\cup\textnormal{AP}_{v}$. This allows the formula to use $p_{c}^{x}$ to refer to the action $c$ advised by strategy $x$. Suppose $\textnormal{Obs}=\\{o_{1},\ldots,o_{n}\\}$, and let $\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$. For $i\in[n]$, define the local states $L_{i}:=\\{[v]_{o_{i}}\mid v\in V\\}$ where $[v]_{o}$ is the equivalence class of $v$ for relation $\sim_{o}$. Since we need to know the actual position of the $\textrm{CGS}_{\textnormal{ii}}$ to define the dynamics, we also let $L_{n+1}:=V$.
Define the CKS $\mathcal{S}_{\mathcal{G}}:=(S,R,s_{{\iota}},\ell^{\prime})$ where

* • $S:=\\{s_{v}\mid v\in V\\}$,

* • $R:=\\{(s_{v},s_{v^{\prime}})\mid\exists\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}\mbox{ s.t. }E(v,\bm{c})=v^{\prime}\\}\subseteq S^{2}$,

* • $s_{{\iota}}:=s_{v_{{\iota}}}$,

* • $\ell^{\prime}(s_{v}):=\ell(v)\cup\\{p_{v}\\}\subseteq\textnormal{AP}\cup\textnormal{AP}_{v}$,

and $s_{v}:=([v]_{o_{1}},\ldots,[v]_{o_{n}},v)\in\prod_{i\in[n+1]}L_{i}$. For every finite play $\rho=v_{0}\ldots v_{k}$, define the node $u_{\rho}:=s_{v_{0}}\ldots s_{v_{k}}$ in $t_{\mathcal{S}_{\mathcal{G}}}$ (which exists, by definition of $\mathcal{S}_{\mathcal{G}}$ and of tree unfoldings). Note that the mapping $\rho\mapsto u_{\rho}$ defines a bijection between the set of finite plays and the set of nodes in $t_{\mathcal{S}_{\mathcal{G}}}$.

Constructing the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ formulas $(\varphi)_{s}^{\,f}$.

We now describe how to transform an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula $\varphi$ and a partial function $f:\textnormal{Ag}\rightharpoonup\textnormal{Var}$ into a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $(\varphi)_{s}^{\,f}$ (that will also depend on $\mathcal{G}$). Suppose that $\textnormal{Ac}=\\{c_{1},\ldots,c_{l}\\}$, and define $(\varphi)_{s}^{\,f}$ and $(\psi)_{p}^{\,f}$ by mutual induction on state and path formulas. The base cases are as follows: $(p)_{s}^{\,f}:=p$ and $(\varphi)_{p}^{\,f}:=(\varphi)_{s}^{\,f}$. Boolean and temporal operators are simply obtained by distributing the translation: $(\neg\varphi)_{s}^{\,f}:=\neg(\varphi)_{s}^{\,f}$, $(\neg\psi)_{p}^{\,f}:=\neg(\psi)_{p}^{\,f}$, $(\varphi_{1}\vee\varphi_{2})_{s}^{\,f}:=(\varphi_{1})_{s}^{\,f}\vee(\varphi_{2})_{s}^{\,f}$, $(\psi_{1}\vee\psi_{2})_{p}^{\,f}:=(\psi_{1})_{p}^{\,f}\vee(\psi_{2})_{p}^{\,f}$, $({\bf X}\psi)_{p}^{\,f}:={\bf X}(\psi)_{p}^{\,f}$ and $(\psi_{1}{\bf U}\psi_{2})_{p}^{\,f}:=(\psi_{1})_{p}^{\,f}{\bf U}(\psi_{2})_{p}^{\,f}$. We continue with the case of the strategy quantifier:

$\begin{array}[]{lrl}&(\langle\\!\langle x\rangle\\!\rangle^{o}\varphi)_{s}^{\,f}&:=\exists^{\widetilde{o}}p_{c_{1}}^{x}\ldots\exists^{\widetilde{o}}p_{c_{l}}^{x}.\varphi_{\text{str}}(x)\wedge(\varphi)_{s}^{\,f}\\\\[5.0pt] \mbox{where}&\varphi_{\text{str}}(x)&:={\bf A}{\bf G}\bigvee_{c\in\textnormal{Ac}}p_{c}^{x}\\\\[5.0pt] \mbox{and}&\widetilde{o_{i}}&:=\\{j\mid\mathcal{O}(o_{i})\subseteq\mathcal{O}(o_{j})\\}.\end{array}$

The intuition is that for each possible action $c\in\textnormal{Ac}$, an existential quantification on the atomic proposition $p_{c}^{x}$ “chooses” for each node $u_{\rho}$ of the tree $t_{\mathcal{S}_{\mathcal{G}}}$ whether strategy $x$ allows action $c$ in $\rho$ or not, and it does so uniformly with regard to observation $\widetilde{o}$. $\varphi_{\text{str}}(x)$ checks that at least one action is allowed in each node, and thus that atomic propositions $p_{c}^{x}$ indeed define a strategy. We define $\widetilde{o_{i}}$ as $\\{j\mid\mathcal{O}(o_{i})\subseteq\mathcal{O}(o_{j})\\}$ instead of $\\{i\\}$ in order to obtain a hierarchical instance. Note that including all coarser observations does not increase the information accessible to the quantifier: indeed, two nodes are $\\{i\\}$-indistinguishable if and only if they are $\widetilde{o_{i}}$-indistinguishable.
Here are the remaining cases:

$\begin{array}[]{lrl}&((a,x)\varphi)_{s}^{\,f}&:=(\varphi)_{s}^{\,f[a\mapsto x]}\quad\quad\text{for }x\in\textnormal{Var}\cup\\{\operatorname{?}\\}\\\\[5.0pt] \mbox{and}&({\bf E}\psi)_{s}^{\,f}&:={\bf E}\,(\psi_{\text{out}}^{\,f}\wedge(\psi)_{p}^{\,f})\\\\[5.0pt] \mbox{where}&\psi_{\text{out}}^{\,f}&:={\bf G}\bigvee_{v\in V}\left(p_{v}\wedge\bigvee_{\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}}\bigwedge_{a\in\textit{dom}(f)}p_{\bm{c}_{a}}^{f(a)}\wedge{\bf X}\,p_{E(v,\bm{c})}\right).\end{array}$

$\psi_{\text{out}}^{\,f}$ checks that each player $a$ in the domain of $f$ follows the strategy coded by the atoms $p_{c}^{f(a)}$.

###### Remark 4.

If we consider the fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ that only allows for deterministic strategies, the translation can be adapted by simply replacing formula $\varphi_{\text{str}}(x)$ above with its deterministic variant $\varphi_{\text{str}}^{\text{det}}(x):={\bf A}{\bf G}\bigvee_{c\in\textnormal{Ac}}(p_{c}^{x}\wedge\bigwedge_{c^{\prime}\neq c}\neg p_{c^{\prime}}^{x}),$ which ensures that _exactly one_ action is chosen for strategy $x$ in each finite play, and thus that atomic propositions $p_{c}^{x}$ characterise a deterministic strategy.

To prove correctness of the translation, given a strategy $\sigma$ and a strategy variable $x$ we let $\ell_{\sigma}^{x}:=\\{\ell_{p_{c}^{x}}\mid c\in\textnormal{Ac}\\}$ be the family of $p_{c}^{x}$-labellings for the tree $t_{\mathcal{S}_{\mathcal{G}}}$ defined as follows: for each finite play $\rho$ in $\mathcal{G}$ and $c\in\textnormal{Ac}$, we let $\ell_{p_{c}^{x}}(u_{\rho}):=1$ if $c\in\sigma(\rho)$, 0 otherwise. For a labelled tree $t$ with the same domain as $t_{\mathcal{S}_{\mathcal{G}}}$ we write $t\otimes\ell_{\sigma}^{x}$ for $t\otimes\ell_{p_{c_{1}}^{x}}\otimes\ldots\otimes\ell_{p_{c_{l}}^{x}}$. Given an infinite play $\pi$ and a point $i\in\mathbb{N}$, we also let $\lambda_{\pi,i}$ be the infinite path in $t_{\mathcal{S}_{\mathcal{G}}}$ that starts in node $u_{\pi_{\leq i}}$ and is defined as $\lambda_{\pi,i}:=u_{\pi_{\leq i}}u_{\pi_{\leq i+1}}u_{\pi_{\leq i+2}}\ldots$ Finally, for an assignment $\chi$ and a partial function $f:\textnormal{Ag}\rightharpoonup\textnormal{Var}$, we say that $f$ is _compatible_ with $\chi$ if $\textit{dom}(\chi)\cap\textnormal{Ag}=\textit{dom}(f)$ and for all $a\in\textit{dom}(f)$, $\chi(a)=\chi(f(a))$.

###### Proposition 5.1.

For every state subformula $\varphi$ and path subformula $\psi$ of $\Phi$, finite play $\rho$, infinite play $\pi$, point $i\in\mathbb{N}$, for every assignment $\chi$ variable-complete for $\varphi$ (resp. $\psi$) and partial function $f:\textnormal{Ag}\rightharpoonup\textnormal{Var}$ compatible with $\chi$, assuming also that no $x_{i}$ in $\textit{dom}(\chi)\cap\textnormal{Var}=\\{x_{1},\ldots,x_{k}\\}$ is quantified in $\varphi$ or $\psi$, we have

$\displaystyle\mathcal{G},\chi,{\rho}\models\varphi$ if and only if $\displaystyle t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models(\varphi)_{s}^{\,f}$ $\displaystyle\mathcal{G},\chi,{\pi},i\models\psi$ if and only if $\displaystyle t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},\lambda_{\pi,i}\models(\psi)_{p}^{\,f}$

In addition, $\mathcal{S}_{\mathcal{G}}$ is of size linear in $|\mathcal{G}|$, and $(\varphi)_{s}^{\,f}$ and $(\psi)_{p}^{\,f}$ are of size linear in $|\mathcal{G}|^{2}+|\varphi|$.

###### Proof.
The proof is by induction on $\varphi$. We detail the cases of binding, strategy quantification and outcome quantification; the case of atomic propositions follows from the definition of $\mathcal{S}_{\mathcal{G}}$, and the remaining cases follow by the induction hypothesis. For $\varphi=(a,x)\varphi^{\prime}$, we have $\mathcal{G},\chi,{\rho}\models(a,x)\varphi^{\prime}$ if and only if $\mathcal{G},\chi[a\mapsto\chi(x)],{\rho}\models\varphi^{\prime}$. The result follows by using the induction hypothesis with assignment $\chi[a\mapsto\chi(x)]$ and function $f[a\mapsto x]$. This is possible because $f[a\mapsto x]$ is compatible with $\chi[a\mapsto\chi(x)]$: indeed $\textit{dom}(\chi[a\mapsto\chi(x)])\cap\textnormal{Ag}$ is equal to $\textit{dom}(\chi)\cap\textnormal{Ag}\cup\\{a\\}$ which, by assumption, is equal to $\textit{dom}(f)\cup\\{a\\}=\textit{dom}(f[a\mapsto x])$. Also by assumption, for all $a^{\prime}\in\textit{dom}(f)$, $\chi(a^{\prime})=\chi(f(a^{\prime}))$, and by definition $\chi[a\mapsto\chi(x)](a)=\chi(x)=\chi(f[a\mapsto x](a))$. For $\varphi=\langle\\!\langle x\rangle\\!\rangle^{o}\varphi^{\prime}$, assume first that $\mathcal{G},\chi,{\rho}\models\langle\\!\langle x\rangle\\!\rangle^{o}\varphi^{\prime}$. There exists an $o$-uniform strategy $\sigma$ such that $\mathcal{G},\chi[x\mapsto\sigma],\rho\models\varphi^{\prime}.$ Since $f$ is compatible with $\chi$, it is also compatible with assignment $\chi^{\prime}=\chi[x\mapsto\sigma]$. By assumption, no variable in $\\{x_{1},\ldots,x_{k}\\}$ is quantified in $\varphi$, so that $x\neq x_{i}$ for all $i$, and thus $\chi^{\prime}(x_{i})=\chi(x_{i})$ for all $i$; and because no strategy variable is quantified twice in the same formula, $x$ is not quantified in $\varphi^{\prime}$, so that no variable in $\\{x_{1},\ldots,x_{k},x\\}$ is quantified in $\varphi^{\prime}$. By induction hypothesis $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi^{\prime}(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi^{\prime}(x_{k})}^{x_{k}}\otimes\ell_{\chi^{\prime}(x)}^{x},u_{\rho}\models(\varphi^{\prime})_{s}^{\,f}.$ Because $\sigma$ is $o$-uniform, each $\ell_{p_{c}^{x}}\in\ell_{\sigma}^{x}=\ell_{\chi^{\prime}(x)}^{x}$ is $\widetilde{o}$-uniform, and it follows that $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi^{\prime}(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi^{\prime}(x_{k})}^{x_{k}},u_{\rho}\models\exists^{\widetilde{o}}p_{c_{1}}^{x}\ldots\exists^{\widetilde{o}}p_{c_{l}}^{x}.\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}.$ Finally, since $\chi^{\prime}(x_{i})=\chi(x_{i})$ for all $i$, we conclude that $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models(\langle\\!\langle x\rangle\\!\rangle^{o}\varphi^{\prime})_{s}^{\,f}.$ For the other direction, assume that $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models(\varphi)_{s}^{\,f},$ and recall that $(\varphi)_{s}^{\,f}=\exists^{\widetilde{o}}p_{c_{1}}^{x}\ldots\exists^{\widetilde{o}}p_{c_{l}}^{x}.\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}$. Write $t=t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}}$.
There exist $\widetilde{o}$-uniform $\ell_{p_{c}^{x}}$-labellings such that $t\otimes\ell_{p_{c_{1}}^{x}}\otimes\ldots\otimes\ell_{p_{c_{l}}^{x}},u_{\rho}\models\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}.$ By $\varphi_{\text{str}}(x)$, these labellings code for a strategy $\sigma$, and because they are $\widetilde{o}$-uniform, $\sigma$ is $o$-uniform. Let $\chi^{\prime}=\chi[x\mapsto\sigma]$. For all $1\leq i\leq k$, by assumption $x\neq x_{i}$, and thus $\chi^{\prime}(x_{i})=\chi(x_{i})$. The above can thus be rewritten $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi^{\prime}(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi^{\prime}(x_{k})}^{x_{k}}\otimes\ell_{\chi^{\prime}(x)}^{x},u_{\rho}\models\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}.$ By induction hypothesis we have $\mathcal{G},\chi[x\mapsto\sigma],\rho\models\varphi^{\prime}$, hence $\mathcal{G},\chi,\rho\models\langle\\!\langle x\rangle\\!\rangle^{o}\varphi^{\prime}$. For $\varphi={\bf E}\psi$, assume first that $\mathcal{G},\chi,{\rho}\models{\bf E}\psi$. There exists a play $\pi\in\textnormal{Out}(\chi,\rho)$ such that $\mathcal{G},\chi,\pi,|\rho|-1\models\psi$. By induction hypothesis, $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},\lambda_{\pi,|\rho|-1}\models(\psi)_{p}^{\,f}$. Since $\pi$ is an outcome of $\chi$, each agent $a\in\textit{dom}(\chi)\cap\textnormal{Ag}$ follows strategy $\chi(a)$ in $\pi$. Because $\textit{dom}(\chi)\cap\textnormal{Ag}=\textit{dom}(f)$ and for all $a\in\textit{dom}(f)$, $\chi(a)=\chi(f(a))$, each agent $a\in\textit{dom}(f)$ follows the strategy $\chi(f(a))$, which is coded by atoms $p_{c}^{f(a)}$ in the translation of $\Phi$. Therefore $\lambda_{\pi,|\rho|-1}$ also satisfies $\psi_{\text{out}}^{\,f}$, hence $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},\lambda_{\pi,|\rho|-1}\models\psi_{\text{out}}^{\,f}\wedge(\psi)_{p}^{\,f}$, and we are done. For the other direction, assume that $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models{\bf E}(\psi_{\text{out}}^{\,f}\wedge(\psi)_{p}^{\,f})$. There exists a path $\lambda$ in $t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}}$ starting in node $u_{\rho}$ that satisfies both $\psi_{\text{out}}^{\,f}$ and $(\psi)_{p}^{\,f}$. By construction of $\mathcal{S}_{\mathcal{G}}$ there exists an infinite play $\pi$ such that $\pi_{\leq|\rho|-1}=\rho$ and $\lambda=\lambda_{\pi,|\rho|-1}$. By induction hypothesis, $\mathcal{G},\chi,\pi,|\rho|-1\models\psi$. Because $\lambda_{\pi,|\rho|-1}$ satisfies $\psi_{\text{out}}^{\,f}$, $\textit{dom}(\chi)\cap\textnormal{Ag}=\textit{dom}(f)$, and for all $a\in\textit{dom}(f)$, $\chi(a)=\chi(f(a))$, it is also the case that $\pi\in\textnormal{Out}(\chi,\rho)$, hence $\mathcal{G},\chi,\rho\models{\bf E}\psi$. The sizes of $\mathcal{S}_{\mathcal{G}}$, $(\varphi)_{s}^{\,f}$ and $(\psi)_{p}^{\,f}$ are easily verified. ∎

Applying Proposition 5.1 to the sentence $\Phi$, $\rho=v_{\iota}$, any assignment $\chi$, and the empty function $\emptyset$, we get: $\mathcal{G}\models\Phi\quad\mbox{if and only if}\quad t_{\mathcal{S}_{\mathcal{G}}}\models(\Phi)_{s}^{\,\emptyset}.$

Preserving hierarchy.
To complete the proof of Theorem 2.9 it remains to check that $(\Phi)_{s}^{\,\emptyset}$ is a hierarchical $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula, which is the case because $\Phi$ is hierarchical in $\mathcal{G}$ and for every two observations $o_{i}$ and $o_{j}$ in Obs such that $\mathcal{O}(o_{i})\subseteq\mathcal{O}(o_{j})$, by definition of $\widetilde{o_{k}}$ we have that $\widetilde{o_{i}}\subseteq\widetilde{o_{j}}$.

### 5.2. Complexity

We now establish the complexity of model checking hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. As we did for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, we first define the simulation depth of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ state formulas. In the following inductive definition, $\mathcal{O}_{\varphi}$ denotes the intersection of all indistinguishability relations used in $\varphi$: $\mathcal{O}_{\varphi}:=\cap_{o\in\varphi}\mathcal{O}(o)$, with the empty intersection being defined as the identity relation (perfect information). Also, for a path formula $\psi$, $\max(\psi)$ is the set of maximal state subformulas in $\psi$.

$\begin{array}[]{l}\mbox{sd}(p):=(0,\mbox{nd})\\\\[5.0pt] \mbox{sd}(\neg\varphi):=(\mbox{sd}_{k}(\varphi),\mbox{alt})\\\\[5.0pt] \mbox{sd}(\varphi_{1}\vee\varphi_{2}):=\left(\max_{i\in\\{1,2\\}}\mbox{sd}_{k}(\varphi_{i}),x\right),\\\ \hfill\mbox{where }x=\begin{cases}\mbox{nd}&\mbox{if }\mbox{sd}_{x}(\varphi_{1})=\mbox{sd}_{x}(\varphi_{2})=\mbox{nd}\\\ \mbox{alt}&\mbox{otherwise}\end{cases}\\\\[15.0pt] \mbox{sd}(\langle\\!\langle x\rangle\\!\rangle^{o}\varphi):=(k,\mbox{nd}),\\\ \hfill\mbox{where }k=\begin{cases}\mbox{sd}_{k}(\varphi)&\mbox{if }\mbox{sd}_{x}(\varphi)=\mbox{nd}\mbox{ and }\mathcal{O}(o)=\mathcal{O}_{\varphi}\\\ \mbox{sd}_{k}(\varphi)+1&\mbox{otherwise}\end{cases}\\\\[15.0pt] \mbox{sd}((a,x)\varphi):=\mbox{sd}(\varphi)\\\\[5.0pt] \mbox{sd}({\bf E}\psi):=\begin{cases}(0,\mbox{nd})&\mbox{if }\psi\in\textnormal{LTL}\\\ (\max_{\varphi\in\max(\psi)}\mbox{sd}_{k}(\varphi),\mbox{alt})&\mbox{otherwise}\end{cases}\end{array}$

###### Proposition 5.2.

The model-checking problem for hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ of simulation depth at most $k$ is $(k+1)$-Exptime -complete.

###### Proof.

The upper bounds follow from the fact that the translated formulas in our reduction have essentially the same simulation depth as the original ones. However, this is not quite accurate: in the case where $\mbox{sd}_{x}(\varphi)=\mbox{nd}$ and $\mathcal{O}(o)=\mathcal{O}_{\varphi}$ we have $\mbox{sd}(\langle\\!\langle x\rangle\\!\rangle^{o}\varphi)=(\mbox{sd}_{k}(\varphi),\mbox{nd})$, while $\mbox{sd}((\langle\\!\langle x\rangle\\!\rangle^{o}\varphi)_{s}^{\,f})=(\mbox{sd}_{k}((\varphi)_{s}^{\,f})+1,\mbox{nd})$: indeed, while it is the case that $\mathcal{O}(o)=\mathcal{O}_{\varphi}$ implies that $\widetilde{o}=I_{(\varphi)_{s}^{\,f}}$, the translation introduces a conjunction with $\varphi_{\text{str}}(x)$, and even when $\mbox{sd}_{x}((\varphi)_{s}^{\,f})=\mbox{nd}$, we have $\mbox{sd}_{x}(\varphi_{\text{str}}(x)\wedge(\varphi)_{s}^{\,f})=\mbox{alt}$. According to Proposition 4.13, this should thus induce an additional exponential to check the translated formula.
However, this can be avoided by noticing that the fixed formula $\varphi_{\text{str}}(x)={\bf A}{\bf G}\bigvee_{c\in\textnormal{Ac}}p_{c}^{x}$ can be checked by a simple _deterministic_ tree automaton with two states $q_{\text{check}}$ and $q_{\text{rej}}$: the automaton starts in state $q_{\text{check}}$, which is accepting (it has parity zero); when it visits a node $u$ in state $q_{\text{check}}$, if $\ell(u)$ satisfies $\bigvee_{c\in\textnormal{Ac}}p_{c}^{x}$, then the automaton sends state $q_{\text{check}}$ to all children of $u$, otherwise it sends the state $q_{\text{rej}}$ to all children. State $q_{\text{rej}}$ is rejecting (it has parity one) and is a sink: it sends itself to all children, independently of the label of the visited node. If we restrict $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to deterministic strategies, the same observation can be made: the automaton that checks formula $\varphi_{\text{str}}^{\text{det}}(x)={\bf A}{\bf G}\bigvee_{c\in\textnormal{Ac}}(p_{c}^{x}\wedge\bigwedge_{c^{\prime}\neq c}\neg p_{c^{\prime}}^{x})$ is the same as the one described above, except that it checks whether $\bigvee_{c\in\textnormal{Ac}}(p_{c}^{x}\wedge\bigwedge_{c^{\prime}\neq c}\neg p_{c^{\prime}}^{x})$ is satisfied by the label of the current node. Given two tree automata $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$, one deterministic and one nondeterministic, one can easily build a nondeterministic automaton $\mathcal{A}_{1}\cap\mathcal{A}_{2}$ of size $|\mathcal{A}_{1}|\times|\mathcal{A}_{2}|$ that accepts the intersection of their languages, so that in this case the conjunction does not introduce alternation, and thus we do not need an additional simulation before projecting to guess the strategy. We could refine the notion of simulation depth to reflect this, but we find that it would become very cumbersome for little added benefit, so we confine this observation to this proof. The lower bounds are inherited from $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ thanks to the polynomial reduction presented in Section 6.2.2, which preserves simulation depth. ∎

We point out that all instances of the model-checking problem for the perfect-information fragment are hierarchical, and thus this result provides improved upper bounds for SL, which was only known to be in $k$-Exptime for formulas of length at most $k$ (Mogavero et al., 2014). Also the lower bounds for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ are inherited directly from the perfect-information fragment $\textnormal{{QCTL}}^{*}$, which reduces to the perfect-information fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ following the construction from Section 6.2.2. Therefore the lower bounds hold already for the perfect-information fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. Note however that this does not provide lower bounds for the usual, linear-time variant of Strategy Logic, where path quantifiers in $\textnormal{{QCTL}}^{*}$ formulas must be simulated with strategy quantifications which increase the simulation depth of the resulting Strategy Logic formulas. The exact complexity of the linear-time variant is not known, even in the perfect-information case.

Simulation number.
The intuition behind the alternation number as considered in (Mogavero et al., 2014) is to refine the classic alternation depth between existential and universal quantifiers by observing that subsentences of a sentence $\Phi$ to model-check can be treated independently thanks to a bottom-up labelling algorithm: innermost sentences are evaluated in all states of the model and replaced in $\Phi$ by atomic propositions that label the states where they hold. The alternation number of $\Phi$ is the maximum alternation depth of the successive subsentences that are treated by this bottom-up procedure, and it determines the complexity of the overall model- checking procedure. However, as discussed in Remark 1, the semantics of the outcome quantifier makes sentences sensitive to the assignment in which they are evaluated. As a result, to define the notion of alternation number in our setting, we introduce a notion of _independent subsentence_. Intuitively, a subsentence $\varphi$ of a sentence $\Phi$ is _independent_ if it redefines or unbinds the strategies of all players who are bound to a strategy when $\varphi$ is reached in the evaluation of $\Phi$. More precisely, we say that an agent $a$ is _bound_ in a syntactic subformula $\varphi$ of $\Phi$ if the path that leads to $\varphi$ in $\Phi$’s syntactic tree contains a binding operator $(a,x)$ for $a$ which is not followed by an unbinding $(a,\operatorname{?})$ for her. A subsentence $\varphi$ of $\Phi$ is _independent_ if all agents that are bound in $\varphi$ are either rebound by an operator $(a,x)$ or unbound by an operator $(a,\operatorname{?})$ before any outcome quantifier is met in $\varphi$. In an independent subsentence $\varphi$, the semantics of the outcome quantifier does not depend on strategies that are quantified outside $\varphi$, and in fact a subsentence $\varphi$ of $\Phi$ is independent if and only if the formula that corresponds to $\varphi$ in $(\Phi)_{s}^{\,\emptyset}$ is a subsentence of $(\Phi)_{s}^{\,\emptyset}$. Similarly to what we did for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ we now define the _simulation number_ $\mbox{sn}(\Phi)$ of an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ sentence $\Phi$ as the maximum of the simulation depths for independent subsentences, where strict independent subsentences are counted as atoms. ###### Lemma 5.3. For every hierarchical instance $(\mathcal{G},\Phi)$ of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, $\mbox{sn}(\Phi)=\mbox{sn}((\Phi)_{s}^{\,\emptyset})$. The following then follows from Proposition 5.1, Lemma 5.3 and Proposition 4.14. ###### Proposition 5.4. The model-checking problem for hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ of simulation number at most $k$ is $(k+1)$-Exptime -complete. We now compare the latter result with the complexity of model checking SL[NG], the nested goal fragment of Strategy Logic with perfect information (we refer the interested reader to (Mogavero et al., 2014) for a definition of this fragment). It is established in (Chatterjee et al., 2010b; Mogavero et al., 2014) that this problem is in $(k+1)$-Exptime for formulas of _alternation number_ $k$. 
We remark that the simulation number of an SL[NG] formula translated into our branching-time version of SL (this is done by adding outcome quantifiers between bindings and temporal operators) is equal to its alternation number plus one, and thus Proposition 5.4 gives a $(k+2)$-Exptime upper bound for SL[NG] formulas of alternation number $k$. In (Chatterjee et al., 2010b; Mogavero et al., 2014) the extra exponential is avoided by resorting to universal and nondeterministic tree automata, depending on whether the innermost strategy quantification is existential or universal, to deal with temporal formulas. Thus, the innermost strategy quantification can be dealt with without incurring an exponential blowup. The same thing cannot be done for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, for two reasons. The first one is that in general the innermost strategy quantification may have imperfect information and thus require a narrowing of the automaton; this operation introduces alternation, which needs to be removed at the cost of one exponential before dealing with strategy quantification. The second reason is that even when the innermost strategy has perfect information, the outcome quantifier that we introduce in Strategy Logic allows the expression of $\textnormal{{CTL}}^{*}$ formulas, which cannot be dealt with by nondeterministic and universal automata as is done in (Chatterjee et al., 2010b; Mogavero et al., 2014).

## 6\. Comparison with related logics

In this section we first show that $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ subsumes SL and the main imperfect-information extensions of ATL. Then we show that model checking Coordination Logic (CL) reduces to model checking hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ where the truth of all atomic propositions in the model is known by all agents (or more precisely, all observations in the concurrent game structures are fine enough to observe the truth value of all atomic propositions).

### 6.1. Comparison with ATL

The main difference between SL and ATL-like strategic logics is that in the latter a strategy is always bound to some player, while in the former bindings and quantifications are separated. This separation adds expressive power, e.g., one can bind the same strategy to different players. Extending ATL with imperfect information is done by giving each player an indistinguishability relation that its strategies must respect (Bulling and Jamroga, 2014). In $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ instead each strategy $x$ is assigned an indistinguishability relation $o$ when it is quantified. Associating observations to strategies rather than players allows us to obtain a logic $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ that is a clean generalisation of (perfect-information) SL, and subsumes imperfect-information extensions of $\textnormal{{ATL}}^{*}$ that associate observations to players. Concerning SL, it is rather easy to see that every sentence in SL has an equivalent in the fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ with deterministic strategies where all observation symbols are interpreted as perfect information. We now prove that $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ also subsumes $\textnormal{{ATL}}^{*}$ with imperfect information.

###### Proposition 6.1.
For every $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ formula $\varphi$ there is an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula $\varphi^{\prime}$ such that for every $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ there is a $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}^{\prime}$ such that $\mathcal{G}\models\varphi$ if, and only if, $\mathcal{G}^{\prime}\models\varphi^{\prime}$. (See (Bulling and Jamroga, 2014) for the definition of $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$, where subscript i refers to “imperfect information” and subscript R to “perfect recall”; we consider the so-called _objective semantics_ for $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$.)

We recall that an $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ formula $\langle A\rangle\psi$ reads as “there are strategies for players in $A$ such that $\psi$ holds whatever players in $\textnormal{Ag}\setminus A$ do”. Formula $\varphi^{\prime}$ is built from $\varphi$ by replacing each subformula of the form $\langle A\rangle\psi$, where $A=\\{a_{1},\ldots,a_{k}\\}\subset\textnormal{Ag}$ is a coalition of players and $\textnormal{Ag}\setminus A=\\{a_{k+1},\ldots,a_{n}\\}$, with the formula $\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\ldots\langle\\!\langle x_{k}\rangle\\!\rangle^{o_{k}}(a_{1},x_{1})\ldots(a_{k},x_{k})(a_{k+1},\operatorname{?})\ldots(a_{n},\operatorname{?}){\bf A}\,\psi^{\prime}$, where $\psi^{\prime}$ is the translation of $\psi$. Then $\mathcal{G}^{\prime}$ is obtained from $\mathcal{G}$ by interpreting each $o_{i}$ as the equivalence relation for player $i$ in $\mathcal{G}$, and interpreting $o_{p}$ as the identity relation. Third, $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ also subsumes the imperfect-information extension of $\textnormal{{ATL}}^{*}$ with strategy context (see (Laroussinie et al., 2015) for the definition of $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ with partial observation, which we refer to as $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$).

###### Proposition 6.2.

For every $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$ formula $\varphi$ there is an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula $\varphi^{\prime}$ such that for every $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ there is a $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}^{\prime}$ such that $\mathcal{G}\models\varphi$ if, and only if, $\mathcal{G}^{\prime}\models\varphi^{\prime}$.

The only difference between $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$ and $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ is the following: in $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$, when a subformula of the form $\langle A\rangle\psi$ is met, we quantify existentially on strategies for players in $A$ and quantify universally on possible outcomes obtained by letting other players behave however they want. Therefore, if any player in $\textnormal{Ag}\setminus A$ had previously been assigned a strategy, it is forgotten. In $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$ on the other hand, these strategies are stored in a _strategy context_ , which is a _partial_ assignment $\chi$, defined for the subset of players currently bound to a strategy.
It is then easy to adapt the translation presented for Proposition 6.1: it suffices not to unbind agents outside the coalition from their strategies. $\mathcal{G}^{\prime}$ is defined as for Proposition 6.1.

### 6.2. Comparison with Coordination Logic

There is a natural and simple translation of instances of the model-checking problem of CL (Finkbeiner and Schewe, 2010) into hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. Moreover, the image of this translation consists of instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ of a very restricted form: atoms mentioned in the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-formula are observable by all observations of the $\textrm{CGS}_{\textnormal{ii}}$ , i.e., for all $o\in\textnormal{Obs}$ and $p\in\textnormal{AP}$, $v\sim_{o}v^{\prime}$ implies that $p\in\ell(v)$ iff $p\in\ell(v^{\prime})$.

###### Proposition 6.3.

There is an effective translation that, given a CL-instance $(\mathcal{S},\varphi)$ produces a hierarchical $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance $(\mathcal{G},\Phi)$ such that

1. (1) $\mathcal{S}\models\varphi$ if, and only if, $\mathcal{G}\models\Phi$,

2. (2) For all atoms $p\in\textnormal{AP}$ and observations $o\in\textnormal{Obs}$, $v\sim_{o}v^{\prime}$ implies that $p\in\ell(v)$ iff $p\in\ell(v^{\prime})$.

To do this, one first translates CL into (hierarchical) $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ (see Section 6.2.1 below). This step is a simple reflection of the semantics of CL in that of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. Then one translates $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ into $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ by a simple adaptation of the translation of $\textnormal{{QCTL}}^{*}$ into $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ (Laroussinie and Markey, 2015). We briefly recall the syntax and semantics of CL, and refer to (Finkbeiner and Schewe, 2010) for further details.

Notation for trees.

Note that our definition for trees (see Section 3.2) differs slightly from the one in (Finkbeiner and Schewe, 2010), where the root is the empty word. Here we adopt the latter convention, to stay closer to the notation of (Finkbeiner and Schewe, 2010). Thus, $(Y,X)$-trees in CL are of the form $(\tau,l)$ where $\tau\subseteq X^{*}$ and $l:\tau\to 2^{Y}$. For two disjoint sets $X$ and $Y$, we identify $2^{X}\times 2^{Y}$ with $2^{X\cup Y}$. Let $X$ and $Y$ be two sets with $Z=X\cup Y$, and let $M$ and $N$ be two disjoint sets. Given an ${M}$-labelled $2^{Z}$-tree $t=(\tau,\ell_{M})$ and an ${N}$-labelled $2^{Z}$-tree $t^{\prime}=(\tau^{\prime},\ell_{N})$ with the same domain $\tau=\tau^{\prime}$, we define $t\uplus t^{\prime}:=(\tau,\ell^{\prime})$, where for every $u\in\tau$, $\ell^{\prime}(u)=\ell_{M}(u)\cup\ell_{N}(u)$. Now, given a complete ${M}$-labelled $2^{X}$-tree $t=((2^{X})^{*},\ell_{M})$ and a complete ${N}$-labelled $2^{Y}$-tree $t^{\prime}=((2^{Y})^{*},\ell_{N})$, we define $t\oplus t^{\prime}:=t\\!\uparrow^{2^{Z\setminus X}}\uplus\,t^{\prime}\\!\uparrow^{2^{Z\setminus Y}}$.

CL Syntax.

Let $\mathcal{C}$ be a set of _coordination variables_ , and let $\mathcal{S}$ be a set of _strategy variables_ disjoint from $\mathcal{C}$.
The syntax of CL is given by the following grammar: $\varphi::=x\mid\neg\varphi\mid\varphi\vee\varphi\mid{\bf X}\varphi\mid\varphi{\bf U}\varphi\mid\Finv C\exists s.\,\varphi$ where $x\in\mathcal{C}\cup\mathcal{S}$, $C\subseteq\mathcal{C}$ and $s\in\mathcal{S}$, and with the restriction that each coordination variable appears in at most one _subtree quantifier_ $\Finv C\exists s.\,$, and similarly for strategy variables. The notion of free and bound (coordination or strategy) variables is as usual. The set of free coordination variables in $\varphi$ is noted $\mathcal{F}_{\varphi}$. A bound coordination variable $c$ is _visible_ to a strategy variable $s$ if $s$ is in the scope of the quantifier that introduces $c$, and $\textit{Scope}_{\varphi}(s)$ is the union of the set of bound coordination variables visible to $s$ and the set of free coordination variables (note that this union is disjoint). We will see, in the semantics, that the meaning of a bound strategy variable $s$ is a strategy $f_{s}:(2^{\textit{Scope}_{\varphi}(s)})^{*}\to 2^{\\{s\\}}$. Free strategy variables are called _atomic propositions_ , and we denote the set of atomic propositions in $\varphi$ by $\textnormal{AP}_{\varphi}$. CL Semantics. A CL formula $\varphi$ is evaluated on a complete $\textnormal{AP}_{\varphi}$-labelled $2^{\mathcal{F}_{\varphi}}$-tree $t$. An $(\textnormal{AP}_{\varphi},2^{\mathcal{F}_{\varphi}})$-tree $t=(\tau,\ell)$ satisfies a CL formula $\varphi$ if for every path $\lambda$ that starts in the root we have $t,\lambda,0\models\varphi$, where the satisfaction of a formula at position $i\geq 0$ of a path $\lambda$ is defined inductively as follows: $\displaystyle t,\lambda,i\models$ $\displaystyle\,p$ if $\displaystyle\quad p\in\ell(\lambda_{i})$ $\displaystyle t,\lambda,i\models$ $\displaystyle\,\neg\varphi^{\prime}$ if $\displaystyle\quad t,\lambda,i\not\models\varphi^{\prime}$ $\displaystyle t,\lambda,i\models$ $\displaystyle\,\varphi_{1}\vee\varphi_{2}$ if $\displaystyle\quad t,\lambda,i\models\varphi_{1}\mbox{ or }t,\lambda,i\models\varphi_{2}$ $\displaystyle t,\lambda,i\models$ $\displaystyle\,{\bf X}\varphi^{\prime}$ if $\displaystyle\quad t,\lambda,i+1\models\varphi^{\prime}$ $\displaystyle t,\lambda,i\models$ $\displaystyle\,\varphi_{1}{\bf U}\varphi_{2}$ if $\displaystyle\quad\exists\,j\geq i\mbox{ s.t. }t,\lambda,j\models\varphi_{2}\text{ and }\forall k\text{ s.t. }i\leq k<j,\;t,\lambda,k\models\varphi_{1}$ $\displaystyle t,\lambda,i\models$ $\displaystyle\,\Finv C\exists s.\,\varphi^{\prime}\quad$ if $\displaystyle\quad\exists\,f:(2^{\textit{Scope}_{\varphi}(s)})^{*}\to 2^{\\{s\\}}\mbox{ s.t. }t_{\lambda_{i}}\oplus((2^{\textit{Scope}_{\varphi}(s)})^{*},f)\models\varphi^{\prime},$ where $t_{\lambda_{i}}$ is the subtree of $t$ rooted in $\lambda_{i}$. First, observe that in the last inductive case, $t_{\lambda_{i}}$ being a $2^{\mathcal{F}_{\varphi}}$-tree, $t_{\lambda_{i}}\oplus((2^{\textit{Scope}_{\varphi}(s)})^{*},f)$ is a $2^{\mathcal{F}_{\varphi}\cup\textit{Scope}_{\varphi}(s)}$-tree. By definition, $\textit{Scope}_{\varphi}(s)=\mathcal{F}_{\varphi}\cup C=\mathcal{F}_{\varphi^{\prime}}$. It follows that $\mathcal{F}_{\varphi}\cup\textit{Scope}_{\varphi}(s)=\textit{Scope}_{\varphi}(s)=\mathcal{F}_{\varphi^{\prime}}$, hence $\varphi^{\prime}$ is indeed evaluated on a $\mathcal{F}_{\varphi^{\prime}}$-tree. ###### Remark 5. Note that all strategies observe the value of all atomic propositions. 
Formally, for every CL-formula $\varphi$ of the form $\varphi=\Finv C_{1}\exists s_{1}.\,\ldots\Finv C_{i}\exists s_{i}.\,\varphi^{\prime}$ evaluated on a $2^{\mathcal{F}_{\varphi}}$-tree $t=(\tau,\ell)$, $\varphi^{\prime}$ is evaluated on a $2^{\mathcal{F}_{\varphi}\cup C_{1}\cup\ldots\cup C_{i}}$-tree $t^{\prime}=(\tau^{\prime},\ell^{\prime})$ such that for every $p\in\textnormal{AP}_{\varphi}$, for every pair of nodes $u,u^{\prime}\in t^{\prime}$ such that $u\\!\downarrow_{2^{\mathcal{F}_{\varphi}}}=u^{\prime}\\!\downarrow_{2^{\mathcal{F}_{\varphi}}}$, it holds that $p\in\ell^{\prime}(u)$ iff $p\in\ell^{\prime}(u^{\prime})$. Thus, in CL one cannot directly capture strategic problems where atomic propositions are not observable to all players. The input to the model-checking problem for CL consists of a CL formula $\varphi$ and a finite representation of a $(\textnormal{AP}_{\varphi},2^{\mathcal{F}_{\varphi}})$-tree $t$. The standard assumption is that $t$ is a regular tree, i.e., the unfolding of a finite structure. Precisely, a _finite representation_ of a $(\textnormal{AP}_{\varphi},2^{\mathcal{F}_{\varphi}})$-tree $t=(\tau,\ell^{\prime})$ is a structure $\mathcal{S}=(S,R,\ell,s_{\iota})$ such that

* • $S=2^{\mathcal{F}_{\varphi}}$,

* • $R=S\times S$,

* • $\ell:S\to 2^{\textnormal{AP}_{\varphi}}$,

* • $s_{\iota}\in S$,

and $t=t_{\mathcal{S}}$ is the unfolding of $\mathcal{S}$. Thus, an _instance_ of the model-checking problem for CL is a pair $(\mathcal{S},\Phi)$ where $\mathcal{S}=(S,R,s_{{\iota}},\ell)$ is a finite representation of an $(\textnormal{AP}_{\Phi},2^{\mathcal{F}_{\Phi}})$-tree and $\Phi$ is a CL formula (over variables $\mathcal{S}\cup\mathcal{C}$). The _model-checking problem for CL_ is the following decision problem: given an instance $(\mathcal{S},\Phi)$, return ‘Yes’ if $t_{\mathcal{S}}\models\Phi$ and ‘No’ otherwise. We now describe a natural translation of CL-instances to $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances. This translation:

1. (1) reduces the model-checking problem of CL to that of the hierarchical fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$.

2. (2) shows that CL only produces instances in which all atoms are uniform with regard to all observations, i.e., instances $(\mathcal{G},\Phi)$ such that for every $p\in\textnormal{AP}$ and $o\in\textnormal{Obs}$, $v\sim_{o}v^{\prime}$ implies $p\in\ell(v)\leftrightarrow p\in\ell(v^{\prime})$.

We will present the translation in two steps: first from CL-instances into $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$-instances, and then from $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$-instances to $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances such that $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$-instances translate to hierarchical $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances.

#### 6.2.1. Translating CL to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$

Let $(\mathcal{S},\Phi)$ be an instance of the model-checking problem for CL, where $\mathcal{S}=(S,R,\ell,s_{{\iota}})$. We will construct a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$-instance $(\widetilde{\mathcal{S}},\widetilde{\Phi})$ such that $\mathcal{S}\models\Phi$ iff $\widetilde{\mathcal{S}}\models\widetilde{\Phi}$.
Let $\widetilde{\textnormal{AP}}$ be the set of all strategy variables occurring in $\Phi$, let $\mathcal{C}(\Phi)$ be the set of coordination variables that appear in $\Phi$, and assume, w.l.o.g., that $\mathcal{C}(\Phi)=[n]$ for some $n\in\mathbb{N}$. Let $\textit{hidden}(\Phi):=\mathcal{C}(\Phi)\setminus\mathcal{F}_{\Phi}$. First, we define the CKS $\widetilde{\mathcal{S}}$ over $\widetilde{\textnormal{AP}}$: the idea is to add to the structure $\mathcal{S}$ the local states corresponding to coordination variables that are not seen by all the strategies. Formally, $\widetilde{\mathcal{S}}:=(\widetilde{S},\widetilde{R},\widetilde{s_{{\iota}}},\widetilde{\ell})$ where

* • $\widetilde{S}=\prod_{c\in\mathcal{C}(\Phi)}L_{c}$ where $L_{c}=\\{c_{0},c_{1}\\}$,

* • $\widetilde{R}=\widetilde{S}\times\widetilde{S}$,

* • for every $s\in\widetilde{S}$, $\widetilde{\ell}(s)=\ell(s\\!\downarrow_{\mathcal{F}_{\Phi}})$, and

* • $\widetilde{s_{{\iota}}}\in\widetilde{S}$ is any state $s$ such that $s\\!\downarrow_{\mathcal{F}_{\Phi}}=s_{{\iota}}$.

Second, we define concrete observations corresponding to strategy variables in $\Phi$. As explained in (Finkbeiner and Schewe, 2010), and as reflected in the semantics of CL, the intuition is that a strategy variable $s$ in formula $\Phi$ observes coordination variables $\textit{Scope}_{\Phi}(s)$. Therefore, we simply define, for each strategy variable $s$ in $\Phi$, the concrete observation $\textnormal{{o}}_{s}:=\textit{Scope}_{\Phi}(s)$. Finally, we define the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\widetilde{\Phi}$. This is done by induction on $\Phi$ as follows (recall that we take for atomic propositions in $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ the set of all strategy variables in $\Phi$):

$\displaystyle\widetilde{x}$ $\displaystyle:=x$ $\displaystyle\widetilde{\neg\varphi}$ $\displaystyle:=\neg\widetilde{\varphi}$ $\displaystyle\widetilde{\varphi_{1}\vee\varphi_{2}}$ $\displaystyle:=\widetilde{\varphi_{1}}\vee\widetilde{\varphi_{2}}$ $\displaystyle\widetilde{{\bf X}\varphi}$ $\displaystyle:={\bf X}\,\widetilde{\varphi}$ $\displaystyle\widetilde{\varphi_{1}{\bf U}\varphi_{2}}$ $\displaystyle:=\widetilde{\varphi_{1}}\,{\bf U}\,\widetilde{\varphi_{2}}$ $\displaystyle\widetilde{\Finv C\exists s.\,\varphi}$ $\displaystyle:=\exists^{\textnormal{{o}}_{s}}s.\,{\bf A}\widetilde{\varphi}$

In the last case, note that $C\subseteq\textnormal{{o}}_{s}=\textit{Scope}_{\Phi}(s)$. Note that $\widetilde{\Phi}$ is a hierarchical $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$-formula. Also, one can easily check that the following holds:

###### Lemma 6.4.

$t_{\mathcal{S}}\models\Phi\quad\mbox{iff}\quad t_{\widetilde{\mathcal{S}}}\models{\bf A}\widetilde{\Phi}$.

Importantly, we notice that ${\bf A}\widetilde{\Phi}\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$, and that:

###### Lemma 6.5.

For every $x\in\textnormal{AP}_{\Phi}$ and every $s$ quantified in $\Phi$, $t_{\widetilde{\mathcal{S}}}$ is $\textnormal{{o}}_{s}$-uniform in $x$.

#### 6.2.2. Translation from $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ to $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$

We now present a translation of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$-instances to $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances.
It is a simple adaptation of the reduction from the model-checking problem for $\textnormal{{QCTL}}^{*}$ to the model-checking problem for $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ presented in (Laroussinie and Markey, 2015). Let $(\mathcal{S},\Phi)$ be an instance of the model-checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, where $\mathcal{S}=(S,R,\ell,s_{{\iota}})$ and $S\subseteq\prod_{i\in[n]}L_{i}$. We assume, without loss of generality, that every atomic proposition is quantified at most once, and that if it appears quantified it does not appear free. Also, let ${\textnormal{AP}_{\exists}}(\Phi)=\\{p_{1},\ldots,p_{k}\\}$ be the set of atomic propositions quantified in $\Phi$, and for $i\in[k]$, let $\textnormal{{o}}_{i}$ be the concrete observation associated to the quantifier on $p_{i}$. We build the $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}^{\mathcal{S}}:=(\textnormal{Ac},V,E,\ell^{\prime},v_{\iota},\mathcal{O})$ over agents $\textnormal{Ag}:=\\{a_{0},a_{1},\ldots,a_{k}\\}$, observations $\textnormal{Obs}:=\\{o_{0},o_{1},\ldots,o_{k}\\}$ and atomic propositions $\textnormal{AP}:={\textnormal{AP}_{\exists}}(\Phi)\cup\\{p_{S}\\}$, where $p_{S}$ is a fresh atomic proposition. Intuitively, agent $a_{0}$ is in charge of choosing transitions in $\mathcal{S}$, while agent $a_{i}$ for $i\geq 1$ is in charge of choosing the valuation for $p_{i}\in{\textnormal{AP}_{\exists}}(\Phi)$. To this aim, we let $V:=\begin{array}[]{l}\\{v_{s}\mid s\in S\\}\;\cup\\\ \\{v_{s,i}\mid s\in S\mbox{ and }i\in[k]\\}\;\cup\\\ \\{v_{p_{i}}\mid 0\leq i\leq k\\}\;\cup\\\ \\{v_{\perp}\\}\end{array}$ and $\textnormal{Ac}:=\\{c^{s}\mid s\in S\\}\cup\\{c^{i}\mid 0\leq i\leq k\\}.$ In positions of the form $v_{s}$ with $s\in S$, transitions are determined by the action of agent $a_{0}$. First, she can choose to simulate a transition in $\mathcal{S}$: for every joint action $\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$ such that $\bm{c}_{0}=c^{s^{\prime}}$, $E(v_{s},\bm{c}):=\begin{cases}v_{s^{\prime}}&\text{if }R(s,s^{\prime})\\\ v_{\perp}&\text{otherwise}.\end{cases}$ She can also choose to move to a position in which agent $a_{i}$ will choose the valuation for $p_{i}$ in the current node: for every joint action $\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$ such that $\bm{c}_{0}=c^{i}$, $E(v_{s},\bm{c}):=\begin{cases}v_{s,i}&\text{if }i\neq 0\\\ v_{\perp}&\text{otherwise}.\end{cases}$ Next, in a position of the form $v_{s,i}$, agent $a_{i}$ determines the transition, which codes the labelling of $p_{i}$ in the current node: choosing $c^{i}$ means that $p_{i}$ holds in the current node, choosing any other action codes that $p_{i}$ does not hold. Formally, for every joint action $\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$, $E(v_{s,i},\bm{c}):=\begin{cases}v_{p_{i}}&\text{if }\bm{c}_{i}=c^{i}\\\ v_{\perp}&\text{otherwise}.\end{cases}$ Positions of the form $v_{p_{i}}$ and $v_{\perp}$ are sink positions. 
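Operationally, the case analysis defining $E$ can be read as a small dispatch function. The following sketch (illustrative Python; the tagged-tuple encoding of positions and actions is ours, not the paper's) mirrors the cases above.

```python
# Illustrative sketch of the transition function E of the game G^S.
# Positions and actions are encoded as tagged tuples: ('state', s2)
# stands for the action c^{s'}, ('idx', i) for the action c^i.

def E(position, joint_action, R):
    """joint_action maps each agent index j in {0,...,k} to an action."""
    c0 = joint_action[0]                      # action of agent a_0
    tag, payload = position[0], position[1:]

    if tag == "v_s":                          # position v_s with s in S
        (s,) = payload
        if c0[0] == "state":                  # a_0 simulates a transition of S
            s2 = c0[1]
            return ("v_s", s2) if (s, s2) in R else ("v_bot",)
        i = c0[1]                             # a_0 hands over to agent a_i
        return ("v_si", s, i) if i != 0 else ("v_bot",)

    if tag == "v_si":                         # position v_{s,i}: a_i codes p_i
        s, i = payload
        return ("v_p", i) if joint_action[i] == ("idx", i) else ("v_bot",)

    return position                           # v_{p_i} and v_bot are sinks

# Example: from v_{s0}, agent a_0 asks agent a_1 to label p_1.
R = {("s0", "s1")}
print(E(("v_s", "s0"), {0: ("idx", 1), 1: ("idx", 1)}, R))  # ('v_si', 's0', 1)
```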
The labelling function $\ell^{\prime}$ is defined as follows: $\ell^{\prime}(v):=\begin{cases}\ell(s)\cup\\{p_{S}\\}&\mbox{if }v=v_{s}\mbox{ for some }s\in S\\\ \emptyset&\mbox{if }v\in\\{v_{s,i}\mid s\in S,i\in[k]\\}\cup\,\\{v_{p_{0}},v_{\perp}\\}\\\ \\{p_{i}\\}&\mbox{if }v=v_{p_{i}}\text{ with }i\in[k]\end{cases}$ Finally, we let $v_{{\iota}}:=v_{s_{{\iota}}}$ and we define the observation interpretation as follows: $\mathcal{O}(o_{0}):=\\{(v,v)\mid v\in V\\},$ meaning that agent $a_{0}$ has perfect information, and for $i\in[k]$, $\mathcal{O}(o_{i})$ is the smallest reflexive relation such that $\mathcal{O}(o_{i})\supseteq\bigcup_{s,s^{\prime}\in S}\\{(v_{s},v_{s^{\prime}}),(v_{s,i},v_{s^{\prime},i})\mid s\approx_{\textnormal{{o}}_{i}}s^{\prime}\\}.$ We explain the latter definition. First, observe that for every finite play $\rho$ in $\mathcal{G}^{\mathcal{S}}$ that stays in $V_{S}=\\{v_{s}\mid s\in S\\}$, writing $\rho=v_{s_{0}}\ldots v_{s_{n}}$, one can associate a finite path $\lambda_{\rho}=s_{0}\ldots s_{n}$ in $\mathcal{S}$. This mapping actually defines a bijection between the set of finite paths in $\mathcal{S}$ that start in $s_{{\iota}}$ and the set of finite plays in $\mathcal{G}^{\mathcal{S}}$ that remain in $V_{S}$. Now, according to the definition of the transition function, a strategy $\sigma_{i}$ for agent $a_{i}$ with $i\in[k]$ is only relevant on finite plays of the form $\rho=\rho^{\prime}\cdot v_{s,i}$, where $\rho^{\prime}\in V_{S}^{*}$, and $\sigma_{i}(\rho)$ is meant to determine whether $p_{i}$ holds in $\lambda_{\rho^{\prime}}$. If $\sigma_{i}$ is $o_{i}$-uniform, by definition of $\mathcal{O}(o_{i})$, it determines an $\textnormal{{o}}_{i}$-uniform labelling for $p_{i}$ in $t_{\mathcal{S}}$. Reciprocally, an $\textnormal{{o}}_{i}$-uniform labelling for $p_{i}$ in $t_{\mathcal{S}}$ induces an $\mathcal{O}(o_{i})$-strategy for agent $a_{i}$. It remains to transform $\Phi$ into an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-formula. We define the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula $\widetilde{\Phi}$ by induction on $\Phi$ as follows: $\displaystyle\widetilde{p}$ $\displaystyle:=\begin{cases}{\bf E}{\bf X}{\bf X}p&\text{if }p=p_{i}\\\ p&\text{otherwise}\end{cases}$ $\displaystyle\widetilde{\neg\varphi}$ $\displaystyle:=\neg\widetilde{\varphi}$ $\displaystyle\widetilde{\varphi_{1}\vee\varphi_{2}}$ $\displaystyle:=\widetilde{\varphi_{1}}\vee\widetilde{\varphi_{2}}$ $\displaystyle\widetilde{{\bf E}\psi}$ $\displaystyle:={\bf E}({\bf G}p_{S}\wedge\widetilde{\psi})$ $\displaystyle\widetilde{\exists^{\textnormal{{o}}_{i}}p_{i}.\,\varphi}$ $\displaystyle:=\langle\\!\langle x_{i}\rangle\\!\rangle^{o_{i}}(a_{i},x_{i})\widetilde{\varphi}.$ The cases for path formulas are obtained by distributing over the operators. Observe that agent $a_{0}$ is never bound to a strategy. In the case for atomic propositions, the existential quantification on outcomes thus lets agent $a_{0}$ choose to move to a position where agent $a_{i}$ fixes the value for $p_{i}$ according to her strategy, fixed by the strategy quantifier in the translation of formulas of the form $\exists^{\textnormal{{o}}_{i}}p_{i}.\,\varphi$. In the translation of formulas of the form ${\bf E}\psi$, the existential quantification on outcomes lets agent $a_{0}$ choose a path in the original CKS $\mathcal{S}$. We have the following: ###### Lemma 6.6. $\mathcal{S}\models\Phi\quad\text{if and only if}\quad\mathcal{G}^{\mathcal{S}}\models\widetilde{\Phi}$.
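The translation $\Phi\mapsto\widetilde{\Phi}$ above is purely syntax-directed, as the following sketch illustrates (illustrative Python over a toy tuple-encoded AST; only the state-formula clauses are shown, since the path cases simply distribute over the operators, and all names are ours).

```python
# Minimal sketch of the syntax-directed translation Phi -> Phi~ above.
# Formulas are nested tuples; `quantified` is the set of quantified
# atomic propositions p_i.  The encoding is illustrative only.

def translate(phi, quantified):
    op = phi[0]
    if op == "atom":
        p = phi[1]
        # a quantified atom is read off via two extra transitions: E X X p
        return ("E", ("X", ("X", phi))) if p in quantified else phi
    if op == "not":
        return ("not", translate(phi[1], quantified))
    if op == "or":
        return ("or", translate(phi[1], quantified),
                      translate(phi[2], quantified))
    if op == "E":                       # E psi  ~>  E (G p_S and psi~)
        return ("E", ("and", ("G", ("atom", "p_S")),
                             translate(phi[1], quantified)))
    if op == "exists":                  # exists^{o_i} p_i. phi
        i, body = phi[1], phi[2]
        # ~> <<x_i>>^{o_i} (a_i, x_i) phi~ , encoded as strat/bind nodes
        return ("strat", i, ("bind", i, translate(body, quantified)))
    raise ValueError(f"unknown operator {op!r}")

phi = ("exists", 1, ("E", ("atom", "p1")))
print(translate(phi, quantified={"p1"}))
```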
We observe that if $\Phi$ is hierarchical, then $(\widetilde{\Phi},\mathcal{G}^{\mathcal{S}})$ is a hierarchical instance, and: ###### Lemma 6.7. For every $p\in\textnormal{AP}_{f}(\Phi)$ and for every $i\in[k]$, if $t_{\mathcal{S}}$ is $\textnormal{{o}}_{i}$-uniform in $p$ then $v\sim_{o_{i}}v^{\prime}$ implies that $p\in\ell(v)$ iff $p\in\ell(v^{\prime})$. Combining Lemma 6.4 with Lemma 6.6, we get a reduction from the model-checking problem for CL to that for the hierarchical fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, and Lemma 6.5 together with Lemma 6.7 shows that in the models produced by this reduction, all atomic propositions are observable to all players. This implies that in CL one cannot reason about strategic problems with unobservable objectives. As a result, it does not fully capture classic distributed synthesis (Pnueli and Rosner, 1990; Kupferman and Vardi, 2001), where the specification can talk about all variables, hidden and visible. It also shows that CL does not capture in a natural way ATL with imperfect information as defined in (Alur et al., 2002, Section 7.1), where the imperfect information of agents is modelled by defining which atomic propositions they can observe. This, as well as unobservable objectives, can be naturally modelled in $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. ## 7\. Applications In this section we apply Theorem 2.9 to decide the existence of Nash equilibria in hierarchical games of imperfect information. We then use a similar approach to obtain decidability results for the rational synthesis problem. In this section, for a tuple of agents $\bm{a}=(a_{i})_{i\in[m]}$ and tuple of strategy variables $\bm{x}=(x_{i})_{i\in[m]}$, we let $(\bm{a},\bm{x})$ be a macro for $(a_{1},x_{1})\ldots(a_{m},x_{m})$, and similarly for the unbinding operator $(\bm{a},\operatorname{?})$ which stands for $(a_{1},\operatorname{?})\ldots(a_{m},\operatorname{?})$. ### 7.1. Existence of Nash Equilibria in games with hierarchical observations A Nash equilibrium in a game is a tuple of strategies such that no player has an incentive to deviate. Let $\textnormal{Ag}=\\{a_{i}:i\in[n]\\}$. Assuming that agent $a_{i}$ has observation $o_{i}$ and LTL goal $\psi_{i}$, the following $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula expresses the existence of a Nash equilibrium: $\displaystyle\Phi_{\textsc{NE}}:=$ $\displaystyle\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\dots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}(\bm{a},\bm{x})\bigwedge_{i\in[n]}\Big{[}\Big{(}\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{i}}(a_{i},y_{i})\,{\bf A}\psi_{i}\Big{)}\to{\bf A}\psi_{i}\Big{]}$ where $\bm{a}=(a_{i})_{i\in[n]}$ and $\bm{x}=(x_{i})_{i\in[n]}$. Nash equilibria do not always exist when one restricts attention to pure strategies, as we do in this work. This is already the case in finite games, and by extension also in the infinite concurrent games played on graphs that we consider. This motivates the study of the Nash equilibria existence problem in such games. In the perfect information case, the problem has been solved for $\omega$-regular objectives, as well as more complex semi-quantitative objectives (Bouyer et al., 2015). When moving to imperfect information, for two players the problem is decidable for LTL objectives (Gutierrez et al., 2018) and parity objectives (Filiot et al., 2018). However, as for distributed synthesis, the existence of Nash equilibria becomes undecidable for more than two players.
This result is proved in (Bouyer, 2018) for constrained Nash equilibria (when one specifies for each player whether her objective is satisfied or not), and in (Gutierrez et al., 2018) for unconstrained equilibria. In both cases the proof proceeds by reduction from the distributed synthesis problem (Peterson et al., 2001; Pnueli and Rosner, 1990). The only known decidable cases for more than two players assume that all players receive the same information. In (Bouyer, 2018) the problem is solved on games where players observe the evolution of the game via _public signals_ and objectives are given by visible parity conditions or mean-payoff functions. In (Belardinelli et al., 2017a), an epistemic extension of strategy logic is used to decide the existence of Nash equilibria on games with _broadcast actions_ for objectives given as formulas from epistemic temporal logic. A stronger notion of Nash equilibria, called _locally consistent equilibria_, is studied in (Ramanujam and Simon, 2010). In a locally consistent equilibrium, each player’s strategy has to be a best response not only to other players’ strategies in the equilibrium, but also to all strategies that are indistinguishable from those in the equilibrium. It is proved in (Ramanujam and Simon, 2010) that the existence of such equilibria is decidable on a model of games close in spirit to those with public signals studied in (Bouyer, 2018). Here we show that the existence of Nash equilibria is decidable for $n$ players when observations are hierarchical and objectives are given as LTL formulas. Note that this result is orthogonal to those described above, which all allow, in one way or another, some non-hierarchical information: in (Bouyer, 2018) players know their own actions in addition to the public signals, in (Ramanujam and Simon, 2010) they know their private local state, and in (Belardinelli et al., 2017a) they can have incomparable initial knowledge of the situation. ###### Definition 7.1. A $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ presents _hierarchical observation_ (Berwanger et al., 2018) if the “finer-than” relation is a total ordering, i.e., if for all $o,o^{\prime}\in\textnormal{Obs}$, either $\mathcal{O}(o)\subseteq\mathcal{O}(o^{\prime})$ or $\mathcal{O}(o^{\prime})\subseteq\mathcal{O}(o)$. Let $\mathcal{G}$ be a $\textrm{CGS}_{\textnormal{ii}}$ with hierarchical observation; since all agents have symmetric roles in the problem considered, assume without loss of generality that $\mathcal{O}(o_{n})\subseteq\ldots\subseteq\mathcal{O}(o_{1})$. Because of the nested strategy quantifiers $\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{i}}$, the instance $(\mathcal{G},\Phi_{\textsc{NE}})$ is _not_ hierarchical even if $\mathcal{G}$ yields hierarchical observation (unless $\mathcal{O}(o_{i})=\mathcal{O}(o_{j})$ for all $i,j\in[n]$). However, considering the special observation symbol $o_{p}$ that is always interpreted as the identity relation (and thus represents perfect observation), and letting $\displaystyle\Phi^{\prime}:=$ $\displaystyle\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\dots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}(\bm{a},\bm{x})\bigwedge_{i\in[n]}\Big{[}\Big{(}\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf E}\psi_{i}\Big{)}\to{\bf E}\psi_{i}\Big{]},$ we have that $\Phi^{\prime}$ forms a hierarchical instance with any $\textrm{CGS}_{\textnormal{ii}}$ that presents hierarchical observation.
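Whether a finite $\textrm{CGS}_{\textnormal{ii}}$ presents hierarchical observation is straightforward to check: Definition 7.1 asks that the observation relations be totally ordered by inclusion. A minimal sketch (illustrative Python, with relations given as finite sets of pairs of positions):

```python
# Check Definition 7.1 on a finite CGS_ii: the observation relations
# must be totally ordered by inclusion ("finer-than" is a total order).

def presents_hierarchical_observation(relations):
    rels = list(relations)
    return all(a <= b or b <= a for a in rels for b in rels)

# Example with three positions u, v, w: o2 refines o1, so the pair is
# hierarchical; adding an incomparable o3 breaks the hierarchy.
ident = {(x, x) for x in "uvw"}
o1 = ident | {("u", "v"), ("v", "u"), ("v", "w"), ("w", "v"),
              ("u", "w"), ("w", "u")}
o2 = ident | {("u", "v"), ("v", "u")}
o3 = ident | {("v", "w"), ("w", "v")}

print(presents_hierarchical_observation([o1, o2]))      # True
print(presents_hierarchical_observation([o1, o2, o3]))  # False
```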
Besides, we can prove that for deterministic strategies, $\Phi^{\prime}$ is equivalent to $\Phi_{\textsc{NE}}$: ###### Lemma 7.2. If we consider deterministic strategies, then $\Phi_{\textsc{NE}}\equiv\Phi^{\prime}$. ###### Proof. Concerning the universal versus existential quantification on outcomes, it is enough to observe that assigning a deterministic strategy to each agent determines a unique outcome. Next, to change each inner $o_{i}$ for $o_{p}$, we exploit the fact that in a one-player game of partial observation (such a game occurs when all but one player have fixed their strategies), the player has a strategy enforcing some goal iff she has a uniform strategy enforcing that goal. To see this, it is enough to establish that for every $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ and position $v$, $\mathcal{G},\chi,v\models\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf E}\psi_{i}\leftrightarrow\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{i}}(a_{i},y_{i})\,{\bf E}\psi_{i},$ for every $i\in[n]$ and every assignment $\chi$ such that $\chi(a_{j})$ is defined for all $j$. To this end, fix $i$ and $\chi$. The right-to-left implication is immediate (since $o_{p}$ is finer than $o_{i}$). For the converse, let $\sigma$ be an $o_{p}$-strategy (i.e., a perfect-information strategy) such that $\mathcal{G},\chi^{\prime},v_{{\iota}}\models\psi_{i}$, where $\chi^{\prime}=\chi[y_{i}\mapsto\sigma,a_{i}\mapsto\sigma]$. Because we consider deterministic strategies and $\chi^{\prime}$ assigns a strategy to each agent, it defines a unique outcome $\pi$ from the initial position, i.e., $\textnormal{Out}(\chi^{\prime},v_{\iota})=\\{\pi\\}$. We construct an $o_{i}$-strategy $\sigma^{\prime}$ such that if $a_{i}$ uses it instead of $\sigma$, we obtain the same outcome $\pi$, i.e., $\textnormal{Out}(\chi^{\prime\prime},v_{\iota})=\\{\pi\\}$, where $\chi^{\prime\prime}=\chi[y_{i}\mapsto\sigma^{\prime},a_{i}\mapsto\sigma^{\prime}]$. This can be done as follows: if $\rho\sim_{o_{i}}\pi_{\leq|\rho|-1}$ then define $\sigma^{\prime}(\rho):=\sigma(\pi_{\leq|\rho|-1})$, and otherwise let $\sigma^{\prime}(\rho):=c$ for some fixed action $c\in\textnormal{Ac}$. It is easy to see that $\sigma^{\prime}$ is an $o_{i}$-strategy and that $\chi^{\prime\prime}$ produces the same outcome as $\chi^{\prime}$ from $v_{\iota}$. ∎ ###### Corollary 7.3. If we consider deterministic strategies, then the existence of Nash equilibria in games with hierarchical observation and $k$ different observations is in $(k+1)$-Exptime. ###### Proof. Deciding the existence of a Nash equilibrium in a $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ amounts to model-checking formula $\Phi_{\textsc{NE}}$ in $\mathcal{G}$, which by Lemma 7.2 is equivalent to model-checking $\Phi^{\prime}$ in $\mathcal{G}$ if we restrict to deterministic strategies. Because $\Phi^{\prime}$ forms hierarchical instances with games that yield hierarchical observation, by Theorem 2.9 we can model check it on such games.
Now because each $\psi_{i}$ is an LTL formula, we have that $\displaystyle\mbox{sd}\left(\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf E}\psi_{i}\right)$ $\displaystyle=(0,\mbox{nd}),$ $\displaystyle\mbox{sd}\left(\bigwedge_{i\in[n]}\Big{[}\Big{(}\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf E}\psi_{i}\Big{)}\to{\bf E}\psi_{i}\Big{]}\right)$ $\displaystyle=(0,\mbox{alt}),$ and finally we obtain that $\mbox{sd}(\Phi^{\prime})=(k,\mbox{nd})$, where $k$ is the number of different observations in $\mathcal{G}$, i.e., $k=|\\{\mathcal{O}(o_{1}),\ldots,\mathcal{O}(o_{n})\\}|$. By Proposition 5.2, we can model check $\Phi^{\prime}$ on $\mathcal{G}$ in time $(k+1)$-exponential, which concludes. ∎ We now show that, using the same trick, our main result can be applied to solve a more general problem called _rational synthesis_. ### 7.2. Rational distributed synthesis in games with hierarchical observations In classic synthesis, the environment is considered monolithic and “hostile”, in the sense that the system to be synthesised should be able to deal with all possible behaviours of the environment, even the most undesirable ones. This is a very strong requirement that cannot always be met. When the environment can be considered rational, and its objective is known, it is reasonable to relax this requirement by asking that the system to be synthesised behave well against the _rational_ behaviours of the environment. This problem is known as the _rational synthesis_ problem (Fisman et al., 2010; Kupferman et al., 2016; Condurache et al., 2016; Filiot et al., 2018). In the setting considered in the above-mentioned works, the system is seen as an agent $a$ and the environment is composed of several components, say $\\{e_{1},\ldots,e_{m}\\}$, that are assumed to be rational and follow individual objectives. While (Condurache et al., 2016) and (Filiot et al., 2018) consider various types of objectives such as reachability, safety or parity, here we consider LTL objectives as is done in (Fisman et al., 2010; Kupferman et al., 2016): the specification for the system is an LTL formula $\psi_{g}$, and the objective of each component $e_{i}$ of the environment is an LTL formula $\psi_{i}$. However, note that the decidability results we establish would also hold for arbitrary $\omega$-regular objectives. #### 7.2.1. Rational synthesis: state of the art Two variants of the rational synthesis problem have been considered: the _cooperative_ one, in which it is possible to tell the environment how to behave, as long as the suggested behaviour for each component forms an equilibrium, and the _non-cooperative_ one, in which the components of the environment may have any behaviour that forms an equilibrium. The existence of a solution to these problems can be expressed by the formulas $\Phi_{\text{c-RS}}$ and $\Phi_{\text{nc-RS}}$, respectively, defined as follows: $\displaystyle\Phi_{\text{c-RS}}$ $\displaystyle:=\langle\\!\langle x\rangle\\!\rangle^{o_{p}}\langle\\!\langle y_{1}\rangle\\!\rangle^{o_{p}}\ldots\langle\\!\langle y_{m}\rangle\\!\rangle^{o_{p}}(a,x)(\bm{e},\bm{y})\,\varphi_{\gamma}\wedge{\bf A}\psi_{g}$ $\displaystyle\Phi_{\text{nc-RS}}$ $\displaystyle:=\langle\\!\langle x\rangle\\!\rangle^{o_{p}}[\\![y_{1}]\\!]^{o_{p}}\ldots[\\![y_{m}]\\!]^{o_{p}}(a,x)(\bm{e},\bm{y})\,\varphi_{\gamma}\to{\bf A}\psi_{g}$ where $\bm{e}=(e_{i})_{i\in[m]}$, $\bm{y}=(y_{i})_{i\in[m]}$, and $\varphi_{\gamma}$ expresses that $\bm{y}$ forms an equilibrium for the environment.
Also, as in the previous section, $o_{p}$ represents the perfect-information observation. Three different kinds of equilibria are considered in (Kupferman et al., 2016): profiles of dominant strategies, Nash equilibria, and subgame-perfect equilibria. Here we only consider Nash equilibria, because subgames of games with imperfect information should start in situations where all players have perfect information of the state, which we do not know how to express in $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$; and for dominant strategies, the natural formula to express them does not give rise to non-trivial decidable cases in the imperfect-information setting that we introduce later. The rational synthesis problem for Nash equilibria is obtained by replacing $\varphi_{\gamma}$ in the above formula with: $\displaystyle\varphi_{\text{NE}}$ $\displaystyle:=\bigwedge_{i\in[m]}\Big{[}\Big{(}\langle\\!\langle y^{\prime}_{i}\rangle\\!\rangle^{o_{p}}(e_{i},y^{\prime}_{i})\,{\bf A}\psi_{i}\Big{)}\to{\bf A}\psi_{i}\Big{]}$ It is proved in (Kupferman et al., 2016) that these problems are decidable for perfect information. Concerning imperfect information, because the existence of Nash equilibria is undecidable for three players, the problem is undecidable when the environment consists of at least three components (Filiot et al., 2018). Three decidable cases are known: when the environment consists of a single component (Filiot et al., 2018), when actions of all components are public (Belardinelli et al., 2017a), and when only the system has imperfect information while the (finitely many) components of the environment are perfectly informed (Filiot et al., 2018). We now extend the latter result by defining a generalisation of the rational synthesis problem that we call _rational distributed synthesis_, and solving it in the case of hierarchical information. The case where the environment is perfectly informed and the system consists of a single component, solved in (Filiot et al., 2018), is a particular case of our Corollary 7.5 below (we only consider LTL objectives, but our automata construction can be adapted to handle all $\omega$-regular objectives). However, the other decidability result established in (Filiot et al., 2018) does not assume hierarchical information, and thus cannot be derived from the results we now present. #### 7.2.2. Rational distributed synthesis While for perfect information, distributed synthesis amounts to synthesis for a single meta-component which tells each component what to do, in the context of imperfect information it makes sense to consider that the system to be synthesised is composed of various components $\\{a_{1},\ldots,a_{n}\\}$ with different observation power, say $o_{i}$ for component $a_{i}$. We also let $o^{e}_{i}$ be the observation of the environment’s component $e_{i}$, for $i\in[m]$.
We consider the imperfect-information variants of cooperative and non-cooperative rational synthesis defined by the following $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formulas: $\displaystyle\Phi^{\textnormal{\scriptsize ii}}_{\text{c-RS}}$ $\displaystyle:=\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\ldots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}\langle\\!\langle y_{1}\rangle\\!\rangle^{o^{e}_{1}}\ldots\langle\\!\langle y_{m}\rangle\\!\rangle^{o^{e}_{m}}(\bm{a},\bm{x})(\bm{e},\bm{y})\,\varphi_{\gamma}\wedge{\bf A}\psi_{g}$ $\displaystyle\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}$ $\displaystyle:=\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\ldots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}[\\![y_{1}]\\!]^{o^{e}_{1}}\ldots[\\![y_{m}]\\!]^{o^{e}_{m}}(\bm{a},\bm{x})(\bm{e},\bm{y})\,\varphi_{\gamma}\to{\bf A}\psi_{g}$ The formula for Nash equilibrium is adapted as follows: $\displaystyle\varphi^{\textnormal{\scriptsize ii}}_{\text{NE}}$ $\displaystyle:=\bigwedge_{i\in[m]}\Big{[}\Big{(}\langle\\!\langle y^{\prime}_{i}\rangle\\!\rangle^{o^{e}_{i}}(e_{i},y^{\prime}_{i})\,{\bf A}\psi_{i}\Big{)}\to{\bf A}\psi_{i}\Big{]}$ The only difference with the perfect-information case is that we use the observation of the different components of the environment instead of the perfect-information observation. We call the problems expressed by formulas $\Phi^{\textnormal{\scriptsize ii}}_{\text{c-RS}}$ and $\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}$ _cooperative rational distributed synthesis_ and _non-cooperative rational distributed synthesis_, respectively. As in the previous section on the existence of Nash equilibria, one can see that even if there is a total hierarchy on all observations, these formulas do not yield hierarchical instances unless all observations are the same. However, the trick applied in the proof of Corollary 7.3 also applies here, both for Nash equilibria and subgame-perfect equilibria, i.e., we can replace each $o^{e}_{i}$ with $o_{p}$ in $\varphi^{\textnormal{\scriptsize ii}}_{\text{NE}}$ without affecting the semantics of formulas $\Phi^{\textnormal{\scriptsize ii}}_{\text{c-RS}}$ and $\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}$. As a result, when there is a hierarchy on observations $o_{1},\ldots,o_{n},o^{e}_{1},\ldots,o^{e}_{m}$, cooperative rational distributed synthesis is decidable. ###### Corollary 7.4. If we consider deterministic strategies and hierarchical observations, then cooperative rational distributed synthesis is decidable. For the non-cooperative variant, one cannot switch universal quantifications on strategies for the environment with existential quantifications for the system in order to obtain hierarchical instances, as the resulting formula would then capture a different problem. As a consequence, in addition to a hierarchy on observations $o_{1},\ldots,o_{n},o^{e}_{1},\ldots,o^{e}_{m}$, we need to have that the components of the environment observe better than the components of the system or, in other words, that the least informed component of the environment observes better than the best informed component of the system. When this is the case, we say that the environment is _more informed_ than the system. ###### Corollary 7.5. Non-cooperative rational distributed synthesis is decidable for deterministic strategies and hierarchical observations where the environment is more informed than the system.
This result applies for instance when there is hierarchical information amongst the components of the system, and the environment has perfect information. Note that when the system consists of a single component, this corresponds to the second decidability result in (Filiot et al., 2018). As we mentioned in the introduction, considering that the opponent has perfect information is something classic in two-player games with imperfect information, as doing so ensures that the strategy one synthesises is winning no matter how much the opponent observes. In Reif’s words, this amounts to considering the possibility that the opponent may “cheat” and use information that it normally does not have access to (Reif, 1984). The non-cooperative rational synthesis problem is not precisely a two-player game, but it resembles one in the sense that the system as a whole (composed of its various components $a_{1},\ldots,a_{n}$) should win against any “rational” behaviour of the environment as a whole. In this view, considering that the components of the environment have perfect information thus yields a distributed system that is robust to possible leaks of hidden information to the environment. ###### Remark 6. When all components of the environment have perfect information, $\Phi^{\textnormal{\scriptsize ii}}_{\text{c-RS}}$ and $\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}$ already form hierarchical instances with games where there is hierarchical observation amongst the system’s components, and one does not need to resort to the trick used in the proof of Corollary 7.3. A consequence is that in that case, corollaries 7.4 and 7.5 also hold for nondeterministic strategies. ## 8\. Conclusion We introduced $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, a logic for reasoning about strategic behaviour in multi-player games with imperfect information. The syntax specifies the observations with which strategies have to work, and thus allows one to reason about strategic problems in settings where agents can change observation power, for instance by being eventually granted access to previously hidden information. Moreover our logic contains an outcome quantifier and an unbinding operator which simplify the semantics, make it easier to express branching-time properties, allow us to naturally consider nondeterministic strategies, and make the correspondence with $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ tighter, enabling us to derive precise complexity results for the model-checking of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. We isolated the class of hierarchical formula/model pairs $(\Phi,\mathcal{G})$ and proved that for such instances one can decide whether $\mathcal{G}\models\Phi$. The proof reduces (hierarchical) instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to (hierarchical) formulas of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, a low-level logic that we introduced, and that serves as a natural bridge between $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ and automata constructions. We also studied in detail the complexity of the model-checking problems solved in this work. To do so we introduced a new measure on formulas called _simulation depth_. This measure, though being a purely syntactic notion, reflects the complexity of automata constructions required to treat a given formula. 
Since one can alternate quantifiers in $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, our decidability result goes beyond synthesis and can be used to easily obtain the decidability of many strategic problems. In this work we applied it to the problem of existence of Nash equilibria in games with hierarchical observation, and to the imperfect-information generalisations of rational synthesis that we called (cooperative and non-cooperative) _rational distributed synthesis_. Our result has also been used to prove that the existence of admissible strategies in games with hierarchical information is decidable (Brenguier et al., 2017). An interesting direction for future work would be to try and adapt the notion of hierarchical instances to allow for situations in which hierarchies can change along a play, as done in (Berwanger et al., 2018). We would also like to consider alternatives to the synchronous perfect recall setting considered here, such as the classic asynchronous perfect recall setting (Fagin et al., 1995; Puchala, 2010), or the more recent notion of causal knowledge (Genest et al., 2015). Finally, it is often interesting in the presence of imperfect information to introduce epistemic operators to reason explicitly about what agents know. We already generalised the main result of this work to an extension of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ with such operators (Maubert and Murano, 2018); we would like to see if this can be used to reason about subgame-perfect equilibria in games with imperfect information, which do not seem to be easy to characterise in $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, as mentioned in Section 7.2.1. Indeed, in games with imperfect information, the notion of subgame specifies that the initial situation should be known to all players (Selten, 1965), a property that epistemic logics are meant to be able to express. ###### Acknowledgements. We thank anonymous reviewers for their valuable comments on a previous version of this work. This project has received funding from the European Union’s Horizon 2020 research and innovation programme (https://ec.europa.eu/programmes/horizon2020/en) under the Marie Sklodowska-Curie grant agreement No 709188. ## References * Alur et al. (2002) Rajeev Alur, Thomas A. Henzinger, and Orna Kupferman. 2002\. Alternating-time temporal logic. _J. ACM_ 49, 5 (2002), 672–713. https://doi.org/10.1145/585265.585270 * Belardinelli (2014) Francesco Belardinelli. 2014\. Reasoning about Knowledge and Strategies: Epistemic Strategy Logic. In _SR’14_. 27–33. https://doi.org/10.4204/EPTCS.146.4 * Belardinelli (2015) Francesco Belardinelli. 2015\. A Logic of Knowledge and Strategies with Imperfect Information. In _LAMAS’15_. 1–15. * Belardinelli et al. (2017a) Francesco Belardinelli, Alessio Lomuscio, Aniello Murano, and Sasha Rubin. 2017a. Verification of Broadcasting Multi-Agent Systems against an Epistemic Strategy Logic. In _IJCAI’17_. 91–97. https://doi.org/10.24963/ijcai.2017/14 * Belardinelli et al. (2017b) Francesco Belardinelli, Alessio Lomuscio, Aniello Murano, and Sasha Rubin. 2017b. Verification of Multi-agent Systems with Imperfect Information and Public Actions. In _AAMAS’17_. 1268–1276. * Berthon et al. (2017) Raphael Berthon, Bastien Maubert, Aniello Murano, Sasha Rubin, and Moshe Y. Vardi. 2017. Strategy Logic with imperfect information. In _LICS’17_. IEEE, 1–12. https://doi.org/10.1109/LICS.2017.8005136 * Berwanger et al.
(2010) Dietmar Berwanger, Krishnendu Chatterjee, Martin De Wulf, Laurent Doyen, and Thomas A Henzinger. 2010\. Strategy construction for parity games with imperfect information. _Information and computation_ 208, 10 (2010), 1206–1220. * Berwanger et al. (2018) Dietmar Berwanger, Anup Basil Mathew, and Marie van den Bogaard. 2018. Hierarchical information and the synthesis of distributed strategies. _Acta Inf._ 55, 8 (2018), 669–701. https://doi.org/10.1007/s00236-017-0306-5 * Bittner et al. (2012) Benjamin Bittner, Marco Bozzano, Alessandro Cimatti, and Xavier Olive. 2012. Symbolic Synthesis of Observability Requirements for Diagnosability. In _AAAI’12_. * Bouyer (2017) Patricia Bouyer. 2017\. Games on graphs with a public signal monitoring. _arXiv preprint arXiv:1710.07163_ (2017). * Bouyer (2018) Patricia Bouyer. 2018\. Games on Graphs with a Public Signal Monitoring. In _FOSSACS’18_. Springer, 530–547. https://doi.org/10.1007/978-3-319-89366-2_29 * Bouyer et al. (2015) Patricia Bouyer, Romain Brenguier, Nicolas Markey, and Michael Ummels. 2015. Pure Nash Equilibria in Concurrent Deterministic Games. _Logical Methods in Computer Science_ 11, 2 (2015). https://doi.org/10.2168/LMCS-11(2:9)2015 * Bouyer et al. (2017) Patricia Bouyer, Nicolas Markey, and Steen Vester. 2017\. Nash equilibria in symmetric graph games with partial observation. _Information and Computation_ 254 (2017), 238–258. * Brenguier et al. (2017) Romain Brenguier, Arno Pauly, Jean-François Raskin, and Ocan Sankur. 2017. Admissibility in Games with Imperfect Information. In _CONCUR’17_ , Vol. 85. * Bulling and Jamroga (2014) Nils Bulling and Wojciech Jamroga. 2014. Comparing variants of strategic ability: how uncertainty and memory influence general properties of games. _AAMAS’14_ 28, 3 (2014), 474–518. * Chatterjee and Doyen (2010) Krishnendu Chatterjee and Laurent Doyen. 2010. The complexity of partial-observation parity games. In _International Conference on Logic for Programming Artificial Intelligence and Reasoning_. Springer, 1–14. * Chatterjee and Doyen (2014a) Krishnendu Chatterjee and Laurent Doyen. 2014a. Games with a Weak Adversary. In _ICALP’14_. 110–121. https://doi.org/10.1007/978-3-662-43951-7_10 * Chatterjee and Doyen (2014b) Krishnendu Chatterjee and Laurent Doyen. 2014b. Partial-observation stochastic games: How to win when belief fails. _ACM Transactions on Computational Logic (TOCL)_ 15, 2 (2014), 16\. https://doi.org/10.1145/2579821 * Chatterjee et al. (2017) Krishnendu Chatterjee, Laurent Doyen, Emmanuel Filiot, and Jean-François Raskin. 2017\. Doomsday equilibria for omega-regular games. _Inf. Comput._ 254 (2017), 296–315. https://doi.org/10.1016/j.ic.2016.10.012 * Chatterjee et al. (2010a) Krishnendu Chatterjee, Thomas A Henzinger, and Nir Piterman. 2010a. Strategy logic. _Information and Computation_ 208 (2010). * Chatterjee et al. (2010b) Krishnendu Chatterjee, Thomas A. Henzinger, and Nir Piterman. 2010b. Strategy Logic. _Inf. Comput._ 208, 6 (2010), 677–693. https://doi.org/10.1016/j.ic.2009.07.004 * Clarke et al. (1999) Edmund M Clarke, Orna Grumberg, and Doron Peled. 1999\. _Model checking_. MIT press. * Condurache et al. (2016) Rodica Condurache, Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin. 2016\. The Complexity of Rational Synthesis. In _ICALP’16_. 121:1–121:15. https://doi.org/10.4230/LIPIcs.ICALP.2016.121 * Degorre et al. (2010) Aldric Degorre, Laurent Doyen, Raffaella Gentilini, Jean-François Raskin, and Szymon Toruńczyk. 2010. 
Energy and mean-payoff games with imperfect information. In _CSL’10_. Springer, 260–274. * Dima and Tiplea (2011) Catalin Dima and Ferucio Laurentiu Tiplea. 2011. Model-checking ATL under Imperfect Information and Perfect Recall Semantics is Undecidable. _CoRR_ (2011). arXiv:1102.4225 * Doyen and Raskin (2011) Laurent Doyen and Jean-François Raskin. 2011\. Games with imperfect information: Theory and algorithms. _Lectures in Game Theory for Computer Scientists_ (2011), 185–212. * Elgot and Rabin (1966) Calvin C. Elgot and Michael O. Rabin. 1966. Decidability and Undecidability of Extensions of Second (First) Order Theory of (Generalized) Successor. _JSL_ 31, 2 (1966), 169–181. https://doi.org/10.2307/2269808 * Emerson and Halpern (1986) E Allen Emerson and Joseph Y Halpern. 1986. ”Sometimes” and ”not never” revisited: on branching versus linear time temporal logic. _Journal of the ACM (JACM)_ 33, 1 (1986), 151–178. * Fagin et al. (1995) Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. 1995. _Reasoning about knowledge_. Vol. 4. MIT press Cambridge. * Filiot et al. (2018) Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin. 2018\. Rational Synthesis Under Imperfect Information. In _LICS’18_. ACM, 422–431. * Finkbeiner and Schewe (2005) Bernd Finkbeiner and Sven Schewe. 2005. Uniform Distributed Synthesis. In _LICS’05_. 321–330. https://doi.org/10.1109/LICS.2005.53 * Finkbeiner and Schewe (2010) Bernd Finkbeiner and Sven Schewe. 2010. Coordination Logic. In _CSL’10_. 305–319. https://doi.org/10.1007/978-3-642-15205-4_25 * Fisman et al. (2010) Dana Fisman, Orna Kupferman, and Yoad Lustig. 2010\. Rational synthesis. In _TACAS’10_. Springer, 190–204. https://doi.org/10.1007/978-3-642-12002-2_16 * French (2001) Tim French. 2001\. Decidability of quantifed propositional branching time logics. In _AJCAI’01_. 165–176. https://doi.org/10.1007/3-540-45656-2_15 * Gastin et al. (2009) Paul Gastin, Nathalie Sznajder, and Marc Zeitoun. 2009\. Distributed synthesis for well-connected architectures. _FMSD_ 34, 3 (2009), 215–237. https://doi.org/10.1007/s10703-008-0064-7 * Genest et al. (2015) Blaise Genest, Doron Peled, and Sven Schewe. 2015\. Knowledge= observation+ memory+ computation. In _International Conference on Foundations of Software Science and Computation Structures_. Springer, 215–229. * Guelev et al. (2011) Dimitar P. Guelev, Catalin Dima, and Constantin Enea. 2011\. An alternating-time temporal logic with knowledge, perfect recall and past: axiomatisation and model-checking. _Journal of Applied Non-Classical Logics_ 21, 1 (2011), 93–131. https://doi.org/10.3166/jancl.21.93-131 * Gutierrez et al. (2018) Julian Gutierrez, Giuseppe Perelli, and Michael Wooldridge. 2018\. Imperfect information in Reactive Modules games. _Inf. Comput._ 261, Part (2018), 650–675. https://doi.org/10.1016/j.ic.2018.02.023 * Halpern and Vardi (1989) Joseph Y. Halpern and Moshe Y. Vardi. 1989. The complexity of reasoning about knowledge and time. I. Lower bounds. _JCSS_ 38, 1 (1989), 195–237. * Huang and Van Der Meyden (2014) Xiaowei Huang and Ron Van Der Meyden. 2014. A Temporal Logic of Strategic Knowledge. In _KR’14_. * Jamroga and Bulling (2011) W. Jamroga and N. Bulling. 2011. Comparing variants of strategic ability. In _IJCAI’11_. AAAI Press, 252–257. https://doi.org/10.1023/A:1026171312755 * Jamroga and Murano (2014) Wojciech Jamroga and Aniello Murano. 2014. On module checking and strategies. In _AAMAS’14_. 
International Foundation for Autonomous Agents and Multiagent Systems, 701–708. * Jamroga and Murano (2015) Wojciech Jamroga and Aniello Murano. 2015. Module checking of strategic ability. In _AAMAS’15_. International Foundation for Autonomous Agents and Multiagent Systems, 227–235. * Jamroga and van der Hoek (2004) Wojciech Jamroga and Wiebe van der Hoek. 2004. Agents that Know How to Play. _Fundam. Inform._ 63, 2-3 (2004), 185–219. * Knight and Maubert (2019) Sophia Knight and Bastien Maubert. 2019. Dealing with imperfect information in Strategy Logic. arXiv:1908.02488 Presented at SR’15. * Kupferman (1999) Orna Kupferman. 1999\. Augmenting branching temporal logics with existential quantification over atomic propositions. _JLC_ 9, 2 (1999), 135–147. https://doi.org/10.1093/logcom/9.2.135 * Kupferman et al. (2000a) Orna Kupferman, Parthasarathy Madhusudan, Pazhamaneri Subramaniam Thiagarajan, and Moshe Y. Vardi. 2000a. Open Systems in Reactive Environments: Control and Synthesis. In _CONCUR’00_. 92–107. * Kupferman et al. (2016) Orna Kupferman, Giuseppe Perelli, and Moshe Y. Vardi. 2016\. Synthesis with rational environments. _Ann. Math. Artif. Intell._ 78, 1 (2016), 3–20. https://doi.org/10.1007/s10472-016-9508-8 * Kupferman and Vardi (1999) Orna Kupferman and Moshe Y. Vardi. 1999. Church’s problem revisited. _BSL_ (1999), 245–263. * Kupferman and Vardi (2001) Orna Kupferman and Moshe Y. Vardi. 2001. Synthesizing distributed systems. In _LICS’01_. 389–398. https://doi.org/10.1109/LICS.2001.932514 * Kupferman et al. (2000b) Orna Kupferman, Moshe Y. Vardi, and Pierre Wolper. 2000b. An automata-theoretic approach to branching-time model checking. _JACM_ 47, 2 (2000), 312–360. https://doi.org/10.1145/333979.333987 * Kupferman et al. (2001) Orna Kupferman, Moshe Y. Vardi, and Pierre Wolper. 2001\. Module checking. _Information and Computation_ 164, 2 (2001), 322–344. * Laroussinie and Markey (2014) François Laroussinie and Nicolas Markey. 2014. Quantified CTL: Expressiveness and Complexity. _LMCS_ 10, 4 (2014). https://doi.org/10.2168/LMCS-10(4:17)2014 * Laroussinie and Markey (2015) François Laroussinie and Nicolas Markey. 2015. Augmenting ATL with strategy contexts. _Inf. Comput._ 245 (2015), 98–123. https://doi.org/10.1016/j.ic.2014.12.020 * Laroussinie et al. (2015) François Laroussinie, Nicolas Markey, and Arnaud Sangnier. 2015\. ATLsc with partial observation. In _GandALF’15_. 43–57. https://doi.org/10.4204/EPTCS.193.4 * Läuchli and Savioz (1987) Hans Läuchli and Christian Savioz. 1987. Monadic second order definable relations on the binary tree. _JSL_ 52, 01 (1987), 219–226. https://doi.org/10.2307/2273878 * Löding (2011) Christof Löding. 2011\. Automata on Infinite Trees. In _preliminary version for the handbook Automata: from Mathematics to Applications_, Jean-Eric Pin (Ed.). * Lomuscio and Raimondi (2006) Alessio Lomuscio and Franco Raimondi. 2006. MCMAS: A Model Checker for Multi-agent Systems. In _TACAS’06_ _(LNCS 4314)_. 450–454. * Maubert and Murano (2018) Bastien Maubert and Aniello Murano. 2018. Reasoning about knowledge and strategies under hierarchical information. In _KR’18_. * Mogavero et al. (2014) Fabio Mogavero, Aniello Murano, Giuseppe Perelli, and Moshe Y. Vardi. 2014. Reasoning About Strategies: On the Model-Checking Problem. _ACM Trans. Comput. Log._ 15, 4 (2014), 34:1–34:47. https://doi.org/10.1145/2631917 * Muller and Schupp (1995) David E. Muller and Paul E. Schupp. 1995.
Simulating Alternating Tree Automata by Nondeterministic Automata: New Results and New Proofs of the Theorems of Rabin, McNaughton and Safra. _TCS_ 141, 1&2 (1995), 69–107. https://doi.org/10.1016/0304-3975(94)00214-4 * Pérez (2017) Guillermo A Pérez. 2017\. The fixed initial credit problem for partial-observation energy games is Ack-complete. _Inform. Process. Lett._ 118 (2017), 91–99. * Peterson et al. (2001) Gary Peterson, John Reif, and Salman Azhar. 2001. Lower bounds for multiplayer noncooperative games of incomplete information. _CAMWA_ 41, 7 (2001), 957–992. https://doi.org/10.1016/S0898-1221(00)00333-3 * Peterson et al. (2002) Gary Peterson, John Reif, and Salman Azhar. 2002. Decision algorithms for multiplayer noncooperative games of incomplete information. _CAMWA_ 43, 1 (2002), 179–206. https://doi.org/10.1016/S0898-1221(01)00282-6 * Peterson and Reif (1979) Gary L. Peterson and John H. Reif. 1979. Multiple-Person Alternation. In _SFCS’79_. 348–363. https://doi.org/10.1109/SFCS.1979.25 * Pinchinat and Riedweg (2005) Sophie Pinchinat and Stéphane Riedweg. 2005. A decidable class of problems for control under partial observation. _IPL_ 95, 4 (2005), 454–460. https://doi.org/10.1016/j.ipl.2005.04.011 * Pnueli (1977) Amir Pnueli. 1977\. The Temporal Logic of Programs. In _FOCS_. 46–57. * Pnueli and Rosner (1989) Amir Pnueli and Roni Rosner. 1989. On the synthesis of a reactive module. In _POPL_. 179–190. * Pnueli and Rosner (1990) Amir Pnueli and Roni Rosner. 1990. Distributed reactive systems are hard to synthesize. In _FOCS’90_. 746–757. https://doi.org/10.1109/FSCS.1990.89597 * Puchala (2010) Bernd Puchala. 2010\. Asynchronous Omega-Regular Games with Partial Information. In _MFCS_. 592–603. * Rabin (1969) Michael O Rabin. 1969\. Decidability of second-order theories and automata on infinite trees. _TAMS_ 141 (1969), 1–35. https://doi.org/10.1090/S0002-9947-1969-0246760-1 * Ramanujam and Simon (2010) Ramaswamy Ramanujam and Sunil Simon. 2010. A communication based model for games of imperfect information. In _International Conference on Concurrency Theory_. Springer, 509–523. * Reif (1984) John H Reif. 1984\. The complexity of two-player games of incomplete information. _Journal of computer and system sciences_ 29, 2 (1984), 274–301. https://doi.org/10.1016/0022-0000(84)90034-5 * Schewe and Finkbeiner (2007) Sven Schewe and Bernd Finkbeiner. 2007. Distributed Synthesis for Alternating-Time Logics. In _ATVA’07_. 268–283. https://doi.org/10.1007/978-3-540-75596-8_20 * Schobbens (2004) Pierre-Yves Schobbens. 2004\. Alternating-time logic with imperfect recall. _Electr. Notes Theor. Comput. Sci._ 85, 2 (2004), 82–93. https://doi.org/10.1016/S1571-0661(05)82604-0 * Selten (1965) Reinhard Selten. 1965\. Spieltheoretische behandlung eines oligopolmodells mit nachfrageträgheit: Teil i: Bestimmung des dynamischen preisgleichgewichts. _Zeitschrift für die gesamte Staatswissenschaft/Journal of Institutional and Theoretical Economics_ H. 2 (1965), 301–324. * Sistla (1983) A Prasad Sistla. 1983\. _Theoretical Issues in the Design and Certification of Distributed Systems._ Ph.D. Dissertation. Harvard University, Cambridge, MA, USA. * Thomas (1992) Wolfgang Thomas. 1992\. Infinite Trees and Automaton-Definable Relations over omega-Words. _TCS_ 103, 1 (1992), 143–159. https://doi.org/10.1016/0304-3975(92)90090-3 * van der Meyden and Vardi (1998) Ron van der Meyden and Moshe Y. Vardi. 1998. Synthesis from knowledge-based specifications. In _CONCUR’98_. Springer, 34–49. 
* van der Meyden and Wilke (2005) Ron van der Meyden and Thomas Wilke. 2005. Synthesis of Distributed Systems from Knowledge-Based Specifications. In _CONCUR’05_. 562–576. * Vardi and Wolper (1994) Moshe Y. Vardi and Pierre Wolper. 1994. Reasoning about infinite computations. _IC_ 115, 1 (1994), 1–37. * Zielonka (1998) Wieslaw Zielonka. 1998\. Infinite Games on Finitely Coloured Graphs with Applications to Automata on Infinite Trees. _TCS_ 200, 1-2 (1998), 135–183. ## Appendix A Proof of Proposition 4.12 First, for every LTL formula $\psi$ one can build a parity word automaton $\mathcal{W}^{\psi}$ with two colours and $2^{O(|\psi|)}$ states (Vardi and Wolper, 1994). Let $K_{\psi}\in\mathbb{N}$ be such that the number of states of $\mathcal{W}^{\psi}$ is bounded by $2^{K_{\psi}|\psi|}$. We also state a more precise version of Theorem 4.6: for every ATA $\mathcal{A}$ with $n$ states and $l$ colours, one can build an NTA $\mathcal{N}$ with at most $2^{O(nl\log(nl))}$ states and $O(nl)$ colours such that $\mathcal{L}(\mathcal{A})=\mathcal{L}(\mathcal{N})$ (Muller and Schupp, 1995; Löding, 2011). We let $K_{1},K_{2}\in\mathbb{N}$ be such that the number of states of $\mathcal{N}$ is bounded by $2^{K_{1}nl\log(nl)}$ and the number of colours by $K_{2}nl$. Proposition 4.12 follows directly from the following. ###### Proposition A.1. Let $\Phi$ be a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ formula, $\mathcal{S}$ a CKS, and let ${\textnormal{AP}_{\exists}}={\textnormal{AP}_{\exists}}(\Phi)$. For every subformula $\varphi$ of $\Phi$ and state $s\in\mathcal{S}$, it holds that: * • if $\mbox{sd}_{k}(\varphi)=0$, $\mathcal{A}_{s}^{\varphi}$ has at most $f_{\mathcal{S}}^{\varphi}$ states and 2 colours, * • if $\mbox{sd}_{k}(\varphi)\geq 1$, $\mathcal{A}_{s}^{\varphi}$ has at most $\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$ states, and its number of colours is at most $\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$, with $f_{\mathcal{S}}^{\varphi}=(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}2^{K_{\psi}|\varphi|{\bf E}\mathrm{d}(\varphi)}$. In addition, if $\mathcal{A}_{s}^{\varphi}$ has state set $Q$, for each $q\in Q$ and $a\in 2^{\textnormal{AP}_{\exists}}$, we have $|\delta(q,a)|\leq|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H(\varphi)|\varphi|}$, where $H(\varphi)=1+{\bf E}\mathrm{d}(\varphi)$. ###### Proof. We prove the result by induction on $\varphi$. $\bm{\varphi=p:}$ In this case $\mbox{sd}_{k}(\varphi)=\exists\mathrm{d}(\varphi)={\bf E}\mathrm{d}(\varphi)=0$. By construction, $\mathcal{A}_{s}^{\varphi}$ has one state $q_{\iota}$ and two colours, so that the first part of the claim holds. In addition, each formula of its transition function is of size one, so that the second part of the claim also holds. $\bm{\varphi=\neg\varphi^{\prime}:}$ Complementing an ATA does not change the number of states, the number of colours, or the size of formulas in the transition function, so that the result follows by induction hypothesis and the fact that $|\varphi^{\prime}|\leq|\varphi|$ and ${\bf E}\mathrm{d}(\varphi)={\bf E}\mathrm{d}(\varphi^{\prime})$. $\bm{\varphi=\varphi_{1}\vee\varphi_{2}:}$ To establish the claim about number of states and colours we split cases. First we consider the case where $\mbox{sd}_{k}(\varphi)=0$. In that case we also have $\mbox{sd}_{k}(\varphi_{1})=\mbox{sd}_{k}(\varphi_{2})=0$.
By induction hypothesis, for $i\in\\{1,2\\}$, $\mathcal{A}_{s}^{\varphi_{i}}$ has at most $f_{\mathcal{S}}^{\varphi_{i}}$ states and $2$ colours. These automata are then narrowed down, but the narrowing operation leaves the size of formulas in the transition function unchanged (in fact they may become smaller, but not bigger, see (Kupferman and Vardi, 1999)). Therefore, by construction $\mathcal{A}_{s}^{\varphi}$ has at most $1+f_{\mathcal{S}}^{\varphi_{1}}+f_{\mathcal{S}}^{\varphi_{2}}$ states and two colours. Now we have that $\displaystyle 1+f_{\mathcal{S}}^{\varphi_{1}}+f_{\mathcal{S}}^{\varphi_{2}}$ $\displaystyle=1+\sum_{i\in\\{1,2\\}}(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi_{i})}|\varphi_{i}||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi_{i})}2^{K_{\psi}|\varphi_{i}|{\bf E}\mathrm{d}(\varphi_{i})}$ $\displaystyle=1+(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}\sum_{i\in\\{1,2\\}}2^{K_{\psi}|\varphi_{i}|{\bf E}\mathrm{d}(\varphi)}$ $\displaystyle\leq 1+(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}2^{K_{\psi}(|\varphi_{1}|+|\varphi_{2}|){\bf E}\mathrm{d}(\varphi)}$ $\displaystyle 1+f_{\mathcal{S}}^{\varphi_{1}}+f_{\mathcal{S}}^{\varphi_{2}}$ $\displaystyle\leq(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}2^{K_{\psi}(|\varphi_{1}|+|\varphi_{2}|+1){\bf E}\mathrm{d}(\varphi)}$ We get that (8) $1+f_{\mathcal{S}}^{\varphi_{1}}+f_{\mathcal{S}}^{\varphi_{2}}\leq f_{\mathcal{S}}^{\varphi}$ which concludes the claim about the number of states. Now for the case where $\mbox{sd}_{k}(\varphi)\geq 1$. By definition of nondeterminisation depth, for at least one $i\in\\{1,2\\}$ we have $\mbox{sd}_{k}(\varphi_{i})\geq 1$. Also, the number of colours used in $\mathcal{A}_{s}^{\varphi}$ is the maximum between the number of colours used in $\mathcal{A}_{s}^{\varphi_{1}}$ and those used in $\mathcal{A}_{s}^{\varphi_{2}}$. By induction hypothesis it is the case that $\mathcal{A}_{s}^{\varphi_{i}}$ has at most $\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})-1\mid f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$ colours if $\mbox{sd}_{k}(\varphi_{i})\geq 1$, or 2 if $\mbox{sd}_{k}(\varphi_{i})=0$. Therefore, the number of colours in $\mathcal{A}_{s}^{\varphi}$ is at most $\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})-1\mid f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$ for some $i$, which is less than $\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$. For the number of states $|Q|$ in $\mathcal{A}_{s}^{\varphi}$, we have that $|Q|=1+|Q_{1}|+|Q_{2}|$, where $Q_{i}$ is the set of states of $\mathcal{A}_{s}^{\varphi_{i}}$. 
By induction hypothesis we get $\displaystyle|Q|$ $\displaystyle\leq 1+\sum_{i\in\\{1,2\\}}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})\mid f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$ $\displaystyle\leq 1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid\sum_{i\in\\{1,2\\}}f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(\sum_{i\in\\{1,2\\}}f_{\mathcal{S}}^{\varphi_{i}}+1)\log f_{\mathcal{S}}^{\varphi}\big{)}$ $\displaystyle|Q|$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}\quad\text{(using Equation (8))}$ which concludes the claim about the number of states. Concerning the size of formulas in the transition function, for all states from $\mathcal{A}_{s}^{\varphi_{1}}$ and $\mathcal{A}_{s}^{\varphi_{2}}$ the transition function is unchanged and the result thus holds by induction hypothesis. For the remaining state $q_{\iota}$, we have by definition $\delta(q_{\iota},a)=\delta^{1}(q_{\iota}^{1},a)\vee\delta^{2}(q_{\iota}^{2},a)$ and thus $|\delta(q_{\iota},a)|=|\delta^{1}(q_{\iota}^{1},a)|+|\delta^{2}(q_{\iota}^{2},a)|+1$. By induction hypothesis we get that $\displaystyle|\delta(q_{\iota},a)|$ $\displaystyle\leq|\mathcal{S}||Q_{1}|^{|\mathcal{S}|}2^{H(\varphi_{1})|\varphi_{1}|}+|\mathcal{S}||Q_{2}|^{|\mathcal{S}|}2^{H(\varphi_{2})|\varphi_{2}|}+1$ $\displaystyle\leq|\mathcal{S}|2^{H(\varphi)(|\varphi_{1}|+|\varphi_{2}|)}(|Q_{1}|^{|\mathcal{S}|}+|Q_{2}|^{|\mathcal{S}|})$ $\displaystyle\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}(|Q_{1}|+|Q_{2}|)^{|\mathcal{S}|}$ And thus $|\delta(q_{\iota},a)|\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}|Q|^{|\mathcal{S}|}$ as required. $\bm{\varphi={\bf E}\psi:}$ The word automaton built for the LTL skeleton of $\psi$ is in fact a Büchi automaton, and thus uses only two colours. The number of colours used by $\mathcal{A}_{s}^{\varphi}$ is therefore the maximum number of colours used by the automata $\mathcal{A}_{s}^{\varphi_{i}}$ built for the maximal state subformulas $\varphi_{i}$ in $\psi$, and the result follows by induction hypothesis. Concerning the number of states, let $|Q|$ (resp. $|Q_{i}|$, $|Q_{\psi}|$) be the number of states in $\mathcal{A}_{s}^{\varphi}$ (resp. $\mathcal{A}_{s}^{\varphi_{i}}$, $\mathcal{W}^{\psi}$). Note that the number of states in $\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$ does not depend on $s^{\prime}$. Recall that $\max(\psi)=\\{\varphi_{1},\ldots,\varphi_{n}\\}$ is the set of maximal state subformulas of $\psi$, and let $\psi^{\prime}$ be the LTL skeleton of $\psi$, i.e., the LTL formula obtained from $\psi$ by replacing maximal state subformulas $\varphi_{i}$ with propositions $p_{\varphi_{i}}$.
We thus have $\displaystyle|Q|$ $\displaystyle=|Q_{\psi}||\mathcal{S}|+2|\mathcal{S}|\sum_{i\in[n]}|Q_{i}|$ $\displaystyle\leq 2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|+2|\mathcal{S}|\sum_{i\in[n]}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})\mid f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$ $\displaystyle|Q|$ $\displaystyle\leq 2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid\sum_{i\in[n]}f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}\right)$ And thus (9) $|Q|\leq 2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid\log f_{\mathcal{S}}^{\varphi}\sum_{i\in[n]}f_{\mathcal{S}}^{\varphi_{i}}\big{)}\right)$ Now observe that for each $i\in[n]$ we have that ${\bf E}\mathrm{d}(\varphi_{i})\leq{\bf E}\mathrm{d}(\varphi)-1$, and $\exists\mathrm{d}(\varphi_{i})=\exists\mathrm{d}(\varphi)$. Therefore, $\displaystyle\sum_{i\in[n]}f_{\mathcal{S}}^{\varphi_{i}}$ $\displaystyle=(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}\sum_{i\in[n]}|\varphi_{i}||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi_{i})}2^{K_{\psi}|\varphi_{i}|{\bf E}\mathrm{d}(\varphi_{i})}$ $\displaystyle\leq(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)-1}(\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}$ Using this in Equation (9) we get $\displaystyle|Q|$ $\displaystyle\leq 2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)-1}(\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\log f_{\mathcal{S}}^{\varphi}\big{)}\right)$ $\displaystyle\leq 2^{K_{\psi}|\psi^{\prime}|}\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}(\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\log f_{\mathcal{S}}^{\varphi}\big{)}\right)$ $\displaystyle\leq 2^{K_{\psi}|\psi^{\prime}|}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}(1+\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\log f_{\mathcal{S}}^{\varphi}\big{)}$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}|\varphi|2^{K_{\psi}B}\log f_{\mathcal{S}}^{\varphi}\big{)},$ where $B=({\bf E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|+|\psi^{\prime}|$. To conclude it only remains to show that $B\leq|\varphi|{\bf E}\mathrm{d}(\varphi)$. Because $\varphi={\bf E}\psi$, it holds that ${\bf E}\mathrm{d}(\varphi)\geq 1$. If ${\bf E}\mathrm{d}(\varphi)=1$, we have $B=|\psi^{\prime}|\leq|\varphi|{\bf E}\mathrm{d}(\varphi)$. Now if ${\bf E}\mathrm{d}(\varphi)\geq 2$, we have $B=({\bf E}\mathrm{d}(\varphi)-2)\sum_{i\in[n]}|\varphi_{i}|+|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|$ Clearly, $\sum_{i\in[n]}|\varphi_{i}|\leq|\varphi|$, and $|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|\leq 2|\varphi|$, and the result follows. Note that it could seem that $|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|\leq|\varphi|$. It is true if one defines the size of a formula as the number of connectors, but not if one also counts atomic propositions, as we do here. 
However it is true that $|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|\leq 2|\varphi|$, independently of the definition of formulas’ size. It remains to establish the claim about the size of transition formulas. By definition, for every state $q$ of $\mathcal{A}_{s}^{\varphi}$ that comes from some $\mathcal{A}^{i}_{s^{\prime}}$ or $\overline{\mathcal{A}^{i}_{s^{\prime}}}$, the transition function is unchanged and thus the result follows by induction hypothesis and the fact that narrowing and complementation do not increase the size of formulas in transition functions. Now for the remaining states, for each $(q^{\psi},s^{\prime})\in Q$ and every $a\in 2^{{\textnormal{AP}_{\exists}}(\Phi)}$, we have $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq\sum_{a^{\prime}\in 2^{\max(\psi)}}\left(|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|+1+\sum_{\varphi_{i}\in a^{\prime}}(|\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)|+1)+\sum_{\varphi_{i}\notin a^{\prime}}(|\overline{\delta^{i}_{s^{\prime}}}(\overline{q^{i}_{s^{\prime}}},a)|+1)\right)$ Now by induction hypothesis, and because complementation does not increase the size of formulas, we get: (10) $|\delta((q^{\psi},s^{\prime}),a)|\leq\sum_{a^{\prime}\in 2^{\max(\psi)}}\left(|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|+2\sum_{i\in[n]}|\mathcal{S}|2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}\right)+2^{|\max(\psi)|}+2|\max(\psi)|2^{|\max(\psi)|},$ where $|Q_{i}|$ is the number of states in automaton $\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$. Now by definition, $\displaystyle|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|$ $\displaystyle=\left(\sum_{q^{\prime}\in\Delta^{\psi}(q^{\psi},a^{\prime})}\sum_{s^{\prime\prime}\in R(s^{\prime})}1\right)+|\Delta^{\psi}(q^{\psi},a^{\prime})||R(s^{\prime})|-1$ $\displaystyle|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|$ $\displaystyle\leq 2|\Delta^{\psi}(q^{\psi},a^{\prime})||R(s^{\prime})|-1$ We thus have (11) $|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|\leq 2|Q_{\psi^{\prime}}||\mathcal{S}|-1$ where $Q_{\psi^{\prime}}$ is the set of states of the word automaton $\mathcal{W}^{\psi}$. 
Using this in Equation 10 we get: $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq 2^{|\max(\psi)|}\left(2|Q_{\psi^{\prime}}||\mathcal{S}|-1+2\sum_{i\in[n]}|\mathcal{S}|2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}\right)+2^{|\max(\psi)|}+2|\max(\psi)|2^{|\max(\psi)|}$ $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq 2^{|\max(\psi)|+1}|\mathcal{S}|\left(|Q_{\psi^{\prime}}|+\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}\right)+2|\max(\psi)|2^{|\max(\psi)|}$ But for natural numbers $\\{a_{i},b_{i}\\}_{i\in[n]}$, it holds that $\sum_{i\in[n]}2^{a_{i}}b_{i}=2^{\sum_{i\in[n]}a_{i}}\sum_{i\in[n]}b_{i}-\sum_{i\in[n]}2^{a_{i}}(2^{\sum_{j\neq i}a_{j}}-1)b_{i}$ Applying this to $a_{i}=H(\varphi_{i})|\varphi_{i}|$ and $b_{i}=|Q_{i}|^{|\mathcal{S}|}$ we obtain $\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}=2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}-\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}$ We thus get that $|\delta((q^{\psi},s^{\prime}),a)|\leq 2^{|\max(\psi)|+1}|\mathcal{S}|\left(|Q_{\psi^{\prime}}|+2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}\right)+C,$ with $\displaystyle C$ $\displaystyle=2|\max(\psi)|2^{|\max(\psi)|}-2^{|\max(\psi)|+1}|\mathcal{S}|\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}$ $\displaystyle=2^{|\max(\psi)|}\left(2|\max(\psi)|-2|\mathcal{S}|\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}\right)$ If $n=|\max(\psi)|>1$, i.e., there are at least two maximal state subformulas, then $\sum_{j\neq i}H(\varphi_{j})|\varphi_{j}|>0$, hence $2|\mathcal{S}|\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}\geq 4n=4|\max(\psi)|$, which implies that $C\leq 0$, and thus $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq 2^{|\max(\psi)|+1}|\mathcal{S}|\left(|Q_{\psi^{\prime}}|+2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}\right)$ $\displaystyle\leq 2^{|\max(\psi)|+1}|\mathcal{S}|2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\left(|Q_{\psi^{\prime}}|^{|\mathcal{S}|}+\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}\right)$ $\displaystyle\leq|\mathcal{S}|2^{|\max(\psi)|+1+(H(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\left(|Q_{\psi^{\prime}}|+\sum_{i\in[n]}|Q_{i}|\right)^{|\mathcal{S}|}$ $\displaystyle\leq|\mathcal{S}|2^{|\varphi|+(H(\varphi)-1)|\varphi|}|Q|^{|\mathcal{S}|}$ $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}|Q|^{|\mathcal{S}|}$ It remains to consider the case where $\max(\psi)=\\{\varphi_{1}\\}$. In that case there are only two letters in the alphabet $2^{\max(\psi)}$, which are $\emptyset$ and $\\{\varphi_{1}\\}$. 
The transition formulas then simplify and one gets that $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq|\delta_{\psi}((q^{\psi},s^{\prime}),\emptyset)|+1+|\overline{\delta^{1}_{s^{\prime}}}(\overline{q^{1}_{s^{\prime}}},a)|+1+|\delta_{\psi}((q^{\psi},s^{\prime}),\\{\varphi_{1}\\})|+1+|\delta^{1}_{s^{\prime}}(q^{1}_{s^{\prime}},a)|$ Using Equation (11) and the induction hypothesis we get $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq 4|Q_{\psi^{\prime}}||\mathcal{S}|-2+2|\mathcal{S}|2^{H(\varphi_{1})|\varphi_{1}|}|Q_{1}|^{|\mathcal{S}|}+3$ $\displaystyle\leq 1+2|\mathcal{S}|(2|Q_{\psi^{\prime}}|+2^{H(\varphi_{1})|\varphi_{1}|}|Q_{1}|^{|\mathcal{S}|})$ $\displaystyle\leq 1+2|\mathcal{S}|2^{(H(\varphi)-1)|\varphi_{1}|}(|Q_{\psi^{\prime}}|^{|\mathcal{S}|}+|Q_{1}|^{|\mathcal{S}|})$ $\displaystyle\leq 1+|\mathcal{S}|2^{H(\varphi)|\varphi|}(|Q_{\psi^{\prime}}|^{|\mathcal{S}|}+|Q_{1}|^{|\mathcal{S}|})$ $\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}|Q|^{|\mathcal{S}|}$ $\bm{\varphi=\exists}^{\bm{\textnormal{{o}}}}\bm{p.\,\varphi^{\prime}:}$ We first establish the claim for states and colours, and we start with the case $\mbox{sd}_{k}(\varphi)=\mbox{sd}_{k}(\varphi^{\prime})$. By definition we necessarily have that $\mbox{sd}_{x}(\varphi^{\prime})=\mbox{nd}$, i.e., $\mathcal{A}_{s}^{\varphi^{\prime}}$ is nondeterministic, and $\textnormal{{o}}=I_{\varphi^{\prime}}$, therefore there is no need to use narrowing or nondeterminisation here. $\mathcal{A}_{s}^{\varphi}$ is obtained by directly projecting $\mathcal{A}_{s}^{\varphi^{\prime}}$, an operation that does not change the number of states or colours, so that the claim for states and colours follows directly by induction hypothesis. Now we consider the case where $\mbox{sd}_{k}(\varphi)\neq\mbox{sd}_{k}(\varphi^{\prime})$, which implies that $\mbox{sd}_{k}(\varphi)\geq 1$. Let $n$ be the number of states and $l$ the number of colours in $\mathcal{A}_{s}^{\varphi^{\prime}}$. In this case $\mathcal{A}_{s}^{\varphi^{\prime}}$ is first narrowed down, which does not change number of states or colours. The resulting automaton is then nondeterminised, yielding an automaton with at most $2^{K_{1}nl\log nl}$ states and $K_{2}nl$ colours. Again, we split cases: if $\mbox{sd}_{k}(\varphi^{\prime})=0$, by induction hypothesis, $n\leq f_{\mathcal{S}}^{\varphi^{\prime}}$ and $l=2$. 
For the number of colours, observing that $\exists\mathrm{d}(\varphi)=\exists\mathrm{d}(\varphi^{\prime})+1$, we have $\displaystyle K_{2}nl\leq 2K_{2}f_{\mathcal{S}}^{\varphi^{\prime}}$ $\displaystyle=2K_{2}(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi^{\prime})}|\varphi^{\prime}||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi^{\prime})}2^{K_{\psi}|\varphi^{\prime}|{\bf E}\mathrm{d}(\varphi^{\prime})}$ $\displaystyle\leq(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}2^{K_{\psi}|\varphi|{\bf E}\mathrm{d}(\varphi)}$ $\displaystyle K_{2}nl$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$ For the number of states, we have that $\displaystyle 2^{K_{1}nl\log nl}$ $\displaystyle\leq 2^{2K_{1}f_{\mathcal{S}}^{\varphi^{\prime}}\log(2f_{\mathcal{S}}^{\varphi^{\prime}})}\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid f_{\mathcal{S}}^{\varphi}\log(f_{\mathcal{S}}^{\varphi})\big{)}$ Now for the final case, if $\mbox{sd}_{k}(\varphi)=\mbox{sd}_{k}(\varphi^{\prime})+1$ and $\mbox{sd}_{k}(\varphi^{\prime})\geq 1$, by induction hypothesis $n\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ and $l\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$. For the number of colours in $\mathcal{A}_{s}^{\varphi}$ we thus get $\displaystyle K_{2}nl$ $\displaystyle\leq K_{2}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}2^{f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}}\big{)}$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid 2K_{2}f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ $\displaystyle K_{2}nl$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$ Concerning the number of states, we observe that $\displaystyle nl$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}2^{f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}}\big{)}$ $\displaystyle nl$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid 2f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ $\displaystyle K_{1}nl\log nl$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid 2K_{1}f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}2^{2f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}}\big{)}$ $\displaystyle K_{1}nl\log nl$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid 4K_{1}f_{\mathcal{S}}^{\varphi^{\prime}}\log f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ $\displaystyle K_{1}nl\log nl$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$ $\displaystyle 2^{K_{1}nl\log nl}$ $\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$ It only remains to establish the claim for the size of transition formulas. 
Since $\mathcal{A}_{s}^{\varphi}$ is nondeterministic, formulas $\delta(q,a)$ are written in disjunctive normal form and for every direction $x\in S_{\varphi}$ each disjunct contains exactly one element of $\\{x\\}\times Q$, where $Q$ is the set of states in $\mathcal{A}_{s}^{\varphi}$. As a result, each formula $\delta(q,a)$ is of size $\displaystyle|\delta(q,a)|$ $\displaystyle\leq|Q|^{|S_{\varphi}|}(2|S_{\varphi}|-1)+|Q|^{|S_{\varphi}|}-1$ $\displaystyle\leq 2|S_{\varphi}||Q|^{|S_{\varphi}|}$ $\displaystyle|\delta(q,a)|$ $\displaystyle\leq 2^{H(\varphi)|\varphi|}|\mathcal{S}||Q|^{|\mathcal{S}|}$ ∎
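As a quick aside, the rebalancing identity for sums of the form $\sum_{i\in[n]}2^{a_{i}}b_{i}$ that was invoked in the case $\varphi={\bf E}\psi$ above can be verified by direct expansion: since $a_{i}+\sum_{j\neq i}a_{j}=\sum_{j\in[n]}a_{j}$ for every $i$, we get $\displaystyle 2^{\sum_{i\in[n]}a_{i}}\sum_{i\in[n]}b_{i}-\sum_{i\in[n]}2^{a_{i}}\big{(}2^{\sum_{j\neq i}a_{j}}-1\big{)}b_{i}$ $\displaystyle=\sum_{i\in[n]}2^{\sum_{j\in[n]}a_{j}}b_{i}-\sum_{i\in[n]}2^{\sum_{j\in[n]}a_{j}}b_{i}+\sum_{i\in[n]}2^{a_{i}}b_{i}$ $\displaystyle=\sum_{i\in[n]}2^{a_{i}}b_{i}.$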
# EKOR strata on Shimura varieties with parahoric reduction Jens Hesse Technische Universität Darmstadt<EMAIL_ADDRESS> ###### Abstract We investigate the geometry of the special fiber of the integral model of a Shimura variety with parahoric level at a given prime place. To be more precise, we deal with the EKOR stratification which interpolates between the Ekedahl-Oort and Kottwitz-Rapoport stratifications. In the Siegel case we give a geometric description by suitably generalizing the theory of $G$-zips of Moonen, Wedhorn, Pink and Ziegler to our context. ###### Contents 1. 1 Background 1. 1.1 Shimura data of Hodge type 2. 1.2 Bruhat-Tits buildings 3. 1.3 Bruhat-Tits group schemes 4. 1.4 Siegel integral models 5. 1.5 Local structure of the integral model 1. 1.5.1 Generizations and irreducible components 6. 1.6 The local model 1. 1.6.1 The Siegel case 2. 1.6.2 The relation between the integral and the local model 3. 1.6.3 The Pappas-Zhu construction 2. 2 EKOR strata and zips in the case of parahoric reduction 1. 2.1 The Ekedahl-Oort, Kottwitz-Rapoport and EKOR stratifications 1. 2.1.1 Iwahori-Weyl group and the admissible subset 2. 2.1.2 Kottwitz-Rapoport stratification 3. 2.1.3 Ekedahl-Oort stratification 4. 2.1.4 EKOR stratification 2. 2.2 $\overline{\mathcal{G}}_{K}$-zips in the Siegel case 1. 2.2.1 Preliminaries 2. 2.2.2 Lattice chains, zips, admissibility 3. 2.2.3 An explicit description of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ 4. 2.2.4 $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ in the Siegel case 5. 2.2.5 The example of $\operatorname{GSp}(4)$ Introduction Shimura varieties are objects of arithmetic geometry (namely varieties over number fields) that naturally arise in the search for generalized, non-abelian reciprocity laws (i.e., in the Langlands program) and as moduli spaces of abelian varieties (with certain extra structures on them). One way of approaching these objects is to try to understand their mod-$p$ reduction (which has to be carefully defined first). Insofar as a moduli interpretation in the above sense exists and continues to exist likewise for the mod-$p$ reduction111There need not be a _literal_ moduli interpretation, but in any event the stratifications in question derive from a close connection to moduli problems., it allows us to stratify the moduli space according to several invariants of the abelian varieties parametrized, e.g., the isomorphism classes of their $p$-torsion. (An important observation is that these stratifications genuinely live in the characteristic $p$ world, making use of Frobenius endomorphisms and so on.) This, very roughly, is the general theme everything in this article revolves around. More precisely, we will be dealing with Shimura varieties of Hodge type and parahoric level structure, at some fixed prime $v\mid p$ of the number field over which the Shimura variety is defined. Under some reasonably mild assumptions, cf. 1.17, Kisin and Pappas [KP15] constructed a canonical integral model for such a Shimura variety. We try to understand some aspects of the geometry of the special fiber of said integral model, namely the EKOR strata (an interpolation between the Ekedahl-Oort strata, which in the case of hyperspecial level are roughly the patches where the isomorphism class of the $p$-torsion associated with the abelian variety is constant, and the Kottwitz- Rapoport strata, which roughly are the patches where the Hodge filtration looks constant) and defining them in a geometrical way. Let us now go into more detail. 
On the integral model $\mathscr{S}_{K}$ ($K$ parahoric level) we have a “universal” abelian scheme (the quotation marks indicating that it is not really universal for some moduli problem on $\mathscr{S}_{K}$, but it comes from a universal abelian scheme via pullback) and we have various kinds of Hodge tensors. We also have a “universal” isogeny chain of abelian schemes tightly connected to the “universal” abelian scheme. The overarching goal (and what we meant above by “defining the EKOR strata in a geometrical way”) is to construct a “nice” algebraic stack $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ and a “nice” morphism $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ from the mod-$p$ reduction of the Shimura variety to it, such that the fibers are the EKOR strata. Shen, Yu and Zhang [SYZ19] solved this problem on individual Kottwitz-Rapoport strata and globally after perfection, but not in the form stated here (i.e., globally without passing to perfections). In the Siegel case we propose a solution which specializes to that of Shen, Yu and Zhang on Kottwitz-Rapoport strata, and should not be difficult to generalize to many (P)EL cases. We show that $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ is surjective. However, we have to leave the question of whether $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ is smooth (which would be part of “nice”) an open conjecture. For hyperspecial level, the EKOR stratification agrees with the Ekedahl-Oort stratification, and the goal just set out is achieved by the stack of $\overline{\mathcal{G}}_{K}$-zips, first defined in special cases by Moonen and Wedhorn in [MW04] and then generally by Pink, Wedhorn and Ziegler in [PWZ11, PWZ15]; the relation to Shimura varieties being established in increasing generality in [MW04], by Viehmann and Wedhorn in [VW13], and finally by Zhang in [Zha15]. One way of looking at the transition from hyperspecial level to general parahoric level (at the very least in nice enough (P)EL cases) is from the point of view of moduli problems of abelian varieties with extra structure, where in the hyperspecial case we are really dealing just with that and in the general case we are dealing with isogeny chains of abelian varieties with extra structure, indexed by lattice chains coming from the Bruhat-Tits building of the reductive $p$-adic Lie group in question. The basic idea in generalizing zips from the hyperspecial to the general parahoric case then is that one should be dealing with chains of zips in the old sense. The zip of an abelian variety encodes the following information: the Hodge filtration, the conjugate filtration, and the Cartier isomorphism relating the two. In the general case, every abelian variety in the isogeny chain has a Hodge filtration, a conjugate filtration and a Cartier isomorphism. Problems now arise because we are dealing with $p$-primary isogenies on $p$-torsion points, implying that the transition morphisms in these chains have non- vanishing kernels. This introduces additional difficulty compared to the hyperspecial case; there is a naive way of defining a zip stack, but eventually we need to consider a certain admissible locus in it, which so far suffers from the absence of a nice moduli description. Passing to perfections however simplifies things and allows us to prove that the admissible locus is closed. 
From here we arrive at the stack that we are really interested in by dividing out a certain group action involving the unipotent radical of the special fiber of the parahoric group scheme. A careful inspection shows that on Kottwitz-Rapoport strata we arrive at the same result as in [SYZ19]. To sum up the results, ###### Theorem A: In the Siegel case, there is an algebraic stack $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ and a surjective morphism $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$, whose fibers are the EKOR strata and such that on Kottwitz-Rapoport strata, one gets the stack and map constructed in [SYZ19]. For $\operatorname{GSp}(4)$ we do some calculations to illustrate the theory; section 2.2.5. ### Acknowledgements This article essentially is an extract of my doctoral thesis [Hes20] (another extract222In particular, there is a large overlap between the “Background” sections of the two articles., dealing with the foliation into central leaves, is [Hes20a]). I thank Torsten Wedhorn for suggesting the topic of the dissertation, his support and patiently answering my questions. Moreover I thank Eva Viehmann and Paul Hamacher for their hospitality and helpful discussions during a month-long stay in Munich at the TU München. I am also grateful to Timo Richarz and Timo Henkel for numerous helpful discussions. This research was supported by the Deutsche Forschungsgemeinschaft (DFG), project number WE 2380/5. ## 1 Background ### 1.1 Shimura data of Hodge type This article deals with aspects of the geometry of Shimura varieties (of Hodge type), which are the (systems of) varieties associated with Shimura data (of Hodge type). ###### Definition 1.1. A _Shimura datum of Hodge type_ is a pair $(G,X)$, where $G$ is a reductive algebraic group over $\mathbb{Q}$ and $X\subseteq\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},G_{\mathbb{R}})$ is a $G(\mathbb{R})$-conjugacy class ($\mathbb{S}:=\operatorname{Res}_{\mathbb{C}/\mathbb{R}}\mathbb{G}_{m,\mathbb{C}}$ being the Deligne torus) subject to the following conditions: 1. (1) For $h\in X$, the induced Hodge structure $\mathbb{S}\xrightarrow{h}G_{\mathbb{R}}\xrightarrow{\mathrm{Ad}}\operatorname{GL}(\operatorname{Lie}(G_{\mathbb{R}}))$ satisfies $\operatorname{Lie}(G_{\mathbb{C}})=\operatorname{Lie}(G_{\mathbb{C}})^{-1,1}\oplus\operatorname{Lie}(G_{\mathbb{C}})^{0,0}\oplus\operatorname{Lie}(G_{\mathbb{C}})^{1,-1}$. 2. (2) $\operatorname{int}(h(i))\colon G^{\mathrm{ad}}_{\mathbb{R}}\to G^{\mathrm{ad}}_{\mathbb{R}}$ is a Cartan involution, i.e., $\\{g\in G^{\mathrm{ad}}(\mathbb{C})\;|\;gh(i)=h(i)\overline{g}\\}$ is compact. Another way of phrasing this condition: Every finite-dimensional real representation $V$ of $G^{\mathrm{ad}}_{\mathbb{R}}$ carries a $G^{\mathrm{ad}}_{\mathbb{R}}$-invariant bilinear form $\varphi$ such that $(u,v)\mapsto\varphi(u,h(i)v)$ is symmetric and positive definite. It is enough to show that this holds for one _faithful_ finite-dimensional real representation $V$. 3. (3) $G^{\mathrm{ad}}$ _cannot_ be non-trivially written as $G^{\mathrm{ad}}\cong H\times I$ over $\mathbb{Q}$ with $\mathbb{S}\to G_{\mathbb{R}}\xrightarrow{\mathrm{proj}}H_{\mathbb{R}}$ trivial. 4. (4) There exists an embedding $(G,X)\hookrightarrow(\operatorname{GSp}(V),S^{\pm})$, where $(\operatorname{GSp}(V),S^{\pm})$ is the Shimura datum associated with a finite-dimensional symplectic $\mathbb{Q}$-vector space $V$ (see below). 
That is, we have an embedding $G\hookrightarrow\operatorname{GSp}(V)$ of $\mathbb{Q}$-group schemes such that the induced map $\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},G_{\mathbb{R}})\hookrightarrow\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},\operatorname{GSp}(V_{\mathbb{R}}))$ restricts to a map $X\hookrightarrow S^{\pm}$. ###### Example 1.2. Let $W$ be a finite-dimensional $\mathbb{R}$-vector space. $\mathbb{R}$-group homomorphisms $\mathbb{S}\to\operatorname{GL}(W)$ then correspond to Hodge decompositions of $W$, i.e., to decompositions $W_{\mathbb{C}}=\oplus_{(p,q)\in\mathbb{Z}^{2}}W_{\mathbb{C}}^{p,q}$, such that $W_{\mathbb{C}}^{p,q}$ is the complex conjugate of $W_{\mathbb{C}}^{q,p}$ for all $(p,q)\in\mathbb{Z}^{2}$. Under this correspondence, $h\colon\mathbb{S}\to\operatorname{GL}(W)$ corresponds to the Hodge decomposition $W_{\mathbb{C}}^{p,q}=\\{w\in W_{\mathbb{C}}\;|\;\forall z\in\mathbb{S}(\mathbb{R})=\mathbb{C}^{\times}\colon h(z)w=z^{-p}\bar{z}^{-q}w\\}$. Hodge decompositions of $W$ of type $(-1,0)+(0,-1)$ correspond to complex structures on $W$: If $h\colon\mathbb{S}\to\operatorname{GL}(W)$ yields such a Hodge decomposition, then $h(i)$ gives an $\mathbb{R}$-endomorphism $J$ of $W$ with $J\circ J=-\operatorname{id}_{W}$. Let $V=(V,\psi)$ be a finite-dimensional symplectic $\mathbb{Q}$-vector space. We say that a complex structure $J$ on $V_{\mathbb{R}}$ is positive (resp. negative) if $\psi_{J}:=\psi_{\mathbb{R}}(\\_,J\\_)$ is a positive definite (resp. negative definite) symmetric bilinear form on $V_{\mathbb{R}}$. Define $S^{+}$ (resp. $S^{-}$) to be the set of positive (resp. negative) complex structures on $(V_{\mathbb{R}},\psi_{\mathbb{R}})$ and $S^{\pm}:=S^{+}\sqcup S^{-}$. We can make this more concrete: A symplectic basis of $(V_{\mathbb{R}},\psi_{\mathbb{R}})$ is a basis $e_{1},\dotsc,\allowbreak e_{g},e_{-g},\dotsc,\allowbreak e_{-1}$, such that $\psi_{\mathbb{R}}$ is of the form $\begin{pmatrix}&\tilde{I}_{g}\\\ -\tilde{I}_{g}&\end{pmatrix}$ with respect to this basis, where $\tilde{I}_{g}=\begin{pmatrix}&&1\\\ &\iddots&\\\ 1&&\end{pmatrix}$ is the antidiagonal identity matrix.333Occasionally (in particular when doing concrete matrix calculations), it is more convenient to number the basis vectors $1,\dotsc,g,-1,\dotsc,-g$ instead of $1,\dotsc,g,-g,\dotsc,-1$. Then the standard symplectic form is given by $\left(\begin{smallmatrix}&I_{g}\\\ -I_{g}&\end{smallmatrix}\right)$, $I_{g}$ being the $g\times g$ identity matrix. Let $J$ be the endomorphism of $V_{\mathbb{R}}$ of the form $\begin{pmatrix}&-\tilde{I}_{g}\\\ \tilde{I}_{g}&\end{pmatrix}$ with respect to this basis. 
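To make the normal forms above concrete, here is a small numerical sanity check for $g=2$ (an illustrative sketch added for this exposition, not part of the original text; it assumes `numpy`): with $\psi_{\mathbb{R}}$ and $J$ in the matrix form just described, one verifies that $J\circ J=-\operatorname{id}$ and that $\psi_{J}=\psi_{\mathbb{R}}(\\_,J\\_)$ is symmetric and positive definite — in this basis $\psi_{J}$ is in fact the identity matrix.

```python
# Illustrative numerical check of Example 1.2 (not from the paper; assumes numpy).
# With psi in the antidiagonal normal form and J = ((0, -I~), (I~, 0)) as above,
# J is a complex structure and psi_J = psi(_, J_) is symmetric positive definite;
# in this basis psi_J is in fact the identity matrix, so J lies in S^+.
import numpy as np

g = 2
Itilde = np.fliplr(np.eye(g))                  # antidiagonal identity I~
Z = np.zeros((g, g))
Psi = np.block([[Z, Itilde], [-Itilde, Z]])    # Gram matrix of psi
J = np.block([[Z, -Itilde], [Itilde, Z]])      # candidate complex structure

assert np.allclose(J @ J, -np.eye(2 * g))      # J o J = -id
PsiJ = Psi @ J                                 # Gram matrix of psi_J(u, v) = psi(u, Jv)
assert np.allclose(PsiJ, PsiJ.T)               # psi_J is symmetric
assert np.all(np.linalg.eigvalsh(PsiJ) > 0)    # ... and positive definite
print(PsiJ)                                    # the 4x4 identity matrix
```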
Then $J\in S^{+}$ and what we have described is a surjective map $\\{\text{symplectic bases of }(V_{\mathbb{R}},\psi_{\mathbb{R}})\\}\twoheadrightarrow S^{+}.$ In particular we see that $\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}}):=\\{f\in\operatorname{GL}(V_{\mathbb{R}})\;|\;\psi_{\mathbb{R}}(f(\\_),f(\\_))=\psi_{\mathbb{R}}\\}$ (by virtue of acting simply transitively on the symplectic bases) acts transitively on $S^{+}\cong\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}})/\operatorname{SpO}(V_{\mathbb{R}},\psi_{\mathbb{R}},J)$ (where we define $\operatorname{SpO}(V_{\mathbb{R}},\psi_{\mathbb{R}},J):=\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}})\cap O(V_{\mathbb{R}},\psi_{J})=U((V_{\mathbb{R}},J),\psi_{J})$ for a fixed choice of $J\in S^{+}$) and therefore the general symplectic group $\operatorname{GSp}(V_{\mathbb{R}},\psi_{\mathbb{R}}):=\\{f\in\operatorname{GL}(V_{\mathbb{R}})\;|\;\psi_{\mathbb{R}}(f(\\_),f(\\_))=c\cdot\psi_{\mathbb{R}}\text{ for some }c\in\mathbb{R}^{\times}\\}$ acts transitively on $S^{\pm}$ (note that the element of the form $e_{\pm i}\mapsto e_{\mp i}$ of $\operatorname{GSp}(V_{\mathbb{R}},\psi_{\mathbb{R}})$ for any given choice of symplectic basis $\left(e_{i}\right)_{i}$ permutes $S^{+}$ and $S^{-}$). ###### Definition 1.3. Condition (1) of Definition 1.1 implies that the action of $\mathbb{G}_{m,\mathbb{R}}$ (embedded in $\mathbb{S}$ in the natural way) on $\operatorname{Lie}(G_{\mathbb{R}})$ is trivial, so that $h$ induces a homomorphism ${w\colon\mathbb{G}_{m,\mathbb{R}}\to\operatorname{Cent}(G_{\mathbb{R}})}$. This homomorphism is independent of the choice of $h\in X$ and is called the _weight homomorphism_ of $(G,X)$. Moreover, we denote by $\\{\mu\\}$ the $G(\mathbb{C})$-conjugacy class of the cocharacter $\mu_{h}:=h\circ(\operatorname{id}_{\mathbb{G}_{m,\mathbb{C}}},1)\colon\mathbb{G}_{m,\mathbb{C}}\to\mathbb{G}_{m,\mathbb{C}}^{2}\cong\mathbb{S}_{\mathbb{C}}\to G_{\mathbb{C}}$, where $h$ is as above. Obviously, the conjugacy class $\\{\mu\\}$ is independent of the particular choice of $h\in X$. ###### Remark 1.4. Let $L/\mathbb{Q}$ be a field extension such that $G_{L}$ contains a split maximal torus $T$. Let $W:=\operatorname{Norm}_{G(L)}(T)/T$ be the Weyl group. Then the natural map $W\backslash\operatorname{Hom}_{L\text{-grp}}(\mathbb{G}_{m,L},T)\to G(L)\backslash\operatorname{Hom}_{L\text{-grp}}(\mathbb{G}_{m,L},G_{L})$ is bijective. Since the left hand side remains unchanged if we go from $L=\bar{\mathbb{Q}}$ (where as usual $\bar{\mathbb{Q}}$ denotes an algebraic closure of $\mathbb{Q}$) to $L=\mathbb{C}$, we see that $\\{\mu\\}$ contains a cocharacter defined over $\bar{\mathbb{Q}}$ and that we may then also consider $\\{\mu\\}$ as a $G(\bar{\mathbb{Q}})$-conjugacy class. ###### Definition 1.5. The _reflex field_ $\mathbf{E}=\mathbf{E}(G,X)$ of $(G,X)$ is the field of definition of $\\{\mu\\}$, i.e., the fixed field in $\bar{\mathbb{Q}}$ of $\\{\gamma\in\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\;|\;\gamma(\\{\mu\\})=\\{\mu\\}\\}$. ###### Example 1.6. The reflex field of the Shimura datum $(\operatorname{GSp}_{2g,\mathbb{Q}},S^{\pm})$ of Example 1.2 is $\mathbb{Q}$. To wit, one of the cocharacters in the conjugacy class $\\{\mu\\}$ is $\mu(z)=\left(\begin{smallmatrix}z&&&&&\\\ &\ddots&&&&\\\ &&z&&&\\\ &&&1&&\\\ &&&&\ddots&\\\ &&&&&1\end{smallmatrix}\right).$ ###### Notation 1.7. 
We denote the ring of (rational) adeles by $\mathbb{A}:=\mathbb{A}_{\mathbb{Q}}$, the subring of finite adeles by $\mathbb{A}_{f}:=\mathbb{A}_{\mathbb{Q},f}$ and the subring of finite adeles away from some fixed prime $p$ by $\mathbb{A}_{f}^{p}$. ###### Definition and Remark 1.8. Let $K\subseteq G(\mathbb{A}_{f})$ be a compact open subgroup. The _Shimura variety of level $K$ associated with $(G,X)$_ is the double coset space $\operatorname{Sh}_{K}(G,X):=G(\mathbb{Q})\backslash(X\times(G(\mathbb{A}_{f})/K)).$ A priori, this is just a set, but if $K$ is sufficiently small (i.e., “neat” in the sense of [Bor69, Pin90]), $\operatorname{Sh}_{K}(G,X)$ can be canonically written as a finite disjoint union of hermitian symmetric domains.444If $K$ fails to be sufficiently small, one might very reasonably argue that our definition of the Shimura variety of level $K$ really is the definition of the _coarse_ Shimura variety and that one should be working with stacks instead. Since we will only be interested in sufficiently small level, this is inconsequential for us. In particular, this gives $\operatorname{Sh}_{K}(G,X)$ the structure of a complex manifold. In fact, by the theorem of Baily-Borel, this complex manifold attains the structure of a quasi-projective complex variety in a canonical way. By work of Deligne, Milne and Borovoi, this variety is defined already (and again in a canonical way) over the reflex field $\mathbf{E}$. So in particular, it is defined over a number field independent of $K$. This is important when varying $K$ and it is the reason why we consider the whole Shimura variety instead of its connected components over $\mathbb{C}$ on their own. It is possible for the Shimura variety to have multiple connected components over $\mathbb{C}$ while being connected over $\mathbf{E}$. More detailed explanations may be found in [Mil05]. ### 1.2 Bruhat-Tits buildings Let $K$ be a complete discrete valuation field with ring of integers $\mathcal{O}$, uniformizer $\varpi$ and perfect residue field $\kappa:=\mathcal{O}/\varpi$. ###### Notation 1.9. For a (connected) reductive group $G$ over $K$, we denote by $\mathcal{B}(G,K)$ the extended (or enlarged) and by $\mathcal{B}^{\mathrm{red}}(G,K)$ the reduced (i.e., non-extended) Bruhat-Tits building of $G$ over $K$ [BT84]. Moreover, $\mathcal{B}^{\mathrm{abstract}}(G,K)$ denotes the underlying abstract simplicial complex. ###### Remark 1.10. Let $V$ be a finite-dimensional $K$-vector space. As described in [KP15, 1.1.9] (originally in [BT84a]), the points of $\mathcal{B}(\operatorname{GL}(V),K)$ correspond to graded periodic lattice chains $(\mathcal{L},c)$, i.e., * • $\emptyset\neq\mathcal{L}$ is a totally ordered set of full $\mathcal{O}$-lattices in $V$ stable under scalar multiplication (i.e., $\Lambda\in\mathcal{L}\iff\varpi\Lambda\in\mathcal{L}$), * • $c\colon\mathcal{L}\to\mathbb{R}$ is a strictly decreasing function such that $c(\varpi^{n}\Lambda)=c(\Lambda)+n$. ###### Remark 1.11. Fix such an $\mathcal{L}$ and let $\Lambda^{0}\in\mathcal{L}$. Then every homothety class of lattices has a unique representative $\Lambda$ such that $\Lambda\subseteq\Lambda^{0}$ and $\Lambda\not\subseteq\varpi\Lambda^{0}$. Consider such representatives $\Lambda^{i}$ for all of the distinct homothety classes of lattices that make up $\mathcal{L}$. 
Because $\mathcal{L}$ is totally ordered and $\Lambda^{i}\not\subseteq\varpi\Lambda^{0}$, it follows that $\Lambda^{i}\supseteq\varpi\Lambda^{0}$ for all $i$ and that $\left\\{\Lambda^{i}/\varpi\Lambda^{0}\right\\}_{i}$ is a flag of non-trivial linear subspaces of $\Lambda^{0}/\varpi\Lambda^{0}\cong\kappa^{n}$, where $n:=\dim V$. Consequently, the number $r$ of homothety classes is in $\\{1,\dotsc,n\\}$; it is called the _period length_ (or _rank_) of $\mathcal{L}$. Numbering the $\Lambda^{i}$ in descending order we hence obtain $r$ lattices $\Lambda^{0},\Lambda^{1},\dotsc,\Lambda^{r-1}$ such that $\Lambda^{0}\supsetneqq\Lambda^{1}\supsetneqq\dotsb\supsetneqq\Lambda^{r-1}\supsetneqq\varpi\Lambda^{0}$ (1.12) and $\mathcal{L}$ is given by the the strictly descending sequence of lattices $\Lambda^{qr+i}=\varpi^{q}\Lambda^{i},\quad q\in\mathbb{Z},\;0\leq i<r.$ ###### Remark 1.13. Let $V$ be a finite-dimensional symplectic $K$-vector space. $\mathcal{B}(\operatorname{GSp}(V),K)$ embeds into the subset of $\mathcal{B}(\operatorname{GL}(V),K)$ consisting of those $(\mathcal{L},c)$ such that $\Lambda\in\mathcal{L}\implies\Lambda^{\vee}\in\mathcal{L}$. Passing to the underlying abstract simplicial complexes means forgetting about the grading $c$ and $\mathcal{B}^{\mathrm{abstract}}(\operatorname{GSp}(V),K)=\\{\mathcal{L}\in\mathcal{B}^{\mathrm{abstract}}(\operatorname{GL}(V),K)\;|\;\Lambda\in\mathcal{L}\implies\Lambda^{\vee}\in\mathcal{L}\\}.$ If $\mathcal{L}\in\mathcal{B}^{\mathrm{abstract}}(\operatorname{GSp}(V),K)$ and $\\{\Lambda^{i}\\}_{i}$ is as in Remark 1.11, then there is an involution $t\colon\mathbb{Z}\to\mathbb{Z}$ with $\left(\Lambda^{i}\right)^{\vee}=\Lambda^{t(i)}$, $t(i+qr)=t(i)-qr$, and $i<j\implies t(i)>t(j)$. So $-a:=t(0)>t(1)>\dotsb>t(r)=-a-r$, which implies $t(i)=-i-a$. Thus $i_{0}-t(i_{0})=2i_{0}+a\in\\{0,1\\}$ for some unique $i_{0}\in\mathbb{Z}$. Hence, upon renumbering the $\Lambda^{i}$, we may assume that $a\in\\{0,1\\}$. We therefore have $\displaystyle\varpi\Lambda^{0}\subsetneqq\Lambda^{r-1}\subsetneqq\Lambda^{r-2}\subsetneqq\dotsb\subsetneqq\Lambda^{0}\subseteq\left(\Lambda^{0}\right)^{\vee}=\Lambda^{-a}\subsetneqq\left(\Lambda^{1}\right)^{\vee}=\Lambda^{-1-a}$ $\displaystyle\subsetneqq\dotsb\subsetneqq\left(\Lambda^{r-1}\right)^{\vee}=\Lambda^{-r+1-a}\subseteq\Lambda^{-r}=\varpi^{-1}\Lambda^{0}.$ ###### Example 1.14. See also section 2.2.5 for some elaborations on the building of $\operatorname{GSp}_{4}(\mathbb{Q}_{p})$. ### 1.3 Bruhat-Tits group schemes ###### Notation 1.15. Let $E$ be a finite field extension of $\mathbb{Q}_{p}$. Denote by $\breve{E}$ the completion of the maximal unramified extension of $E$ (hence $\breve{E}=E\cdot\breve{\mathbb{Q}}_{p}$). ###### Remark 1.16. If $E/\mathbb{Q}_{p}$ is unramified, then ${{\cal O}_{\breve{E}}}=W(\bar{\mathbb{F}}_{p})$, $\bar{\mathbb{F}}_{p}$ denoting an algebraic closure of $\mathbb{F}_{p}$ and $W\colon\mathrm{Ring}\to\mathrm{Ring}$ being the ($p$-adic) Witt vectors functor. This generalizes to the ramified case using _ramified Witt vectors_ instead, see e.g. [Haz78, Chap. IV, (18.6.13)] or [Ahs11, Chapter 1]. Let $(G,X)$ be a Shimura datum of Hodge type, let $(G,X)\hookrightarrow(\operatorname{GSp}(V),S^{\pm})$ be an embedding as in Definition 1.1 (4), and let $x\in\mathcal{B}(G,\mathbb{Q}_{p})$ be a point in the Bruhat-Tits building of $G$ over $\mathbb{Q}_{p}$. 
We consider the associated Bruhat-Tits scheme ${\cal G}_{x}$, i.e., the affine smooth model of $G_{\mathbb{Q}_{p}}$ over $\mathbb{Z}_{p}$ such that ${\cal G}_{x}(\breve{\mathbb{Z}}_{p})\subseteq G(\breve{\mathbb{Q}}_{p})$ is the stabilizer of the facet of $x$ in ${\cal B}(G,\breve{\mathbb{Q}}_{p})\overset{\text{[Landvogt, Prop. 2.1.3]}}{=}\mathcal{B}(G,\mathbb{Q}^{\mathrm{ur}}_{p})$. Let $K_{p}:={\cal G}_{x}(\mathbb{Z}_{p})\subseteq G(\mathbb{Q}_{p})$ and let $K^{p}\subseteq G(\mathbb{A}_{f}^{p})$ be a sufficiently small open compact subgroup. Define $K:=K_{p}K^{p}\subseteq G(\mathbb{A}_{f})$. ###### Assumptions 1.17. From now on, we will always make the following assumptions: * • $\mathcal{G}_{x}=\mathcal{G}_{x}^{\circ}$ is connected. * • $G$ splits over a tamely ramified extension of $\mathbb{Q}_{p}$. * • $p\nmid\\#\pi_{1}(G^{\mathrm{der}})$. ###### Notation 1.18. In order not to make notation overly cumbersome, we usually denote the base change $G_{\mathbb{Q}_{p}}$ of $G$ to $\mathbb{Q}_{p}$ by $G$ again. (Later, we will almost exclusively be dealing with $G_{\mathbb{Q}_{p}}$.) ### 1.4 Siegel integral models With notation as above let $\displaystyle N_{p}$ $\displaystyle:=\operatorname{Stab}_{\operatorname{GSp}(V)(\mathbb{Q}_{p})}(\mathcal{L})\quad\text{(as before)},$ $\displaystyle J_{p}$ $\displaystyle:=\operatorname{Stab}_{\operatorname{GL}(V^{\S})(\mathbb{Q}_{p})}(\Lambda^{\S})\cap\operatorname{GSp}(V^{\S})(\mathbb{Q}_{p}).$ Let $N^{p}\subseteq\operatorname{GSp}(V)(\mathbb{A}_{f}^{p})$ and $J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$ be sufficiently small open compact subgroups, and $N:=N_{p}N^{p}$, $J:=J_{p}J^{p}$. In this subsection, we are going to describe integral models of $\operatorname{Sh}_{N}(\operatorname{GSp}(V),S^{\pm})$ and of $\operatorname{Sh}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ over $\mathbb{Z}_{(p)}$ and relate the two. ###### Remark 1.19. By [RZ96, Definition 6.9], the integral model $\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})$ is given by the moduli problem $(\mathbb{Z}_{(p)}\text{-schemes})\ni S\mapsto\left\\{(A,\bar{\lambda},\eta^{p})\right\\}/{\scriptstyle\cong}$, where: 1. (a) $A=\left(A_{\Lambda}\right)_{\Lambda\in\mathcal{L}}$ is an $\mathcal{L}$-set of abelian schemes, i.e., * • for every $\Lambda\in\mathcal{L}$, an abelian $S$-scheme up to $\mathbb{Z}_{(p)}$-isogeny $A_{\Lambda}$ (i.e., $A_{\Lambda}$ is an object of the category $(\text{abelian }S\text{-schemes})\otimes\mathbb{Z}_{(p)}$, where the category $\mathcal{A}\otimes R$ for $\mathcal{A}$ a preadditive category and $R$ a ring has the same objects as $\mathcal{A}$ and $\operatorname{Hom}_{\mathcal{A}\otimes R}(X,Y)=\operatorname{Hom}(X,Y)\otimes_{\mathbb{Z}}R$ for all objects $X,Y$), * • for every inclusion $\Lambda_{1}\subseteq\Lambda_{2}$ a $\mathbb{Z}_{(p)}$-isogeny $\rho_{\Lambda_{2},\Lambda_{1}}\colon A_{\Lambda_{1}}\to A_{\Lambda_{2}}$, * • $\rho_{\Lambda_{3},\Lambda_{1}}=\rho_{\Lambda_{3},\Lambda_{2}}\circ\rho_{\Lambda_{2},\Lambda_{1}}$ if $\Lambda_{1}\subseteq\Lambda_{2}\subseteq\Lambda_{3}$ in $\mathcal{L}$, * • the height of $\rho_{\Lambda_{2},\Lambda_{1}}$ is $\log_{p}|\Lambda_{2}/\Lambda_{1}|$. 
Here $\rho_{\Lambda_{2},\Lambda_{1}}$ gives rise to a well-defined homomorphism of $p$-divisible groups, and what we mean is that the kernel of this homomorphism (which is a finite locally free commutative group scheme, which we also refer to simply as the kernel of $\rho_{\Lambda_{2},\Lambda_{1}}$) is to have order $|\Lambda_{2}/\Lambda_{1}|$. * • For every $\Lambda\in\mathcal{L}$, there is an isomorphism (called _periodicity isomorphism_) $\theta_{\Lambda}\colon A_{\Lambda}\to A_{p\Lambda}$ such that $\rho_{\Lambda,p\Lambda}\circ\theta_{\Lambda}=[p]\colon A_{\Lambda}\to A_{\Lambda}$ is the multiplication-by-$p$ isogeny. 2. (b) $\bar{\lambda}\colon A\to\tilde{A}$ is a $\mathbb{Q}$-homogeneous principal polarization, i.e., a $\underline{\mathbb{Q}^{\times}}$-orbit of a principal polarization $\lambda\colon A\to\tilde{A}$. Here $\tilde{A}$ is the $\mathcal{L}$-set of abelian schemes over $S$ up to prime-to-$p$ isogeny given by $\tilde{A}_{\Lambda}:=(A_{\Lambda^{\vee}})^{\vee}$. And being a polarization $\lambda$ means being a quasi-isogeny of $\mathcal{L}$-sets $\lambda\colon A\to\tilde{A}$ such that $A_{\Lambda}\xrightarrow{\lambda_{\Lambda}}\tilde{A}_{\Lambda}=(A_{\Lambda^{\vee}})^{\vee}\xrightarrow{\varrho_{\Lambda^{\vee},\Lambda}^{\vee}}(A_{\Lambda})^{\vee}$ is a polarization of $A_{\Lambda}$ for all $\Lambda$. If $\lambda_{\Lambda}$ can be chosen to be an isomorphism up to prime-to-$p$ isogeny for all $\Lambda$, then we speak of a principal polarization. In that case, when referring to $\lambda_{\Lambda}$, we mean a $\lambda_{\Lambda}$ which is an isomorphism up to prime-to-$p$ isogeny. 3. (c) $\eta^{p}$ is a level-$N^{p}$-structure, i.e. (if $S$ is connected), it is a $\pi_{1}(S,s)$-invariant $N^{p}$-orbit of symplectic similitudes $\eta^{p}\colon V_{\mathbb{A}_{f}^{p}}\to H_{1}(A_{s},\mathbb{A}_{f}^{p})$ (where $s$ is some geometric basepoint and $H_{1}(A_{s},\mathbb{A}_{f}^{p})$ with its $\pi_{1}(S,s)$-action corresponds to the Tate $\mathbb{A}_{f}^{p}$-module of $A$ (cf. [RZ96, 6.8]), which is a smooth $\mathbb{A}_{f}^{p}$-sheaf). Note that this forces the abelian schemes $A_{\Lambda}$ to be $(\dim_{\mathbb{Q}}V)$-dimensional. ###### Definition 1.20. Set $\Lambda^{\S}_{\mathbb{Z}_{(p)}}:=\Lambda^{\S}_{\mathbb{Z}_{p}}\cap V^{\S}_{\mathbb{Q}}=\prod_{i=-(r-1)-a}^{r-1}\Lambda_{\mathbb{Z}_{(p)}}^{i}$. We choose a lattice $\Lambda^{\S}_{\mathbb{Z}}\subseteq V^{\S}$ such that $\Lambda^{\S}_{\mathbb{Z}}\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}=\Lambda^{\S}_{\mathbb{Z}_{(p)}}$ and $\Lambda^{\S}_{\mathbb{Z}}\subseteq(\Lambda^{\S}_{\mathbb{Z}})^{\vee}$. ###### Remark 1.21. Set $d:=\bigl{|}\left(\Lambda_{\mathbb{Z}}^{\S}\right)^{\vee}/\Lambda_{\mathbb{Z}}^{\S}\bigr{|}$. By [Kis10, 2.3.3, 3.2.4], the integral model $\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ is given by the moduli problem $(\mathbb{Z}_{(p)}\text{-schemes})\ni S\mapsto\left\\{(A^{\S},\lambda^{\S},\epsilon^{p})\right\\}/{\scriptstyle\cong}$, where 1. (a) $A^{\S}$ is an abelian scheme over $S$ up to $\mathbb{Z}_{(p)}$-isogeny, 2. (b) $\lambda^{\S}\colon A^{\S}\to\left(A^{\S}\right)^{\vee}$ is a polarization of degree $d$ (i.e., the polarization of the (well-defined) associated $p$-divisible group has degree $d$), 3. (c) $\epsilon^{p}$ is a level-$J^{p}$-structure, i.e. (if $S$ is connected), it is a $\pi_{1}(S,s)$-invariant $J^{p}$-orbit of symplectic similitudes $\epsilon^{p}\colon V^{\S}_{\mathbb{A}_{f}^{p}}\to H_{1}(A^{\S}_{s},\mathbb{A}_{f}^{p})$. 
Note that this forces the abelian schemes $A^{\S}$ to be $(\dim_{\mathbb{Q}}V^{\S})$-dimensional. This completes the descriptions of the moduli problems, and we turn to the question of the relationship between the two. Consider (for appropriate $N^{p},J^{p}$; see below) the morphism $\chi\colon\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ given on $S$-valued points by sending $(A,\bar{\lambda},\eta^{p})$ to $(A^{\S},\lambda^{\S},\epsilon^{p})$, where 1. (a) $\displaystyle A^{\S}:=\prod_{i=-(r-1)-a}^{r-1}A_{\Lambda^{i}}$, 2. (b) $\displaystyle\lambda^{\S}:=\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{\Lambda^{i}}\right)$, 3. (c) $\epsilon^{p}$ is the product $\prod_{i=-(r-1)-a}^{r-1}\eta^{p}$, to be interpreted as the product over $\eta^{p}\colon V_{\mathbb{A}_{f}^{p}}\to H_{1}(A_{\Lambda^{i},s},\mathbb{A}_{f}^{p})\cong H_{1}(A_{s},\mathbb{A}_{f}^{p})$, where the isomorphism $H_{1}(A_{\Lambda^{i},s},\mathbb{A}_{f}^{p})\cong H_{1}(A_{s},\mathbb{A}_{f}^{p})$ is by definition the identity for some fixed $i=i_{0}$ and otherwise induced by the transition map $\rho_{\Lambda^{i},\Lambda^{i_{0}}}$. We need that $N^{p}$ is mapped into $J^{p}$ by $\operatorname{GSp}(V)\hookrightarrow\operatorname{GSp}(V^{\S})$ for this to make sense. ###### Lemma 1.22. Let $S$ be a scheme and let $\ell\neq p$ be prime numbers. If $\ell$ does not appear as a residue characteristic of $S$, then the Tate module functors $\displaystyle H_{1}(\\_,\mathbb{Z}_{\ell})$ $\displaystyle\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{Z}_{\ell}\text{-local systems on }S),$ $\displaystyle H_{1}(\\_,\mathbb{Q}_{\ell})$ $\displaystyle\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{Q}_{\ell}\text{-local systems on }S)$ (cf. [Gro74, III, 5.4 and 6.2] for precise definitions) are faithful. If only $p$ and $0$ appear as residue characteristics of $S$, then the Tate module functor $H_{1}(\\_,\mathbb{A}_{f}^{p})\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{A}_{f}^{p}\text{-local systems on }S)$ is faithful. ###### Proof: First note that the statements about $H_{1}(\\_,\mathbb{Q}_{\ell})$ and $H_{1}(\\_,\mathbb{A}_{f}^{p})$ follow from the statement about $H_{1}(\\_,\mathbb{Z}_{\ell})$, which is why it is enough to only look at $H_{1}(\\_,\mathbb{Z}_{\ell})$. A homomorphism of abelian $S$-schemes $f\colon A\to B$ vanishes if and only if it vanishes over every (geometric) fiber of $S$: Indeed, if it vanishes fiberwise, then it is flat by the fiber criterion for flatness. Applying that criterion again we see that the closed immersion and fiberwise isomorphism $\ker(f)\hookrightarrow A$ is flat, which means that it is an isomorphism. This way we are reduced to the case where the base $S$ is the spectrum of an (algebraically closed) field of characteristic different from $\ell$. In this setting the faithfulness is well-known (the salient point being that the $\ell$-primary torsion is dense). □ ###### Lemma 1.23. Let $H$ be a totally disconnected locally compact555By (our) definition, locally compact implies Hausdorff. group (i.e., a locally profinite group) and let $N\subseteq H$ be a compact subgroup. Then $N=\bigcap_{\begin{subarray}{c}N\subseteq J\\\ J\subseteq H\text{ open compact subgrp.}\end{subarray}}J.$ Note that this is (a variant of) a well-known theorem by van Dantzig if $N=\\{1\\}$ [Dan36]. ###### Proof: We make use of the following fact [AT08, Prop. 
3.1.7]: A Hausdorff space is locally compact and totally disconnected if and only if the open compact sets form a basis of the topology. (Van Dantzig’s theorem is the group version of this, which talks only about a neighborhood basis of the identity and open compact _subgroups_.) First we show that $N$ is contained in some open compact subset $K\subseteq H$. For every $x\in N$ choose a compact open neighborhood $x\in K_{x}\subseteq H$. This is possible by the fact cited above. Then there is a finite subset $I\subseteq N$ such that $N\subseteq\bigcup_{x\in I}K_{x}=:K$. Next, for every $x\in N$ choose an open neighborhood of the identity $U_{x}$ such that $xU_{x}K\subseteq K$. With $N\subseteq U:=\bigcup_{x\in N}xU_{x}$ we obtain $UK\subseteq K$. Replacing $U$ by $U\cap U^{-1}$, we may moreover assume it is symmetric. The subgroup generated by $U$ is open (hence closed) and contained in $K$, hence is an open compact subgroup. Thus $N$ even is contained in an open compact sub _group_ ; in other words, we may assume that $H$ is compact, i.e., is a profinite group. Then $H/N$ is compact666Hausdorff quotient spaces of compact spaces are compact again, but for “locally compact” the analogous statement is not true in general! and totally disconnected777Take $x,y\in H$ such that $xN\neq yN$. We show that any subspace $S\subseteq H/N$ containing both $xN$ and $yN$ is disconnected. Let $U\subseteq H/N$ be a neighborhood of $xN$ not containing $yN$. Let $x\in V\subseteq\pi^{-1}(U)$ be open and compact, where $\pi\colon H\to H/N$ is the projection. Then $yN\notin\pi(V)\subseteq H/N$ is open and compact (hence closed) and we have $S=(\pi(V)\cap S)\sqcup S\setminus\pi(V)$ where both $\pi(V)\cap S$ and $S\setminus\pi(V)$ are open in $S$. This shows that $S$ is disconnected. (i.e., is a Stone space). By the fact cited above, $H/N\supseteq\\{1\\}=\bigcap_{L\subseteq H/N\text{ open compact subset}}L.$ Observe that the quotient map $H\to H/N$ is proper to deduce $N=\bigcap_{\begin{subarray}{c}N\subseteq M\\\ M\subseteq H\text{ open compact subset}\end{subarray}}M.$ Say $M$ is an open and compact subset of $H$ containing $N$. As we have shown above, there is an open compact subgroup $J\subseteq H$ in between $N$ and $M$, and this is all we need to complete the proof. □ ###### Proposition 1.24. For every compact open subgroup $N^{p}\subseteq\operatorname{GSp}(V)(\mathbb{A}_{f}^{p})$ $\chi\colon\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ is a well-defined morphism for all compact open subgroups $N^{p}\subseteq J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$ and is a closed immersion for all sufficiently small compact open subgroups $N^{p}\subseteq J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$. ###### Proof: The fact that it’s well-defined is clear from the construction. To show the second statement, as in [Del71, Prop. 1.15], it is enough to show that $\mathscr{S}_{N_{p}N^{p}}(\operatorname{GSp}(V),S^{\pm})\to\varprojlim_{J^{p}}\mathscr{S}_{J_{p}J^{p}}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ is a closed immersion, i.e., a proper monomorphism. We begin by proving that it is a monomorphism, i.e., injective on $S$-valued points ($S$ arbitrary $\mathbb{Z}_{(p)}$-scheme). So, say $(A_{1},\lambda_{1},\eta_{1}^{p})$ and $(A_{2},\lambda_{2},\eta_{2}^{p})$ both map to $(A^{\S},\lambda^{\S},\epsilon_{J^{p}}^{p})$. 
That means precisely that there is an isomorphism of abelian $S$-schemes up to $\mathbb{Z}_{(p)}$-isogeny $\phi\colon\prod_{i=-(r-1)-a}^{r-1}A_{1,\Lambda^{i}}\xrightarrow{\cong}\prod_{i=-(r-1)-a}^{r-1}A_{2,\Lambda^{i}}$ such that $\phi^{\vee}\circ\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{2,\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{2,\Lambda^{i}}\right)\circ\phi=\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{1,\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{1,\Lambda^{i}}\right)$ and $H_{1}(\phi,\mathbb{A}_{f}^{p})\circ\epsilon_{1,J^{p}}^{p}=\epsilon_{2,J^{p}}^{p}\mod{J^{p}}.$ We claim that $\phi$ comes from isomorphisms $\phi_{i}\colon A_{1,\Lambda^{i}}\xrightarrow{\cong}A_{2,\Lambda^{i}}.$ Certainly there is but one candidate for $\phi_{i}$: define $\phi_{i}$ to be the composition $A_{1,\Lambda^{i}}\xrightarrow{\mathrm{incl}}\prod_{i=-(r-1)-a}^{r-1}A_{1,\Lambda^{i}}\xrightarrow{\phi}\prod_{i=-(r-1)-a}^{r-1}A_{2,\Lambda^{i}}\xrightarrow{\mathrm{proj}}A_{2,\Lambda^{i}}.$ Our claim then is that $\phi=\prod_{i=-(r-1)-a}^{r-1}\phi_{i}.$ Apply $H_{1}(\\_,\mathbb{A}_{f}^{p})$ on both sides. For the left hand side, we have $H_{1}(\phi,\mathbb{A}_{f}^{p})=\epsilon_{2,J^{p}}^{p}\circ\left(\epsilon_{1,J^{p}}^{p}\right)^{-1}\mod{J^{p}},$ and the right hand side of this equation is block diagonal. So $H_{1}(\phi,\mathbb{A}_{f}^{p})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{A}_{f}^{p})\mod{J^{p}}.$ Since (by Lemma 1.23) $N^{p}=\bigcap_{\begin{subarray}{c}N_{\ell}\subseteq J_{\ell}\\\ J_{\ell}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{Q}_{\ell})\text{ cpt. open subgrp.}\end{subarray}}J_{\ell},$ it follows that (with $\ell\neq p$) $H_{1}(\phi,\mathbb{Q}_{\ell})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{Q}_{\ell})\mod{N_{\ell}},$ hence (since $N_{\ell}$ acts block-diagonally) that $H_{1}(\phi,\mathbb{Q}_{\ell})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{Q}_{\ell})$. Since $H_{1}(\\_,\mathbb{Q}_{\ell})$ is faithful (Lemma 1.22), this implies $\phi=\prod_{i=-(r-1)-a}^{r-1}\phi_{i}$, as desired. Next, consider the extension by zero of $\left(H_{1}(\rho_{1/2,\Lambda^{j},\Lambda^{i}},\mathbb{A}_{f}^{p})\right)_{i,j}$ (where for “$1/2$” either “$1$” or “$2$” can be plugged in) to a map $H_{1}(A^{\S},\mathbb{A}_{f}^{p})\to H_{1}(A^{\S},\mathbb{A}_{f}^{p})$. Under the isomorphism given by the $J^{p}$-level structure this corresponds, up to the $J^{p}$-action, to the map $V^{\S}_{\mathbb{A}_{f}^{p}}\to V^{\S}_{\mathbb{A}_{f}^{p}}$ given by mapping the $i$’th copy of $V_{\mathbb{A}_{f}^{p}}$ identically to the $j$’th copy and the rest to zero. Thus $\rho_{1,\Lambda^{j},\Lambda^{i}}$ and $\rho_{2,\Lambda^{j},\Lambda^{i}}$ yield the same map up to $J^{p}$ after applying $H_{1}(\\_,\mathbb{A}_{f}^{p})$, hence they are equal in the $\mathbb{Z}_{(p)}$-isogeny category. Consequently, $\chi$ is a monomorphism. For properness, we will use the valuative criterion. Let $R$ be a discrete valuation ring with field of fractions $K$ and assume that a $K$-point $A^{\S}=\prod_{i=-(r-1)-a}^{r-1}A_{\Lambda^{i}}$ with its additional structures coming from $(A_{\Lambda^{i}})_{i}$ extends to an $R$-point $\mathcal{A}^{\S}$. Consider the map $A^{\S}\to A_{\Lambda^{i_{0}}}\to A^{\S}$ where the first map is a projection and the second an inclusion. By the Néron mapping property, this extends to a map $\mathcal{A}^{\S}\to\mathcal{A}^{\S}$. Define $\mathcal{A}_{\Lambda^{i_{0}}}$ to be the image of this map. 
The Néron mapping property also allows us to extend the transition isogenies $\rho_{\Lambda^{i_{0}},\Lambda^{j_{0}}}\colon\allowbreak{A_{\Lambda^{j_{0}}}\to A_{\Lambda^{i_{0}}}}$, $i_{0}\leq j_{0}$, the periodicity isomorphisms, and the polarization. Since $\pi_{1}(\operatorname{Spec}K)$ surjects onto $\pi_{1}(\operatorname{Spec}R)$ (see [Stacks, Tag 0BQM]), extending the level structure away from $p$ is trivial. □ ### 1.5 Local structure of the integral model #### 1.5.1 Generizations and irreducible components Let $\mathscr{X}\to\operatorname{Spec}\mathcal{O}_{\breve{E}}$ be a flat scheme locally of finite type; denote the special fiber by $X\to\operatorname{Spec}\bar{\mathbb{F}}_{p}$ and the generic fiber by $\mathcal{X}\to\operatorname{Spec}\breve{E}$. We assume that $\mathcal{X}$ is locally integral (e.g. smooth). For example, we can consider $(\mathscr{X},X,\mathcal{X})=(\mathscr{S}^{-}_{K}(G,X)_{{{\cal O}_{\breve{E}}}},\mathscr{S}^{-}_{K}(G,X)_{{{\cal O}_{\breve{E}}}}\otimes_{{{\cal O}_{\breve{E}}}}\bar{\mathbb{F}}_{p},\allowbreak{\operatorname{Sh}_{K}(G,X)\otimes_{E}\breve{E}})$. Let $\bar{x}\in X(\bar{\mathbb{F}}_{p})$. ###### Lemma 1.25. There is a generization $x$ of $\bar{x}$ which lies in the generic fiber $\mathcal{X}$, and is a closed point in there, i.e., $x\in\mathcal{X}(L)$ for a finite extension $L/\breve{E}$. ###### Definition 1.26. We shall call such a point $x$ a _closed point generization_ of $\bar{x}$ for short. ###### Proof: Due to flatness (going-down) there is _some_ generization in the generic fiber; call it $x_{0}$. By [Stacks, Tag 053U] the following set is dense (and in particular non-empty) in the closure of $\\{x_{0}\\}$ in $\mathcal{X}$: $\left\\{x\in\mathscr{X}\;|\;x\text{ is a specialization of }x_{0}\text{ and a closed point generization of }\bar{x}\right\\}.$ □ ###### Lemma 1.27. Notation as in the preceding lemma. The specialization $x\leadsto\bar{x}$ can be realized by an ${\cal O}_{L}$-valued point of $\mathscr{X}$. ###### Proof: First off, by [EGA2, 7.1.9], it can be realized by a morphism $\operatorname{Spec}R=\\{\eta,s\\}\to\mathscr{X}$ of ${{\cal O}_{\breve{E}}}$-schemes, where $R$ is a discrete valuation ring such that $L\cong\kappa(\eta)=\operatorname{Quot}(R)$ as field extensions of $\kappa(x)$. We hence get local homomorphisms of local rings ${{\cal O}_{\breve{E}}}\to{\cal O}_{\mathscr{X},\bar{x}}\to R$. Thus the discrete valuation on $L$ defined by $R$ extends the discrete valuation on $\breve{E}$. But there is but one such extension and its valuation ring is ${\cal O}_{L}$ (by definition). □ ###### Lemma 1.28. Mapping $x$ to the unique irreducible component of $\mathscr{X}$ that contains $x$ establishes a surjection from the set of closed point generizations $x$ of $\bar{x}$ to the set of irreducible components of $\mathscr{X}$ containing $\bar{x}$. ###### Proof: If $x_{0}\in\mathcal{X}$ is a generization of $\bar{x}$, then $x_{0}$ lies in a unique irreducible component of $\mathscr{X}$ because $\mathcal{X}$ is locally irreducible. Hence the map described above is well-defined. Now for surjectivity: Given an irreducible component $C$ of $\mathscr{X}$ containing $\bar{x}$, let $x_{0}\in C$ be the generic point. Then $x_{0}$ must be in the generic fiber (else we would be able to find a generization in the generic fiber by going-down). Now go through the proof of Lemma 1.25 with this particular choice of $x_{0}$. 
□ ### 1.6 The local model To give a very rough idea of what the _local model_ to be discussed in this section is supposed to accomplish: It should be an $\mathcal{O}_{E}$-scheme that is étale-locally isomorphic to $\mathscr{S}_{K}(G,X)$, but easier to understand by virtue of being of a more “linear-algebraic flavor”. In actuality however, the theory of local models quickly gets quite complicated once one departs from the simplest examples. #### 1.6.1 The Siegel case We do start with the simplest example. We consider the standard Iwahori subgroup $I_{p}\subseteq\operatorname{GSp}_{2g}(\mathbb{Z}_{p})$, defined as the preimage of the standard Borel subgroup of $\operatorname{GSp}_{2g}(\mathbb{F}_{p})$. In terms of the building (cf. Remark 1.13), it corresponds to the lattice chain $\mathcal{L}_{\mathrm{full}}$ given by $\displaystyle\Lambda^{0}=\mathbb{Z}_{p}^{2g}$ $\displaystyle\supsetneqq\Lambda^{1}=\mathbb{Z}_{p}^{2g-1}\oplus p\mathbb{Z}_{p}\supsetneqq\Lambda^{2}=\mathbb{Z}_{p}^{2g-2}\oplus p\mathbb{Z}_{p}^{2}$ (1.29) $\displaystyle\supsetneqq\dotsb\supsetneqq\Lambda^{2g-1}=\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{2g-1}\supsetneqq p\Lambda^{0}=p\mathbb{Z}_{p}^{2g}$ of period length $2g$. Consider a subset $J=\\{j_{0}>\dotsb>j_{m-1}\\}\subseteq\\{1,\dotsc,2g\\}$ such that for each $j\in J$ with $1\leq j\leq 2g-1$ also $2g-j\in J$, and let $K_{p}$ be the parahoric subgroup associated with the partial lattice chain $\mathcal{L}\subseteq\mathcal{L}_{\mathrm{full}}$ obtained from $\left\\{\Lambda^{j}\;|\;j\in J\right\\}$. Define a scheme $\tilde{\mathscr{S}}_{K}(G,X)$ over $\mathscr{S}_{K}(G,X)$ as follows: $\tilde{\mathscr{S}}_{K}(G,X)(S)=\left\\{(A,\bar{\lambda},\eta^{p},\tau)\;\middle|\;(A,\bar{\lambda},\eta^{p})\in\mathscr{S}_{K}(\operatorname{GSp}_{2g},S^{\pm})(S),\ \tau\colon H_{\mathrm{dR}}^{1}(A)\xrightarrow{\sim}\mathcal{L}\otimes\mathcal{O}_{S}\text{ an isomorphism of lattice chains}\right\\}$ for every $\mathbb{Z}_{p}$-scheme $S$. By [RZ96, Appendix to Chap. 3], $\tilde{\mathscr{S}}_{K}(G,X)\to\mathscr{S}_{K}(G,X)$ is a Zariski torsor under the automorphism group of $\mathcal{L}$, i.e., the Iwahori group scheme. This motivates the definition of the local model $M^{\mathrm{loc}}_{K_{p}}\to\operatorname{Spec}\mathbb{Z}_{p}$ as the “moduli space of Hodge filtrations”; more precisely: ###### Remark 1.30. (See [Gör03, 91].) $M^{\mathrm{loc}}_{K_{p}}(S)$ is the set of isomorphism classes of commutative diagrams $\begin{array}{ccccccccc}\Lambda^{j_{0}}_{S}&\longrightarrow&\Lambda^{j_{1}}_{S}&\longrightarrow&\dotsb&\longrightarrow&\Lambda^{j_{m-1}}_{S}&\overset{\cdot p}{\longrightarrow}&\Lambda^{j_{0}}_{S}\\\ \cup&&\cup&&&&\cup&&\cup\\\ \mathcal{F}^{j_{0}}&\longrightarrow&\mathcal{F}^{j_{1}}&\longrightarrow&\dotsb&\longrightarrow&\mathcal{F}^{j_{m-1}}&\longrightarrow&\mathcal{F}^{j_{0}}\end{array}$ with $\Lambda^{j}_{S}:=\Lambda^{j}\otimes_{\mathbb{Z}_{p}}\mathcal{O}_{S}$, $\mathcal{F}^{j}\subseteq\Lambda^{j}_{S}$ locally a direct summand of rank $g$, such that for all $j\in J$, $\mathcal{F}^{j}\to\Lambda^{j}_{S}\overset{\psi}{\cong}(\Lambda^{2g-j}_{S})^{*}\to(\mathcal{F}^{2g-j})^{*}$ vanishes, $\psi$ being the symplectic pairing. 
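The self-duality of the standard chain that underlies the condition “$j\in J\implies 2g-j\in J$” can also be checked by direct computation: for (1.29) one has $(\Lambda^{j})^{\vee}=p^{-1}\Lambda^{2g-j}$ with respect to the symplectic pairing. The following sketch (illustrative only and not part of the original text; the helper names are ours and it assumes `sympy`) verifies this for $g=2$ and $p=3$:

```python
# Illustrative sketch (not from the paper): verify the self-duality pattern
# (Lambda^j)^vee = p^{-1} * Lambda^{2g-j} of the standard chain (1.29),
# for g = 2 and p = 3, with respect to the standard symplectic pairing.
# Helper names are ours; assumes sympy is available.
from sympy import Matrix, Rational, zeros

g, p = 2, 3
n = 2 * g

# Gram matrix of psi w.r.t. the basis e_1, ..., e_g, e_{-g}, ..., e_{-1}.
Itilde = Matrix(g, g, lambda i, j: 1 if i + j == g - 1 else 0)  # antidiagonal identity
Psi = zeros(n, n)
Psi[:g, g:] = Itilde
Psi[g:, :g] = -Itilde

def val_p(q):
    """p-adic valuation of a nonzero rational number q."""
    num, den, v = abs(q.p), q.q, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def same_lattice(M, N):
    """True iff the columns of M and N span the same Z_p-lattice."""
    T = M.inv() * N
    integral = all(e == 0 or val_p(Rational(e)) >= 0 for e in T)
    return integral and val_p(Rational(T.det())) == 0

def Lam(j):
    """Basis matrix of Lambda^j = Z_p^(2g-j) + p*Z_p^j for 0 <= j <= 2g."""
    return Matrix.diag(*([1] * (n - j) + [p] * j))

def dual(M):
    """Basis matrix of the dual lattice {x : psi(x, L) in Z_p}."""
    return (M.T * Psi.T).inv()

for j in range(n + 1):
    assert same_lattice(dual(Lam(j)), Rational(1, p) * Lam(n - j))
print("(Lambda^j)^vee = p^(-1) Lambda^(2g-j) for all j -- as expected")
```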
$\dim\operatorname{Aut}(\mathcal{L})$$\operatorname{Aut}(\mathcal{L})$-torsor Since both morphisms in this diagram are smooth of the same dimension, it follows that for every finite field extension $\mathbb{F}_{q}/\mathbb{F}_{p}$ and every point $x\in\mathscr{S}_{K}(G,X)(\mathbb{F}_{q})$, there exists a point $y\in M^{\mathrm{loc}}_{K}(\mathbb{F}_{q})$ and an isomorphism $\mathcal{O}_{\mathscr{S}_{K}(G,X),x}^{h}\cong\mathcal{O}_{M^{\mathrm{loc}}_{K},y}^{h}$ of henselizations. In many (P)EL situations one has similar descriptions with the obvious extra structures. Sometimes however the so-called “naive” local models so obtained additionally need to be flattened, which leaves one without any self-evident moduli interpretation. #### 1.6.2 The relation between the integral and the local model Generalizing the Siegel example, we axiomatically characterize the relationship between the integral model of the Shimura variety and its local model: One wants a _local model diagram_ , i.e., a diagram of $\mathcal{O}_{E}$-schemes functorial in $K$ $\tilde{\mathscr{S}}_{K}(G,X)$$\mathscr{S}_{K}(G,X)$$M^{\mathrm{loc}}_{K}$equivariant and smooth of rel. dim. $\dim\mathcal{G}_{\mathcal{O}_{E}}$$\mathcal{G}_{\mathcal{O}_{E}}$-torsor (1.31) where $M^{\mathrm{loc}}_{K}$ is a projective flat $\mathcal{O}_{E}$-scheme with an action of $\mathcal{G}\otimes_{\mathbb{Z}_{p}}\mathcal{O}_{E}$ and generic fiber the canonical model of $G_{\bar{\mathbb{Q}}_{p}}/P_{\mu^{-1}}$ over $E$. By Kisin-Pappas [KP15] we do actually have such a diagram in our situation. #### 1.6.3 The Pappas-Zhu construction In [PZ13], Pappas and Zhu give a construction of the local model in quite a general context, in particular with no assumptions going beyond our running assumptions 1.17. ###### Remark 1.32. To this end, they construct an affine smooth group scheme $\underline{\mathcal{G}}_{K}\to\mathbb{A}^{1}_{\mathbb{Z}_{p}}=\operatorname{Spec}\mathbb{Z}_{p}[t]$ with the following key properties: 1. (1) $\underline{\mathcal{G}}_{K}$ has connected fibers, 2. (2) $\underline{\mathcal{G}}_{K}$ is reductive over $\operatorname{Spec}\mathbb{Z}_{p}[t^{\pm 1}]$, 3. (3) $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto p}\mathbb{Z}_{p}\cong\mathcal{G}_{K}$, in particular * • $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto p}\mathbb{Q}_{p}\cong G_{\mathbb{Q}_{p}}$ and * • $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{F}_{p}:=\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto 0}\mathbb{F}_{p}\cong\mathcal{G}_{K}\otimes\mathbb{F}_{p}$, 4. (4) $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{Q}_{p}[\mkern-2.0mu[t]\mkern-2.0mu]$ is parahoric for $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{Q}_{p}(\mkern-4.0mu(t)\mkern-4.0mu)$, 5. (5) $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{F}_{p}[\mkern-2.0mu[t]\mkern-2.0mu]$ is parahoric for $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{F}_{p}(\mkern-4.0mu(t)\mkern-4.0mu)$. ###### Definition and Remark 1.33. Let $X_{\mu}$ be the canonical model of $G_{\bar{\mathbb{Q}}_{p}}/P_{\mu^{-1}}$ over $E$, where for a cocharacter $\nu$ one defines $P_{\nu}:=\\{g\in G\;|\;\lim_{t\to 0}\nu(t)g\nu(t)^{-1}\text{ exists}\\}$. 
Let $S_{\mu}$ be the closed subvariety of $\operatorname{Gr}_{G}\times_{\mathbb{Q}_{p}}E$ with $S_{\mu}(\bar{\mathbb{Q}}_{p})=G(\bar{\mathbb{Q}}_{p}[\mkern-2.0mu[t]\mkern-2.0mu])\mu(t)G(\bar{\mathbb{Q}}_{p}[\mkern-2.0mu[t]\mkern-2.0mu])/G(\bar{\mathbb{Q}}_{p}[\mkern-2.0mu[t]\mkern-2.0mu]).$ Then $S_{\mu}$ can be $G_{E}$-equivariantly identified with $X_{\mu}$. ###### Definition 1.34. The local model $M^{\mathrm{loc}}_{G,\mu,K}$ now is defined to be the Zariski closure of $X_{\mu}\subseteq\operatorname{Gr}_{G}\times_{\mathbb{Q}_{p}}E$ in $\operatorname{Gr}_{\underline{\mathcal{G}}_{K},\mathbb{Z}_{p}}\otimes_{\mathbb{Z}_{p}}\mathcal{O}_{E}$, where $\operatorname{Gr}_{\underline{\mathcal{G}}_{K},\mathbb{Z}_{p}}:=\operatorname{Gr}_{\underline{\mathcal{G}}_{K},\mathbb{A}_{\mathbb{Z}_{p}}^{1}}\otimes_{\mathbb{A}^{1}_{\mathbb{Z}_{p}},\,u\mapsto p}\mathbb{Z}_{p}$ is a base change of the global affine Graßmannian as defined in [PZ13]. ## 2 EKOR strata and zips in the case of parahoric reduction ###### Notation 2.1. We still fix a Shimura datum $(G,X)$ of Hodge type, a parahoric subgroup $K_{p}\subseteq G(\mathbb{Q}_{p})$ (associated with a Bruhat-Tits group scheme $\mathcal{G}=\mathcal{G}_{K}=\mathcal{G}_{K_{p}}\to\operatorname{Spec}\mathbb{Z}_{p}$ associated with a facet $\mathfrak{f}$) and a sufficiently small open compact subgroup $K^{p}\subseteq G(\mathbb{A}_{f}^{p})$. Define $\overline{\mathcal{G}}_{K}:=\mathcal{G}_{K}\otimes_{\mathbb{Z}_{p}}\kappa$. We also keep up our standard assumptions 1.17. We now want to discuss the EKOR stratification on the special fiber of the integral model with parahoric level structure. The EKOR stratification interpolates between the Ekedahl-Oort (EO) and the Kottwitz-Rapoport (KR) stratification (see Remark 2.22 below for a precise formulation). We begin by explaining the basics about these stratifications and the combinatorics involved in the first section of this chapter. ### 2.1 The Ekedahl-Oort, Kottwitz-Rapoport and EKOR stratifications #### 2.1.1 Iwahori-Weyl group and the admissible subset ###### Notation 2.2. 1. (1) We fix an Iwahori subgroup $I_{p}\subseteq K_{p}$, i.e., $I_{p}$ is the group of $\mathbb{Z}_{p}$-points of the parahoric group scheme $\mathcal{I}$ associated with an alcove $\mathfrak{a}$ (facet of maximal dimension) such that $\mathfrak{f}\subseteq\overline{\mathfrak{a}}$. As usual, we also define $\breve{I}:=\mathcal{I}(\breve{\mathbb{Z}}_{p})\subseteq\breve{K}$. 2. (2) Let $T\subseteq G$ be a maximal torus such that $T_{\breve{\mathbb{Q}}_{p}}$ is contained in a Borel subgroup of $G_{\breve{\mathbb{Q}}_{p}}$888Note that by Steinberg’s theorem, $G_{\breve{\mathbb{Q}}_{p}}$ is quasi-split. [Ser97, Chap. III, § 2] and let $S$ be the maximal $\breve{\mathbb{Q}}_{p}$-split torus contained in $T_{\breve{\mathbb{Q}}_{p}}$. We can and do choose $T$ such that the alcove $\mathfrak{a}$ is contained in the apartment associated with $S$. By $N$ we denote the normalizer of $T$. 3. 
(3) Let $(V,R)$ be the relative root system of $(G_{\breve{\mathbb{Q}}_{p}},T_{\breve{\mathbb{Q}}_{p}})$, i.e., $V$ is the $\mathbb{R}$-vector space $X^{*}_{\breve{\mathbb{Q}}_{p}}(T_{\breve{\mathbb{Q}}_{p}})\otimes_{\mathbb{Z}}\mathbb{R}$ and $R\subseteq X^{*}_{\breve{\mathbb{Q}}_{p}}(T_{\breve{\mathbb{Q}}_{p}})$ is (as usual) such that we have a decomposition $\mathfrak{g}:=\operatorname{Lie}(G_{\bar{\mathbb{Q}}_{p}})=\operatorname{Lie}(T_{\bar{\mathbb{Q}}_{p}})\oplus\bigoplus_{\alpha\in R}\mathfrak{g}_{\alpha}.$ Contrary to the absolute situation, $\dim\mathfrak{g}_{\alpha}$ may be greater than $1$. ###### Definition 2.3. 1. (1) The _(finite relative) Weyl group_ of $G$ (over $\breve{\mathbb{Q}}_{p}$) is $W:=N(\breve{\mathbb{Q}}_{p})/T(\breve{\mathbb{Q}}_{p})$. It is the Weyl group of the root system $(V,R)$, i.e., the group generated by the orthogonal reflections through the hyperplanes defined by the elements of $R$. 2. (2) As described in [Lan00, 1.2.3], one defines a set of affine roots $R_{\mathrm{aff}}\supseteq R$ on $V$ using the valuation on $\breve{\mathbb{Q}}_{p}$. By $W_{a}\subseteq\mathrm{Aff}(V^{*})=\operatorname{GL}(V^{*})\ltimes V^{*}$ we denote the _affine Weyl group_ of the affine root system $(V,R_{\mathrm{aff}})$, i.e., the group generated by the orthogonal reflections through the affine hyperplanes defined by the elements of $R_{\mathrm{aff}}$. 3. (3) $\widetilde{W}:=N(\breve{\mathbb{Q}}_{p})/(T(\breve{\mathbb{Q}}_{p})\cap\breve{I})$ is the _Iwahori-Weyl group_. 4. (4) $W_{K}:=(N(\breve{\mathbb{Q}}_{p})\cap\breve{K})/(T(\breve{\mathbb{Q}}_{p})\cap\breve{I})\subseteq\widetilde{W}$. (Recall that $\breve{K}=\mathcal{G}(\breve{\mathbb{Z}}_{p})$.) ###### Remarks 2.4. 1. (1) We have $W\subseteq W_{a}$. With the systems of generators indicated above, $W$ and $W_{a}$ become (affine) Coxeter groups; in particular we can talk about reduced words and have length functions, cf. [BB05]. 2. (2) $W_{I}$ is the trivial group. ###### Proposition 2.5. [HR08, Prop. 8] The Bruhat-Tits decomposition $G(\breve{\mathbb{Q}}_{p})=\bigcup_{w\in\widetilde{W}}\breve{K}w\breve{K}$ identifies $\breve{K}\backslash G(\breve{\mathbb{Q}}_{p})/\breve{K}\cong W_{K}\backslash\widetilde{W}/W_{K}.$ ###### Proposition 2.6. [HR08, Prop. 13] Let $\breve{K}$ be the maximal parahoric subgroup of $G(\breve{\mathbb{Q}}_{p})$ associated with a special vertex in the apartment corresponding to $S$. Then $W_{K}\to W$ is an isomorphism and $\widetilde{W}\cong W\ltimes X_{*}(T)_{\operatorname{Gal}(\bar{\mathbb{Q}}_{p}/\breve{\mathbb{Q}}_{p})}$.999Notation: Let $\Gamma$ be a group and $M$ a $\mathbb{Z}[\Gamma]$-module. Then $M_{\Gamma}:=\mathbb{Z}\otimes_{\mathbb{Z}[\Gamma]}M=M/\langle\gamma m-m\;|\;\gamma\in\Gamma,\;m\in M\rangle$ is the module of $\Gamma$-coinvariants of $M$. ###### Notation 2.7. We denote the map $X_{*}(T)_{\operatorname{Gal}(\bar{\mathbb{Q}}_{p}/\breve{\mathbb{Q}}_{p})}\to\widetilde{W}$ of the proposition by $\nu\mapsto t_{\nu}$. ###### Proposition 2.8. [HR08, Lemma 14] Let $\Omega\subseteq\widetilde{W}$ be the subgroup consisting of those elements that preserve the base alcove $\mathfrak{a}$. There is an exact sequence $1\to W_{a}\to\widetilde{W}\to\Omega\to 1,$ with a canonical right splitting (namely the inclusion $\Omega\hookrightarrow\widetilde{W}$), i.e., $\widetilde{W}\cong W_{a}\rtimes\Omega$. ###### Definition 2.9. The semidirect product decomposition of the preceding proposition means that $\widetilde{W}$ is a “quasi-Coxeter” group. In practical terms, this means: 1. 
(1) We define a length function $\ell$ on $\widetilde{W}$ as follows: $\ell(w_{a},\omega):=\ell(w_{a})$ for all $w_{a}\in W_{a}$ and $\omega\in\Omega$, where on the right hand side we use the length function of the affine Coxeter group $W_{a}$. Note that $\Omega=\ell^{-1}(0)$. 2. (2) Likewise, we extend the Bruhat partial order from $W_{a}$ to $\widetilde{W}$ by defining $(w_{a,1},\omega_{1})\leq(w_{a,2},\omega_{2})\;:\Longleftrightarrow\;w_{a,1}\leq w_{a,2}\text{ and }\omega_{1}=\omega_{2}.$ Note that $w_{1}\leq w_{2}$ ($w_{1},w_{2}\in\widetilde{W}$) implies $\ell(w_{1})\leq\ell(w_{2})$. ###### Definition 2.10. 1. (1) Let $\\{\mu\\}$ be a $W_{\mathrm{abs}}$-conjugacy class of geometric cocharacters of $T$ (cf. Remark 1.4), $W_{\mathrm{abs}}:=N(\bar{\mathbb{Q}}_{p})/T(\bar{\mathbb{Q}}_{p})$ being the absolute Weyl group. Let $\bar{\mu}\in X_{*}(T)_{\operatorname{Gal}(\bar{\mathbb{Q}}_{p}/\breve{\mathbb{Q}}_{p})}$ be the image of a cocharacter in $\\{\mu\\}$ whose image in $X_{*}(T)\otimes_{\mathbb{Z}}\mathbb{R}$ is contained in the closed Weyl chamber corresponding to some Borel subgroup of $G$ containing $T$ and defined over $\breve{\mathbb{Q}}_{p}$. 2. (2) $\mathrm{Adm}(\mu):=\mathrm{Adm}(\\{\mu\\}):=\\{w\in\widetilde{W}\;|\;w\leq qt_{\bar{\mu}}q^{-1}=t_{q\bar{\mu}}\text{ for some }q\in W\\}$ is the _$\\{\mu\\}$ -admissible subset_ of $\widetilde{W}$. 3. (3) $\mathrm{Adm}(\\{\mu\\})^{K}:=W_{K}\mathrm{Adm}(\\{\mu\\})W_{K}\subseteq\widetilde{W}$. 4. (4) $\mathrm{Adm}(\\{\mu\\})_{K}:=\mathrm{KR}(K,\\{\mu\\}):=W_{K}\backslash\mathrm{Adm}(\\{\mu\\})^{K}/W_{K}\subseteq W_{K}\backslash\widetilde{W}/W_{K}$. 5. (5) Define ${}^{K}\widetilde{W}\subseteq\widetilde{W}$ to be the set of representatives of minimal length for the quotient $W_{K}\backslash\widetilde{W}$. 6. (6) ${}^{K}\mathrm{Adm}(\\{\mu\\}):=\mathrm{EKOR}(K,\\{\mu\\}):=\mathrm{Adm}(\\{\mu\\})^{K}\cap{}^{K}\widetilde{W}\subseteq{}^{K}\widetilde{W}$. ###### Lemma 2.11. (See [SYZ19, Thm. 1.2.2].) ${}^{K}\mathrm{Adm}(\\{\mu\\})=\mathrm{Adm}(\\{\mu\\})\cap{}^{K}\widetilde{W}$. #### 2.1.2 Kottwitz-Rapoport stratification Recall from Section 1.6.2 that we have an integral model and a local model diagram $\mathscr{S}_{K}\leftarrow\tilde{\mathscr{S}}_{K}\to M^{\mathrm{loc}}_{K}$ or, equivalently, a (smooth) morphism of stacks $\mathscr{S}_{K}\to[\mathcal{G}_{K}\backslash M^{\mathrm{loc}}_{K}]$ (over $\mathcal{O}_{E}$). As explained in Section 1.6.3, by the construction in [PZ13], the special fiber $M^{\mathrm{loc}}_{K}\otimes\kappa$ of $M^{\mathrm{loc}}_{K}$ is a closed subscheme of the affine flag variety $\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\kappa}=\mathcal{F}l_{\underline{\mathcal{G}}_{K}\otimes\kappa[\mkern-2.0mu[t]\mkern-2.0mu]}$, which is the ind-projective ind-scheme over $\kappa$ given as the fpqc sheafification (which exists in this case!) of the presheaf $R\mapsto\underline{\mathcal{G}}_{K}(R(\mkern-4.0mu(t)\mkern-4.0mu))/\underline{\mathcal{G}}_{K}(R[\mkern-2.0mu[t]\mkern-2.0mu])$. ###### Definition 2.12. Define $L^{+}(\underline{\mathcal{G}}_{K}\otimes\kappa[\mkern-2.0mu[t]\mkern-2.0mu])$ to be the $\kappa$-functor sending a $\kappa$-algebra $R$ to $\underline{\mathcal{G}}_{K}(R[\mkern-2.0mu[t]\mkern-2.0mu])$. We let $L^{+}(\underline{\mathcal{G}}_{K}\otimes\kappa[\mkern-2.0mu[t]\mkern-2.0mu])$ act on $\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\kappa}$ from the left and call this action $a$ (within this subsection). 
The orbits of this action on $\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\bar{\kappa}}$ are the _Schubert cells_. ###### Remarks 2.13. 1. (1) The Schubert cells can be indexed by $W_{K}\backslash\widetilde{W}/W_{K}$ by Proposition 2.5 with the following in mind: Strictly speaking, using the Bruhat-Tits decomposition here, we arrive at something involving the Iwahori- Weyl group of $\underline{\mathcal{G}}_{K}\otimes\bar{\kappa}(\mkern-4.0mu(t)\mkern-4.0mu)$. However, by [PZ13, 9.2.2], this is isomorphic to the Iwahori-Weyl group of $G_{\breve{\mathbb{Q}}_{p}}$. 2. (2) $M^{\mathrm{loc}}_{K}\otimes\bar{\kappa}$ is a union of Schubert cells, namely of those indexed by $\mathrm{KR}(K,\\{\mu\\}):=W_{K}\backslash(W_{K}\mathrm{Adm}(\\{\mu\\})W_{K})/W_{K}$, cf. [PZ13, Theorem 9.3]. ###### Remark 2.14. By construction, $M^{\mathrm{loc}}_{K}$ has an action $b$ of $\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto p}\mathcal{O}_{E}\cong\mathcal{G}_{K}\otimes\mathcal{O}_{E}$. For $w\in\widetilde{W}$ choose a representative $\dot{w}\in L\underline{\mathcal{G}}_{K}(\bar{\kappa})$ (with Remark 2.13 (1) in mind) and let $e_{0}\in\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\kappa}$ be the distinguished base point (associated with the identity). For $w\in W_{K}\mathrm{Adm}(\\{\mu\\})W_{K}$, the orbit map of $\dot{w}\cdot e_{0}$ for the action $a$ factors through the homomorphism $L^{+}(\underline{\mathcal{G}}_{K}\otimes\bar{\kappa}[\mkern-2.0mu[t]\mkern-2.0mu])\to\mathcal{G}_{K}\otimes\kappa\cong\underline{\mathcal{G}}_{K}\otimes\bar{\kappa}$. The orbits associated with the two $\mathcal{G}_{K}\otimes\kappa$-actions $a$ and $b$ on $M^{\mathrm{loc}}_{K}\otimes\kappa$ agree. The orbits of the $\mathcal{G}_{K}\otimes\kappa$-action on $M^{\mathrm{loc}}_{K}\otimes\kappa$ are indexed by $\mathrm{KR}(K,\\{\mu\\})$. ###### Definition 2.15. The stratifications thus obtained on $M^{\mathrm{loc}}_{K}\otimes\kappa$ and $\mathscr{S}_{K}\otimes\kappa$ are called _Kottwitz-Rapoport stratifications_. That is to say that Kottwitz-Rapoport strata on $\mathscr{S}_{K}\otimes\kappa$ are by definition pullbacks of Kottwitz-Rapoport strata on $M^{\mathrm{loc}}_{K}$, which in turn are $\mathcal{G}_{K}\otimes\kappa$-orbits. #### 2.1.3 Ekedahl-Oort stratification The Ekedahl-Oort stratification is only defined in the case of good reduction, i.e., if $K_{p}$ is hyperspecial or, equivalently, if $\mathcal{G}_{K}$ is a _reductive_ model of $G_{\mathbb{Q}_{p}}$. Then $G_{\mathbb{Q}_{p}}$ splits over $\breve{\mathbb{Q}}_{p}$ (by definition of “hyperspecial”, cf. [Tit79, 1.10.2]). We therefore put ourselves in the situation of good reduction for this subsection. ###### Remark 2.16. Then $W$ as defined in Definition 2.3 (1) agrees with the absolute Weyl group of $G_{\mathbb{Q}_{p}}=\mathcal{G}_{K}\otimes\mathbb{Q}_{p}$, which in turn agrees with the absolute Weyl group of $\overline{\mathcal{G}}_{K}:=\mathcal{G}_{K}\otimes\kappa$, cf. [VW13, App. A.5]. ###### Definition 2.17. Define $I$ to be the type (interpreted as a subset of simple reflections) of the parabolic subgroup of $G_{\mathbb{Q}_{p}}$ defined by $\mu^{-1}$ (cf. Remark 1.33), and ${}^{I}W\subseteq W$ to be the system of representatives of the quotient group $W_{I}\backslash W$ containing the element of least length of every coset. ###### Theorem 2.18. 
[MW04, PWZ15, PWZ11, Zha18] There is a smooth algebraic stack $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{\kappa}:=\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\mu}_{\kappa}$ over $\kappa$ with underlying topological space ${}^{I}W$ together with a smooth morphism $\mathscr{S}_{K}\otimes\kappa\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\mu}_{\kappa}.$ The stratification of $\mathscr{S}_{K}\otimes\kappa$ thus obtained is the _Ekedahl-Oort stratification_. #### 2.1.4 EKOR stratification ###### Definition 2.19. Let $L$ be a valued field extension of $\mathbb{Q}_{p}$ with ring of integers $\mathcal{O}$, maximal ideal $\mathfrak{m}$ and residue field $\lambda$. The _pro-unipotent radical_ of $\mathcal{G}_{K}(\mathcal{O})$ is $\mathcal{G}_{K}(\mathcal{O})_{1}:=\\{g\in\mathcal{G}_{K}(\mathcal{O})\;|\;(g\mod\mathfrak{m})\in\bar{R}_{K}(\lambda)\\},$ where $\bar{R}_{K}$ is the unipotent radical of $\mathcal{G}_{K}\otimes_{\mathbb{Z}_{p}}\lambda$. In particular, if $K$ is hyperspecial, then $\mathcal{G}_{K}(\mathcal{O})_{1}=\ker(\mathcal{G}_{K}(\mathcal{O})\to\mathcal{G}_{K}(\lambda))$. Also, $\overline{\breve{K}}:=\breve{K}/\breve{K}_{1}\cong\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}(\bar{\mathbb{F}}_{p})$, where $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is the maximal reductive quotient of $\overline{\mathcal{G}}_{K}:=\mathcal{G}_{K}\otimes\kappa$. ###### Remark 2.20. [HR17, after Cor. 6.2] We have a commutative diagram $G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$${}^{K}\widetilde{W}$$W_{K}\backslash\widetilde{W}/W_{K}.$$\breve{K}\backslash G(\breve{\mathbb{Q}}_{p})/\breve{K}$ Consider the map $v_{K}\colon\mathscr{S}_{K}\otimes\kappa\to G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1}),$ which is the composition of the central leaves map $\Upsilon_{K}\colon\mathscr{S}_{K}\otimes\kappa\to G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}$ (see [Hes20a]) with the projection $G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}\to G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$. The Kottwitz-Rapoport map $\lambda_{K}\colon{\mathscr{S}_{K}\otimes\kappa}\to\breve{K}\backslash G(\breve{\mathbb{Q}}_{p})/\breve{K}$ factors through this map. The fibers of $v_{K}$ are called _EKOR strata_. By [HR17, Thm. 6.15], they are locally closed subsets of $\mathscr{S}_{K}\otimes\kappa$. ###### Remarks 2.21. 1. (1) One can explicitly express the image of a EKOR stratum under a change-of- parahoric map as a union of EKOR strata on the target [HR17, Prop. 6.11]. 2. (2) The closure of an EKOR stratum is a union of EKOR strata and one can explicitly describe the associated order relation [HR17, Thm. 6.15]. ###### Remark 2.22. In the hyperspecial case, the EKOR stratification agrees with the Ekedahl-Oort stratification. In the Iwahori case, it agrees with the Kottwitz-Rapoport stratification (${}^{K}\widetilde{W}=\widetilde{W}=W_{K}\backslash\widetilde{W}/W_{K}$ in that case). By definition, the EKOR stratification always is a refinement of the Kottwitz- Rapoport stratification. So one way of approaching the EKOR stratification is by looking at a fixed Kottwitz-Rapoport stratum and trying to understand how it is subdivided into EKOR strata. To get this started, let us recall some calculations from the proof of [HR17, Thm. 6.1]. 
Fixing a Kottwitz-Rapoport stratum means restricting our view to $\breve{K}w\breve{K}/\breve{K}_{\sigma}$ rather than the whole of $G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}$, for some fixed $w\in\mathrm{KR}(K,\\{\mu\\})$. The EKOR strata in the Kottwitz-Rapoport stratum associated with $w$ are therefore indexed by $\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$. Define $\sigma^{\prime}:=\sigma\circ\operatorname{Ad}(w)$ and consider the bijection $\displaystyle\breve{K}/(\breve{K}\cap w^{-1}\breve{K}w)_{\sigma^{\prime}}$ $\displaystyle\xrightarrow{\sim}\breve{K}w\breve{K}/\breve{K}_{\sigma},$ $\displaystyle k$ $\displaystyle\mapsto wk,$ $\displaystyle k_{2}\sigma(k_{1})$ $\displaystyle\mapsfrom k_{1}wk_{2}.$ Let $J$ be the set of simple affine reflections in $W_{K}$, let $\bar{B}$ be the image of $\breve{I}$ in $\overline{\breve{K}}$ and $\bar{T}\subseteq\bar{B}$ the maximal torus. Set $J_{1}:=J\cap w^{-1}Jw$. ###### Proposition 2.23. (See [Mor93, Lemma 3.19].) The image of $\breve{K}\cap w^{-1}\breve{K}w$ in $\overline{\breve{K}}$ is $\bar{P}_{J_{1}}$, i.e., the standard parabolic subgroup of $\overline{\breve{K}}$ associated with $J_{1}$. ###### Remark 2.24. He and Rapoport invoke Carter’s book [Car93] at this point, which primarily pertains to the case of (usual) BN-pairs attached to reductive groups. Morris [Mor93] shows that the relevant results carry over likewise to the case of generalized (or affine) BN-pairs. Then we get a map $\displaystyle\breve{K}w\breve{K}/\breve{K}_{\sigma}\to\breve{K}/(\breve{K}\cap w^{-1}\breve{K}w)_{\sigma^{\prime}}$ $\displaystyle\to\overline{\breve{K}}/(\bar{P}_{J_{1}})_{\sigma^{\prime}}$ $\displaystyle\to\overline{\breve{K}}/(\bar{L}_{J_{1}})_{\sigma^{\prime}}(\bar{U}_{J_{1}})_{\sigma^{\prime}}\to\overline{\breve{K}}/(\bar{L}_{J_{1}})_{\sigma^{\prime}}(\bar{U}_{J_{1}}\times\bar{U}_{\sigma^{\prime}(J_{1})}),$ which factors through a bijection $\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})\xrightarrow{\sim}\overline{\breve{K}}/(\bar{L}_{J_{1}})_{\sigma^{\prime}}(\bar{U}_{J_{1}}\times\bar{U}_{\sigma^{\prime}(J_{1})})\cong\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}(\bar{\mathbb{F}}_{p})/{{\scriptstyle\cong}}.$ Here, $\mathcal{Z}_{w}$ is the (connected) algebraic zip datum $\mathcal{Z}_{w}=(\overline{\mathcal{G}}^{\mathrm{rdt}},\bar{P}_{J_{1}},\bar{P}_{\sigma^{\prime}(J_{1})},\sigma^{\prime})$, as described in [SYZ19]. In [SYZ19], Shen, Yu and Zhang show that this observation “globalizes” (with the drawback that “global” here still just refers to the Kottwitz-Rapoport stratum101010They also give another “globalization”; the drawback there being that it only works after perfection.) in a pleasant way. To wit, one gets a smooth morphism [SYZ19, Theorem A] $\zeta_{w}\colon\overline{\mathscr{S}}_{K}^{w}\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}_{\kappa}$ (the source being a Kottwitz-Rapoport stratum). ### 2.2 $\overline{\mathcal{G}}_{K}$-zips in the Siegel case Here we work with the Siegel Shimura datum, cf. Example 1.2. #### 2.2.1 Preliminaries ###### Notation 2.25. Fix $p\neq 2$,111111As in [RZ96], the principal reason for this restriction is our use of the equivalence between alternating and skew-symmetric. See Definition 2.30 (e). $g\in\mathbb{Z}_{\geq 1}$ and a subset $J\subseteq\mathbb{Z}$ with $J=-J$ and $J+2g\mathbb{Z}=J$. 
Associated with $J$ is the partial lattice chain $\left\\{\Lambda^{j}\;|\;j\in J\right\\}$, where $\Lambda^{j}$ are defined as in equation (1.29). Let $K_{p}$ be the corresponding parahoric subgroup of $\operatorname{GSp}_{2g}(\mathbb{Q}_{p})$, i.e., the stabilizer of said lattice chain. It contains the Iwahori subgroup $I_{p}$ associated with the full lattice chain (1.29). For the maximal torus $T$ we take the usual diagonal (split) torus. ###### Remark 2.26. The Weyl group is $\displaystyle W$ $\displaystyle=\\{\pi\in S_{2g}=\operatorname{Aut}(\\{\pm 1,\pm 2,\dotsc,\pm g\\})\;|\;\pi(-n)=-\pi(n)\text{ for }n=1,2,\dotsc,g\\}$ $\displaystyle\cong S_{g}\ltimes\\{\pm 1\\}^{g}.$ Here the transposition $(n\quad m)$ of $S_{g}=\operatorname{Aut}(\\{1,2,\dotsc,g\\})$ corresponds to the element ${(n\quad m)(-n\quad{-m})}$ of $\operatorname{Aut}(\\{\pm 1,\pm 2,\dotsc,\pm g\\})$ and the element of $\\{\pm 1\\}^{g}$ which has a $-1$ in position $i$ and $1$ everywhere else corresponds to $(i\quad{-i})$. The affine Weyl group is $W_{a}=W\ltimes Y_{0}$ and the Iwahori-Weyl group $\widetilde{W}=W\ltimes Y$ with $\displaystyle\mathbb{Z}^{g+1}$ $\displaystyle\cong Y=\\{(\nu_{1},\dotsc,\nu_{g},\nu_{-g},\dotsc,\nu_{-1})\in\mathbb{Z}^{2g}:\nu_{1}+\nu_{-1}=\dotsb=\nu_{g}+\nu_{-g}\\}$ $\displaystyle\supseteq Y_{0}=\\{(\nu_{1},\dotsc,\nu_{g},\nu_{-g},\dotsc,\nu_{-1})\in\mathbb{Z}^{2g}:0=\nu_{1}+\nu_{-1}=\dotsb=\nu_{g}+\nu_{-g}\\}\cong\mathbb{Z}^{g}.$ The simple affine roots (whose walls bound the base alcove $\mathfrak{a}$) are $\displaystyle 1-2e_{-1}+e_{0}=1+2e_{1}-e_{0},$ $\displaystyle e_{-1}-e_{-2}=e_{2}-e_{1},e_{-2}-e_{-3},\dotsc,e_{-(g-1)}-e_{-g},$ $\displaystyle 2e_{-g}-e_{0}=e_{0}-2e_{g},$ where $e_{1},\dotsc,e_{g},e_{-g},\dotsc,e_{-1}\colon T\to\mathbb{G}_{m}$ are the obvious cocharacters and $e_{0}=e_{1}+e_{-1}=\dotsb=e_{g}+e_{-g}$. The reflections corresponding to the simple affine roots are $((1\quad{-1}),\left(\begin{smallmatrix}-1\\\ 0\\\ \vdots\\\ 0\\\ 1\end{smallmatrix}\right)),(-1\quad{-2})(1\quad 2),\dotsc,(-g\quad{-(g-1)})(g\quad{g-1}),(g\quad{-g}).$ The length zero subgroup $\Omega\subseteq\widetilde{W}$ is generated by $((w_{0},\epsilon),y)\in(S_{g}\ltimes\\{\pm 1\\}^{g})\ltimes Y$, where $w_{0}\in S_{g}$ is the longest element, $\epsilon=(-1,-1,\dotsc,-1)$ and $y=(0^{g},1^{g})$. ###### Remark 2.27. One also can choose $\dotsb\subseteq p\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}^{2g-1}\subseteq\mathbb{Z}_{p}^{2g}\subseteq\dotsb$ instead of $\dotsb\subseteq\mathbb{Z}_{p}^{2g-1}\oplus p\mathbb{Z}_{p}\subseteq\mathbb{Z}_{p}^{2g}\subseteq\dotsb$ as the standard lattice chain. Then the simple affine roots would be $1-2e_{1}+e_{0},e_{1}-e_{2}=e_{2}-e_{1},e_{2}-e_{3},\dotsc,e_{g-1}-e_{g},2e_{g}-e_{0}.$ ###### Remark 2.28. $\widetilde{W}=W\ltimes Y=N(\mathbb{Q}_{p})/T(\mathbb{Z}_{p})$ and $N(\mathbb{Q}_{p})\to W\ltimes Y$ has a section $W\ltimes Y\to N(\mathbb{Q}_{p})$, which sends $(\pi,\underline{\nu})\in W\ltimes Y$ to $T_{\underline{\nu}}P_{w}$, where $T_{\underline{\nu}}=\left(\begin{smallmatrix}p^{\nu_{1}}&&&&\\\ &p^{\nu_{2}}&&&\\\ &&\ddots&&\\\ &&&p^{\nu_{-2}}&\\\ &&&&p^{\nu_{-1}}\end{smallmatrix}\right)$ and $P_{w}$ is the permutation matrix with $P_{w}(e_{i})=e_{w(i)}$. ###### Remark 2.29. Using the results of [KR00] we also easily can compute $\mathrm{Adm}(\\{\mu\\})$. One potential source of confusion at this point is that, due to our choice of the base alcove (cf. 
Remark 2.27), in our setup we need to use $\omega_{i}:=(0^{2g-i},1^{i})$ instead of $\omega_{i}:=(1^{i},0^{2g-i})$ (notation of [KR00]), cf. [Yu08, 1268]. With that convention in place, we have that $x\in\widetilde{W}$ is $\\{\mu\\}$-admissible ($\mu=(1^{g},0^{g})$) if and only if $(0,\dotsc,0)\leq x(\omega_{i})-\omega_{i}\leq(1,\dotsc,1)\quad\text{for all }0\leq i<2g$ (component-wise comparison). #### 2.2.2 Lattice chains, zips, admissibility ###### Definition 2.30. Let $S$ be a $\mathbb{Z}_{p}$-scheme. A _Siegel lattice chain in the weak sense on $S$ of type $J$_ is a tuple $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$, where 1. (a) for all $j\in J$, $\mathcal{V}^{j}$ is a vector bundle on $S$ of rank $2g$, 2. (b) $\mathcal{L}$ is a line bundle on $S$, 3. (c) for all $i,j\in J$ with $j>i$, $\alpha_{j,i}\colon\mathcal{V}^{j}\to\mathcal{V}^{i}$ is a vector bundle homomorphism, such that the $\bigl{(}\alpha_{j,i}\bigr{)}$ satisfy the obvious cocycle condition (and we also define $\alpha_{i,i}:=\operatorname{id}$), 4. (d) for all $j\in J$, $\theta_{j}\colon\mathcal{V}^{j}\xrightarrow{\sim}\mathcal{V}^{j-2g}$ is a vector bundle isomorphism such that the $\bigl{(}\theta_{j}\bigr{)}$ are compatible with the $\bigl{(}\alpha_{j,i}\bigr{)}$ in that $\theta_{i}\circ\alpha_{j,i}=\alpha_{j-2g,i-2g}\circ\theta_{j}$ and $\alpha_{j,j-2g}=p\cdot\theta_{j}$, 5. (e) for all $j\in J$ a vector bundle isomorphism $\psi_{j}\colon\mathcal{V}^{j}\xrightarrow{\sim}(\mathcal{V}^{-j})^{*}\otimes\mathcal{L}$ compatible with $\bigl{(}\theta_{j}\bigr{)}$ and $\bigl{(}\alpha_{j,i}\bigr{)}$, such that $-\psi_{j}(x,y)=\psi_{-j}(y,x)$ for all $(x,y)\in\mathcal{V}^{j}\times\mathcal{V}^{-j}$.121212By “$(x,y)\in\mathcal{V}^{j}\times\mathcal{V}^{-j}$” we of course mean that there is an open subset $U\subseteq S$ such that $(x,y)\in(\mathcal{V}^{j}\times\mathcal{V}^{-j})(U)$. We also have a _standard_ Siegel lattice chain in the weak sense on $\operatorname{Spec}\mathbb{Z}_{p}$ (and hence by base change on every $\mathbb{Z}_{p}$-scheme $S$) of type $J$, namely the one given by the lattice chain $\left\\{\Lambda^{j}\;|\;j\in J\right\\}$. We can think of the standard Siegel lattice chain as either having varying $\mathcal{V}^{j}$ with the $\alpha_{j,i}$ being the obvious inclusion maps (e.g. (if $\\{0,1\\}\subseteq J$), $\mathcal{V}^{1}={\mathbb{Z}_{p}^{2g-1}\oplus p\mathbb{Z}_{p}}\xrightarrow{\alpha_{1,0}=\mathrm{inclusion}}\mathbb{Z}_{p}^{2g}=\mathcal{V}^{0}$) or as having constant $\mathcal{V}^{j}=\mathbb{Z}_{p}^{2g}$ with the $\alpha_{j,i}$ being diagonal matrices with all entries either $p$ or $1$ (e.g., $\mathcal{V}^{1}=\mathbb{Z}_{p}^{2g}\xrightarrow{\alpha_{1,0}=\operatorname{diag}(1,1,\dotsc,1,p)}\mathbb{Z}_{p}^{2g}=\mathcal{V}^{0}$). Usually the latter point of view is more convenient. A _Siegel lattice chain on $S$ of type $J$_ (or _Siegel lattice chain in the strong sense on $S$ of type $J$_) then is a Siegel lattice chain in the weak sense on $S$ of type $J$ that Zariski-locally on $S$ is isomorphic to the standard chain. ###### Remarks 2.31. 1. (1) Let $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$ be a Siegel lattice chain in the weak sense on $S$ of type $J$. Then $\tilde{\psi_{j}}:=(\tilde{\alpha}_{j,-j}^{*}\otimes\operatorname{id}_{\mathcal{L}})\circ\psi_{j}\colon\mathcal{V}^{j}\otimes\mathcal{V}^{j}\to\mathcal{L}$ is alternating. 
Here $\tilde{\alpha}_{j,-j}$ is defined as follows: Let $n\in\mathbb{Z}$ be maximal with $j-2gn\geq-j$. Then $\tilde{\alpha}_{j,-j}:=\alpha_{j-2gn,-j}\circ\theta_{j-2g(n-1)}\circ\dotsb\circ\theta_{j}$. 2. (2) Note that this means that $\tilde{\psi}_{j}$ is (twisted) symplectic if $-j\in j+2g\mathbb{Z}$, i.e., if $j\in g\mathbb{Z}$. ###### Proof: (of (1)) Let $x,y\in\mathcal{V}^{j}$. Then $\displaystyle\tilde{\psi}_{j}(x,y)$ $\displaystyle=\psi_{j}(x,\tilde{\alpha}_{j,-j}(y))$ $\displaystyle=\psi_{j}(x,(\alpha_{j-2gn,-j}\circ\theta_{j-2g(n-1)}\circ\dotsb\circ\theta_{j})(y))$ $\displaystyle=\psi_{2gn-j}(\alpha_{j,2gn-j}(x),(\theta_{j-2g(n-1)}\circ\dotsb\circ\theta_{j})(y))$ $\displaystyle=\psi_{2g(n-1)-j}((\theta_{2gn-j}\circ\alpha_{j,2gn-j})(x),(\theta_{j-2g(n-2)}\circ\dotsb\circ\theta_{j})(y))$ $\displaystyle=\dotsb$ $\displaystyle=\psi_{-j}((\theta_{-j+2g}\circ\dotsb\circ\theta_{2gn-j}\circ\alpha_{j,2gn-j})(x),y)$ $\displaystyle=-\psi_{j}(y,(\theta_{-j+2g}\circ\dotsb\circ\theta_{2gn-j}\circ\alpha_{j,2gn-j})(x))$ $\displaystyle=-\tilde{\psi}_{j}(y,x).$ □ ###### Reminder 2.32. $\mathcal{G}_{K}$ is the automorphism group of the standard Siegel lattice chain. The following definition is a generalization of [VW13, Definition 3.1] in the Siegel case. ###### Definition 2.33. Let $S$ be an $\mathbb{F}_{p}$-scheme. A $\overline{\mathcal{G}}_{K}$-zip over $S$ is a tuple $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet},\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet},\varphi_{\mathcal{L}})$, where 1. (a) $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$ is a Siegel lattice chain on $S$ of type $J$, 2. (b) for all $j\in J$, $\mathcal{C}^{j}\subseteq\mathcal{V}^{j}$ are locally direct summands of rank $g$ compatible with $\alpha_{\bullet\bullet},\theta_{\bullet}$, such that $\mathcal{C}^{j}\hookrightarrow\mathcal{V}^{j}\overset{\psi_{j}}{\cong}(\mathcal{V}^{-j})^{*}\otimes\mathcal{L}\to(\mathcal{C}^{-j})^{*}\otimes\mathcal{L}$ vanishes. (cf. Remark 1.30 for the origins of this condition.) 3. (c) $\mathcal{D}^{\bullet}\subseteq\mathcal{V}^{\bullet}$ satisfies the same conditions as $\mathcal{C}^{\bullet}\subseteq\mathcal{V}^{\bullet}$, 4. (d) $\varphi_{0}^{j}\colon(\mathcal{C}^{j})^{(p)}\xrightarrow{\sim}\mathcal{V}^{j}/\mathcal{D}^{j}$ and $\varphi_{1}^{j}\colon(\mathcal{V}^{j}/\mathcal{C}^{j})^{(p)}\xrightarrow{\sim}\mathcal{D}^{j}$ are isomorphisms of vector bundles compatible with $\alpha_{\bullet\bullet}$ and $\theta_{\bullet}$ and $\varphi_{\mathcal{L}}\colon\mathcal{L}^{(p)}\xrightarrow{\sim}\mathcal{L}$ is an isomorphism of line bundles, such that $(\mathcal{C}^{j})^{(p)}$$(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{*,(p)}\otimes\mathcal{L}^{(p)}$$(\mathcal{D}^{-j})^{*}\otimes\mathcal{L}$$\mathcal{V}^{j}/\mathcal{D}^{j}$$\varphi_{0}^{j}$$(\varphi_{1}^{-j})^{*}\otimes\varphi_{\mathcal{L}}^{-1}$$\psi_{j}^{(p)}$$\psi_{j}$ commutes, i.e., ${\psi_{j}(\varphi_{0}^{j}(\\_),\varphi_{1}^{-j}(\\_))=\varphi_{\mathcal{L}}\circ\psi_{j}^{(p)}(\\_,\\_)\colon}{(\mathcal{C}^{j})^{(p)}\times(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{(p)}\to\mathcal{L}^{(p)}\to\mathcal{L}}.$ Since $\varphi_{\mathcal{L}}$ evidently is uniquely determined by the other data, we sometimes leave it out. We obtain a fibered category $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}\to\mathrm{Sch}_{\mathbb{F}_{p}}$. ###### Remark 2.34. 
$\psi_{j}$ gives rise to isomorphisms $\displaystyle\mathcal{C}^{j}$ $\displaystyle\xrightarrow{\sim}(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{*}\otimes\mathcal{L},$ $\displaystyle\mathcal{V}^{j}/\mathcal{C}^{j}$ $\displaystyle\xrightarrow{\sim}(\mathcal{C}^{-j})^{*}\otimes\mathcal{L},$ $\displaystyle\mathcal{D}^{j}$ $\displaystyle\xrightarrow{\sim}(\mathcal{V}^{-j}/\mathcal{D}^{-j})^{*}\otimes\mathcal{L},$ $\displaystyle\mathcal{V}^{j}/\mathcal{D}^{j}$ $\displaystyle\xrightarrow{\sim}(\mathcal{D}^{-j})^{*}\otimes\mathcal{L}.$ This way $\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ and $\mathcal{D}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{D}^{\bullet}$ become Siegel lattice chains in the weak(!) sense of type $J$. The Cartier isomorphism then is an isomorphism in the category of Siegel lattice chains in the weak sense of type $J$. Over an algebraically closed field, we call the isomorphism type of the Siegel lattice chain in the weak sense $\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ the _Kottwitz-Rapoport type of the $\overline{\mathcal{G}}_{K}$-zip_. We also define a linearly rigidified version of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ as follows. ###### Definition 2.35. We define the fibered category $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}\to\mathrm{Sch}_{\mathbb{F}_{p}}$ just like $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ but with the extra condition that $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$ be the standard Siegel lattice chain (rather than just locally isomorphic to it). ###### Lemma 2.36. We always have a closed embedding of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ into a product of (classical) $\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$’s, and therefore $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ is a scheme. ###### Proof: Set $J^{\prime}:=J\cap\\{0,\dotsc,2g-1\\}$. Let $\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}=E_{(1^{g},0^{g})}\backslash(\operatorname{GL}_{2g}\times\operatorname{GL}_{2g})$ be the $\mathbb{F}_{p}$-scheme of trivialized $\operatorname{GL}_{2g}$-zips (so that $[\operatorname{GL}_{2g}\backslash\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}]=\operatorname{GL}_{2g}\text{-}\mathrm{Zip}$) with respect to the the cocharacter $(1^{g},0^{g})$, and $\prod_{j\in J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ the product of $\\#J^{\prime}$ copies of this scheme. On $J^{\prime}$ we define $-j:=2g-j$ for $1\leq j\leq 2g-1$ and $-0:=0$. Then we get a monomorphism $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}\hookrightarrow\prod_{j\in J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ (2.37) by sending $(\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet})$ to $\left(\mathcal{C}^{j},\mathcal{D}^{j},\varphi_{0}^{j},\varphi_{1}^{j}\right)_{j\in J^{\prime}}$. The extra conditions for an element of $\prod_{j\in J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ to be in $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ are as in Definition 2.33: 1. 
(1) $\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet}$ are compatible with the transition maps (or, to put it differently, ${(\mathcal{C}^{j}\oplus\mathcal{V}^{j}/\mathcal{C}^{j})^{(p)}}\xrightarrow[\cong]{\varphi_{0}^{j}\oplus\varphi_{1}^{j}}{\mathcal{V}^{j}/\mathcal{D}^{j}\oplus\mathcal{D}^{j}}$ is compatible with the transition maps), 2. (2) $\mathcal{C}^{j}\hookrightarrow\mathcal{V}^{j}\overset{\psi_{j}}{\cong}(\mathcal{V}^{-j})^{*}\to(\mathcal{C}^{-j})^{*}$ vanishes. 3. (3) $\mathcal{D}^{j}\hookrightarrow\mathcal{V}^{j}\overset{\psi_{j}}{\cong}(\mathcal{D}^{-j})^{*}\to(\mathcal{D}^{-j})^{*}$ vanishes. 4. (4) There is a (necessarily unique) isomorphism $\varphi_{\mathcal{L}}\colon\mathcal{L}^{(p)}\xrightarrow{\sim}\mathcal{L}=\mathcal{O}_{S}$ of line bundles, such that $(\mathcal{C}^{j})^{(p)}$$(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{*,(p)}$$(\mathcal{D}^{-j})^{*}$$\mathcal{V}^{j}/\mathcal{D}^{j}$$\varphi_{0}^{j}$$(\varphi_{1}^{-j})^{*}\otimes\varphi_{\mathcal{L}}^{-1}$$\psi_{j}^{(p)}$$\psi_{j}$ commutes. We claim that the conditions are closed on $\prod_{j\in J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ (hence the monomorphism is a closed immersion). To see this, we recall the construction of the scheme $\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ as executed in [MW04, (3.10), (3.11), (4.3)]. Recall our notational convention regarding the parabolic subgroup associated with a cocharacter $\chi$ from Definition 1.33. As in [MW04], we denote by $\mathrm{Par}_{\chi}$ the scheme of parabolic subgroups of type $\chi$. There is a group scheme $H$ defined by the cartesian diagram $H$$\mathcal{P}_{((-1)^{g},0^{g})}/\mathcal{U}_{((-1)^{g},0^{g})}$$\square$$\mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}$$\mathrm{Par}_{((-1)^{g},0^{g})}$$(\;)^{(p)}\circ\operatorname{pr}_{1}$ where $\mathcal{P}_{((-1)^{g},0^{g})}\to\mathrm{Par}_{((-1)^{g},0^{g})}$ is the universal parabolic group scheme and $\mathcal{U}_{((-1)^{g},0^{g})}$ its unipotent radical, such that $\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ is an $H$-Zariski torsor over $\mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}$, where $\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}\to\mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}$ is given by $(C,D,\varphi_{0},\varphi_{1})\mapsto(C,D)$. Clearly, compatibility of $\mathcal{C}^{\bullet},\mathcal{D}^{\bullet}$ with the transition maps is a closed condition on $\prod_{j\in J^{\prime}}\mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}$ and then also on $\prod_{j\in J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$. Similar for the conditions (2) and (3). Locally, we can choose complements (not necessarily compatible with the transition maps) and then $\varphi_{\bullet}^{j}$ yield sections $g^{j}$ of $\operatorname{GL}_{2g}$ as in [MW04, definition of $g\in G(S)$ in the proof of (4.3)]. The $g^{j}$ are well-defined up to $\mathcal{U}_{((-1)^{g},0^{g})}^{(p)}\times\mathcal{U}_{(1^{g},0^{g})}$, and we want them to be compatible with the transition maps coming from the Siegel lattice chains in the weak sense $\mathcal{C}^{j}\oplus\mathcal{V}^{j}/\mathcal{C}^{j}$ and $\mathcal{V}^{j}/\mathcal{D}^{j}\oplus\mathcal{D}^{j}$, respectively. With our complements in place, these transition maps correspond to maps $\mathcal{V}^{j}\to\mathcal{V}^{j-n}$. 
The question of whether $g^{j}$ is compatible with these maps is independent of the choice of complements (basically because the transition maps $\mathcal{V}^{j}\to\mathcal{V}^{j-n}$ depend on the choice of complements similar to how $g^{j}$ depends on that choice). So in effect we can view the conditions on $\varphi_{0}^{\bullet},\varphi_{1}^{\bullet}$ of (1) as closed conditions on $\prod_{j\in J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim\sim}$, where $\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim\sim}\to\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ (an fpqc quotient map) additionally comes with complementary spaces of $C,D$ ($\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim\sim}=\tilde{X}_{\tau}$ in the notation of [MW04, proof of (4.3)]). We also can reformulate condition (4) in those terms. □ ###### Corollary 2.38. $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ is the algebraic quotient stack $[\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}]$. Here by definition an element $\phi\in\overline{\mathcal{G}}_{K}$ acts on $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ by replacing $(\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet},\varphi_{\mathcal{L}})$ by $(\phi\mathcal{C}^{\bullet},\phi\mathcal{D}^{\bullet},\phi\varphi_{0}^{\bullet}\phi^{-(p)},\phi\varphi_{1}^{\bullet}\phi^{-(p)},\varphi_{\mathcal{L}})$. ###### Definition 2.39. We let an element $(X,Y)\in\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ act on $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ by replacing $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet},\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet})$ by $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet},X\mathcal{C}^{\bullet},Y\mathcal{D}^{\bullet},Y\varphi_{0}^{\bullet}X^{-(p)},Y\varphi_{1}^{\bullet}X^{-(p)}).$ ###### Notation 2.40. Let $\mathscr{S}_{K}\to\operatorname{Spec}\mathbb{Z}_{p}$ be the integral model of the Siegel Shimura variety of level $K$ (where $K=K_{p}K^{p}$ with $K^{p}$ sufficiently small), and recall $\tilde{\mathscr{S}}_{K}$ from Section 1.6.1. Moreover, define $\overline{\mathscr{S}}_{K}:=\mathscr{S}_{K}\otimes_{\mathbb{Z}_{p}}\mathbb{F}_{p}$ and $\tilde{\overline{\mathscr{S}}}_{K}:=\tilde{\mathscr{S}}_{K}\otimes_{\mathbb{Z}_{p}}\mathbb{F}_{p}$. So $\tilde{\overline{\mathscr{S}}}_{K}\to\overline{\mathscr{S}}_{K}$ is a $\overline{\mathcal{G}}_{K}$-torsor. ###### Remark 2.41. We have morphisms $\tilde{\overline{\mathscr{S}}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ (take first de Rham cohomology with Frobenius and Verschiebung) and $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}\to\overline{M}^{\mathrm{loc}}_{K}$ (take the $\mathcal{C}^{\bullet}$-filtration) and therefore $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}_{K}].$ ###### Remark 2.42. In particular, $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ has a Kottwitz-Rapoport stratification, which agrees with the notion of Kottwitz- Rapoport type as defined in Remark 2.34. 
For $w\in\mathrm{KR}(K,\\{\mu\\})$ denote the associated Kottwitz-Rapoport stratum by $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{w}$, i.e., we interpret $w$ as a $\bar{\mathbb{F}}_{p}$-valued point of $[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}_{K}]$ and form $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{w}$ as a fiber product. ###### Construction 2.43. Fix $w\in\mathrm{Adm}(\\{\mu\\})^{K}\subseteq\widetilde{W}$ (so that $W_{K}wW_{K}\in\mathrm{KR}(K,\\{\mu\\})$). We define a standard $\overline{\mathcal{G}}_{K}$-zip of KR type $W_{K}wW_{K}$. Using Remark 2.28, we interpret $w$ as an element of $N(\mathbb{Q}_{p})\subseteq G(\mathbb{Q}_{p})$. The admissibility condition implies that we can interpret it as an endomorphism $w^{\bullet}$ of the standard lattice chain $\mathcal{V}^{\bullet}$ over $\mathbb{Z}_{p}$.131313Take up the second point of view described in Definition 2.30 regarding $\mathcal{V}^{\bullet}$. Define $\underline{\nu}^{(0)}:=\underline{\nu}$, $\underline{\nu}^{(1)}:=\underline{\nu}+\left(\begin{smallmatrix}0\\\ \vdots\\\ 0\\\ 0\\\ -1\end{smallmatrix}\right)+w\left(\begin{smallmatrix}0\\\ \vdots\\\ 0\\\ 0\\\ 1\end{smallmatrix}\right)$, $\underline{\nu}^{(2)}:=\underline{\nu}+\left(\begin{smallmatrix}0\\\ \vdots\\\ 0\\\ -1\\\ -1\end{smallmatrix}\right)+w\left(\begin{smallmatrix}0\\\ \vdots\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)$, and so on. Then $w^{j}=T_{\underline{\nu}^{(j)}}P_{w}$ for $0\leq j<2g$. From the formulation of the admissibility condition as in Remark 2.29, we see that $w\in\mathrm{Adm}(\\{\mu\\})^{K}$ is equivalent to the condition that $\underline{\nu}^{(j)}$ be a permutation of $(1^{g},0^{g})$ for all relevant $j$. We denote the standard Siegel lattice chain over $\mathbb{Z}_{p}$ by $\mathscr{V}^{\bullet}$ and its base change to $\mathbb{F}_{p}$ by $\mathcal{V}^{\bullet}$. Define $\mathscr{C}_{w}^{\bullet}:=pw^{\bullet,-1}\mathscr{V}^{\bullet}$ and $\mathscr{D}_{w}^{\bullet}:=\sigma(w^{\bullet})\mathscr{V}^{\bullet}$. Then $\mathcal{C}_{w}^{\bullet}:=\mathscr{C}_{w}^{\bullet}\otimes\mathbb{F}_{p}=\ker(w^{\bullet}\colon\mathcal{V}^{\bullet}\to\mathcal{V}^{\bullet})$, so $(\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet})^{(p)}\xrightarrow{\sim}\mathcal{D}_{w}^{\bullet}:=\mathscr{D}_{w}^{\bullet}\otimes\mathbb{F}_{p}$ via $\sigma(w^{\bullet})$ and $(\mathcal{C}_{w}^{\bullet})^{(p)}\xrightarrow{\sim}\mathcal{V}^{\bullet}/\mathcal{D}_{w}^{\bullet}$ via $p^{-1}\sigma(w^{\bullet})$. This defines a standard element $\widetilde{\mathrm{Std}}(w)$ of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}(\mathbb{F}_{p})$ and a standard element $\mathrm{Std}(w)$ of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{w}(\mathbb{F}_{p})$. ###### Definition and Remark 2.44. (See also [SYZ19, Lemma 3.3.2].) $\mathcal{G}_{w}:=\operatorname{Aut}(\mathscr{C}_{w}^{\bullet}\subseteq\mathscr{V}^{\bullet})$ is a Bruhat-Tits group scheme with generic fiber $G_{\mathbb{Q}_{p}}$ and $\breve{\mathbb{Z}}_{p}$-points $\breve{K}\cap w^{-1}\breve{K}w$; and similarly for $\mathcal{G}_{\sigma(w)^{-1}}:=\operatorname{Aut}(\mathscr{D}_{w}^{\bullet}\subseteq\mathscr{V}^{\bullet})$ with $\breve{K}\cap\sigma(w)\breve{K}\sigma(w)^{-1}$. ###### Definition 2.45. We keep $w\in\mathrm{Adm}(\\{\mu\\})^{K}\subseteq\widetilde{W}$ fixed and define $\widetilde{E}_{w}\subseteq\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ to be the stabilizer of $\widetilde{\mathrm{Std}}(w)$. 
So $\widetilde{E}_{w}$ consists of those $(X^{\bullet},Y^{\bullet})\in\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ such that $X^{\bullet}\mathcal{C}_{w}^{\bullet}=\mathcal{C}_{w}^{\bullet}$, $Y^{\bullet}\mathcal{D}_{w}^{\bullet}=\mathcal{D}_{w}^{\bullet}$, and $Y^{\bullet}\circ\varphi_{j}^{\bullet}\circ X^{\bullet,-(p)}=\varphi_{j}^{\bullet}$ for $j=0,1$. In the notation of [SYZ19, Lemma 3.3.2] we have $\widetilde{E}_{w}=\overline{\mathcal{G}}_{w}\times_{\overline{\mathcal{G}}_{w}^{L,(p)}}\overline{\mathcal{G}}_{\sigma(w)^{-1}}.$ (2.46) Here $\overline{\mathcal{G}}_{w}^{L}$ is the image of $\overline{\mathcal{G}}_{w}$ in $\mathrm{DiagAut}(\mathcal{C}_{w}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet})$ (the automorphisms of $\mathcal{C}_{w}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet}$ respecting both $\mathcal{C}_{w}^{\bullet}$ and $\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet}$). The orbit of $\widetilde{\mathrm{Std}}(w)$ in $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ is the fppf quotient $(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})/\widetilde{E}_{w}$, cf. [DG80, II, § 5, no. 3]. ###### Lemma 2.47. We have commutative diagrams $\overline{\mathcal{G}}_{w}$$\overline{\mathcal{G}}$$\overline{\mathcal{G}}^{\mathrm{rdt}}$$\bar{P}_{J_{1}}$ and $\overline{\mathcal{G}}_{\sigma(w)^{-1}}$$\overline{\mathcal{G}}$$\overline{\mathcal{G}}^{\mathrm{rdt}}$$\bar{P}_{\sigma^{\prime}(J_{1})}$ and $\overline{\mathcal{G}}_{w}^{L}$$\overline{\mathcal{G}}$$\overline{\mathcal{G}}^{\mathrm{rdt}}$$\bar{L}_{J_{1}}$. ###### Proof: This follows from Proposition 2.23. □ ###### Lemma 2.48. The image of $\widetilde{E}_{w}$ under $\overline{\mathcal{G}}\times\overline{\mathcal{G}}\to\overline{\mathcal{G}}^{\mathrm{rdt}}\times\overline{\mathcal{G}}^{\mathrm{rdt}}$ is $E_{\mathcal{Z}_{w}}$. ###### Proof: This follows from Lemma 2.47. □ ###### Lemma 2.49. Assume $0\in J$. The $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbit of $\widetilde{\mathrm{Std}}(w)$ for $w\in\mathrm{Adm}(\\{\mu\\})^{K}$ depends only on $W_{K}wW_{K}$. ###### Proof: Let $x,y\in W_{K}\subseteq W$. As above we get endomorphisms $x^{\bullet},y^{\bullet}$ of $\mathcal{V}^{\bullet}$, which in this case are in fact automorphisms. Now $\widetilde{\mathrm{Std}}(w)=((y^{\bullet})^{-1},\sigma(x^{\bullet}))\cdot\widetilde{\mathrm{Std}}(w)$. □ ###### Definition 2.50. Define $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$ to be the union of the $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbits of the standard zips $\widetilde{\mathrm{Std}}(w)$ for $w\in\mathrm{Adm}(\\{\mu\\})^{K}$. Here an orbit by definition is the image of the orbit map endowed with the reduced subscheme structure, and—as we prove just below—the union of orbits just referred to is a closed subset, which we again endow with the reduced subscheme structure. Define $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}:=[\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}]\subseteq[\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}]=\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$. ###### Lemma 2.51. $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$ is a closed subset of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$. 
###### Proof: This being a purely topological question, we may freely pass to perfections, which will be convenient since Dieudonné theory is simpler over perfect rings. By “perfection” we mean the inverse perfection in the terminology of [BG18, Section 5]. Consider therefore $(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}$ as a sheaf on $\mathrm{Perf}_{\mathbb{F}_{p}}$, the fpqc site of affine perfect $\mathbb{F}_{p}$-schemes. Again denoting the standard Siegel lattice chain over $\mathbb{Z}_{p}$ by $\mathscr{V}^{\bullet}$ and its base change to $\mathbb{F}_{p}$ by $\mathcal{V}^{\bullet}$, we can describe the elements of $(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}(R)=\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}(R)$, where $R$ is a perfect $\mathbb{F}_{p}$-algebra as being given by > homomorphisms > $\mathcal{V}_{R}^{\bullet,(p)}\xrightarrow{F^{\bullet}}\mathcal{V}_{R}^{\bullet}\xrightarrow{V^{\bullet}}\mathcal{V}_{R}^{\bullet,(p)}$ > such that > $\ker(F^{\bullet})=:\mathcal{C}^{\bullet,(p)}=\operatorname{im}(V^{\bullet})$ > and > $\operatorname{im}(F^{\bullet})=:\mathcal{D}^{\bullet}=\ker(V^{\bullet})$ > and $\psi_{j}(F^{j}\\_,\\_)=u\sigma(\psi_{j}(\\_,V^{-j}\\_))$ for some $u\in > R^{\times}$ and $\mathcal{C}^{\bullet,(p)},\mathcal{D}^{\bullet}$ have the > same rank (namely $g$). To see that $\mathcal{C}^{\bullet,(p)},\mathcal{D}^{\bullet}$ are direct summands of $\mathcal{V}_{R}^{\bullet,(p)},\mathcal{V}_{R}^{\bullet}$ (which makes the last part of the characterization given above meaningful), one argues as in [Lau14, Lemma 2.4] (since both are finitely presented, it is enough to show flatness and to that end, one looks at the fiber dimensions). Define a presheaf $\mathcal{X}$ on $\mathrm{Sch}_{\mathbb{Z}_{p}}$ in the same way but for the following changes: $\mathcal{V}^{\bullet}$ is replaced by $\mathscr{V}^{\bullet}$, and we impose the condition that both compositions $F^{\bullet}\circ V^{\bullet}$ and $V^{\bullet}\circ F^{\bullet}$ are multiplication by $p$, and the $\ker=\operatorname{im}$-conditions are only required to hold modulo $p$. We also slightly reformulate these $\ker=\operatorname{im}$-conditions: We impose the condition that the reductions $\bar{F}^{\bullet},\bar{V}^{\bullet}$ be fiberwise of rank $g$ over $R/p$. (Note that the argument that $\mathcal{C}^{\bullet,(p)},\mathcal{D}^{\bullet}$ are direct summands only works over reduced rings.) Then $\mathcal{X}$ is a separated $\mathbb{Z}_{p}$-scheme. To see this, we build it up from scratch as follows. $\operatorname{End}(\mathscr{V}^{j})$ obviously is a $\mathbb{Z}_{p}$-scheme (an affine space), hence so is $\operatorname{Hom}(\mathscr{V}^{j,(p)},\mathscr{V}^{j})$ since $\mathscr{V}_{j}^{(p)}\cong\mathscr{V}_{j}$. $\operatorname{Hom}(\mathscr{V}^{\bullet,(p)},\mathscr{V}^{\bullet})$ is a locally closed subscheme of a finite product of such schemes. Homomorphisms $\mathscr{V}^{\bullet,(p)}\xrightarrow{F^{\bullet}}\mathscr{V}^{\bullet}\xrightarrow{V^{\bullet}}\mathscr{V}^{\bullet,(p)}$ such that both compositions are multiplication by $p$ form a closed subscheme $\mathcal{X}^{\prime}$ of $\operatorname{Hom}(\mathscr{V}^{\bullet,(p)},\mathscr{V}^{\bullet})\times\operatorname{Hom}(\mathscr{V}^{\bullet},\mathscr{V}^{\bullet,(p)})$. In the special fiber $\mathcal{X}^{\prime}_{\mathbb{F}_{p}}$ we now consider the $\ker=\operatorname{im}$-conditions and show that they define an open subscheme $\bar{\mathcal{X}}^{\prime\prime}$. 
Then $\mathcal{X}=\mathcal{X}^{\prime}\times_{\mathcal{X}^{\prime}_{\mathbb{F}_{p}}}\bar{\mathcal{X}}^{\prime\prime}$. Indeed, the extra conditions are that all $F^{\bullet},V^{\bullet}$ have some non-vanishing $g$-minor—evidently open conditions. The upshot is that we defined a $\mathbb{Z}_{p}$-scheme $\mathcal{X}$ such that $(\mathcal{X}\times_{\mathbb{Z}_{p}}\mathbb{F}_{p})^{\mathrm{perf}}=(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}$ and such that we have an obvious morphism $\tilde{\mathscr{S}}_{K}\to\mathcal{X}$, which takes a principally polarized isogeny chain of abelian schemes to the evaluation of the Dieudonné crystal on the trivial thickening.141414This makes use of the crystalline-de Rham comparison to make a trivialization of the de Rham cohomology into a trivialization of the crystalline cohomology. Observe that $\mathcal{X}$ also has a natural $\mathcal{G}_{K}\times\mathcal{G}_{K}$-action: We interpret $\mathcal{G}_{K}$ as $\operatorname{Aut}(\mathscr{V}^{\bullet})$ and the action of $(X^{\bullet},Y^{\bullet})$ transforms $(F^{\bullet},V^{\bullet})$ into $(Y^{\bullet}\circ F^{\bullet}\circ X^{\bullet,-(p)},X^{(p)}\circ V^{\bullet}\circ Y^{\bullet,-1})$. The identity $(\mathcal{X}\times_{\mathbb{Z}_{p}}\mathbb{F}_{p})^{\mathrm{perf}}=(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}$ is an identity of $\overline{\mathcal{G}}_{K}^{\mathrm{perf}}\times\overline{\mathcal{G}}_{K}^{\mathrm{perf}}$-varieties. Now we claim that $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}=\left(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}}\right)^{\mathrm{perf}}$ topologically, where $\overline{\mathcal{X}_{\mathbb{Q}_{p}}}$ is the flat closure of the generic fiber in $\mathcal{X}$. This of course implies $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}\subseteq\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ being closed. Both sets are constructible, so it suffices to check it on a very dense subset, say the $\bar{\mathbb{F}}_{p}$-valued points. Using Lemmas 1.25 and 1.27, we see that $(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}})(\bar{\mathbb{F}}_{p})$ consists precisely of those elements $\bar{x}\in\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}(\bar{\mathbb{F}}_{p})$ such that there exists a finite field extension $L/\breve{\mathbb{Q}}_{p}$ and a point $x\in\mathcal{X}(\mathcal{O}_{L})$ lifting $\bar{x}$. (We’ll also say that $\bar{x}$ is _liftable_ in this situation.) Since $\mathcal{G}_{K}$ is flat over $\mathbb{Z}_{p}$, this liftability condition for $\mathcal{G}_{K}$ (in lieu of $\mathcal{X}$) is always satisfied. Consequently, $(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}})(\bar{\mathbb{F}}_{p})$ is stable under the $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-action. Also, the standard zips clearly are liftable. Thus, $(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}})(\bar{\mathbb{F}}_{p})\supseteq\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}(\bar{\mathbb{F}}_{p})$. For the converse inclusion, there are injective maps from $\mathcal{X}(\mathcal{O}_{L})$ to $\mathcal{X}(L)$ to $\mathcal{G}_{K}(L)$ such that the corresponding Schubert cell (in the local model) is indexed by the image mod $\mathcal{G}_{K}(\mathcal{O}_{L})\times\mathcal{G}_{K}(\mathcal{O}_{L})^{\mathrm{op}}$, cf. 
Proposition 2.5. (Note that $\mathcal{G}_{K}(\mathcal{O}_{L})\backslash\mathcal{G}_{K}(L)/\mathcal{G}_{K}(\mathcal{O}_{L})\cong W_{K}\backslash\widetilde{W}/W_{K}$ for every strictly henselian discretely valued field $L$ by [HR08, Prop. 8]; also, in the construction of $\widetilde{W}$ and $W_{K}$, any such field, not just $L=\breve{\mathbb{Q}}_{p}$, can be used.) This proves the claim, since we know which Schubert cells belong to the local model. □

###### Remark 2.52. Regarding the orbit closure relations for $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$, let us point out that $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}\to\bar{M}^{\mathrm{loc}}_{K}$ is $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-equivariant, where the action of $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ on $M^{\mathrm{loc}}_{K}$ factors through the first projection map, and this map is a bijection on orbits. Writing $w^{\prime}\preceq w$ if $(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\mathrm{Std}(w^{\prime})\subseteq\overline{(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\mathrm{Std}(w)}$, it follows from these observations that $w^{\prime}\leq w$ implies $w^{\prime}\preceq w$. Here $\leq$ is the Bruhat order on $W_{K}\backslash\widetilde{W}/W_{K}$ as explained in [PRS13, section 4.2]. It appears reasonable to suspect that $\preceq$ and $\leq$ in fact agree.

###### Conjecture 2.53. The closure of ${(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\widetilde{\mathrm{Std}}(w)}$ is given by the disjoint union of ${(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\widetilde{\mathrm{Std}}(w^{\prime})}$ for $w^{\prime}\leq w$.

###### Lemma 2.54. The map $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ factors through $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}$.

###### Proof: It is sufficient to check this on $k=\bar{\mathbb{F}}_{p}$-valued points. The map $\overline{\mathscr{S}}_{K}(k)\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}(k)$ factors through $\Upsilon_{K}\colon\overline{\mathscr{S}}_{K}(k)\to\bigcup_{w\in\mathrm{KR}(K,\\{\mu\\})}\breve{K}w\breve{K}/\breve{K}_{\sigma}$ with $\breve{K}w\breve{K}/\breve{K}_{\sigma}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}(k)$ given by sending $xwy$ to $(\bar{y}^{-1},\sigma(\bar{x}))\cdot\mathrm{Std}(w)$ (similar to Lemma 2.49). □

#### 2.2.3 An explicit description of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$

In order to get a better feeling for the passage from $\overline{\mathcal{G}}_{K}$ to the maximal reductive quotient $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}=\overline{\mathcal{G}}_{K}/R_{u}\overline{\mathcal{G}}_{K}$ (with $R_{u}\overline{\mathcal{G}}_{K}$ being the unipotent radical of $\overline{\mathcal{G}}_{K}$), which is key in the definition of the EKOR stratification, we describe $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ in explicit, linear-algebraic terms in the Siegel case. Let $(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$ be the standard Siegel lattice chain on $S$ of type $J$. Assume $0\in J$. In what follows, we sometimes use $j$ as a shorthand for $\mathcal{V}^{j}$. By a _symmetric transition map_, we mean a transition map from $j^{\prime}$ to $j^{\prime\prime}$, where, for some $n\in\mathbb{Z}$, we have $j^{\prime},j^{\prime\prime}\in J$, $ng\geq j^{\prime}\geq j^{\prime\prime}>(n-2)g$, and $j^{\prime}+j^{\prime\prime}\in 2g\mathbb{Z}$.
We will also call this the symmetric transition map of $(j^{\prime},n)$ (or of $j^{\prime}$ if $n$ doesn’t matter). By a _one-sided transition map_, we mean a transition map from $j^{\prime}$ to $j^{\prime\prime}$, where, for some $n\in\mathbb{Z}$, we have $j^{\prime},j^{\prime\prime}\in J$ and $ng\geq j^{\prime}\geq j^{\prime\prime}\geq(n-1)g$. Call it right-anchored if $j^{\prime}=ng$ and left-anchored if $j^{\prime\prime}=(n-1)g$. We then also speak of the right-anchored transition map of $j^{\prime\prime}$ and the left-anchored transition map of $j^{\prime}$, respectively. The kernels of the symmetric transition maps are symplectic subbundles of $\mathcal{O}_{S}^{2g}$ (even of the form $\mathcal{O}_{S}^{I}$, where $I\subseteq\\{\pm 1,\dotsc,\pm g\\}$ is symmetric (i.e., $-I=I$)), and the kernels of the one-sided transition maps are totally isotropic subbundles (even of the form $\mathcal{O}_{S}^{I}$, where $I\subseteq\\{1,\dotsc,g\\}$ or $I\subseteq\\{-1,\dotsc,-g\\}$). Let $\mathcal{O}_{S}^{I_{j}}$ be the kernel of the symmetric transition map of $j$. Then $I_{j}\sqcup I_{-j}=\\{\pm 1,\dotsc,\pm g\\}$. Every kernel of a one-sided transition map is a subbundle of a kernel of an anchored transition map, inside of which it is complemented by the kernel of another one-sided transition map. The kernel of the left-anchored transition map of $j$ is a subbundle of the kernel of the symmetric transition map of $-j$, inside of which it is complemented by the kernel of the right-anchored transition map of $-j$. Likewise, the kernel of the right-anchored transition map of $j$ is a subbundle of the kernel of the symmetric transition map of $j$, inside of which it is complemented by the kernel of the left-anchored transition map of $-j$. Now consider the standard symplectic bundle $\mathcal{O}_{S}^{2g}$ together with the kernels of all the symmetric transition maps and all the one-sided transition maps. So we have a symplectic bundle with a bunch of symplectic subbundles coming in complementary pairs, some of which come with a further decomposition into complementary Lagrangians, some of which come with further decompositions into complementary subbundles (of course still totally isotropic). We will also call these kernels _distinguished subspaces_. Below we prove that $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is the automorphism group scheme $\mathcal{A}$ of these data. Clearly, $\mathcal{A}$ is reductive; in fact it is a Levi subgroup of a parabolic of $\operatorname{GSp}_{2g}$. We have a map $\overline{\mathcal{G}}_{K}\to\mathcal{A}$; the image of an $S$-point $f^{\bullet}$ under $\overline{\mathcal{G}}_{K}\to\mathcal{A}$ acts on the kernel of a transition map starting at $j$ as $f^{j}$. Note that $f^{j}=\tau\circ f^{j}$ on $\ker(\tau)$ for every transition map $\tau$ starting at $j$. The map $\overline{\mathcal{G}}_{K}\to\mathcal{A}$ has a natural section $\mathcal{A}\to\overline{\mathcal{G}}_{K}$, where in the image all the $f^{j}$ are the same as automorphisms of $\mathcal{O}_{S}^{2g}$. (This is well-defined!)

###### Proposition 2.55. $\mathcal{A}=\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$.

###### Proof: Let us show that $\mathcal{K}:=\ker(\overline{\mathcal{G}}_{K}\to\mathcal{A})$ is unipotent. Consider $\overline{\mathcal{G}}_{K}$ as a subgroup of $\prod_{j\in J/2g\mathbb{Z}}\operatorname{GL}_{2g}\subseteq\operatorname{GL}_{N}$.
We claim that said kernel is contained in $\prod_{j\in J/2g\mathbb{Z}}U^{(j)}$, $U^{(j)}$ being a conjugate of the standard unipotent subgroup $\left(\begin{smallmatrix}1&\ast&\ast&\dotsb&\ast\\\ &1&\ast&\dotsb&\ast\\\ &&\ddots&\dotsb&\vdots\end{smallmatrix}\right)$ of $\operatorname{GL}_{2g}$. Indeed, say $f^{\bullet}$ is in the kernel. Then $f^{j}$ acts as the identity on the kernel of the symmetric transition map of $j$ and $f^{-j}$ acts as the identity on the kernel of the symmetric transition map of $-j$. On the image of the symmetric transition map $\tau_{j}$ of $j$, $f^{-j}$ agrees with $\tau_{j}\circ f^{j}$. Note that $\operatorname{im}(\tau_{j})=\ker(\tau_{-j})$. So $\tau_{j}\circ f^{j}$ is the identity on $\ker(\tau_{-j})$. Hence, if $x\in\ker(\tau_{-j})$, then $x=\tau_{j}(x)$ and $f^{j}(x)\equiv x\mod\ker(\tau_{j})$. Thus with respect to the decomposition $\ker(\tau_{j})\oplus\ker(\tau_{-j})$, $f^{j}$ is of the form $\begin{pmatrix}1&\ast\\\ &1\end{pmatrix}$. Now we have $\overline{\mathcal{G}}_{K}=\mathcal{A}\ltimes\mathcal{K}$, in particular $\overline{\mathcal{G}}_{K}\cong\mathcal{A}\times_{\mathbb{F}_{p}}\mathcal{K}$ as schemes. Since both $\overline{\mathcal{G}}_{K}$ and $\mathcal{A}$ are reduced and connected, so is $\mathcal{K}$. All in all, we see that $\mathcal{A}$ is indeed $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ and $\mathcal{K}=R_{u}\overline{\mathcal{G}}_{K}$ is the unipotent radical of $\overline{\mathcal{G}}_{K}$. □

###### Example 2.56.

* • If $J=\mathbb{Z}$, then $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}=\mathbb{G}_{m}^{g+1}$ is the standard maximal torus of $\operatorname{GSp}_{2g}$.
* • If $g=2$ and $J=2\mathbb{Z}$, then $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is the automorphism group of the standard twisted symplectic space $\mathbb{F}_{p}^{4}$ with its standard Lagrangian decomposition, i.e., $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\cong\operatorname{GL}_{2}\times\mathbb{G}_{m}$.
* • If $g=2$ and $J/2g\mathbb{Z}=\\{-1,0,1\\}$, then $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is the automorphism group of the standard twisted symplectic space $\mathbb{F}_{p}^{4}$ with its standard decomposition in twisted symplectic subspaces and the totally isotropic rank-$1$ subbundles generated by $e_{\pm 1}$, i.e., $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\cong\operatorname{GL}_{2}\times\mathbb{G}_{m}$.
* • Let $g=8$. We have the local Dynkin diagram (figure omitted), where we labelled the simple affine roots as follows: $1-2e_{-1}+e_{0}$ is labelled 0, $e_{-i}-e_{-(i+1)}$ is labelled $i$ for $1\leq i\leq 7$, and $2e_{-8}-e_{0}$ is labelled 8. Consider $J/2g\mathbb{Z}=\\{0,\pm 3,\pm 5\\}$. Then the Dynkin diagram of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ should (according to [Tit79, 3.5.1]) be the one we get by removing $0,3,5$ and the adjacent edges. So we expect something along the lines of (i.e., having the same Dynkin diagram as) $\operatorname{GSp}(6)\times\operatorname{GL}(2)\times\operatorname{GL}(3)$.
We have the following (bases of) kernels of symmetric transition maps: $\\{\pm 1,\pm 2,\pm 3\\},\\{\pm 4,\pm 5,\pm 6,\pm 7,\pm 8\\},\\{\pm 1,\pm 2,\pm 3,\pm 4,\pm 5\\},\\{\pm 6,\pm 7,\pm 8\\},$ and the following kernels of one-sided transition maps: $\displaystyle\\{-3,-2,-1\\},\\{-5,-4\\},\\{-5,-4,-3,-2,-1\\},$ $\displaystyle\\{4,5\\},\\{1,2,3,4,5\\},\\{1,2,3\\}.$ So an element $A$ of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is given by specifying linear automorphisms $A_{123}$ of $\langle 1,2,3\rangle$ and $A_{45}$ of $\langle 4,5\rangle$ and a symplectic similitude $A_{\pm 6,\pm 7,\pm 8}$ of $\langle\pm 6,\pm 7,\pm 8\rangle$, such that $\left.A\right|_{\langle 1,2,3\rangle}=A_{123}$, $\left.A\right|_{\langle 4,5\rangle}=A_{45}$, $\left.A\right|_{\langle\pm 6,\pm 7,\pm 8\rangle}=A_{\pm 6,\pm 7,\pm 8}$, where $\left.A\right|_{\langle-1,-2,-3\rangle}$ is uniquely determined by $A_{123}$, $c(A_{\pm 6,\pm 7,\pm 8})$ ($c$ being the multiplier character) and the requirement that $A$ be a symplectic similitude, and similarly for $\left.A\right|_{\langle-4,-5\rangle}$. If for example we consider $J/2g\mathbb{Z}=\\{0,\pm 2,\pm 3,\pm 5\\}$ instead, we expect something along the lines of $\operatorname{GSp}(6)\times\operatorname{GL}(2)\times\operatorname{GL}(2)$ and indeed we additionally get the subbundles $\displaystyle\\{-2,-1\\},\\{1,2\\},\\{-3\\},\\{-5,-4,-3\\},$ $\displaystyle\\{3,4,5\\},\\{3\\},\\{3,4,5,6,7,8,-8,-7,-6,-5,-4,-3\\},\\{1,2,-2,-1\\}.$ So an element $A$ of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is given by specifying linear automorphisms $A_{12}$ of $\langle 1,2\rangle$ and $A_{45}$ of $\langle 4,5\rangle$ and a symplectic similitude $A_{\pm 6,\pm 7,\pm 8}$ of $\langle\pm 6,\pm 7,\pm 8\rangle$ in a similar way to above.

#### 2.2.4 $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ in the Siegel case

Recall that we denote the unipotent radical of $\overline{\mathcal{G}}_{K}$ by $R_{u}\overline{\mathcal{G}}_{K}$. We divide out of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$ the action of the smooth normal subgroup $R_{u}\overline{\mathcal{G}}_{K}\times R_{u}\overline{\mathcal{G}}_{K}\subseteq\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ and observe that $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ still acts on $[R_{u}\overline{\mathcal{G}}_{K}\times R_{u}\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}]=:\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}$ (not a scheme). We also define $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}:=[(\Delta({\overline{\mathcal{G}}_{K}})\cdot(R_{u}\overline{\mathcal{G}}_{K}\times R_{u}\overline{\mathcal{G}}_{K}))\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}]$.

###### Proposition 2.57.
We have well-defined morphisms $\displaystyle(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})/\widetilde{E}_{w}$ $\displaystyle\to(\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\times\overline{\mathcal{G}}_{K}^{\mathrm{rdt}})/E_{\mathcal{Z}_{w}},$ $\displaystyle\quad(X,Y)$ $\displaystyle\mapsto(X^{\mathrm{rdt}},Y^{\mathrm{rdt}}),$ $\displaystyle\overline{\mathcal{G}}_{K}/\widetilde{E}_{w}$ $\displaystyle\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}/E_{\mathcal{Z}_{w}},$ $\displaystyle\quad X$ $\displaystyle\mapsto X^{\mathrm{rdt}},$ and a bijection $(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})/(\widetilde{E}_{w}\cdot(R_{u}\overline{\mathcal{G}}_{K}\times R_{u}\overline{\mathcal{G}}_{K}))\to(\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\times\overline{\mathcal{G}}_{K}^{\mathrm{rdt}})/E_{\mathcal{Z}_{w}}.$

###### Proof: The first assertion follows from the definition of $E_{\mathcal{Z}_{w}}$ and equation (2.46). The second then follows from Lemma 2.48. □

###### Lemma 2.58. Assume $0\in J$. The underlying topological spaces of the stacks in consideration are as follows:

1. (1) $|[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}]|=\mathrm{KR}(K,\\{\mu\\})\overset{\text{def.}}{=}W_{K}\backslash(W_{K}\mathrm{Adm}(\\{\mu\\})W_{K})/W_{K}$.
2. (2) $|\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}|=\mathrm{EKOR}(K,\\{\mu\\})=\mathrm{Adm}(\\{\mu\\})^{K}\cap{}^{K}\widetilde{W}$ $\cong\bigcup_{w\in\mathrm{KR}(K,\\{\mu\\})}\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$.

###### Proof: (1) is well-known as explained in Section 2.1.2. (2): By Lemma 2.49, the $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbits in $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$ are indexed by $\mathrm{Adm}(\\{\mu\\})_{K}=\mathrm{KR}(K,\\{\mu\\})$. Let us further investigate the $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbit of $\widetilde{\mathrm{Std}}(w)$ in $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}$ for some fixed $w\in\mathrm{Adm}(\\{\mu\\})^{K}$. By Proposition 2.57, its underlying topological space agrees with that of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\sim,\mathcal{Z}_{w}}$. By [SYZ19] we know that $|\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}|\cong\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$, whence the lemma. □

###### Corollary 2.59. We have a morphism $\left(\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}_{w}\right)_{\mathrm{red}}=\text{orbit of }\mathrm{Std}(w)\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}.$ This defines the EKOR stratification on $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}_{w}$. All in all, we get an EKOR stratification on $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}$. The morphism factors through $(\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}_{w})_{\mathrm{red}}$, and $(\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}_{w})_{\mathrm{red}}\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}$ is an isomorphism.

###### Corollary 2.60.
For every point of $[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$, $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ is smooth as a map between the associated reduced fiber of $\overline{\mathscr{S}}_{K}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$ and the associated reduced fiber of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$.

###### Proof: This follows from the preceding corollary by [SYZ19, Theorem A] (which says that the map $\overline{\mathscr{S}}_{K}^{w}\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}$ is smooth, cf. subsection 2.1.4). □

The key obstacle in going forward toward proving smoothness of $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ now is that we do not know whether the fibers of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$ are reduced.

###### Conjecture 2.61. We conjecture that the answer is affirmative. In fact, we conjecture that $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$ is smooth.

###### Corollary 2.62. $\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ is surjective.

###### Proof: This follows from the description of the topological space and what is already known from [HR17, first paragraph of section 6.3]. □

We get a commutative diagram relating $\tilde{\overline{\mathscr{S}}}_{K}$, $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$, $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}$, $\overline{\mathscr{S}}_{K}$, $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}$, $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$, and $[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}_{K}]$ (diagram omitted).

###### Remark 2.63. Since $R_{u}\overline{\mathcal{G}}_{K}$ is smooth, $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}$ is smooth.

###### Remark 2.64. Another open question at this point is: what is the relationship between $\overline{\mathcal{G}}_{K}\text{-EKORZip}^{\mathrm{perf}}$ and the shtuka approach of [SYZ19, Section 4]?

###### Remark 2.65. It should be straightforward to generalize (taking into account the extra structure) our constructions to those (P)EL cases where the local model is the “naive” local model of Rapoport-Zink [RZ96].

#### 2.2.5 The example of $\operatorname{GSp}(4)$

To illustrate some aspects, we look at the example $2g=4$.

##### The apartment.

We describe the (extended) apartment. We follow the general outline of [Lan00], in particular as far as notation is concerned. The roots are $\pm(2e_{1}-e_{0}),\pm(2e_{2}-e_{0}),\pm(e_{1}-e_{2}),\pm(e_{1}+e_{2}-e_{0})$. The simple affine roots and the (various variants of the) Weyl group are as described in Remark 2.26. The root one-parameter subgroups (the parameter being additive here; i.e., we’re talking about homomorphisms $\mathbb{G}_{a}\to G$)
are given as follows: $\displaystyle u_{e_{1}-e_{2}}(x)$ $\displaystyle=\begin{pmatrix}1&x&&\\\ &1&&\\\ &&1&-x\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{2}-e_{1}}(x)$ $\displaystyle=\begin{pmatrix}1&&&\\\ x&1&&\\\ &&1&\\\ &&-x&1\end{pmatrix},$ $\displaystyle u_{2e_{1}-e_{0}}(x)$ $\displaystyle=\begin{pmatrix}1&&&x\\\ &1&&\\\ &&1&\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{0}-2e_{1}}(x)$ $\displaystyle=\begin{pmatrix}1&&&\\\ &1&&\\\ &&1&\\\ x&&&1\end{pmatrix},$ $\displaystyle u_{2e_{2}-e_{0}}(x)$ $\displaystyle=\begin{pmatrix}1&&&\\\ &1&x&\\\ &&1&\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{0}-2e_{2}}(x)$ $\displaystyle=\begin{pmatrix}1&&&\\\ &1&&\\\ &x&1&\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{1}+e_{2}-e_{0}}(x)$ $\displaystyle=\begin{pmatrix}1&&x&\\\ &1&&x\\\ &&1&\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{0}-e_{1}-e_{2}}(x)$ $\displaystyle=\begin{pmatrix}1&&&\\\ &1&&\\\ x&&1&\\\ &x&&1\end{pmatrix}$ For $a\in R$ define $w_{a}(x):=u_{a}(x)u_{-a}(-x^{-1})u_{a}(x)$. ###### Remark 2.66. $N(\mathbb{Q}_{p})$ is generated by $T(\mathbb{Q}_{p})$ and all $w_{a}(x)$ as above. ###### Remark 2.67. $w_{a}(x)=m(u_{-a}(-x^{-1}))$ in Landvogt’s notation [Lan00]. We have $V_{1}:=X_{*}(T)\otimes\mathbb{R}=\\{(x_{1},x_{2},x_{-2},x_{-1})\in\mathbb{R}^{4}\;|\;x_{1}+x_{-1}=x_{2}+x_{-2}\\}$ and $\nu_{1}\colon T(\mathbb{Q}_{p})\to V_{1},\;\begin{pmatrix}d_{1}&&&\\\ &d_{2}&&\\\ &&cd_{2}^{-1}&\\\ &&&cd_{1}^{-1}\end{pmatrix}\mapsto\begin{pmatrix}-v_{p}(d_{1})\\\ -v_{p}(d_{2})\\\ -v_{p}(cd_{2}^{-1})\\\ -v_{p}(cd_{1}^{-1})\end{pmatrix}.$ Also, $V_{0}=\\{v\in V_{1}\;|\;a(v)=0\;\forall a\in\Phi\\}=\mathbb{R}(1,1,1,1)$, $V:=V_{1}/V_{0}$. The extended apartment $A=A^{\mathrm{ext}}$ now is an affine $V_{1}$-space together with the map $\nu_{1}\colon N(\mathbb{Q}_{p})\to\operatorname{Aff}(A)=\operatorname{GL}(V_{1})\ltimes V_{1}$, whose restriction to $T(\mathbb{Q}_{p})$ is given as above and (cf. Remark 2.66) $\displaystyle\nu_{1}(w_{2e_{1}-e_{0}}(x))$ $\displaystyle=(\left(\begin{smallmatrix}&&&1\\\ &1&&\\\ &&1&\\\ 1&&&\end{smallmatrix}\right),\left(\begin{smallmatrix}-v_{p}(x)\\\ 0\\\ 0\\\ v_{p}(x)\end{smallmatrix}\right)),$ $\displaystyle\nu_{1}(w_{2e_{2}-e_{0}}(x))$ $\displaystyle=(\left(\begin{smallmatrix}1&&&\\\ &&1&\\\ &1&&\\\ &&&1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\\ -v_{p}(x)\\\ v_{p}(x)\\\ 0\end{smallmatrix}\right)),$ $\displaystyle\nu_{1}(w_{e_{1}-e_{2}}(x))$ $\displaystyle=(\left(\begin{smallmatrix}&1&&\\\ 1&&&\\\ &&&1\\\ &&1&\end{smallmatrix}\right),\left(\begin{smallmatrix}-v_{p}(x)\\\ v_{p}(x)\\\ -v_{p}(x)\\\ v_{p}(x)\end{smallmatrix}\right)),$ $\displaystyle\nu_{1}(w_{e_{1}+e_{2}-e_{0}}(x))$ $\displaystyle=(\left(\begin{smallmatrix}&&1&\\\ &&&1\\\ 1&&&\\\ &1&&\end{smallmatrix}\right),\left(\begin{smallmatrix}-v_{p}(x)\\\ -v_{p}(x)\\\ v_{p}(x)\\\ v_{p}(x)\end{smallmatrix}\right)),$ etc. (Recipe: Write $w_{a}(x)$ as a product of a diagonal matrix $\operatorname{diag}(d_{1},d_{2},d_{-2},d_{-1})$ and a permutation matrix $P$ (this need not be a factorization in $\operatorname{GSp}(4)$); then $\nu_{1}(w_{a}(x))=(P,\left(\begin{smallmatrix}-v_{p}(d_{1})\\\ -v_{p}(d_{2})\\\ -v_{p}(d_{-2})\\\ -v_{p}(d_{-1})\end{smallmatrix}\right)).)$ The reduced apartment $A^{\mathrm{red}}$ is the affine $V$-space together with $\nu\colon N(\mathbb{Q}_{p})\to\operatorname{Aff}(A^{\mathrm{red}})=\operatorname{GL}(V)\ltimes V$ given by the same formulas. 
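For instance, one can carry out the recipe above explicitly for $a=2e_{1}-e_{0}$ (a small worked example we add here for illustration): multiplying out the definition gives $w_{2e_{1}-e_{0}}(x)=u_{2e_{1}-e_{0}}(x)u_{e_{0}-2e_{1}}(-x^{-1})u_{2e_{1}-e_{0}}(x)=\left(\begin{smallmatrix}&&&x\\\ &1&&\\\ &&1&\\\ -x^{-1}&&&\end{smallmatrix}\right)=\operatorname{diag}(x,1,1,-x^{-1})\cdot\left(\begin{smallmatrix}&&&1\\\ &1&&\\\ &&1&\\\ 1&&&\end{smallmatrix}\right),$ and since $-v_{p}(-x^{-1})=v_{p}(x)$, the recipe recovers exactly the value of $\nu_{1}(w_{2e_{1}-e_{0}}(x))$ displayed above.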
The walls (or rather, wall conditions) are given as follows ($n\in\mathbb{Z}$): $2e_{1}-e_{0}:\ n=x_{0}-2x_{1}$, $\quad 2e_{2}-e_{0}:\ n=x_{0}-2x_{2}$, $\quad e_{1}-e_{2}:\ n=x_{2}-x_{1}$, $\quad e_{1}+e_{2}-e_{0}:\ n=x_{0}-x_{1}-x_{2}$.

Figure 1: The reduced apartment with the base alcove highlighted.

##### Lattice chains and parahoric subgroups.

By [BT84a], the extended building $\mathcal{B}(\operatorname{GL}(X),\mathbb{Q}_{p})$ is in bijection with norms $\alpha\colon X\to\mathbb{R}\cup\\{\infty\\}$ (the defining conditions for a norm being $\alpha(tx)=\alpha(x)+\operatorname{ord}_{p}(t)$, $\alpha(x+y)\geq\min(\alpha(x),\alpha(y))$, and $\alpha(x)=\infty\iff x=0$). Norms in turn are in bijection with graded lattice chains (cf. Remark 1.10). Indeed, if $\alpha$ is a norm, define $\Delta_{\alpha}$ to be the set of its balls centered around zero and $c_{\alpha}(\Lambda):=\inf_{\lambda\in\Lambda}\alpha(\lambda)$. Conversely, given a graded lattice chain $(\Delta,c)$, define a norm $\alpha$ by $\alpha(x):=c(\Lambda)$ for the smallest $\Lambda\in\Delta$ with $x\in\Lambda$. To go from the extended apartment of $\operatorname{GL}(X)$, an affine $\mathbb{R}^{n}$-space, where $n=\dim X$, to norms, fix a basis $e_{1},\dotsc,e_{n}$ of $X$. Then $v\in\mathbb{R}^{n}$ corresponds to the norm $\alpha_{v}$ with $\alpha_{v}(\sum t_{i}e_{i})=\min_{i}(\operatorname{ord}_{p}(t_{i})-v_{i}).$ There are seven types of points in the extended apartment (in each case we choose one in the base alcove to represent all of its type) corresponding to the vertices, edges and interior of the base alcove:

* • standard hyperspecial: $x_{\mathrm{hs}}=(0,0,0,0)$
* • paramodular: $x_{\mathrm{paramod}}=(-1/2,0,0,1/2)$
* • Klingen: $x_{\mathrm{Klingen}}=(-1/4,0,0,1/4)$
* • Siegel: $x_{\mathrm{Siegel}}=(-1/4,-1/4,1/4,1/4)$
* • Iwahori: $x_{\mathrm{Iwahori}}=(-1/4,-1/8,1/8,1/4)$
* • another hyperspecial: $x=(-1/2,-1/2,1/2,1/2)$
* • another parahoric: $x=(-1/2,-1/4,1/4,1/2)$

The last two are conjugates (by the Atkin-Lehner element) of the standard hyperspecial and the Klingen parahoric, respectively (see e.g. [Rös18, 151]); therefore we will neglect them in the sequel. For a set of lattices $S$ denote by $\langle S\rangle$ the closure under homotheties, i.e., $\langle S\rangle:=\\{p^{n}s\;|\;n\in\mathbb{Z},\;s\in S\\}$. Then:

* • $\Delta_{\mathrm{hs}}=\langle\mathbb{Z}_{p}^{4}\rangle$ and $c_{\mathrm{hs}}(\mathbb{Z}_{p}^{4})=0$.
* • $\Delta_{\mathrm{paramod}}=\langle\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p},\;\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}\rangle$ and $c_{\mathrm{paramod}}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})=-\frac{1}{2}$, $c_{\mathrm{paramod}}(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3})=0$.
* • $\Delta_{\mathrm{Klingen}}=\langle\mathbb{Z}_{p}^{4},\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p},\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}\rangle$ and $c_{\mathrm{Klingen}}(\mathbb{Z}_{p}^{4})=-1/4$, $c_{\mathrm{Klingen}}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})=0$, $c_{\mathrm{Klingen}}(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3})=1/4$.
* • $\Delta_{\mathrm{Siegel}}=\langle\mathbb{Z}_{p}^{4},\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2}\rangle$ and $c_{\mathrm{Siegel}}(\mathbb{Z}_{p}^{4})=-1/4$, $c_{\mathrm{Siegel}}(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2})=1/4$.
* • $\Delta_{\mathrm{Iwahori}}=\langle\mathbb{Z}_{p}^{4},\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p},\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2},\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}\rangle$ and $c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}^{4})=-1/4$, $c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})=-1/8$, $c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2})=1/8$, $c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3})=1/4$. The associated parahoric subgroups are * • hyperspecial: $\operatorname{GSp}_{4}(\mathbb{Z}_{p})$ * • paramodular: $\operatorname{GSp}_{4}(\mathbb{Q}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&p^{-1}\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$ * • Klingen: $\operatorname{GSp}_{4}(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$ * • Siegel: $\operatorname{GSp}_{4}(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ \mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$ * • Iwahori: $\operatorname{GSp}_{4}(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\ p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$ ###### Remark 2.68. Dualizing with respect to the symplectic form, we have $\displaystyle(\mathbb{Z}_{p}^{4})^{\vee}$ $\displaystyle=\mathbb{Z}_{p}^{4},$ $\displaystyle(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3})^{\vee}$ $\displaystyle=p^{-1}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p}),$ $\displaystyle(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})^{\vee}$ $\displaystyle=p^{-1}(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}),$ $\displaystyle(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2})^{\vee}$ $\displaystyle=p^{-1}(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2}).$ ##### Admissible set. We compute the admissible set in the way outlined in Remark 2.29. The cocharacter $\mu$ is $(1,1,0,0)$. 
We obtain $\displaystyle\mathrm{Adm}(\\{\mu\\})=\bigl{\\{}$ $\displaystyle\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}1\\\ 1\\\ 0\\\ 0\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(2\quad{-2}),\left(\begin{smallmatrix}1\\\ 0\\\ 1\\\ 0\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}1\\\ 0\\\ 1\\\ 0\end{smallmatrix}\right)\Bigr{)},$ $\displaystyle\Bigl{(}(1\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(1\quad 2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right)\Bigr{)},$ $\displaystyle\Bigl{(}(1\quad 2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right)\Bigr{)},$ $\displaystyle\Bigl{(}(1\quad{-1})(2\quad{-2}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(1\quad{-1}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)},$ $\displaystyle\Bigl{(}(1\quad{-2})(2\quad{-1}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(1\quad{-2}\quad{-1}\quad 2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)},$ $\displaystyle\Bigl{(}(2\quad{-2}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)}\bigr{\\}},$ or, in terms of Frobenii (cf. Construction 2.43) $\displaystyle\Bigl{\\{}$ $\displaystyle\left(\begin{smallmatrix}p&0&0&0\\\ 0&p&0&0\\\ 0&0&1&0\\\ 0&0&0&1\end{smallmatrix}\right),\left(\begin{smallmatrix}p&0&0&0\\\ 0&0&1&0\\\ 0&p&0&0\\\ 0&0&0&1\end{smallmatrix}\right),\left(\begin{smallmatrix}p&0&0&0\\\ 0&1&0&0\\\ 0&0&p&0\\\ 0&0&0&1\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&0&1\\\ 0&p&0&0\\\ 0&0&1&0\\\ p&0&0&0\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&1&0\\\ p&0&0&0\\\ 0&0&0&1\\\ 0&p&0&0\end{smallmatrix}\right),$ $\displaystyle\left(\begin{smallmatrix}0&1&0&0\\\ p&0&0&0\\\ 0&0&0&1\\\ 0&0&p&0\end{smallmatrix}\right),\left(\begin{smallmatrix}1&0&0&0\\\ 0&p&0&0\\\ 0&0&1&0\\\ 0&0&0&p\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&0&1\\\ 0&0&1&0\\\ 0&p&0&0\\\ p&0&0&0\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&0&1\\\ 0&1&0&0\\\ 0&0&p&0\\\ p&0&0&0\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&1&0\\\ 0&0&0&1\\\ p&0&0&0\\\ 0&p&0&0\end{smallmatrix}\right),$ $\displaystyle\left(\begin{smallmatrix}0&1&0&0\\\ 0&0&0&1\\\ p&0&0&0\\\ 0&0&p&0\end{smallmatrix}\right),\left(\begin{smallmatrix}1&0&0&0\\\ 0&0&1&0\\\ 0&p&0&0\\\ 0&0&0&p\end{smallmatrix}\right),\left(\begin{smallmatrix}1&0&0&0\\\ 0&1&0&0\\\ 0&0&p&0\\\ 0&0&0&p\end{smallmatrix}\right)\Bigr{\\}}.$ ##### Siegel level. From now on, we consider the Siegel level structure. Denote the Siegel parahoric by $K$ and the standard hyperspecial subgroup by $H$. Here $W_{K}$ is generated by $({-1}\quad{-2})(1\quad 2)$, while $W_{H}$ is generated by $W_{K}$ and $(2\quad{-2})$. Recalling Remark 2.31 (2), we note that one has a natural morphism $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}\to\overline{\mathcal{G}}_{H}\text{-}\mathrm{Zip}\times\overline{\mathcal{G}}_{H}\text{-}\mathrm{Zip}$. 
We have $\displaystyle\mathrm{KR}(K,\\{\mu\\})$ $\displaystyle=\Bigl{\\{}(\operatorname{id},\left(\begin{smallmatrix}1\\\ 1\\\ 0\\\ 0\end{smallmatrix}\right)),((1\quad{-2})(2\quad{-1}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)),((1\quad 2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right)),$ $\displaystyle(\operatorname{id},\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)),((1\quad{-2}\quad{-1}\quad 2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)),((1\quad 2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))\Bigr{\\}},$ $\displaystyle\mathrm{EKOR}(K,\\{\mu\\})$ $\displaystyle=\mathrm{KR}(K,\\{\mu\\})\cup\Bigl{\\{}(\operatorname{id},\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right)),\quad((2\quad{-2}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)),\quad((1\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))\Bigr{\\}}.$ In the following table, $w^{j}$ is the isomorphism type of the $\overline{\mathcal{G}}_{H}$-zip at position $j$. For $\mathcal{C}^{\bullet},\mathcal{D}^{\bullet}$ we give (indices of) basis vectors. “$\leftarrow$” means “same as in the column immediately to the left”. $\alpha_{0}\colon\bar{\mathbb{F}}_{p}^{4}\to\bar{\mathbb{F}}_{p}^{4}$ is the projection onto the plane spanned by the $1,2$-coordinates, $\alpha_{2}$ the projection onto the plane spanned by the $-2,-1$-coordinates. By $\alpha_{j,\mathcal{C}^{\bullet}/\mathcal{D}^{\bullet}}$ we denote the induced maps on $\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ and $\mathcal{D}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{D}^{\bullet}$, respectively. Each $\mathcal{C}^{j}\subseteq\mathcal{V}^{j}$ has a canonical complement in terms of standard basis vectors. Importantly, however, we will not always have a complementary _chain_ of linear subspaces. In any event, below we record, for each $\alpha_{j,\mathcal{C}^{\bullet}/\mathcal{D}^{\bullet}}$ interpreted as described, onto which subspace it is the projection. For instance, the projection onto $\emptyset$ is the zero map. So in that case $\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ (or $\mathcal{D}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{D}^{\bullet}$) is a chain of vector spaces with zero transition maps.
$w$ | KR-type | $\mathcal{C}^{0}$ | $\mathcal{D}^{0}$ | $\mathcal{C}^{2}$ | $\mathcal{D}^{2}$ | $w^{0}$ | $w^{2}$ | $\alpha_{2,\mathcal{C}^{\bullet}}$ | $\alpha_{0,\mathcal{C}^{\bullet}}$ | $\alpha_{2,\mathcal{D}^{\bullet}}$ | $\alpha_{0,\mathcal{D}^{\bullet}}$
---|---|---|---|---|---|---|---|---|---|---|---
$(\operatorname{id},\left(\begin{smallmatrix}1\\\ 1\\\ 0\\\ 0\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{-2,-1\\}$ | $(-2\quad 1)({-1}\quad 2)$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$((1\quad{-2})(2\quad{-1}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\operatorname{id}$ | $\leftarrow$ | $\emptyset$ | $\emptyset$ | $\emptyset$ | $\emptyset$
$((1\quad 2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\\{-2,-1\\}$ | $(-2\quad 2)$ | $\leftarrow$ | $\\{1\\}$ | $\\{-1\\}$ | $\\{2\\}$ | $\\{-2\\}$
$(\operatorname{id},\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{-2,-1\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{1,2\\}$ | $(-2\quad 1)({-1}\quad 2)$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$((1\quad{-2}\quad{-1}\quad 2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{-2,1\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,1\\}$ | $(-2\quad 2)$ | $\leftarrow$ | $\\{2\\}$ | $\\{-2\\}$ | $\\{1\\}$ | $\\{-1\\}$
$((1\quad 2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\operatorname{id}$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$(\operatorname{id},\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $((1\quad 2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\\{-1,2\\}$ | $\\{-1,2\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $(-2\quad 1)({-1}\quad 2)$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$((2\quad{-2}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $((1\quad{-2}\quad{-1}\quad 2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\\{-1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,1\\}$ | $(-2\quad{-1}\quad 2\quad 1)$ | $\leftarrow$ | $\\{1\\}$ | $\\{-1\\}$ | $\\{1\\}$ | $\\{-1\\}$
$((1\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $((1\quad 2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\\{1,2\\}$ | $\\{-1,2\\}$ | $\\{-2,1\\}$ | $\\{-2,-1\\}$ | $(-2\quad{-1}\quad 2\quad 1)$ | $\leftarrow$ | $\\{2\\}$ | $\\{-2\\}$ | $\\{2\\}$ | $\\{-2\\}$

###### Observations 2.69.

* • We always have $w^{0}=w^{2}$. This is explained by the fact that the Ekedahl-Oort stratification in this case agrees with the Newton stratification (and isogenous abelian varieties by definition lie in the same Newton stratum).
* • Consider the Kottwitz-Rapoport strata containing more than one EKOR stratum (i.e., containing two EKOR strata). Then we can distinguish among the EKOR strata by looking at the Ekedahl-Oort stratum. In other words, the EKOR stratification is in this case the coarsest common refinement of the Kottwitz-Rapoport and Ekedahl-Oort stratifications.

## References

* [1] M.
Kisin and G. Pappas “Integral models of Shimura varieties with parahoric level structure” In _Publ. Math., Inst. Hautes Étud. Sci._ 128 Springer, Berlin/Heidelberg; Institut des Hautes Études Scientifiques, Bures-sur-Yvette, 2018, pp. 121–218 * [Ahs11] Tobias Ahsendorf “$\mathcal{O}$-displays and $\pi$-divisible formal $\mathcal{O}$-modules”, 2011 URL: http://nbn-resolving.de/urn:nbn:de:hbz:361-24713520 * [AT08] Alexander Arhangel’skii and Mikhail Tkachenko “Topological groups and related structures” Hackensack, NJ: World Scientific; Paris: Atlantis Press, 2008 * [BB05] Anders Björner and Francesco Brenti “Combinatorics of Coxeter groups” 231, Graduate Texts in Mathematics Springer, New York, 2005, pp. xiv+363 * [BG18] Alessandra Bertapelle and Cristian D. González-Avilés “On the perfection of schemes” In _Expo. Math._ 36.2 Elsevier, Munich, 2018, pp. 197–220 * [Bor69] Armand Borel “Introduction aux groupes arithmétiques”, Publications de l’Institut de Mathématique de l’Université de Strasbourg, XV. Actualités Scientifiques et Industrielles, No. 1341 Hermann, Paris, 1969 * [BT84] François Bruhat and Jacques Tits “Groupes réductifs sur un corps local. II. Schémas en groupes. Existence d’une donnée radicielle valuée.” In _Publ. Math., Inst. Hautes Étud. Sci._ 60 Springer, Berlin/Heidelberg; Institut des Hautes Études Scientifiques, Bures-sur-Yvette, 1984, pp. 1–194 * [BT84a] François Bruhat and Jacques Tits “Schémas en groupes et immeubles des groupes classiques sur un corps local” In _Bull. Soc. Math. Fr._ 112 Société Mathématique de France (SMF), Paris, 1984, pp. 259–301 DOI: 10.24033/bsmf.2006 * [Car93] Roger W. Carter “Finite groups of Lie type” Conjugacy classes and complex characters, Reprint of the 1985 original, A Wiley-Interscience Publication, Wiley Classics Library John Wiley & Sons, Ltd., Chichester, 1993, pp. xii+544 * [Dan36] D. Dantzig “Zur topologischen Algebra. III: Brouwersche und Cantorsche Gruppen” In _Compos. Math._ 3 Cambridge University Press, Cambridge; London Mathematical Society, London, 1936, pp. 408–426 * [Del71] Pierre Deligne “Travaux de Shimura” In _Séminaire Bourbaki, 23ème année (1970/71), Exp. No. 389_ 244, Lecture Notes in Math. Springer, Berlin, 1971, pp. 123–165 * [DG80] Michel Demazure and Peter Gabriel “Introduction to algebraic geometry and algebraic groups” Translated from the French by J. Bell 39, North-Holland Mathematics Studies North-Holland Publishing Co., Amsterdam-New York, 1980, pp. xiv+357 * [EGA2] A. Grothendieck “Éléments de géométrie algébrique. II. Étude globale élémentaire de quelques classes de morphismes” In _Inst. Hautes Études Sci. Publ. Math._ , 1961 URL: http://www.numdam.org/item?id=PMIHES_1961__8__222_0 * [Gör03] Ulrich Görtz “On the flatness of local models for the symplectic group” In _Adv. Math._ 176.1 Elsevier (Academic Press), San Diego, CA, 2003, pp. 89–115 * [Gro74] Alexandre Grothendieck “Groupes de Barsotti-Tate et cristaux de Dieudonné” Séminaire de Mathématiques Supérieures, No. 45 (Été, 1970) Les Presses de l’Université de Montréal, Montreal, Que., 1974 * [Hai05] Thomas J. Haines “Introduction to Shimura varieties with bad reduction of parahoric type” In _Harmonic analysis, the trace formula, and Shimura varieties. Proceedings of the Clay Mathematics Institute 2003 summer school, Toronto, Canada, June 2–27, 2003_ Providence, RI: American Mathematical Society (AMS), 2005, pp. 583–642 * [Haz78] Michiel Hazewinkel “Formal groups and applications”, Pure and Applied Mathematics, 78. 
New York-San Francisco-London: Academic Press. XXII, 573 p., 1978 * [Hes20] Jens Hesse “Central leaves and EKOR strata on Shimura varieties with parahoric reduction”, 2020 URL: http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-115430 * [Hes20a] Jens Hesse “Central leaves on Shimura varieties with parahoric reduction” Preprint, 2020 arXiv:2003.03175 [math.AG] * [HR08] Thomas J. Haines and Michael Rapoport “Appendix: On parahoric subgroups” In _Advances in Mathematics_ 219.1, 2008, pp. 188–198 DOI: https://doi.org/10.1016/j.aim.2008.04.020 * [HR17] X. He and M. Rapoport “Stratifications in the reduction of Shimura varieties.” In _Manuscr. Math._ 152.3-4 Springer, Berlin/Heidelberg, 2017, pp. 317–343 DOI: 10.1007/s00229-016-0863-x * [Ill85] Luc Illusie “Déformations de groupes de Barsotti-Tate (d’après A. Grothendieck)” Seminar on arithmetic bundles: the Mordell conjecture (Paris, 1983/84) In _Astérisque_ , 1985, pp. 151–198 * [Kis10] Mark Kisin “Integral models for Shimura varieties of abelian type” In _J. Amer. Math. Soc._ 23.4, 2010, pp. 967–1012 DOI: 10.1090/S0894-0347-10-00667-3 * [Kot92] Robert E. Kottwitz “Points on some Shimura varieties over finite fields” In _J. Amer. Math. Soc._ 5.2, 1992, pp. 373–444 DOI: 10.2307/2152772 * [KP15] M. Kisin and G. Pappas “Integral models of Shimura varieties with parahoric level structure” Preprint, 2015 arXiv:1512.01149v2 [math.AG] * [KR00] R. Kottwitz and M. Rapoport “Minuscule alcoves for $\mathrm{GL}_{n}$ and $\mathrm{GSp}_{2n}$” In _Manuscripta Math._ 102.4, 2000, pp. 403–428 DOI: 10.1007/s002290070034 * [Lan00] Erasmus Landvogt “Some functorial properties of the Bruhat-Tits building” In _J. Reine Angew. Math._ 518 De Gruyter, Berlin, 2000, pp. 213–241 DOI: 10.1515/crll.2000.006 * [Lan96] Erasmus Landvogt “A compactification of the Bruhat-Tits building” 1619, Lecture Notes in Mathematics Springer-Verlag, Berlin, 1996, pp. viii+152 DOI: 10.1007/BFb0094594 * [Lau13] Eike Lau “Smoothness of the truncated display functor.” In _J. Am. Math. Soc._ 26.1 American Mathematical Society (AMS), Providence, RI, 2013, pp. 129–165 * [Lau14] Eike Lau “Relations between Dieudonné displays and crystalline Dieudonné theory” In _Algebra Number Theory_ 8.9, 2014, pp. 2201–2262 DOI: 10.2140/ant.2014.8.2201 * [Mil05] J.. Milne “Introduction to Shimura varieties.” In _Harmonic analysis, the trace formula, and Shimura varieties. Proceedings of the Clay Mathematics Institute 2003 summer school, Toronto, Canada, June 2–27, 2003_ Providence, RI: American Mathematical Society (AMS), 2005, pp. 265–378 * [Mor93] Lawrence Morris “Tamely ramified intertwining algebras” In _Invent. Math._ 114.1, 1993, pp. 1–54 DOI: 10.1007/BF01232662 * [MW04] Ben Moonen and Torsten Wedhorn “Discrete invariants of varieties in positive characteristic” In _Int. Math. Res. Not._ , 2004, pp. 3855–3903 DOI: 10.1155/S1073792804141263 * [Pin90] Richard Pink “Arithmetical compactification of mixed Shimura varieties” 209, Bonner Mathematische Schriften [Bonn Mathematical Publications] Universität Bonn, Mathematisches Institut, 1990 * [PRS13] Georgios Pappas, Michael Rapoport and Brian Smithling “Local models of Shimura varieties, I. Geometry and combinatorics” In _Handbook of moduli. Vol. III_ 26, Adv. Lect. Math. (ALM) Int. Press, Somerville, MA, 2013, pp. 135–217 * [PWZ11] Richard Pink, Torsten Wedhorn and Paul Ziegler “Algebraic zip data” In _Doc. Math._ 16 Deutsche Mathematiker-Vereinigung, Berlin, 2011, pp. 
253–300 * [PWZ15] Richard Pink, Torsten Wedhorn and Paul Ziegler “$F$-zips with additional structure” In _Pac. J. Math._ 274.1 Mathematical Sciences Publishers (MSP), Berkeley, CA; Pacific Journal of Mathematics c/o University of California, Berkeley, CA, 2015, pp. 183–236 DOI: 10.2140/pjm.2015.274.183 * [PZ13] Georgios Pappas and Xinwen Zhu “Local models of Shimura varieties and a conjecture of Kottwitz” In _Invent. Math._ 194.1 Springer, Berlin/Heidelberg, 2013, pp. 147–254 DOI: 10.1007/s00222-012-0442-z * [Rap05] Michael Rapoport “A guide to the reduction modulo $p$ of Shimura varieties.” In _Formes automorphes (I). Actes du Semestre du Centre Émile Borel, Paris, France, 17 février au 11 juillet 2000_ Paris: Société Mathématique de France, 2005, pp. 271–318 * [Rös18] Mirko Rösner “Parahoric restriction for $\mathrm{GSp}(4)$.” In _Algebr. Represent. Theory_ 21.1 Springer Netherlands, Dordrecht, 2018, pp. 145–161 * [RZ96] M. Rapoport and Th. Zink “Period spaces for $p$-divisible groups” 141, Annals of Mathematics Studies Princeton University Press, Princeton, NJ, 1996 DOI: 10.1515/9781400882601 * [Ser97] Jean-Pierre Serre “Galois cohomology” Translated from the French by Patrick Ion Springer-Verlag, Berlin, 1997, pp. x+210 DOI: 10.1007/978-3-642-59141-9 * [Stacks] The Stacks Project Authors “Stacks Project”, http://stacks.math.columbia.edu, 2020 * [SYZ19] Xu Shen, Chia-Fu Yu and Chao Zhang “EKOR strata for Shimura varieties with parahoric level structure” Preprint, 2019 arXiv:1910.07785v1 [math.AG] * [Tit79] J. Tits “Reductive groups over local fields” In _Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1_, Proc. Sympos. Pure Math., XXXIII Amer. Math. Soc., Providence, R.I., 1979, pp. 29–69 * [VW13] Eva Viehmann and Torsten Wedhorn “Ekedahl-Oort and Newton strata for Shimura varieties of PEL type” In _Math. Ann._ 356.4, 2013, pp. 1493–1550 DOI: 10.1007/s00208-012-0892-z * [Wor13] D. Wortmann “The $\mu$-ordinary locus for Shimura varieties of Hodge type” Preprint, 2013 arXiv:1310.6444v1 [math.AG] * [Yu08] Chia-Fu Yu “Irreducibility and $p$-adic monodromies on the Siegel moduli spaces” In _Adv. Math._ 218.4 Elsevier (Academic Press), San Diego, CA, 2008, pp. 1253–1285 * [Zha15] C. Zhang “Stratifications and foliations for good reductions of Shimura varieties of Hodge type” Preprint, 2015 arXiv:1512.08102v1 [math.AG] * [Zha18] Chao Zhang “Ekedahl-Oort strata for good reductions of Shimura varieties of Hodge type” In _Canad. J. Math._ 70.2, 2018, pp. 451–480 DOI: 10.4153/CJM-2017-020-5 * [Zin01] Thomas Zink “A Dieudonné theory for $p$-divisible groups.” In _Class field theory – its centenary and prospect. Proceedings of the 7th MSJ International Research Institute of the Mathematical Society of Japan, Tokyo, Japan, June 3–12, 1998_ Tokyo: Mathematical Society of Japan, 2001, pp. 139–160 * [Zin02] Thomas Zink “The display of a formal $p$-divisible group.” In _Cohomologies $p$-adiques et applications arithmétiques (I)_ Paris: Société Mathématique de France, 2002, pp. 127–248
# On the Co-Design of AV-Enabled Mobility Systems

Gioele Zardini1,3, Nicolas Lanzetti2,3, Mauro Salazar3,4, Andrea Censi1, Emilio Frazzoli1, and Marco Pavone3

1Institute for Dynamic Systems and Control, ETH Zürich. 2Automatic Control Laboratory, ETH Zürich. 3Department of Aeronautics and Astronautics, Stanford University. 4Control Systems Technology Group, Eindhoven University of Technology. A preliminary version of this paper was presented at the 99th Annual Meeting of the Transportation Research Board [1]. This research was supported by the National Science Foundation under CAREER Award CMMI-1454737, the Toyota Research Institute (TRI), and ETH Zürich. This article solely reflects the opinions and conclusions of its authors and not NSF, TRI, or any other entity.

###### Abstract

The design of autonomous vehicles (AVs) and the design of AV-enabled mobility systems are closely coupled. Indeed, knowledge about the intended service of AVs would impact their design and deployment process, whilst insights about their technological development could significantly affect transportation management decisions. This calls for tools to study such a coupling and co-design AVs and AV-enabled mobility systems in terms of different objectives. In this paper, we instantiate a framework to address such co-design problems. In particular, we leverage the recently developed theory of co-design to frame and solve the problem of designing and deploying an intermodal Autonomous Mobility-on-Demand system, whereby AVs service travel demands jointly with public transit, in terms of fleet sizing, vehicle autonomy, and public transit service frequency. Our framework is modular and compositional, allowing one to describe the design problem as the interconnection of its individual components and to tackle it from a system-level perspective. To showcase our methodology, we present a real-world case study for Washington D.C., USA. Our work suggests that it is possible to create user-friendly optimization tools to systematically assess costs and benefits of interventions, and that such analytical techniques might gain a momentous role in policy-making in the future.

## I Introduction

Arguably, the current design process for AVs largely suffers from the lack of clear, specific requirements in terms of the service such vehicles will be providing. Yet, knowledge about their intended service (e.g., last-mile versus point-to-point travel) might dramatically impact how the AVs are designed, and, critically, significantly ease their development process. For example, if for a given city we knew that for an effective on-demand mobility system autonomous cars only need to drive up to 25 mph and only on relatively easy roads, their design would be greatly simplified and their deployment could certainly be accelerated. At the same time, from the system-level perspective of transportation management, knowledge about the trajectory of technology development for AVs would certainly impact decisions on infrastructure investments and provision of service. In other words, the design of the AVs and the design of a mobility system leveraging AVs are intimately coupled. This calls for methods to reason about such a coupling, and in particular to _co-design_ the AVs and the associated AV-enabled mobility system. A key requirement in this context is the ability to account for a range of heterogeneous objectives that are often not directly comparable (consider, for instance, travel time and emissions).
Accordingly, the goal of this paper is to lay the foundations for a framework through which one can co-design future AV-enabled mobility systems. Specifically, we show how one can leverage the recently developed mathematical theory of co-design [2, 3, 4], which provides a general methodology to co- design complex systems in a modular and compositional fashion. This tool delivers the set of rational design solutions lying on the Pareto front, allowing one to reason about costs and benefits of the individual design options. The framework is instantiated in the setting of co-designing intermodal AMoD systems [5], whereby fleets of self-driving vehicles provide on-demand mobility jointly with public transit. Aspects subject to co-design include fleet size, AV-specific characteristics, and public transit service frequency. ### I-A Literature Review Our work lies at the interface of the design of urban public transportation services and the design of AMoD systems. The first research stream is reviewed in [6, 7], and comprises _strategic_ long-term infrastructure modifications and _operational_ short-term scheduling. The joint design of traffic network topology and control infrastructure has been presented in [8]. Public transportation scheduling has been solved jointly with the design of the transit network in a passengers’ and operators’ cost-optimal fashion in [9], using demand-driven approaches in [10], and in an energy-efficient way in [11]. However, these works only focus on the public transit system and do not consider its joint design with an AMoD system. The research on the design of AMoD systems is reviewed in [12] and mainly pertains their fleet sizing. In this regard, studies range from simulation-based approaches [13, 14, 15, 16] to analytical methods [17]. In [18], the authors jointly design the fleet size and the charging infrastructure, and formulate the arising design problem as a mixed integer linear program. The authors of [19] solve the fleet sizing problem together with the vehicle allocation problem. Finally, [20] co-designs the AMoD fleet size and its composition. More recently, the joint design of multimodal transit networks and AMoD systems was formulated in [21] as a bilevel optimization problem and solved with heuristics. Overall, the problem- specific structure of existing design methods for AMoD systems is not amenable to a modular and compositional problem formulation. Moreover, previous work does not capture important aspects of AV-enabled mobility systems, such as other transportation modes and AV-specific design parameters (e.g., the level of autonomy). ### I-B Statement of Contribution In this paper we lay the foundations for the systematic study of the design of AV-enabled mobility systems. Specifically, we leverage the mathematical theory of co-design [2] to devise a framework to study the design of intermodal AMoD (I-AMoD) systems in terms of fleet characteristics and public transit service, enabling the computation of the _rational_ solutions lying on the Pareto front of minimal travel time, transportation costs, and emissions. Our framework allows one to structure the design problem in a modular way, in which each different transportation option can be “plugged in” in a larger model. Each model has minimal assumptions: Rather than properties such as linearity and convexity, we ask for very general monotonicity assumptions. For example, we assume that the cost of automation increases monotonically with the speed achievable by the AV. 
We are able to obtain the full Pareto front of _rational_ solutions, or, given policies, to weigh incomparable costs (such as travel time and emissions) and to present actionable information to the stakeholders of the mobility ecosystem. We showcase our methodology through a real-world case study of Washington D.C., USA. We show how, given the model, we can easily formulate and answer several questions regarding the introduction of new technologies and investigate possible infrastructure interventions. ### I-C Organization The remainder of this paper is structured as follows: Section II reviews the mathematical theory of co-design. Section III presents the co-design problem for AV-enabled mobility systems. We showcase our approach with real-world case studies for Washington D.C., USA, in Section IV. Section V concludes the paper with a discussion and an overview on future research directions. ## II Background This paper builds on the mathematical theory of co-design, presented in [2]. In this section, we present a review of the main contents needed for this work. ### II-A Orders We will use basic facts from order theory, which we review in the following. ###### Definition II.1 (Poset). A partially ordered set (poset) is a tuple $\langle\mathcal{P},\preceq_{\mathcal{P}}\rangle$, where $\mathcal{P}$ is a set and $\preceq_{\mathcal{P}}$ is a partial order, defined as a reflexive, transitive, and antisymmetric relation. Given a poset, we can formalize the idea of “Pareto front” through antichains. ###### Definition II.2 (Antichains). A subset $S\subseteq\mathcal{P}$ is an antichain iff no elements are comparable: For $x,y\in S$, $x\preceq y$ implies $x=y$. We denote by $\textsf{A}\mathcal{P}$ the set of all antichains in $\mathcal{P}$. ###### Definition II.3 (Directed set). A subset $S\subseteq\mathcal{P}$ is directed if each pair of elements in $S$ has an upper bound: For all $a,b\in S$, there exists a $c\in S$ such that $a\preceq c$ and $b\preceq c$. ###### Definition II.4 (Completeness). A poset is a complete partial order (CPO) if each of its directed subsets has a supremum and a least element. For instance, the poset $\langle\mathbb{R}_{+},\leq\rangle$, with $\mathbb{R}_{+}\coloneqq\\{x\in\mathbb{R}\,|\,x\geq 0\\}$, is not complete, as its directed subset $\mathbb{R}_{+}\subseteq\mathbb{R}_{+}$ does not have an upper bound (and therefore a supremum). Nonetheless, we can make it complete by artificially adding a top element $\top$, i.e., by defining $\langle\overline{\mathbb{R}}_{+},\leq\rangle$ with $\overline{\mathbb{R}}_{+}\coloneqq\mathbb{R}_{+}\cup\\{\top\\}$ and $a\leq\top$ for all $a\in\mathbb{R}_{+}$. Similarly, we can complete $\mathbb{N}$ to $\overline{\mathbb{N}}$. In this setting, Scott-continuous maps will play a key role. Intuitively, Scott-continuity can be understood as a stronger notion of monotonicity. ###### Definition II.5 (Scott continuity). A map $f:\mathcal{P}\rightarrow\mathcal{Q}$ between two posets $\langle\mathcal{P},\preceq_{\mathcal{P}}\rangle$ and $\langle\mathcal{Q},\preceq_{\mathcal{Q}}\rangle$ is Scott-continuous iff for each directed set $D\subseteq\mathcal{P}$ the image $f(D)$ is directed and $\sup f(D)=f(\sup D)$. ### II-B Mathematical Theory of Co-Design We start by presenting design problems with implementation (DPIs), which can then be composed and interconnected to form a co-design problem with implementation (CDPI). ###### Definition II.6 (DPI). 
A DPI is a tuple $\langle\mathcal{F},\mathcal{R},\mathcal{I},\textsf{{exe}},\textsf{{eva}}\rangle$:

* • $\mathcal{F}$ is a poset, called the functionality space;
* • $\mathcal{R}$ is a poset, called the resource space;
* • $\mathcal{I}$ is a set, called the implementation space;
* • the map $\textsf{{exe}}:\mathcal{I}\to\mathcal{F}$ maps an implementation to the functionality it provides;
* • the map $\textsf{{eva}}:\mathcal{I}\to\mathcal{R}$ maps an implementation to the resources it requires.

Given a DPI, we can define a map which, given a functionality $\textsf{{f}}\in\mathcal{F}$, returns all the non-comparable resources (i.e., the antichain) which provide f.

###### Definition II.7 (Functionality to resources map).

Given a DPI $\langle\mathcal{F},\mathcal{R},\mathcal{I},\textsf{{exe}},\textsf{{eva}}\rangle$, define the map

$h:\mathcal{F}\to\textsf{{A}}\mathcal{R},\qquad\textsf{{f}}\mapsto\min_{\preceq_{\mathcal{R}}}\\{\textsf{{eva}}(\textsf{{i}})\,|\,\textsf{{i}}\in\mathcal{I}\wedge\textsf{{f}}\preceq\textsf{{exe}}(\textsf{{i}})\\}.$ (1)

In particular, if a functionality is infeasible, then $h(\textsf{{f}})=\emptyset$. We now turn our attention to “monotone” DPIs.

###### Definition II.8 (Monotone DPI).

We say a DPI $\langle\mathcal{F},\mathcal{R},\mathcal{I},\textsf{{exe}},\textsf{{eva}}\rangle$ is monotone if:

1. 1. The posets $\mathcal{F}$ and $\mathcal{R}$ are CPOs.
2. 2. The map $h$ (see Definition II.7) is Scott-continuous.

Individual DPIs can be composed in series (i.e., the functionality of a DPI is the resource of a second DPI) and in parallel (i.e., two DPIs share the same resource or functionality) to obtain a CDPI. Notably, such compositions preserve monotonicity and, thus, all related algorithmic properties. For further details we refer to [2].

## III Co-Design of AV-enabled Mobility Systems

### III-A Intermodal AMoD Framework

#### III-A1 Multi-Commodity Flow Model

The transportation system and its different modes are modeled using the digraph $\mathcal{G}=\left(\mathcal{V},\mathcal{A}\right)$, shown in Figure 1.

Figure 1: The I-AMoD network consists of a road, a walking, and a public transportation digraph. The coloured circles represent stops or intersections and the black arrows denote road links, pedestrian pathways, or public transit arcs. Dashed lines connect nodes which are close geographically, while grey arrows denote the mode-switching arcs connecting them. (We thank Ms. Sonia Monti for the illustration.)

The graph is described through a set of nodes $\mathcal{V}$ and a set of arcs $\mathcal{A}\subseteq\mathcal{V}\times\mathcal{V}$. Specifically, it contains a road network layer $\mathcal{G}_{\mathrm{R}}=\left(\mathcal{V}_{\mathrm{R}},\mathcal{A}_{\mathrm{R}}\right)$, a public transportation layer $\mathcal{G}_{\mathrm{P}}=\left(\mathcal{V}_{\mathrm{P}},\mathcal{A}_{\mathrm{P}}\right)$, and a walking layer $\mathcal{G}_{\mathrm{W}}=\left(\mathcal{V}_{\mathrm{W}},\mathcal{A}_{\mathrm{W}}\right)$. The road network is characterized through intersections $i\in\mathcal{V}_{\mathrm{R}}$ and road segments $(i,j)\in\mathcal{A}_{\mathrm{R}}$. Similarly, public transportation lines are modeled through station nodes $i\in\mathcal{V}_{\mathrm{P}}$ and line segments $(i,j)\in\mathcal{A}_{\mathrm{P}}$. The walking network contains walkable streets $(i,j)\in\mathcal{A}_{\mathrm{W}}$, connecting intersections $i\in\mathcal{V}_{\mathrm{W}}$.
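To make the layered structure concrete, the following is a minimal sketch of how such a digraph could be assembled with the networkx Python library, including the mode-switching arcs shown in Figure 1 and formalized next; all node names and arcs are made-up toy data, not part of the model.

```python
import networkx as nx

# Toy instance of the layered I-AMoD digraph G = (V, A); node names are
# illustrative placeholders ("r" = road, "w" = walking, "p" = public transit).
G = nx.DiGraph()

# Road layer G_R: intersections connected by road segments.
G.add_edges_from([("r1", "r2"), ("r2", "r3")], layer="road")

# Walking layer G_W: walkable streets (both directions).
G.add_edges_from([("w1", "w2"), ("w2", "w1")], layer="walk")

# Public transportation layer G_P: stations connected by line segments.
G.add_edges_from([("p1", "p2")], layer="transit")

# Mode-switching arcs A_C: road and transit layers connect to walking only.
G.add_edges_from(
    [("w1", "r1"), ("r3", "w2"), ("w2", "p1"), ("p2", "w1")], layer="switch"
)

# Sanity check: the layered graph is strongly connected, matching the
# structural assumption made on G below.
assert nx.is_strongly_connected(G)
```

Each arc can then carry attributes such as travel time, length, or capacity, which is all the multi-commodity flow model needs.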
Our model allows mode-switching arcs $\mathcal{A}_{\mathrm{C}}\subseteq\mathcal{V}_{\mathrm{R}}\times\mathcal{V}_{\mathrm{W}}\cup\mathcal{V}_{\mathrm{W}}\times\mathcal{V}_{\mathrm{R}}\cup\mathcal{V}_{\mathrm{P}}\times\mathcal{V}_{\mathrm{W}}\cup\mathcal{V}_{\mathrm{W}}\times\mathcal{V}_{\mathrm{P}}$, connecting the road and the public transportation layers to the walking layer. Consequently, $\mathcal{V}=\mathcal{V}_{\mathrm{W}}\cup\mathcal{V}_{\mathrm{R}}\cup\mathcal{V}_{\mathrm{P}}$ and $\mathcal{A}=\mathcal{A}_{\mathrm{W}}\cup\mathcal{A}_{\mathrm{R}}\cup\mathcal{A}_{\mathrm{P}}\cup\mathcal{A}_{\mathrm{C}}$. Consistently with the structural properties of road and walking networks in urban environments, we assume the graph $\mathcal{G}$ to be strongly connected. We represent customer movements by means of travel requests. A travel request refers to a customer flow starting its trip at a node $o\in\mathcal{V}$ and ending it at a node $d\in\mathcal{V}$.

###### Definition III.1 (Travel request).

A travel request $\rho$ is a triple $(o,d,\alpha)\in\mathcal{V}\times\mathcal{V}\times\mathbb{R}_{+}$, described by an origin node $o\in\mathcal{V}$, a destination node $d\in\mathcal{V}$, and the request rate $\alpha>0$, namely, the number of customers who want to travel from $o$ to $d$ per unit time.

To ensure that a customer is not forced to use a given transportation mode, we assume all requests to lie on the walking digraph, i.e., $o_{m},d_{m}\in\mathcal{V}_{\mathrm{W}}$ for all $m\in\mathcal{M}\coloneqq\\{1,\ldots,M\\}$. The flow $f_{m}\left(i,j\right)\geq 0$ represents the number of customers per unit time traversing arc $(i,j)\in\mathcal{A}$ and satisfying a travel request $m$. Furthermore, $f_{0}\left(i,j\right)\geq 0$ denotes the flow of empty AVs on road arcs $(i,j)\in\mathcal{A}_{\mathrm{R}}$. This accounts for rebalancing flows of AVs between a customer’s drop-off and the next customer’s pick-up. Assuming AVs to carry one customer at a time, the flows satisfy

$\sum_{i:(i,j)\in\mathcal{A}}f_{m}\left(i,j\right)+\mathds{1}_{j=o_{m}}\cdot\alpha_{m}=\sum_{k:(j,k)\in\mathcal{A}}f_{m}\left(j,k\right)+\mathds{1}_{j=d_{m}}\cdot\alpha_{m}\quad\forall m\in\mathcal{M},\,j\in\mathcal{V}$ (2a)

$\sum_{i:(i,j)\in\mathcal{A}_{\mathrm{R}}}f_{\mathrm{tot}}\left(i,j\right)=\sum_{k:(j,k)\in\mathcal{A}_{\mathrm{R}}}f_{\mathrm{tot}}\left(j,k\right)\quad\forall j\in\mathcal{V}_{\mathrm{R}},$ (2b)

where $\mathds{1}_{j=x}$ denotes the boolean indicator function and $f_{\mathrm{tot}}\left(i,j\right)\coloneqq f_{0}\left(i,j\right)+\sum_{m\in\mathcal{M}}f_{m}\left(i,j\right)$. Specifically, (2a) guarantees flow conservation for every transportation demand, and (2b) preserves flow conservation for AVs at every road node. Combining the conservation of customers (2a) with the conservation of AVs (2b) guarantees rebalancing AVs to match the demand.

### III-B Travel Time and Travel Speed

The variable $t_{ij}$ denotes the time needed to traverse an arc $(i,j)$ of length $s_{ij}$. We assume a constant walking speed on walking arcs and infer travel times on public transportation arcs from the public transit schedules.
Assuming that the public transportation system at node $j$ operates with frequency $\varphi_{j}$, switching from a pedestrian vertex $i\in\mathcal{V}_{\mathrm{W}}$ to a public transit station $j\in\mathcal{V}_{\mathrm{P}}$ takes, on average,

$t_{ij}=t_{\mathrm{WS}}+0.5\cdot 1/\varphi_{j}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{C}},\,i\in\mathcal{V}_{\mathrm{W}},\,j\in\mathcal{V}_{\mathrm{P}},$ (3)

where $t_{\mathrm{WS}}$ is a constant sidewalk-to-station travel time. We assume that the average waiting time for AMoD vehicles is $t_{\mathrm{WR}}$, and that switching from the road graph and the public transit graph to the walking graph takes the transfer times $t_{\mathrm{RW}}$ and $t_{\mathrm{SW}}$, respectively. While each road arc $(i,j)\in\mathcal{A}_{\mathrm{R}}$ is characterized by a speed limit $v_{\mathrm{L,V},ij}$, AV safety protocols impose a maximum achievable velocity $v_{\mathrm{V,a}}$. In order to prevent too slow and therefore dangerous driving behaviours, we only consider road arcs through which the AVs can drive at least at a fraction $\beta$ of the speed limit: Arc $(i,j)\in\mathcal{A}_{\mathrm{R}}$ is kept in the road network iff

$v_{\mathrm{V,a}}\geq\beta\cdot v_{\mathrm{L,V},ij},$ (4)

where $\beta\in(0,1]$. We set the velocity of all arcs fulfilling condition (4) to $v_{\mathrm{V},ij}=\min\\{v_{\mathrm{V,a}},v_{\mathrm{L,V},ij}\\}$ and compute the travel time to traverse them as

$t_{ij}=s_{ij}/v_{\mathrm{V},ij}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{R}}.$ (5)

### III-C Road Congestion

We capture congestion effects with a threshold model. The total flow on each road arc $(i,j)\in\mathcal{A}_{\mathrm{R}}$, given by the sum of the AVs flow $f_{\mathrm{tot}}\left(i,j\right)$ and the baseline usage $u_{ij}$ (e.g., private vehicles), must remain below the nominal capacity $c_{ij}$ of the arc:

$f_{\mathrm{tot}}\left(i,j\right)+u_{ij}\leq c_{ij}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{R}}.$ (6)

### III-D Energy Consumption

We compute the energy consumption of AVs for each road link considering an urban driving cycle, scaled so that the average speed $v_{\mathrm{avg,cycle}}$ matches the free-flow speed on the link. The energy consumption is then scaled as

$e_{ij}=e_{\mathrm{cycle}}\cdot s_{ij}/s_{\mathrm{cycle}}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{R}}.$ (7)

For the public transportation system, we assume a constant energy consumption per unit time. This approximation is reasonable in urban environments, as the operation of the public transportation system is independent of the number of customers serviced, and its energy consumption is therefore customer-invariant.

### III-E Fleet Size

We consider a fleet of $n_{\mathrm{V,max}}$ AVs. In a time-invariant setting, the number of vehicles on arc $(i,j)\in\mathcal{A}_{\mathrm{R}}$ is expressed as the product of the total vehicle flow on the arc and its travel time. Therefore, we constrain the number of used AVs as

$n_{\mathrm{V,u}}=\sum_{(i,j)\in\mathcal{A}_{\mathrm{R}}}f_{\mathrm{tot}}\left(i,j\right)\cdot t_{ij}\leq n_{\mathrm{V,max}}.$ (8)

### III-F Discussion

A few comments are in order. First, we assume the demand to be time-invariant and allow flows to have fractional values. This assumption is in line with the mesoscopic and system-level planning perspective of our study. Second, we model congestion effects using a threshold model. This approach can be interpreted as a municipality preventing AVs from exceeding the critical flow density on road arcs. AVs can therefore be assumed to travel at free-flow speed [22].
This assumption is realistic for an initial low penetration of AMoD systems in the transportation market, especially when the AV fleet is of limited size. Finally, we allow AVs to transport one customer at a time [23].

### III-G Co-Design Framework

We integrate the I-AMoD framework presented in Section III-A in the co-design formalism, allowing one to decompose the CDPI of a complex system into the DPIs of its individual components in a modular, compositional, and systematic fashion. We aim at computing the antichain of resources, quantified in terms of costs, average travel time per trip, and emissions required to provide the mobility service to a set of customers. In order to achieve this, we decompose the CDPI into the DPIs of the individual AVs (Section III-G1), of the AV fleet (Section III-G3), and of the public transportation system (Section III-G2). The interconnection of the presented DPIs is presented in Section III-G4.

#### III-G1 The Autonomous Vehicle Design Problem

The AV DPI consists of selecting the maximal speed of the AVs. Under the rationale that driving safely at higher speed requires more advanced sensing and algorithmic capabilities, we model the achievable speed of the AVs $v_{\mathrm{V,a}}$ as a monotone function of the vehicle fixed costs $C_{\mathrm{V,f}}$ (resulting from the cost of the vehicle $C_{\mathrm{V,v}}$ and the cost of its automation $C_{\mathrm{V,a}}$) and of the mileage-dependent operational costs $C_{\mathrm{V,o}}$ (accounting for maintenance, cleaning, energy consumption, depreciation, and opportunity costs [24]). In this setting, the AV DPI provides the functionality $v_{\mathrm{V,a}}$ and requires the resources $C_{\mathrm{V,f}}$ and $C_{\mathrm{V,o}}$. Consequently, the functionality space is $\mathcal{F}_{\mathrm{V}}=\overline{\mathbb{R}}_{+}$, and the resources space is $\mathcal{R}_{\mathrm{V}}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$.

#### III-G2 The Subway Design Problem

We design the public transit infrastructure by means of the service frequency introduced in Section III-B. Specifically, we assume that the service frequency $\varphi_{j}$ scales linearly with the size of the train fleet $n_{\mathrm{S}}$ as

$\varphi_{j}/\varphi_{j,\mathrm{base}}=n_{\mathrm{S}}/n_{\mathrm{S,base}}.$ (9)

We relate a train fleet of size $n_{\mathrm{S}}$ to the fixed costs $C_{\mathrm{S,f}}$ (accounting for train and infrastructural costs) and to the operational costs $C_{\mathrm{S,o}}$ (accounting for energy consumption, vehicle depreciation, and train operators’ wages). Given the passenger-independent public transit operation in today’s cities, we reasonably assume the operational costs $C_{\mathrm{S,o}}$ to be mileage-independent and to only vary with the size of the fleet. Formally, the number of acquired trains $n_{\mathrm{S,a}}=n_{\mathrm{S}}-n_{\mathrm{S,base}}$ is a functionality, whereas $C_{\mathrm{S,f}}$ and $C_{\mathrm{S,o}}$ are resources. The functionality space is $\mathcal{F}_{\mathrm{S}}=\overline{\mathbb{N}}$ and the resources space is $\mathcal{R}_{\mathrm{S}}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$.

#### III-G3 The I-AMoD Framework Design Problem

The I-AMoD DPI considers demand satisfaction as a functionality.
Formally, $\mathcal{F}_{\mathrm{O}}=2^{\mathcal{V}\times\mathcal{V}\times\overline{\mathbb{R}}_{+}}$, with the partial order $\preceq_{\mathcal{F}_{\mathrm{O}}}$ defined by $\mathcal{D}_{1}\coloneqq\\{(o^{1}_{i},d^{1}_{i},\alpha^{1}_{i})\\}_{i=1}^{M_{1}}\preceq_{\mathcal{F}_{\mathrm{O}}}\\{(o^{2}_{i},d^{2}_{i},\alpha^{2}_{i})\\}_{i=1}^{M_{2}}\eqqcolon\mathcal{D}_{2}$ iff for all $(o^{1},d^{1},\alpha^{1})\in\mathcal{D}_{1}$ there is some $(o^{2},d^{2},\alpha^{2})\in\mathcal{D}_{2}$ with $o^{1}=o^{2}$, $d^{1}=d^{2}$, and $\alpha^{2}\geq\alpha^{1}$. In other words, $\mathcal{D}_{1}\preceq_{\mathcal{F}_{\mathrm{O}}}\mathcal{D}_{2}$ if every travel request in $\mathcal{D}_{1}$ is also in $\mathcal{D}_{2}$, possibly with a larger rate. To successfully satisfy a given set of travel requests, we require the following resources: (i) the achievable speed of the AVs $v_{\mathrm{V,a}}$, (ii) the number of available AVs per fleet $n_{\mathrm{V,max}}$, (iii) the number of trains $n_{\mathrm{S,a}}$ acquired by the public transportation system, (iv) the average travel time of a trip

$t_{\mathrm{avg}}\coloneqq\frac{1}{\alpha_{\mathrm{tot}}}\cdot\sum_{m\in\mathcal{M},(i,j)\in\mathcal{A}}t_{ij}\cdot f_{m}\left(i,j\right),$ (10)

with $\alpha_{\mathrm{tot}}\coloneqq\sum_{m\in\mathcal{M}}\alpha_{m}$, (v) the total distance driven by the AVs per unit time

$s_{\mathrm{V,tot}}\coloneqq\sum_{(i,j)\in\mathcal{A}_{\mathrm{R}}}s_{ij}\cdot f_{\mathrm{tot}}\left(i,j\right),$ (11)

and (vi) the total AVs CO2 emissions per unit time

$m_{\mathrm{CO_{2},V,tot}}\coloneqq\gamma\cdot\sum_{(i,j)\in\mathcal{A}_{\mathrm{R}}}e_{ij}\cdot f_{\mathrm{tot}}\left(i,j\right),$ (12)

where $\gamma$ relates the energy consumption and the CO2 emissions. We assume that customers’ trips and AMoD rebalancing strategies are chosen to maximize customers’ welfare, defined through the average travel time $t_{\mathrm{avg}}$. Hence, we link the functionality and resources of the I-AMoD DPI through the following optimization problem:

$\min_{f_{m}\left(\cdot,\cdot\right)\geq 0,\;f_{0}\left(\cdot,\cdot\right)\geq 0}\;t_{\mathrm{avg}}=\frac{1}{\alpha_{\mathrm{tot}}}\sum_{m\in\mathcal{M},(i,j)\in\mathcal{A}}t_{ij}\cdot f_{m}\left(i,j\right)\quad\mathrm{s.t.}\ (2),\ (6),\ (8).$ (13)

Formally, the resources space is $\mathcal{R}_{\mathrm{O}}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{N}}\times\overline{\mathbb{N}}\times\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$.

###### Remark.

In general, the optimization problem (13) might possess multiple optimal solutions, making the relation between resources and functionality ill-posed. To overcome this subtlety, if two solutions share the same average travel time, we select the one incurring the lowest mileage.

#### III-G4 The Monotone Co-Design Problem

The functionality of the system is to provide mobility service to the customers. Formally, the functionality provided by the CDPI is the set of travel requests. To provide the mobility service, the following three resources are required. First, on the customers’ side, we require an average travel time, defined in (10). Second, on the municipality side, the resource is the total transportation cost of the intermodal mobility system.
Assuming an average vehicle life $l_{\mathrm{V}}$, an average train life $l_{\mathrm{S}}$, and a baseline subway fleet of $n_{\mathrm{S,base}}$ trains, we express the total costs as

$C_{\mathrm{tot}}=C_{\mathrm{V}}+C_{\mathrm{S}},$ (14)

where $C_{\mathrm{V}}$ is the AV-related cost

$C_{\mathrm{V}}=\frac{C_{\mathrm{V,f}}}{l_{\mathrm{V}}}\cdot n_{\mathrm{V,max}}+C_{\mathrm{V,o}}\cdot s_{\mathrm{V,tot}},$ (15)

and $C_{\mathrm{S}}$ is the public transit-related cost

$C_{\mathrm{S}}=\frac{C_{\mathrm{S,f}}}{l_{\mathrm{S}}}\cdot n_{\mathrm{S,a}}+C_{\mathrm{S,o}}.$ (16)

Third, on the environmental side, the resources are the total CO2 emissions

$m_{\mathrm{CO_{2},tot}}=m_{\mathrm{CO_{2},V,tot}}+m_{\mathrm{CO_{2},S}}\cdot n_{\mathrm{S}},$ (17)

where $m_{\mathrm{CO_{2},S}}$ represents the CO2 emissions of a single train. Formally, the set of travel requests $\\{\rho_{m}\\}_{m\in\mathcal{M}}$, quantified by the total request rate $\alpha_{\mathrm{tot}}$, is the CDPI functionality, whereas $t_{\mathrm{avg}}$, $C_{\mathrm{tot}}$, and $m_{\mathrm{CO_{2},tot}}$ are its resources. Consistently, the functionality space is $\mathcal{F}=\overline{\mathbb{R}}_{+}$ and the resources space is $\mathcal{R}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$. Note that the resulting CDPI (Figure 2) is indeed monotone, since it consists of the interconnection of monotone DPIs [2].

#### III-G5 Discussion

A few comments are in order. First, we lump the autonomy functionalities into the achievable velocity. We leave more elaborate AV models, accounting for instance for accident rates [25] and for safety levels, to future research. Second, we assume the service frequency of the subway system to scale linearly with the number of trains. We inherently rely on the assumption that the existing infrastructure can homogeneously accommodate the acquired train cars. To justify the assumption, we include an upper bound on the number of potentially acquirable trains in our case study design in Section IV. Third, we highlight that the I-AMoD framework is only one of the many feasible ways to map total demand to travel time, costs, and emissions. Specifically, practitioners can easily replace the corresponding DPI with more sophisticated models (e.g., simulation-based frameworks like AMoDeus [26]), as long as the monotonicity of the DPI is preserved. In our setting, we assume the customers’ and vehicles’ routes to be centrally controlled by the municipality in a socially-optimal fashion. Implicitly, we rely on the existence of effective incentives aligning private and societal interests. The study of such incentives represents an avenue for future research. Fourth, we assume a homogeneous fleet of AVs. Nevertheless, our model is readily extendable to capture heterogeneous fleets. Finally, we consider a fixed travel demand, and compute the antichain of resources providing it. Nonetheless, our formalization can be readily extended to arbitrary demand models preserving the monotonicity of the CDPI (accounting for instance for elastic effects). We leave this topic to future research.
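Before moving to the case study, the following is a minimal sketch of the inner routing problem (13) on a toy road network, assembled as a linear program with scipy; the graph, travel times, capacities, and fleet bound are made-up placeholders, not data from Section IV.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of problem (13): one request (o=0, d=2, alpha=1) on a small
# road digraph. Variables are customer flows f_m and empty rebalancing
# flows f_0 on each arc; all numbers below are illustrative placeholders.
arcs = [(0, 1), (1, 2), (0, 2), (2, 0)]
t = np.array([1.0, 1.0, 3.0, 2.0])    # arc travel times t_ij, cf. Eq. (5)
cap = np.array([2.0, 2.0, 2.0, 2.0])  # residual capacities c_ij - u_ij
n_nodes, n_arcs = 3, len(arcs)
o, d, alpha = 0, 2, 1.0
n_v_max = 10.0                        # fleet size bound, Eq. (8)

# Decision vector x = [f_m on all arcs, f_0 on all arcs].
c = np.concatenate([t, np.zeros(n_arcs)])  # minimize customer travel time

A_eq, b_eq = [], []
for j in range(n_nodes):
    out_in = np.array([(a[0] == j) - (a[1] == j) for a in arcs], dtype=float)
    # Eq. (2a): customer flow conservation with source/sink terms.
    A_eq.append(np.concatenate([out_in, np.zeros(n_arcs)]))
    b_eq.append(alpha * ((j == o) - (j == d)))
    # Eq. (2b): conservation of the total flow (customers plus empty AVs).
    A_eq.append(np.concatenate([out_in, out_in]))
    b_eq.append(0.0)

# Eq. (6): capacity bound on the total flow per arc; Eq. (8): fleet bound.
A_ub = [np.concatenate([row, row]) for row in np.eye(n_arcs)]
b_ub = list(cap)
A_ub.append(np.concatenate([t, t]))
b_ub.append(n_v_max)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print(res.fun, res.x)  # optimal t_avg (alpha_tot = 1) and the flows
```

On a real network the same structure applies, just with many more commodities, layers, and arcs.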
[Figure 2 diagram: interconnection of the I-AMoD, Vehicle, and Subway design problems, linking $v_{\mathrm{V,a}}$, $n_{\mathrm{S,a}}$, $n_{\mathrm{V,max}}$, $s_{\mathrm{V,tot}}$, the cost terms $C_{\mathrm{V,f}}$, $C_{\mathrm{V,o}}$, $C_{\mathrm{S,f}}$, $C_{\mathrm{S,o}}$, and the parameters $l_{\mathrm{V}}$, $l_{\mathrm{S}}$, $m_{\mathrm{CO_{2},S}}$ to the total cost, the average travel time, the total emissions, and the total request rate $\alpha_{\mathrm{tot}}$.]

Figure 2: Schematic representation of the CDPI. In solid green the provided functionalities and in dashed red the required resources. The edges represent co-design constraints: The resources required by a first design problem are the lower bound for the functionalities provided by the second one.

## IV Results

In this section, we leverage the framework presented in Section III to perform a real-world case study of Washington D.C., USA. Section IV-A details the case study. We then present numerical results in Sections IV-B and IV-C.

### IV-A Case Study

We base our studies on a real-world case of the urban area of Washington D.C., USA. We import the road network and its features from OpenStreetMap [27]. The public transit network and its schedules are extracted from the GTFS data [28]. Demand data is obtained by merging the origin-destination pairs of the morning peak of May 31, 2017, provided by taxi companies [29] and the Washington Metropolitan Area Transit Authority (WMATA) [23]. Given the lack of reliable demand data for the MetroBus system, we focus our studies on the MetroRail system and its design, inherently assuming MetroBus commuters to be unaffected by our design methodology. To conform with the large presence of ride-hailing companies, we scale the taxi demand rate by a factor of 5 [30]. Overall, the demand dataset includes 15,872 travel requests, corresponding to a demand rate of 24.22 requests/s. To account for congestion effects, we compute the nominal road capacity as in [31] and assume an average baseline road usage of 93 %, in line with [32]. We summarize the main parameters together with their bibliographic sources in Table I. In the remainder of this section, we tailor and solve the co-design problem presented in Section III through the PyMCDP solver [33], and investigate the influence of different AV costs on the design objectives and strategies.
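Given the monotone structure, the output of the co-design problem is an antichain in the sense of Definition II.2. As an illustration of how such a Pareto front can be extracted from a finite set of evaluated designs, consider the following sketch; the (cost, time, emissions) tuples are placeholders loosely inspired by the figures discussed below, not actual solver outputs.

```python
def pareto_front(points):
    """Keep the antichain of componentwise-minimal tuples (Definition II.2).

    A point p is discarded iff some other point q dominates it, i.e. q is
    less than or equal to p in every coordinate and differs from p.
    """
    def dominates(q, p):
        return q != p and all(qi <= pi for qi, pi in zip(q, p))

    return [p for p in points if not any(dominates(q, p) for q in points)]

# Candidate designs as (cost [MilUSD/month], time [min], emissions); the
# emission values are arbitrary placeholders.
designs = [
    (43.0, 17.1, 9.0),  # large AV fleet, doubled train fleet
    (23.0, 18.6, 6.0),  # mid-size AV fleet, no new trains
    (15.6, 21.3, 4.5),
    (12.9, 24.3, 4.0),  # do-nothing design
    (30.0, 18.6, 8.0),  # dominated by the 23.0 design
]
print(pareto_front(designs))  # the last tuple is filtered out
```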
TABLE I: Parameters, variables, numbers, and units for the case studies.

| Parameter | Symbol | Value | Units | Source |
|---|---|---|---|---|
| Road usage | $u_{ij}$ | 93 | % | [32] |

Vehicle parameters (cases C1 / C2.1 / C2.2):

| Parameter | Symbol | C1 | C2.1 | C2.2 | Units | Source |
|---|---|---|---|---|---|---|
| Operational cost | $C_{\mathrm{V,o}}$ | 0.084 | 0.084 | 0.062 | USD/mile | [34, 35] |
| Vehicle cost | $C_{\mathrm{V}}$ | 32,000 | 32,000 | 26,000 | USD/car | [34] |
| Automation cost (20 mph) | $C_{\mathrm{V,a}}$ | 15,000 | 20,000 | 3,700 | USD/car | [35, 36, 37, 38, 39] |
| Automation cost (25 mph) | | 15,000 | 30,000 | 4,400 | USD/car | [35, 36, 37, 38, 39] |
| Automation cost (30 mph) | | 15,000 | 55,000 | 6,200 | USD/car | [35, 36, 37, 38, 39] |
| Automation cost (35 mph) | | 15,000 | 90,000 | 8,700 | USD/car | [35, 36, 37, 38, 39] |
| Automation cost (40 mph) | | 15,000 | 115,000 | 9,800 | USD/car | [35, 36, 37, 38, 39] |
| Automation cost (45 mph) | | 15,000 | 130,000 | 12,000 | USD/car | [35, 36, 37, 38, 39] |
| Automation cost (50 mph) | | 15,000 | 150,000 | 13,000 | USD/car | [35, 36, 37, 38, 39] |
| Vehicle life | $l_{\mathrm{V}}$ | 5 | 5 | 5 | years | [34] |
| CO2 per Joule | $\gamma$ | 0.14 | 0.14 | 0.14 | g/kJ | [40] |
| Time $\mathcal{G}_{\mathrm{W}}$ to $\mathcal{G}_{\mathrm{R}}$ | $t_{\mathrm{WR}}$ | 300 | 300 | 300 | s | - |
| Time $\mathcal{G}_{\mathrm{R}}$ to $\mathcal{G}_{\mathrm{W}}$ | $t_{\mathrm{RW}}$ | 60 | 60 | 60 | s | - |
| Speed fraction | $\beta$ | 1/1.3 | 1/1.3 | 1/1.3 | - | - |

Public transit parameters:

| Parameter | Symbol | Value | Units | Source |
|---|---|---|---|---|
| Operational cost (100 % service) | $C_{\mathrm{S,o}}$ | 148,000,000 | USD/year | [41] |
| Operational cost (133 % service) | | 197,000,000 | USD/year | [41] |
| Operational cost (200 % service) | | 295,000,000 | USD/year | [41] |
| Fixed cost | $C_{\mathrm{S,f}}$ | 14,500,000 | USD/train | [42] |
| Train life | $l_{\mathrm{S}}$ | 30 | years | [42] |
| Emissions per train | $m_{\mathrm{CO_{2},S}}$ | 140,000 | kg/year | [43] |
| Fleet baseline | $n_{\mathrm{S,base}}$ | 112 | trains | [42] |
| Service frequency | $\varphi_{j,\mathrm{base}}$ | 1/6 | 1/min | [44] |
| Time $\mathcal{G}_{\mathrm{W}}$ to $\mathcal{G}_{\mathrm{P}}$ | $t_{\mathrm{WS}}$ | 60 | s | - |
| Time $\mathcal{G}_{\mathrm{P}}$ to $\mathcal{G}_{\mathrm{W}}$ | $t_{\mathrm{SW}}$ | 60 | s | - |

### IV-B Case 1 - Constant Cost of Automation

In line with [35, 36, 37, 38, 39], we first assume an achievable-velocity-independent average cost of automation. As discussed in Section III, we design the system by means of subway service frequency, AV fleet size, and achievable free-flow speed. Specifically, we allow the municipality to (i) increase the subway service frequency $\varphi_{j}$ by 0 %, 33 %, or 100 %, (ii) deploy an AV fleet of size $n_{\mathrm{V,max}}\in\\{0,500,1000,\ldots,6000\\}$ vehicles, and (iii) design the single AV achievable velocity $v_{\mathrm{V,a}}\in\\{20\,\mathrm{mph},25\,\mathrm{mph},\ldots,50\,\mathrm{mph}\\}$. We assume the AV fleet to be composed of battery electric BEV-250 mile AVs [34]. In Figure 3(a), we show the solution of the co-design problem by reporting the antichain consisting of the total transportation cost, average travel time, and total CO2 emissions. These solutions are _rational_ (and not comparable) in the sense that there exists no instance which simultaneously yields lower cost, average travel time, and emissions.

Figure 3: Solution of the CDPI (state-of-the-art case). (a) Left: three-dimensional representation of the antichain elements and their projection in the cost-time space; right: two-dimensional projections. (b) Results for constant automation costs. Left: two-dimensional representation of the antichain elements, where red denotes the unfeasible strategies, orange the feasible but irrational solutions, and green the Pareto front; right: the implementations corresponding to the highlighted antichain elements, quantified in terms of achievable vehicle speed, AV fleet size, and train fleet size.
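To make the cost structure concrete, here is a small helper that evaluates (14)-(16) on a monthly basis with the case C1 values from Table I; the monthly mileage argument is a placeholder that the routing problem would supply, and the monetized emissions term is omitted.

```python
# Case C1 values from Table I; costs are converted to USD per month.
C_V_F = 32_000 + 15_000          # vehicle + automation fixed cost [USD/car]
C_V_O = 0.084                    # operational cost [USD/mile]
L_V = 5 * 12                     # vehicle life l_V [months]
L_S = 30 * 12                    # train life l_S [months]
C_S_F = 14_500_000               # fixed cost per train [USD]
C_S_O = {1.0: 148e6, 1.33: 197e6, 2.0: 295e6}  # service level -> USD/year
N_S_BASE = 112                   # baseline MetroRail fleet [trains]

def total_cost_per_month(n_v_max, s_v_tot_miles, service_level=1.0):
    """Monthly C_tot = C_V + C_S, Eqs. (14)-(16)."""
    c_v = C_V_F / L_V * n_v_max + C_V_O * s_v_tot_miles       # Eq. (15)
    n_s_a = round(N_S_BASE * (service_level - 1.0))           # new trains
    c_s = C_S_F / L_S * n_s_a + C_S_O[service_level] / 12     # Eq. (16)
    return c_v + c_s                                          # Eq. (14)

# The do-nothing design: no AVs, no extra trains, baseline service.
print(total_cost_per_month(0, 0.0))  # roughly 12.3 million USD/month
```

The do-nothing design evaluates to roughly 12.3 MilUSD/month, of the same order as the smallest rational investment discussed below.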
For the sake of clarity, we opt for a two-dimensional antichain representation, by translating and including the emissions in the total cost. To do so, we consider the conversion factor 40 USD/kg [45]. Note that since this transformation preserves the monotonicity of the CDPI, it smoothly integrates into our framework. Doing so, we can conveniently depict the co-design strategies through the two-dimensional antichain (Figure 3(b), left) and the corresponding municipality actions (Figure 3(b), right). Generally, as the municipality budget increases, the average travel time per trip required to satisfy the given demand decreases, reaching a minimum of about 17.1 min with an expense of around 43 MilUSD/month. This configuration corresponds to a fleet of 5,500 AVs able to drive at 50 mph and to the doubling of the current MetroRail train fleet. On the other hand, the smallest rational investment of 12.9 MilUSD/month leads to a 42 % higher average travel time, corresponding to a nonexistent AV fleet and an unchanged subway infrastructure. Notably, an expense of 23 MilUSD/month (48 % lower than the highest rational investment) only increases the minimal required travel time by 9 %, requiring a fleet of 4,000 vehicles able to drive at 35 mph and no acquisition of trains. Conversely, an investment of 15.6 MilUSD/month (just 2 MilUSD/month more than the minimal rational investment) provides a 3 min shorter travel time. Remarkably, the design of AVs able to exceed 40 mph only improves the average travel time by 6 %, and is rational only starting from an expense of 22.8 MilUSD/month. This suggests that the design of faster vehicles mainly results in higher emission rates and costs, without substantially contributing to a more time-efficient demand satisfaction. Finally, it is rational to improve the subway system only starting from a budget of 28.5 MilUSD/month, leading to a travel time improvement of just 4 %. This trend can be explained by the high train acquisition and increased operation costs related to the subway reinforcement. We expect this phenomenon to be more marked for other cities, considering the moderate operation costs of the MetroRail subway system due to its automation [44] and related benefits [46].

### IV-C Case 2 - Speed-Dependent Automation Costs

To relax the potentially unrealistic assumption of a velocity-independent automation cost, we consider a performance-dependent cost structure. The large variance in sensing technologies and their reported performances [47] suggests that this rationale is reasonable. Indeed, the technology required today to safely operate an autonomous vehicle at 50 mph is substantially more sophisticated, and therefore more expensive, than the one needed at 20 mph. To this end, we adopt the cost structure reported in Table I. Furthermore, the frenetic evolution of automation techniques complicates their monetary quantification.
Therefore, we perform our studies with current (2020) costs as well as with their projections for the upcoming decade (2025) [48, 34].

#### IV-C1 Case 2.1 - 2020

We study the hypothetical case of an immediate AV fleet deployment. We introduce the aforementioned velocity-dependent automation cost structure and obtain the results reported in Figure 4(a). Comparing these results with the state-of-the-art parameters presented in Figure 3 confirms the previously observed trend concerning high vehicle speeds. Indeed, spending 24.9 MilUSD/month (55 % lower than the highest rational expense) only increases the average travel time by 10 %, requiring a fleet of 3,000 AVs at 40 mph and no subway interventions. Nevertheless, the comparison shows two substantial differences. First, the budget required to reach the minimum travel time of 17.1 min is 28 % higher compared to the previous case, and consists of the same strategy for the municipality, i.e., doubling the train fleet and having a fleet of 5,500 AVs at 50 mph. Second, the higher vehicle costs result in an average AV fleet growth of 5 %, an average velocity reduction of 9 %, and an average train fleet growth of 7 %. This trend suggests that, compared to Case 1, rational design strategies foster larger fleets of less performing AVs.

Figure 4: Results for the speed-dependent automation costs: (a) 2020; (b) 2025. Left: two-dimensional representation of the antichain elements, where red denotes the unfeasible strategies, orange the feasible but irrational solutions, and green the Pareto front. Right: the implementations corresponding to the highlighted antichain elements.

#### IV-C2 Case 2.2 - 2025

Experts forecast a substantial decrease of automation costs (up to 90 %) in the next decade, mainly due to the mass production of AV sensing technology [48, 49]. In line with this prediction, we inspect this futuristic scenario by solving the CDPI for the adapted automation costs, and report the results in Figure 4(b). Two comments are in order. First, the maximal rational budget is 25 % lower than in the immediate adoption case. Second, the reduction in autonomy costs clearly eases the acquisition of more performant AVs, increasing the average vehicle speed by 10 %. As a direct consequence, the AV and train fleets are reduced in size by 5 % and 10 %, respectively.

### IV-D Discussion

We conclude the analysis of our case study with two final comments. First, the presented case studies illustrate the ability of our framework to extract the set of rational design strategies for an AV-enabled mobility system. This way, stakeholders such as AV companies, transportation authorities, and policy makers can get transparent and interpretable insights on the impact of future interventions. Second, we perform a sensitivity analysis through the variation of the autonomy cost structures. On the one hand, this reveals a clear transition from small fleets of fast AVs (in the case of low autonomy costs) to large fleets of slow AVs (in the case of high autonomy costs). On the other hand, our studies highlight that investments in the public transit infrastructure are rational only when large budgets are available. Indeed, the onerous train acquisition and operation costs lead to a comparative advantage of AV-based mobility.
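As a compact restatement of the sensitivity analysis, the following snippet prints the two speed-dependent automation cost structures from Table I side by side; the computed reductions are consistent with the up-to-90 % forecast cited above.

```python
# Speed-dependent automation costs C_V,a from Table I [USD/car].
speeds = [20, 25, 30, 35, 40, 45, 50]           # [mph]
cost_2020 = [20_000, 30_000, 55_000, 90_000, 115_000, 130_000, 150_000]
cost_2025 = [3_700, 4_400, 6_200, 8_700, 9_800, 12_000, 13_000]

for v, c20, c25 in zip(speeds, cost_2020, cost_2025):
    drop = 1 - c25 / c20  # projected cost reduction at this speed
    print(f"{v} mph: {c20:>7,} USD (2020) -> {c25:>6,} USD (2025), "
          f"{drop:.0%} lower")
```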
## V Conclusion

In this paper, we leveraged the mathematical theory of co-design to propose a design framework for AV-enabled mobility systems. Specifically, the nature of our framework allows both for the modular and compositional interconnection of the DPIs of different mobility options and for multiple objectives. Starting from the multi-commodity flow model of an I-AMoD system, we optimized the design of AVs and public transit from both a vehicle-centric and a fleet-level perspective. In particular, we studied the problem of deploying a fleet of AVs providing on-demand mobility in cooperation with public transit, optimizing the speed achievable by the vehicles, the fleet size, and the service frequency of the subway lines. Our framework allows the stakeholders involved in the mobility ecosystem, from vehicle developers all the way to mobility-as-a-service companies and governmental authorities, to characterize rational trajectories for technology and investment development. We showcased our methodology on a real-world case study of Washington D.C., USA. Notably, our problem formulation allows for a systematic analysis of incomparable objectives, providing stakeholders with analytical insights for the socio-technical design of AV-enabled mobility systems. This work opens the field for the following future research streams:

_Modeling:_ First, we would like to extend the presented framework to capture additional modes of transportation, such as micromobility, and heterogeneous fleets with different self-driving infrastructures, propulsion systems, and passenger capacities. Second, we would like to investigate variable demand models. Finally, we would like to analyze the interactions between multiple stakeholders, characterizing the equilibrium arising from their conflicting interests.

_Algorithms:_ It is of interest to tailor co-design algorithmic frameworks to the particular case of transportation DPIs, possibly leveraging their specific structure.

_Application:_ Finally, we would like to devise a user-friendly web interface which supports mobility stakeholders in reasoning about strategic interventions in urban areas.

## References

* [1] G. Zardini, N. Lanzetti, M. Salazar, A. Censi, E. Frazzoli, and M. Pavone, “Towards a co-design framework for future mobility systems,” in _Annual Meeting of the Transportation Research Board_, Washington D.C., United States, 2020.
* [2] A. Censi, “A mathematical theory of co-design,” _arXiv preprint arXiv:1512.08055v7_, 2015.
* [3] ——, “Monotone co-design problems; or, everything is the same,” in _American Control Conference_, 2016.
* [4] ——, “A class of co-design problems with cyclic constraints and their solution,” _IEEE Robotics and Automation Letters_, vol. 2, pp. 96–103, 2017.
* [5] M. Salazar, N. Lanzetti, F. Rossi, M. Schiffer, and M. Pavone, “Intermodal autonomous mobility-on-demand,” _IEEE Transactions on Intelligent Transportation Systems_, 2019.
* [6] R. Z. Farahani, E. Miandoabchi, W. Y. Szeto, and H. Rashidi, “A review of urban transportation network design problems,” _European Journal of Operational Research_, vol. 229, pp. 281–302, 2013.
* [7] V. Guihaire and J.-K. Hao, “Transit network design and scheduling: A global review,” _Transportation Research Part B: Methodological_, vol. 42, pp. 1251–1273, 2008.
* [8] Z. Cong, B. De Schutter, and R. Babuska, “Co-design of traffic network topology and control measures,” _Transportation Research Part C: Emerging Technologies_, vol. 54, pp. 56–73, 2015.
* [9] R. O. Arbex and C. B.
da Cunha, “Efficient transit network design and frequencies setting multi-objective optimization by alternating objective genetic algorithm,” _Transportation Research Part B: Methodological_, vol. 81, pp. 355–376, 2015.
* [10] L. Sun, J. G. Jin, D.-H. Lee, K. W. Axhausen, and A. Erath, “Demand-driven timetable design for metro services,” _Transportation Research Part C: Emerging Technologies_, vol. 46, pp. 284–299, 2014.
* [11] S. Su, X. Li, T. Tang, and Z. Gao, “A subway train timetable optimization approach based on energy-efficient operation strategy,” _IEEE Transactions on Intelligent Transportation Systems_, vol. 14, no. 2, pp. 883–893, 2013.
* [12] S. Narayanan, E. Chaniotakis, and C. Antoniou, “Shared autonomous vehicle services: A comprehensive review,” _Transportation Research Part C: Emerging Technologies_, vol. 111, pp. 255–293, 2020.
* [13] J. A. Barrios and J. D. Godier, “Fleet sizing for flexible carsharing systems: Simulation-based approach,” _Transportation Research Record: Journal of the Transportation Research Board_, vol. 2416, pp. 1–9, 2014.
* [14] D. J. Fagnant and K. M. Kockelman, “Dynamic ride-sharing and fleet sizing for a system of shared autonomous vehicles in Austin, Texas,” _Transportation_, vol. 45, no. 1, pp. 143–158, 2018.
* [15] M. M. Vazifeh, P. Santi, G. Resta, S. H. Strogatz, and C. Ratti, “Addressing the minimum fleet problem in on-demand urban mobility,” _Nature_, vol. 557, no. 7706, p. 534, 2018.
* [16] P. M. Boesch, F. Ciari, and K. W. Axhausen, “Autonomous vehicle fleet sizes required to serve different levels of demand,” _Transportation Research Record: Journal of the Transportation Research Board_, vol. 2542, no. 1, pp. 111–119, 2016.
* [17] K. Spieser, K. Treleaven, R. Zhang, E. Frazzoli, D. Morton, and M. Pavone, “Toward a systematic approach to the design and evaluation of automated mobility-on-demand systems: A case study in Singapore,” in _Road Vehicle Automation_, 2014, pp. 229–245.
* [18] H. Zhang, C. J. R. Sheppard, T. Lipman, and S. Moura, “Joint fleet sizing and charging system planning for autonomous electric vehicles,” _IEEE Transactions on Intelligent Transportation Systems_, 2019.
* [19] G. J. Beaujon and M. A. Turnquist, “A model for fleet sizing and vehicle allocation,” _Transportation Science_, vol. 25, no. 1, pp. 19–45, 1991.
* [20] A. Wallar, W. Schwarting, J. Alonso-Mora, and D. Rus, “Optimizing multi-class fleet compositions for shared mobility-as-a-service,” in _Proc. IEEE Int. Conf. on Intelligent Transportation Systems_. IEEE, 2019, pp. 2998–3005.
* [21] H. K. R. F. Pinto, M. F. Hyland, H. S. Mahmassani, and I. O. Verbas, “Joint design of multimodal transit networks and shared autonomous mobility fleets,” _Transportation Research Part C: Emerging Technologies_, 2019.
* [22] C. F. Daganzo and N. Geroliminis, “An analytical approximation for the macroscopic fundamental diagram of urban traffic,” _Transportation Research Part B: Methodological_, vol. 42, no. 9, pp. 771–781, 2008.
* [23] PIM. (2012) Metrorail ridership by origin and destination. Plan It Metro.
* [24] A. Mas-Colell, M. D. Whinston, and J. R. Green, _Microeconomic Theory_. Oxford Univ. Press, 1995.
* [25] D. C. Richards, “Relationship between speed and risk of fatal injury: Pedestrians and car occupants,” Department for Transport: London, Tech. Rep., 2010.
* [26] C. Ruch, S. Hörl, and E. Frazzoli, “Amodeus, a simulation-based testbed for autonomous mobility-on-demand systems,” in _Proc. IEEE Int. Conf.
on Intelligent Transportation Systems_, 2018, pp. 3639–3644.
* [27] M. Haklay and P. Weber, “OpenStreetMap: User-generated street maps,” _IEEE Pervasive Computing_, vol. 7, no. 4, pp. 12–18, 2008.
* [28] GTFS. (2019) GTFS: Making public transit data universally accessible.
* [29] ODDC. (2017) Taxicab trips in 2016. Open Data DC. Available online at https://opendata.dc.gov/search?q=taxicabs.
* [30] F. Siddiqui. (2018) As ride hailing booms in D.C., it’s not just eating in the taxi market – it’s increasing vehicle trips. The Washington Post. Available online.
* [31] DoA, Ed., _Military Police Traffic Operations_. Department of the Army, 1977.
* [32] S. Dixon, H. Irshad, and V. White, “Deloitte city mobility index – Washington D.C.,” Deloitte, Tech. Rep., 2018.
* [33] A. Censi. (2019) Monotone co-design problems. Available online: https://co-design.science/index.html.
* [34] N. Pavlenko, P. Slowik, and N. Lutsey, “When does electrifying shared mobility make economic sense?” The International Council on Clean Transportation, Tech. Rep., 2019.
* [35] P. M. Boesch, F. Becker, H. Becker, and K. W. Axhausen, “Cost-based analysis of autonomous mobility services,” _Transport Policy_, vol. 64, pp. 76–91, 2018.
* [36] D. J. Fagnant and K. Kockelman, “Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations,” _Transportation Research Part A: Policy and Practice_, vol. 77, pp. 167–181, 2015.
* [37] G. S. Bauer, J. B. Greenblatt, and B. F. Gerke, “Cost, energy, and environmental impact of automated electric taxi fleets in Manhattan,” _Environmental Science & Technology_, vol. 52, no. 8, pp. 4920–4928, 2018.
* [38] Z. Wadud, “Fully automated vehicles: A cost of ownership analysis to inform early adoption,” _Transportation Research Part A: Policy and Practice_, vol. 101, pp. 163–176, 2017.
* [39] T. Litman, “Autonomous vehicle implementation predictions – implications for transport planning,” Victoria Transport Policy Institute, Tech. Rep., 2019.
* [40] W. Time. (2018, Mar.) Carbon footprint data. Wired.
* [41] WMATA, “FY2018 proposed budget,” Washington Metropolitan Area Transit Authority, Tech. Rep., 2017.
* [42] L. Aratani. (2015) Metro to debut first of its 7000-series cars on Blue Line on April 14. The Washington Post. Available online.
* [43] WMATA, “Sustainability report 2018,” Washington Metropolitan Area Transit Authority, Tech. Rep., 2018.
* [44] E. Jaffe. (2015) The case for driverless trains, by the numbers. Citylab. Available online.
* [45] P. Howard and D. Sylvan, “Expert consensus on the economics of climate change,” Institute for Policy Integrity – New York University School of Law, Tech. Rep., 2015.
* [46] Y. Wang, J. Zhang, M. Ma, and X. Zhou, “Survey on driverless train operation for urban rail transit systems,” _Urban Rail Transit_, vol. 2, no. 3–4, pp. 106–113, 2016.
* [47] J. H. Gawron, G. A. Keoleian, R. D. De Kleine, T. J. Wallington, and K. Hyung Chul, “Life cycle assessment of connected and automated vehicles: Sensing and computing subsystem and vehicle level effects,” _Environmental Science & Technology_, vol. 52, pp. 3249–3256, 2018.
* [48] P. Lienert. (2019) Cost of driverless vehicles to drop dramatically: Delphi CEO. Insurance Journal. Available online.
* [49] WCP, “The automotive lidar market,” Woodside Capital Partners, Tech. Rep., 2018.
2024-09-04T02:54:58.774191
2020-03-10T15:54:53
2003.04819
{ "authors": "Benedek Rozemberczki, Oliver Kiss, Rik Sarkar", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26138", "submitter": "Benedek Rozemberczki", "url": "https://arxiv.org/abs/2003.04819" }
arxiv-papers
Karate Club: An API Oriented Open-Source Python Framework for Unsupervised Learning on Graphs

Benedek Rozemberczki, The University of Edinburgh, United Kingdom; Oliver Kiss, Central European University; Rik Sarkar, The University of Edinburgh, United Kingdom

Graphs encode important structural properties of complex systems. Machine learning on graphs has therefore emerged as an important technique in research and applications. We present Karate Club – a Python framework combining more than 30 state-of-the-art graph mining algorithms. These unsupervised techniques make it easy to identify and represent common graph features. The primary goal of the package is to make community detection, node and whole-graph embedding available to a wide audience of machine learning researchers and practitioners. Karate Club is designed with an emphasis on a consistent application interface, scalability, ease of use, sensible out-of-the-box model behaviour, standardized dataset ingestion, and output generation. This paper discusses the design principles behind the framework with practical examples. We show Karate Club’s efficiency in learning performance on a wide range of real-world clustering problems and classification tasks, along with supporting evidence of its competitive speed.

§ ACKNOWLEDGEMENTS

Benedek Rozemberczki was supported by the Centre for Doctoral Training in Data Science, funded by EPSRC (grant EP/L016427/1).
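To illustrate the consistent, scikit-learn-style interface described above, here is a minimal usage sketch; the class and method names (DeepWalk, fit, get_embedding) follow the library's public documentation, but the toy graph and its parameters are arbitrary.

```python
import networkx as nx
from karateclub import DeepWalk

# Karate Club estimators expect a NetworkX graph whose nodes are
# indexed 0..n-1; here we use a small synthetic social network.
graph = nx.newman_watts_strogatz_graph(100, 20, 0.05)

# Every model follows the same fit / get_* pattern with sensible defaults.
model = DeepWalk()
model.fit(graph)
embedding = model.get_embedding()  # one vector per node
print(embedding.shape)
```

Community detection estimators expose an analogous membership accessor, so pipelines can swap models without changing the surrounding code.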
2024-09-04T02:54:58.783178
2020-03-10T16:16:15
2003.04827
{ "authors": "David I. Spivak, David Jaz Myers", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26139", "submitter": "David Spivak", "url": "https://arxiv.org/abs/2003.04827" }
arxiv-papers
# Dirichlet Polynomials form a Topos

David I. Spivak   David Jaz Myers

###### Abstract

One can think of power series or polynomials in one variable, such as $P(\mathcal{y})=2\mathcal{y}^{3}+\mathcal{y}+5$, as functors from the category $\mathsf{Set}$ of sets to itself; these are known as polynomial functors. Denote by $\mathsf{Poly}_{\mathsf{Set}}$ the category of polynomial functors on $\mathsf{Set}$ and natural transformations between them. The constants $0,1$ and operations $+,\times$ that occur in $P(\mathcal{y})$ are actually the initial and terminal objects and the coproduct and product in $\mathsf{Poly}_{\mathsf{Set}}$. Just as the polynomial functors on $\mathsf{Set}$ are the copresheaves that can be written as sums of representables, one can express any Dirichlet series, e.g. $\sum_{n=0}^{\infty}n^{\mathcal{y}}$, as a coproduct of representable presheaves. A Dirichlet polynomial is a finite Dirichlet series, that is, a finite sum of representables $n^{\mathcal{y}}$. We discuss how both polynomial functors and their Dirichlet analogues can be understood in terms of bundles, and go on to prove that the category of Dirichlet polynomials is an elementary topos.

## Chapter 0 Introduction

Polynomials $P(\mathcal{y})$ and finite Dirichlet series $D(\mathcal{y})$ in one variable $\mathcal{y}$, with natural number coefficients $a_{i}\in\mathbb{N}$, are respectively functions of the form

$\displaystyle P(\mathcal{y})$ $\displaystyle=a_{n}\mathcal{y}^{n}+\cdots+a_{2}\mathcal{y}^{2}+a_{1}\mathcal{y}^{1}+a_{0}\mathcal{y}^{0},$ (1) $\displaystyle D(\mathcal{y})$ $\displaystyle=a_{n}n^{\mathcal{y}}+\cdots+a_{2}2^{\mathcal{y}}+a_{1}1^{\mathcal{y}}+a_{0}0^{\mathcal{y}}.$

The first thing we should emphasize is that the algebraic expressions in (1) can in fact be regarded as _objects in a category_, in fact two categories: $\mathsf{Poly}$ and $\mathsf{Dir}$. We will explain the morphisms later, but for now we note that in $\mathsf{Poly}$, $\mathcal{y}^{2}=\mathcal{y}\times\mathcal{y}$ is a product and $2\mathcal{y}=\mathcal{y}+\mathcal{y}$ is a coproduct, and similarly for $\mathsf{Dir}$. The operations, in both the polynomial and the Dirichlet case, are not just algebraic; they are category-theoretic. Moreover, these categories have a rich structure.

The category $\mathsf{Poly}$ is well studied (see [GK12]). In particular, the following are equivalent:

###### Theorem 1.

[GK12] For a functor $P\colon\mathsf{Fin}\to\mathsf{Fin}$, the following are equivalent:

1. 1. $P$ is polynomial.
2. 2. $P$ is a sum of representables.
3. 3. $P$ preserves connected limits – or equivalently, wide pullbacks.

In Theorem 8 we prove an analogous result characterizing Dirichlet polynomials:

###### Theorem 2.

For a functor $D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$, the following are equivalent:

1. 1. $D$ is a Dirichlet polynomial.
2. 2. $D$ is a sum of representables.
3. 3. $D$ sends connected colimits to limits – or equivalently, $D$ preserves wide pushouts.

We will also show that $\mathsf{Dir}$ is equivalent to the arrow category of finite sets,

$\mathsf{Dir}\simeq\mathsf{Fin}^{\to},$

and in particular that $\mathsf{Dir}$ is an elementary topos.
If one allows _arbitrary_ sums of functors represented by finite sets, one gets _analytic_ functors in the covariant case, first defined by Joyal in his seminal paper on combinatorial species [Joy81], and _Dirichlet_ functors in the contravariant case, first defined by Baez and Dolan and appearing in Baez's _This Week's Finds_ blog [BD]. Baez and Dolan also drop the traditional negative sign in the exponent (that is, they use $n^{s}$ where $n^{-s}$ usually appears), but also find a nice way to bring it back by moving to groupoids. Here, we drop the negative sign and work with finite sets to keep things as simple as possible. Similar considerations hold with little extra work for infinite Dirichlet series or power series, and even more generally, by replacing $\mathsf{Fin}$ with $\mathsf{Set}$.

## Chapter 1 Polynomial and Dirichlet functors

Recall that a _co-representable functor_ $\mathsf{Fin}\to\mathsf{Fin}$ is one of the form $\mathsf{Fin}(k,-)$ for a finite set $k=\\{`1\text{'},`2\text{'},\ldots,`k\text{'}\\}.$ We denote this functor by $\mathcal{y}^{k}$ and say it is _represented by_ $k\in\mathsf{Fin}$. Similarly, a _(contra-)representable functor_ $\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ is a contravariant functor of the form $\mathsf{Fin}(-,k)$; we denote this functor by $k^{\mathcal{y}}$. The functors $\mathcal{y}^{-}$ and $-^{\mathcal{y}}$ are the contravariant and covariant Yoneda embeddings,

$\mathcal{y}^{k}\coloneqq\mathsf{Fin}(k,-)\qquad\text{and}\qquad k^{\mathcal{y}}\coloneqq\mathsf{Fin}(-,k).$

For example $\mathcal{y}^{3}(2)\cong 8$ and $3^{\mathcal{y}}(2)\cong 9$. Note that the functor $0^{\mathcal{y}}\not\cong 0$ is not the initial object in $\mathsf{Dir}$; it is given by

$0^{\mathcal{y}}(s)=\begin{cases}1&\textnormal{if }s=0\\ 0&\textnormal{if }s\geq 1.\end{cases}$

The coefficient $a_{0}$ of $1=\mathcal{y}^{0}$ in a polynomial $P$ is called its _constant_ term. We refer to the coefficient $D_{\text{zc}}\coloneqq a_{0}$ of $0^{\mathcal{y}}$ in a Dirichlet series $D$ as its _zero-content_ term. Rather than having no content, the content of the functor $D_{\text{zc}}{\cdot}0^{\mathcal{y}}$ becomes significant exactly when it is applied to zero.

###### Example 1.

The reader can determine which Dirichlet polynomial $D(\mathcal{y})\in\mathsf{Dir}$ as in Eq. 1 has the following values:

$\begin{array}{c|ccccccc}\mathcal{y}&\cdots&5&4&3&2&1&0\\ \hline D(\mathcal{y})&\cdots&96&48&24&12&6&7\end{array}$

Hint: its zero-content term is $D_{\text{zc}}=4$.

The set $P(1)$ (resp. the set $D(0)$) has particular importance; it is the set of pure-power terms $\mathcal{y}^{k}$ in $P$ (resp. the pure-exponential terms $k^{\mathcal{y}}$ in $D$). For example if $P=\mathcal{y}^{2}+4\mathcal{y}+4$ and $D=2^{\mathcal{y}}+4+4{\cdot}0^{\mathcal{y}}$ then $P(1)=D(0)=9$.

###### Definition 2.

A _polynomial functor_ is a functor $P\colon\mathsf{Fin}\to\mathsf{Fin}$ that can be expressed as a sum of co-representable functors. Similarly, we define a _Dirichlet functor_ to be a functor $D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ that can be expressed as a sum of representable presheaves (contravariant functors):

$P=\sum_{i=1}^{P(1)}\mathcal{y}^{p_{i}}\qquad\text{and}\qquad D=\sum_{i=1}^{D(0)}(d_{i})^{\mathcal{y}}.$ (1)

That is, $P(X)=\sum_{i=1}^{P(1)}\mathsf{Fin}(p_{i},X)$ and $D(X)=\sum_{i=1}^{D(0)}\mathsf{Fin}(X,d_{i})$ as functors applied to $X\in\mathsf{Fin}$. See Theorem 1 above for well-known equivalent conditions in $\mathsf{Poly}$ and Theorem 8 below for a Dirichlet analogue.
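Since these functors are determined by counting functions between finite sets, the definitions above can be checked mechanically. A minimal Python sketch (our own encoding, not from the paper: a polynomial is the list of its exponents $p_{i}$ with multiplicity, a Dirichlet polynomial the list of its bases $d_{i}$):

```python
# |Fin(k, X)| = |X|^k, so evaluation is pure counting.

def eval_poly(exponents, x):
    """|P(X)| for |X| = x, where P = sum_i y^{p_i}."""
    return sum(x ** p for p in exponents)

def eval_dirichlet(bases, x):
    """|D(X)| for |X| = x, where D = sum_i (d_i)^y."""
    return sum(d ** x for d in bases)

assert eval_poly([3], 2) == 8        # y^3 applied to 2: |Fin(3, 2)| = 2^3
assert eval_dirichlet([3], 2) == 9   # 3^y applied to 2: |Fin(2, 3)| = 3^2

# P = y^2 + 4y + 4 and D = 2^y + 4 + 4*0^y from the text; P(1) = D(0) = 9.
# (Python's 0 ** 0 == 1 matches 0^y(0) = 1 above.)
P = [2] + [1] * 4 + [0] * 4
D = [2] + [1] * 4 + [0] * 4
assert eval_poly(P, 1) == 9 and eval_dirichlet(D, 0) == 9
```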
## Chapter 2 The categories $\mathsf{Poly}$ and $\mathsf{Dir}$

For any small category $C$, let $\mathsf{Fin}^{C}$ denote the category whose objects are the functors $C\to\mathsf{Fin}$ and whose morphisms are the natural transformations between them.

###### Definition 1.

The _category of polynomial functors_, denoted $\mathsf{Poly}$, is the (skeleton of the) full subcategory of $\mathsf{Fin}^{\mathsf{Fin}}$ spanned by sums $P$ of representable functors. The _category of Dirichlet functors_, denoted $\mathsf{Dir}$, is the (skeleton of the) full subcategory of $\mathsf{Fin}^{(\mathsf{Fin}^{\textnormal{op}})}$ spanned by the sums $D$ of representable presheaves.

While we will not pursue it here, one can take $\mathsf{Poly}_{\mathsf{Set}}$ to be the full subcategory of functors $\mathsf{Set}\to\mathsf{Set}$ spanned by small coproducts of representables, and similarly for $\mathsf{Dir}_{\mathsf{Set}}$.

###### Lemma 2.

The sets of polynomial maps $P\to Q$ and of Dirichlet maps $D\to E$ are given by the following formulas:

$\mathsf{Poly}(P,Q)\coloneqq\prod_{i\in P(1)}Q(p_{i})\qquad\text{and}\qquad\mathsf{Dir}(D,E)\coloneqq\prod_{i\in D(0)}E(d_{i}).$

###### Example 3.

Let $P=2\mathcal{y}^{2}$, $Q=\mathcal{y}+1$, and let $D=2\cdot 2^{\mathcal{y}}$ and $E=1+0^{\mathcal{y}}$. Then there are nine ($9$) polynomial morphisms $P\to Q$, zero ($0$) polynomial morphisms $Q\to P$, one ($1$) Dirichlet morphism $D\to E$, and eight ($8$) Dirichlet morphisms $E\to D$.

###### Remark 4.

Sums and products of polynomials in the usual algebraic sense agree exactly with sums and products in the categorical sense: if $P$ and $Q$ are polynomials, i.e. objects in $\mathsf{Poly}$, then their coproduct is the usual algebraic sum $P+Q$ of polynomials, and similarly their product is the usual algebraic product $PQ$ of polynomials. The same is true for $\mathsf{Dir}$: sums and products of Dirichlet polynomials in the usual algebraic sense agree exactly with sums and products in the categorical sense.

#### Formal structures.

We review some formal structures of the categories $\mathsf{Poly}$ and $\mathsf{Dir}$; all are straightforward to prove. There is an adjoint quadruple and an adjoint 5-tuple as follows, with each functor labeled by where it sends objects $n\in\mathsf{Fin}$, $P\in\mathsf{Poly}$, $D\in\mathsf{Dir}$:

$n\mathcal{y}\;\dashv\;P(1)\;\dashv\;n\;\dashv\;P(0)\qquad\text{(between $\mathsf{Fin}$ and $\mathsf{Poly}$)},$

$n{\cdot}0^{\mathcal{y}}\;\dashv\;D(0)\;\dashv\;n\;\dashv\;D(1)\;\dashv\;n^{\mathcal{y}}\qquad\text{(between $\mathsf{Fin}$ and $\mathsf{Dir}$)}.$ (1)

All five of the displayed functors out of $\mathsf{Fin}$ are fully faithful. For each $k:\mathsf{Fin}$ the functors $P\mapsto P(k)$ and $D\mapsto D(k)$ have left adjoints, namely $n\mapsto n\mathcal{y}^{k}$ and $n\mapsto n{\cdot}k^{\mathcal{y}}$ respectively. These are functorial in $k$ and in fact extend to two-variable adjunctions $\mathsf{Fin}\times\mathsf{Poly}\to\mathsf{Poly}$ and $\mathsf{Fin}\times\mathsf{Dir}\to\mathsf{Dir}$.
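Lemma 2 reduces hom-sets in both categories to finite products, so the counts in Example 3 can be verified mechanically. A small Python sketch, reusing the exponent-list encoding of the previous snippet (again our own convention, not the paper's):

```python
def eval_poly(exps, x):
    return sum(x ** p for p in exps)

def eval_dirichlet(bases, x):
    return sum(d ** x for d in bases)

def hom_poly(P, Q):
    """|Poly(P, Q)| = product over i in P(1) of Q(p_i)  (Lemma 2)."""
    n = 1
    for p in P:
        n *= eval_poly(Q, p)
    return n

def hom_dir(D, E):
    """|Dir(D, E)| = product over i in D(0) of E(d_i)  (Lemma 2)."""
    n = 1
    for d in D:
        n *= eval_dirichlet(E, d)
    return n

P, Q = [2, 2], [1, 0]   # P = 2y^2, Q = y + 1
D, E = [2, 2], [1, 0]   # D = 2*2^y, E = 1 + 0^y
assert hom_poly(P, Q) == 9 and hom_poly(Q, P) == 0   # Example 3
assert hom_dir(D, E) == 1 and hom_dir(E, D) == 8     # Example 3
```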
Indeed, for $n\in\mathsf{Fin}$ and $P,Q\in\mathsf{Poly}$ (respectively $D,E\in\mathsf{Dir}$), we have

$\displaystyle\mathsf{Poly}(nP,Q)\cong\mathsf{Poly}(P,Q^{n})\cong\mathsf{Fin}(n,\mathsf{Poly}(P,Q)),$ $\displaystyle\mathsf{Dir}(nD,E)\cong\mathsf{Dir}(D,E^{n})\cong\mathsf{Fin}(n,\mathsf{Dir}(D,E)),$

where $nP$ and $nD$ denote $n$-fold coproducts and $P^{n}$ and $D^{n}$ denote $n$-fold products.

Consider the unique function $0\to 1$. The natural transformation induced by it, denoted $\pi_{D}\colon D(1)\to D(0)$, is equivalent to two natural transformations on $\mathsf{Dir}$ via the adjunctions in Eq. 1:

$n{\cdot}0^{\mathcal{y}}\to n,\qquad D(1)\xrightarrow{\pi_{D}}D(0),\qquad n\to n^{\mathcal{y}}.$ (2)

The one labeled $\pi_{D}$ is also $D(0!)$, where $0!\colon 0\to 1$ is the unique function of that type.

The composite of two polynomial functors $\mathsf{Fin}\to\mathsf{Fin}$ is again polynomial, $(P\circ Q)(n)\coloneqq P(Q(n))$; this gives a nonsymmetric monoidal structure on $\mathsf{Poly}$. The monoidal unit is $\mathcal{y}$. Day convolution for the cartesian product monoidal structure provides a symmetric monoidal structure $\otimes\colon\mathsf{Poly}\times\mathsf{Poly}\to\mathsf{Poly}$, for which the monoidal unit is again $\mathcal{y}$. This monoidal structure, like the cartesian monoidal structure, distributes over $+$. We can write an explicit formula for $P\otimes Q$, with $P,Q$ as in Eq. 1:

$P\otimes Q=\sum_{i=1}^{P(1)}\sum_{j=1}^{Q(1)}\mathcal{y}^{p_{i}q_{j}}$ (3)

We call this the _Dirichlet product_ of polynomials, for reasons we will see in Remark 1. The Dirichlet monoidal structure is closed; that is, for any $A,Q:\mathsf{Poly}$ we define

$[A,Q]\coloneqq\prod_{i:A(1)}Q\circ(a_{i}\mathcal{y}),$ (4)

for example $[n\mathcal{y},\mathcal{y}]\cong\mathcal{y}^{n}$ and $[\mathcal{y}^{n},\mathcal{y}]\cong n\mathcal{y}$. For any polynomial $A$ there is a $(-\otimes A)\dashv[A,-]$ adjunction

$\mathsf{Poly}(P\otimes A,Q)\cong\mathsf{Poly}(P,[A,Q]).$ (5)

In particular we recover Lemma 2 using Eqs. 4 and 1. The cartesian monoidal structure on $\mathsf{Poly}$ is also closed, $\mathsf{Poly}(P\times A,Q)\cong\mathsf{Poly}(P,Q^{A})$, and the formula for $Q^{A}$ is similar to Eq. 4:

$Q^{A}\coloneqq\prod_{i:A(1)}Q\circ(a_{i}+\mathcal{y}).$

If we define the _global sections_ functor $\Gamma\colon\mathsf{Poly}\to\mathsf{Fin}^{\textnormal{op}}$ by $\Gamma P\coloneqq\mathsf{Poly}(P,\mathcal{y})$, or explicitly $\Gamma(P)=[P,\mathcal{y}](1)=\prod_{i}p_{i}$, we find that it is left adjoint to the Yoneda embedding $n\mapsto\mathcal{y}^{n}\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Poly}$; that is, there is an adjunction $\Gamma\dashv\mathcal{y}^{(-)}$.
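The adjunction in Eq. 5 predicts, in particular, equal hom-set cardinalities on either side, and Eqs. 3 and 4 make both sides computable. A Python sketch (our own encoding again) that spot-checks Eq. 5 together with the examples $[n\mathcal{y},\mathcal{y}]\cong\mathcal{y}^{n}$ and $[\mathcal{y}^{n},\mathcal{y}]\cong n\mathcal{y}$:

```python
from itertools import product
from functools import reduce

def eval_poly(exps, x):
    return sum(x ** p for p in exps)

def hom_poly(P, Q):                     # Lemma 2
    out = 1
    for p in P:
        out *= eval_poly(Q, p)
    return out

def tensor(P, Q):                       # Eq. (3): sum_{i,j} y^{p_i * q_j}
    return [p * q for p, q in product(P, Q)]

def pmul(P, Q):                         # categorical product: y^a * y^b = y^{a+b}
    return [p + q for p, q in product(P, Q)]

def compose_linear(Q, a):               # Q o (a*y): (a*y)^q = a^q copies of y^q
    return [q for q in Q for _ in range(a ** q)]

def internal_hom(A, Q):                 # Eq. (4): [A, Q] = prod_i Q o (a_i * y)
    return reduce(pmul, (compose_linear(Q, a) for a in A), [0])

assert sorted(internal_hom([1, 1, 1], [1])) == [3]        # [3y, y] = y^3
assert sorted(internal_hom([3], [1])) == [1, 1, 1]        # [y^3, y] = 3y

# Spot-check Eq. (5): |Poly(P (x) A, Q)| = |Poly(P, [A, Q])|.
P, A, Q = [2, 0], [1, 1], [1, 0]        # P = y^2 + 1, A = 2y, Q = y + 1
assert hom_poly(tensor(P, A), Q) == hom_poly(P, internal_hom(A, Q))
```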
Each of the categories $\mathsf{Poly}$ and $\mathsf{Dir}$ has pullbacks, which we denote using “fiber product notation” $A\times_{C}B$. We can use pullbacks in combination with the monad units $\eta_{P}\colon P\to P(1)$ and $\eta_{D}\colon D\to D(0)$ arising from the adjunctions in (1) to recover the sum decompositions of Eq. 1:

$P=\sum_{i=1}^{P(1)}P\times_{P(1)}`i\text{'}\qquad\text{and}\qquad D=\sum_{i=1}^{D(0)}D\times_{D(0)}`i\text{'}.$

###### Remark 5.

By a result of Rosebrugh and Wood [RW94], the category of finite sets is characterized amongst locally finite categories by the existence of the five left adjoints to its Yoneda embedding $k\mapsto k^{\mathcal{y}}\colon\mathsf{Fin}\to\mathsf{Fin}^{\mathsf{Fin}^{\textnormal{op}}}$. The adjoint 5-tuple displayed in (1) is just the observation that five of these six functors restrict to the subcategory $\mathsf{Dir}$.

## Chapter 3 $\mathsf{Poly}$ and $\mathsf{Dir}$ in terms of bundles

There is a bijection between the respective object-sets of these two categories,

$\operatorname{Ob}(\mathsf{Poly})\xrightarrow{\cong}\operatorname{Ob}(\mathsf{Dir}),\qquad\sum_{i=1}^{n}\mathcal{y}^{k_{i}}\mapsto\sum_{i=1}^{n}(k_{i})^{\mathcal{y}}.$ (1)

We call this mapping the _Dirichlet transform_ and denote it using an overline $P\mapsto\overline{P}$. We will see in Theorem 6 that this bijection extends to an equivalence $\mathsf{Poly}_{\textnormal{cart}}\cong\mathsf{Dir}_{\textnormal{cart}}$ between the subcategories of cartesian maps.

###### Remark 1.

With the Dirichlet transform in hand, we see why $P\otimes Q$ can be called the Dirichlet product, e.g. in Eq. 3. Namely, the Dirichlet transform is strong monoidal with respect to $\otimes$ and the cartesian monoidal structure $\times$ in $\mathsf{Dir}$:

$\overline{P\otimes Q}=\overline{P}\times\overline{Q}.$

###### Proposition 2.

There is a one-to-one correspondence between the set of polynomials in one variable, the set of Dirichlet polynomials, and the set of (isomorphism classes of) functions $\pi\colon s\to t$ between finite sets.

###### Proof.

We already established a bijection $P\mapsto\overline{P}$ between polynomials and finite Dirichlet series in Eq. 1. Given a finite Dirichlet series $D$, we have a function $\pi_{D}\colon D(1)\to D(0)$ as in Eq. 2. And given a function $\pi\colon s\to t$, define $D_{\pi}\coloneqq\sum_{i=1}^{t}(d_{i})^{\mathcal{y}}$, where $d_{i}\coloneqq\pi^{-1}(i)$ for each $1\leq i\leq t$. (N.B. Rather than constructing $D_{\pi}$ from $\pi$ by hand, one could instead use a certain orthogonal factorization system on $\mathsf{Dir}$.) It is easy to see that the round-trip on Dirichlet series is the identity, and that the round-trip for functions is a natural isomorphism. ∎

We will upgrade Proposition 2 to an equivalence $\mathsf{Poly}_{\textnormal{cart}}\simeq\mathsf{Dir}_{\textnormal{cart}}$ between certain subcategories of $\mathsf{Poly}$ and $\mathsf{Dir}$ in Theorem 6.

###### Example 3.

Under the identification from Proposition 2, both the polynomial $2\mathcal{y}^{3}+\mathcal{y}^{2}+3$ and the Dirichlet series $2{\cdot}3^{\mathcal{y}}+1{\cdot}2^{\mathcal{y}}+3{\cdot}0^{\mathcal{y}}$ correspond to the function

[Diagram (2): a function $\pi\colon 8\to 6$, where $8\cong D(1)$ and $6\cong D(0)$; its fibers over the six elements of the codomain have sizes $3$, $3$, $2$, $0$, $0$, $0$.] (2)

We can think of a function $\pi\colon s\to t$, e.g. that shown in (2), as a _bundle_ of fibers $\pi^{-1}(`i\text{'})$, one for each element $`i\text{'}\in t$. In Definition 4 we define two different notions of morphism between bundles.
We will see in Theorem 6 that they correspond to morphisms in the categories $\mathsf{Poly}$ and $\mathsf{Dir}$.

For any function $\pi^{\prime}\colon s^{\prime}\to t^{\prime}$ and function $f\colon t\to t^{\prime}$, denote by $f^{*}(\pi^{\prime})\colon t\times_{t^{\prime}}s^{\prime}\to t$ the pullback of $\pi^{\prime}$ along $f$.

###### Definition 4.

Let $\pi\colon s\to t$ and $\pi^{\prime}\colon s^{\prime}\to t^{\prime}$ be functions between finite sets.

* • a _bundle morphism_ consists of a pair $(f,f_{\sharp})$ where $f\colon t\to t^{\prime}$ is a function and $f_{\sharp}\colon\pi\to f^{*}(\pi^{\prime})$ is a morphism in the slice category over $t$;
* • a _container morphism_ consists of a pair $(f,f^{\sharp})$ where $f\colon t\to t^{\prime}$ is a function and $f^{\sharp}\colon f^{*}(\pi^{\prime})\to\pi$ is a morphism in the slice category over $t$.

We say a bundle morphism $(f,f_{\sharp})$ (resp. a container morphism $(f,f^{\sharp})$) is _cartesian_ if $f_{\sharp}$ (resp. $f^{\sharp}$) is an isomorphism.

[Figure 1: The categories $\mathsf{Bun}$ and $\mathsf{Cont}$ have the same objects, namely functions $\pi\colon s\to t$. A morphism $(f,f_{\sharp})\colon\pi\to\pi^{\prime}$ in $\mathsf{Bun}$ is a map $f_{\sharp}\colon s\to t\times_{t^{\prime}}s^{\prime}$ over $t$, from $\pi$ to the pullback $f^{*}\pi^{\prime}$; a morphism $(f,f^{\sharp})\colon\pi\to\pi^{\prime}$ in $\mathsf{Cont}$ is a map $f^{\sharp}\colon t\times_{t^{\prime}}s^{\prime}\to s$ over $t$ in the opposite direction.]

Define $\mathsf{Bun}$ (resp. $\mathsf{Cont}$) to be the category for which an object is a function between finite sets and a morphism is a bundle morphism (resp. container morphism); see Fig. 1. Denote by $\mathsf{Bun}_{\textnormal{cart}}$ (resp. $\mathsf{Cont}_{\textnormal{cart}}$) the subcategory of cartesian bundle morphisms. One may note that $\mathsf{Bun}$ is the Grothendieck construction of the self-indexing $\mathsf{Fin}_{/(-)}\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Cat}$, while $\mathsf{Cont}$ is the Grothendieck construction of its point-wise opposite $(\mathsf{Fin}_{/(-)})^{\textnormal{op}}\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Cat}$. The name _container_ comes from the work of Abbott, Altenkirch, and Ghani [AAG03, AAG05, Abb03] (see Remark 2.18 in [GK12] for a discussion of the precise relationship between the notion of container and the notions of polynomial and polynomial functor).

###### Remark 5.

By the universal property of pullbacks, $\mathsf{Bun}\simeq\mathsf{Fin}^{\to}$ is equivalent (in fact isomorphic) to the category of morphisms and commuting squares in $\mathsf{Fin}$. Furthermore, $\mathsf{Bun}_{\textnormal{cart}}$ is equivalent to the category of morphisms and pullback squares in $\mathsf{Fin}$, and $\mathsf{Bun}_{\textnormal{cart}}\simeq\mathsf{Cont}_{\textnormal{cart}}$ (as in both cases a cartesian morphism $(f,f_{\sharp})$ or $(f,f^{\sharp})$ is determined by $f$ alone).

Next we show that $\mathsf{Bun}$ is also equivalent to $\mathsf{Dir}$, the category of Dirichlet functors from Definition 1. Recall that a natural transformation is called _cartesian_ if its naturality squares are pullbacks.

###### Theorem 6.
We have equivalences of categories

$\mathsf{Poly}\simeq\mathsf{Cont}\qquad\text{and}\qquad\mathsf{Dir}\simeq\mathsf{Bun}.$

In particular, this gives an equivalence $\mathsf{Poly}_{\textnormal{cart}}\simeq\mathsf{Dir}_{\textnormal{cart}}$ between the category of polynomial functors and cartesian natural transformations and the category of Dirichlet functors and cartesian natural transformations.

###### Proof.

The functors $P_{-}\colon\mathsf{Cont}\to\mathsf{Poly}$ and $D_{-}\colon\mathsf{Bun}\to\mathsf{Dir}$ are defined on each object, i.e. function $\pi\colon s\to t$, by the formulas $\pi\mapsto P_{\pi}$ and $\pi\mapsto D_{\pi}\coloneqq\overline{P_{\pi}}$ as in Proposition 2. For each $1\leq i\leq t$, denote the fiber of $\pi$ over $i$ by $k_{i}\coloneqq\pi^{-1}(i)$. For any finite set $X$, consider the unique map $X!\colon X\to 1$. Applying $P_{-}$ and $D_{-}$ to it, we obtain the corresponding representables: $P_{X!}\cong\mathcal{y}^{X}$ and $D_{X!}\cong X^{\mathcal{y}}$. We next check that there are natural isomorphisms

$\displaystyle\mathsf{Poly}(P_{X!},P_{\pi})\cong P_{\pi}(X)=\sum_{i=1}^{t}X^{k_{i}}\cong\mathsf{Cont}(X!,\pi),$ $\displaystyle\mathsf{Dir}(D_{X!},D_{\pi})\cong D_{\pi}(X)=\sum_{i=1}^{t}(k_{i})^{X}\cong\mathsf{Bun}(X!,\pi).$ (3)

In both lines, the first isomorphism is the Yoneda lemma and the second is a computation using Definition 4 (see Fig. 1). Thus we define $P_{-}$ on morphisms by sending $f\colon\pi\to\pi^{\prime}$ in $\mathsf{Cont}$ to the “compose-with-$f$” natural transformation, i.e. the one having $X$-component $\mathsf{Cont}(X!,f)\colon\mathsf{Cont}(X!,\pi)\to\mathsf{Cont}(X!,\pi^{\prime})$, which is clearly natural in $X$. We define $D_{-}$ on morphisms similarly: for $f$ in $\mathsf{Bun}$, use the natural transformation $\mathsf{Bun}(-!,f)$.

By definition, every object in $\mathsf{Poly}$ and $\mathsf{Dir}$ is a coproduct of representables, so to prove that we have the desired equivalences, one first checks that coproducts in $\mathsf{Cont}$ and $\mathsf{Bun}$ are taken pointwise:

$(\pi\colon s\to t)+(\pi^{\prime}\colon s^{\prime}\to t^{\prime})\cong(\pi+\pi^{\prime})\colon(s+s^{\prime})\to(t+t^{\prime}),$

and then that $P_{\pi+\pi^{\prime}}=P_{\pi}+P_{\pi^{\prime}}$ and $D_{\pi+\pi^{\prime}}=D_{\pi}+D_{\pi^{\prime}}$; see Remark 4.

By Remark 5, we know that $\mathsf{Bun}_{\textnormal{cart}}\simeq\mathsf{Cont}_{\textnormal{cart}}$, and we have just established the equivalences $\mathsf{Poly}\simeq\mathsf{Cont}$ and $\mathsf{Dir}\simeq\mathsf{Bun}$. It thus remains to check that the latter equivalences identify cartesian natural transformations in $\mathsf{Poly}$ with cartesian morphisms in $\mathsf{Cont}$, and similarly for $\mathsf{Dir}$ and $\mathsf{Bun}$. For polynomial functors, we may refer to [GK12, Section 2]. Turning to Dirichlet functors, we want to show that for any $f\colon D\to D^{\prime}$ the square

$\begin{array}{ccc}D(1)&\xrightarrow{\;f_{1}\;}&D^{\prime}(1)\\ {\scriptstyle\pi}\big\downarrow&&\big\downarrow{\scriptstyle\pi^{\prime}}\\ D(0)&\xrightarrow{\;f_{0}\;}&D^{\prime}(0)\end{array}$ (4)

is a pullback in $\mathsf{Set}$ iff for all functions $g\colon X\to X^{\prime}$, the naturality square

$\begin{array}{ccc}D(X^{\prime})&\xrightarrow{\;f_{X^{\prime}}\;}&D^{\prime}(X^{\prime})\\ {\scriptstyle D(g)}\big\downarrow&&\big\downarrow{\scriptstyle D^{\prime}(g)}\\ D(X)&\xrightarrow{\;f_{X}\;}&D^{\prime}(X)\end{array}$ (5)

is a pullback in $\mathsf{Set}$; we will freely use the natural isomorphism $D_{\pi}(X)\cong\mathsf{Bun}(X!,\pi)$ from Eq. 3.
The square in Eq. 4 is a special case of that in Eq. 5, namely for $g\coloneqq 0!$ the unique function $0\to 1$; this establishes the only-if direction. To complete the proof, suppose that Eq. 4 is a pullback, take an arbitrary $g\colon X\to X^{\prime}$, and suppose given a commutative solid-arrow diagram as shown:

[Diagram: a commutative cube whose front face is the square of Eq. 4, whose back face is $g\colon X\to X^{\prime}$ over the identity $1\to 1$, and whose dotted diagonal arrows $X\to D(1)$ and $1\to D(0)$ are to be constructed.]

We can interpret the statement that Eq. 5 is a pullback as saying that there are unique dotted arrows making the diagram commute, since $DX\cong\mathsf{Bun}(X!,D0!)$ and similarly for the other corners of the square in Eq. 5. So, we need to show that if the front face is a pullback, then there are unique diagonal dotted arrows as shown, making the diagram commute. This follows quickly from the universal property of the pullback. ∎

###### Corollary 7.

$\mathsf{Dir}$ is an elementary topos.

###### Proof.

For any finite category $C$, the functor category $\mathsf{Fin}^{C}$ is an elementary topos. The result now follows from Remark 5 and Theorem 6, noting that $\mathsf{Dir}\simeq\mathsf{Fin}^{\to}$. ∎

As we mentioned in the introduction, this all goes through smoothly when one drops all finiteness conditions. The general topos of Dirichlet functors is the category of (arbitrary) sums of representables $\mathsf{Set}^{\textnormal{op}}\to\mathsf{Set}$, and this is equivalent to the arrow category $\mathsf{Set}^{\to}$ and so is itself a topos.

We conclude with the equivalence promised in the introduction (Theorem 2).

###### Theorem 8.

A functor $D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ is a Dirichlet polynomial if and only if it preserves connected limits, or equivalently wide pullbacks.

###### Proof.

Let $D(\mathcal{y})=\sum_{i:D(0)}(d_{i})^{\mathcal{y}}$, and suppose that $J$ is any connected category. Then for any diagram $X\colon J\to\mathsf{Fin}$, we have

$\displaystyle D(\operatorname*{colim}X_{j})$ $\displaystyle=\sum_{i:D(0)}(d_{i})^{\operatorname*{colim}X_{j}}$ $\displaystyle\cong\sum_{i:D(0)}\lim(d_{i})^{X_{j}}$ $\displaystyle\cong\lim\sum_{i:D(0)}(d_{i})^{X_{j}}$ $\displaystyle=\lim D(X_{j})$

since connected limits commute with sums in any topos (in particular $\mathsf{Set}$).

Now suppose $D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ is any functor that preserves connected limits; in particular, it sends wide pushouts to wide pullbacks. Every finite set $X$ can be expressed as the wide pushout of its elements: one copy of $1$ for each element of $X$, all under the initial object $0$. Applying $D$, we therefore obtain a limit diagram exhibiting $D(X)$ as the wide pullback of one copy of $D(1)$ for each element of $X$, all over $D(0)$. That is, an element of $D(X)$ is a family of elements $a_{x}\in D(1)$, one for each $x\in X$, such that the $D(0!)(a_{x})$ are all equal in $D(0)$. But this is just a bundle map, i.e. $D(X)\cong\mathsf{Bun}(X!,D(0!))$ where $X!\colon X\to 1$ and $D(0!)\colon D(1)\to D(0)$. Thus by Theorem 6, the functor $D$ is the Dirichlet polynomial associated to the bundle $D(0!)$. ∎

### Acknowledgments

The authors thank Joachim Kock, André Joyal, and Brendan Fong for helpful comments that improved the quality of this note. Spivak also appreciates support by Honeywell Inc. as well as AFOSR grants FA9550-17-1-0058 and FA9550-19-1-0113. Jaz Myers appreciates support by his advisor Emily Riehl and the National Science Foundation grant DMS-1652600.
## References

* [AAG03] Michael Gordon Abbott, Thorsten Altenkirch and Neil Ghani, “Categories of Containers”, in _FoSSaCS_, 2003.
* [AAG05] Michael Abbott, Thorsten Altenkirch and Neil Ghani, “Containers: Constructing strictly positive types”, _Theoretical Computer Science_ 342.1 (Applied Semantics: Selected Topics), 2005, pp. 3–27.
* [Abb03] Michael Gordon Abbott, “Categories of Containers”, PhD thesis, 2003.
* [BD] John Baez and James Dolan, “This Week's Finds 300”, accessed 2020-02-16. URL: http://math.ucr.edu/home/baez/week300.html.
* [GK12] Nicola Gambino and Joachim Kock, “Polynomial functors and polynomial monads”, _Mathematical Proceedings of the Cambridge Philosophical Society_ 154.1, Cambridge University Press (CUP), 2012, pp. 153–192.
* [Joy81] André Joyal, “Une théorie combinatoire des séries formelles”, _Advances in Mathematics_ 42.1, 1981, pp. 1–82.
* [RW94] Robert Rosebrugh and R. J. Wood, “An Adjoint Characterization of the Category of Sets”, _Proc. Amer. Math. Soc._ 122, 1994, pp. 409–413.
2024-09-04T02:54:58.798419
2020-03-10T17:09:00
2003.04857
{ "authors": "Yuqian Zhou, David Ren, Neil Emerton, Sehoon Lim, Timothy Large", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26140", "submitter": "Yuqian Zhou", "url": "https://arxiv.org/abs/2003.04857" }
arxiv-papers
# Image Restoration for Under-Display Camera

Yuqian Zhou1, David Ren2, Neil Emerton3, Sehoon Lim3, Timothy Large3

1IFP, UIUC, 2CIL, UC Berkeley, 3Microsoft

###### Abstract

The new trend of full-screen devices encourages us to position a camera behind a screen. Removing the bezel and centralizing the camera under the screen brings a larger display-to-body ratio and enhances eye contact in video chat, but also causes image degradation. In this paper, we focus on the newly-defined Under-Display Camera (UDC), as a novel real-world single-image restoration problem. First, we take a 4K Transparent OLED (T-OLED) and a phone Pentile OLED (P-OLED) and analyze their optical systems to understand the degradation. Second, we design a Monitor-Camera Imaging System (MCIS) for easier real pair data acquisition, and a model-based data synthesizing pipeline that generates the Point Spread Function (PSF) and UDC data from only the display pattern and camera measurements. Finally, we resolve the complicated degradation using a deconvolution-based pipeline and learning-based methods. Our model demonstrates real-time, high-quality restoration. The presented methods and results reveal the promising research values and directions of UDC.

## 1 Introduction

Under-Display Camera (UDC) is a new imaging system that mounts a display screen on top of a traditional digital camera lens, as shown in Figure 1. Such a system has two main advantages. First, it brings a new product trend of full-screen devices [11] with a larger screen-to-body ratio, which provides a better perceptive and intelligent user experience [12]. Without the bezel and extra buttons, users can easily access more functions by directly touching the screen. Second, it enables better human-computer interaction. By putting the camera in the center of the display, it enhances teleconferencing experiences with perfect gaze tracking, and it is increasingly relevant for larger display devices such as laptops and TVs.

Unlike pressure or fingerprint sensors that can be easily integrated into a display, it is relatively difficult to retain the full functionality of an imaging sensor after mounting it behind a display. The imaging quality of the camera will be severely degraded due to the lower light transmission rate and diffraction effects. As a result, captured images will be noisy and blurry. Therefore, while bringing better user experience and interaction, UDC may sacrifice the quality of photography, face processing [35] and other downstream vision tasks. Restoring and enhancing the images captured by a UDC system is therefore desired.

Figure 1: The newly proposed imaging system named Under-Display Camera (UDC). We mount a display screen on top of a traditional digital camera lens. The design brings a new trend of full-screen devices.

Traditional image restoration approaches formulate the task as an inverse problem or an optimization problem such as Maximum-a-Posteriori (MAP). For the UDC problem, for practical purposes, the proposed image restoration algorithm and system are expected to work in real time. Therefore, deconvolution-based methods like the Wiener filter [14] are preferred. Deconvolution is the inverse process of convolution and recovers the original signal from the point-spread-function (PSF)-convolved image. The fidelity of the deconvolution process depends on the space-invariance of the PSF over the image field of view (FOV) and on a low condition number for the inverse of the PSF [19].
For strongly non-delta-function-like PSFs, such as those encountered when imaging through a display, the condition number can be large. For such PSFs an additional denoising step may be essential. Another option is the emerging discriminative learning-based image restoration model. Data-driven discriminative learning-based image restoration models usually outperform traditional methods in specific tasks like image de-noising [42, 48, 47, 26, 43, 3], de-blurring [21, 28], de-raining [40, 39], de-hazing [13, 33], super-resolution [22, 37], and light enhancement [9]. However, since they are trained on synthetic data with a single degradation type, existing models can hardly be utilized to enhance real-world low-quality images with complicated or combined degradation types. To address complicated real degradation like the UDC problem, directly collecting real paired data, or synthesizing near-realistic data after fully understanding the degradation model, is necessary.

In this paper, we present the first study to define and analyze the novel Under-Display Camera (UDC) system from both optics and image restoration viewpoints. On the optics side, we parse the optical system of the UDC pipeline and analyze the characteristics of light transmission. We then relate the obtained intuitions and measurements to an image restoration pipeline, and propose two ways of resolving the single-image restoration: a deconvolution-based Wiener filter [29] pipeline (DeP) and a data-driven learning-based approach. Specifically, we regard UDC restoration as a combination of tasks such as low-light enhancement, de-blurring, and de-noising. Without loss of generality, our analysis focuses on two types of displays, a 4K Transparent Organic Light-Emitting Diode (T-OLED) and a phone Pentile OLED (P-OLED), and a single camera type, a 2K FLIR RGB Point Grey research camera. To obtain the real imaging data and measure the optical factors of the system, we also propose a data acquisition system using the above optical elements.

In summary, the main contributions of our paper are: (1) A brand new imaging system named Under-Display Camera (UDC) is defined, measured and analyzed. Extensive experiments reveal the image degradation process of the system, inspiring better approaches for restoring the captured images. (2) As baselines, two practical and potential solutions are proposed, including a conventional Wiener filter and a recent learning-based method. (3) Adopting the newly-assembled image acquisition system, we collect the first Under-Display Camera (UDC) dataset, which will be released and evaluated by the public.

## 2 Related Work

#### Real-world Image Reconstruction and Restoration

Image restoration for UDC [46, 24, 23, 49] can be categorized as a real-world restoration problem [3, 45], an emerging topic in low-level vision. In the past decades, low-level vision has mostly worked on synthetic data (e.g., denoising with AWGN and super-resolution with bicubic downsampling), but the resulting models are not effective for images with real degradation such as real noise or real blur kernels. Making models perform better on real-world inputs usually requires a new problem analysis and a more challenging data collection. Recently, researchers have also worked on challenging cases like lensless imaging problems [30, 27, 20], or on integrating optics theory with High Dynamic Range imaging [34]. Previously, there have been two common ways to prepare adaptive training data for real-world problems: real data collection and near-realistic data synthesis.
Recently, more real noise datasets such as DND [31], SIDD [2, 28], and RENOIR [5] have been introduced to address practical denoising problems. Abdelhamed et al. [3] proposed to estimate ground truth from captured smartphone noise images, and utilized the paired data to train and evaluate real denoising algorithms. In addition to noise, Chen et al. first introduced the SID dataset [9] to resolve extreme low-light imaging. In the area of Single Image Super-Resolution (SISR), researchers considered collecting optical zoom data [45, 10] to learn better computational zoom. Other restoration problems including reflection removal [36, 32] also follow the trend of real data acquisition. Collecting real data suffers from limited scene variety, since most previous works acquire images of postcards, static objects or color boards. In this paper, we propose a novel monitor-camera imaging system to add real degradation to existing natural image datasets like DIV2K [4].

A realistic dataset can be synthesized if the degradation model is fully understood and resolved. One good practice of data synthesis is generating realistic noise on raw sensors or RGB images. CBDNet [17] and Brooks et al. [8] synthesized realistic noise by unfolding the in-camera pipeline, and Abdelhamed et al. [1] better fitted the real noise distribution with flow-based generative models. Zhou et al. [48] adapted AWGN-RVIN noise to real RGB noise by analyzing the demosaicing process. Other physics-based synthesis has also been explored for blur [7] and haze [6]. For the UDC problem in this paper, we either collect real paired data, or synthesize near-realistic data from model simulation. In particular, we apply the theory of Fourier optics to simulate the diffraction effects, and further adjust the data with other camera measurements. Our data synthesizing pipeline demonstrates a promising performance for addressing real complicated degradation.

Figure 2: Image formation pipeline of the under-display camera (UDC) problem. (a) Image formation pipeline. (b) Optical characteristics of UDC. The structure of the 4K T-OLED has a grating-like pixel layout. P-OLED differs from T-OLED in sub-pixel design. From left to right: micrographs of display patterns, PSFs (red light only) and MTFs (red, green, and blue).

## 3 Formulation

In this section, we discuss the optical system and image formation process of the proposed UDC imaging system. We analyze the degradation types and light transmission rates, and visualize the Point Spread Function (PSF). Moreover, we formulate the image formation pipeline to compute the simulated PSF from measurements.

### 3.1 Optical System Analysis

Optical Elements. In our experiments, we focus on Organic Light-Emitting Diode (OLED) displays [38], as they have superior optical properties compared to traditional LCDs (Liquid Crystal Displays). For confidentiality reasons it is often difficult to obtain from commercial companies the sample materials used for demos; we therefore select displays with different transparencies to improve generalization. Note that all the displays are non-active in our experiments: in a real scenario, the display can be turned off locally by setting black pixels on local regions of the OLED display while the camera is in operation, to 1) reduce unnecessary difficulty from display contents while not affecting user experience and 2) provide users with the status of the device and thus ensure privacy.
Owing to the transparent materials used in OLED display panels, visible light is transmitted better through OLEDs than through LCDs. In the meantime, pixels are arranged such that the open area is maximized. In particular, we focus on a 4K Transparent OLED (T-OLED) and a phone Pentile OLED (P-OLED). Figure 2 is a micrograph illustration of the pixel layout in the two types of OLED displays. The structure of the 4K T-OLED has a grating-like pixel layout. P-OLED differs from T-OLED in sub-pixel design; it follows the basic structure of an RGBG matrix.

Table 1: Comparison of the two displays in terms of light transmission rate, physical pixel layout and open area.

Metrics | T-OLED | P-OLED
---|---|---
Pixel layout type | Stripe | Pentile
Open area | 21% | 23%
Transmission rate | 20% | 2.9%
Major degradation | Blur, noise | Low light, color shift, noise

Light Transmission Rate. We measure the transmission efficiency of the OLEDs by using a spectrophotometer and a white light source. Table 1 compares the light transmission rates of the two displays. For T-OLED, the open area occupies about 21%, and the light transmission rate is around 20%. For P-OLED, although the open area can be as large as 23%, the light transmission rate is only 2.9%. The loss of photons can be attributed mainly to the structure of the P-OLED. First, the P-OLED has a finer pixel pitch, so photons are scattered to higher angles compared to the T-OLED; as a result, high-angle photons are not collected by the lens. Second, P-OLED is a flexible/bendable display, which has a polyamide substrate on which the OLED is formed. Such a substrate has a relatively low transmission efficiency, causing photons to be absorbed. The absorption of light at certain wavelengths may make images captured through a polyamide-containing display panel by a UDC appear yellow. As a result, imaging through a P-OLED yields a lower signal-to-noise ratio (SNR) than using a T-OLED, and suffers from a color shift issue. One real imaging example is shown in Figure 4.

Diffraction Pattern and Point Spread Function (PSF). Light diffracts as it propagates through obstacles with sizes similar to its wavelength. Unfortunately, the size of the openings in the pixel layout is on the order of the wavelength of visible light, so the images formed are degraded by diffraction. Here we characterize our system by measuring the point spread function (PSF). We do so by pointing a collimated red laser beam ($\lambda=$ 650nm) at the display panel and recording the image formed on the sensor, as demonstrated in Figures 1 and 2. An ideal PSF resembles a delta function, which then forms a perfect image of the scene. In UDC, however, light spreads out substantially. For T-OLED, light spreads mostly along the horizontal direction due to the nearly one-dimensional structure of its pixel layout, while for P-OLED, light is more evenly distributed, as the pixel layout is more complex. Therefore, images captured by UDC are either blurry (T-OLED) or hazy (P-OLED).

Modulation Transfer Function (MTF). The Modulation Transfer Function (MTF) is another important metric for an imaging system, as it accounts for the effects of finite lens aperture, lens performance, finite pixel size, noise, non-linearities, quantization (spatial and bit depth), and diffraction in our systems. We characterize the MTF of our systems by recording sinusoidal patterns with increasing frequency in both lateral dimensions, and we report them in Figure 2.
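Since the MTF is the magnitude of the optical transfer function, which is in turn the Fourier transform of the PSF, an MTF estimate can also be computed directly from a measured PSF. A minimal NumPy sketch (our own, not the paper's measurement code; `psf` stands in for a dark-subtracted crop of the recorded laser-spot image):

```python
import numpy as np

def mtf_from_psf(psf):
    """Estimate the 2D MTF as the normalized magnitude of the PSF's Fourier transform."""
    psf = psf / psf.sum()                      # normalize PSF energy
    otf = np.fft.fftshift(np.fft.fft2(psf))    # optical transfer function
    mtf = np.abs(otf)
    return mtf / mtf.max()                     # MTF(0) = 1

# Slices through zero frequency, e.g. to inspect the horizontal mid-band
# contrast loss reported for T-OLED versus the vertical direction:
psf = np.random.rand(64, 64)                   # stand-in for a measured PSF crop
mtf = mtf_from_psf(psf)
cy, cx = np.array(mtf.shape) // 2
horizontal, vertical = mtf[cy, :], mtf[:, cx]
```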
For T-OLED, contrast along the horizontal direction is mostly lost in the mid-band frequencies due to diffraction. This phenomenon is due to the nearly one-dimensional pixel layout of the T-OLED. Figure 4 shows severe smearing in the horizontal direction when a T-OLED is placed in front of the camera. For P-OLED, the MTF is almost identical to that of a display-free camera, except with severe contrast loss. Fortunately, however, nulls are not observed at any particular frequency.

### 3.2 Image Formation Pipeline

In this section, we derive the image formation process of UDC based on the analysis in the previous sections. Given a calibrated pixel layout and measurements using a specific camera, degraded images can be simulated from a scene. From the forward model, we can compute the ideal PSF and consequently synthesize datasets from ground truth images.

Given an object in the scene $\mathbf{x}$, the degraded observation $\mathbf{y}$ can be modeled by a convolution process,

$\mathbf{y}=(\gamma\mathbf{x})\otimes\mathbf{k}+\mathbf{n},$ (1)

where $\gamma$ is the intensity scaling factor under the current gain setting and display type, $\mathbf{k}$ is the PSF, and $\mathbf{n}$ is the zero-mean signal-dependent noise. Notice that this is a simple noise model that approximately resembles the combination of shot noise and readout noise of the camera sensor; it will be discussed in a later section.

Intensity Scaling Factor ($\gamma$). The intensity scaling factor measures the ratio by which the average pixel values change after covering the camera with a display. It relates simultaneously to the physical light transmission rate of the display and to the digital gain setting $\delta$ of the camera. $\gamma$ can be computed as the ratio of the $\delta$-gain-amplified average intensity values $I_{d}(\delta,s)$ at position $s$ captured by UDC to the 0-gain average intensity values $I_{nd}(0,s)$ captured by the naked camera, within an enclosed region $S$:

$\gamma=\frac{\int_{S}I_{d}(\delta,s)ds}{\int_{S}I_{nd}(0,s)ds}$ (2)

Diffraction Model. We approximate the blur kernel $\mathbf{k}$, which is the Point Spread Function (PSF) of the UDC. As shown in Figure 1, in our model we assume the display panel is at the principal plane of the lens. We also assume the input light is a monochromatic plane wave with wavelength $\lambda$ (i.e., perfectly coherent), or equivalently light from a distant object with unit amplitude. Let the display pattern be represented by a transparency with complex amplitude transmittance $g(m,n)$ at Cartesian coordinates $(m,n)$, and let the camera aperture/pupil function $p(m,n)$ be 1 if $(m,n)$ lies inside the lens aperture region and 0 otherwise. Then the display pattern inside the aperture range, $g_{p}(m,n)$, becomes

$g_{p}(m,n)=g(m,n)p(m,n).$ (3)

At the focal plane of the lens (i.e., one focal length away from the principal plane), the image measured is the intensity distribution of the complex field, which is proportional to the Fourier transform of the electric field at the principal plane [16]:

$I(u,v)\propto\left|{\iint}^{\infty}_{-\infty}g_{p}(m,n)\exp\left[-j\frac{2\pi}{\lambda f}(mu+nv)\right]\text{d}m\text{d}n\right|^{2}.$ (4)

Suppose $G_{p}(v_{m},v_{n})=\mathscr{F}(g_{p}(m,n))$, where $\mathscr{F}(\cdot)$ is the Fourier transform operator; then

$I(u,v)\propto\left|G_{p}(v_{m},v_{n})\right|^{2}=\left|G_{p}(\frac{u}{\lambda f},\frac{v}{\lambda f})\right|^{2},$ (5)

which is a properly scaled Fourier transform of the display pattern, evaluated at the focal plane.
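As a numerical illustration of Eqs. (3)-(5), the PSF can be simulated by masking the display-pattern transmittance with the aperture and taking the squared magnitude of its Fourier transform. A sketch of this step (our own toy example, assuming `display` and `aperture` are binary transmittance arrays on a common grid; the wavelength-dependent re-scaling of Eq. (5) is deferred to the discrete recipe that follows):

```python
import numpy as np

def simulate_psf(display, aperture):
    """Monochromatic PSF ~ |F{g_p}|^2, with g_p the aperture-masked display pattern."""
    g_p = display * aperture                      # Eq. (3)
    field = np.fft.fftshift(np.fft.fft2(g_p))     # field at the focal plane
    intensity = np.abs(field) ** 2                # Eqs. (4)-(5), up to lambda*f scaling
    return intensity / intensity.sum()            # normalize as an intensity density

# Toy example: a vertical-stripe grating (T-OLED-like open slits) in a circular pupil.
n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x ** 2 + y ** 2 < (0.45 * n) ** 2).astype(float)
display = ((x % 8) < 4).astype(float)             # ~50% open, nearly 1D structure
psf = simulate_psf(display, aperture)             # energy spreads along the x axis
```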
Therefore, to compute the PSF $\mathbf{k}$ for image $\mathbf{x}$, we start by computing the squared-magnitude Discrete Fourier Transform (DFT) $M(a,b)=|\hat{G_{p}}(a,b)|^{2}$ of the $N\times N$ microscope transmission image $\hat{g_{p}}$ of the display pattern, and re-scaling it. The spatial down-sampling factor $r$ (denoted by $\downarrow r$) is then

$r=\frac{1}{\lambda f}\cdot{\delta_{N}N}\cdot{\rho},$ (6)

where $\delta_{N}$ is the pixel size of the $\hat{g_{p}}$ image, and $\rho$ is the pixel size of the sensor. Finally, $\mathbf{k}$ can be represented as

$k(i,j)=\frac{M_{\downarrow r}(i,j)}{\sum_{(\hat{i},\hat{j})}M_{\downarrow r}(\hat{i},\hat{j})}.$ (7)

$k$ is normalized because it should represent the density distribution of the intensity under the diffraction effect. Note that, for simplicity, only the PSF for a single wavelength is computed here. However, scenes in the real world are by no means monochromatic; in order to calculate an accurate color image from such UDC systems, PSFs for multiple wavelengths shall be computed. More details are given in Section 4.2.

Adding Noise. We follow the commonly used shot-read noise model [8, 18, 25] to represent the real noise on the imaging sensor. Given the dim, blurred signal $w=(\gamma\mathbf{x})\otimes\mathbf{k}$, the shot and readout noise can be modeled by a heteroscedastic Gaussian,

$\mathbf{n}\sim\mathcal{N}(\mu=0,\sigma^{2}=\lambda_{read}+\lambda_{shot}w),$ (8)

where the variance $\sigma^{2}$ is signal-dependent, and $\lambda_{read}$, $\lambda_{shot}$ are determined by the camera sensor and gain values.

## 4 Data Acquisition and Synthesis

We propose an image acquisition system called the Monitor-Camera Imaging System (MCIS). In particular, we display natural images with rich textures on a high-resolution monitor and capture them with a static camera. This method is more controllable, efficient and automatic for capturing a variety of scene contents than using mobile set-ups to capture limited static objects or real scenes.

### 4.1 Monitor-Camera Imaging System

Figure 3: Monitor-Camera Imaging System (MCIS). MCIS consists of a 4K LCD monitor, a 2K FLIR RGB Point-Grey research camera, and a panel that is either T-OLED, P-OLED or glass (i.e., no display). The camera is mounted on the center line of the 4K monitor, and adjusted to cover the full monitor range.

Figure 4: Real samples collected by the proposed MCIS: (a) display-free, (b) T-OLED, (c) P-OLED. Images captured through T-OLED are blurry and noisy, while those captured through P-OLED are low-light, color-shifted and hazy.

The system architecture is shown in Figure 3. MCIS consists of a 4K LCD monitor, a 2K FLIR RGB Point-Grey research camera, and a panel that is either T-OLED, P-OLED or glass (i.e., no display). The camera is mounted on the center line of the 4K monitor, and adjusted to cover the full monitor range. We calibrate the camera gain by measuring a $256\times 256$ white square shown on the monitor and matching the RGB histograms. For fair comparison and simplicity, we adjust the focus and fix the aperture to f/1.8. This guarantees a reasonable pixel intensity range, avoiding saturation, while collecting data with no gain. For a real-time video system, the frame rate has to be higher than 8 fps, so the longest shutter time is 125 ms; we use it for better image quality and a higher Signal-to-Noise Ratio (SNR).
Table 2: Camera settings for the different sets of collected data

Parameters | No-Display | T-OLED | P-OLED
---|---|---|---
Aperture | f/1.8 | f/1.8 | f/1.8
FPS/Shutter | 8/125 ms | 8/125 ms | 8/125 ms
Brightness | 0 | 0 | 0
Gamma | 1 | 1 | 1
Gain | 1 | 6 | 25 (Full)
White-balance | Yes | None | None

We select 300 images from the DIV2K dataset [4] and display them in turn on the 4K LCD in full-screen mode. We either rotate or resize the images to maintain the aspect ratio. For training purposes, we capture two sets of images: the degraded images $\\{y_{i}\\}$ and the degradation-free set $\\{x_{i}\\}$. To capture $\\{x_{i}\\}$, we first cover the camera with a thin glass panel of the same thickness as a display panel. This allows us to avoid the pixel misalignment issues caused by light refraction inside the panel. To eliminate image noise in $\\{x_{i}\\}$, we average 16 repeated captures of each frame. Then we replace the glass with a display panel (T-OLED or P-OLED), calibrate the specific gain value to avoid saturation, and capture $\\{y_{i}\\}$. For each set, we record both the 16-bit 1-channel linear RAW CMOS sensor data and the 8-bit 3-channel linear RGB data after the in-camera pipeline, which includes demosaicing. The collected pairs are naturally well aligned at the pixel level. They can be directly used for deep model training without further transformations.

Due to the yellow substrate inside the P-OLED, certain light colors, especially blue, are filtered out, which changes the white balance significantly. We therefore did not further alter the white balance. The light transmission ratio of the P-OLED is extremely low, so we set the gain to the maximum value (25) for higher signal values. All the detailed camera settings for the two display types are shown in Table 2. A real data sample is shown in Figure 4. As discussed and analyzed in Section 3.1, images captured through the T-OLED are blurry and noisy, while those captured through the P-OLED are low-light, color-shifted, and hazy.

Table 3: Measured parameters for data synthesis

Parameters | T-OLED | | | P-OLED | |
---|---|---|---|---|---
 | R | G | B | R | G | B
$\gamma$ | 0.97 | 0.97 | 0.97 | 0.34 | 0.34 | 0.20
$\lambda$ (nm) | 640 | 520 | 450 | 640 | 520 | 450
r | 2.41 | 2.98 | 3.44 | 2.41 | 2.98 | 3.44

### 4.2 Realistic Data Synthesis Pipeline

We follow the image formation pipeline to simulate the degradation on the collected $\\{x_{i}\\}$. A model-based data synthesis method benefits concept understanding and further generalization. Note that all the camera settings are the same as those used while collecting the real data. We first transform the 16-bit raw sensor data $\\{x_{i}\\}$ into the four bayer channels $x_{r}$, $x_{gr}$, $x_{gl}$, and $x_{b}$. Then, we multiply by the measured intensity scaling factor $\gamma$, compute the normalized and scaled PSF $k$, and add noise to synthesize the degraded data.

Measuring $\gamma$: To measure $\gamma$ for each channel using the MCIS, we select the region of interest $S$ to be a square region of size $256\times 256$, and display input intensity values from 0 to 255 with stride 10 on the monitor. We then record the average intensity both with and without the display for each discrete intensity value, and plot the relationship between the display-covered values and the display-free ones. Using linear regression, we obtain the slopes of the fitted lines for the different RGGB channels. For T-OLED, the measured $\gamma$ is 0.97, the same for all channels. For P-OLED, $\gamma=0.20$ for the blue channel and $\gamma=0.34$ for the other three channels.
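As an illustration, the regression above and the degradation model of Equations (1) and (8) can be sketched in a few lines of NumPy/SciPy. This is a minimal sketch under our own assumptions (a through-origin linear fit for $\gamma$; PSF and noise parameters taken from Table 3 and the measurements described in the paragraphs that follow), not the exact released implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def estimate_gamma(mean_no_display, mean_with_display):
    """Least-squares slope of a line through the origin:
    I_display ~= gamma * I_no_display (cf. Eq. 2)."""
    x = np.asarray(mean_no_display, dtype=float)
    y = np.asarray(mean_with_display, dtype=float)
    return float(x @ y / (x @ x))

def degrade_channel(x, gamma, psf, lambda_read, lambda_shot, rng=None):
    """Synthesize one bayer channel: y = (gamma * x) (*) k + n (Eqs. 1, 8)."""
    rng = np.random.default_rng() if rng is None else rng
    w = fftconvolve(gamma * x, psf, mode="same")          # dark, blurred signal
    sigma = np.sqrt(lambda_read + lambda_shot * np.clip(w, 0.0, None))
    return w + sigma * rng.standard_normal(w.shape)       # heteroscedastic noise
```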
Computing PSF: Following Equation 3, we acquire the transmission microscope images of the display pattern and crop them to the approximate circular aperture shape with diameter $3333\mu m$, the size of the camera aperture. In Equation 6, $\delta_{N}N$ is $3333\mu m$. $\rho$ equals $1.55\mu m$/pixel for the Sony sensor; however, after re-arranging the raw image into the four RGGB channels, $\rho$ becomes $3.1\mu m$/pixel for each channel. The focal length is $6000\mu m$. $\lambda=(640,520,450)$ nm for the R, G, and B channels, the approximate center peaks of the R, G, and B filters on the sensor. This yields the down-sampling ratios $r=(2.41,2.98,3.44)$ for the R, G, and B channels.

Adding Noises: We measure $\lambda_{read}$ and $\lambda_{shot}$ to estimate the noise statistics. We display random patterns within the $256\times 256$ window on the monitor, collect paired noisy and noise-free RAW data, and compute their differences. For each of the RGGB channels, we linearly regress the noise variance against the intensity value, taking the slope as the shot noise variance factor and the y-intercept as the readout noise variance. We then repeat the process 100 times to collect pairs of data points. Finally, we estimate the distribution and randomly sample $\lambda_{read}$ and $\lambda_{shot}$. All the measurements are listed in Table 3.

Figure 5: Network structure of the proposed UNet. It takes a 4-channel RAW sensor data observation $y$, and outputs the restored 3-channel RGB image $x$.

## 5 Image Restoration Baselines

We use the collected real paired data, the synthetic paired data, the simulated PSF, and all the necessary measurements to perform image restoration. We split the 300 image pairs in the UDC dataset into 200 for training, 40 for validation, and 60 for testing. All the images have a resolution of $1024\times 2048$.

### 5.1 Deconvolution Pipeline (DeP)

The DeP is a general-purpose conventional pipeline concatenating denoising and deconvolution (Wiener filtering), which inverts the analyzed image formation process. To better utilize the unsupervised Wiener Filter (WF) [29], we first apply the BM3D denoiser to each RAW channel separately; afterwards, we divide the outputs by the measured $\gamma$ to undo the intensity scaling. After that, WF is applied to each channel given the pre-computed PSF $\mathbf{k}$. Finally, the RAW images with bayer pattern are demosaiced by linear interpolation. The restored results are evaluated on the testing partition of the UDC dataset.

### 5.2 Learning-based Methods

UNet. We propose a learning-based restoration network baseline as shown in Figure 5. The proposed model takes a 4-channel RAW sensor data observation $y$, and outputs the restored 3-channel RGB image $x$. The model conducts denoising, deblurring, white-balancing, intensity scaling, and demosaicing in a single network, whose structure is basically a UNet. We split the encoder into two sub-encoders: one computes residual details to add, and the other learns a content encoding from the degraded images. Compared with doubling the width of each layer, splitting the encoder yields fewer parameters and makes inference and learning more efficient. To train the model from paired images, we apply the $L_{1}$ loss, which largely guarantees temporal stability compared with an adversarial loss [15]. In addition, we apply $SSIM$ and a perceptual loss (VGG loss) in an ablation study.
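To make the dual-encoder idea concrete, a minimal PyTorch sketch is given below. The depths, widths, the summation-based fusion of the two encoder streams, and the final pixel-shuffle head (which maps the half-resolution 4-channel RAW grid back to a full-resolution RGB image) are our illustrative assumptions, not the exact published architecture:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class DualEncoderUNet(nn.Module):
    """UNet with two sub-encoders: a residual/detail path and a content path.

    Input: 4-channel packed RAW (RGGB), spatial dims divisible by 4.
    Output: 3-channel RGB at 2x the input resolution via pixel shuffle.
    """

    def __init__(self, width=32):
        super().__init__()
        self.enc_content = nn.ModuleList([conv_block(4, width), conv_block(width, 2 * width)])
        self.enc_detail = nn.ModuleList([conv_block(4, width), conv_block(width, 2 * width)])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(2 * width, 4 * width)
        self.up2 = nn.ConvTranspose2d(4 * width, 2 * width, 2, stride=2)
        self.dec2 = conv_block(4 * width, 2 * width)
        self.up1 = nn.ConvTranspose2d(2 * width, width, 2, stride=2)
        self.dec1 = conv_block(2 * width, width)
        # 12 channels -> PixelShuffle(2) -> 3-channel RGB at full resolution.
        self.head = nn.Sequential(nn.Conv2d(width, 12, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, raw):
        c1 = self.enc_content[0](raw)               # content features, level 1
        c2 = self.enc_content[1](self.pool(c1))     # content features, level 2
        d1 = self.enc_detail[0](raw)                # residual detail features
        d2 = self.enc_detail[1](self.pool(d1))
        b = self.bottleneck(self.pool(c2 + d2))     # fuse the two encoders
        x = self.dec2(torch.cat([self.up2(b), c2 + d2], dim=1))
        x = self.dec1(torch.cat([self.up1(x), c1 + d1], dim=1))
        return self.head(x)
```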
We crop patches of $256\times 256$, and augment the training data using raw image augmentation [26] while preserving the RGGB bayer pattern. We train the model for 400 epochs using the Adam optimizer ($\beta_{1}=0.9$, $\beta_{2}=0.999$, and $\epsilon=10^{-8}$) with learning rate $10^{-4}$, decayed by a factor of 0.5 after 200 epochs. We also train the same structure using the synthetic data (denoted as UNet(Syn)) generated by the pipeline proposed in Section 4.2.

ResNet. Additionally, a data-driven ResNet trained with the same data is used for evaluation. To our knowledge, UNet- and ResNet-based structures are the two most widely used deep models for image restoration. We use 16 residual blocks with a feature width of 64 for our ResNet architecture, as Lim et al. do for EDSR [22]. The model also takes 4-channel RAW data and outputs 3-channel RGB images. Data-driven models cannot directly adapt to UDC inputs if trained only with bicubic degradation. We did not compare with other model structures because model novelty is not our main claim, and the two presented methods are general baselines that can achieve real-time inference. Other model variants can be explored in future work.

(a) T-OLED (b) DeP (c) UNet(Syn) (d) UNet (e) GT

Figure 6: Restoration results comparison for T-OLED. GT: Ground Truth.

(a) P-OLED (b) DeP (c) UNet(Syn) (d) UNet (e) GT

Figure 7: Restoration results comparison for P-OLED. GT: Ground Truth.

Table 4: Pipeline comparison.

Pipeline Structure | | | | 4K T-OLED | P-OLED
---|---|---|---|---|---
 | $\\#$P $\downarrow$ | GFLOPs $\downarrow$ | T $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$
DeP | - | - | - | 28.50/0.9117 | 0.4219 | 16.97/0.7084 | 0.6306
ResNet | 1.37M | 721.76 | 92.92 | 36.26/0.9703 | 0.1214 | 27.42/0.9176 | 0.2500
UNet(Syn) | 8.93M | 124.36 | 21.37 | 32.42/0.9343 | 0.1739 | 25.88/0.9006 | 0.3089
UNet | 8.93M | 124.36 | 21.37 | 36.71/0.9713 | 0.1209 | 30.45/0.9427 | 0.2219

Table 5: Ablation study on UNet alternatives.

Alternatives | | | | 4K T-OLED | P-OLED
---|---|---|---|---|---
 | $\\#$P $\downarrow$ | GFLOPs $\downarrow$ | T $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$
UNet Baseline | 8.93M | 124.36 | 21.37 | 36.71/0.9713 | 0.1209 | 30.45/0.9427 | 0.2219
Double Width | 31.03M | 386.37 | 40.42 | 37.00/0.9730 | 0.1171 | 30.37/0.9425 | 0.2044
Single Encoder | 7.76M | 97.09 | 15.85 | 36.47/0.9704 | 0.1288 | 30.26/0.9387 | 0.2318
$L_{1}\rightarrow L_{1}+SSIM$ | 8.93M | 124.36 | 21.37 | 36.69/0.9714 | 0.1246 | 30.37/0.9403 | 0.2131
$L_{1}\rightarrow L_{1}+VGG$ | 8.93M | 124.36 | 21.37 | 36.31/0.9711 | 0.1130 | 30.37/0.9403 | 0.2130

## 6 Experimental Results

### 6.1 Qualitative and Quantitative Comparisons

The qualitative restoration results are shown in Figures 6 and 7. As shown, the Deconvolution Pipeline (DeP) successfully recovers image details but still introduces some artifacts, and suffers from the inaccuracy of the computed ideal PSF. The UNet-based model achieves better visual quality and denoising performance. The results of the UNet trained with synthetic data are visually better than those of the DeP. The quantitative results are listed in Table 4. We report performance in PSNR, SSIM, the perceptual metric LPIPS [44], inference time T (ms/MPixel), and GFLOPs. The inference time is tested on a single Titan X, and GFLOPs are computed for an input size of $512\times 1024\times 4$. ResNet achieves performance comparable to the UNet, but it requires more computation and a longer inference time.
The proposed UNet-based structure is efficient and effective, and can therefore be deployed for real-time inference on high-resolution inputs with a single GPU. In Table 4, we show that synthetic data still has a gap with the real data, though it already greatly outperforms the DeP for both display types. The domain gap mainly comes from the following aspects. First, due to the residual distance between the display and the lens, visible patterns of the display appear on the image plane in the real data; recall that the diffraction model assumes the display panel lies exactly at the principal plane of the lens system. The cause of the visible bands is illustrated in the supplementary material. Second, the approximated light transmission rate may not be accurate; the measured values may be influenced by other environmental light sources. Third, impulse noise caused by dead pixels or over-exposure in the camera sensor is widespread in the real dataset. These factors leave room for further improvement of this work.

Figure 8: Face detection performance before and after applying restoration. Without a display, the original face recall rate is 60$\%$. Covering the camera with the T-OLED or P-OLED decreases the recall rate to 8$\%$ and 0$\%$, respectively. After image restoration, the recall rates recover to 56$\%$ and 39$\%$.

### 6.2 Ablation Study

For the best-performing UNet structure, we compare different UNet alternatives in Table 5. Splitting the original encoder into two sub-encoders increases the parameter count, and the performance improves accordingly. The added parameters and inference time are far less than those incurred by doubling the width of each UNet layer, yet the performance improvement is comparable (T-OLED) or even better (P-OLED). We therefore argue that the proposed UNet structure both maintains a small number of parameters and operations, and achieves real-time high-quality inference. To try alternative loss functions, we add an $SSIM$ or $VGG$ loss in addition to the $L_{1}$ loss with a 1:1 ratio. However, the performance gains on either $SSIM$ or the perceptual metric LPIPS are not significant, and the differences are not visually distinctive. An adversarial loss is not implemented due to the temporal instability of GAN-based training.

### 6.3 Downstream Applications

The proposed image restoration also enhances the performance of downstream applications, including face detection. Figure 8 shows an example of detecting faces using MTCNN [41]. Without a display, the original face recall rate is 60$\%$. Covering the camera with the T-OLED or P-OLED decreases the recall rate to 8$\%$ and 0$\%$, respectively. After image restoration, the recall rates recover to 56$\%$ and 39$\%$.

## 7 Conclusion and Limitations

This paper defined and presented a novel imaging system named Under-Display Camera (UDC). Deploying UDC in full-screen devices improves the user interaction as well as the teleconferencing experience, but harms imaging quality and other downstream vision applications. We systematically analyzed the optical system and modeled the image formation pipeline of UDC; we both collected real data using a novel acquisition system and synthesized realistic data and the system PSF using an optical model. We then proposed to address the image restoration of UDC using a Deconvolution-based Pipeline (DeP) and data-driven learning-based methods. Our experiments showed that the former achieves basic restoration while the latter demonstrates efficient, high-quality restoration.
The model trained with synthetic data also achieved remarkable performance, indicating potential generalization ability. The UDC problem has promising research value in complicated degradation analysis. In real-world applications, other factors like an active display, reflection, and lens flare are still very challenging and complicated. Future work includes exploring UDC-specific restoration models and working with aperture and display researchers to analyze the factors influencing image degradation. The ultimate goal is a restoration model that generalizes well for mass production and benefits downstream tasks.

## Appendix A Appendices

(a) Display-free (b) T-OLED (c) P-OLED

Figure A.1: More real data samples acquired by our MCIS set-up: (a) images captured with the camera covered by thin glass, (b) T-OLED, and (c) P-OLED.

### A.1 Real Data

More examples of the 8-bit RGB version of the UDC real dataset are shown in Fig. A.1. Each image has a high resolution of $1024\times 2048\times 3$. Images captured through the T-OLED exhibit blur along the horizontal direction; some spatial frequencies (i.e., vertical bands) are missing due to diffraction effects. Images captured through the P-OLED are yellow-shifted, dark, and noisy. We also store the 16-bit raw sensor data, which is mainly used for training and testing in the paper.

### A.2 Synthetic Data

Figure A.2: Real and computed point spread functions (kernels).

We follow the image formation pipeline to synthesize near-realistic data. Given only the display pattern and some specific camera measurements, we can generate the blur kernels shown in Fig. A.2 along with the degraded images for training. Fig. A.3 compares the synthetic data with the real data. Perceptually, the two sets of data samples have similar visual characteristics.

### A.3 Visible Bands for T-OLED

(a) Real data samples. (b) Synthetic data samples.

Figure A.3: Comparison of real data and synthetic data. First row: T-OLED. Second row: P-OLED.

(a) Synthetic data. (b) Real data with bands.

Figure A.4: Visible bands in real data.

In addition to the degradation formulated in the paper, there is another minor image artifact caused by the periodic grating-like pixel structure (i.e., of the T-OLED): the superposition of periodic bands over the image at low to moderate visibility levels. As shown in Fig. A.4, periodic bands are visible in the real data but not in the synthetic data. We regard this as the main gap in the data synthesis. These bands are caused by the imperfect adhesion of the display to the camera lens. The degradation model assumes the display pattern is placed exactly against the lens, whereas in the practical set-up of our experiments there is still a small distance between them. We can consider the grating as being imaged very out-of-focus on the sensor plane: the image on the sensor then consists of the grating convolved with the very-out-of-focus point spread function, i.e., a circle. This problem can be mitigated by a real industrial manufacturing process, so we did not resolve it explicitly in our experimental settings. However, eliminating such real periodic noise remains an interesting problem for future work.

### A.4 More Restoration Results

We show more restoration results in Fig. A.5.

(a) Display (b) DeP (c) UNet(Syn) (d) UNet (e) GT

Figure A.5: More restoration results. For each two-row group, the first row is for T-OLED, and the second one is for P-OLED.
## References * [1] Abdelrahman Abdelhamed, Marcus A Brubaker, and Michael S Brown. Noise flow: Noise modeling with conditional normalizing flows. In Proceedings of the IEEE International Conference on Computer Vision, pages 3165–3173, 2019. * [2] Abdelrahman Abdelhamed, Stephen Lin, and Michael S Brown. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1692–1700, 2018. * [3] Abdelrahman Abdelhamed, Radu Timofte, and Michael S Brown. Ntire 2019 challenge on real image denoising: Methods and results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. * [4] Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 126–135, 2017. * [5] Josue Anaya and Adrian Barbu. Renoir–a dataset for real low-light image noise reduction. Journal of Visual Communication and Image Representation, 51:144–154, 2018. * [6] Codruta O Ancuti, Cosmin Ancuti, Mateu Sbert, and Radu Timofte. Dense haze: A benchmark for image dehazing with dense-haze and haze-free images. arXiv preprint arXiv:1904.02904, 2019. * [7] Tim Brooks and Jonathan T Barron. Learning to synthesize motion blur. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6840–6848, 2019. * [8] Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, and Jonathan T Barron. Unprocessing images for learned raw denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11036–11045, 2019. * [9] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3291–3300, 2018. * [10] Chang Chen, Zhiwei Xiong, Xinmei Tian, Zheng-Jun Zha, and Feng Wu. Camera lens super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1652–1660, 2019. * [11] Dong-Ming Chen, Bin Xiong, and Zhen-Yu Guo. Full-screen smartphone, Sept. 3 2019. US Patent App. 29/650,323. * [12] V David John Evans, Xinrui Jiang, Andrew E Rubin, Matthew Hershenson, and Xiaoyu Miao. Optical sensors disposed beneath the display of an electronic device, Oct. 17 2019. US Patent App. 16/450,727. * [13] Raanan Fattal. Single image dehazing. ACM transactions on graphics (TOG), 27(3):72, 2008. * [14] J Scott Goldstein, Irving S Reed, and Louis L Scharf. A multistage representation of the wiener filter based on orthogonal projections. IEEE Transactions on Information Theory, 44(7):2943–2959, 1998\. * [15] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014. * [16] Joseph W Goodman. Introduction to Fourier optics. Roberts and Company Publishers, 2005. * [17] Shi Guo, Zifei Yan, Kai Zhang, Wangmeng Zuo, and Lei Zhang. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1712–1722, 2019. * [18] Samuel W Hasinoff. Photon, poisson noise. Computer Vision: A Reference Guide, pages 608–610, 2014. * [19] Michael T Heath. Scientific Computing: An Introductory Survey, Revised Second Edition. SIAM, 2018. 
* [20] Salman S Khan, VR Adarsh, Vivek Boominathan, Jasper Tan, Ashok Veeraraghavan, and Kaushik Mitra. Towards photorealistic reconstruction of highly multiplexed lensless images. In Proceedings of the IEEE International Conference on Computer Vision, pages 7860–7869, 2019. * [21] Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, and Jiří Matas. Deblurgan: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8183–8192, 2018. * [22] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 136–144, 2017. * [23] Sehoon Lim, Yuqian Zhou, Neil Emerton, and Tim Large. Aperture design for learning-based image restoration. In 3D Image Acquisition and Display: Technology, Perception and Applications, pages DF3A–2. Optical Society of America, 2020. * [24] Sehoon Lim, Yuqian Zhou, Neil Emerton, Tim Large, and Steven Bathiche. 74-1: Image restoration for display-integrated camera. In SID Symposium Digest of Technical Papers, volume 51, pages 1102–1105. Wiley Online Library, 2020. * [25] Ce Liu, Richard Szeliski, Sing Bing Kang, C Lawrence Zitnick, and William T Freeman. Automatic estimation and removal of noise from a single image. IEEE transactions on pattern analysis and machine intelligence, 30(2):299–314, 2007. * [26] Jiaming Liu, Chi-Hao Wu, Yuzhi Wang, Qin Xu, Yuqian Zhou, Haibin Huang, Chuan Wang, Shaofan Cai, Yifan Ding, Haoqiang Fan, et al. Learning raw image denoising with bayer pattern unification and bayer preserving augmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. * [27] Kristina Monakhova, Joshua Yurtsever, Grace Kuo, Nick Antipa, Kyrollos Yanny, and Laura Waller. Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20):28075–28090, 2019. * [28] Seungjun Nah, Radu Timofte, Sungyong Baik, Seokil Hong, Gyeongsik Moon, Sanghyun Son, and Kyoung Mu Lee. Ntire 2019 challenge on video deblurring: Methods and results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. * [29] François Orieux, Jean-François Giovannelli, and Thomas Rodet. Bayesian estimation of regularization and point spread function parameters for wiener–hunt deconvolution. JOSA A, 27(7):1593–1607, 2010. * [30] Yifan Peng, Qilin Sun, Xiong Dun, Gordon Wetzstein, Wolfgang Heidrich, and Felix Heide. Learned large field-of-view imaging with thin-plate optics. ACM Trans. Graph., 38(6):219–1, 2019. * [31] Tobias Plotz and Stefan Roth. Benchmarking denoising algorithms with real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1586–1595, 2017. * [32] Abhijith Punnappurath and Michael S Brown. Reflection removal using a dual-pixel sensor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1556–1565, 2019. * [33] Wenqi Ren, Si Liu, Hua Zhang, Jinshan Pan, Xiaochun Cao, and Ming-Hsuan Yang. Single image dehazing via multi-scale convolutional neural networks. In European conference on computer vision, pages 154–169. Springer, 2016. * [34] Qilin Sun, Ethan Tseng, Qiang Fu, Wolfgang Heidrich, and Felix Heide. Learning rank-1 diffractive optics for single-shot high dynamic range imaging. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1386–1396, 2020. * [35] Jasper Tan, Li Niu, Jesse K Adams, Vivek Boominathan, Jacob T Robinson, Richard G Baraniuk, and Ashok Veeraraghavan. Face detection and verification using lensless cameras. IEEE Transactions on Computational Imaging, 5(2):180–194, 2018\. * [36] Renjie Wan, Boxin Shi, Ling-Yu Duan, Ah-Hwee Tan, and Alex C Kot. Benchmarking single-image reflection removal algorithms. In Proceedings of the IEEE International Conference on Computer Vision, pages 3922–3930, 2017. * [37] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 0–0, 2018. * [38] Ing G Wenke. Organic light emitting diode (oled). Research gate, 2016. * [39] He Zhang and Vishal M Patel. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 695–704, 2018. * [40] He Zhang, Vishwanath Sindagi, and Vishal M Patel. Image de-raining using a conditional generative adversarial network. IEEE transactions on circuits and systems for video technology, 2019\. * [41] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499–1503, 2016. * [42] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017. * [43] Kai Zhang, Wangmeng Zuo, and Lei Zhang. Ffdnet: Toward a fast and flexible solution for cnn-based image denoising. IEEE Transactions on Image Processing, 27(9):4608–4622, 2018. * [44] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018. * [45] Xuaner Zhang, Qifeng Chen, Ren Ng, and Vladlen Koltun. Zoom to learn, learn to zoom. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3762–3770, 2019. * [46] Zhenhua Zhang. Image deblurring of camera under display by deep learning. In SID Symposium Digest of Technical Papers, volume 51, pages 43–46. Wiley Online Library, 2020. * [47] Yuqian Zhou, Jianbo Jiao, Haibin Huang, Jue Wang, and Thomas Huang. Adaptation strategies for applying awgn-based denoiser to realistic noise. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 10085–10086, 2019. * [48] Yuqian Zhou, Jianbo Jiao, Haibin Huang, Yang Wang, Jue Wang, Honghui Shi, and Thomas Huang. When awgn-based denoiser meets real noises. arXiv preprint arXiv:1904.03485, 2019. * [49] Yuqian Zhou, Michael Kwan, Kyle Tolentino, Neil Emerton, Sehoon Lim, Tim Large, Lijiang Fu, Zhihong Pan, Baopu Li, Qirui Yang, et al. Udc 2020 challenge on image restoration of under-display camera: Methods and results. In European Conference on Computer Vision, pages 337–351. Springer, 2020.
# Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity

https://multisimlex.com/

Ivan Vulić (LTL, University of Cambridge), Simon Baker (LTL, University of Cambridge), Edoardo Maria Ponti (LTL, University of Cambridge), Ulla Petti (LTL, University of Cambridge), Ira Leviant (Faculty of Industrial Engineering and Management, Technion, IIT), Kelly Wing (LTL, University of Cambridge), Olga Majewska (LTL, University of Cambridge), Eden Bar (Faculty of Industrial Engineering and Management, Technion, IIT), Matt Malone (LTL, University of Cambridge), Thierry Poibeau (LATTICE Lab, CNRS and ENS/PSL and Univ. Sorbonne nouvelle/USPC), Roi Reichart (Faculty of Industrial Engineering and Management, Technion, IIT), Anna Korhonen (LTL, University of Cambridge)

###### Abstract

We introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering datasets for 12 typologically diverse languages, including major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as less-resourced ones (e.g., Welsh, Kiswahili). Each language dataset is annotated for the lexical relation of semantic similarity and contains 1,888 semantically aligned concept pairs, providing a representative coverage of word classes (nouns, verbs, adjectives, adverbs), frequency ranks, similarity intervals, lexical fields, and concreteness levels. Additionally, owing to the alignment of concepts across languages, we provide a suite of 66 cross-lingual semantic similarity datasets. Due to its extensive size and language coverage, Multi-SimLex provides entirely novel opportunities for experimental evaluation and analysis. On its monolingual and cross-lingual benchmarks, we evaluate and analyze a wide array of recent state-of-the-art monolingual and cross-lingual representation models, including static and contextualized word embeddings (such as fastText, M-BERT, and XLM), externally informed lexical representations, as well as fully unsupervised and (weakly) supervised cross-lingual word embeddings. We also present a step-by-step dataset creation protocol for creating consistent, Multi-SimLex-style resources for additional languages. We make these contributions - the public release of Multi-SimLex datasets, their creation protocol, strong baseline results, and in-depth analyses which can be helpful in guiding future developments in multilingual lexical semantics and representation learning - available via a website which will encourage community effort in further expansion of Multi-SimLex to many more languages. Such a large-scale semantic resource could inspire significant further advances in NLP across languages.

## 1 Introduction

The lack of annotated training and evaluation data for many tasks and domains hinders the development of computational models for the majority of the world’s languages Snyder and Barzilay (2010); Adams et al. (2017); Ponti et al. (2019a). The necessity to guide and advance multilingual and cross-lingual NLP through annotation efforts that follow cross-lingually consistent guidelines has been recently recognized by collaborative initiatives such as the Universal Dependency (UD) project Nivre et al. (2019).
The latest version of UD (as of March 2020) covers more than 70 languages. Crucially, this resource continues to steadily grow and evolve through the contributions of annotators from across the world, extending the UD’s reach to a wide array of typologically diverse languages. Besides steering research in multilingual parsing Zeman et al. (2018); Kondratyuk and Straka (2019); Doitch et al. (2019) and cross-lingual parser transfer Rasooli and Collins (2017); Lin et al. (2019); Rotman and Reichart (2019), the consistent annotations and guidelines have also enabled a range of insightful comparative studies focused on the languages’ syntactic (dis)similarities Bjerva and Augenstein (2018); Ponti et al. (2018a); Pires, Schlinger, and Garrette (2019). Inspired by the UD work and its substantial impact on research in (multilingual) syntax, in this article we introduce Multi-SimLex, a suite of manually and consistently annotated semantic datasets for 12 different languages, focused on the fundamental lexical relation of semantic similarity Budanitsky and Hirst (2006); Hill, Reichart, and Korhonen (2015). For any pair of words, this relation measures whether their referents share the same (functional) features, as opposed to general cognitive association captured by co-occurrence patterns in texts (i.e., the distributional information). Datasets that quantify the strength of true semantic similarity between concept pairs such as SimLex-999 Hill, Reichart, and Korhonen (2015) or SimVerb-3500 Gerz et al. (2016) have been instrumental in improving models for distributional semantics and representation learning. Discerning between semantic similarity and relatedness/association is not only crucial for theoretical studies on lexical semantics (see §2), but has also been shown to benefit a range of language understanding tasks in NLP. Examples include dialog state tracking Mrkšić et al. (2017); Ren et al. (2018), spoken language understanding Kim et al. (2016); Kim, de Marneffe, and Fosler-Lussier (2016), text simplification Glavaš and Vulić (2018); Ponti et al. (2018b); Lauscher et al. (2019), and dictionary and thesaurus construction Cimiano, Hotho, and Staab (2005); Hill et al. (2016). Despite the proven usefulness of semantic similarity datasets, they are available only for a small and typologically narrow sample of resource-rich languages such as German, Italian, and Russian Leviant and Reichart (2015), whereas some language types and low-resource languages typically lack similar evaluation data. Even if some resources do exist, they are limited in size (e.g., 500 pairs in Turkish Ercan and Yıldız (2018), 500 in Farsi Camacho-Collados et al. (2017), or 300 in Finnish Venekoski and Vankka (2017)) and coverage (e.g., all datasets which originated from the original English SimLex-999 contain only highly frequent concepts, and are dominated by nouns). This is why, as our departure point, we introduce a larger and more comprehensive English word similarity dataset spanning 1,888 concept pairs (see §4). Most importantly, semantic similarity datasets in different languages have been created using heterogeneous construction procedures with different guidelines for translation and annotation, as well as different rating scales. For instance, some datasets were obtained by directly translating the English SimLex-999 in its entirety Leviant and Reichart (2015); Mrkšić et al. (2017) or in part Venekoski and Vankka (2017).
Other datasets were created from scratch Ercan and Yıldız (2018), and yet others sampled English concept pairs differently from SimLex-999 and then translated and reannotated them in target languages Camacho-Collados et al. (2017). This heterogeneity makes these datasets incomparable and precludes systematic cross-linguistic analyses. In this article, consolidating the lessons learned from previous dataset construction paradigms, we propose a carefully designed translation and annotation protocol for developing monolingual Multi-SimLex datasets with aligned concept pairs for typologically diverse languages. We apply this protocol to a set of 12 languages, including a mixture of major languages (e.g., Mandarin, Russian, and French) as well as several low-resource ones (e.g., Kiswahili, Welsh, and Yue Chinese). We demonstrate that our proposed dataset creation procedure yields data with high inter-annotator agreement rates (e.g., the average mean inter-annotator agreement for Welsh is 0.742). The unified construction protocol and the alignment between concept pairs enable a series of quantitative analyses. Preliminary studies on the influence that polysemy and cross-lingual variation in lexical categories (see §2.3) have on similarity judgments are provided in §5. Data created according to the Multi-SimLex protocol also allow for probing whether similarity judgments are universal across languages, or rather depend on linguistic affinity (in terms of linguistic features, phylogeny, and geographical location). We investigate this question in §5.4. Naturally, Multi-SimLex datasets can be used as an intrinsic evaluation benchmark to assess the quality of lexical representations based on monolingual, joint multilingual, and transfer learning paradigms. We conduct a systematic evaluation of several state-of-the-art representation models in §7, showing that there are large gaps between human and system performance in all languages. The proposed construction paradigm also supports the automatic creation of 66 cross-lingual Multi-SimLex datasets by interleaving the monolingual ones. We outline the construction of the cross-lingual datasets in §6, and then present a quantitative evaluation of a series of cutting-edge cross-lingual representation models on this benchmark in §8.

Contributions. We now summarize the main contributions of this work:

1) Building on lessons learned from prior work, we create a more comprehensive lexical semantic similarity dataset for the English language spanning a total of 1,888 concept pairs balanced with respect to similarity, frequency, and concreteness, and covering four word classes: nouns, verbs, adjectives and, for the first time, adverbs. This dataset serves as the main source for the creation of equivalent datasets in several other languages.

2) We present a carefully designed and rigorous language-agnostic translation and annotation protocol. These well-defined guidelines will facilitate the development of future Multi-SimLex datasets for other languages. The proposed protocol eliminates some crucial issues with prior efforts focused on the creation of multi-lingual semantic resources, namely: i) limited coverage; ii) heterogeneous annotation guidelines; and iii) concept pairs which are semantically incomparable across different languages.

3) We offer to the community manually annotated evaluation sets of 1,888 concept pairs across 12 typologically diverse languages, and 66 large cross-lingual evaluation sets.
To the best of our knowledge, Multi-SimLex is the most comprehensive evaluation resource to date focused on the relation of semantic similarity.

4) We benchmark a wide array of recent state-of-the-art monolingual and cross-lingual word representation models across our sample of languages. The results can serve as strong baselines that lay the foundation for future improvements.

5) We present a first large-scale evaluation study on the ability of encoders pretrained on language modeling (such as bert Devlin et al. (2019) and xlm Conneau and Lample (2019)) to reason over word-level semantic similarity in different languages. To our own surprise, the results show that monolingual pretrained encoders, even when presented with word types out of context, are sometimes competitive with static word embedding models such as fastText Bojanowski et al. (2017) or word2vec Mikolov et al. (2013). The results also reveal a huge gap in performance between massively multilingual pretrained encoders and language-specific encoders in favor of the latter: our findings support other recent empirical evidence related to the “curse of multilinguality” Conneau et al. (2019); Bapna and Firat (2019) in representation learning.

6) We make all of these resources available on a website which facilitates easy creation, submission, and sharing of Multi-SimLex-style datasets for a larger number of languages. We hope that this will yield an even larger repository of semantic resources that inspire future advances in NLP within and across languages.

In light of the success of Universal Dependencies Nivre et al. (2019), we hope that our initiative will instigate a collaborative public effort with established and clear-cut guidelines that will result in additional Multi-SimLex datasets in a large number of languages in the near future. Moreover, we hope that it will provide means to advance our understanding of distributional and lexical semantics across a large number of languages. All monolingual and cross-lingual Multi-SimLex datasets–along with detailed translation and annotation guidelines–are available online at: https://multisimlex.com/.

## 2 Lexical Semantic Similarity

### 2.1 Similarity and Association

The focus of the Multi-SimLex initiative is on the lexical relation of pure semantic similarity. For any pair of words, this relation measures whether their referents share the same features. For instance, graffiti and frescos are similar to the extent that they are both forms of painting and appear on walls. This relation can be contrasted with the cognitive association between two words, which often depends on how much their referents interact in the real world, or are found in the same situations. For instance, a painter is easily associated with frescos, although they lack any physical commonalities. Association is also known in the literature under other names: relatedness Budanitsky and Hirst (2006), topical similarity (McKeown et al., 2002), and domain similarity (Turney, 2012). Semantic similarity and association overlap to some degree, but do not coincide Kiela, Hill, and Clark (2015); Vulić, Kiela, and Korhonen (2017). In fact, there exist plenty of pairs that are intuitively associated but not similar. Pairs where the converse is true can also be encountered, although more rarely. An example is synonym pairs where one word is common and the other infrequent, such as to seize and to commandeer.
Hill, Reichart, and Korhonen (2015) revealed that while similarity measures based on the WordNet graph (Wu and Palmer, 1994) and human judgments of association in the University of South Florida Free Association Database (Nelson, McEvoy, and Schreiber, 2004) do correlate, a number of pairs follow opposite trends. Several studies on human cognition also point in the same direction. For instance, semantic priming can be triggered by similar words without association (Lucas, 2000). On the other hand, a connection with cue words is established more quickly for topically related words than for similar words in free association tasks De Deyne and Storms (2008). A key property of semantic similarity is its gradience: pairs of words can be similar to different degrees. On the other hand, the relation of synonymy is binary: pairs of words are synonyms if they can be substituted in all contexts (or most contexts, in a looser sense), otherwise they are not. While synonyms can be conceived as lying on one extreme of the semantic similarity continuum, it is crucial to note that their definition is stated in purely relational terms, rather than invoking their referential properties (Lyons, 1977; Cruse, 1986; Coseriu, 1967). This makes behavioral studies on semantic similarity fundamentally different from lexical resources like WordNet Miller (1995), which include paradigmatic relations (such as synonymy).

### 2.2 Similarity for NLP: Intrinsic Evaluation and Semantic Specialization

The ramifications of the distinction between similarity and association are profound for distributional semantics. This paradigm of lexical semantics is grounded in the distributional hypothesis, formulated by Firth (1957) and Harris (1951). According to this hypothesis, the meaning of a word can be recovered empirically from the contexts in which it occurs within a collection of texts. Since both pairs of topically related words and pairs of purely similar words tend to appear in the same contexts, the meaning induced from such contexts conflates the two distinct relations Hill, Reichart, and Korhonen (2015); Schwartz, Reichart, and Rappoport (2015); Vulić et al. (2017b). As a result, distributional methods obscure a crucial facet of lexical meaning. This limitation also carries over to word embeddings (WEs), representations of words as low-dimensional vectors that have become indispensable for a wide range of NLP applications (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016, inter alia). In particular, it involves both static WEs learned from co-occurrence patterns Mikolov et al. (2013); Levy and Goldberg (2014); Bojanowski et al. (2017) and contextualized WEs learned from modeling word sequences (Peters et al., 2018; Devlin et al., 2019, inter alia). As a result, in the induced representations, geometrical closeness (measured e.g. through cosine distance) conflates genuine similarity with broad relatedness. For instance, the vectors for antonyms such as sober and drunk, by definition dissimilar, might be neighbors in the semantic space under the distributional hypothesis. Turney (2012), Kiela and Clark (2014), and Melamud et al. (2016) demonstrated that different choices of hyper-parameters in WE algorithms (such as the context window) emphasize different relations in the resulting representations. Likewise, Agirre et al. (2009) and Levy and Goldberg (2014) discovered that WEs learned from texts annotated with syntactic information mirror similarity better than simple local bag-of-words neighborhoods.
The failure of WEs to capture semantic similarity, in turn, affects model performance in several NLP applications where such knowledge is crucial. In particular, Natural Language Understanding tasks such as statistical dialog modeling, text simplification, or semantic text similarity Mrkšić et al. (2016); Kim et al. (2016); Ponti et al. (2019c), among others, suffer the most. As a consequence, resources providing clean information on semantic similarity are key in mitigating the side effects of the distributional signal. In particular, such databases can be employed for the intrinsic evaluation of specific WE models as a proxy for their reliability in downstream applications (Collobert and Weston, 2008; Baroni and Lenci, 2010; Hill, Reichart, and Korhonen, 2015); intuitively, the more WEs are misaligned with human judgments of similarity, the more their performance on actual tasks is expected to be degraded. Moreover, word representations can be specialized (a.k.a. retrofitted) by disentangling the word relations of similarity and association. In particular, linguistic constraints sourced from external databases (such as synonyms from WordNet) can be injected into WEs (Faruqui et al., 2015; Wieting et al., 2015; Mrkšić et al., 2017; Lauscher et al., 2019; Kamath et al., 2019, inter alia) in order to enforce a particular relation in a distributional semantic space while preserving the original adjacency properties.

### 2.3 Similarity and Language Variation: Semantic Typology

In this work, we tackle the concept of (true) semantic similarity from a multilingual perspective. While the same meaning representations may be shared by all human speakers at a deep cognitive level, there is no one-to-one mapping between the words in the lexicons of different languages. This makes the comparison of similarity judgments across languages difficult, since the meaning overlap of translationally equivalent words is sometimes far less than exact. This results from the fact that the way languages ‘partition’ semantic fields is partially arbitrary (Trier, 1931), although constrained cross-lingually by common cognitive biases Majid et al. (2007). For instance, consider the field of colors: English distinguishes between green and blue, whereas Murle (South Sudan) has a single word for both (Kay and Maffi, 2013). In general, semantic typology studies the variation in lexical semantics across the world’s languages. According to Evans (2011), the ways languages categorize concepts into the lexicon follow three main axes: 1) granularity: what is the number of categories in a specific domain?; 2) boundary location: where do the lines marking different categories lie?; 3) grouping and dissection: what are the membership criteria of a category; which instances are considered to be more prototypical? Different choices with respect to these axes lead to different lexicalization patterns.111More formally, colexification is a phenomenon whereby different meanings can be expressed by the same word in a language François (2008). For instance, the two senses which are distinguished in English as time and weather are co-lexified in Croatian: the word vrijeme is used in both cases. For instance, distinct senses of a polysemous word in English, such as skin (referring to both the body and fruit), may be assigned separate words in other languages, such as Italian pelle and buccia, respectively (Rzymski et al., 2020).
We later analyze whether similarity scores obtained from native speakers also loosely follow the patterns described by semantic typology.

## 3 Previous Work and Evaluation Data

Word Pair Datasets. Rich expert-created resources such as WordNet Miller (1995); Fellbaum (1998), VerbNet Kipper Schuler (2005); Kipper et al. (2008), or FrameNet Baker, Fillmore, and Lowe (1998) encode a wealth of semantic and syntactic information, but are expensive and time-consuming to create. The scale of this problem gets multiplied by the number of languages under consideration. Therefore, crowd-sourcing with non-expert annotators has been adopted as a quicker alternative to produce smaller and more focused semantic resources and evaluation benchmarks. This alternative practice has had a profound impact on distributional semantics and representation learning Hill, Reichart, and Korhonen (2015). While some prominent English word pair datasets such as WordSim-353 Finkelstein et al. (2002), MEN Bruni, Tran, and Baroni (2014), or Stanford Rare Words Luong, Socher, and Manning (2013) did not discriminate between similarity and relatedness, the importance of this distinction was established by Hill, Reichart, and Korhonen (2015, see again the discussion in §2.1) through the creation of SimLex-999. This inspired other similar datasets which focused on different lexical properties. For instance, SimVerb-3500 Gerz et al. (2016) provided similarity ratings for 3,500 English verbs, whereas CARD-660 Pilehvar et al. (2018) aimed at measuring the semantic similarity of infrequent concepts.

Semantic Similarity Datasets in Other Languages. Motivated by the impact of datasets such as SimLex-999 and SimVerb-3500 on representation learning in English, a line of related work focused on creating similar resources in other languages. The dominant approach is translating and reannotating the entire original English SimLex-999 dataset, as done previously for German, Italian, and Russian Leviant and Reichart (2015), Hebrew and Croatian Mrkšić et al. (2017), and Polish Mykowiecka, Marciniak, and Rychlik (2018). Venekoski and Vankka (2017) apply this process only to a subset of 300 concept pairs from the English SimLex-999. On the other hand, Camacho-Collados et al. (2017) sampled a new set of 500 English concept pairs to ensure wider topical coverage and balance across similarity spectra, and then translated those pairs to German, Italian, Spanish, and Farsi (SEMEVAL-500). A similar approach was followed by Ercan and Yıldız (2018) for Turkish, by Huang et al. (2019) for Mandarin Chinese, and by Sakaizawa and Komachi (2018) for Japanese. Netisopakul, Wohlgenannt, and Pulich (2019) translated the concatenation of SimLex-999, WordSim-353, and the English SEMEVAL-500 into Thai and then reannotated it. Finally, Barzegar et al. (2018) translated English SimLex-999 and WordSim-353 to 11 resource-rich target languages (German, French, Russian, Italian, Dutch, Chinese, Portuguese, Swedish, Spanish, Arabic, Farsi), but they did not provide details concerning the translation process and the resolution of translation disagreements. More importantly, they also did not reannotate the translated pairs in the target languages. As we discussed in §2.3 and reiterate later in §5, semantic differences among languages can have a profound impact on the annotation scores; particularly, we show in §5.4 that these differences even roughly define language clusters based on language affinity.
A core issue with the current datasets concerns the lack of one unified procedure that ensures the comparability of resources in different languages. Further, concept pairs for different languages are sourced from different corpora (e.g., direct translation of the English data versus sampling from scratch in the target language). Moreover, the previous SimLex-based multilingual datasets inherit the main deficiencies of the English original version, such as the focus on nouns and highly frequent concepts. Finally, prior work mostly focused on languages that are widely spoken and does not account for the variety of the world’s languages. Our long-term goal is devising a standardized methodology to extend the coverage also to languages that are resource-lean and/or typologically diverse (e.g., Welsh, Kiswahili as in this work).

Multilingual Datasets for Natural Language Understanding. The Multi-SimLex initiative and corresponding datasets are also aligned with the recent efforts on procuring multilingual benchmarks that can help advance computational modeling of natural language understanding across different languages. For instance, pretrained multilingual language models such as multilingual bert Devlin et al. (2019) or xlm Conneau and Lample (2019) are typically probed on XNLI test data Conneau et al. (2018b) for cross-lingual natural language inference. XNLI was created by translating examples from the English MultiNLI dataset and projecting its sentence labels Williams, Nangia, and Bowman (2018). Other recent multilingual datasets target the task of question answering based on reading comprehension: i) MLQA Lewis et al. (2019) includes 7 languages; ii) XQuAD Artetxe, Ruder, and Yogatama (2019) covers 10 languages; iii) TyDiQA Clark et al. (2020) covers 9 widely spoken, typologically diverse languages. While MLQA and XQuAD result from translating an English dataset, TyDiQA was built independently in each language. Another multilingual dataset, PAWS-X Yang et al. (2019), focused on the paraphrase identification task and was created by translating the original English PAWS Zhang, Baldridge, and He (2019) into 6 languages. We believe that Multi-SimLex can substantially contribute to this endeavor by offering a comprehensive multilingual benchmark for the fundamental lexical-level relation of semantic similarity. In future work, Multi-SimLex also offers an opportunity to investigate the correlations between word-level semantic similarity and performance in downstream tasks such as QA and NLI across different languages.

## 4 The Base for Multi-SimLex: Extending English SimLex-999

In this section, we discuss the design principles behind the English (eng) Multi-SimLex dataset, which is the basis for all the Multi-SimLex datasets in other languages, as detailed in §5. We first argue that a new, more balanced, and more comprehensive evaluation resource for lexical semantic similarity in English is necessary. We then describe how the 1,888 word pairs contained in the eng Multi-SimLex were selected in such a way as to represent various linguistic phenomena within a single integrated resource.

Construction Criteria. The following criteria have to be satisfied by any high-quality semantic evaluation resource, as argued by previous studies focused on the creation of such resources (Hill, Reichart, and Korhonen, 2015; Gerz et al., 2016; Vulić et al., 2017a; Camacho-Collados et al., 2017, inter alia):

(C1) Representative and diverse.
The resource must cover the full range of diverse concepts occurring in natural language, including different word classes (e.g., nouns, verbs, adjectives, adverbs), concrete and abstract concepts, a variety of lexical fields, and different frequency ranges.

(C2) Clearly defined. The resource must provide a clear understanding of which semantic relation exactly is annotated and measured, possibly contrasting it with other relations. For instance, the original SimLex-999 and SimVerb-3500 explicitly focus on true semantic similarity and distinguish it from broader relatedness captured by datasets such as MEN Bruni, Tran, and Baroni (2014) or WordSim-353 Finkelstein et al. (2002).

(C3) Consistent and reliable. The resource must ensure consistent annotations obtained from non-expert native speakers following simple and precise annotation guidelines.

In choosing the word pairs and constructing eng Multi-SimLex, we adhere to these requirements. Moreover, we follow good practices established by the research on related resources. In particular, since the introduction of the original SimLex-999 dataset Hill, Reichart, and Korhonen (2015), follow-up works have improved its construction protocol across several aspects, including: 1) coverage of more lexical fields, e.g., by relying on a diverse set of Wikipedia categories Camacho-Collados et al. (2017), 2) infrequent/rare words Pilehvar et al. (2018), 3) focus on particular word classes, e.g., verbs Gerz et al. (2016), 4) annotation quality control Pilehvar et al. (2018). Our goal is to make use of these improvements towards a larger, more representative, and more reliable lexical similarity dataset in English and, consequently, in all other languages.

The Final Output: English Multi-SimLex. In order to ensure that criterion C1 is satisfied, we consolidate and integrate the data already carefully sampled in prior work into a single, comprehensive, and representative dataset. This way, we can control for diversity, frequency, and other properties while avoiding this time-consuming selection process from scratch. Note that, on the other hand, the word pairs chosen for English are scored from scratch as part of the entire Multi-SimLex annotation process, introduced later in §5. We now describe the external data sources for the final set of word pairs:

1) Source: SimLex-999 Hill, Reichart, and Korhonen (2015). The English Multi-SimLex was initially conceived as an extension of the original SimLex-999 dataset. Therefore, we include all 999 word pairs from SimLex, which span 666 noun pairs, 222 verb pairs, and 111 adjective pairs. While SimLex-999 already provides examples representing different POS classes, it does not have sufficient coverage of different linguistic phenomena: for instance, it contains only very frequent concepts, and it does not provide a representative set of verbs (Gerz et al., 2016).

2) Source: SemEval-17: Task 2 (henceforth SEMEVAL-500; Camacho-Collados et al., 2017). We start from the full dataset of 500 concept pairs and extract a total of 334 concept pairs for English Multi-SimLex: a) pairs which contain only single-word concepts, b) which are not named entities, c) where the POS tags of the two concepts are the same, d) where both concepts occur among the top 250K most frequent word types in the English Wikipedia, and e) which do not already occur in SimLex-999.
The original concepts were sampled so as to span all the 34 domains available as part of BabelDomains Camacho-Collados and Navigli (2017), which roughly correspond to the main high-level Wikipedia categories. This ensures topical diversity in our sub-sample.

3) Source: CARD-660 Pilehvar et al. (2018). 67 word pairs are taken from this dataset focused on rare word similarity, applying the same selection criteria a) to e) employed for SEMEVAL-500. Words are controlled for frequency based on their occurrence counts from the Google News data and the ukWaC corpus Baroni et al. (2009). CARD-660 contains some words that are very rare (logboat), domain-specific (erythroleukemia), or slang (2mrw), which might be difficult to translate and annotate across a wide array of languages. Hence, we opt for retaining only the concept pairs above the threshold of the top 250K most frequent Wikipedia concepts, as above.

4) Source: SimVerb-3500 Gerz et al. (2016). Since both CARD-660 and SEMEVAL-500 are heavily skewed towards noun pairs, and nouns also dominate the original SimLex-999, we also extract additional verb pairs from the verb-specific similarity dataset SimVerb-3500. We randomly sample 244 verb pairs from SimVerb-3500 that cover the full similarity spectrum. In particular, we add 61 verb pairs for each of the similarity intervals: $[0,1.5),[1.5,3),[3,4.5),[4.5,6]$; a minimal sketch of this interval-stratified sampling is provided after this list. Since verbs in SimVerb-3500 were originally chosen from VerbNet Kipper, Snyder, and Palmer (2004); Kipper et al. (2008), they cover a wide range of verb classes and their related linguistic phenomena.

5) Source: University of South Florida (USF; Nelson, McEvoy, and Schreiber, 2004) norms, the largest database of free association for English. In order to improve the representation of different POS classes, we sample additional adjectives and adverbs from the USF norms following the procedure established by Hill, Reichart, and Korhonen (2015); Gerz et al. (2016). This yields an additional 122 adjective pairs, but only a limited number of adverb pairs (e.g., later – never, now – here, once – twice). Therefore, we also create a set of adverb pairs semi-automatically by sampling adjectives that can be derivationally transformed into adverbs (e.g., by adding the suffix -ly) from the USF, and assessing the correctness of such derivation in WordNet. The resulting pairs include, for instance, primarily – mainly, softly – firmly, roughly – reliably, etc. We include a total of 123 adverb pairs into the final English Multi-SimLex. Note that this is the first time adverbs are included in any semantic similarity dataset.
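The interval-stratified verb sampling from source 4) can be illustrated with a short sketch. This is a minimal example, assuming SimVerb-3500 has already been loaded as a list of (word1, word2, score) tuples; the function name and data layout are illustrative, not part of any released tooling.

```python
import random

def stratified_verb_sample(simverb_pairs, per_interval=61, seed=42):
    """Sample verb pairs uniformly across four similarity intervals.

    `simverb_pairs` is assumed to be a list of (word1, word2, score)
    tuples with scores on the 0-6 SimVerb-3500 rating scale.
    """
    rng = random.Random(seed)
    # Intervals [0,1.5), [1.5,3), [3,4.5), [4.5,6]; the last one is closed.
    intervals = [(0.0, 1.5), (1.5, 3.0), (3.0, 4.5), (4.5, 6.0)]
    sampled = []
    for lo, hi in intervals:
        bucket = [p for p in simverb_pairs
                  if lo <= p[2] < hi or (hi == 6.0 and p[2] == 6.0)]
        sampled.extend(rng.sample(bucket, per_interval))
    return sampled  # 4 * 61 = 244 verb pairs in total
```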
Fulfillment of Construction Criteria. The final eng Multi-SimLex dataset spans 1,051 noun pairs, 469 verb pairs, 245 adjective pairs, and 123 adverb pairs. (There is a very small number of adjective and verb pairs extracted from CARD-660 and SEMEVAL-500 as well. For instance, the total number of verbs is 469 since we augment the original 222 SimLex-999 verb pairs with 244 SimVerb-3500 pairs and 3 SEMEVAL-500 pairs; similarly for adjectives.) As mentioned above, criterion C1 has been fulfilled by relying only on word pairs that already underwent meticulous sampling processes in prior work, integrating them into a single resource. As a consequence, Multi-SimLex allows for fine-grained analyses over different POS classes, concreteness levels, similarity spectra, frequency intervals, relation types, morphology, and lexical fields, and it also includes some challenging orthographically similar examples (e.g., infection – inflection). (Unlike SEMEVAL-500 and CARD-660, we do not explicitly control for the equal representation of concept pairs across each similarity interval, for several reasons: a) Multi-SimLex contains a substantially larger number of concept pairs, so it is possible to extract balanced samples from the full data; b) such balance, even if imposed on the English dataset, would be distorted in all other monolingual and cross-lingual datasets; c) balancing over similarity intervals arguably does not reflect a true distribution “in the wild”, where most concepts are only loosely related or completely unrelated.) We ensure that criteria C2 and C3 are satisfied by using annotation guidelines similar to those of SimLex-999, SimVerb-3500, and SEMEVAL-500, which explicitly target semantic similarity. In what follows, we outline the carefully tailored process of translating and annotating Multi-SimLex datasets in all target languages.

## 5 Multi-SimLex: Translation and Annotation

We now detail the development of the final Multi-SimLex resource, describing our language selection process, as well as translation and annotation of the resource, including the steps taken to ensure and measure its quality. We also provide key data statistics and preliminary cross-lingual comparative analyses.

Language Selection. Multi-SimLex comprises eleven languages in addition to English. The main objective of our inclusion criteria has been to balance language prominence (by number of speakers of the language) for maximum impact of the resource, while simultaneously ensuring a diverse suite of languages in terms of typological features (such as morphological type and language family). Table 1 summarizes key information about the languages currently included in Multi-SimLex. We have included a mixture of fusional, agglutinative, isolating, and introflexive languages that come from eight different language families. This includes languages that are very widely used, such as Chinese Mandarin and Spanish, and low-resource languages such as Welsh and Kiswahili. We hope to further include additional languages and inspire other researchers to contribute to the effort over the lifetime of this project.

The work on data collection can be divided into two crucial phases: 1) a translation phase, where the extended English language dataset with 1,888 pairs (described in §4) is translated into eleven target languages, and 2) an annotation phase, where human raters scored each pair in the translated set as well as the English set. Detailed guidelines for both phases are available online at: https://multisimlex.com.
Language | ISO 639-3 | Family | Type | # Speakers
---|---|---|---|---
Chinese Mandarin | cmn | Sino-Tibetan | Isolating | 1.116 B
Welsh | cym | IE: Celtic | Fusional | 0.7 M
English | eng | IE: Germanic | Fusional | 1.132 B
Estonian | est | Uralic | Agglutinative | 1.1 M
Finnish | fin | Uralic | Agglutinative | 5.4 M
French | fra | IE: Romance | Fusional | 280 M
Hebrew | heb | Afro-Asiatic | Introflexive | 9 M
Polish | pol | IE: Slavic | Fusional | 50 M
Russian | rus | IE: Slavic | Fusional | 260 M
Spanish | spa | IE: Romance | Fusional | 534.3 M
Kiswahili | swa | Niger-Congo | Agglutinative | 98 M
Yue Chinese | yue | Sino-Tibetan | Isolating | 73.5 M

Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com.

### 5.1 Word Pair Translation

Translators for each target language were instructed to find direct or approximate translations for the 1,888 word pairs that satisfy the following rules. (1) All pairs in the translated set must be unique (i.e., no duplicate pairs); (2) translating two words from the same English pair into the same word in the target language is not allowed (e.g., it is not allowed to translate car and automobile to the same Spanish word coche); (3) the translated pairs must preserve the semantic relation between the two words when possible: when multiple translations are possible, the translation that best conveys the semantic relation found in the original English pair is selected; (4) if it is not possible to use a single-word translation in the target language, then a multi-word expression (MWE) can be used to convey the nearest possible semantics given the above points (e.g., the English word homework is translated into the Polish MWE praca domowa).

Satisfying the above rules when finding appropriate translations for each pair, while keeping to the spirit of the intended semantic relation in the English version, is not always straightforward. For instance, kinship terminology in Sinitic languages (Mandarin and Yue) uses different terms depending on whether the family member is older or younger, and whether the family member comes from the mother’s side or the father’s side. In Mandarin, _brother_ has no direct translation and can be translated as either 哥哥 (_older brother_) or 弟弟 (_younger brother_). Therefore, in such cases, the translators are asked to choose the best option given the semantic context (relation) expressed by the pair in English, and otherwise select one of the translations arbitrarily. The same mechanism is also used to remove duplicate pairs in the translated set, by differentiating the duplicates using a different variant at each instance. Further, many translation instances were resolved using near-synonymous terms in the translation. For example, the words in the pair _wood – timber_ can only be directly translated in Estonian to _puit_, and are thus not distinguishable. Therefore, the translators approximated the translation for timber with the compound noun _puitmaterjal_ (literally: _wood material_) in order to produce a valid pair in the target language. In some cases, a direct transliteration from English is used.
For example, both words in the pair _physician – doctor_ translate to the same word in Estonian (arst); the less formal word _doktor_ is therefore used as a translation of _doctor_ to generate a valid pair.

Languages: | cmn | cym | est | fin | fra | heb | pol | rus | spa | swa | yue | Avg
---|---|---|---|---|---|---|---|---|---|---|---|---
Nouns | 84.5 | 80.0 | 90.0 | 87.3 | 78.2 | 98.2 | 90.0 | 95.5 | 85.5 | 80.0 | 77.3 | 86.0
Adjectives | 88.5 | 88.5 | 61.5 | 73.1 | 69.2 | 100.0 | 84.6 | 100.0 | 69.2 | 88.5 | 84.6 | 82.5
Verbs | 88.0 | 74.0 | 82.0 | 76.0 | 78.0 | 100.0 | 74.0 | 100.0 | 74.0 | 76.0 | 86.0 | 82.5
Adverbs | 92.9 | 100.0 | 57.1 | 78.6 | 92.9 | 100.0 | 85.7 | 100.0 | 85.7 | 85.7 | 78.6 | 87.0
Overall | 86.5 | 81.0 | 82.0 | 82.0 | 78.0 | 99.0 | 85.0 | 97.5 | 80.5 | 81.0 | 80.5 | 84.8

Table 2: Inter-translator agreement (% of matched translated words) by independent translators, computed on a randomly selected 100-pair English sample from the Multi-SimLex dataset and the corresponding 100-pair samples from the other datasets.

We measure the quality of the translated pairs by using a random sample of 100 pairs (from the 1,888 pairs) translated by an independent translator for each target language. The sample is proportionally stratified according to the part-of-speech categories. The independent translator is given instructions identical to those of the main translator; we then measure the percentage of matched translated words between the two translations of the sample set. Table 2 summarizes the inter-translator agreement results for all languages and by part-of-speech subsets. Overall, across all languages, the agreement is 84.8%, which is similar to prior work Camacho-Collados et al. (2017); Vulić, Ponzetto, and Glavaš (2019).

### 5.2 Guidelines and Word Pair Scoring

Across all languages, 145 human annotators were asked to score all 1,888 pairs (in their given language). In the end, we collect at least ten valid annotations for each word pair in each language. All annotators were required to abide by the following instructions:

1. Each annotator must assign an integer score between 0 and 6 (inclusive) indicating how semantically similar the two words in a given pair are. A score of 6 indicates very high similarity (i.e., perfect synonymy), while zero indicates no similarity.

2. Each annotator must score the entire set of 1,888 pairs in the dataset. The pairs must not be shared between different annotators.

3. Annotators are able to spread the workload over a period of approximately 2-3 weeks, and are able to use external sources (e.g., dictionaries, thesauri, WordNet) if required.

4. Annotators are kept anonymous, and are not able to communicate with each other during the annotation process.

The selection criteria required that all annotators be native speakers of the target language. Preference was given to annotators with university education, but this was not a requirement. Annotators were asked to complete a spreadsheet containing the translated pairs of words, the part-of-speech, and a column to enter the score. The annotators did not have access to the original pairs in English.

To ensure the quality of the collected ratings, we have employed an adjudication protocol similar to the one proposed and validated by Pilehvar et al. (2018). It consists of the following three rounds:

Round 1: All annotators are asked to follow the instructions outlined above, and to rate all 1,888 pairs with integer scores between 0 and 6.
Round 2: We compare the scores of all annotators and identify, for each annotator, the pairs that have shown the most disagreement. We ask the annotators to reconsider the assigned scores for those pairs only. The annotators may choose to either change or keep their scores. As in Round 1, the annotators have no access to the scores of the other annotators, and the process is anonymous. This process gives annotators a chance to correct errors or reconsider their judgments, and has been shown to be very effective in reaching consensus, as reported by Pilehvar et al. (2018). We used a very similar procedure as Pilehvar et al. (2018) to identify the pairs with the most disagreement: for each annotator, we marked the $i$th pair if the rated score $s_{i}$ satisfies $s_{i}\geq\mu_{i}+1.5$ or $s_{i}\leq\mu_{i}-1.5$, where $\mu_{i}$ is the mean of the other annotators’ scores.

Round 3: We compute the average agreement for each annotator (with the other annotators) by measuring the average Spearman’s correlation against all other annotators. We discard the scores of annotators that have shown the least average agreement with all other annotators, while maintaining at least ten annotators per language by the end of this round. The actual process is done in multiple iterations: (S1) we measure the average agreement of each annotator with every other annotator (this corresponds to the APIAA measure, see below); (S2) if we still have more than 10 valid annotators and the lowest average score is higher than in the previous iteration, we remove the lowest-scoring annotator and rerun S1. Table 3 shows the number of annotators at both the start (Round 1) and end (Round 3) of our process for each language.

Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
R1: Start | 13 | 12 | 14 | 12 | 13 | 10 | 11 | 12 | 12 | 12 | 11 | 13
R3: End | 11 | 10 | 13 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 11

Table 3: Number of human annotators. R1 = Annotation Round 1, R3 = Round 3.

We measure the agreement between annotators using two metrics: average pairwise inter-annotator agreement (APIAA) and average mean inter-annotator agreement (AMIAA). Both use Spearman’s correlation ($\rho$) between annotators’ scores; the only difference is how the correlations are averaged. They are computed as follows:

$\textsc{apiaa}=\frac{2\sum_{i,j}\rho(s_{i},s_{j})}{N(N-1)}\,,\qquad\textsc{amiaa}=\frac{\sum_{i}\rho(s_{i},\mu_{i})}{N}\,,\quad\text{where }\mu_{i}=\frac{\sum_{j,j\neq i}s_{j}}{N-1}\qquad(1)$

where $\rho(s_{i},s_{j})$ is the Spearman’s correlation between annotators $i$ and $j$’s scores ($s_{i}$, $s_{j}$) for all pairs in the dataset, and $N$ is the number of annotators. APIAA has been used widely as the standard measure of inter-annotator agreement, including in the original SimLex paper Hill, Reichart, and Korhonen (2015). It simply averages the pairwise Spearman’s correlation between all annotators. On the other hand, AMIAA compares the average Spearman’s correlation of one held-out annotator with the average of all the other $N-1$ annotators, and then averages across all $N$ ‘held-out’ annotators. It smooths individual annotator effects and arguably serves as a better upper bound than APIAA (Gerz et al., 2016; Vulić et al., 2017a; Pilehvar et al., 2018, inter alia).
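Both agreement measures from Eq. (1) are straightforward to compute. The following is a minimal sketch, assuming the ratings are stored as an $N \times P$ NumPy array ($N$ annotators, $P$ pairs); the function names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def apiaa(scores):
    """Average pairwise inter-annotator agreement (left part of Eq. (1)).

    `scores` is an (N x P) array: N annotators rating P word pairs.
    """
    n = scores.shape[0]
    total = sum(spearmanr(scores[i], scores[j])[0]
                for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n * (n - 1))

def amiaa(scores):
    """Average mean inter-annotator agreement (right part of Eq. (1))."""
    n = scores.shape[0]
    rhos = []
    for i in range(n):
        # Leave-one-out mean over the remaining N-1 annotators.
        mu_i = (scores.sum(axis=0) - scores[i]) / (n - 1)
        rhos.append(spearmanr(scores[i], mu_i)[0])
    return float(np.mean(rhos))
```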
Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
Nouns | 0.661 | 0.622 | 0.659 | 0.558 | 0.647 | 0.698 | 0.538 | 0.606 | 0.524 | 0.582 | 0.626 | 0.727
Adjectives | 0.757 | 0.698 | 0.823 | 0.695 | 0.721 | 0.741 | 0.683 | 0.699 | 0.625 | 0.64 | 0.658 | 0.785
Verbs | 0.694 | 0.604 | 0.707 | 0.58 | 0.644 | 0.691 | 0.615 | 0.593 | 0.555 | 0.588 | 0.631 | 0.76
Adverbs | 0.699 | 0.593 | 0.695 | 0.579 | 0.646 | 0.595 | 0.561 | 0.543 | 0.535 | 0.563 | 0.562 | 0.716
Overall | 0.68 | 0.619 | 0.698 | 0.583 | 0.646 | 0.697 | 0.572 | 0.609 | 0.53 | 0.576 | 0.623 | 0.733

Table 4: Average pairwise inter-annotator agreement (APIAA). A score of $0.6$ and above indicates strong agreement.

Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
Nouns | 0.757 | 0.747 | 0.766 | 0.696 | 0.766 | 0.809 | 0.68 | 0.717 | 0.657 | 0.71 | 0.725 | 0.804
Adjectives | 0.800 | 0.789 | 0.865 | 0.79 | 0.792 | 0.831 | 0.754 | 0.792 | 0.737 | 0.743 | 0.686 | 0.811
Verbs | 0.774 | 0.733 | 0.811 | 0.715 | 0.757 | 0.808 | 0.72 | 0.722 | 0.69 | 0.71 | 0.702 | 0.784
Adverbs | 0.749 | 0.693 | 0.777 | 0.697 | 0.748 | 0.729 | 0.645 | 0.655 | 0.608 | 0.671 | 0.623 | 0.716
Overall | 0.764 | 0.742 | 0.794 | 0.715 | 0.76 | 0.812 | 0.699 | 0.723 | 0.667 | 0.703 | 0.71 | 0.792

Table 5: Average mean inter-annotator agreement (AMIAA). A score of $0.6$ and above indicates strong agreement.

We present the respective APIAA and AMIAA scores in Table 4 and Table 5 for all part-of-speech subsets, as well as the agreement for the full datasets. As reported in prior work Gerz et al. (2016); Vulić et al. (2017a), AMIAA scores are typically higher than APIAA scores. Crucially, the results indicate ‘strong agreement’ (across all languages) under both measurements. The languages with the highest annotator agreement were French (fra) and Yue Chinese (yue), while Russian (rus) had the lowest overall IAA scores. These scores, however, still constitute ‘moderately strong agreement’.

### 5.3 Data Analysis

Similarity Score Distributions. Across all languages, the average score (mean $=1.61$, median $=1.1$) is on the lower side of the similarity scale. However, looking closer at the scores of each language in Table 6, we observe notable differences in both the averages and the spread of scores. Notably, French has the highest average of similarity scores (mean $=2.61$, median $=2.5$), while Kiswahili has the lowest average (mean $=1.28$, median $=0.5$). Russian has the lowest spread ($\sigma=1.37$), while Polish has the largest ($\sigma=1.62$). All of the languages are strongly correlated with each other, as shown in Figure 1, where the Spearman’s correlation coefficients are greater than 0.6 for all language pairs. Languages that share the same language family are highly correlated (e.g., cmn-yue, rus-pol, est-fin). In addition, we observe high correlations between English and most other languages, as expected. This is due to the effect of using English as the base/anchor language when creating the datasets. Put simply, if one translates from the same set of English pairs into two languages $L_{1}$ and $L_{2}$, it is highly likely that $L_{1}$ and $L_{2}$ will diverge from English in different ways.
Therefore, the similarity between $L_{1}$-eng and $L_{2}$-eng is expected to be higher than that between $L_{1}$-$L_{2}$, especially if $L_{1}$ and $L_{2}$ are typologically dissimilar languages (e.g., heb-cmn, see Figure 1). This phenomenon is well documented in related prior work (Leviant and Reichart, 2015; Camacho-Collados et al., 2017; Mrkšić et al., 2017; Vulić, Ponzetto, and Glavaš, 2019). While we acknowledge this as a slight artifact of the dataset design, it would otherwise be impossible to construct a semantically aligned and comprehensive dataset across a large number of languages.

Lang: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
$[0,1)$ | 56.99 | 52.01 | 50.95 | 35.01 | 47.83 | 17.69 | 28.07 | 49.36 | 50.21 | 43.96 | 61.39 | 57.89
$[1,2)$ | 8.74 | 19.54 | 17.06 | 30.67 | 21.35 | 20.39 | 35.86 | 17.32 | 22.40 | 22.35 | 11.86 | 7.84
$[2,3)$ | 13.72 | 11.97 | 12.66 | 16.21 | 12.02 | 22.03 | 16.74 | 11.86 | 11.81 | 14.83 | 9.11 | 11.76
$[3,4)$ | 11.60 | 8.32 | 8.16 | 10.22 | 10.17 | 17.64 | 8.47 | 8.95 | 8.10 | 9.38 | 7.10 | 12.98
$[4,5)$ | 6.41 | 5.83 | 6.89 | 6.25 | 5.61 | 12.55 | 6.62 | 7.57 | 5.88 | 6.78 | 6.30 | 6.89
$[5,6]$ | 2.54 | 2.33 | 4.29 | 1.64 | 2.97 | 9.64 | 4.24 | 4.93 | 1.59 | 2.70 | 4.24 | 2.65

Table 6: Fine-grained distribution of concept pairs over different rating intervals in each Multi-SimLex language, reported as percentages. The total number of concept pairs in each dataset is 1,888.

Figure 1: Spearman’s correlation coefficient ($\rho$) of the similarity scores for all languages in Multi-SimLex.

We also report differences in the distribution of word frequencies among the languages in Multi-SimLex. Figure 2 shows six example languages, where each bar segment shows the proportion of words in that language that occur in the given frequency range. For example, the 10K-20K segment of the bars represents the proportion of words in the dataset that occur between ranks 10,000 and 20,000 in the frequency-sorted word list of that language; likewise for the other intervals. Frequency lists for the presented languages are derived from Wikipedia and Common Crawl corpora (the lists were obtained from fastText word vectors, which are sorted by frequency: https://fasttext.cc/docs/en/crawl-vectors.html). While many concept pairs are direct or approximate translations of English pairs, we can see that the frequency distribution does vary across different languages, and is also related to inherent language properties. For instance, in Finnish and Russian, while we use infinitive forms of all verbs, conjugated verb inflections are often more frequent in raw corpora than the corresponding infinitive forms. The variance can also be partially explained by differences in the size of the monolingual corpora used to derive the frequency rankings in the first place: absolute vocabulary sizes are expected to fluctuate across different languages. However, it is also important to note that the datasets contain subsets of lower-frequency and rare words, which can be used for rare word evaluations in multiple languages, in the spirit of the English rare word dataset of Pilehvar et al. (2018).

Figure 2: A distribution over different frequency ranges for words from Multi-SimLex datasets for selected languages. Multi-word expressions are excluded from the analysis.

Cross-Linguistic Differences. Table 7 shows some examples of average similarity scores of English, Spanish, Kiswahili and Welsh concept pairs. Recall that the scores range from 0 to 6: the higher the score, the more similar the participants found the concepts in the pair.
The examples from Table 7 show evidence of both the stability of average similarity scores across languages (_unlikely – friendly_, _book – literature_, and _vanish – disappear_), as well as language-specific differences (_care – caution_). Some differences in similarity scores seem to group languages into clusters. For example, the word pair _regular – average_ has an average similarity score of 4.0 and 4.1 in English and Spanish, respectively, whereas in Kiswahili and Welsh the average similarity score of this pair is 0.5 and 0.8. We analyze this phenomenon in more detail in §5.4.

Word Pair | POS | eng | spa | swa | cym
---|---|---|---|---|---
_Similar average rating_ | | | | |
unlikely – friendly | ADV | 0 | 0 | 0 | 0
book – literature | N | 2.5 | 2.3 | 2.1 | 2.3
vanish – disappear | V | 5.2 | 5.3 | 5.5 | 5.3
_Different average rating_ | | | | |
regular – average | ADJ | 4 | 4.1 | 0.5 | 0.8
care – caution | N | 4.1 | 5.7 | 0.2 | 3.1
_One language higher_ | | | | |
large – big | ADJ | 5.9 | 2.7 | 3.8 | 3.8
bank – seat | N | 0 | 5.1 | 0 | 0.1
sunset – evening | N | 1.6 | 1.5 | 5.5 | 2.8
purely – completely | ADV | 2.3 | 2.3 | 1.1 | 5.4
_One language lower_ | | | | |
woman – wife | N | 0.9 | 2.9 | 4.1 | 4.8
amazingly – fantastically | ADV | 5.1 | 0.4 | 4.1 | 4.1
wonderful – terrific | ADJ | 5.3 | 5.4 | 0.9 | 5.7
promise – swear | V | 4.8 | 5.3 | 4.3 | 0

Table 7: Examples of concept pairs with their similarity scores from four languages. For brevity, only the original English concept pair is shown, but note that each pair is translated into all target languages, see §5.1.

There are also examples for each of the four languages having a notably higher or lower similarity score for the same concept pair than the three other languages. For example, _large – big_ in English has an average similarity score of 5.9, whereas Spanish, Kiswahili and Welsh speakers rate the closest concept pair in their native language to have a similarity score between 2.7 and 3.8. What is more, _woman – wife_ receives an average similarity of 0.9 in English, 2.9 in Spanish, and greater than 4.0 in Kiswahili and Welsh. The examples from Spanish include _banco – asiento_ (_bank – seat_), which receives an average similarity score of 5.1, while in the other three languages the similarity score for this word pair does not exceed 0.1. At the same time, the average similarity score of _espantosamente – fantásticamente_ (_amazingly – fantastically_) is much lower in Spanish (0.4) than in the other languages (4.1 – 5.1). In Kiswahili, an example of a word pair with a higher similarity score than the rest would be _machweo – jioni_ (_sunset – evening_), with an average score of 5.5, while the other languages receive 2.8 or less; a notably lower similarity score is given to _wa ajabu – mkubwa sana_ (_wonderful – terrific_), getting 0.9, while the other languages receive 5.3 or more. Welsh examples include _yn llwyr – yn gyfan gwbl_ (_purely – completely_), which scores 5.4 among Welsh speakers but 2.3 or less in the other languages, while _addo – tyngu_ (_promise – swear_) is rated 0 by all Welsh annotators, but 4.3 or more on average in the other three languages.

There can be several explanations for the differences in similarity scores across languages, including but not limited to cultural context, polysemy, metonymy, translation, regional and generational differences, and, most commonly, the fact that words and meanings do not exactly map onto each other across languages.
For example, it is likely that the other three languages do not have two separate words for describing the concepts in the pair _big – large_, and the translators had to opt for similar lexical items that were more distant in meaning, explaining why in English the concept pair received a much higher average similarity score than in the other languages. A similar mapping problem across languages arose in the Welsh concept pair _yn llwyr – yn gyfan gwbl_, where Welsh speakers agreed that the two concepts are very similar. When asked, bilingual speakers considered the two Welsh concepts more similar than the English equivalents _purely – completely_, potentially explaining why a higher average similarity score was reached in Welsh. The example of _woman – wife_ can illustrate cultural differences or another translation-related issue: the word ‘wife’ does not exist as a separate word in some languages (for example, Estonian), and therefore had to be described using other words, affecting the comparability of the similarity scores. This was also the case with the _football – soccer_ concept pair.

The pair _bank – seat_ demonstrates the effect of polysemy mismatches across languages: while ‘bank’ has two different meanings in English, neither of them is similar to the word ‘seat’; in Spanish, however, ‘_banco_’ can mean ‘bank’, but it can also mean ‘bench’. Quite naturally, Spanish speakers gave the pair _banco – asiento_ a higher similarity score than the speakers of languages where this polysemy does not occur. An example of metonymy affecting the average similarity score can be seen in the Kiswahili version of the word pair _sunset – evening_ (_machweo – jioni_). The average similarity score for this pair is much higher in Kiswahili, likely because the word ‘sunset’ can act as a metonym of ‘evening’. The low similarity score of _wonderful – terrific_ in Kiswahili (_wa ajabu – mkubwa sana_) can be explained by the fact that while ‘_mkubwa sana_’ can be used as ‘terrific’ in Kiswahili, it technically means ‘very big’, adding to the examples of translation- and mapping-related effects. The word pair _amazingly – fantastically_ (_espantosamente – fantásticamente_) brings out another translation-related problem: the accuracy of the translation. While ‘_espantosamente_’ could arguably be translated to ‘amazingly’, its more common meanings include ‘frightfully’, ‘terrifyingly’, and ‘shockingly’, explaining why the average similarity score differs from the rest of the languages. Another problem was brought out by _addo – tyngu_ (_promise – swear_) in Welsh, where ‘_tyngu_’ may not have been a commonly used or even a known word choice for annotators, pointing to potential regional or generational differences in language use.

Language | Word Pair | POS | Rating all participants agree with
---|---|---|---
eng | trial – test | N | 4-5
swa | archbishop – bishop | N | 4-5
spa, cym | start – begin | V | 5-6
eng | smart – intelligent | ADJ | 5-6
eng, spa | quick – rapid | ADJ | 5-6
spa | circumstance – situation | N | 5-6
cym | football – soccer | N | 5-6
swa | football – soccer | N | 6
swa | pause – wait | V | 6
swa | money – cash | N | 6
cym | friend – buddy | N | 6

Table 8: Examples of concept pairs, from four languages, on which all participants show strong agreement in their rating.

Table 8 presents examples of concept pairs from English, Spanish, Kiswahili, and Welsh on which the participants agreed the most.
For example, in English all participants rated the similarity of _trial – test_ to be 4 or 5. In Spanish and Welsh, all participants rated _start – begin_ to correspond to a score of 5 or 6. In Kiswahili, _money – cash_ received a similarity rating of 6 from every participant. While there are numerous examples of concept pairs in these languages where the participants agreed on a similarity score of 4 or higher, it is worth noting that none of these languages had a single pair where all participants agreed on a 1-2, 2-3, or 3-4 similarity rating. Interestingly, in English all pairs on which all the participants agreed on a 5-6 similarity score were adjectives.

### 5.4 Effect of Language Affinity on Similarity Scores

Based on the analysis in Figure 1 and an inspection of the anecdotal examples in the previous section, it is evident that the correlation between similarity scores across languages is not random. To corroborate this intuition, we visualize the vectors of similarity scores for each language by reducing their dimensionality to 2 via Principal Component Analysis (Pearson, 1901). The resulting scatter plot in Figure 3 reveals that languages from the same family or branch have similar patterns in the scores. In particular, Russian and Polish (both Slavic), Finnish and Estonian (both Uralic), Cantonese and Mandarin Chinese (both Sinitic), and Spanish and French (both Romance) are all neighbors.

Figure 3: PCA of the language vectors resulting from the concatenation of similarity judgments for all pairs.

In order to quantify exactly the effect of language affinity on the similarity scores, we run correlation analyses between these scores and language features. We extract feature vectors from URIEL (Littell et al., 2017), a massively multilingual typological database that collects and normalizes information compiled by grammarians and field linguists about the world’s languages. In particular, we focus on information about geography (the areas where the language speakers are concentrated), family (the phylogenetic tree each language belongs to), and typology (including syntax, phonological inventory, and phonology). (For the extraction of these features, we employed lang2vec: github.com/antonisa/lang2vec.) Moreover, we consider typological representations of languages that are not manually crafted by experts, but rather learned from texts: Malaviya, Neubig, and Littell (2017) proposed to construct such representations by training language-identifying vectors end-to-end as part of neural machine translation models.

The vector of similarity judgments and the vector of linguistic features for a given language have different dimensionalities. Hence, we first construct a distance matrix for each vector space, such that both columns and rows are language indices, and each cell value is the cosine distance between the vectors of the corresponding language pair. Given a set $L$ of languages, each resulting matrix $S$ has dimensionality $\mathbb{R}^{|L|\times|L|}$ and is symmetric. To estimate the correlation between the matrix for similarity judgments and each of the matrices for linguistic features, we run a Mantel test (Mantel, 1967), a non-parametric statistical test based on matrix permutations that takes into account inter-dependencies among pairwise distances. A minimal sketch of this procedure is given below.
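The distance-matrix construction and the permutation-based Mantel test can be sketched as follows. This is a minimal illustration (not the exact implementation used for Table 9), assuming each row of `judgments` and `features` holds the per-language similarity-score vector and URIEL feature vector, respectively.

```python
import numpy as np

def cosine_distance_matrix(vectors):
    """Pairwise cosine distances between per-language row vectors."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return 1.0 - v @ v.T

def mantel(a, b, permutations=9999, seed=0):
    """Permutation-based Mantel test between two symmetric distance matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(a, k=1)        # upper triangle, excl. diagonal
    r_obs = np.corrcoef(a[iu], b[iu])[0, 1]  # observed correlation
    n = a.shape[0]
    perm_rs = np.empty(permutations)
    for t in range(permutations):
        p = rng.permutation(n)               # permute rows and columns jointly
        perm_rs[t] = np.corrcoef(a[p][:, p][iu], b[iu])[0, 1]
    p_val = (np.sum(perm_rs >= r_obs) + 1) / (permutations + 1)
    z = (r_obs - perm_rs.mean()) / perm_rs.std()
    return r_obs, p_val, z

# Usage: r, p, z = mantel(cosine_distance_matrix(judgments),
#                         cosine_distance_matrix(features))
```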
The results of the Mantel test, reported in Table 9, show that there exist statistically significant correlations between similarity judgments and geography, family, and syntax, given that $p<0.05$ and $z>1.96$. The correlation coefficient is particularly strong for geography ($r=0.647$) and syntax ($r=0.649$). The former result is intuitive, because languages in contact easily borrow and loan lexical units, and cultural interactions may result in similar cognitive categorizations. The result for syntax, instead, cannot be explained so easily, as formal properties of language do not affect lexical semantics. Instead, we conjecture that, while no causal relation is present, both syntactic features and similarity judgments might be linked to a common explanatory variable (such as geography). In fact, several syntactic properties are not uniformly spread across the globe. For instance, languages with Verb–Object–Subject word order are mostly concentrated in Oceania (Dryer, 2013). In turn, geographical proximity leads to similar judgment patterns, as mentioned above. On the other hand, we find no correlation with phonology and inventory, as expected, nor with the bottom-up typological features from Malaviya, Neubig, and Littell (2017).

Features | Dimension | Mantel r | Mantel p | Mantel z
---|---|---|---|---
geography | 299 | 0.647 | 0.007* | 3.443
family | 3718 | 0.329 | 0.023* | 2.711
syntax | 103 | 0.649 | 0.007* | 3.787
inventory | 158 | 0.155 | 0.459 | 0.782
phonology | 28 | 0.397 | 0.046 | 1.943
Malaviya, Neubig, and Littell (2017) | 512 | -0.431 | 0.264 | -1.235

Table 9: Mantel test on the correlation between similarity judgments from Multi-SimLex and linguistic features from typological databases.

## 6 Cross-Lingual Multi-SimLex Datasets

A crucial advantage of having semantically aligned monolingual datasets across different languages is the potential to create cross-lingual semantic similarity datasets. Such datasets allow for probing the quality of cross-lingual representation learning algorithms Camacho-Collados et al. (2017); Conneau et al. (2018a); Chen and Cardie (2018); Doval et al. (2018); Ruder, Vulić, and Søgaard (2019); Conneau and Lample (2019); Ruder, Søgaard, and Vulić (2019) as an intrinsic evaluation task. However, the cross-lingual datasets previous work relied upon Camacho-Collados et al. (2017) were limited to a homogeneous set of high-resource languages (e.g., English, German, Italian, Spanish) and a small number of concept pairs (all less than 1K pairs). We address both problems by 1) using a typologically more diverse language sample, and 2) relying on a substantially larger English dataset as a source for the cross-lingual datasets: 1,888 pairs in this work versus 500 pairs in the work of Camacho-Collados et al. (2017). As a result, each of our cross-lingual datasets contains a substantially larger number of concept pairs, as shown in Table 11.

The cross-lingual Multi-SimLex datasets are constructed automatically, leveraging word pair translations and annotations collected in all 12 languages. This yields a total of 66 cross-lingual datasets, one for each possible combination of languages. Table 11 provides the final number of concept pairs, which lies between 2,031 and 3,480 pairs for each cross-lingual dataset, whereas Table 10 shows some sample pairs with their corresponding similarity scores. The automatic creation and verification of the cross-lingual datasets closely follows the procedure first outlined by Camacho-Collados, Pilehvar, and Navigli (2015) and later adopted by Camacho-Collados et al. (2017) (for semantic similarity) and Vulić, Ponzetto, and Glavaš (2019) (for graded lexical entailment). First, given two languages, we intersect their aligned concept pairs obtained through translation.
For instance, starting from the aligned pairs attroupement – foule in French and rahvasumm – rahvahulk in Estonian, we construct two cross-lingual pairs attroupement – rahvahulk and rahvasumm – foule. The scores of cross-lingual pairs are then computed as averages of the two corresponding monolingual scores. Finally, in order to filter out concept pairs whose semantic meaning was not preserved during this operation, we retain only cross-lingual pairs for which the corresponding monolingual scores $(s_{s},s_{t})$ differ by at most one fifth of the full scale (i.e., $|s_{s}-s_{t}|\leq 1.2$). This heuristic mitigates the noise due to cross-lingual semantic shifts Camacho-Collados et al. (2017); Vulić, Ponzetto, and Glavaš (2019). We refer the reader to the work of Camacho-Collados, Pilehvar, and Navigli (2015) for a detailed technical description of the procedure; a minimal sketch is also provided after Table 11.

Pair | Concept-1 | Concept-2 | Score
---|---|---|---
cym-eng | rhyddid | liberty | 5.37
cmn-est | 可能 | optimistlikult | 0.83
cym-pol | plentynaidd | niemądry | 2.15
fin-swa | psykologia | sayansi | 2.20
swa-eng | kutimiza | accomplish | 5.24
eng-fra | normally | quotidiennement | 2.41
cmn-fra | 有弹性 | flexible | 4.08
fin-spa | auto | bicicleta | 0.85
fin-spa | tietämättömyys | inteligencia | 0.55
cmn-yue | 使灰心 | 使气馁 | 4.78
spa-fra | ganador | candidat | 2.15
cym-swa | sefyllfa | mazingira | 1.90
est-yue | takso | 巴士 | 2.08
est-spa | armee | legión | 3.25
eng-fin | orange | sitrushedelmä | 3.43
fin-est | halveksuva | põlglik | 5.55
spa-pol | palabra | wskazówka | 0.55
cmn-cym | 学生 | disgybl | 4.45
pol-swa | prawdopodobnie | uwezekano | 4.05
pol-eng | grawitacja | meteor | 0.27

Table 10: Example concept pairs with their scores from a selection of cross-lingual Multi-SimLex datasets.

To assess the quality of the resulting cross-lingual datasets, we have conducted a verification experiment similar to that of Vulić, Ponzetto, and Glavaš (2019). We randomly sampled 300 concept pairs in the English-Spanish, English-French, and English-Mandarin cross-lingual datasets. Subsequently, we asked bilingual native speakers to provide similarity judgments for each pair. The Spearman’s correlation score $\rho$ between automatically induced and manually collected ratings achieves $\rho\geq 0.90$ on all samples, which confirms the viability of the automatic construction procedure.

| cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
cmn | 1,888 | – | – | – | – | – | – | – | – | – | – | –
cym | 3,085 | 1,888 | – | – | – | – | – | – | – | – | – | –
eng | 3,151 | 3,380 | 1,888 | – | – | – | – | – | – | – | – | –
est | 3,188 | 3,305 | 3,364 | 1,888 | – | – | – | – | – | – | – | –
fin | 3,137 | 3,274 | 3,352 | 3,386 | 1,888 | – | – | – | – | – | – | –
fra | 2,243 | 2,301 | 2,284 | 2,787 | 2,682 | 1,888 | – | – | – | – | – | –
heb | 3,056 | 3,209 | 3,274 | 3,358 | 3,243 | 2,903 | 1,888 | – | – | – | – | –
pol | 3,009 | 3,175 | 3,274 | 3,310 | 3,294 | 2,379 | 3,201 | 1,888 | – | – | – | –
rus | 3,032 | 3,196 | 3,222 | 3,339 | 3,257 | 2,219 | 3,226 | 3,209 | 1,888 | – | – | –
spa | 3,116 | 3,205 | 3,318 | 3,312 | 3,256 | 2,645 | 3,256 | 3,250 | 3,189 | 1,888 | – | –
swa | 2,807 | 2,926 | 2,828 | 2,845 | 2,900 | 2,031 | 2,775 | 2,819 | 2,855 | 2,811 | 1,888 | –
yue | 3,480 | 3,062 | 3,099 | 3,080 | 3,063 | 2,313 | 3,005 | 2,950 | 2,966 | 3,053 | 2,821 | 1,888

Table 11: The sizes of all monolingual (main diagonal) and cross-lingual datasets.
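The construction heuristic described above is easy to reproduce. The following is a minimal sketch under the assumption that each monolingual dataset is stored as a mapping from an English anchor pair to its translated pair and mean score; the function and variable names are illustrative.

```python
def build_crosslingual(mono_src, mono_tgt, max_diff=1.2):
    """Derive a cross-lingual dataset from two aligned monolingual ones.

    Each input maps an English anchor pair (w1_eng, w2_eng) to
    ((w1, w2), mean_score) in the respective target language.
    """
    crosslingual = {}
    for eng_pair in mono_src.keys() & mono_tgt.keys():
        (s1, s2), score_s = mono_src[eng_pair]
        (t1, t2), score_t = mono_tgt[eng_pair]
        # Keep only pairs whose monolingual scores differ by at most
        # one fifth of the full 0-6 scale: |s_s - s_t| <= 1.2.
        if abs(score_s - score_t) > max_diff:
            continue
        avg = (score_s + score_t) / 2.0
        # Two cross-lingual pairs per anchor pair, e.g.,
        # attroupement-rahvahulk and rahvasumm-foule.
        crosslingual[(s1, t2)] = avg
        crosslingual[(t1, s2)] = avg
    return crosslingual
```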
Figure 4: (a) Rating distribution and (b) distribution of pairs over the four POS classes in cross-lingual Multi-SimLex datasets, averaged across each of the 66 language pairs ($y$ axes plot percentages, as the total number of concept pairs varies across different cross-lingual datasets). Minimum and maximum percentages for each rating interval and POS class are also plotted.

Score and Class Distributions. A summary of score and class distributions across all 66 cross-lingual datasets is provided in Figure 4(a) and Figure 4(b), respectively. First, it is evident that the distribution over the four POS classes largely adheres to that of the original monolingual Multi-SimLex datasets, and that the variance is quite low: e.g., the eng-fra dataset contains the lowest proportion of nouns (49.21%) and the highest proportion of verbs (27.1%), adjectives (15.28%), and adverbs (8.41%). On the other hand, the distribution over similarity intervals in Figure 4(a) shows a much greater variance. This is again expected, as this pattern resurfaces in the monolingual datasets (see Table 6). It is also evident that the data are skewed towards lower-similarity concept pairs. However, due to the joint size of all cross-lingual datasets (see Table 11), even the least represented intervals contain a substantial number of concept pairs. For instance, the rus-yue dataset contains the fewest highly similar concept pairs (in the interval $[4,6]$) of all 66 cross-lingual datasets. Nonetheless, the absolute number of pairs (138) in that interval for rus-yue is still substantial. If needed, this makes it possible to create smaller datasets which are balanced across the similarity spectrum through sub-sampling.

## 7 Monolingual Evaluation of Representation Learning Models

After the numerical and qualitative analyses of the Multi-SimLex datasets provided in §§ 5.3–5.4, we now benchmark a series of representation learning models on the new evaluation data. We evaluate standard static word embedding algorithms such as fastText Bojanowski et al. (2017), as well as a range of more recent text encoders pretrained with language modeling objectives, such as multilingual BERT (Devlin et al., 2019). These experiments provide strong baseline scores on the new Multi-SimLex datasets and offer a first large-scale analysis of pretrained encoders on word-level semantic similarity across diverse languages. In addition, the experiments now enabled by Multi-SimLex aim to answer several important questions. (Q1) Is it viable to extract high-quality word-level representations from pretrained encoders receiving subword-level tokens as input? Are such representations competitive with standard static word-level embeddings? (Q2) What are the implications of monolingual pretraining versus (massively) multilingual pretraining for performance? (Q3) Do lightweight unsupervised post-processing techniques improve word representations consistently across different languages? (Q4) Can we effectively transfer available external lexical knowledge from resource-rich languages to resource-lean languages in order to learn word representations that distinguish between true similarity and conceptual relatedness (see the discussion in §2.3)?

### 7.1 Models in Comparison

Static Word Embeddings in Different Languages. First, we evaluate a standard method for inducing non-contextualized (i.e., static) word embeddings across a plethora of different languages: fastText (ft) vectors Bojanowski et al.
(2017) are currently the most popular and robust choice, given 1) the availability of pretrained vectors in a large number of languages Grave et al. (2018), trained on large Common Crawl (CC) plus Wikipedia (Wiki) data, and 2) their superior performance across a range of NLP tasks Mikolov et al. (2018). In fact, fastText is an extension of the standard word-level CBOW and skip-gram word2vec models Mikolov et al. (2013) that takes into account subword-level information, i.e., the constituent character n-grams of each word Zhu, Vulić, and Korhonen (2019). For this reason, fastText is also better suited for modeling rare words and morphologically rich languages. (We have also trained standard word-level CBOW and skip-gram with negative sampling (SGNS) on full Wikipedia dumps for several languages, but our preliminary experiments verified that they under-perform compared to fastText. This finding is consistent with other recent studies demonstrating the usefulness of subword-level information Vania and Lopez (2017); Mikolov et al. (2018); Zhu, Vulić, and Korhonen (2019); Zhu et al. (2019). We therefore do not report the results with CBOW and SGNS for brevity.)

We rely on $300$-dimensional ft word vectors trained on CC+Wiki and available online for 157 languages (https://fasttext.cc/docs/en/crawl-vectors.html). The word vectors for all languages are obtained by CBOW with position-weights, with character n-grams of length 5, a window of size 5, 10 negative examples, and 10 training epochs. We also probe another (older) collection of ft vectors, pretrained on full Wikipedia dumps of each language (https://fasttext.cc/docs/en/pretrained-vectors.html). These vectors are 300-dimensional, trained with the skip-gram objective for 5 epochs, with 5 negative examples, a window size set to 5, and relying on all character n-grams from length 3 to 6. Following prior work, we trim the vocabularies for all languages to the 200K most frequent words and compute representations for multi-word expressions by averaging the vectors of their constituent words.

Unsupervised Post-Processing. Further, we consider a variety of unsupervised post-processing steps that can be applied post-training on top of any pretrained word embedding space, without any external lexical semantic resource. So far, the usefulness of such methods has been verified only on English, through benchmarks for lexical semantics and sentence-level tasks Mu, Bhat, and Viswanath (2018). In this paper, we assess whether unsupervised post-processing is beneficial also in other languages. To this end, we apply the following post-hoc transformations on the initial word embeddings:

1) Mean centering (mc) is applied after unit length normalization to ensure that all vectors have a zero mean; it is commonly applied in data mining and analysis Bro and Smilde (2003); van den Berg et al. (2006).

2) All-but-the-top (abtt) Mu, Bhat, and Viswanath (2018); Tang, Mousavi, and de Sa (2019) eliminates the common mean vector and a few top dominating directions (according to PCA) from the input distributional word vectors, since they do not contribute towards distinguishing the actual semantic meaning of different words. The method has a single (tunable) hyper-parameter $d_{A}$, which denotes the number of dominating directions to remove from the initial representations.
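For concreteness, mean centering and abtt can be sketched in a few lines. This is a minimal illustration of the two transformations (not the exact implementation used in our experiments), assuming `X` is a $V \times d$ NumPy matrix of word vectors:

```python
import numpy as np

def mean_center(X):
    """mc: unit-length normalization followed by dimension-wise mean removal."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return X - X.mean(axis=0, keepdims=True)

def abtt(X, d_a=3):
    """All-but-the-top: remove the d_a dominating PCA directions."""
    X = mean_center(X)
    # Rows of `components` are the principal directions of the centered space.
    _, _, components = np.linalg.svd(X, full_matrices=False)
    top = components[:d_a]                # (d_a x d)
    return X - (X @ top.T) @ top          # project out the top directions
```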
Previous work has verified the usefulness of abtt in several English lexical semantic tasks such as semantic similarity, word analogies, and concept categorization, as well as in sentence-level text classification tasks Mu, Bhat, and Viswanath (2018).

3) uncovec Artetxe et al. (2018) adjusts the similarity order of an arbitrary input word embedding space, and can emphasize either syntactic or semantic information in the transformed vectors. In short, it transforms the input space $\bm{X}$ into an adjusted space $\bm{X}\bm{W}_{\alpha}$ through a linear map $\bm{W}_{\alpha}$ controlled by a single hyper-parameter $\alpha$. The $n^{\text{th}}$-order similarity transformation of the input word vector space $\bm{X}$ (for which $n=1$) can be obtained as $\bm{M}_{n}(\bm{X})=\bm{M}_{1}(\bm{X}\bm{W}_{(n-1)/2})$, with $\bm{W}_{\alpha}=\bm{Q}\bm{\Gamma}^{\alpha}$, where $\bm{Q}$ and $\bm{\Gamma}$ are the matrices obtained via the eigendecomposition $\bm{X}^{T}\bm{X}=\bm{Q}\bm{\Gamma}\bm{Q}^{T}$. $\bm{\Gamma}$ is a diagonal matrix containing the eigenvalues of $\bm{X}^{T}\bm{X}$; $\bm{Q}$ is an orthogonal matrix with the eigenvectors of $\bm{X}^{T}\bm{X}$ as columns. While the motivation for the uncovec method does originate from adjusting discrete similarity orders, note that $\alpha$ is in fact a continuous real-valued hyper-parameter which can be carefully tuned. For more technical details we refer the reader to the original work of Artetxe et al. (2018).

As mentioned, all post-processing methods can be seen as unsupervised retrofitting methods that, given an arbitrary input vector space $\bm{X}$, produce a perturbed/transformed output vector space $\bm{X}^{\prime}$; however, unlike common retrofitting methods Faruqui et al. (2015); Mrkšić et al. (2017), the perturbation is completely unsupervised (i.e., self-contained) and does not inject any external (semantic similarity oriented) knowledge into the vector space. Note that different perturbations can also be stacked: e.g., we can apply uncovec and then use abtt on top of the output uncovec vectors. When using uncovec and abtt we always length-normalize and mean-center the data first (i.e., we apply the simple mc normalization). Finally, we tune the two hyper-parameters $d_{A}$ (for abtt) and $\alpha$ (for uncovec) on the English Multi-SimLex and use the same values on the datasets of all other languages; we report results with $d_{A}=3$ or $d_{A}=10$, and $\alpha=-0.3$.

Contextualized Word Embeddings. We also evaluate the capacity of unsupervised pretraining architectures based on language modeling objectives to reason over lexical semantic similarity. To the best of our knowledge, our article is the first study performing such analyses. State-of-the-art models such as bert Devlin et al. (2019), xlm Conneau and Lample (2019), or roberta Liu et al. (2019b) are typically very deep neural networks based on the Transformer architecture Vaswani et al. (2017). They receive subword-level tokens as inputs (such as WordPieces Schuster and Nakajima (2012)) to tackle data sparsity. As output, they return contextualized embeddings: dynamic representations for words in context. To represent words or multi-word expressions through a pretrained model, we follow prior work Liu et al.
(2019a) and compute an input item’s representation by 1) feeding it to a pretrained model in isolation; then 2) averaging the $H$ last hidden representations for each of the item’s constituent subwords; and finally 3) averaging the resulting subword representations to produce the final $d$-dimensional representation, where $d$ is the embedding and hidden-layer dimensionality (e.g., $d=768$ with bert). We opt for this approach due to its proven viability and simplicity Liu et al. (2019a), as it does not require any additional corpora to condition the induction of contextualized embeddings. (We also tested another encoding method where we fed pairs instead of single words/concepts into the pretrained encoder. The rationale is that the other concept in the pair can be used as a disambiguation signal. However, this method consistently led to sub-par performance across all experimental runs.) Other ways to extract the representations from pretrained models Aldarmaki and Diab (2019); Wu et al. (2019); Cao, Kitaev, and Klein (2020) are beyond the scope of this work, and we will experiment with them in the future.

In other words, we treat each pretrained encoder enc as a black-box function that encodes a single word or a multi-word expression $x$ in each language into a $d$-dimensional contextualized representation $\mathbf{x}_{\textsc{enc}}\in\mathbb{R}^{d}=\textsc{enc}(x)$ (e.g., $d=768$ with bert). As multilingual pretrained encoders, we experiment with the multilingual bert model (m-bert) Devlin et al. (2019) and xlm (Conneau and Lample, 2019). m-bert is pretrained on monolingual Wikipedia corpora of 102 languages (comprising all Multi-SimLex languages) with a 12-layer Transformer network, and yields $768$-dimensional representations. Since the concept pairs in Multi-SimLex are lowercased, we use the uncased version of m-bert (https://github.com/google-research/bert/blob/master/multilingual.md). m-bert comprises all Multi-SimLex languages, and its evident ability to perform cross-lingual transfer Pires, Schlinger, and Garrette (2019); Wu and Dredze (2019); Wang et al. (2020) also makes it a convenient baseline model for the cross-lingual experiments later in §8. The second multilingual model we consider, xlm-100 (https://github.com/facebookresearch/XLM), is pretrained on Wikipedia dumps of 100 languages, and encodes each concept into a $1,280$-dimensional representation. In contrast to m-bert, xlm-100 drops the next-sentence prediction objective and adds a cross-lingual masked language modeling objective. For both encoders, the representations of each concept are computed as averages over the last $H=4$ hidden layers in all experiments, as suggested by Wu et al. (2019). (In our preliminary experiments on several language pairs, we have also verified that this choice is superior to: a) using the output of only the last hidden layer (i.e., $H=1$), and b) averaging over all hidden layers (i.e., $H=12$ for the bert-base architecture). Likewise, using the special prepended ‘[CLS]’ token rather than the constituent subwords to encode a concept also led to much worse performance across the board.)

Besides m-bert and xlm, covering multiple languages, we also analyze the performance of “language-specific” bert and xlm models for the languages where they are available: Finnish, Spanish, English, Mandarin Chinese, and French. The main goal of this comparison is to study the differences in performance between multilingual “one-size-fits-all” encoders and language-specific encoders.
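The subword-averaging scheme described above can be sketched with the Transformers library mentioned below. This is a minimal illustration, assuming the publicly available bert-base-multilingual-uncased checkpoint; it is not our exact evaluation code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def encode(expression, last_h=4):
    """Encode a word or MWE in isolation into a d-dimensional vector.

    Averages the last `last_h` hidden layers for each subword, then
    averages the subword vectors; [CLS] and [SEP] are excluded.
    """
    inputs = tokenizer(expression, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states  # tuple of (1, T, d)
    layers = torch.stack(hidden_states[-last_h:])      # (H, 1, T, d)
    subword_vecs = layers.mean(dim=0)[0, 1:-1]         # drop [CLS]/[SEP]
    return subword_vecs.mean(dim=0)                    # final (d,) vector
```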
For all experiments, we rely on the pretrained models released in the Transformers repository Wolf et al. (2019) (github.com/huggingface/transformers; the full list of currently supported pretrained encoders is available at huggingface.co/models). Unsupervised post-processing steps devised for static word embeddings (i.e., mean centering, abtt, uncovec) can also be applied on top of contextualized embeddings if we predefine a vocabulary of word types $V$ that will be represented in a word vector space $\mathbf{X}$. We construct such a $V$ for each language as the intersection of the word types covered by the corresponding CC+Wiki fastText vectors and the (single-word or multi-word) expressions appearing in the corresponding Multi-SimLex dataset.

Finally, note that it is not feasible to evaluate the full range of available pretrained encoders within the scope of this work. Our main intention is to provide the first set of baseline results on Multi-SimLex by benchmarking a sample of the most popular encoders, while also investigating other important questions such as the performance of static versus contextualized word embeddings, or multilingual versus language-specific pretraining. Another purpose of the experiments is to outline the wide potential and applicability of the Multi-SimLex datasets for multilingual and cross-lingual representation learning evaluation.

Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
fastText (CC+Wiki) | (272) | (151) | (12) | (319) | (347) | (43) | (66) | (326) | (291) | (46) | (222) | (–)
(1) ft:init | .534 | .363 | .528 | .469 | .607 | .578 | .450 | .405 | .422 | .511 | .439 | –
(2) ft:+mc | .539 | .393 | .535 | .473 | .621 | .584 | .480 | .412 | .424 | .516 | .469 | –
(3) ft:+abtt (-3) | .557 | .389 | .536 | .495 | .642 | .610 | .501 | .427 | .459 | .523 | .473 | –
(4) ft:+abtt (-10) | .583 | .384 | .551 | .476 | .651 | .623 | .503 | .455 | .500 | .542 | .462 | –
(5) ft:+uncovec | .572 | .387 | .550 | .465 | .642 | .595 | .501 | .435 | .437 | .525 | .437 | –
(1)+(2)+(5)+(3) | .574 | .386 | .549 | .476 | .655 | .604 | .503 | .442 | .452 | .528 | .432 | –
(1)+(2)+(5)+(4) | .577 | .376 | .542 | .455 | .652 | .613 | .510 | .466 | .491 | .540 | .424 | –
fastText (Wiki) | (429) | (282) | (6) | (343) | (345) | (73) | (62) | (354) | (343) | (57) | (379) | (677)
(1) ft:init | .315 | .318 | .436 | .400 | .575 | .444 | .428 | .370 | .359 | .432 | .332 | .376
(2) ft:+mc | .373 | .337 | .445 | .404 | .583 | .463 | .447 | .383 | .378 | .447 | .373 | .427
(3) ft:+abtt (-3) | .459 | .343 | .453 | .404 | .584 | .487 | .447 | .387 | .394 | .456 | .423 | .429
(4) ft:+abtt (-10) | .496 | .323 | .460 | .385 | .581 | .494 | .460 | .401 | .400 | .477 | .406 | .399
(5) ft:+uncovec | .518 | .328 | .469 | .375 | .568 | .483 | .449 | .389 | .387 | .469 | .386 | .394
(1)+(2)+(5)+(3) | .526 | .323 | .470 | .369 | .564 | .495 | .448 | .392 | .392 | .473 | .388 | .388
(1)+(2)+(5)+(4) | .526 | .307 | .471 | .355 | .548 | .495 | .450 | .394 | .394 | .476 | .382 | .396
m-bert | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0)
(1) m-bert:init | .408 | .033 | .138 | .085 | .162 | .115 | .104 | .069 | .085 | .145 | .125 | .404
(2) m-bert:+mc | .458 | .044 | .256 | .122 | .173 | .183 | .128 | .097 | .123 | .203 | .128 | .469
(3) m-bert:+abtt (-3) | .487 | .056 | .321 | .137 | .200 | .287 | .144 | .126 | .197 | .299 | .135 | .492
(4) m-bert:+abtt (-10) | .456 | .056 | .329 | .122 | .164 | .306 | .121 | .126 | .183 | .315 | .136 | .467
(5) m-bert:+uncovec | .464 | .063 | .317 | .144 | .213 | .288 | .164 | .144 | .198 | .287 | .143 | .464
(1)+(2)+(5)+(3) | .464 | .083 | .326 | .130 | .201 | .304 | .149 | .122 | .199 | .295 | .148 | .456
(1)+(2)+(5)+(4) | .444 | .086 | .326 | .112 | .179 | .305 | .135 | .127 | .187 | .285 | .119 | .447

Table 12: A summary of results (Spearman’s $\rho$ correlation scores) on the full monolingual Multi-SimLex datasets for 12 languages. We benchmark fastText word embeddings trained on two different corpora (CC+Wiki and only Wiki) as well as the multilingual m-bert model (see §7.1).
Results with the initial word vectors are reported (i.e., without any unsupervised post-processing), as well as with different unsupervised post-processing methods, described in §7.1. The language codes are provided in Table 1. The numbers in the parentheses (gray rows) refer to the number of OOV concepts excluded from the computation. The highest scores for each language and per model are in bold.

### 7.2 Results and Discussion

The results we report are Spearman’s $\rho$ coefficients of the correlation between the ranks derived from the scores of the evaluated models and the human scores provided in each Multi-SimLex dataset. The main results with static and contextualized word vectors for all test languages are summarized in Table 12. The scores reveal several interesting patterns, and also pinpoint the main challenges for future work.

State-of-the-Art Representation Models. The absolute scores of CC+Wiki ft, Wiki ft, and m-bert are not directly comparable, because these models have different coverage. In particular, Multi-SimLex contains some out-of-vocabulary (OOV) words whose static ft embeddings are not available. (We acknowledge that it is possible to approximate word-level representations of OOVs with ft by summing the constituent n-gram embeddings, as proposed by Bojanowski et al. (2017). However, we do not perform this step, as the resulting embeddings are typically of much lower quality than non-OOV embeddings Zhu, Vulić, and Korhonen (2019).) On the other hand, m-bert has perfect coverage. A general comparison between CC+Wiki and Wiki ft vectors, however, supports the intuition that larger corpora (such as CC+Wiki) yield higher correlations. Another finding is that a single massively multilingual model such as m-bert cannot produce semantically rich word-level representations. Whether this happens because the training objective is different, or because the need to represent 100+ languages reduces its language-specific capacity, is investigated further below.

| Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fastText (CC+Wiki), ft:init | | | | | | | | | | | | |
| nouns (1,051) | .561 | .497 | .592 | .627 | .709 | .641 | .560 | .538 | .526 | .583 | .544 | .426 |
| verbs (469) | .511 | .265 | .408 | .379 | .527 | .551 | .458 | .384 | .464 | .499 | .391 | .252 |
| adj (245) | .448 | .338 | .564 | .401 | .546 | .616 | .467 | .284 | .349 | .401 | .344 | .288 |
| adv (123) | .622 | .187 | .482 | .378 | .547 | .648 | .491 | .266 | .514 | .423 | .172 | .103 |
| fastText (CC+Wiki), ft:+abtt (-3) | | | | | | | | | | | | |
| nouns | .601 | .512 | .599 | .621 | .730 | .653 | .592 | .585 | .578 | .605 | .553 | .431 |
| verbs | .583 | .305 | .454 | .379 | .575 | .602 | .520 | .390 | .475 | .526 | .381 | .314 |
| adj | .526 | .372 | .601 | .427 | .592 | .646 | .483 | .316 | .409 | .411 | .402 | .312 |
| adv | .675 | .150 | .504 | .397 | .546 | .695 | .491 | .230 | .495 | .416 | .223 | .081 |
| m-bert, m-bert:+abtt (-3) | | | | | | | | | | | | |
| nouns | .517 | .091 | .446 | .191 | .210 | .364 | .191 | .188 | .266 | .418 | .142 | .539 |
| verbs | .511 | .005 | .200 | .039 | .077 | .248 | .038 | .107 | .181 | .266 | .091 | .503 |
| adj | .227 | .050 | .226 | .028 | .128 | .193 | .044 | .046 | .002 | .099 | .192 | .267 |
| adv | .282 | .012 | .343 | .112 | .173 | .390 | .326 | .036 | .046 | .207 | .161 | .049 |
| xlm-100, xlm:+abtt (-3) | | | | | | | | | | | | |
| all | .498 | .096 | .270 | .118 | .203 | .234 | .195 | .106 | .170 | .289 | .130 | .506 |
| nouns | .551 | .132 | .381 | .193 | .238 | .234 | .242 | .184 | .292 | .378 | .165 | .559 |
| verbs | .544 | .038 | .169 | .006 | .190 | .132 | .136 | .073 | .095 | .243 | .047 | .570 |
| adj | .356 | .140 | .256 | .081 | .179 | .185 | .150 | .046 | .022 | .100 | .220 | .291 |
| adv | .284 | .017 | .040 | .086 | .043 | .027 | .221 | .014 | .022 | .315 | .095 | .156 |

Table 13: Spearman’s $\rho$ correlation scores over the four POS classes represented in Multi-SimLex datasets. In addition to the word vectors considered earlier in Table 12, we also report scores for another contextualized model, xlm-100.
The numbers in parentheses refer to the total number of POS-class pairs in the original eng dataset and, consequently, in all other monolingual datasets.

The overall results also clearly indicate that (i) there are differences in performance across different monolingual Multi-SimLex datasets, and (ii) unsupervised post-processing is universally useful and can lead to huge improvements in correlation scores for many languages. In what follows, we also delve deeper into these analyses.

Impact of Unsupervised Post-Processing. First, the results in Table 12 suggest that applying dimension-wise mean centering to the initial vector spaces has a positive impact on word similarity scores in all test languages and for all models, both static and contextualized (see the +mc rows in Table 12). Mimno and Thompson (2017) show that distributional word vectors have a tendency towards narrow clusters in the vector space (i.e., they occupy a narrow cone in the vector space and are therefore anisotropic Mu, Bhat, and Viswanath (2018); Ethayarajh (2019)), and are prone to the undesired effect of hubness Radovanović, Nanopoulos, and Ivanović (2010); Lazaridou, Dinu, and Baroni (2015). (Hubness can be defined as the tendency of some points/vectors, i.e., “hubs”, to be nearest neighbors of many points in a high-dimensional vector space Radovanović, Nanopoulos, and Ivanović (2010); Lazaridou, Dinu, and Baroni (2015); Conneau et al. (2018a).) Applying dimension-wise mean centering has the effect of spreading the vectors across the hyper-plane and mitigating the hubness issue, which consequently improves word-level similarity, as it emerges from the reported results. Previous work has already validated the importance of mean centering for clustering-based tasks Suzuki et al. (2013), for bilingual lexicon induction with cross-lingual word embeddings Artetxe, Labaka, and Agirre (2018a); Zhang et al. (2019); Vulić et al. (2019), and for modeling lexical semantic change Schlechtweg et al. (2019). However, to the best of our knowledge, the results summarized in Table 12 are the first evidence that also confirms its importance for semantic similarity in a wide array of languages. In sum, as a general rule of thumb, we suggest always mean-centering representations for semantic tasks.

The results further indicate that additional post-processing methods such as abtt and uncovec on top of mean-centered vector spaces can lead to further gains in most languages. The gains are even visible for languages which start from high correlation scores: for instance, cmn with CC+Wiki ft increases from 0.534 to 0.583, from 0.315 to 0.526 with Wiki ft, and from 0.408 to 0.487 with m-bert. Similarly, for rus with CC+Wiki ft we can improve from 0.422 to 0.500, and for fra the scores improve from 0.578 to 0.613. There are additional similar cases reported in Table 12.

Overall, the unsupervised post-processing techniques seem universally useful across languages, but their efficacy and relative performance do vary across different languages. Note that we have not carefully fine-tuned the hyper-parameters of the evaluated post-processing methods, so additional small improvements can be expected for some languages. The main finding, however, is that these post-processing techniques are robust to semantic similarity computations beyond English, and are truly language independent.
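For concreteness, the two core post-processing steps and the Spearman-based evaluation protocol can be sketched in a few lines of numpy/scipy; the cosine scoring function and the helper names below are illustrative assumptions rather than a description of the exact tooling we used:

```python
import numpy as np
from scipy.stats import spearmanr

def mean_center(X):
    # +mc: subtract the dimension-wise mean of the vector space
    return X - X.mean(axis=0, keepdims=True)

def abtt(X, d_top=3):
    # abtt (-d_top): mean-center, then remove the d_top dominant
    # principal components (Mu, Bhat, and Viswanath 2018)
    X = mean_center(X)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt: PCA directions
    top = Vt[:d_top]                                  # (d_top, dim)
    return X - X @ top.T @ top

def evaluate(vectors, pairs, human_scores):
    # Spearman's rho between model scores (here: cosine) and human ratings
    cosines = [np.dot(vectors[w1], vectors[w2]) /
               (np.linalg.norm(vectors[w1]) * np.linalg.norm(vectors[w2]))
               for w1, w2 in pairs]
    return spearmanr(cosines, human_scores).correlation
```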
For instance, removing dominant latent (PCA-based) components from word vectors emphasizes semantic differences between concepts, as only shared non-informative latent semantic knowledge is removed from the representations. In summary, pretrained word embeddings do contain more information pertaining to semantic similarity than is revealed in the initial vectors. We have thereby corroborated the hypotheses from prior work Mu, Bhat, and Viswanath (2018); Artetxe et al. (2018), which were not previously empirically verified on other languages due to a shortage of evaluation data; this gap has now been filled with the introduction of the Multi-SimLex datasets. In all follow-up experiments, we always explicitly denote which post-processing configuration is used in evaluation.

POS-Specific Subsets. We present the results for subsets of word pairs grouped by POS class in Table 13. Prior work based on English data showed that representations for nouns are typically of higher quality than those for the other POS classes Schwartz, Reichart, and Rappoport (2015, 2016); Vulić et al. (2017b). We observe a similar trend in other languages as well. This pattern is consistent across different representation models and can be attributed to several reasons. First, verb representations need to express a rich range of syntactic and semantic behaviors rather than purely referential features Gruber (1976); Levin (1993); Kipper et al. (2008). Second, low correlation scores on the adjective and adverb subsets in some languages (e.g., pol, cym, swa) might be due to their low frequency in monolingual texts, which yields unreliable representations. In general, the variance in performance across different word classes warrants further research in class-specific representation learning Baker, Reichart, and Korhonen (2014); Vulić et al. (2017b). The scores further attest to the usefulness of unsupervised post-processing, as almost all class-specific correlation scores are improved by applying mean-centering and abtt. Finally, the results for m-bert and xlm-100 in Table 13 further confirm that massively multilingual pretraining cannot yield reasonable semantic representations for many languages: in fact, for some classes they display no correlation with human ratings at all.

Differences across Languages. Naturally, the results from Tables 12 and 13 also reveal that there is variation in the performance of both static word embeddings and pretrained encoders across different languages. Among other causes, the lowest absolute scores with ft are reported for languages with the least resources available to train monolingual word embeddings, such as Kiswahili, Welsh, and Estonian. The low performance on Welsh is especially indicative: Figure 1 shows that the ratings in the Welsh dataset match up very well with the English ratings, but we cannot achieve the same level of correlation in Welsh with Welsh ft word embeddings. The difference in performance between two closely related languages, est (low-resource) and fin (high-resource), provides additional evidence in this respect.

The highest reported scores with m-bert and xlm-100 are obtained for Mandarin Chinese and Yue Chinese: this effectively points to the weaknesses of massively multilingual training with a joint subword vocabulary spanning 102 and 100 languages, respectively. Due to the difference in scripts, “language-specific” subwords for yue and cmn do not need to be shared across a vast number of languages, and the quality of their representation remains unscathed.
This effectively means that m-bert’s subword vocabulary contains plenty of cmn-specific and yue-specific subwords which are exploited by the encoder when producing m-bert-based representations. Simultaneously, higher scores with m-bert (and xlm in Table 13) are reported for resource-rich languages such as French, Spanish, and English, which are better represented in m-bert’s training data.

We also observe lower absolute scores (and a larger number of OOVs) for languages with very rich and productive morphological systems such as the two Slavic languages (Polish and Russian) and Finnish. Since Polish and Russian are known to have large Wikipedias and Common Crawl data Conneau et al. (2019) (e.g., their Wikipedias are among the top 10 largest Wikipedias worldwide), the problem with coverage can be attributed precisely to the proliferation of morphological forms in those languages.

Finally, while Table 12 does reveal that unsupervised post-processing is useful for all languages, it also demonstrates that peak scores are achieved with different post-processing configurations. This finding suggests that more careful language-specific fine-tuning is indeed needed to refine word embeddings towards semantic similarity. We plan to inspect the relationship between post-processing techniques and linguistic properties in more depth in future work.

Multilingual vs. Language-Specific Contextualized Embeddings. Recent work has shown that, despite the usefulness of massively multilingual models such as m-bert and xlm-100 for zero-shot cross-lingual transfer Pires, Schlinger, and Garrette (2019); Wu and Dredze (2019), stronger results in downstream tasks for a particular language can be achieved by pretraining language-specific models on language-specific data. In this experiment, motivated by the low results of m-bert and xlm-100 (see again Table 13), we assess if monolingual pretrained encoders can produce higher-quality word-level representations than multilingual models. Therefore, we evaluate language-specific bert and xlm models for a subset of the Multi-SimLex languages for which such models are currently available: Finnish Virtanen et al. (2019) (bert-base architecture, uncased), French Le et al. (2019) (the FlauBERT model based on xlm), English (bert-base, uncased), Mandarin Chinese (bert-base) Devlin et al. (2019), and Spanish (bert-base, uncased). In addition, we also evaluate a series of pretrained encoders available for English: (i) bert-base, bert-large, and bert-large with whole word masking (wwm) from the original work on BERT Devlin et al. (2019), (ii) the monolingual “English-specific” xlm Conneau and Lample (2019), and (iii) two models which employ parameter reduction techniques to build more compact encoders: albert-b uses a configuration similar to bert-base, while albert-l is similar to bert-large, but with an $18\times$ reduction in the number of parameters Lan et al. (2020). (All models and their further specifications are available at https://huggingface.co/models.)

From the results in Figure 5(a), it is clear that monolingual pretrained encoders yield much more reliable word-level representations. The gains are substantial on all test languages, and are visible even for languages such as cmn, which already showed reasonable performance with m-bert. This further confirms the validity of language-specific pretraining in lieu of multilingual training, if sufficient monolingual data are available.
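In practical terms, swapping a multilingual encoder for a language-specific one only changes the model identifier. Reusing the `encode` helper sketched earlier, a minimal (and purely illustrative) comparison might look as follows; the checkpoint names are examples of publicly released models, not a prescription of the exact checkpoints we evaluated:

```python
# Illustrative comparison: the same encoding routine, two different encoders.
for name in ("bert-base-multilingual-uncased",  # massively multilingual m-bert
             "bert-base-chinese"):              # language-specific encoder (cmn)
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name).eval()
    vec = encode("日落", model, tokenizer)  # "sunset" in Mandarin Chinese
```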
Moreover, a comparison of pretrained English encoders in Figure 5(b) largely follows the intuition: the larger bert-large model yields slight improvements over bert-base, and we can improve a bit more by relying on word-level (i.e., lexical-level) masking. Finally, the lightweight albert model variants are quite competitive with the original bert models, with only modest drops reported, and albert-l again outperforms albert-b. Overall, it is interesting to note that the scores obtained with monolingual pretrained encoders are on a par with or even outperform static ft word embeddings: this is a very intriguing finding per se, as it shows that such subword-level models trained on large corpora can implicitly capture rich lexical semantic knowledge.

Figure 5: (a) A performance comparison between monolingual pretrained language encoders and massively multilingual encoders. For four languages (cmn, eng, fin, spa), we report the scores with monolingual uncased bert-base architectures and the multilingual uncased m-bert model, while for fra we report the results of the multilingual xlm-100 architecture and a monolingual French FlauBERT model Le et al. (2019), which is based on the same architecture as xlm-100. (b) A comparison of various pretrained encoders available for English. All these models are post-processed via abtt (-3).

Similarity-Specialized Word Embeddings. Conflating distinct lexico-semantic relations is a well-known property of distributional representations Turney and Pantel (2010); Melamud et al. (2016). Semantic specialization fine-tunes distributional spaces to emphasize a particular lexico-semantic relation in the transformed space by injecting external lexical knowledge Glavaš, Ponti, and Vulić (2019). Explicitly discerning between true semantic similarity (as captured in Multi-SimLex) and broad conceptual relatedness benefits a number of tasks, as discussed in §2.1. (For an overview of specialization methods for semantic similarity, we refer the interested reader to the recent tutorial by Glavaš, Ponti, and Vulić (2019).) Since most languages lack dedicated lexical resources, however, one viable strategy to steer monolingual word vector spaces towards semantic similarity is cross-lingual transfer of lexical knowledge, usually through a shared cross-lingual word vector space Ruder, Vulić, and Søgaard (2019). Therefore, we evaluate the effectiveness of specialization transfer methods using Multi-SimLex as our multilingual test bed.

We evaluate a current state-of-the-art cross-lingual specialization transfer method with minimal requirements, put forth recently by Ponti et al. (2019c). (We have also evaluated other specialization transfer methods, e.g., Glavaš and Vulić (2018); Ponti et al. (2018b), but they are consistently outperformed by the method of Ponti et al. (2019c).) In a nutshell, their li-postspec method is a multi-step procedure that operates as follows. First, the knowledge about semantic similarity is extracted from WordNet in the form of triplets, that is, linguistic constraints $(w_{1},w_{2},r)$, where $w_{1}$ and $w_{2}$ are two concepts, and $r$ is a relation between them obtained from WordNet (e.g., synonymy or antonymy). The goal is to “attract” synonyms closer to each other in the transformed vector space, as they reflect true semantic similarity, and to “repel” antonyms further apart.
In the second step, the linguistic constraints are translated from English to the target language via a shared cross-lingual word vector space. To this end, following Ponti et al. (2019c), we rely on cross-lingual word embeddings (CLWEs) Joulin et al. (2018) available online (https://fasttext.cc/docs/en/aligned-vectors.html), which are based on Wiki ft vectors; for target languages for which there are no pretrained CLWEs, we induce them following the same procedure as Joulin et al. (2018). Following that, a constraint refinement step is applied in the target language, which aims to eliminate the noise inserted during the translation process. This is done by training a relation classification tool: it is trained again on the English linguistic constraints and then used on the translated target language constraints, where the transfer is again enabled via a shared cross-lingual word vector space. (We again follow Ponti et al. (2019c) and use a state-of-the-art relation classifier Glavaš and Vulić (2018); we refer the reader to the original work for additional technical details related to the classifier design.) Finally, a state-of-the-art monolingual specialization procedure from Ponti et al. (2018b) injects the (now target language) linguistic constraints into the target language distributional space.

The scores are summarized in Table 14. Semantic specialization with li-postspec leads to substantial improvements in correlation scores for the majority of the target languages, demonstrating the importance of external semantic similarity knowledge for semantic similarity reasoning. However, we also observe deteriorated performance for the three target languages which can be considered the lowest-resource ones in our set: cym, swa, yue. We hypothesize that this occurs due to the inferior quality of the underlying monolingual Wikipedia word embeddings, which triggers a chain of error accumulation. In particular, poor distributional word estimates compromise the alignment of the embedding spaces, which in turn results in increased translation noise and a reduced refinement ability of the relation classifier. At a high level, this “poor get poorer” observation again points to the fact that one of the primary causes of low performance of low-resource languages in semantic tasks is the sheer lack of even unlabeled data for distributional training. On the other hand, as we see from Table 14, typological dissimilarity between the source and the target does not deteriorate the effectiveness of semantic specialization. In fact, li-postspec does yield substantial gains also for typologically distant targets such as heb, cmn, and est. The critical problem indeed seems to be insufficient raw data for monolingual distributional training.

| Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fastText (Wiki) | (429) | (282) | (6) | (343) | (345) | (73) | (62) | (354) | (343) | (57) | (379) | (677) |
| ft:init | .315 | .318 | – | .400 | .575 | .444 | .428 | .370 | .359 | .432 | .332 | .376 |
| li-postspec | .584 | .204 | – | .515 | .619 | .601 | .510 | .531 | .547 | .635 | .238 | .267 |

Table 14: The impact of vector space specialization for semantic similarity. The scores are reported using the current state-of-the-art specialization transfer li-postspec method of Ponti et al. (2019c), relying on English as a resource-rich source language and the external lexical semantic knowledge from the English WordNet.
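To make the “attract” and “repel” intuition behind the final injection step concrete, the following is a simplified max-margin sketch in PyTorch. It is only a schematic illustration of the underlying idea, not the actual li-postspec objective, which is considerably more involved (see the cited work); the margin values and function names here are illustrative:

```python
import torch
import torch.nn.functional as F

def attract_repel_loss(emb, syn_pairs, ant_pairs,
                       attract_margin=0.6, repel_margin=0.0):
    """Simplified max-margin objective over linguistic constraints.
    emb: (vocab_size, dim) embedding matrix being fine-tuned;
    syn_pairs / ant_pairs: LongTensors of shape (n, 2) with word indices."""
    syn_sim = F.cosine_similarity(emb[syn_pairs[:, 0]], emb[syn_pairs[:, 1]])
    ant_sim = F.cosine_similarity(emb[ant_pairs[:, 0]], emb[ant_pairs[:, 1]])
    attract = F.relu(attract_margin - syn_sim).sum()  # pull synonyms closer
    repel = F.relu(ant_sim - repel_margin).sum()      # push antonyms apart
    return attract + repel
```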
## 8 Cross-Lingual Evaluation

Similar to the monolingual evaluation in §7, we now evaluate several state-of-the-art cross-lingual representation models on the suite of 66 automatically constructed cross-lingual Multi-SimLex datasets. Again, note that evaluating the full range of cross-lingual models available in the rich prior work on cross-lingual representation learning is well beyond the scope of this article. We therefore focus our cross-lingual analyses on several well-established and indicative state-of-the-art cross-lingual models, again spanning both static and contextualized cross-lingual word embeddings.

### 8.1 Models in Comparison

Static Word Embeddings. We rely on a state-of-the-art mapping-based method for the induction of cross-lingual word embeddings (CLWEs): vecmap Artetxe, Labaka, and Agirre (2018b). The core idea behind such mapping-based or projection-based approaches is to learn a post-hoc alignment of independently trained monolingual word embeddings Ruder, Vulić, and Søgaard (2019). Such methods have gained popularity due to their conceptual simplicity and competitive performance, coupled with reduced bilingual supervision requirements: they support CLWE induction with as few as several thousand word translation pairs as the bilingual supervision Mikolov, Le, and Sutskever (2013); Xing et al. (2015); Upadhyay et al. (2016); Ruder, Søgaard, and Vulić (2019). More recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs Vulić and Korhonen (2016); Vulić et al. (2019), identical strings Smith et al. (2017), or even only shared numerals Artetxe, Labaka, and Agirre (2017). In the extreme, fully unsupervised projection-based CLWEs extract such seed bilingual lexicons from scratch on the basis of monolingual data only (Conneau et al., 2018a; Artetxe, Labaka, and Agirre, 2018b; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018; Chen and Cardie, 2018; Mohiuddin and Joty, 2019, inter alia).

Recent empirical studies Glavaš et al. (2019); Vulić et al. (2019); Doval et al. (2019) have compared a variety of unsupervised and weakly supervised mapping-based CLWE methods, and vecmap emerged as the most robust and a very competitive choice. Therefore, we focus on 1) its fully unsupervised variant (unsuper) in our comparisons. For several language pairs, we also report scores with two other vecmap model variants: 2) a supervised variant which learns a mapping based on an available seed lexicon (super), and 3) a supervised variant with self-learning (super+sl) which iteratively increases the seed lexicon and gradually improves the mapping. For a detailed description of these variants, we refer the reader to recent work Artetxe, Labaka, and Agirre (2018b); Vulić et al. (2019). We again use CC+Wiki ft vectors as initial monolingual word vectors, except for yue, where Wiki ft is used. The seed dictionaries of two different sizes (1k and 5k translation pairs) are based on PanLex Kamholz, Pool, and Colowick (2014), and are taken directly from prior work Vulić et al. (2019) (https://github.com/cambridgeltl/panlex-bli) or extracted from PanLex following the same procedure as in the prior work.

Contextualized Cross-Lingual Word Embeddings. We again evaluate the capacity of (massively) multilingual pretrained language models, m-bert and xlm-100, to reason over cross-lingual lexical similarity.
Implicitly, such an evaluation also assesses “the intrinsic quality” of the shared cross-lingual word-level vector spaces induced by these methods, and their ability to boost cross-lingual transfer between different language pairs. We rely on the same procedure of aggregating the models’ subword-level parameters into word-level representations, already described in §7.1. As in the monolingual settings, we can apply unsupervised post-processing steps such as abtt to both static and contextualized cross-lingual word embeddings.

| | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| cmn | | .076 | .348 | .139 | .154 | .392 | .190 | .207 | .227 | .300 | .049 | .484 |
| cym | .041 | | .087 | .017 | .049 | .095 | .033 | .072 | .085 | .089 | .002 | .083 |
| eng | .565 | .004 | | .168 | .159 | .401 | .171 | .182 | .236 | .309 | .014 | .357 |
| est | .014 | .097 | .335 | | .143 | .161 | .100 | .113 | .083 | .134 | .025 | .124 |
| fin | .049 | .020 | .542 | .530 | | .195 | .077 | .110 | .111 | .157 | .029 | .167 |
| fra | .224 | .015 | .662 | .559 | .533 | | .191 | .229 | .297 | .382 | .038 | .382 |
| heb | .202 | .110 | .516 | .465 | .445 | .469 | | .095 | .154 | .181 | .038 | .185 |
| pol | .121 | .028 | .464 | .415 | .465 | .534 | .412 | | .139 | .183 | .013 | .205 |
| rus | .032 | .037 | .511 | .408 | .476 | .529 | .430 | .390 | | .248 | .037 | .226 |
| spa | .546 | .048 | .498 | .450 | .490 | .600 | .462 | .398 | .419 | | .055 | .313 |
| swa | -.01 | .116 | .029 | .006 | .013 | -.05 | .033 | .052 | .035 | .045 | | .043 |
| yue | .004 | .047 | .059 | .004 | .002 | .059 | .001 | .074 | .032 | .089 | -.02 | |

Table 15: Spearman’s $\rho$ correlation scores on all 66 cross-lingual datasets. 1) The scores below the main diagonal are computed based on cross-lingual word embeddings (CLWEs) induced by aligning CC+Wiki ft vectors in all languages (except for yue, where we use Wiki ft) in a fully unsupervised way (i.e., without any bilingual supervision). We rely on a standard CLWE mapping-based (i.e., alignment) approach: vecmap Artetxe, Labaka, and Agirre (2018b). 2) The scores above the main diagonal are computed by obtaining 768-dimensional word-level vectors from pretrained multilingual BERT (m-bert) following the procedure described in §7.1. For both fully unsupervised vecmap and m-bert, we report the results with unsupervised post-processing enabled: all $2\times 66$ reported scores are obtained using the +abtt (-10) variant.

Figure 6: Further performance analyses on the cross-lingual Multi-SimLex datasets. (a) Spearman’s $\rho$ correlation scores averaged over all 66 cross-lingual Multi-SimLex datasets for two pretrained multilingual encoders (m-bert and xlm). The scores are obtained with different configurations that exclude (init) or enable unsupervised post-processing. (b) A comparison of various pretrained encoders available for the English-French language pair; see the main text for a short description of each benchmarked pretrained encoder.

### 8.2 Results and Discussion

Main Results and Differences across Language Pairs. A summary of the results on the 66 cross-lingual Multi-SimLex datasets is provided in Table 15 and Figure 6(a). The results confirm several interesting findings from our previous monolingual experiments (§7.2), and also corroborate several hypotheses and findings from prior work, now on a large sample of language pairs and for the task of cross-lingual semantic similarity.
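Before turning to the detailed results, a minimal numpy sketch may help fix ideas about the mapping-based approach from §8.1: in its simplest supervised form, an orthogonal map is fitted on seed translation pairs (the Procrustes solution), after which a cross-lingual pair is scored in the shared space. vecmap itself adds further normalization, re-weighting, and self-learning steps, so this is only a schematic approximation, with cosine assumed as the scoring function:

```python
import numpy as np

def procrustes(X_src, Y_tgt):
    """Orthogonal map W minimizing ||X_src @ W - Y_tgt||_F, where row i of
    X_src / Y_tgt holds the embeddings of the i-th seed dictionary pair."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

def score_pair(x_src, y_tgt, W):
    # Map the source word into the target space; score with cosine similarity.
    x = x_src @ W
    return float(x @ y_tgt / (np.linalg.norm(x) * np.linalg.norm(y_tgt)))

# Self-learning (+sl) would, after fitting W, re-extract a larger dictionary
# from mutual nearest neighbours in the shared space and refit iteratively.
```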
First, we observe that the fully unsupervised vecmap model, despite being the most robust fully unsupervised method at present, fails to produce a meaningful cross-lingual word vector space for a large number of language pairs (see the bottom triangle of Table 15): many correlation scores are in fact no-correlation results, accentuating the problem of fully unsupervised cross-lingual learning for typologically diverse languages and with smaller amounts of monolingual data Vulić et al. (2019). The scores are particularly low across the board for lower-resource languages such as Welsh and Kiswahili. It also seems that the lack of monolingual data is a larger problem than typological dissimilarity between language pairs, as we do observe reasonably high correlation scores with vecmap for language pairs such as cmn-spa, heb-est, and rus-fin. However, typological differences (e.g., morphological richness) still play an important role, as we observe very low scores when pairing cmn with morphologically rich languages such as fin, est, pol, and rus. In line with the prior work of Vulić et al. (2019) and Doval et al. (2019), and given that unsupervised vecmap is the most robust unsupervised CLWE method at present Glavaš et al. (2019), our results again question the usefulness of fully unsupervised approaches for a large number of languages, and call for further developments in the area of unsupervised and weakly supervised cross-lingual representation learning.

The scores of m-bert and xlm-100 lead to similar conclusions as in the monolingual settings. (The xlm-100 scores are not reported for brevity; they largely follow the patterns observed with m-bert, and the aggregated scores of the two encoders are also very similar, as indicated by Figure 6(a).) Reasonable correlation scores are achieved only for a small subset of resource-rich language pairs (e.g., eng, fra, spa, cmn) which dominate the multilingual m-bert training. Interestingly, the scores indicate a much higher performance of language pairs where yue is one of the languages when we use m-bert instead of vecmap. This boils down again to the fact that yue, due to its specific language script, has a good representation of its words and subwords in the shared m-bert vocabulary. At the same time, a reliable vecmap mapping between yue and other languages cannot be found due to the small monolingual yue corpus. In cases when vecmap does not yield a degenerate cross-lingual vector space starting from two monolingual ones, the final correlation scores seem substantially higher than the ones obtained by the single massively multilingual m-bert model.

Finally, the results in Figure 6(a) again verify the usefulness of unsupervised post-processing also in cross-lingual settings. We observe improved performance with both m-bert and xlm-100 when mean centering (+mc) is applied, and further gains can be achieved by using abtt on the mean-centered vector spaces. A similar finding also holds for static cross-lingual word embeddings, where applying abtt (-10) yields higher scores on 61/66 language pairs. (Note that vecmap performs mean centering by default as one of its preprocessing steps prior to learning the mapping function Artetxe, Labaka, and Agirre (2018b); Vulić et al. (2019).)

Fully Unsupervised vs. Weakly Supervised Cross-Lingual Embeddings. The results in Table 15 indicate that fully unsupervised cross-lingual learning fails for a large number of language pairs. However, recent work Vulić et al. (2019) has noted that these sub-optimal non-alignment solutions with the unsuper model can be avoided by relying on (weak) cross-lingual supervision spanning only several thousand or even several hundred word translation pairs. Therefore, we examine 1) if we can further improve the results on cross-lingual Multi-SimLex by resorting to (at least some) cross-lingual supervision for resource-rich language pairs; and 2) if such available word-level supervision can also be useful for a range of languages which displayed near-zero performance in Table 15. In other words, we test whether the recent “tricks of the trade” used in the rich literature on CLWE learning translate into gains on the cross-lingual Multi-SimLex datasets.

First, we reassess the findings established on the bilingual lexicon induction task Søgaard, Ruder, and Vulić (2018); Vulić et al. (2019): using at least some cross-lingual supervision is always beneficial compared to using no supervision at all. We report improvements over the unsuper model for all 10 language pairs in Table 16, even though the unsuper method initially produced strong correlation scores. The importance of self-learning increases with decreasing seed dictionary size, and the +sl model always outperforms unsuper with 1k seed pairs; we observe the same patterns also with even smaller dictionary sizes than reported in Table 16 (250 and 500 seed pairs). Along the same lines, the results in Table 17 indicate that at least some supervision is crucial for the success of static CLWEs on resource-leaner language pairs. We note substantial improvements on all language pairs; in fact, the vecmap model is able to learn a more reliable mapping starting from clean supervision. We again note large gains with self-learning.

| | cmn-eng | eng-fra | eng-spa | eng-rus | est-fin | est-heb | fin-heb | fra-spa | pol-rus | pol-spa |
|---|---|---|---|---|---|---|---|---|---|---|
| unsuper | .565 | .662 | .498 | .511 | .510 | .465 | .445 | .600 | .390 | .398 |
| super (1k) | .575 | .602 | .453 | .376 | .378 | .363 | .442 | .588 | .399 | .406 |
| +sl (1k) | .577 | .703 | .547 | .548 | .591 | .513 | .488 | .639 | .439 | .456 |
| super (5k) | .587 | .704 | .542 | .535 | .518 | .473 | .585 | .631 | .455 | .463 |
| +sl (5k) | .581 | .707 | .548 | .551 | .556 | .525 | .589 | .645 | .432 | .476 |

Table 16: Results on a selection of cross-lingual Multi-SimLex datasets where the fully unsupervised (unsuper) CLWE variant yields reasonable performance. We also show the results with supervised vecmap without self-learning (super) and with self-learning (+sl), with two seed dictionary sizes: 1k and 5k pairs; see §8.1 for more detail. Highest scores for each language pair are in bold.

| | cmn-fin | cmn-rus | cmn-yue | cym-fin | cym-fra | cym-pol | fin-swa |
|---|---|---|---|---|---|---|---|
| unsuper | .049 | .032 | .004 | .020 | .015 | .028 | .013 |
| super (1k) | .410 | .388 | .372 | .384 | .475 | .326 | .206 |
| +sl (1k) | .590 | .537 | .458 | .471 | .578 | .380 | .264 |

Table 17: Results on a selection of cross-lingual Multi-SimLex datasets where the fully unsupervised (unsuper) CLWE variant fails to learn a coherent shared cross-lingual space. See also the caption of Table 16.

Multilingual vs. Bilingual Contextualized Embeddings. Similar to the monolingual settings, we also inspect whether massively multilingual training in fact dilutes the knowledge necessary for cross-lingual reasoning on a particular language pair.
Therefore, we compare the 100-language xlm-100 model with i) a variant of the same model trained on a smaller set of 17 languages (xlm-17); ii) a variant of the same model trained specifically for the particular language pair (xlm-2); and iii) a variant of the bilingual xlm-2 model that also leverages bilingual knowledge from parallel data during joint training (xlm-2++). We again use the pretrained models made available by Conneau and Lample (2019), and we refer to the original work for further technical details.

The results are summarized in Figure 6(b), and they confirm the intuition that massively multilingual pretraining can damage performance even on resource-rich languages and language pairs. We observe a steep rise in performance when the multilingual model is trained on a much smaller set of languages (17 versus 100), and further improvements can be achieved by training a dedicated bilingual model. Finally, leveraging bilingual parallel data seems to offer additional slight gains, but the tiny difference between xlm-2 and xlm-2++ also suggests that this rich bilingual information is not used optimally within the xlm architecture for semantic similarity.

In summary, these results indicate that, in order to improve performance in cross-lingual transfer tasks, more work should be invested into 1) pretraining dedicated language pair-specific models, and 2) creative ways of leveraging available cross-lingual supervision (e.g., word translation pairs, parallel or comparable corpora) Liu et al. (2019a); Wu et al. (2019); Cao, Kitaev, and Klein (2020) with pretraining paradigms such as bert and xlm. Using such cross-lingual supervision could lead to similar benefits as those indicated by the results obtained with static cross-lingual word embeddings (see Tables 16 and 17). We believe that Multi-SimLex can serve as a valuable means to track and guide future progress in this research area.

## 9 Conclusion and Future Work

We have presented Multi-SimLex, a resource containing human judgments on the semantic similarity of word pairs for 12 monolingual and 66 cross-lingual datasets. The languages covered are typologically diverse and include under-resourced ones, such as Welsh and Kiswahili. The resource covers an unprecedented 1,888 word pairs, carefully balanced according to their similarity score, frequency, concreteness, part-of-speech class, and lexical field. In addition to Multi-SimLex, we release the detailed protocol we followed to create this resource. We hope that our consistent guidelines will encourage researchers to translate and annotate Multi-SimLex-style datasets for additional languages. This can help create a hugely valuable, large-scale semantic resource for multilingual NLP research.

The core Multi-SimLex we release with this paper already enables researchers to carry out novel linguistic analyses, and it also establishes a benchmark for evaluating representation learning models. Based on our preliminary analyses, we found that speakers of closely related languages tend to express equivalent similarity judgments. In particular, geographical proximity seems to play a greater role than family membership in determining the similarity of judgments across languages. Moreover, we tested several state-of-the-art word embedding models, both static and contextualized representations, as well as several (supervised and unsupervised) post-processing techniques, on the newly released Multi-SimLex.
This enables future endeavors to improve multilingual representation learning with challenging baselines. In addition, our results provide several important insights for research on both monolingual and cross-lingual word representations:

1) Unsupervised post-processing techniques (mean centering, elimination of top principal components, adjusting similarity orders) are always beneficial independently of the language, although the combination leading to the best scores is language-specific and hence needs to be tuned.

2) Similarity rankings obtained from word embeddings for nouns are better aligned with human judgments than those for all the other part-of-speech classes considered here (verbs, adjectives, and, for the first time, adverbs). This confirms previous generalizations based on experiments on English.

3) The factor with the greatest impact on the quality of word representations is the availability of raw texts to train them in the first place, rather than language properties (such as family, geographical area, typological features).

4) Massively multilingual pretrained encoders such as m-bert (Devlin et al., 2019) and xlm-100 (Conneau and Lample, 2019) fare quite poorly on our benchmark, whereas pretrained encoders dedicated to a single language are more competitive with static word embeddings such as fastText (Bojanowski et al., 2017). Moreover, for language-specific encoders, parameter reduction techniques reduce performance only marginally.

5) Techniques to inject clean lexical semantic knowledge from external resources into distributional word representations proved effective in emphasizing the relation of semantic similarity. In particular, methods capable of transferring such knowledge from resource-rich to resource-lean languages (Ponti et al., 2019c) increased the correlation with human judgments for most languages, except for those with limited unlabeled data.

Future work can expand our preliminary, yet large-scale study on the ability of pretrained encoders to reason over word-level semantic similarity in different languages. For instance, we have highlighted how sharing the same encoder parameters across multiple languages may harm performance. However, it remains unclear if, and to what extent, the input language embeddings present in xlm-100 but absent in m-bert help mitigate this issue. In addition, pretrained language embeddings can be obtained both from typological databases (Littell et al., 2017) and from neural architectures (Malaviya, Neubig, and Littell, 2017). Plugging these embeddings into the encoders in lieu of embeddings trained end-to-end, as suggested by prior work (Tsvetkov et al., 2016; Ammar et al., 2016; Ponti et al., 2019b), might extend the coverage to more resource-lean languages. Another important follow-up analysis might involve comparing the performance of representation learning models on multilingual datasets for both word-level semantic similarity and sentence-level natural language understanding. In particular, Multi-SimLex fills a gap in available resources for multilingual NLP and might help us understand how lexical and compositional semantics interact if put alongside existing resources such as XNLI Conneau et al. (2018b) for natural language inference or PAWS-X Yang et al. (2019) for cross-lingual paraphrase identification.
Finally, the Multi-SimLex annotation could turn out to be a unique source of evidence to study the effects of polysemy in human judgments on semantic similarity: for equivalent word pairs in multiple languages, are the similarity scores affected by how many senses the two words (or multi-word expressions) incorporate?

In light of the success of initiatives like Universal Dependencies for multilingual treebanks, we hope that making Multi-SimLex and its guidelines available will encourage other researchers to expand our current sample of languages. We particularly encourage the creation and submission of comparable Multi-SimLex datasets for under-resourced and typologically diverse languages in future work. In particular, we have made a Multi-SimLex community website available to facilitate easy creation, gathering, dissemination, and use of annotated datasets: https://multisimlex.com/.

###### Acknowledgements.

This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no. 648909). Thierry Poibeau is partly supported by a PRAIRIE 3IA Institute fellowship ("Investissements d’avenir" program, reference ANR-19-P3IA-0001).

## References

* Adams et al. (2017) Adams, Oliver, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In _Proceedings of EACL_ , pages 937–947.
* Agirre et al. (2009) Agirre, Eneko, Enrique Alfonseca, Keith Hall, Jana Kravalová, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In _Proceedings of NAACL-HLT_ , pages 19–27.
* Aldarmaki and Diab (2019) Aldarmaki, Hanan and Mona Diab. 2019. Context-aware cross-lingual mapping. In _Proceedings of NAACL-HLT_ , pages 3906–3911.
* Alvarez-Melis and Jaakkola (2018) Alvarez-Melis, David and Tommi Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. In _Proceedings of EMNLP_ , pages 1881–1890.
* Ammar et al. (2016) Ammar, Waleed, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. _Transactions of the ACL_ , 4:431–444.
* Artetxe, Labaka, and Agirre (2017) Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In _Proceedings of ACL_ , pages 451–462.
* Artetxe, Labaka, and Agirre (2018a) Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In _Proceedings of AAAI_ , pages 5012–5019.
* Artetxe, Labaka, and Agirre (2018b) Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In _Proceedings of ACL_ , pages 789–798.
* Artetxe et al. (2018) Artetxe, Mikel, Gorka Labaka, Iñigo Lopez-Gazpio, and Eneko Agirre. 2018. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation. In _Proceedings of CoNLL_ , pages 282–291.
* Artetxe, Ruder, and Yogatama (2019) Artetxe, Mikel, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. _CoRR_ , abs/1910.11856.
* Baker, Fillmore, and Lowe (1998) Baker, Collin F., Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In _Proceedings of ACL_ , pages 86–90.
* Baker, Reichart, and Korhonen (2014) Baker, Simon, Roi Reichart, and Anna Korhonen. 2014.
An unsupervised model for instance level subcategorization acquisition. In _Proceedings of EMNLP_ , pages 278–289. * Bapna and Firat (2019) Bapna, Ankur and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In _Proceedings of EMNLP_ , pages 1538–1548. * Baroni et al. (2009) Baroni, Marco, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: A collection of very large linguistically processed web-crawled corpora. _Language Resources and Evaluation_ , 43(3):209–226. * Baroni and Lenci (2010) Baroni, Marco and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. _Computational Linguistics_ , 36(4):673–721. * Barzegar et al. (2018) Barzegar, Siamak, Brian Davis, Manel Zarrouk, Siegfried Handschuh, and André Freitas. 2018. SemR-11: A multi-lingual gold-standard for semantic similarity and relatedness for eleven languages. In _Proceedings of LREC_ , pages 3912–3916. * van den Berg et al. (2006) van den Berg, Robert A., Huub C.J. Hoefsloot, Johan A. Westerhuis, Age K. Smilde, and Mariët J. van der Werf. 2006. Centering, scaling, and transformations: Improving the biological information content of metabolomics data. _BMC Genomics_ , 7(1):142. * Bjerva and Augenstein (2018) Bjerva, Johannes and Isabelle Augenstein. 2018. From phonology to syntax: Unsupervised linguistic typology at different levels with language embeddings. In _Proceedings of NAACL-HLT_ , pages 907–916. * Bojanowski et al. (2017) Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. _Transactions of the ACL_ , 5:135–146. * Bro and Smilde (2003) Bro, Rasmus and Age K. Smilde. 2003. Centering and scaling in component analysis. _Journal of Chemometrics_ , 17(1):16–33. * Bruni, Tran, and Baroni (2014) Bruni, Elia, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. _Journal of Artificial Intelligence Research_ , 49:1–47. * Budanitsky and Hirst (2006) Budanitsky, Alexander and Graeme Hirst. 2006. Evaluating WordNet-based measures of lexical semantic relatedness. _Computational Linguistics_ , 32(1):13–47. * Camacho-Collados and Navigli (2017) Camacho-Collados, Jose and Roberto Navigli. 2017. BabelDomains: Large-scale domain labeling of lexical resources. In _Proceedings of EACL_ , pages 223–228. * Camacho-Collados et al. (2017) Camacho-Collados, Jose, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. Semeval-2017 task 2: Multilingual and cross-lingual semantic word similarity. In _Proceedings of SEMEVAL_ , pages 15–26. * Camacho-Collados, Pilehvar, and Navigli (2015) Camacho-Collados, José, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A framework for the construction of monolingual and cross-lingual word similarity datasets. In _Proceedings of ACL_ , pages 1–7. * Cao, Kitaev, and Klein (2020) Cao, Steven, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In _Proceedings of ICLR_. * Chen and Manning (2014) Chen, Danqi and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In _Proceedings of EMNLP_ , pages 740–750. * Chen and Cardie (2018) Chen, Xilun and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In _Proceedings of EMNLP_ , pages 261–270. * Cimiano, Hotho, and Staab (2005) Cimiano, Philipp, Andreas Hotho, and Steffen Staab. 2005. Learning concept hierarchies from text corpora using formal concept analysis. 
_Journal of Artificial Intelligence Research_ , 24:305–339. * Clark et al. (2020) Clark, Jonathan H., Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. _Transactions of the ACL_. * Collobert and Weston (2008) Collobert, Ronan and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In _Proceedings of ICML_ , pages 160–167. * Collobert et al. (2011) Collobert, Ronan, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. _Journal of Machine Learning Research_ , 12:2493–2537. * Conneau et al. (2019) Conneau, Alexis, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. _CoRR_ , abs/1911.02116. * Conneau and Lample (2019) Conneau, Alexis and Guillaume Lample. 2019. Cross-lingual language model pretraining. In _Proceedings of NeurIPS_ , pages 7057–7067. * Conneau et al. (2018a) Conneau, Alexis, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018a. Word translation without parallel data. In _Proceedings of ICLR_. * Conneau et al. (2018b) Conneau, Alexis, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating cross-lingual sentence representations. In _Proceedings of EMNLP_ , pages 2475–2485. * Coseriu (1967) Coseriu, Eugenio. 1967. Lexikalische solidaritäten. _Poetica_ , 1:293–303. * Cruse (1986) Cruse, David Alan. 1986. _Lexical Semantics_. Cambridge University Press. * De Deyne and Storms (2008) De Deyne, Simon and Gert Storms. 2008. Word associations: Network and semantic properties. _Behavior Research Methods_ , 40(1):213–231. * Devlin et al. (2019) Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of NAACL-HLT_ , pages 4171–4186. * Doitch et al. (2019) Doitch, Amichay, Ram Yazdi, Tamir Hazan, and Roi Reichart. 2019. Perturbation based learning for structured NLP tasks with application to dependency parsing. _Transactions of the ACL_ , 7:643–659. * Doval et al. (2018) Doval, Yerai, Jose Camacho-Collados, Luis Espinosa-Anke, and Steven Schockaert. 2018\. Improving cross-lingual word embeddings by meeting in the middle. In _Proceedings of EMNLP_ , pages 294–304. * Doval et al. (2019) Doval, Yerai, Jose Camacho-Collados, Luis Espinosa-Anke, and Steven Schockaert. 2019\. On the robustness of unsupervised and semi-supervised cross-lingual word embedding learning. _CoRR_ , abs/1908.07742. * Dryer (2013) Dryer, Matthew S. 2013. Order of subject, object and verb. In Matthew S. Dryer and Martin Haspelmath, editors, _The World Atlas of Language Structures Online_. Max Planck Institute for Evolutionary Anthropology, Leipzig. * Ercan and Yıldız (2018) Ercan, Gökhan and Olcay Taner Yıldız. 2018. AnlamVer: Semantic model evaluation dataset for Turkish - Word similarity and relatedness. In _Proceedings of COLING_ , pages 3819–3836. * Ethayarajh (2019) Ethayarajh, Kawin. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. 
2024-09-04T02:54:58.838230
2020-03-10T17:42:28
2003.04875
{ "authors": "Manuel Schilling, \\'Etienne Wodey, Ludger Timmen, Dorothee Tell, Klaus\n H. Zipfel, Dennis Schlippert, Christian Schubert, Ernst M. Rasel, J\\\"urgen\n M\\\"uller", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26142", "submitter": "Manuel Schilling", "url": "https://arxiv.org/abs/2003.04875" }
arxiv-papers
# Gravity field modelling for the Hannover $10\text{\,}\mathrm{m}$ atom interferometer Manuel Schilling German Aerospace Center (DLR), Institute for Satellite Geodesy and Inertial Sensing, c/o Leibniz Universität Hannover, DLR-Institut, Welfengarten 1, 30167 Hannover, Germany Leibniz Universität Hannover, Institut für Erdmessung, Schneiderberg 50, 30167 Hannover, Germany Étienne Wodey Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167 Hannover, Germany Ludger Timmen Leibniz Universität Hannover, Institut für Erdmessung, Schneiderberg 50, 30167 Hannover, Germany Dorothee Tell Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167 Hannover, Germany Klaus H. Zipfel Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167 Hannover, Germany Dennis Schlippert Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167 Hannover, Germany Christian Schubert German Aerospace Center (DLR), Institute for Satellite Geodesy and Inertial Sensing, c/o Leibniz Universität Hannover, DLR-Institut, Welfengarten 1, 30167 Hannover, Germany Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167 Hannover, Germany Ernst M. Rasel Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167 Hannover, Germany Jürgen Müller Leibniz Universität Hannover, Institut für Erdmessung, Schneiderberg 50, 30167 Hannover, Germany (This is a post-peer-review, pre-copyedit version of an article published in Journal of Geodesy 94:122. The final authenticated version is available online at: https://dx.doi.org/10.1007/s00190-020-01451-y) Absolute gravimeters are used in geodesy, geophysics, and physics for a wide spectrum of applications. Stable gravimetric measurements over timescales from several days to decades are required to provide relevant insight into geophysical processes. Users of absolute gravimeters participate in comparisons with a metrological reference in order to monitor the temporal stability of the instruments and determine the bias to that reference. However, since no measurement standard of higher-order accuracy currently exists, users of absolute gravimeters participate in key comparisons led by the International Committee for Weights and Measures. These comparisons provide the reference values of highest accuracy compared to the calibration against a single gravimeter operated at a metrological institute. The construction of stationary, large scale atom interferometers paves the way towards a new measurement standard in absolute gravimetry used as a reference with a potential stability up to $1\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at $1\text{\,}\mathrm{s}$ integration time. At the Leibniz University Hannover, we are currently building such a very long baseline atom interferometer with a $10\text{\,}\mathrm{m}$ long interaction zone. The knowledge of local gravity and its gradient along and around the baseline is required to establish the instrument’s uncertainty budget and enable transfers of gravimetric measurements to nearby devices for comparison and calibration purposes. We therefore established a control network for relative gravimeters and repeatedly measured its connections during the construction of the atom interferometer. We additionally developed a 3D model of the host building to investigate the self-attraction effect and studied the impact of mass changes due to groundwater hydrology on the gravity field around the reference instrument. 
The gravitational effect from the building 3D model is in excellent agreement with the latest gravimetric measurement campaign, which opens the possibility to transfer gravity values with an uncertainty below the $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level.

Keywords: atom interferometry, gravity acceleration, absolute gravimetry, gravimeter reference

## Introduction

A variety of applications in geodesy, geophysics and physics require the knowledge of local gravity g [67]. These applications include observing temporal variations of the mass distribution in the hydrosphere, atmosphere and cryosphere, and furthermore the establishment and monitoring of height and gravity reference frames, the determination of glacial isostatic adjustment, and the realisation of SI (Système International d’unités) units, e. g., of force and mass [32, 28, 54]. The absolute value of gravity g is usually measured by tracking the free-fall of a test mass using a laser interferometer [33]. The operation of an absolute gravimeter (AG), especially the combination of several instruments in a project, requires special consideration of the offset to _true g_ and the change thereof. In addition, the long-term stability of absolute gravimeters is of particular relevance when measuring small gravity trends. For example, the determination of the glacial isostatic adjustment (GIA) on regional scales of around $1000\text{\,}\mathrm{km}$ [63] requires an instrument stable to the $20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level over several years. Extending this effort by deploying several AGs also requires the knowledge of the biases of all the instruments involved [36]. The lack of a calibration service with a $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ uncertainty requires the participation in key comparisons (KC, e. g. [9]), where the reference values are determined with an uncertainty of approximately $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. This uncertainty level requires the participation of multiple gravimeters and cannot be achieved by comparison against a single gravimeter operated at a metrological institute. However, the development of stationary atom interferometers, which can be operated as gravimeters, so-called quantum gravimeters (QG), may in the future provide such a superior reference, available for regular comparisons or on demand by the user. A major requirement in this respect is the control of systematic effects like wavefront aberration or the Coriolis effect. In this paper, we focus on the modelling and measurement of the local gravity field. We start by discussing the typical approaches for monitoring the long-term stability of an AG and tracing the measurements back to the SI (section 2). Then, after briefly describing the working principle of atomic gravimeters and the case for very long baseline atom interferometry (section 3), we present a gravity model for the Hannover Very Long Baseline Atom Interferometry (Hannover-VLBAI) facility, a new $10\text{\,}\mathrm{m}$-scale baseline atom interferometer in commissioning at the Leibniz University Hannover (section 4). Finally, we present the micro-gravimetric surveys performed at the instrument’s site (section 5) to assess the accuracy of the gravity model (section 6).
This paves the way towards control of the systematics in the atom interferometer and accurate transfers of measured g values between the VLBAI operating as a gravimeter and transportable AGs in a nearby laboratory.

## Gravimeter bias and SI traceability

Micro-g LaCoste FG5(X) [34] instruments represent the current state of the art in absolute gravimetry. They track the trajectories of a free-falling test mass with corner cubes by means of laser interferometry to determine the local acceleration of gravity g. These types of absolute gravimeters are referred to as _classical absolute gravimeters_ in the following text. As described by the 2015 CCM-IAG (Consultative Committee for Mass and related quantities – International Association of Geodesy) Strategy for Metrology in Absolute Gravimetry [5], there are two complementary paths for the traceability of absolute gravity measurements: a) calibration of incorporated frequency generators and b) additional gravimeter comparisons against a reference. The direct way of tracing absolute gravity measurements back to the SI goes through the calibration of their incorporated laser and oscillator to standards of length and time [68]. In high-accuracy instruments, the laser frequency is typically locked to a standard transition of molecular iodine [6, 44]. The time reference is usually given by a rubidium oscillator, which needs to be regularly compared with a reference oscillator to ensure its accuracy, as external higher-accuracy time sources are typically not available at measurement sites. In most cases, the oscillator’s frequency drift is linear ($<0.5\text{\,}\mathrm{mHz}$ per month, or $<1\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ per month) and a few calibrations per year are sufficient. However, [30] and [53] report on sudden frequency jumps, equivalent to several tens of $\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, due to increased concentrations of gaseous helium [43] when measuring near superconducting gravimeters; current publications refer to the Microsemi (formerly Symmetricom) SA.22c rubidium oscillator. Such higher concentrations might occur after installation, maintenance, or repair of a superconducting gravimeter and are unlikely during normal operation. The frequency drift changes to an exponential decrease after the helium event and may remain this way for years [51].

Figure 1: Degree of Equivalence (DoE) of joint participants of EURAMET.M.G-K1 [10], CCM.G-K2 [11], EURAMET.M.G-K2 [37] and EURAMET.M.G-K3 [9]. The participants are sorted by DoE of the first KC. The expanded uncertainty is given only for the last KC. Pilot Study (PS) indicates instruments of non-NMI/DI institutions. All AGs shown are laser interferometers, of which eight are FG5(X) type instruments.

The equivalence of gravity measurement standards and the definition of the gravity reference are established by international comparisons in the framework of the CIPM MRA (Mutual Recognition Agreement of the Comité International des Poids et Mesures). Since no higher-order reference instrument is available, key comparisons are held in an approximately two-year interval, alternating between CIPM key comparisons and regional comparisons. There, the instruments operated by National Metrology Institutes (NMI) and Designated Institutes (DI) are used to determine the Key Comparison Reference Value (KCRV).
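As an illustration of how a reference value and the resulting biases relate, the following minimal Python sketch computes a weighted mean as a simple stand-in for a KCRV and derives the corresponding degrees of equivalence. All readings and uncertainties are invented, and actual key comparison evaluations follow more elaborate, documented procedures.

```python
import numpy as np

# Hypothetical absolute gravity readings of five instruments at one site,
# given as offsets from a common nominal value, in nm/s^2, with their
# standard uncertainties (all values invented for illustration).
g_obs = np.array([35.0, -12.0, 8.0, 50.0, -25.0])
u_obs = np.array([20.0, 25.0, 15.0, 30.0, 22.0])

# Weighted mean as a simple stand-in for the Key Comparison Reference Value.
w = 1.0 / u_obs**2
kcrv = np.sum(w * g_obs) / np.sum(w)
u_kcrv = np.sqrt(1.0 / np.sum(w))

# Degree of Equivalence: bias of each instrument to the reference value.
# Its uncertainty accounts for the correlation between each observation
# and the reference value it contributed to.
doe = g_obs - kcrv
u_doe = np.sqrt(u_obs**2 - u_kcrv**2)

print(f"KCRV = {kcrv:+.1f} +/- {u_kcrv:.1f} nm/s^2")
for i, (d, u) in enumerate(zip(doe, u_doe), 1):
    print(f"  instrument {i}: DoE = {d:+6.1f} +/- {u:.1f} nm/s^2")
```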
The bias to the KCRV, or Degree of Equivalence (DoE), is then calculated for all individual instruments, including those without NMI/DI status participating in the so-called pilot study (PS), and serves as validation for their uncertainty. Figure 1 shows the joint participants, out of a total of 35 gravimeters, in the last four KCs held in Europe [10, 11, 37, 9]. One observes that the spread of DoE over all instruments is around $\pm 75\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, and at a similar level for the most extreme cases of individual instruments. Even though the DoEs of the instruments in these comparisons are typically within the uncertainties declared by the participants, figure 1 also shows the necessity of determining these biases of gravimeters, classical and quantum alike, to monitor an instrument’s stability in time. Biases can then be taken into account in gravimetric projects. The variation of the bias of an instrument can be explained by a variety of factors. For example, [35] show that a permanent change in the bias of a classical AG can occur during manufacturer service or unusual transport conditions (e. g. aviation transport). Also, [25, 26] identified, characterised and partially removed biases originating in the signal processing chain of FG5 gravimeters, e. g. due to cable length and fringe signal amplitude. Regional KCs are linked to a CIPM KC by a small number of common NMI/DI participants applying the so-called linking converter, typically around $\pm 10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ [22]. The underlying assumption is that instrumental biases of the NMI/DI instruments remain stable [8]. Otherwise, this would introduce an additional shift in the bias of all participating instruments of the regional KC and PS. Quantum gravimeters, based on matter wave interferometry with cold atoms, offer a fully independent design. They have demonstrated stabilities and accuracies at levels comparable to those from state of the art classical AGs by participating in KCs [16, 23] or common surveys with other instruments at various locations [13, 51]. The availability of improved QGs as gravity references provides an opportunity to enhance the stability of reference values obtained during key comparisons and therefore lead to an international gravity datum of better stability in time. By that alone, QGs could become a serious alternative to classical absolute gravimeters.

## Very long baseline atomic gravimetry

### Atom-interferometric gravimetry

Most atomic gravimeters use cold matter waves as free-falling test masses to measure absolute gravity. They exploit the coherent manipulation of the external degrees of freedom of these atomic test masses with light pulses to realise interferometers sensitive to inertial quantities and other forces. These techniques are for example used to perform precision measurements of fundamental constants [49, 3, 39], test fundamental physics [55, 48, 21], sense small forces [2] and perform gravimetry, gravity-gradiometry, and measure rotations with record instabilities and inaccuracies [31, 12, 15, 72, 50, 57].

Figure 2: Mach–Zehnder light-pulse atom interferometer geometry in a uniform acceleration field $\mathbf{a}$. At time $t_{0}$, the atomic matter wave is put in a superposition of momenta $p$ and $p+\hbar k_{\mathrm{eff}}$. The momenta are reversed at time $t_{0}+T$ to recombine the wave packets with a last light pulse at time $t_{0}+2T$.
The populations in the two momentum classes after the last light pulse allow extracting the interferometric phase $\Delta\phi$.

Atomic gravimeters typically realise the Mach–Zehnder light-pulse atom interferometer geometry [24] depicted in figure 2. In this analogue to the eponymous configuration for optical interferometers, the leading-order interferometric phase $\Delta\phi$ scales with the space-time area enclosed by the interferometer: $\Delta\phi=\mathbf{k}_{\mathrm{eff}}\cdot\mathbf{a}T^{2}$ (1) where $\hbar\mathbf{k}_{\mathrm{eff}}$ is the recoil transferred to the atomic wave packets by the atom-light interaction processes (cf. figure 2, $\hbar$ is the reduced Planck constant and $\mathbf{k}_{\mathrm{eff}}$ the effective optical wave vector), $\mathbf{a}$ the uniform acceleration experienced by the atoms during the interferometric sequence, and $T$ the pulse separation time. The full interferometer has a duration of $2T$. The knowledge of the instrument’s scale factor $k_{\mathrm{eff}}T^{2}$ and the measurement of the phase $\Delta\phi$ allow determining the projection of the acceleration $\mathbf{a}$ along $\mathbf{k}_{\mathrm{eff}}$. When $\mathbf{k}_{\mathrm{eff}}$ is parallel to $\mathbf{g}$, such an instrument can therefore be used as a gravimeter, measuring the total vertical acceleration of the matter waves used as test masses.

The Mach–Zehnder light-pulse atom interferometer works as follows. For each interferometric sequence, a sample of cold atoms is prepared in a time $T_{p}$. Then, at time $t=t_{0}$, the first atom-light interaction pulse puts the matter wave in a superposition of quantum states with different momenta $\mathbf{p}$ and $\mathbf{p}+\hbar\mathbf{k}_{\mathrm{eff}}$, thus effectively creating two distinct semi-classical trajectories. At time $t=t_{0}+T$, a second atom-light interaction process redirects the two atomic trajectories to allow closing the interferometer at time $t=t_{0}+2T$ with a third light pulse. Counting the population of atoms in the two momentum states provides an estimate of the interferometric phase $\Delta\phi$. Finally, the cycle of preparation of the cold atoms, coherent manipulation of the matter waves, and detection is repeated.

Since the atom-light interaction imprints the local phase of the light on the matter waves, the above measurement principle can be interpreted as measuring the successive positions of a free-falling matter wave at known times $t_{0}$, $t_{0}+T$, and $t_{0}+2T$ with respect to the light field. The inertial reference frame for the measurement system, similar to the superspring in FG5(X) gravimeters, is usually realised by a mirror retro-reflecting the light pulses, creating well-defined equiphase fronts. Practically, the interferometric phase $\Delta\phi$ is scanned by accelerating the optical wave fronts at a constant rate $\alpha$, effectively continuously tuning the differential velocity between the matter waves and the optical equiphase fronts. Assuming that $\mathbf{k}_{\mathrm{eff}}$ and $\mathbf{a}$ are parallel, the interferometric phase reads: $\Delta\phi=k_{\mathrm{eff}}\left(a-\frac{\alpha}{k_{\mathrm{eff}}}\right)T^{2}\ .$ (2) When $\alpha=k_{\mathrm{eff}}a$, the interferometric phase vanishes independently of the interferometer’s duration $2T$, allowing one to unambiguously identify this operation point. Physically, $\alpha=k_{\mathrm{eff}}a$ exactly compensates the Doppler effect experienced by the atomic matter waves due to the acceleration $a$.
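The operation point $\alpha=k_{\mathrm{eff}}a$ can be made concrete with a short numeric sketch. The parameters below, a two-photon Raman transition on the rubidium D2 line near $780\text{\,}\mathrm{nm}$, are illustrative assumptions of ours and not values quoted in the text:

```python
import numpy as np

# Illustrative parameters, assuming a two-photon Raman transition on the
# rubidium D2 line (not parameters quoted in the text):
wavelength = 780.241e-9               # m
k_eff = 2 * (2 * np.pi / wavelength)  # counter-propagating beams: k1 + k2 ~ 2k
g = 9.81                              # m/s^2, local gravity
T = 0.4                               # s, pulse separation time

# Leading-order Mach-Zehnder phase for a = g, equation (1): tens of millions
# of radians, far beyond 2*pi, hence the need for the compensating chirp.
delta_phi = k_eff * g * T**2

# Chirp rate alpha that nulls the phase, equation (2):
alpha = k_eff * g                     # rad/s^2

print(f"k_eff     = {k_eff:.3e} rad/m")
print(f"Delta phi = {delta_phi:.3e} rad")
print(f"alpha     = {alpha:.3e} rad/s^2 "
      f"({alpha / (2 * np.pi) / 1e6:.2f} MHz/s frequency chirp)")
```

With these numbers the required chirp is about $25\text{\,}\mathrm{MHz/s}$, a radio-frequency quantity that is directly traceable to time standards, which is the point made in the following paragraph.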
Therefore, the measurement of the acceleration $a$ amounts to a measurement of the acceleration rate $\alpha$, which can be traced back to the SI since it corresponds to frequency generation in the radio-frequency domain. Assuming white noise at a level $\delta\phi$ for the detection of the interferometric phase, the instrument’s instability is given by: $\delta a(\tau)=\sqrt{2T+T_{p}}\cdot\frac{\delta\phi}{k_{\mathrm{eff}}T^{2}}\cdot\frac{1}{\sqrt{\tau}}\ .$ (3) where $\tau$ is the measurement’s integration time. This expression reveals the three levers for reducing the measurement instability: decreasing the single shot noise level $\delta\phi$, increasing the scale factor $k_{\mathrm{eff}}T^{2}$, and minimising the sample preparation time $T_{p}$, as it contributes to the total cycle time without providing phase information. In transportable devices, record instabilities have been achieved by [12] with $\delta a=96\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at $\tau=1\text{\,}\mathrm{s}$. Commercial instruments like the Muquans AQG [31] reached instabilities of $500\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at $\tau=1\text{\,}\mathrm{s}$ with sample rates up to $2\text{\,}\mathrm{Hz}$. The dominant noise source is vibrations of the mirror realising the reference frame for the measurements.

The accuracy of such quantum gravimeters stems from the well-controlled interaction between the test masses and their environment during the measurement sequence. The main sources of inaccuracy in such instruments originate from uncertainties in the atom-light interaction parameters (e. g. imperfections of the equiphase fronts of the light wave), stray electromagnetic field gradients creating spurious forces, thus breaking the free-fall assumption, and limited knowledge of the inhomogeneous gravity field along the trajectories. Extensive characterisation of these effects led to uncertainties in QGs below $40\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, consistent with the results from CIPM key comparisons [15] or common surveys with classical AGs [12].

### Very Long Baseline Atom Interferometry

Very Long Baseline Atom Interferometry (VLBAI) represents a new class of ground-based atom interferometric platforms which extends the length of the interferometer's baseline from tens of centimetres, like in typical transportable instruments [12, 15], to multiple meters. According to equation (1), the vertical acceleration sensitivity of a Mach–Zehnder type atom interferometer scales linearly with the length of the baseline ($\sim aT^{2}$). Therefore, an increase in the length of the baseline potentially enables a finer sensitivity for the atomic gravimeter through an increased scale factor $k_{\mathrm{eff}}T^{2}$. A $10\text{\,}\mathrm{m}$-long baseline instrument can for example extend the interferometric time $2T$ to around $1\text{\,}\mathrm{s}$ if the atoms are simply dropped along the baseline, or up to $2.4\text{\,}\mathrm{s}$ if they are launched upwards in a fountain-like fashion. In the simple drop case, the velocity acquired by the atoms between their release from the source and the start of the interferometer leads to an interferometer duration shorter than half of the one for the launch case. For our apparatus, the distance between the top source chamber and the region of interest is around $2\text{\,}\mathrm{m}$ (see figure 3), constraining $T<400\text{\,}\mathrm{ms}$ for simple drops.
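A quick free-fall kinematics check reproduces these drop and launch limits. The sketch assumes about $2\text{\,}\mathrm{m}$ of free fall before an $8\text{\,}\mathrm{m}$ region of interest (cf. section 3.4); the exact numbers depend on details of the apparatus not given here.

```python
import numpy as np

g = 9.81      # m/s^2
d_fall = 2.0  # m, free fall from the source chamber to the region of interest
L_roi = 8.0   # m, usable length of the region of interest (section 3.4)

# Simple drop: atoms enter the region of interest with velocity v0 ...
v0 = np.sqrt(2 * g * d_fall)  # ~6.3 m/s
# ... and must complete the interferometer (duration 2T) before leaving it:
# L_roi = v0*(2T) + g*(2T)^2 / 2, solved for 2T.
two_T_drop = (-v0 + np.sqrt(v0**2 + 2 * g * L_roi)) / g
print(f"drop:   2T <= {two_T_drop:.2f} s  ->  T <= {two_T_drop/2*1e3:.0f} ms")

# Launch (fountain): the atoms traverse the region of interest upwards and
# downwards; a parabola spanning the full 8 m lasts
two_T_launch = 2 * np.sqrt(2 * L_roi / g)
print(f"launch: 2T <= {two_T_launch:.2f} s")
# ~2.55 s, slightly above the 2T = 2.4 s quoted in the text, which
# presumably includes operational margins
```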
Using realistic parameters ($T_{p}=3\text{\,}\mathrm{s}$, $\delta\phi=10\text{\,}\mathrm{mrad}$), equation (3) yields potential short-term instabilities for VLBAIs ($\tau=1\text{\,}\mathrm{s}$ integration time): $\begin{array}{ll}T=400\text{\,}\mathrm{ms}\text{: }&\delta a=8\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}\\ T=1.2\text{\,}\mathrm{s}\text{: }&\delta a=1\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}\end{array}$ (4) competing with the noise level of superconducting gravimeters [45, 46] while providing absolute values of the gravity acceleration g. Nevertheless, the increased scale factor $k_{\mathrm{eff}}T^{2}$ gained by the expanded baseline comes at the price of a stationary device with added complexity due to its size, and a vibration noise sensitivity magnified by the same scale factor as the gravitational acceleration for frequencies below $1/(2T)$. Hence, the use of VLBAIs as ultra stable gravimeters requires new developments in the control of environmental vibrations [19]. Also, time- and space-varying electromagnetic and gravity fields along the free-fall trajectories of the matter waves have a direct impact on the accuracy and stability of the instrument, as the corresponding spurious forces depart from the assumptions of equation (1), therefore leading to biases [7] and impacting the instrument’s effective height [60].

### Effective height

In order to compare measurements of a VLBAI gravimeter with other instruments, it is crucial to determine the effective height $z_{\mathrm{eff}}$ defined by: $g_{0}-\gamma z_{\mathrm{eff}}=\frac{\Delta\phi_{\mathrm{tot}}}{k_{\mathrm{eff}}T^{2}}$ (5) where $g_{0}\approx 9.81\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-2}$ is the value of gravity at $z=0$, $\gamma\approx 3\text{\,}\mathrm{\mu m}\text{\,}{\mathrm{s}}^{-2}\text{\,}{\mathrm{m}}^{-1}$ the magnitude of the linear gravity gradient, and $\Delta\phi_{\mathrm{tot}}$ the phase shift measured by the interferometer. The right-hand side is the value of gravity measured by the atom interferometer, including all bias sources. Restricting to first order in the gravity gradient $\gamma$, and applying a path-integral formalism, one gets [40]: $z_{\mathrm{eff}}=z_{0}-\dfrac{\Delta g}{\gamma}\quad\text{with}\quad\Delta g=\frac{7}{12}\gamma g_{0}T^{2}-\gamma\bar{v}_{0}T$ (6) where $z_{0}$ is the height of the start of the interferometer and $\bar{v}_{0}=v_{0}+\hbar k_{\mathrm{eff}}/(2m)$ the mean atomic velocity just after the interferometer opens ($v_{0}$ is the atomic velocity before the first beamsplitter, and $m$ is the atomic mass). This expression for $z_{\mathrm{eff}}$ is compatible with the one given for FG5 gravimeters by [38]. In particular, it only depends on the value of the gradient $\gamma$ through $v_{0}$ and $z_{0}$. Indeed, the interferometer is controlled in time, and the initial position and velocity $z_{0}$ and $v_{0}$ are therefore given by the free-fall motion of the atoms between the source chamber and the region of interest. In general, $z_{\mathrm{eff}}$ depends on the geometry of the atom interferometer. For the simple drop case in the Hannover VLBAI facility (see section 3.4), $z_{\mathrm{eff}}\approx 9.2\text{\,}\mathrm{m}$. Corrections to equation (6) must be taken into account to constrain the uncertainty on gravity at $z_{\mathrm{eff}}$ below $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$.
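Both the instability estimate of equation (4) and the effective height of equation (6) are straightforward to evaluate numerically, as in the sketch below. The effective wave vector assumes the two-photon rubidium transition of the earlier sketch, and the drop geometry in the last lines is a purely illustrative placeholder rather than the exact apparatus geometry behind the quoted $z_{\mathrm{eff}}\approx 9.2\text{\,}\mathrm{m}$.

```python
import numpy as np

g0 = 9.81        # m/s^2
k_eff = 1.61e7   # rad/m, assumed two-photon rubidium wave vector
gamma = 3.086e-6 # 1/s^2, assumed linear vertical gravity gradient

def instability(T, T_p=3.0, delta_phi=10e-3, tau=1.0):
    """Equation (3): acceleration instability in m/s^2 at integration
    time tau, for pulse separation T and preparation time T_p (s)."""
    return np.sqrt(2 * T + T_p) * delta_phi / (k_eff * T**2) / np.sqrt(tau)

for T in (0.4, 1.2):
    print(f"T = {T:3.1f} s: delta_a = {instability(T) * 1e9:.1f} nm/s^2")
# -> ~7.6 and ~1.0 nm/s^2, matching equation (4) after rounding

def effective_height(z0, v0_bar, T):
    """Equation (6): z0 is the height (m) at which the interferometer
    opens, v0_bar the mean atomic velocity (m/s, negative downward)
    just after the first beamsplitter."""
    delta_g = (7.0 / 12.0) * gamma * g0 * T**2 - gamma * v0_bar * T
    return z0 - delta_g / gamma

# Placeholder geometry: interferometer opens after ~2 m of free fall.
z0 = 10.8                          # m, hypothetical value
v0_bar = -np.sqrt(2 * g0 * 2.0)    # ~ -6.3 m/s, recoil term neglected
print(f"z_eff = {effective_height(z0, v0_bar, 0.4):.1f} m")
# Illustrative only; with the true apparatus geometry and sign
# conventions the text quotes z_eff ~ 9.2 m.
```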
On the one hand, terms of order $\gamma^{2}$ and higher in $\Delta\phi_{\mathrm{tot}}$ contribute at the sub-$\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level. On the other hand, one can use perturbation theory [66] to estimate the effect of the non-homogeneous gravity gradient along the interferometer’s baseline. Using the data discussed here, we evaluate this effect below $5\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, therefore lying within the model’s uncertainty (see section 6) and similar to the known contribution for FG5(X) gravimeters [60]. Finally, when using multiple concurrent interferometers at different heights, the effect of a homogeneous gravity gradient can be mitigated by measuring it simultaneously with the acceleration value [4]. In this case, the effective height corresponds to the position of the mirror giving the inertial reference. Detailed modelling is however still necessary to push the uncertainty budget into the sub-$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ range and calibrate the instrument to the level of its instability.

### The Hannover VLBAI facility

We introduce the Hannover Very Long Baseline Atom Interferometry facility, an instrument developed at the newly founded Hannover Institute of Technology (HITec) of the Leibniz University Hannover, Germany. It builds on the concepts outlined in section 3.2 to provide a platform to tackle challenges in extended baseline atom interferometry. In the long term, it aims at tests of our physical laws and postulates like for example the universality of free fall [20], searches for new forces or phenomena, and the development of new methods for absolute gravimetry and gravity gradiometry [56].

[Figure 3 schematic: upper and lower atomic sources, the ultra-high vacuum baseline with its magnetic shield, the inertial reference (vibration-isolated mirror), and the region of interest for precision atom interferometry, on a height axis from $0\text{\,}\mathrm{m}$ to $15\text{\,}\mathrm{m}$.]

Figure 3: The Hannover Very Long Baseline Atom Interferometry (VLBAI) facility and its three main elements: source chambers, baseline, and inertial reference system with its vacuum vessel (VTS). The baseline and upper source chambers are supported by an aluminium structure (VSS, dark blue). The region of interest for atom interferometry is shaded in light blue.

The Hannover VLBAI facility is built around three main elements shown in figure 3:

1. Ultra-cold samples of rubidium and ytterbium atoms are prepared in the two _source chambers_, allowing for both drop (max $T=400\text{\,}\mathrm{ms}$) and launch (max $T=1.2\text{\,}\mathrm{s}$) modes of operation. Advanced atom-optics promise enhanced free-fall times by relaunching the wave packets during the interferometric sequence [1];
2. The reference frame for the inertial measurements is realised by a _seismically isolated mirror_ at the bottom of the apparatus. The seismic attenuation system (SAS) uses geometric anti-spring filters [69] to achieve vibration isolation above its natural resonance frequency of $320\text{\,}\mathrm{mHz}$. The isolation platform is operated under high vacuum conditions to reduce acoustic and thermal coupling. The vacuum vessel containing the SAS is denoted VTS in sections 4–6;
3. The $10.5\text{\,}\mathrm{m}$-long _baseline_ consists of a $20\text{\,}\mathrm{cm}$ diameter cylindrical aluminium vacuum chamber and a high-performance magnetic shield [71].
The interferometric sequences take place along this baseline, in the $8\text{\,}\mathrm{m}$-long central _region of interest_ where the longitudinal magnetic field gradients fall below $2.5\text{\,}\mathrm{nT}\text{/}\mathrm{m}$.

In order to decouple the instrument from oscillations of the walls of the building, the apparatus is only rigidly connected to the foundations of the building. The VTS (and SAS) and lower source chamber are mounted on a baseplate directly connected to the foundation. The baseline and upper source chamber are supported by a $10\text{\,}\mathrm{m}$ high aluminium tower, denoted as VLBAI support structure (VSS) in the following sections. The footprint of the device on the floor is $2.5\text{\,}\mathrm{m}\times 2.5\text{\,}\mathrm{m}$. Traceability to the SI is ensured by locking the instrument’s frequency references to standards at the German NMI (PTB Braunschweig) via an optical link [42]. All heights are measured from the instrument’s baseplate. The altitude of this reference point in the German height datum is $50.545\text{\,}\mathrm{m}$.

## Environmental model

Figure 4: Views of HITec: (a) cross-section (not to scale) of the VLBAI laboratories with the gravimetric network of 2019 along two vertical profiles and the region of interest (blue). The indicated groundwater variation (thick bar) refers to an average annual amplitude of $0.3\text{\,}\mathrm{m}$; the thin bar indicates extreme low and high levels. The height $z=0\text{\,}\mathrm{m}$ refers to the top of the baseplate. (b) Top view of HITec, showing the orientation of our coordinate system, the location of the VLBAI facility (blue) and the gravimetry lab including piers for gravimeters (light grey).

The VLBAI facility is implemented in the laboratory building of the Hannover Institute of Technology. The building consists of three floors (one basement level, two above street level) and is divided into a technical part, mainly containing the climate control systems, and a section with the laboratories (see figure 4). In the laboratory part, a so-called backbone gives laboratories access to the technical infrastructure and divides the building into two parts along its long axis. The backbone and southern row of laboratories have a footprint of $13.4\text{\,}\mathrm{m}\times 55.4\text{\,}\mathrm{m}$ and extend approximately $5\text{\,}\mathrm{m}$ below surface level. The northern row of laboratories is fully above ground except for the gravimetry laboratory, which is on an intermediate level, around $1.5\text{\,}\mathrm{m}$ below street level and $3.4\text{\,}\mathrm{m}$ above basement level (see figure 4(a)). The foundation of the building is $0.5\text{\,}\mathrm{m}$ thick except beneath the gravimetry laboratory, which has a separate, $0.8\text{\,}\mathrm{m}$ thick one. Figure 4(a) also shows the measurement points for the relative gravimeters along the VLBAI main axis and on a second validation profile next to the VLBAI, occupied using tripods, which were used for the measurements presented in section 5.

### Physical model

Following the methods described by [27], we discretise the HITec building into a model of rectangular prisms that accounts for more than $500$ elements. The geometry is extracted from the construction plans, and we verified all the heights by levelling, also including a benchmark with a known elevation in the German height datum.
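For illustration, a minimal sketch of the building block of such a forward model, the closed-form vertical attraction of a homogeneous rectangular prism (a Nagy-type formula), is given below. The authors work in MATLAB; this Python version, including its sign conventions, is our own and is only valid for prisms that do not contain the computation point.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(x, y, z, rho):
    """Vertical attraction (m/s^2, positive downward) at the origin of a
    homogeneous rectangular prism of density rho (kg/m^3), with corner
    coordinates x=(x1,x2), y=(y1,y2), z=(z1,z2) given relative to the
    computation point, z positive downward."""
    def kernel(xi, yj, zk):
        r = np.sqrt(xi**2 + yj**2 + zk**2)
        return (zk * np.arctan(xi * yj / (zk * r))
                - xi * np.log(yj + r) - yj * np.log(xi + r))
    gz = 0.0
    for xi, sx in ((x[1], 1.0), (x[0], -1.0)):
        for yj, sy in ((y[1], 1.0), (y[0], -1.0)):
            for zk, sz in ((z[1], 1.0), (z[0], -1.0)):
                gz += sx * sy * sz * kernel(xi, yj, zk)
    return G * rho * gz

# Example: a 10 m x 10 m x 0.5 m concrete slab (2500 kg/m^3) whose top lies
# 1 m below the computation point, centred underneath it:
gz = prism_gz((-5.0, 5.0), (-5.0, 5.0), (1.0, 1.5), 2500.0)
print(f"gz = {gz * 1e9:.0f} nm/s^2")
# ~410 nm/s^2, below the infinite-slab (Bouguer) bound 2*pi*G*rho*t
# ~524 nm/s^2, as expected for a slab of finite extent
```

Summing such contributions over all prisms of the building model, with the densities listed below, yields the total self-attraction at any point of interest.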
The building is embedded in a sedimentary ground of sand, clay, and marl ($2050\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$). For the edifice itself, we include all walls and floors made of reinforced concrete ($2500\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$), the $7\text{\,}\mathrm{cm}$ to $13\text{\,}\mathrm{cm}$ thick liquid flow screed covering the concrete floors in the labs ($2100\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$), and the gypsum drywalls ($800\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$). We also incorporate the insulation material ($150\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$) and gravel on the roof ($1350\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$). We use a simplified geometry to model the large research facilities in the surroundings. This is for example the case for the Einstein-Elevator [29], a free-fall simulator with a weight of $165\text{\,}\mathrm{t}$ and horizontal distances of $32\text{\,}\mathrm{m}$ and $16\text{\,}\mathrm{m}$ to the VLBAI facility and gravimetry laboratory, respectively. Finally, we account for laboratory equipment, e. g. optical tables ($550\text{\,}\mathrm{kg}$ each), according to the configuration at the time of the gravimetric measurement campaigns. During the first measurements (2017), the interior construction was still in progress, and the laboratories were empty. By the time of the second campaign (2019), the building was fully equipped. The VLBAI support structure (VSS) and the vacuum tank (VTS) for the seismic attenuation system were in place. The VLBAI instrument (atomic sources, magnetic shield, $10\text{\,}\mathrm{m}$ vacuum tube) and seismic attenuation system were completed after the second campaign.

Due to their inclined or rounded surfaces, the VLBAI experimental apparatus and its support structure require a more flexible method than rectangular prisms to model their geometry. We apply the method described by [41] and divide the surface of the bodies to be modelled into polygonal faces to calculate the gravitational attraction from surface integrals. Contrary to the rectangular prisms method, there are only few restrictions on the underlying geometry. Most notably, all vertices of a face must lie in one plane, and the normal vectors of all faces must point outward of the mass. For example, normal vectors of faces describing the outside surface of a hollow sphere must point away from the sphere, and normal vectors on the inside surface must point towards the centre, away from the mass of the wall of the sphere. We extract the geometry of the VLBAI facility components from their tridimensional CAD model through an export in STL (stereolithography, or standard triangulation language) format [47]. This divides the surface of the bodies into triangular faces, therefore ensuring planar faces by default. Moreover, the STL format encodes normal vectors pointing away from the object. Both prerequisites for the polygonal method by [41] are thus met. Using this method, the VSS (aluminium, $2650\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$, total weight $5825\text{\,}\mathrm{kg}$) consists of roughly $86000$ faces, and the VTS and corresponding baseplates (stainless steel, $8000\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$, total weight $2810\text{\,}\mathrm{kg}$) contain $187000$ faces, mostly due to the round shape and fixtures of the VTS.
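The two prerequisites named above can be verified programmatically once the STL triangles are loaded. The sketch below, with invented array layouts rather than the authors' pipeline, checks the normal orientation both against the stored facet normals and via the signed volume from the divergence theorem, where a positive volume implies consistently outward-pointing winding:

```python
import numpy as np

def signed_volume(tris):
    """Signed volume (m^3) of a closed triangulated surface via the
    divergence theorem; positive if the vertex winding (right-hand rule)
    produces outward-pointing normals, negative if the surface is inverted."""
    v1, v2, v3 = tris[:, 0], tris[:, 1], tris[:, 2]
    return np.einsum('ij,ij->i', v1, np.cross(v2, v3)).sum() / 6.0

def normals_agree(tris, stored_normals):
    """Fraction of faces whose stored STL normal agrees in direction with
    the normal computed from the vertex order."""
    n = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    return (np.einsum('ij,ij->i', n, stored_normals) > 0).mean()

# Toy example: a unit tetrahedron with consistent outward winding.
tet = np.array([
    [[0, 0, 0], [0, 1, 0], [1, 0, 0]],  # bottom face, outward normal -z
    [[0, 0, 0], [1, 0, 0], [0, 0, 1]],  # face y = 0, outward normal -y
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],  # face x = 0, outward normal -x
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],  # slanted face, outward (1,1,1)
], dtype=float)
stored = np.array([[0., 0., -1.], [0., -1., 0.],
                   [-1., 0., 0.], [1., 1., 1.]])  # need not be unit length

V = signed_volume(tet)            # +1/6: normals point outward
mass = 2650.0 * V                 # e.g. aluminium density, as for the VSS
print(f"V = {V:.4f} m^3, mass = {mass:.1f} kg, "
      f"normals consistent: {normals_agree(tet, stored):.0%}")
```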
As the overall computation time to extract the attraction of these components with a $\mathrm{cm}$ resolution on both vertical profiles remains in the range of minutes on a desktop PC, we do not need to simplify the models. The Monte Carlo simulations described in section 6 nevertheless require the computing cluster of the Leibniz University Hannover (LUH). We use MATLAB (version 9.4.0.813654, R2018a) to perform the numerical calculations. As a cross-check, we implemented both the rectangular prisms and polyhedral bodies methods for the calculation of the attraction effect of the main frame of the HITec building. Both approaches agree within floating point numerical accuracy.

### Time variable gravity changes

Mostly for the benefit of the future operations of the VLBAI, we include the effects of groundwater level changes, atmospheric mass change, and Earth’s body and ocean tides in our modelling. This is necessary for the individual gravimetry experiment (and other physics experiments as well) in the VLBAI on the one hand, and for comparing measurements from different epochs, e. g. with different groundwater levels, on the other hand. Previous investigations in the gravimetry lab of a neighbouring building showed a linear coefficient of $170\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ per meter change in the local groundwater table [64]. This corresponds to a porosity of more than $30\,\%$ of the soil [17]. For our model, we adopt a pore volume of $30\,\%$, which has to be verified by gravimetric measurements and correlation with local groundwater measurements. Two automatic groundwater gauges are available around the building: one installed during the construction work and a second with records dating back several decades, also used by [64]. The effect of atmospheric mass changes is calculated using the ERA5 atmospheric model provided by the European Centre for Medium-Range Weather Forecasts (https://www.ecmwf.int) and the methods described by [51]. Tidal parameters are extracted from observational time series [58, 52]. Other temporal gravity changes are not in the scope of this work. Currently, time variable gravity is also monitored with the gPhone-98 gravimeter of the Institute of Geodesy (IfE) at the LUH. In the long term, we consider the addition of a superconducting gravimeter for this purpose when the VLBAI facility is fully implemented and the experimental work begins. The support of a superconducting gravimeter is also vital in the characterisation of new gravimeters [12].

### Self-attraction results

Figure 5 shows the vertical component of the gravitational acceleration generated by the building, equipment, VSS and VTS. The VLBAI main axis is in the centre of the left plot ($x=0\text{\,}\mathrm{m}$). The large structures around $5\text{\,}\mathrm{m}$ and $10\text{\,}\mathrm{m}$ correspond to the floor levels. Smaller structures are associated with, for example, optical tables or the VSS. The right panel of figure 5 highlights the attraction calculated for the main axis ($x=0\text{\,}\mathrm{m}$) and for a second profile along $x=-1.8\text{\,}\mathrm{m}$ and $y=0\text{\,}\mathrm{m}$. The first profile shows a smooth curve except for the bottom $2\text{\,}\mathrm{m}$, which are affected by the VTS. In this model, the part above $2\text{\,}\mathrm{m}$ on the main axis is empty space.
The second profile, chosen as a sample from the xz-plane, passes through the floors, hence the zig-zag features around $5\text{\,}\mathrm{m}$ and $10\text{\,}\mathrm{m}$. While the main axis will later be occupied by the instrument’s baseline, this second profile, similar to the validation profile, represents a location that will always remain accessible to gravimeters.

Figure 5: Calculated gravitational attraction from the building, large laboratory equipment, VSS and VTS in the xz-plane (left) and, as examples, on two profiles (right).

### Effect of groundwater level changes

Based on the extensive groundwater level recordings from the gauge near the HITec building, we study the impact of groundwater level changes (see also [67]) on the gravitational attraction inside the building, specifically along the VLBAI main and validation profiles, as well as in the gravimetry laboratory. Due to the layout of the different basement levels in the building (see figure 4(a)), a change of the groundwater table affects gravity in the VLBAI laboratories differently than in the gravimetry lab. Depending on the groundwater level, the foundation beneath the VLBAI laboratories can be partially within the groundwater table, whereas this is never the case for the gravimetry laboratory. As shown on figure 4(a), the mean groundwater table is nevertheless below the level of the foundation below the VLBAI laboratories. Therefore, at certain points of the average annual cycle of amplitude $0.3\text{\,}\mathrm{m}$, the groundwater table will rise only around the foundation of the VLBAI laboratories, whereas its level will still increase below the gravimetry laboratory. This effect is even more pronounced in years when the average cycle amplitude is exceeded (around one in four years).

Figure 6: Effect of groundwater variations (all heights in the height system of the model, cf. figure 4(a)) on gravity in the gravimetry lab (left) and along the VLBAI axis (right) with respect to the mean groundwater level (dotted line). The dashed line indicates the bottom of the foundation below the VLBAI. The coloured lines indicate the change of gravity $\delta g_{\mathrm{gw}}$ at various heights in the gravimetry and VLBAI laboratories. The height of the gravimetry piers in the height system of the model is $3.35\text{\,}\mathrm{m}$.

Figure 6 illustrates the different influence of the groundwater table level on gravity in the VLBAI and gravimetry laboratories. The estimated change of gravity $\delta g_{\mathrm{gw}}$ due to the attraction corresponding to groundwater level variations is presented for different heights above the gravimetry pier and along the VLBAI main axis. As the groundwater level is always changing directly beneath the instrument piers in the gravimetry laboratory, we expect an almost linear change of gravity with changing groundwater level. The change of gravity is also almost independent of the height above the pier, as shown by the almost identical lines for $z=3.35\text{\,}\mathrm{m}$ directly on the pier and $1.4\text{\,}\mathrm{m}$ above the pier, covering the instrumental heights of transportable gravimeters. Therefore, AGs with various sensor heights, e. g., A-10 and FG5X, are affected in the same manner. The increase of $\delta g_{\mathrm{gw}}$ is $32\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ in an average year. This behaviour is different in the VLBAI laboratories. In current records, the groundwater level never fell below the foundation of the backbone (cf. figure 4(a)).
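As an order-of-magnitude cross-check of these modelled values, the infinite-slab (Bouguer) approximation for water filling the pore space can be evaluated in a few lines. The slab admittance ignores the finite extent of the water body and the basement geometry that the full model captures, so it only brackets the expected size of the effect.

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2
rho_w = 1000.0  # kg/m^3, density of water

def slab_admittance(porosity):
    """Bouguer-slab attraction change, in nm/s^2 per metre of groundwater
    level change, for water filling the given soil pore volume."""
    return 2 * np.pi * G * rho_w * porosity * 1e9

print(f"{slab_admittance(0.3):.0f} nm/s^2 per m")  # ~126 for 30 % pores
print(f"{slab_admittance(0.4):.0f} nm/s^2 per m")  # ~168, close to the
# observed 170 nm/s^2 per m from the neighbouring building (section 4.2)

# Scale of an average annual cycle (0.3 m) with the adopted 30 % pore volume:
print(f"{slab_admittance(0.3) * 0.3:.0f} nm/s^2")  # ~38, the same order as
# the modelled 32 nm/s^2 increase on the gravimetry piers
```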
The different behaviour in the VLBAI laboratories is seen in the small divergence (up to $3\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$) for groundwater levels below the foundation of the VLBAI (dashed line). Once the groundwater level reaches the lower edge of the VLBAI foundation, gravity will not increase linearly along the VLBAI main axis as the groundwater rises further. Moreover, in this situation, the effect has a different magnitude depending on the height in the room. In a year with the average amplitude of groundwater level variation, ca. $\pm 0.15\text{\,}\mathrm{m}$ around the line indicating the mean groundwater level, $\delta g_{\mathrm{gw}}$ will differ by $5\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ between the basement and the top floor. In years exceeding the average groundwater variation, the difference between the basement and upper levels increases further. This effect is within $\pm 2\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ on the validation profile in the average groundwater cycle. These observations will be crucial when comparing AGs in the gravimetry laboratory to the VLBAI facility operated as a quantum gravimeter. Depending on the geometry of a specific atom interferometer realisation, the instrumental height of the VLBAI gravimeter changes and can introduce changes in the measured value of g of more than $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ as a result of the groundwater effect in years with a higher than usual groundwater level. The magnitude of $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ is larger than the targeted accuracy of the VLBAI and also a relevant size for classical AGs in comparisons. It should also be noted that the model only calculates the gravitational attraction of the groundwater variation. A potential vertical displacement of the ground itself is currently not taken into account, leading to a possible underestimation of the effect. In order to track the effect of groundwater level changes more accurately, we plan to extend the findings of [64] by correlating periodic gravimetric measurements on the validation profile in the VLBAI laboratories with the recordings of the two groundwater level gauges around the building. This should in particular allow us to take into account that, due to capillarity effects, the groundwater level will probably not sink uniformly below the foundation beneath the VLBAI laboratories once it reaches that level.

## Gravimetric measurements

In June 2017 and August 2019, we performed surveys using relative gravimeters to verify our model from section 4 along the VLBAI main and validation profiles. This approach was already demonstrated in [54], in which the gravity field impact of a $200\text{\,}\mathrm{kN}$ force standard machine at the Physikalisch-Technische Bundesanstalt in Braunschweig was modelled. That model was verified with gravimetric measurements prior to and after the installation of the force machine. The difference between the modelled impact and the measurement was within the uncertainty of the gravimeters used. For each measurement point, we measured its connection to at least one other point and applied the step method with ten connections [65]. A connection corresponds to one gravity difference observation between two points. Ten connections require five occupations of a measurement point with a gravimeter. We measured most connections with at least two different instruments, reducing the outcomes to a mean instrumental height of $0.22\text{\,}\mathrm{m}$ above ground or platform.
We then performed a global least-squares adjustment using the Gravimetry Net Least Squares Adjustment (GNLSA) software from IfE [70]. The measurements are also calibrated in this process. We determined the individual calibration factors of the gravimeters on the Vertical Gravimeter Calibration Line in Hannover [61, 59] at least once in the week prior to the measurement campaigns. The software also corrects Earth tides, applying our observed parameters, and atmospheric mass changes by means of the linear factor of $3\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}\text{/}\mathrm{hPa}$ with respect to normal air pressure at station elevation. In order to account for instrumental drift in the global adjustment, we treat each day and each instrument independently and use a variance component estimation to weight the measurements in the global network adjustment. The specific groundwater effect discussed in section 4.4, with its different magnitudes depending on height, does not apply for either 2017 or 2019 because the groundwater levels were below the foundation of the VLBAI in both years.

### 2017 Gravimetry campaign

We first mapped the gravity profile along the VLBAI profiles in June 2017, when the HITec building was still under construction and the VLBAI experimental apparatus not yet installed. Using the Scintrex CG3M-4492 (CG3M) and ZLS Burris B-114 (B-114) spring gravimeters [62, 52], we measured a total of $147$ connections between seven positions spaced by ca. $2\text{\,}\mathrm{m}$ along the VLBAI main axis, nine positions on the validation profile, and two points outside of the building. We used a scaffolding to access the measurement points on the main axis. However, although the scaffold was anchored against the walls, the uppermost platforms were too unstable to ensure reliable measurements. The B-114 was only able to measure on the bottom three positions, because the feedback system was not powerful enough to null the oscillating beam on the upper levels. The four upper levels were only occupied by the CG3M. We connected each point on the scaffold to another one on the same structure and to the closest fixed floor level, at a point belonging to the validation profile. As shown in figure 4(a), the validation profile included measurements on the floor and on different sized tripods to determine the gradients. The variance component estimation gives a posteriori standard deviations for a single gravity tie observation of $50\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for the B-114 and $100\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for the CG3M. The standard deviations for the adjusted gravity values range from $15\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $42\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ with a mean value of $28\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. The standard deviations of the adjusted gravity differences vary from $21\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ between fixed floor levels to $59\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ between consecutive levels on the scaffold. The transfer of height from the upper floor to the basement through the intermediate levels on the scaffold showed a $2\text{\,}\mathrm{mm}$ discrepancy compared to the heights from levelling.
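The core of such a network adjustment can be sketched in a few lines: gravity differences are modelled as differences of unknown point values plus an instrumental drift, and solved in a least-squares sense. The observation values, network layout and single drift parameter below are invented for illustration; the GNLSA software [70] additionally estimates calibration factors and weights whole instrument-days via variance component estimation.

```python
import numpy as np

# Hypothetical mini-network: observations are gravity differences
# dg = g[to] - g[from] + drift * t; unknowns are point gravity values
# (datum fixed by g[0] = 0) plus one linear drift for one instrument-day.
obs = [  # (from, to, time [h], dg_obs [nm s^-2]) -- invented numbers
    (0, 1, 0.5, -3120.0), (1, 0, 1.0, 3118.0),
    (0, 2, 1.5, -6241.0), (2, 1, 2.0, 3122.0), (1, 0, 2.5, 3119.0),
]
n_pts = 3
A = np.zeros((len(obs), (n_pts - 1) + 1))   # columns: g[1], g[2], drift
l = np.zeros(len(obs))
for row, (a, b, t, dg) in enumerate(obs):
    if a > 0: A[row, a - 1] -= 1.0          # -g[from]
    if b > 0: A[row, b - 1] += 1.0          # +g[to]
    A[row, -1] = t                          # drift column
    l[row] = dg
x, *_ = np.linalg.lstsq(A, l, rcond=None)
print("g relative to point 0 [nm s^-2]:", x[:-1], " drift [nm s^-2 / h]:", x[-1])
```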
To account for this discrepancy, we included the corresponding $2\text{\,}\mathrm{mm}\cdot 3\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}\text{/}\mathrm{mm}=6\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ as a systematic uncertainty for the adjusted gravity values measured on the scaffold. We also account for a $1\text{\,}\mathrm{mm}$ uncertainty on the determination of the relative gravimeter sensor height.

### 2019 Gravimetry campaign

Figure 7: Measurement at the VSS in 2019 with the B-64 (foreground) on the validation profile and the CG6 (background) inside the VSS on a platform with an operator wearing a security harness. The B-64 is operated on a small tripod to raise the sensor height closer to the CG6 sensor height.

We mapped the gravity profile along the VLBAI axes in a more extensive manner in summer and fall 2019. Most measurements were performed in one week of August 2019, adding two days in October and November 2019. We used moveable platforms inside the VSS, installed in June 2019, and could measure on $16$ levels on the main axis, spaced by $0.45\text{\,}\mathrm{m}$ to $0.95\text{\,}\mathrm{m}$. The scheme for the validation profile did not change. The layout of the network is depicted in figure 4(a). For this campaign, we used the CG3M, the Scintrex CG6-0171 (CG6), and the ZLS Burris B-64 (B-64) spring gravimeters [62, 59, 52]. Owing to the high mechanical stability of the VSS, measurements along the main axis were unproblematic for all instruments and the measurement noise was at a similar level on the moveable platforms and on the fixed floors (see figure 7). All but one position were occupied with at least two gravimeters, amounting to $439$ connections in the network adjustment. The a posteriori standard deviations (single gravity tie measurement) of the observations range from $15\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $60\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, with more than $50\,\%$ below $30\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. The higher standard deviations are a result of two days of measurements with the CG3M and connections to two particular positions outside of the region of interest of the VLBAI. The standard deviations of adjusted gravity values in the network range from $7\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $19\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ with a mean of $9\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. This improvement, compared to the previous campaign, can be attributed to the stability of the VSS, the addition of the CG6, and the total number of measurements performed. The height of the moveable platforms inside the VSS was determined by a combination of levelling and laser distance measurements (Leica Disto D210) to two fixed platforms and the ceiling. For the height determination of the platforms, the uncertainty is $1\text{\,}\mathrm{mm}$ due to the laser distance measurement. We also account for a $1\text{\,}\mathrm{mm}$ uncertainty in the determination of the instrumental height above the platforms.

## Combination of model and measurement

The measurement and model results along the VLBAI main and validation profiles are presented in figure 8. Figure 8(a) shows the total variation of gravity along the main axis. The plot is dominated by the normal decrease of gravity with height. The effect of the building can be better seen when removing the change of gravity with height and visualising only the attraction effect of the building and laboratory equipment, as in figure 8(b).
There, the model corresponds to the configuration for the 2019 campaign and is identical to the $x=0\text{\,}\mathrm{m}$, $y=0\text{\,}\mathrm{m}$ line in figure 5. Figure 8(d) shows the model and measurements along the validation profile. The models presented in figure 8 use the nominal values for the densities of building elements (concrete floors and walls, drywalls, etc.). Since these can have variations over the building, we performed a Monte Carlo simulation ($50000$ runs) varying the densities of the corresponding model elements by $\pm 5\,\%$ according to a normal distribution. This leads to a variation of attraction of $\pm 27\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $\pm 37\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for heights between $4\text{\,}\mathrm{m}$ and $13\text{\,}\mathrm{m}$, as shown by the thin blue lines in figures 8(b)–8(d). Using a uniform distribution of the density parameters increases the variability by around $20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. The VSS and VTS are not part of the Monte Carlo simulation since their geometry and materials are well known.

(a) central axis: gravity variation (b) central axis: model (c) central axis: residuals (d) validation profile: model

Figure 8: Measurement and model results on the VLBAI central axis (8(a)–8(c)) and the validation profile (8(d)). The shaded area in (8(a)–8(c)) indicates the region of interest. The total variation of gravity along the central axis is shown in (8(a)). The modelled and measured attraction by the environment (with the change of gravity with height removed) on the central and validation profile is shown in (8(b)) and (8(d)). The errorbars indicate the standard deviations from the network adjustment and the model simulations according to equation (7). The maximum and minimum results of the $\pm 5\,\%$ density variations from the Monte Carlo (MC) simulation of model parameters are indicated by the thin blue lines. The residuals of observations minus model $\delta g_{\mathrm{omc}}$ are given in (8(c)) along with the standard deviation of the model $\sigma_{\text{mod}}$ according to equation (8).

The final location of the VLBAI facility and its main axis could only be approximated to the $\mathrm{cm}$-level during the measurement campaigns because of necessary installation tolerances. We estimated the effect of a horizontal variation of $\pm 3\text{\,}\mathrm{cm}$ and a vertical variation of $\pm 2\text{\,}\mathrm{mm}$ in a Monte Carlo simulation. The total amplitude of the variations at the locations of the gravimetric measurements is within $\pm 2\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, with a mean standard deviation of $0.3\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for the horizontal and $0.4\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for the vertical component along the main axis. The measurements, i. e. the markers in figure 8, are the result of the gravity network adjustment. Additionally, we removed the effect of the change of gravity with height for figures 8(b)–8(d). For this, the free air gradient is modified with a model of the soil surrounding HITec. As the density is only known to a certain degree, the Monte Carlo simulation also included the ground around HITec. The standard deviation of the simulation results for each gravimeter position is added to the measurement standard deviation by error propagation.
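Because the attraction is linear in the density of each element, such a Monte Carlo run can reuse precomputed unit-density attractions, as in the following sketch. The element attractions and densities below are invented placeholders; in the full model they come from the prism and polyhedron evaluations described in section 4.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each model element gets a normally distributed density (nominal +/- 5 %);
# the total attraction at one height is the density-weighted sum of
# precomputed unit-density element attractions (invented values here).
rho0   = np.array([2500.0, 2500.0, 800.0])       # nominal densities [kg/m^3]
g_unit = np.array([1.2e-10, -0.8e-10, 0.3e-10])  # attraction per unit density

samples = np.array([
    ((1.0 + 0.05 * rng.standard_normal(rho0.size)) * rho0 * g_unit).sum()
    for _ in range(50_000)                        # 50000 runs as in the text
])
print(samples.std() * 1e9, "nm/s^2")              # spread of a few tens of nm/s^2
```

The spread of `samples`, evaluated at each measurement height, is what enters as $\sigma_{\mathrm{MC}}$ in equation (8) below.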
The standard deviations from these simulations range from $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at the height of $4\text{\,}\mathrm{m}$ to $35\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at the topmost position. This is also reflected in the increase in the standard deviations indicated by the errorbars in figure 8(c). The uncertainty of the measurements now consists of the following components:

$\sigma_{\rm obs}=\sqrt{\sigma_{g}^{2}+\sigma_{h,\mathrm{geo}}^{2}+\sigma_{z,\mathrm{mod}}^{2}+\sigma_{\mathrm{grad}}^{2}}\ .$ (7)

Here, the standard deviation of the network adjustment is $\sigma_{g}$. The contribution of the determination of the height of the gravimeter is $\sigma_{h,\mathrm{geo}}$. The result of the Monte Carlo simulations of the vertical component of the geometric position of the central axis, $\sigma_{z,\mathrm{mod}}$, and the modelling of the gravity gradient, $\sigma_{\mathrm{grad}}$, are also attributed to the measurements. The standard deviation of the model consists of the following components:

$\sigma_{\mathrm{mod}}=\sqrt{\sigma_{\mathrm{MC}}^{2}+\sigma_{hz,\mathrm{mod}}^{2}}\ ,$ (8)

where $\sigma_{\mathrm{MC}}$ is the standard deviation of the Monte Carlo simulations of the model density, calculated at the heights of the gravimetric measurements, and $\sigma_{hz,\mathrm{mod}}$ is the standard deviation of the Monte Carlo simulations for the horizontal component of the geometric positions along the VLBAI main axis. $\sigma_{\mathrm{mod}}$ is shown in figure 8(c) with a range of $6\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $11\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ in the region of interest and about $8\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at $z_{\mathrm{eff}}=9.2\text{\,}\mathrm{m}$ (see section 3.3). Furthermore, a single parameter is estimated to reduce the gravity values from the magnitude of $9.81\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-2}$ to the order of magnitude of the model values for the attraction. This parameter is the mean difference of observed minus computed results at the locations of the observations in the region of interest. The measurements of 2017 are also corrected for the changes within the building with respect to 2019. No additional parameters were estimated to fit the measurements to the model or vice versa. The remaining signal should now contain the effect of the HITec building on gravity. In general, the 2017 measurements and the main axis model do not show good agreement (see also [51]) due to the instability of the scaffolding used as a platform (see also [18]). The agreement on the validation profile is better, and only the two topmost points do not agree with the model and simulation. These earlier measurements serve as a proof of concept and are given for the sake of completeness. The following discussion concerns only the 2019 measurements. The 2019 campaign provides a clear improvement considering the number of stations along the VLBAI main axis, the stability of the platforms in the VSS, and therefore the data quality. Consequently, the agreement between measurement and model is significantly improved. The measurement scheme on the validation profile remained unchanged compared to the 2017 campaign. Figure 8(c) shows the difference between the measurements and the model on the central axis. The region of interest for experiments in the VLBAI is approximately between $4\text{\,}\mathrm{m}$ and $13\text{\,}\mathrm{m}$ (see figure 3).
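Equations (7) and (8), together with the point-wise consistency test applied in the next paragraph, amount to a few lines of code. The numerical values in the example call are invented; `scipy.stats.norm.ppf` provides the standard normal quantile used as the two-tailed critical value.

```python
import numpy as np
from scipy.stats import norm

def sigma_obs(sig_g, sig_h_geo, sig_z_mod, sig_grad):
    # eq. (7): quadratic sum of the measurement-side components
    return np.sqrt(sig_g**2 + sig_h_geo**2 + sig_z_mod**2 + sig_grad**2)

def sigma_mod(sig_mc, sig_hz_mod):
    # eq. (8): quadratic sum of the model-side components
    return np.sqrt(sig_mc**2 + sig_hz_mod**2)

def consistent(dg_omc, s_obs, s_mod, alpha=0.05):
    # two-tailed test described below: reject equality of model and
    # measurement if t exceeds the (1 - alpha/2) normal quantile
    t = np.abs(dg_omc) / np.sqrt(s_obs**2 + s_mod**2)
    return t <= norm.ppf(1.0 - alpha / 2.0)

# Invented example values, all in nm s^-2:
print(consistent(25.0, sigma_obs(9.0, 3.0, 0.4, 15.0), sigma_mod(8.0, 0.3)))
```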
Within this region of interest, only the second-highest point lies outside the simulation's $\pm 5\,\%$ density variations. The two-tailed statistical test ($\alpha=0.05$) on the equality of model $\delta g_{\mathrm{mod},i}$ and measurement $\delta g_{\mathrm{obs},i}$ at point $i$, according to

Null hypothesis: $\delta g_{\mathrm{omc},i}=\delta g_{\mathrm{obs},i}-\delta g_{\mathrm{mod},i}=0$

Alternative hypothesis: $\delta g_{\mathrm{omc},i}\neq 0$

Test statistic: $t_{i}=\frac{\left|\delta g_{\mathrm{omc},i}\right|}{\sqrt{\sigma_{\mathrm{obs},i}^{2}+\sigma_{\mathrm{mod},i}^{2}}}$

passes for all but three points. The null hypothesis, considering the symmetry of the normal distribution, is rejected if $t_{i}>N_{(0,1,1-\nicefrac{{\alpha}}{{2}})}$. The test fails for the points at $z=1.72\text{\,}\mathrm{m}$, $5.55\text{\,}\mathrm{m}$, and $12.99\text{\,}\mathrm{m}$. The lowest point, at $z=1.72\text{\,}\mathrm{m}$, directly on the VTS, was challenging to measure, as the pump of the vacuum tank was active during the measurements, causing high-frequency vibrations. As this position is outside of the experimental region of interest, no additional measurements were taken. The cause of the significant deviation from the model at $z=12.99\text{\,}\mathrm{m}$, which was measured with only one gravimeter, is unknown. The height difference to the point above is only $0.16\text{\,}\mathrm{m}$ of free space, so a real gravity variation appears unlikely. Treating this point as an outlier, and repeating the test after calculating the offset between adjusted gravity values and model without this measurement, the test also passes for the point at $z=5.55\text{\,}\mathrm{m}$. All points on the validation profile pass the statistical test. The standard deviation of observations minus model is $20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ ($31\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ if the second-highest point is included) for the central axis in the region of interest and $34\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ on the validation profile. The densities of the different model components, chosen initially from technical documentation, are thus sufficient to generate a model which is consistent with in situ measurements at a $95\,\%$ confidence level. Modelling a $5\,\%$ normally distributed variation of these densities results in a narrow range of possible model variations, which covers almost all measurements used to verify the model. We expect that using individual densities for each floor, instead of one common density value for all concrete components in the building, would improve the agreement between model and observations on the validation profile. Such an extra modelling step should, however, be constrained so as not to deteriorate the model accuracy in the experimental region of interest. As a final step, the VLBAI magnetic shield and vacuum system [71] will be added to the model. Similarly to the VSS and VTS, this component was designed using CAD, built with known materials, and can be exported into the required format for our model. While the assembly is significantly more complex, we expect the octagonal symmetry of the magnetic shield to simplify the numerical calculations and allow us to reach the same level of accuracy in the gravity model as for the VSS and VTS.
It will, however, only be possible to check the quality of the extended model with measurements on the validation profile, as the main axis is obstructed by the instrument's vacuum chamber. Nevertheless, the understanding of environmental variations (mostly hydrology) outlined in section 4.4 will render this possible with good accuracy. Due to the work associated with the installation of the VLBAI baseline components, this last model extension and its corresponding validation have not been done yet. Extending our model with the VLBAI baseline components will allow us to connect gravimetric measurements along the validation profile and future data acquired by a VLBAI quantum gravimeter along its main axis in our adjusted gravimetric network. Since the measurement positions along the validation profile will remain free during operation of the VLBAI facility, this will for example enable comparisons of the VLBAI QG with FG5(X)-type classical AGs positioned in the VLBAI laboratories. In this specific setup, contributions of time variable gravity to the measurements are minimal for the VLBAI and the instrument under test. To further minimize the height dependency due to the groundwater effect, the atom interferometer could be realised with an effective height close to the instrumental height of the classical AG, e. g. with the AG on the ground floor. Taking into consideration the mean standard deviation of the relative gravimeter network of $9\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, we expect to be able to transfer g from the VLBAI baseline with an uncertainty of $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, and possibly below. Furthermore, creating a similar network including stations along the validation profile and in the HITec gravimetry laboratory would permit gravimetric comparisons between the VLBAI QG and instruments operated on the gravimetric piers. The estimates so far exclude the inevitable contribution of the VLBAI gravity measurement. The determination and validation of the VLBAI uncertainty budget will be published in a separate study.

## Conclusions

We established a gravimetric control network for the Hannover VLBAI facility, a novel $10\text{\,}\mathrm{m}$-scale atom interferometer. The network consists of $439$ connections measured by relative gravimeters. A least squares adjustment of the network results in a mean standard deviation of the adjusted gravity values of $9\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. In addition, we developed a structural model of the building hosting the VLBAI facility and its surroundings. When compared, the model and the measurements agree with $95\,\%$ confidence, with standard deviations of the residuals of $20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ along the atom interferometer's baseline, and $34\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ on a second, parallel profile. Moreover, we gained insight into some dynamical aspects of the gravity field around the instrument, namely the effect of groundwater level variations. We anticipate this gravimetric network to contribute to the assessment of the quantum gravimeter's uncertainty budget, which is currently not included in our study.
The current work is also essential to help determine the effective instrumental height (g-value reference position) and to enable transfers of g values from the atom interferometer's baseline to the validation profile, accessible to mobile gravimeters for comparison and possibly calibration purposes, at the $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ repeatability level (relative to the VLBAI deduced g-values). By completing the model with the VLBAI baseline, refining the description of the soil surrounding the host building, and including better estimates for the building material densities, we expect to shift the possibility for gravity field measurement transfers and mobile instrument calibration towards the $5\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level, improving the temporal stability of the current state of the art, which is still largely based on gravimeter comparisons. This paves the way for the realisation of a new gravity standard based on atom interferometry. Finally, the knowledge of the dynamical gravity field and its gradients is key to reaching new frontiers in fundamental physics tests with very long baseline atom interferometry.

###### Acknowledgements.

The Hannover Very Long Baseline Atom Interferometry facility is a major research equipment funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). This work was supported by the DFG Collaborative Research Center 1128 “geo-Q” (project A02, Contract Number 239994235) and is supported by the CRC 1227 “DQ-mat” (project B07, Contract Number 274200144), Germany's Excellence Strategy – EXC-2123 “QuantumFrontiers” – 390837967, and the computing cluster of the Leibniz University Hannover under the patronage of the Lower Saxony Ministry of Science and Culture (MWK) and the DFG. M. S., É. W., and C. S. acknowledge support from “Niedersächsisches Vorab” through the “Quantum- and Nano-Metrology (QUANOMET)” initiative (project QT3), and for initial funding of research in the DLR-SI institute. D. S. acknowledges funding from the German Federal Ministry of Education and Research (BMBF) through the funding program Photonics Research Germany (contract number 13N14875). The VLBAI support structure was conceived by the engineering office Heinz Berlin (Wennigsen, Germany) in collaboration with the VLBAI science team, and produced by Aljo Aluminium-Bau Jonuscheit GmbH (Berne, Germany). We thank W. Ertmer for his vision and long-lasting support of very long baseline atom interferometry and the acquisition of funding for the Hannover Institute of Technology. We are grateful to T. Froböse and A. Wanner for their assistance during the installation of the vacuum tank and support structure. We thank the three reviewers for their valuable input to improve this article.

###### Author contributions.

M.S., É.W., L.T. planned the geometric and gravimetric measurements, evaluated the data and prepared the initial draft. É.W., D.T., D.S., C.S., E.M.R. conceptualised the VSS and VTS. É.W., D.T., K.H.Z. designed and built the measurement platforms for the VSS. M.S., É.W., L.T., D.T., K.H.Z. carried out the measurements. M.S. developed and implemented the gravity model. D.T., K.H.Z., D.S., C.S., E.M.R., J.M. provided critical input to the manuscript and approved the final version.

###### Data availability statement.

Data of absolute gravimeter key comparisons are available in the Key Comparison Database (https://www.bipm.org/kcdb) and the cited literature.
Gravimetric measurements in instrument specific ascii data formats and datasets generated in this study are available from the corresponding author on reasonable request. ## References * [1] S. Abend, M. Gebbe, M. Gersemann, H. Ahlers, H. Müntinga, E. Giese, N. Gaaloul, C. Schubert, C. Lämmerzahl, W. Ertmer, W.. Schleich and E.. Rasel “Atom-chip fountain gravimeter” In _Phys. Rev. Lett._ 117.20 American Physical Society, 2016 DOI: 10.1103/PhysRevLett.117.203003 * [2] Xavier Alauze, Alexis Bonnin, Cyrille Solaro and F Pereira Dos Santos “A trapped ultracold atom force sensor with a $\mu$m-scale spatial resolution” In _New Journal of Physics_ 20.8 IOP Publishing, 2018 DOI: 10.1088/1367-2630/aad716 * [3] Rym Bouchendira, Pierre Cladé, Saı̈da Guellati-Khélifa, Francois Nez and Francois Biraben “New determination of the fine structure constant and test of the quantum electrodynamics” In _Phys. Rev. Lett._ 106.8 American Physical Society, 2011 DOI: 10.1103/PhysRevLett.106.080801 * [4] R. Caldani, K.. Weng, S. Merlet and F. Pereira Dos Santos “Simultaneous accurate determination of both gravity and its vertical gradient” In _Phys. Rev. A_ 99 American Physical Society, 2019 DOI: 10.1103/PhysRevA.99.033601 * [5] CCM-IAG “CCM - IAG strategy for metrology in absolute gravimetry - role of CCM and IAG” Last Update 2015-01-28, 2015 URL: http://www.bipm.org/wg/AllowedDocuments.jsp?wg=CCM-WGG * [6] J.-M. Chartier, J. Labot, G. Sasagawa, T.. Niebauer and W. Hollander “A portable iodine stabilized He-Ne laser and its use in an absolute gravimeter” In _IEEE Transactions on Instrumentation and Measurement_ 42.2 Institute of ElectricalElectronics Engineers (IEEE), 1993, pp. 420–422 DOI: 10.1109/19.278595 * [7] Giancarlo D’Agostino, S Merlet, A Landragin and F Pereira Dos Santos “Perturbations of the local gravity field due to mass distribution on precise measuring instruments: a numerical method applied to a cold atom gravimeter” In _Metrologia_ 48.5 IOP Publishing, 2011, pp. 299–305 DOI: 10.1088/0026-1394/48/5/009 * [8] F. Delahaye and T.. Witt “Linking the results of key comparison CCEM-K4 with the 10 pF results of EUROMET.EM-K4” In _Metrologia_ 39.1A IOP Publishing, 2002 DOI: 10.1088/0026-1394/39/1a/5 * [9] R. Falk, V. Pálinkáš, H. Wziontek, A. Rülke, M. Val’ko, Ch. Ullrich, H. Butta, J. Kostelecký, M. Bilker-Koivula, J. Näränen, A. Prato, F. Mazzoleni, C. Kirbaş, İ Coşkun, M. Van Camp, S. Castelein, J.. Bernard, A. Lothhammer, M. Schilling, L. Timmen, D. Iacovone, G. Nettis, F. Greco, A.. Messina, R. Reudink, M. Petrini, P. Dykowski, M. Sękowski, J. Janák, J. Papčo, A. Engfeldt and H. 
Steffen “Final report of EURAMET.M.G-K3 regional comparison of absolute gravimeters” In _Metrologia_ 57.1A, 2020 DOI: 10.1088/0026-1394/57/1A/07019 * [10] Olivier Francis, Henri Baumann, Tomas Volarik, Christian Rothleitner, Gilbert Klein, Marc Seil, Nicolas Dando, Ray Tracey, Christian Ullrich, Stefaan Castelein, Hu Hua, Wu Kang, Shen Chongyang, Xuan Songbo, Tan Hongbo, Li Zhengyuan, Vojtech Pálinkáš, Jakub Kostelecký, Jaakko Mäkinen, Jyri Näränen, Sébastien Merlet, Tristan Farah, Christine Guerlin, Franck Pereira Dos Santos, Nicolas Le Moigne, Cédric Champollion, Sabrina Deville, Ludger Timmen, Reinhard Falk, Herbert Wilmes, Domenico Iacovone, Francesco Baccaro, Alessandro Germak, Emanuele Biolcati, Jan Krynski, Marcin Sękowski, Tomasz Olszak, Andrzej Pachuta, Jonas Ågren, Andreas Engfeldt, René Reudink, Pedro Inacio, Daniel McLaughlin, Geoff Shannon, Marc Eckl, Tim Wilkins, Derek Westrum and Ryan Billson “The European Comparison of Absolute Gravimeters 2011 (ECAG-2011) in Walferdange, Luxembourg: results and recommendations” In _Metrologia_ 50.3 IOP Publishing, 2013, pp. 257–268 DOI: 10.1088/0026-1394/50/3/257 * [11] Olivier Francis, Henri Baumann, Christian Ullrich, Stefaan Castelein, Michel Van Camp, M. Andrade de Sousa, R.. Melhorato, C. Li, J. Xu, D. Su, S. Wu, H. Hu, K. Wu, G. Li, Z. Li, W.-C. Hsieh, V. Pálinkáš, J. Kostelecký, J. Mäkinen, J. Näränen, S. Merlet, F. Pereira Dos Santos, P. Gillot, J. Hinderer, J.-D. Bernard, N. Le Moigne, B. Fores, O. Gitlein, M. Schilling, R. Falk, H. Wilmes, A. Germak, E. Biolcati, C. Origlia, D. Iacovone, F. Baccaro, S. Mizushima, R. De Plaen, G. Klein, M. Seil, R. Radinovic, M. Sękowski, P. Dykowski, I.-M. Choi, M.-S. Kim, A. Borreguero, S. Sainz-Maza, M. Calvo, A. Engfeldt, J. Ågren, R. Reudink, M. Eckl, D. Westrum, R. Billson and B. Ellis “CCM.G-K2 key comparison” In _Metrologia_ 52.1A IOP Publishing, 2015 DOI: 10.1088/0026-1394/52/1a/07009 * [12] C Freier, M Hauth, V Schkolnik, B Leykauf, M Schilling, H Wziontek, H-G Scherneck, J Müller and A Peters “Mobile quantum gravity sensor with unprecedented stability” In _8th symposium on frequency standards and metrology 2015_ 723, Journal of Physics: Conference Series IOP Publishing, Bristol, 2016, pp. 012050 DOI: 10.1088/1742-6596/723/1/012050 * [13] C. Freier “Atom interferometry at geodetic observatories”, 2017 DOI: 10.18452/17795 * [14] “International symposium on Earth and environmental sciences for future generations” 147, International Association of Geodesy Symposia Springer, Cham, 2016 DOI: 10.1007/978-3-319-69170-1 * [15] P Gillot, O Francis, A Landragin, F. Pereira Dos Santos and S Merlet “Stability comparison of two absolute gravimeters: optical versus atomic interferometers” In _Metrologia_ 51.5 IOP Publishing, 2014, pp. L15–L17 DOI: 10.1088/0026-1394/51/5/l15 * [16] P. Gillot, B. Cheng, A. Imanaliev, S. Merlet and F. 
Pereira Dos Santos “The LNE-SYRTE cold atom gravimeter” In _Proceedings of the European Frequency and Time Forum (EFTF)_ IEEE, 2016 DOI: 10.1109/eftf.2016.7477832 * [17] Olga Gitlein “Absolutgravimetrische Bestimmung der Fennoskandischen Landhebung mit dem FG5-220”, 2009 * [18] Filippo Greco, Valerio Iafolla, Antonio Pistorio, Emiliano Fiorenza, Gilda Currenti, Rosalba Napoli, Alessandro Bonaccorso and Ciro Del Negro “Characterization of the response of spring-based relative gravimeters during paroxysmal eruptions at Etna volcano” In _Earth, Planets and Space_ 66.1, 2014 DOI: 10.1186/1880-5981-66-44 * [19] Kyle S Hardman “A BEC Based Precision Gravimeter and Magnetic Gradiometer: Design and Implementation” https://doi.org/10.25911/5d723b873573a, 2016 * [20] J. Hartwig, S. Abend, C. Schubert, D. Schlippert, H. Ahlers, K. Posso-Trujillo, N. Gaaloul, W. Ertmer and E.. Rasel “Testing the universality of free fall with rubidium and ytterbium in a very large baseline atom interferometer” In _New Journal of Physics_ 17.3 IOP Publishing, 2015 DOI: 10.1088/1367-2630/17/3/035011 * [21] Matt Jaffe, Philipp Haslinger, Victoria Xu, Paul Hamilton, Amol Upadhye, Benjamin Elder, Justin Khoury and Holger Müller “Testing sub-gravitational forces on atoms from a miniature in-vacuum source mass” In _Nature Physics_ 13.10 Nature Publishing Group, 2017, pp. 938–942 DOI: 10.1038/nphys4189 * [22] Z. Jiang, V. Pálinkáš, O. Francis, H. Baumann, J. Mäkinen, L. Vitushkin, S. Merlet, L. Tisserand, P. Jousset, C. Rothleitner, M. Becker, L. Robertsson and E.. Arias “On the gravimetric contribution to watt balance experiments” In _Metrologia_ 50.5 IOP Publishing, 2013, pp. 452–471 DOI: 10.1088/0026-1394/50/5/452 * [23] R. Karcher, A. Imanaliev, S. Merlet and F. Pereira Dos Santos “Improving the accuracy of atom interferometers with ultracold sources” In _New Journal of Physics_ 20.11 IOP Publishing, 2018 DOI: 10.1088/1367-2630/aaf07d * [24] Mark A. Kasevich and Steven Chu “Atomic interferometry using stimulated Raman transitions” In _Phys. Rev. Lett._ 67.2 American Physical Society (APS), 1991, pp. 181–184 DOI: 10.1103/physrevlett.67.181 * [25] Petr Křen, Vojtech Pálinkáš, Pavel Mašika and Miloš Vaľko “Effects of impedance mismatch and coaxial cable length on absolute gravimeters” In _Metrologia_ 54.2 IOP Publishing, 2017, pp. 161–170 DOI: 10.1088/1681-7575/aa5ba1 * [26] Petr Křen, Vojtech Pálinkáš, Pavel Mašika and Miloš Val’ko “FFT swept filtering: a bias-free method for processing fringe signals in absolute gravimeters” In _Journal of Geodesy_ 93.2, 2019, pp. 219–227 DOI: 10.1007/s00190-018-1154-y * [27] Xiong Li and Michel Chouteau “Three-dimensional gravity modeling in all space” In _Surveys in Geophysics_ 19.4 Springer, 1998, pp. 339–368 DOI: 10.1023/A:1006554408567 * [28] J.. Liard, C.. Sanchez, B.. Wood, A.. Inglis and R.. Silliker “Gravimetry for watt balance measurements” In _Metrologia_ 51.2 IOP Publishing, 2014, pp. S32–S41 DOI: 10.1088/0026-1394/51/2/S32 * [29] Christoph Lotz, Yvonne Wessarges, Jörg Hermsdorf, Wolfgang Ertmer and Ludger Overmeyer “Novel active driven drop tower facility for microgravity experiments investigating production technologies on the example of substrate-free additive manufacturing” In _Advances in Space Research_ 61.8, 2018, pp. 
1967–1974 DOI: 10.1016/j.asr.2018.01.010 * [30] Jaakko Mäkinen, Heikki Virtanen, Mirjam Bilker-Koivula, Hannu Ruotsalainen, Jyri Näränen and Arttu Raja-Halli “The effect of helium emissions by a superconducting gravimeter on the rubidium frequency standards of absolute gravimeters” In _Proceedings of the 3rd International Gravity Field Service (IGFS)_ 144, International Association of Geodesy Symposia Springer, Cham, 2015, pp. 45–51 DOI: 10.1007/1345_2015_205 * [31] Vincent Ménoret, Pierre Vermeulen, Nicolas Le Moigne, Sylvain Bonvalot, Philippe Bouyer, Arnaud Landragin and Bruno Desruelle “Gravity measurements below $10^{-9}$ g with a transportable absolute quantum gravimeter” In _Scientific Reports_ 8.1 Nature Publishing Group, 2018 DOI: 10.1038/s41598-018-30608-1 * [32] Sébastien Merlet, Alexander Kopaev, Michel Diament, Gérard Geneves, Arnaud Landragin and Franck Pereira Dos Santos “Micro-gravity investigations for the LNE watt balance project” In _Metrologia_ 45.3 IOP Publishing, 2008, pp. 265–274 DOI: 10.1088/0026-1394/45/3/002 * [33] T.. Niebauer, G.. Sasagawa, J.. Faller, R. Hilt and F. Klopping “A new generation of absolute gravimeters” In _Metrologia_ 32.3, 1995, pp. 159–180 DOI: 10.1088/0026-1394/32/3/004 * [34] T.. Niebauer, R. Billson, A. Schiel, D. Westrum and F. Klopping “The self-attraction correction for the FG5X absolute gravity meter” In _Metrologia_ 50.1 IOP Publishing, 2013, pp. 1–8 DOI: 10.1088/0026-1394/50/1/1 * [35] Per-Anders Olsson, Andreas Engfeldt and Jonas Ågren “Investigations of a suspected jump in Swedish repeated absolute gravity time series” In _International symposium on Earth and environmental sciences for future generations_ 147, International Association of Geodesy Symposia Springer, Cham, 2016, pp. 137–143 DOI: 10.1007/1345_2016_250 * [36] Per-Anders Olsson, Kristian Breili, Vegard Ophaug, Holger Steffen, Mirjam Bilker-Koivula, Emil Nielsen, Tõnis Oja and Ludger Timmen “Postglacial gravity change in Fennoscandia – three decades of repeated absolute gravity observations” In _Geophysical Journal International_ 217.2 Oxford University Press (OUP), 2019, pp. 1141–1156 DOI: 10.1093/gji/ggz054 * [37] V. Pálinkáš, O. Francis, M. Val’ko, J. Kostelecký, M. Van Camp, S. Castelein, M. Bilker-Koivula, J. Näränen, A. Lothhammer, R. Falk, M. Schilling, L. Timmen, D. Iacovone, F. Baccaro, A. Germak, E. Biolcati, C. Origlia, F. Greco, A. Pistorio, R. Plaen, G. Klein, M. Seil, R. Radinovic, R. Reudink, P. Dykowski, M. Sękowski, D. Próchniewicz, R. Szpunar, M. Mojzeš, J. Janák, J. Papčo, A. Engfeldt, P.. Olsson, V. Smith, D. Westrum, B. Ellis and B. Lucero “Regional comparison of absolute gravimeters, EURAMET.M.G-K2 key comparison” In _Metrologia_ 54.1A IOP Publishing, 2017 DOI: 10.1088/0026-1394/54/1a/07012 * [38] V. Pálinkáš, J. Liard and Z. Jiang “On the effective position of the free-fall solution and the self-attraction effect of the FG5 gravimeters” In _Metrologia_ 49.4 IOP Publishing, 2012, pp. 552–559 DOI: 10.1088/0026-1394/49/4/552 * [39] Richard H Parker, Chenghui Yu, Weicheng Zhong, Brian Estey and Holger Müller “Measurement of the fine-structure constant as a test of the Standard Model” In _Science_ 360.6385 American Association for the Advancement of Science, 2018, pp. 191–195 DOI: 10.1126/science.aap7706 * [40] A. Peters, K.Y. Chung and S. Chu “High-precision gravity measurements using atom interferometry” In _Metrologia_ 38 IOP Publishing, 2001, pp. 25–61 DOI: 10.1088/0026-1394/38/1/4 * [41] V. 
Pohánka “Optimum expression for computation of the gravity field of a homogeneous polyhedral body” In _Geophysical Prospecting_ 36.7 Wiley-Blackwell, 1988, pp. 733–751 DOI: 10.1111/j.1365-2478.1988.tb02190.x * [42] Sebastian M.. Raupach, Andreas Koczwara and Gesine Grosche “Brillouin amplification supports $1\times{}{10}^{-20}$ uncertainty in optical frequency transfer over 1400 km of underground fiber” In _Phys. Rev. A_ 92.2 American Physical Society, 2015 DOI: 10.1103/PhysRevA.92.021801 * [43] Fritz Riehle “Frequency standards” Wiley-VCH, Weinheim, 2004 * [44] Fritz Riehle, Patrick Gill, Felicitas Arias and Lennart Robertsson “The CIPM list of recommended frequency standard values: guidelines and procedures” In _Metrologia_ 55.2 IOP Publishing, 2018, pp. 188–200 DOI: 10.1088/1681-7575/aaa302 * [45] S. Rosat and J. Hinderer “Noise levels of superconducting gravimeters: updated comparison and time stability” In _Bulletin of the Seismological Society of America_ 101.3 Seismological Society of America (SSA), 2011, pp. 1233–1241 DOI: 10.1785/0120100217 * [46] S. Rosat, J. Hinderer, J.-P. Boy, F. Littel, Jean-Daniel Bernard, D. Boyer, A. Mémin, Y. Rogister and S. Gaffet “A two-year analysis of the iOSG-24 superconducting gravimeter at the low noise underground laboratory (LSBB URL) of Rustrel, France: Environmental noise estimate” In _Journal of Geodynamics_ 119 Elsevier BV, 2018, pp. 1–8 DOI: 10.1016/j.jog.2018.05.009 * [47] L. Roscoe “Stereolithography interface specification”, 1988 * [48] G Rosi, G D’Amico, L Cacciapuoti, F Sorrentino, M Prevedelli, M Zych, Č Brukner and GM Tino “Quantum test of the equivalence principle for atoms in coherent superposition of internal energy states” In _Nature Communications_ 8 Nature Publishing Group, 2017 DOI: 10.1038/ncomms15529 * [49] G Rosi, F Sorrentino, L Cacciapuoti, M Prevedelli and GM Tino “Precision measurement of the Newtonian gravitational constant using cold atoms” In _Nature_ 510.7506 Nature Publishing Group, 2014, pp. 518–521 DOI: 10.1038/nature13433 * [50] D. Savoie, M. Altorio, B. Fang, L.. Sidorenkov, R. Geiger and A. Landragin “Interleaved atom interferometry for high-sensitivity inertial measurements” In _Science Advances_ 4.12, 2018 DOI: 10.1126/sciadv.aau7948 * [51] Manuel Schilling “Kombination von klassischen Gravimetern mit Quantensensoren”, 2019 DOI: 10.15488/4710 * [52] Manuel Schilling and Olga Gitlein “Accuracy estimation of the IfE gravimeters Micro-g LaCoste gPhone-98 and ZLS Burris Gravity Meter B-64” In _IAG 150 years_ 143, International Association of Geodesy Symposia Springer, Cham, 2015, pp. 249–256 DOI: 10.1007/1345_2015_29 * [53] Manuel Schilling and Ludger Timmen “Traceability of the Hannover FG5X-220 to the SI units” In _International symposium on Earth and environmental sciences for future generations_ 147, International Association of Geodesy Symposia Springer, Cham, 2016, pp. 69–75 DOI: 10.1007/1345_2016_226 * [54] Manuel Schilling, Ludger Timmen and R. Kumme “The gravity field in force standard machines” In _Proceedings of the IMEKO TC3, TC5, TC22 Joint Conference_ , 2017 DOI: 10.15488/3073 * [55] D. Schlippert, J. Hartwig, H. Albers, L.. Richardson, C. Schubert, A. Roura, W.. Schleich, W. Ertmer and E.. Rasel “Quantum test of the universality of free fall” In _Phys. Rev. Lett._ 112.20 American Physical Society, 2014 DOI: 10.1103/PhysRevLett.112.203002 * [56] D. Schlippert, C. Meiners, R.. Rengelink, C. Schubert, D. Tell, É. Wodey, K.. Zipfel, W. Ertmer and E.. 
Rasel “Matter wave interferometry for inertial sensing and tests of fundamental physics” In _Proceedings of the 8th meeting on CPT and Lorentz symmetry_ , 2020, pp. 37–40 DOI: 10.1142/9789811213984_0010 * [57] F. Sorrentino, Q. Bodart, L. Cacciapuoti, Y.-H. Lien, M. Prevedelli, G. Rosi, L. Salvi and G.. Tino “Sensitivity limits of a Raman atom interferometer as a gravity gradiometer” In _Phys. Rev. A_ 89.2 American Physical Society, 2014 DOI: 10.1103/PhysRevA.89.023607 * [58] L Timmen and H-G Wenzel “Improved gravimetric Earth tide parameters for station Hannover” In _Bulletin d’Information des Marées Terrestres_ 119, 1994, pp. 8834–8846 * [59] L. Timmen, Ch. Rothleitner, M. Reich, S. Schröder and M. Cieslack “Investigation of Scintrex CG-6 gravimeters in the Gravity Meter Calibration System Hannover” In _avn - Allgemeine Vermessungs Nachrichten_ 127.4, 2020, pp. 155–162 * [60] Ludger Timmen “Precise definition of the effective measurement height of free-fall absolute gravimeters” In _Metrologia_ 40.2 IOP Publishing, 2003, pp. 62–65 DOI: 10.1088/0026-1394/40/2/310 * [61] Ludger Timmen, Reinhard Falk, Gerald Gabriel, Alexander Lothhammer, Manuel Schilling and Detlef Vogel “Das Relativgravimeter-Kalibriersystem Hannover für $10^{-4}$-Maßstabsbestimmungen (The Relative Gravimeter Calibration System Hannover for $10^{-4}$ Scale Determination)” In _avn - Allgemeine Vermessungs Nachrichten_ 125.5, 2018, pp. 140–150 * [62] Ludger Timmen and Olga Gitlein “The capacity of the Scintrex Autograv CG-3M no. 4492 gravimeter for “absolute-scale” surveys” In _Revista Brasileira de Cartografia_ 2.56, 2004, pp. 89–95 * [63] Ludger Timmen, Olga Gitlein, Volker Klemann and Detlef Wolf “Observing gravity change in the Fennoscandian uplift area with the Hanover absolute gravimeter” In _Pure and Applied Geophysics_ 169.8 Springer ScienceBusiness Media LLC, 2011, pp. 1331–1342 DOI: 10.1007/s00024-011-0397-9 * [64] Ludger Timmen, Olga Gitlein, J. Müller, G. Strykowski and R. Forsberg “Absolute gravimetry with the Hannover meters JILAg-3 and FG5-220, and their deployment in a Danish-German cooperation” In _zfv – Zeitschrift für Geodäsie, Geoinformation und Landmanagement_ 133.3, 2008, pp. 149–163 * [65] W. Torge and Jürgen Müller “Geodesy” Walter de Gruyter, Berlin/Boston, 2012 * [66] C. Ufrecht and E. Giese “Perturbative operator approach to high-precision light-pulse atom interferometry” In _Phys. Rev. A_ 101 American Physical Society, 2020 DOI: 10.1103/PhysRevA.101.053615 * [67] M. Van Camp, O. Viron, A. Watlet, B. Meurers, O. Francis and C. Caudron “Geophysics from terrestrial time-variable gravity measurements” In _Reviews of Geophysics_ 55.4, 2017, pp. 938–992 DOI: 10.1002/2017rg000566 * [68] L.. Vitushkin “Measurement standards in gravimetry” In _Gyroscopy and Navigation_ 2.3 Pleiades Publishing Ltd, 2011, pp. 184–191 DOI: 10.1134/s2075108711030126 * [69] A. Wanner, G. Bergmann, A. Bertolini, T. Fricke, H. Lück, C.. Mow-Lowry, K.. Strain, S. Gossler and K. Danzmann “Seismic attenuation system for the AEI 10 meter Prototype” In _Classical and Quantum Gravity_ 29.24 IOP Publishing, 2012 DOI: 10.1088/0264-9381/29/24/245007 * [70] Hans-Georg Wenzel “Schwerenetze” In _Geodätische Netze in der Landes-und Ingenieurvermessung II: Vorträge des Kontaktstudiums Februar 1985 in Hannover_ Konrad Wittwer, Stuttgart, 1985, pp. 457–486 * [71] É. Wodey, D. Tell, E.. Rasel, D. Schlippert, R. Baur, U. Kissling, B. Kölliker, M. Lorenz, M. Marrer, U. Schläpfer, M. Widmer, C. Ufrecht, S. Stuiber and P. 
Fierlinger “A scalable high-performance magnetic shield for Very Long Baseline Atom Interferometry” In _Review of Scientific Instruments_ 91, 2020 DOI: 10.1063/1.5141340 * [72] M.-K. Zhou, Z.-K. Hu, X.-C. Duan, B.-L. Sun, L.-L. Chen, Q.-Z. Zhang and J. Luo “Performance of a cold-atom gravimeter with an active vibration isolator” In _Phys. Rev. A_ 86.4 American Physical Society, 2012 DOI: 10.1103/PhysRevA.86.043630
# Calculating Fluctuations and Self-Correlations Numerically for Causal Charge Diffusion in Relativistic Heavy-Ion Collisions

Aritra De School of Physics & Astronomy, University of Minnesota, Minneapolis, MN 55455, USA Christopher Plumberg Department of Astronomy and Theoretical Physics, Lund University, Sölvegatan 14A, SE-223 62 Lund, Sweden Joseph I. Kapusta School of Physics & Astronomy, University of Minnesota, Minneapolis, MN 55455, USA

###### Abstract

We study the propagation and diffusion of electric charge fluctuations in the Bjorken hydrodynamic model with both white and Cattaneo noise using purely numerical methods. We show that a global lattice of noise fluctuations is required to fully calculate the two-point correlators of charge. We solve the stochastic differential equations that arise from the charge conservation equation on the lattice. We explicitly identify the self-correlation term in the case of Cattaneo noise and provide a physical interpretation. We provide a numerical recipe to remove this contribution from the full two-point correlators. Finally, we calculate the balance functions for charged hadrons. By limiting the speed of signal propagation, we observe the expected narrowing of the balance functions after removing the self-correlations.

## I Introduction

Relativistic hydrodynamics is used to study not only the equation of state but also dynamical quantities, such as the transport coefficients, of the quark-gluon plasma. The applicability of hydrodynamics is justified if the mean free paths of the particles are small compared to the distances over which thermodynamic quantities vary. It turns out that hydrodynamics is very successful in modeling high energy nuclear collisions. There are experimental facilities which produce and study quark-gluon plasma: the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN. Fluid equations describe conservation of energy, momentum, baryon number, electric charge, and strangeness. Anisotropic particle production, such as elliptic flow, in heavy ion collisions gives credence to the use of hydrodynamics in simulating these collisions. It has been successful in describing various properties like particle spectra and particle correlations, and in obtaining values of transport quantities like the ratio of shear viscosity to entropy density $\eta/s$. By comparing particle spectra with experimental data, hydrodynamical simulations also help in understanding the initial state, its fluctuations, and hence properties of strongly interacting matter in general. Initially, the assumption of ideal hydrodynamics worked very well in describing the data, which indicated that the system was strongly interacting. Later, important aspects like the lattice-computed QCD equation of state and viscous properties were taken into account to study transport properties of quark-gluon plasma with more precision. The fluctuation-dissipation theorem relates the dissipative properties of a system to its hydrodynamical fluctuations. In particular, it allows us to infer quantities like shear and bulk viscosity, and electrical conductivity, from the magnitude of fluctuations. Hydrodynamical fluctuations have also been used to study static critical phenomena [torres] near a possible critical point. More recently, there have been studies of dynamic critical phenomena near a QCD critical point [teaney, yiyin, nahrgang, Bluhm]. Critical points are characterized by large fluctuations.
This led to the suggestion to study fluctuations in conserved quantities, such as electric charge, baryon number, and strangeness, on an event-by-event basis [stephanov_shuryak]. It has also been suggested to study non-gaussianities (higher order cumulants) of the fluctuations near critical points, as they are more sensitive to large correlation lengths [stephanovprl]. Correlation lengths theoretically diverge near critical points but, in the scenario of heavy ion collisions, are limited by the finite system size [stephanov_shuryak] as well as by the finite lifetime of the system [berdnikov]. Thus it becomes imperative to study the hydrodynamics of fluctuations in the context of heavy ion collisions. The relativistic theory of hydrodynamic fluctuations in the context of heavy-ion collisions was introduced in Ref. [kapusta2012]. In the current work, we focus on calculating the two-point correlations of charge fluctuations and the resulting balance functions of pions. The balance function measures the difference in probability of finding a particle of opposite charge in another fluid cell versus a particle of the same charge, given a charged particle in a given fluid cell [spratt]. This problem was studied analytically in the 1+1 dimensional Bjorken model in Ref. [plumbergkapusta]. Analytic calculations are not possible for state of the art 3+1 dimensional, non-boost-invariant hydrodynamics. In preparation for extensions to modern hydrodynamic models, we develop methods to solve the relevant stochastic differential equations numerically. Particular attention is paid to the physical interpretation of self-correlations and how they can be subtracted to make comparison to experimental data. The outline of the paper is as follows. In Sec. II we review the normal diffusion, the Cattaneo, and the Gurtin-Pipkin equations, and discuss how self-correlations arise. In Sec. III we outline the application of the relevant stochastic differential equations in the context of the Bjorken hydrodynamic model for heavy-ion collisions. In Sec. IV we present solutions to those equations. In Sec. V we show how self-correlations can be clearly identified. In Sec. VI we calculate the balance functions that relate theory to experiment. Our conclusions are presented in Sec. VII. Details of how the stochastic differential equations are solved are presented in the Appendices. The numerical method is readily transferrable to heavy-ion collisions which have no spatial symmetries and is thus useful for future calculations.

## II Noise, Fluctuations and Self-Correlations

Since the usual diffusion equation leads to instantaneous signal propagation, which is inconsistent with special relativity, one needs a diffusion equation which is of the same order in spatial and temporal derivatives, with characteristic relaxation times and lengths. In this paper we numerically solve the simplest diffusion equation satisfying this condition, called the Cattaneo equation [catteneo]. The resulting differential equation is a stochastic differential equation (SDE) because it contains random noise terms. The way to solve SDEs is to solve the differential equation for a large number of events (here on the order of 1 million or more) and study the correlation functions. A finite difference method is used to solve the SDE. Consider the ordinary diffusion equation with white noise.
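As a warm-up, and to fix the discretization ideas used below, here is a minimal sketch (not the paper's full Bjorken setup) of an explicit finite-difference scheme for ordinary diffusion in one Cartesian dimension with a white noise current. The noise strength $\kappa$ and all lattice parameters are illustrative. On the lattice, the Dirac delta functions in the noise correlator become $1/(\Delta x\,\Delta t)$, so the noise is sampled with variance $\kappa/(\Delta x\,\Delta t)$.

```python
import numpy as np

# dn/dt = D d2n/dx2 - df/dx with <f(x,t) f(x',t')> = kappa delta(x-x') delta(t-t').
# The noise enters through the current, so total charge is conserved exactly
# on a periodic lattice. kappa, D and the grid are illustrative choices.
D, kappa = 1.0, 1.0
nx, dx = 200, 0.1
dt = 0.2 * dx * dx / D            # respects the explicit limit dt <= dx^2/(2D)
rng = np.random.default_rng(0)

n = np.zeros(nx)                  # charge density fluctuation per cell
for _ in range(4000):
    f = rng.normal(0.0, np.sqrt(kappa / (dx * dt)), nx)   # white noise current
    lap = (np.roll(n, 1) - 2.0 * n + np.roll(n, -1)) / dx**2
    divf = (np.roll(f, -1) - f) / dx
    n += dt * (D * lap - divf)
# total charge ~ 0 exactly; cell variance approaches ~ kappa/(2 D dx)
print(n.sum() * dx, n.std())
```

In the Bjorken setting described next, $x\to\xi$, $t\to\tau$, and the constant $\kappa$ is replaced by a time-dependent normalization fixed by the fluctuation-dissipation theorem.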
In the context of the Bjorken model, which has boost invariance and no dependence on transverse coordinates, the two variables are proper time $\tau=\sqrt{t^{2}-z^{2}}$ and spatial rapidity $\xi={\textstyle{\frac{1}{2}}}\ln[(t+z)/(t-z)]$, where the beam axis is along the $z$ direction. The noise $f$, appropriately defined (see below), is a dimensionless random variable with correlator

$\langle f(\tau_{1},\xi_{1})f(\tau_{2},\xi_{2})\rangle=\frac{N(\tau_{2})}{2\pi}\delta(\tau_{1}-\tau_{2})\delta(\xi_{1}-\xi_{2})\,,$ (1)

which is a product of Dirac $\delta$-functions in time and space with a normalization determined by the fluctuation-dissipation theorem,

$N(\tau)=\frac{4\pi\sigma_{Q}(\tau)T(\tau)}{A\tau s^{2}(\tau)}\,.$ (2)

Here $\sigma_{Q}$ is the charge conductivity, $T$ is the temperature, $s$ is the entropy density, and $A$ is the transverse area. To generate this numerically on a discrete lattice with spacings $\Delta\xi$ and $\Delta\tau$, we sample $f$ from a normal distribution with zero mean and variance $N(\tau)/(2\pi\Delta\xi\Delta\tau)$. The analysis of how finite difference methods work computationally for solving SDEs is discussed in Appendix A. Consider the difference between white and colored noise. The standard two-point function for white noise in frequency and momentum space is

$\langle\tilde{f}(\omega_{1},k_{1})\tilde{f}(\omega_{2},k_{2})\rangle=\int d\tau_{1}\,d\tau_{2}\,d\xi_{1}\,d\xi_{2}\,e^{-i(k_{1}\xi_{1}+k_{2}\xi_{2})}\,e^{-i(\omega_{1}\tau_{1}+\omega_{2}\tau_{2})}\langle f(\tau_{1},\xi_{1})f(\tau_{2},\xi_{2})\rangle=\delta(k_{1}+k_{2})\tilde{N}(\omega_{1}+\omega_{2})$ (3)

where $\tilde{N}$ is the Fourier transform of $N$. Generalizing this to Cattaneo noise (an example of colored noise), we recall that the two-point function for the noise obeys [kapustayoung]

$\langle(\tau_{Q}\,\partial_{\tau_{1}}+1)\tilde{f}(k_{1},\tau_{1})(\tau_{Q}\,\partial_{\tau_{2}}+1)\tilde{f}(k_{2},\tau_{2})\rangle=N(\tau_{1})\delta(\tau_{1}-\tau_{2})\delta(k_{1}+k_{2})\,,$ (4)

where $\tau_{Q}$ is a relaxation time. In frequency and momentum space this becomes

$\langle\tilde{f}(\omega_{1},k_{1})\tilde{f}(\omega_{2},k_{2})\rangle=\frac{\delta(k_{1}+k_{2})\tilde{N}(\omega_{1}+\omega_{2})}{(i\tau_{Q}\omega_{1}+1)(i\tau_{Q}\omega_{2}+1)}\,.$ (5)

The noise correlator is no longer a Dirac $\delta$-function in time; instead, it is smeared out, hence the name colored noise. The following three figures help illustrate some of the physics to come. Figure 1 shows a fluctuation, represented by a star, in a particular spacetime cell. The signal, represented by bursts, is transmitted to the two adjacent spatial cells in the next time step. Hence those two cells have correlated fluctuations. This type of correlation can arise from either white or colored noise. Figure 2 shows a fluctuation in one spacetime cell with its signal transmitted to two spacetime cells two time steps later. This type of correlation can also happen with either white or colored noise. Figure 3 shows a situation that only happens with colored noise. The two stars are correlated, and their signals lead to correlations between the same two cells as shown in the previous figures. Self-correlations arise from correlations in the same spatial cell. For white noise this means the star and the burst are in the same spacetime cell.
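The smearing in Eq. (5) is straightforward to generate numerically: per spatial cell, noise obeying $(\tau_{Q}\partial_{\tau}+1)f=\zeta$ with white $\zeta$ is an Ornstein-Uhlenbeck-type process whose correlator decays like $e^{-|\tau_{1}-\tau_{2}|/\tau_{Q}}$. The sketch below, with an illustrative unit normalization in place of $N(\tau)/(2\pi\Delta\xi)$ and invented lattice parameters, verifies this decay by averaging over many events.

```python
import numpy as np

# One spatial cell: integrate tau_Q df/dtau + f = zeta for many events, with
# <zeta(t) zeta(t')> = delta(t-t') (unit normalization for illustration), and
# verify the exponential correlator <f(t1) f(t2)> ~ exp(-|t1-t2|/tau_Q).
tau_q, dt = 0.5, 0.01
nev, nburn, nlag = 100_000, 1000, 100
rng = np.random.default_rng(1)

f = np.zeros(nev)
for _ in range(nburn):                    # relax to the stationary state
    zeta = rng.normal(0.0, np.sqrt(1.0 / dt), nev)
    f += (dt / tau_q) * (zeta - f)
f0, corr = f.copy(), []
for _ in range(nlag):
    zeta = rng.normal(0.0, np.sqrt(1.0 / dt), nev)
    f += (dt / tau_q) * (zeta - f)
    corr.append(np.mean(f0 * f))
var = np.mean(f0 * f0)                    # stationary variance ~ 1/(2 tau_Q)
print(var, corr[-1] / var, np.exp(-nlag * dt / tau_q))  # last two should agree
```

Propagating such time-correlated sources through the diffusion dynamics on the full lattice is what produces the nontrivial self-correlations discussed here.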
In discretized spacetime this leads to a Kronecker $\delta$-function in $\xi$, while in the continuum limit it leads to a Dirac $\delta$-function. The latter is somewhat unphysical, since all correlations have some finite extent. For colored noise, the self-correlation begins in the cell hosting the original fluctuation and then continues in subsequent time steps, but always in the same spatial cell, due to the time-correlated nature of colored noise. Noise generated at a previous time in the same spatial cell will hydrodynamically evolve into a correlated charge fluctuation in a different spatial fluid cell. Hence the self-correlation will be non-trivial for colored noise.

Figure 1: An example of either white or colored noise. A fluctuation in one cell, represented by a star, causes a correlation between two cells in the next time step, represented by bursts, separated in space from each other and from the original fluctuation. (color online)

Figure 2: An example of either white or colored noise. A fluctuation in one cell, represented by a star, causes a correlation between two cells two time steps later, represented by bursts, but only one is separated in space from the original fluctuation. (color online)

Figure 3: An example of colored noise. Fluctuations at the same point in space but at different times are correlated, as represented by the stars. This results in a correlation between the two cells, represented by bursts. (color online)

Figure 4 shows another way to visualize the colored Cattaneo noise. At a fixed spatial cell, correlations arise at different times due to $\tau_{Q}>0$. Correlations also propagate to other spatial cells with increasing time via a Green's function. The mathematical formalism and the details of how it is implemented numerically will be presented in the following sections.

Figure 4: Schematic of the lattice setup for Cattaneo noise. The final charge correlations are determined at some $\tau_{f}$. One must integrate over all prior times $\tau_{i}\leq\tau\leq\tau_{f}$ to obtain the final-time charge correlators.

Thus one can define self-correlations as the correlation of a charge fluctuation generated at $\xi_{1}$ at the final time $\tau_{f}$ with another charge fluctuation generated at the same $\xi_{1}$ but at an earlier time, which has hence had time to travel to a different $\xi_{2}$ by $\tau_{f}$. This is non-trivial for colored noise because colored noise generated at the same $\xi$ is correlated in time. One can go further and consider the Gurtin-Pipkin noise gurtin , which introduces a noise correlation in spatial rapidity in addition to the correlation in proper time. Gurtin-Pipkin noise has been dealt with analytically in Ref. kapustayoung . In Cartesian coordinates Gurtin-Pipkin noise results in the following diffusion equation

$\left[\frac{\partial}{\partial t}-D_{Q}\nabla^{2}+\tau_{Q}\frac{\partial^{2}}{\partial t^{2}}+\tau_{2}^{2}\frac{\partial^{3}}{\partial t^{3}}-\tau_{3}D_{Q}\frac{\partial}{\partial t}\nabla^{2}\right]n_{Q}=0\,.$ (6)

Numerical simulation of Gurtin-Pipkin noise is deferred to future work.

## III Diffusion in Boost Invariant 1+1 Hydrodynamics

This section is a mini-review of the problem addressed previously in Ref. plumbergkapusta , to help set up the use of numerical methods for solving the resulting SDE. We will work in 1+1 dimensional boost-invariant Bjorken hydrodynamics. The longitudinal boost-invariance implies that the initial conditions for local variables are functions of the proper time $\tau$ only.
We neglect the bulk and shear viscosities in order to focus on charge transport. The energy-momentum tensor for an ideal fluid is

$T^{\mu\nu}=wu^{\mu}u^{\nu}-pg^{\mu\nu}\,.$ (7)

We take the Landau-Lifshitz approach, where $u^{\mu}$ is the velocity of energy transport. The electric current takes the form

$J_{Q}^{\mu}=n_{Q}u^{\mu}+\Delta J^{\mu}$ (8)

where $n_{Q}$ is the proper charge density and $\Delta J^{\mu}$ is the dissipative part. In first-order viscous fluid dynamics $\Delta J^{\mu}$ takes the form

$\Delta J^{\mu}=D_{Q}\Delta^{\mu}n_{Q}=\sigma_{Q}\Delta^{\mu}\mu_{Q}$ (9)

where $\mu_{Q}$ is the charge chemical potential, $\sigma_{Q}$ is the charge conductivity, and $\Delta^{\mu}$ is the transverse derivative

$\Delta^{\mu}=\partial^{\mu}-u^{\mu}(u\cdot\partial)\,.$ (10)

Conventional charge diffusion follows the usual diffusion equation

$\left(\frac{\partial}{\partial t}-D_{Q}\nabla^{2}\right)n_{Q}=0\,.$ (11)

The diffusion constant $D_{Q}$ and the charge conductivity are related by the Einstein relation $D_{Q}=\sigma_{Q}/\chi_{Q}$, where $\chi_{Q}$ is the electric charge susceptibility defined by

$\chi_{Q}=\frac{\partial n_{Q}(T,\mu_{Q})}{\partial\mu_{Q}}\,.$ (12)

The diffusion equation leads to an infinite speed of propagation, which is unphysical and not suitable for hydrodynamic simulations of heavy-ion collisions. Therefore the usual diffusion equation is replaced by one with a second derivative in time and a relaxation time $\tau_{Q}$:

$\left(\frac{\partial}{\partial t}-D_{Q}\nabla^{2}+\tau_{Q}\frac{\partial^{2}}{\partial t^{2}}\right)n_{Q}=0\,.$ (13)

This equation is called the Cattaneo equation catteneo . It is a combination of the diffusion equation and the wave equation. The dissipative current gets modified to

$\Delta J^{\mu}=D_{Q}\Delta^{\mu}\left[\frac{1}{1+\tau_{Q}(u\cdot\partial)}\right]n_{Q}\,.$ (14)

One can show that high frequency waves travel at a speed $v_{Q}=\sqrt{D_{Q}/\tau_{Q}}$ kapustayoung . The fluctuation-dissipation theorem relates the two-point function, which provides a measure of the variance of fluctuations, to the dissipation from diffusion. A stochastic noise term $I^{\mu}$ is therefore added to the charge current:

$J^{\mu}=n_{Q}u^{\mu}+\Delta J^{\mu}+I^{\mu}\,.$ (15)

One-point functions vanish and the two-point functions are determined by the fluctuation-dissipation theorem. For the usual diffusion equation

$\langle I^{\mu}(x)\rangle=0\qquad\langle I^{\mu}(x_{1})I^{\nu}(x_{2})\rangle=2\sigma_{Q}T\,h^{\mu\nu}\delta(x_{1}-x_{2})$ (16)

where $h^{\mu\nu}=u^{\mu}u^{\nu}-g^{\mu\nu}$ is the transverse projector. This is white noise. For the Cattaneo equation the fluctuations are

$\langle I^{i}(x_{1})I^{j}(x_{2})\rangle=\frac{\sigma_{Q}T}{\tau_{Q}}\delta(\mbox{\boldmath$x$}_{1}-\mbox{\boldmath$x$}_{2})\,e^{-|t_{1}-t_{2}|/\tau_{Q}}\,\delta_{ij}\,.$ (17)

The Dirac $\delta$-function in time is replaced by an exponential decay function. In the limit $\tau_{Q}\rightarrow 0$ this two-point function becomes the Dirac $\delta$-function for white noise. The following are the relations between the Cartesian coordinates and the proper time and spatial rapidity appropriate for the Bjorken model.
$t=\tau\cosh\xi\,,\qquad z=\tau\sinh\xi\,,\qquad\tau=\sqrt{t^{2}-z^{2}}\,,\qquad\xi=\tanh^{-1}\left(\frac{z}{t}\right)$ (18)

The flow velocity is

$u^{0}=\cosh\xi\,,\quad u^{z}=\sinh\xi\,.$ (19)

The transverse derivatives are

$\Delta^{0}=-\frac{\sinh\xi}{\tau}\frac{\partial}{\partial\xi}\,,\qquad\Delta^{3}=-\frac{\cosh\xi}{\tau}\frac{\partial}{\partial\xi}\,,\quad\text{with}\quad u\cdot\partial=\frac{\partial}{\partial\tau}\,.$ (20)

The fluctuating contribution to the current is written as

$\displaystyle I^{0}$ $\displaystyle=$ $\displaystyle s(\tau)f(\xi,\tau)\sinh\xi$ (21) $\displaystyle I^{3}$ $\displaystyle=$ $\displaystyle s(\tau)f(\xi,\tau)\cosh\xi\,.$ (22)

The entropy density $s$ is factored out to make $f$ dimensionless. The background fluid equations for the proper charge density and entropy density are

$\displaystyle\frac{ds}{d\tau}+\frac{s}{\tau}=0\;\;$ $\displaystyle\Rightarrow$ $\displaystyle\;\;s(\tau)=\frac{s_{i}\tau_{i}}{\tau}$ (23) $\displaystyle\frac{dn_{Q}}{d\tau}+\frac{n_{Q}}{\tau}=0\;\;$ $\displaystyle\Rightarrow$ $\displaystyle\;\;n_{Q}(\tau)=\frac{n_{i}\tau_{i}}{\tau}\,.$ (24)

These are a manifestation of the conservation of entropy and charge, respectively. The $s_{i}$ and $n_{i}$ are the densities at some initial time $\tau_{i}$. We take the initial proper charge density $n_{i}$ to be zero; hence the average charge density at subsequent times is zero as well. Now let us look at the charge current conservation equation $\partial_{\mu}J^{\mu}=0$. It is convenient to define the variable $X=\tau\delta n$ because, in the absence of fluctuations, this quantity is conserved during the hydrodynamic evolution. After a few steps of algebra the full charge conservation equation becomes

$\left[\frac{\tau}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau}{D_{Q}\chi_{Q}T}\right)\right]\frac{\partial X}{\partial\tau}+\frac{\tau_{Q}\tau}{D_{Q}\chi_{Q}T}\frac{\partial^{2}X}{\partial\tau^{2}}-\frac{1}{\tau\chi_{Q}T}\frac{\partial^{2}X}{\partial\xi^{2}}$ $+\left[\frac{\tau s}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau s}{D_{Q}\chi_{Q}T}\right)\right]\frac{\partial f}{\partial\xi}+\frac{\tau_{Q}\tau s}{D_{Q}\chi_{Q}T}\frac{\partial^{2}f}{\partial\xi\partial\tau}=0\,.$ (25)

For the case $\tau_{Q}=0$ (the usual diffusion equation) this simplifies to

$\frac{\partial X}{\partial\tau}-\frac{D_{Q}}{\tau^{2}}\frac{\partial^{2}X}{\partial\xi^{2}}+s\frac{\partial f}{\partial\xi}=0\,.$ (26)

Due to boost invariance it is useful to use the Fourier transform

$X(\xi,\tau)=\int_{-\infty}^{\infty}\frac{dk}{2\pi}e^{ik\xi}\tilde{X}(k,\tau)\,,$ (27)

and similarly for $f$. Then the SDE for white noise is

$\frac{\partial}{\partial\tau}\tilde{X}+\frac{D_{Q}k^{2}}{\tau^{2}}\tilde{X}=-iks\tilde{f}$ (28)

and for colored Cattaneo noise

$\left[\frac{\tau}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau}{D_{Q}\chi_{Q}T}\right)\right]\frac{\partial\tilde{X}}{\partial\tau}+\frac{\tau_{Q}\tau}{D_{Q}\chi_{Q}T}\frac{\partial^{2}\tilde{X}}{\partial\tau^{2}}+\frac{k^{2}}{\tau\chi_{Q}T}\tilde{X}$ $=-ik\left[\frac{\tau s}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau s}{D_{Q}\chi_{Q}T}\right)\right]\tilde{f}-i\frac{k\tau_{Q}\tau s}{D_{Q}\chi_{Q}T}\frac{\partial\tilde{f}}{\partial\tau}\,.$ (29)

For the sake of comparison and for definiteness, we follow Ref. plumbergkapusta and assume both $D_{Q}$ and $\tau_{Q}$ are constant within the range of temperature to be considered. This means that high frequency waves propagate with a constant value of $v_{Q}$. For the same reasons we assume that $s\sim T^{3}$ and $\chi\sim T^{2}$. Hence $T\sim\tau^{-1/3}$.
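For orientation, the background quantities entering these SDEs can be tabulated once per time step. The following is a minimal sketch of ours; the values of $s_{i}$ and $A$ are placeholders and the units are left schematic (no conversion between MeV and fm is attempted):

```python
import numpy as np

tau_i, T_i = 0.5, 350.0   # fm/c, MeV (initial time and temperature)
s_i, A = 1.0, 1.0         # initial entropy density and transverse area (placeholders)
D_Q = 0.162               # fm, charge diffusion constant

def background(tau):
    """Bjorken background entering the noise normalization, Eq. (2)."""
    T = T_i * (tau_i / tau) ** (1.0 / 3.0)          # T ~ tau^(-1/3)
    s = s_i * tau_i / tau                            # entropy conservation, Eq. (23)
    chi = (2.0 / 3.0) * T ** 2                       # susceptibility used in Sec. IV
    sigma = D_Q * chi                                # Einstein relation, sigma_Q = D_Q * chi_Q
    N = 4.0 * np.pi * sigma * T / (A * tau * s ** 2)  # normalization, Eq. (2)
    return T, s, chi, N
```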
## IV Solving the Stochastic Differential Equations

We start by solving the stochastic differential equation for white noise. As explained earlier, we solve it on a spacetime lattice, choosing spacings $\Delta\xi=0.09$ and $\Delta\tau=10^{-4}$ fm/c. We set the parameters such that $(\chi(\tau_{f})T_{f})/(\tau_{f}\Delta\xi)=0.5122$ MeV$^{3}$ fm$^{-3}$. We source the noise function $f$ from a normal distribution with mean $0$ and standard deviation $1/\sqrt{\Delta\tau\Delta\xi}$. The density-density correlator arising from the noise fluctuation, which is a solution to the SDE in our discretized system, evaluated at the final time $\tau_{f}$, has the analytical form

$\langle\delta n(\xi_{1},\tau_{f})\delta n(\xi_{2},\tau_{f})\rangle=\frac{\chi_{Q}(\tau_{f})T_{f}}{A\tau_{f}}\left[\frac{\delta_{\xi_{1},\xi_{2}}}{\Delta\xi}-\frac{1}{\sqrt{\pi w^{2}}}e^{-(\xi_{1}-\xi_{2})^{2}/w^{2}}\right]$ (30)

where

$w^{2}=8D_{Q}\left(\frac{1}{\tau_{i}}-\frac{1}{\tau_{f}}\right)\,.$ (31)

In the continuum limit $\delta_{\xi_{1},\xi_{2}}/\Delta\xi\rightarrow\delta(\xi_{1}-\xi_{2})$. The parameters chosen for this work are the same as in Ref. plumbergkapusta , namely $\tau_{i}=0.5$ fm/c, $\tau_{f}=6.352$ fm/c, $T_{i}=350$ MeV, and $T_{f}=150$ MeV. We use the diffusion constant $D_{Q}=0.162\>\text{fm}$, which is an average over the temperature interval from 150 to 350 MeV, taken from Ref. Aarts . The equation of state used is the same as in Ref. torres , which gives $\chi_{Q}=\frac{2}{3}T^{2}$ (including up, down, and strange quarks). The details of how we solve an SDE are discussed in Appendix A. The solution is presented in Fig. 5. The dots represent the result of the SDE simulation for ten million random events. The solid curve is the Gaussian from Eq. (30); it overlays the dots within the width of the line. The Kronecker $\delta$-function at $\xi=0$ is clearly evident.

Figure 5: White noise density-density correlation function for 10 million events. The solid curve is the Gaussian from Eq. (30). (color online)

Next we turn to colored noise. We have to generate a noise that has the desired correlation in proper time but is uncorrelated in rapidity. The way we do that is by solving another SDE, namely the Langevin equation

$f+\tau_{Q}\frac{\partial f}{\partial\tau}=\zeta\,.$ (32)

Here $\zeta$ is the regular white noise. The relaxation time $\tau_{Q}$ smooths out the Dirac $\delta$-correlation in proper time. It also introduces a maximum mode velocity, $v_{Q}^{2}=D_{Q}/\tau_{Q}$, thereby removing instantaneous signal propagation. The analytical solution to the Langevin equation (with rapidity dependences suppressed) is

$\langle f(\tau_{1})f(\tau_{2})\rangle=\frac{N(\tau_{2})}{4\pi\tau_{Q}}\left[e^{-|\tau_{1}-\tau_{2}|/\tau_{Q}}-e^{(2\tau_{i}-\tau_{1}-\tau_{2})/\tau_{Q}}\right]\equiv{\cal N}(\tau_{1},\tau_{2})\,.$ (33)

The derivation is given in Appendix B. The numerically computed two-point function is plotted in Fig. 6. The expected result (33) and the numerical result are consistent for ten million simulated events.

Figure 6: Comparison of numerical and analytical results for $v_{Q}^{2}=0.16$ with rapidity dependences suppressed. (color online)
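A minimal sketch of generating this colored noise by integrating Eq. (32) with an explicit Euler step is shown below. This is our own illustration: it assumes a constant normalization $N$ for simplicity (in the actual runs $N(\tau)$ evolves with the background), and the parameter values are illustrative. The accumulated correlator can be compared with the exponential structure of Eq. (33).

```python
import numpy as np

d_tau, n_tau, n_events = 1.0e-3, 2_000, 100_000
tau_Q, N = 0.5, 1.0   # relaxation time and (constant) noise normalization

probe = n_tau // 2
acc = np.zeros(n_tau)  # accumulates <f(tau) f(tau_probe)>
for _ in range(n_events):
    f = np.zeros(n_tau)
    for n in range(1, n_tau):
        zeta = np.random.normal(0.0, np.sqrt(N / d_tau))   # white noise source
        # Euler step of tau_Q df/dtau = -f + zeta, i.e. Eq. (32)
        f[n] = f[n - 1] + (d_tau / tau_Q) * (zeta - f[n - 1])
    acc += f * f[probe] / n_events
# 'acc' should trace the exponentially decaying correlator of Eq. (33),
# with the normalization conventions of Appendix B.
```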
The grid sizes chosen ensure that they obey the Courant-Friedrichs-Lewy (CFL) condition CFL . This condition states that the numerical domain of dependence of any point in space and time must include the analytical domain of dependence. Physically, this condition amounts to a signal propagating no more than one spatial cell away during one time step. For a constant speed $v_{Q}$, this amounts to the condition $\Delta\tau/\tau<\Delta\xi/v_{Q}$. Figure 7 shows the dependence of the two-point correlator for two very different values of the propagation speed, or equivalently the relaxation time $\tau_{Q}$.

Figure 7: Variation of the density-density correlator with the propagation speed $v^{2}_{Q}$. (color online)

## V Characterizing Self-Correlations

The self-correlation is trivial for the case of white noise: it is a Dirac $\delta$-function. Even the two-point correlation function of a free Boltzmann gas has a $\delta$-function term landau1 ; landau2 .

$\langle\delta n(\textbf{x}_{1})\delta n(\textbf{x}_{2})\rangle=\chi T\delta^{3}(\textbf{x}_{1}-\textbf{x}_{2})+\cdots$ (34)

This is explained in Ref. landau2 , where $\langle(\Delta N)^{2}\rangle=\chi TV$. One can see this in Eq. (30), where the denominator $\tau A$ is the Jacobian factor from the Bjorken expansion instead of stationary Cartesian coordinates. Experiments measure just the two-particle correlation, and hence we have to subtract the self-correlation spratt . The challenge is to characterize the self-correlations in the presence of colored noise, since the self-correlation is then no longer a Dirac $\delta$-function. Figure 8 shows the numerically computed self-correlation. If we subtract the two-point correlation in Fig. 5 from that presented in this figure, we get the expected Gaussian. This is shown in Fig. 9, where it is compared with the analytical Gaussian function in Eq. (30) for 1 million events.

Figure 8: The self-correlation.

Figure 9: The solid curve is the expected Gaussian while the dots represent the result of the SDE simulation for 1 million random events. (color online)

Now we move on to the meaning of self-correlation for colored noise. Based on the prescription of self-correlation that we discussed in the introduction, we consider the schematic diagram in Fig. 10. We are interested in noise sources at one particular $\xi$ because noise generated at any other $\xi$ would be uncorrelated.

Figure 10: Schematic of the self-correlation. The star denotes a noise source and the bursts are the charge fluctuations resulting from noise. (color online)

Let us try to understand what the analytical formula for this would look like. We start with the following expression for the charge fluctuation in $k$-space:

$\delta\tilde{n}(k,\tau)=-\frac{1}{\tau}\int_{\tau_{i}}^{\tau}d\tau^{\prime}s(\tau^{\prime})\tilde{G}(k,\tau,\tau^{\prime})\tilde{f}(k,\tau^{\prime})\,.$ (35)

Here $\tilde{G}$ is the Green's function for the homogeneous part of the SDE (29), which can be written down in terms of Kummer's function for the temperature dependences listed after that equation plumbergkapusta . This gives the full form of the two-point correlation function as in Eq. (49) of Ref. plumbergkapusta :

$\displaystyle\langle\delta n(\xi_{1},\tau_{f})\delta n(\xi_{2},\tau_{f})\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{\tau_{f}^{2}}\int\frac{dk}{2\pi}e^{ik(\xi_{1}-\xi_{2})}\int d\tau^{\prime}s(\tau^{\prime})\int d\tau^{\prime\prime}s(\tau^{\prime\prime})$ (36) $\displaystyle\times$ $\displaystyle\tilde{G}(k,\tau_{f},\tau^{\prime})\;\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})\,{\cal N}(\tau^{\prime},\tau^{\prime\prime})\,.$
Following Eqs. (54) and (55) of Ref. plumbergkapusta , we can write the self-correlation term as

$\displaystyle\langle\delta n(\xi_{1},\tau_{f})\delta n(\xi_{2},\tau_{f})\rangle_{\text{self}}$ $\displaystyle=$ $\displaystyle\frac{\chi_{Q}(\tau_{f})T_{f}}{A\tau_{Q}}\int\frac{d\tau^{\prime\prime}}{\tau^{\prime\prime}}\left[e^{-(\tau_{f}-\tau^{\prime\prime})/\tau_{Q}}-e^{-(\tau_{f}+\tau^{\prime\prime}-2\tau_{i})/\tau_{Q}}\right]\int\frac{dk}{2\pi}e^{ik(\xi_{1}-\xi_{2})}\frac{\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})}{ik}$ (37) $\displaystyle=$ $\displaystyle\frac{s(\tau_{f})}{D_{Q}}\int\frac{dk}{2\pi}e^{ik(\xi_{1}-\xi_{2})}\int d\tau^{\prime\prime}s(\tau^{\prime\prime})\;\frac{\tilde{G}(k,\tau_{f},\tau_{f})}{ik}\;\frac{\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})}{ik}\,{\cal N}(\tau_{f},\tau^{\prime\prime})\,.$

Recall from Ref. plumbergkapusta that $\tilde{G}(k,\tau_{f},\tau_{f})=ik$. It denotes a noise fluctuation that was generated at the final time and did not have to move anywhere. The factor $\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})$ is a noise fluctuation generated at a time $\tau^{\prime\prime}<\tau_{f}$ which then moved to $\xi_{2}$ by $\tau_{f}$. For white noise, fluctuations generated at two separate spacetime points cannot be correlated, so the fluctuation generated at $\tau_{f}$ is only correlated with itself. Hence for white noise $\tau_{Q}\to 0$, and the self-correlation is just a Dirac $\delta$-function. The larger $\tau_{Q}$ is, the further backward in time the noise sources remain correlated. Once generated, the noise travels and gives rise to a correlated electric charge fluctuation farther away in spacetime rapidity. Hence we expect the self-correlation term to be more spread out in spacetime rapidity. One can use the same SDE solver for generating the self-correlation. The only change is that the Green's function $\tilde{G}$ is replaced by $\tilde{G}/(ik)$ when solving for the charge density fluctuation:

$\displaystyle\langle\delta n(\xi_{2},\tau_{f})f(\xi_{1},\tau_{f})\rangle$ $\displaystyle=$ $\displaystyle\left\langle\int\frac{dk}{2\pi}e^{ik\xi_{2}}\delta\tilde{n}(k,\tau_{f})\int\frac{dk_{1}}{2\pi}e^{ik_{1}\xi_{1}}\tilde{f}(k_{1},\tau_{f})\right\rangle$ (38) $\displaystyle=$ $\displaystyle\frac{1}{\tau_{f}}\int^{\tau_{f}}_{\tau_{i}}d\tau^{\prime}s(\tau^{\prime})\int\frac{dk}{2\pi}e^{ik\xi_{2}}\frac{\tilde{G}(k,\tau^{\prime},\tau_{f})}{ik}\int\frac{dk_{1}}{2\pi}e^{ik_{1}\xi_{1}}\langle\tilde{f}(k,\tau^{\prime})\tilde{f}(k_{1},\tau_{f})\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{\tau_{f}}\int^{\tau_{f}}_{\tau_{i}}d\tau^{\prime}s(\tau^{\prime})\int\frac{dk}{2\pi}e^{ik(\xi_{2}-\xi_{1})}\frac{\tilde{G}(k,\tau^{\prime},\tau_{f})}{ik}\mathcal{N}(\tau^{\prime},\tau_{f})$ $\displaystyle=$ $\displaystyle\frac{D_{Q}}{s_{f}\tau_{f}}\,\frac{\chi_{f}T_{f}}{A\tau_{Q}}\int^{\tau_{f}}_{\tau_{i}}\frac{d\tau^{\prime}}{\tau^{\prime}}\left[e^{-|\tau_{f}-\tau^{\prime}|/\tau_{Q}}-e^{(2\tau_{i}-\tau_{f}-\tau^{\prime})/\tau_{Q}}\right]\int\frac{dk}{2\pi}e^{ik(\xi_{2}-\xi_{1})}\frac{\tilde{G}(k,\tau^{\prime},\tau_{f})}{ik}$

Note that in going from the first step to the second, we have used $\tilde{G}/(ik)$ and not $\tilde{G}$. The implication is that for generating the self-correlations we use the differential equation whose Green's function is $\tilde{G}/(ik)$ instead of $\tilde{G}$. Thus we arrive at the following relation for self-correlations.
$\langle\delta n(\xi_{1},\tau_{f})\delta n(\xi_{2},\tau_{f})\rangle_{\text{self}}=\frac{s(\tau_{f})\tau_{f}}{D_{Q}}\langle\delta n(\xi_{1},\tau_{f})f(\xi_{2},\tau_{f})\rangle\,.$ (39)

If $\tilde{G}/(ik)$ is our desired Green's function, then $Z\equiv(\tau\delta n(\xi,\tau))/(\tau_{f}s(\tau_{f}))=\delta n(\xi,\tau)/s(\tau)$ satisfies the following equation

$\left(z^{2}+2z\frac{\tau_{Q}}{\tau_{i}}\right)\frac{\partial Z}{\partial z}+z^{2}\frac{\tau_{Q}}{\tau_{i}}\frac{\partial^{2}Z}{\partial z^{2}}-v_{Q}^{2}\frac{\tau_{Q}}{\tau_{i}}\frac{\partial^{2}Z}{\partial\xi^{2}}+\left(z+\frac{\tau_{Q}}{\tau_{i}}\right)f+z\frac{\tau_{Q}}{\tau_{i}}\frac{\partial f}{\partial z}=0\,,$ (40)

where $z=\tau/\tau_{i}$. This is the same as Eq. (25) except that $\partial f/\partial\xi$ is replaced by $f$ and $\partial^{2}f/\partial\xi\partial\tau$ is replaced by $\partial f/\partial\tau$. The justification is discussed in Appendix C. In Fig. 11 we show the self-correlation at the final time $\tau_{f}$ for various values of $\tau_{Q}$. As the speed of propagation decreases, the peak of the self-correlation becomes lower and the distribution becomes wider. As a check, the limit $\tau_{Q}\to 0$ is shown in Fig. 12.

Figure 11: Numerical results for self-correlations for colored noise. (color online)

Figure 12: Numerical results for self-correlation for white noise.

## VI Balance Functions

Balance functions are described in Ref. spratt . The width of a balance function plotted against particle rapidity is a measure of the diffusion. Balance functions have been studied experimentally by the ALICE and STAR collaborations balance1 ; balance2 ; balance3 ; balance4 . Reference ling_springer_stephanov studied the effect of white noise on balance functions and compared their analytical results with experimental data. Reference plumbergkapusta calculated balance functions for colored noise. We will see how the widths of balance functions change as we vary the speed of signal propagation in the case of Cattaneo noise. To see the effect of charge fluctuations on particle spectra we have to calculate how the fluctuations freeze out. The freeze-out happens when the system has expanded and cooled to the extent that thermal equilibrium can no longer be maintained. Then hadrons freeze out and free-stream to the detectors. The standard procedure to calculate freeze-out abundances of particles is the Cooper-Frye prescription cooper_frye . This formula gives us the distribution of emitted particles on a freeze-out hypersurface $\Sigma_{f}$. This procedure has already been performed for this hydrodynamical model in Refs. kapusta2012 ; plumbergkapusta ; ling_springer_stephanov ; torres . We will just give the salient features of that calculation here.

$E\frac{dN}{d^{3}p}=d\int_{\Sigma_{f}}\frac{d^{3}\sigma_{\mu}}{(2\pi)^{3}}p^{\mu}f({\mbox{\boldmath$x$},\mbox{\boldmath$p$}})$ (41)

Here $d$ is the degeneracy of the particle species under consideration. We take the distribution function to be the relativistic Boltzmann distribution

$f({\mbox{\boldmath$x$},\mbox{\boldmath$p$}})=e^{-(u\cdot p-\mu)/T}\,,$ (42)

where $\mu$ is the chemical potential for that particle. The energy flux through an infinitesimal freeze-out fluid cell is given by

$d^{3}\sigma_{\mu}p^{\mu}=\tau_{f}\,d\xi\,d^{2}x_{\perp}m_{\perp}\cosh(y-\xi)\,.$ (43)

The variable $y$ represents the particle rapidity

$p^{\mu}=(m_{\perp}\cosh y,p_{x},p_{y},m_{\perp}\sinh y)$ (44)

with $m_{\perp}=\sqrt{m^{2}+p_{\perp}^{2}}$ the transverse mass.
The average number of particles per unit rapidity at the final freeze-out time is

$\left\langle\frac{dN}{dy}\right\rangle=\frac{dA\tau_{f}T_{f}^{3}}{4\pi^{2}}\int^{\infty}_{-\infty}\frac{dx}{\cosh^{2}x}\Gamma\left(3,\frac{m}{T_{f}}\cosh x\right)\,.$ (45)

Reference plumbergkapusta calculates the fluctuation in this quantity due to a fluctuation in $\mu$ around the freeze-out value $\mu_{f}=0$. After a few more steps of algebra, the fluctuation in $\frac{dN}{dy}$ reads

$\delta\left(\frac{dN}{dy}\right)=\frac{dA\tau_{f}T_{f}^{2}}{4\pi^{2}}\int d\xi\,\delta n\,F_{n}(y-\xi)$ (46)

where $F_{n}$ is the smearing function

$F_{n}(x)=\frac{1}{\chi_{Q}\cosh^{2}x}\Gamma\left(3,\frac{m}{T_{f}}\cosh x\right)\,.$ (47)

Using this in the definition of the charge balance function, we arrive at Eq. (74) of Ref. plumbergkapusta :

$B(\Delta y)=\frac{\langle\delta\left(dN/dy_{1}\right)\delta\left(dN/dy_{2}\right)\rangle}{\langle dN/dy\rangle}=\frac{dA\tau_{f}T_{f}}{4\pi^{2}}\frac{C(\Delta y)}{Q(m/T_{f})}\,.$ (48)

Here

$C(\Delta y)=2\pi\int d\xi_{1}d\xi_{2}\,F_{n}(y_{1}-\xi_{1})\,F_{n}(y_{2}-\xi_{2})\,C_{nn}(\xi_{1}-\xi_{2},\tau_{f})\,.$ (49)

The two-point correlator for the charge fluctuation is $C_{nn}(\xi_{1}-\xi_{2},\tau_{f})$, which is obtained from the solution of the SDE. The function $Q$ is given by

$Q\left(\frac{m}{T_{f}}\right)=\int^{\infty}_{-\infty}\frac{dx}{\cosh^{2}x}\Gamma\left(3,\frac{m}{T_{f}}\cosh x\right)\,.$ (50)

Let us first demonstrate the trivial self-correlation for white noise in terms of the balance function for pions. Figure 13 shows the balance function for the full unsubtracted correlation function for white noise. Notice the positive and negative parts of the curve; this is because the full two-point correlation for white noise is composed of a positive self-correlation and a negative piece which does not include any self-correlation. The balance function for the self-correlation part only is shown in Fig. 14. When this is subtracted from Fig. 13 one obtains the so-called subtracted balance function shown in Fig. 15.

Figure 13: Balance function for the full white noise two-point function.

Figure 14: Balance function for the self-correlation of white noise.

Figure 15: Balance function for the pure two-point function of white noise.

We follow the same procedure for carrying out cancellations of the contributions arising from the self-correlations for colored noise to the balance function. Figure 16 shows the full unsubtracted balance functions for various values of $v_{Q}$ for colored noise. Figure 17 shows the self-correlation part only, and Fig. 18 shows the subtracted balance functions. The width of the subtracted balance function denotes the diffusion distance. That width increases with increasing $v_{Q}$, as expected, since it represents the rapidity interval over which the average charge pair has diffused by freeze-out.

Figure 16: Balance function for the full unsubtracted two-point function. (color online)

Figure 17: Balance function for the self-correlation part of the two-point function. (color online)

Figure 18: Balance function for the subtracted two-point correlation function. (color online)

We estimated the error in our numerical simulations using the jackknife method. This method estimates the error of statistics without making any assumptions about the distribution that generated the data; it only uses the sample provided. We create jackknife samples over the whole data set, which are "leave-one-out" data sets. In our case, we consider the two-point correlation statistic $S$ on the original sample of $10^{7}$ events. We leave out the $i$-th event to create the $i$-th jackknife statistic $S_{i}$. The average of the jackknife sample is $S_{\rm avg}=\sum_{i}S_{i}/n$. The jackknife error is then estimated as

$\sigma_{\rm jack}=\sqrt{\frac{n-1}{n}\sum_{i}(S_{i}-S_{\rm avg})^{2}}\,.$ (51)

The error we observe on $\langle\delta n\delta n\rangle$ is of the order of $10^{-2}\;\text{MeV}^{3}\>\text{fm}^{-3}$. This amounts to $\sigma_{\langle\delta n\delta n\rangle}/\langle\delta n\delta n\rangle\approx 10^{-3}$. We give a representative plot of the error bounds for $v_{Q}^{2}=1/10$ in Fig. 19. The bounds are visible only when zoomed in. This shows that for $10^{7}$ events, the statistical error in our simulations turns out to be negligible.

Figure 19: Jackknife error bounds for $v_{Q}^{2}=1/10$. (color online)
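A minimal sketch of this leave-one-out procedure is given below; it is our own illustration, and it assumes the statistic $S$ is a simple event average so that each jackknife replica can be formed from the running total.

```python
import numpy as np

def jackknife_error(samples):
    """Leave-one-out jackknife error for a statistic that is an event average."""
    n = len(samples)
    total = samples.sum()
    s_i = (total - samples) / (n - 1)   # i-th jackknife statistic S_i
    s_avg = s_i.mean()
    return np.sqrt((n - 1) / n * np.sum((s_i - s_avg) ** 2))  # Eq. (51)

# toy usage: per-event values of, e.g., delta-n(xi1) * delta-n(xi2)
events = np.random.normal(0.0, 1.0, 1_000_000)
print(jackknife_error(events))
```

For an event-averaged statistic the jackknife error reduces to the familiar standard error of the mean, which is a useful consistency check of the implementation.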
## VII Conclusions

State-of-the-art modeling of high energy nuclear collisions uses relativistic second-order viscous hydrodynamics. The fluctuation-dissipation theorem says that viscosity and thermal fluctuations are intricately connected. Although thousands of particles are produced in these collisions, that number is still immensely smaller than Avogadro's number. Therefore it has become apparent that thermal fluctuations really ought to be part of the standard model for heavy ion collisions kapusta2012 . Fully 3+1 dimensional hydrodynamic simulations are required, which presents a major challenge for the implementation of thermal noise. The goal of this paper is to understand the numerical methods necessary to do this and, furthermore, how to subtract self-correlations from the numerically computed two-point correlators in order to compare with experiment.

We chose to study causal electric charge diffusion in the boost-invariant 1+1 dimensional Bjorken model for two reasons. First, a Cattaneo description of diffusion propagates signals at a finite speed, which is a necessity in heavy ion collisions. Second, this simple model was studied and solved with essentially analytic methods plumbergkapusta , against which we can compare to verify the validity of the purely numerical approach. We introduced the noise term in the dissipative charge conservation equation, which in our case is the Cattaneo noise. We simulated the stochastic differential equations that arise from the electric charge conservation equations. The way we solve the stochastic differential equations is by using normal random number generators with a specific, well-defined variance and then interpreting the derivatives of the noise in terms of what they mean when integrated by parts. The whole machinery of how to handle the noise is discussed in Appendix A. We used this methodology in simulating the white noise charge conservation equation and obtained the expected result. Then we generated colored Cattaneo noise using a Langevin equation. We solved the full colored noise charge conservation equation and again obtained the expected result. The two-point charge correlator consists of two pieces. The first is the self-correlation, which is a manifestation of the stochastic nature of the dynamics. Once this piece is subtracted off, we are left with the physically relevant two-point correlation function. The self-correlation is a trivial Dirac $\delta$-function in the case of white noise, but it is more complicated for colored noise.
In this work, we gave a physically insightful interpretation of the meaning of the self-correlation in the case of colored noise. This interpretation allows us to use the stochastic differential equation solver we developed to generate the self-correlations. In the case of white noise, we populated the whole spacetime lattice with noise source terms uncorrelated with each other. It is obvious that not all of the individual noise terms are required to calculate the final two-point correlation function, but more than a single noise term is necessary. Hence Monte-Carlo simulations with only a single noise term would be insufficient to reproduce the results for colored noise. One can, however, speed up the stochastic differential equation solving procedure by removing noise terms that are outside the causal past of the spacetime points for which we want to calculate the two-point correlations.

We used the results obtained to compute the balance functions for pions within the context of this model. As one would expect, reducing the speed of signal propagation leads to a narrowing of the balance functions and to a corresponding increase in their height at small rapidities. As done previously in Ref. plumbergkapusta , we neglect the contributions from resonance decays to the measured particle spectra used in the balance functions. Our results are in good quantitative agreement with that previous study; the numerical method used in this paper is thereby verified. Future work entails extending the current methodology to full 3+1 dimensional fluid dynamical models of heavy ion collisions such as MUSIC music . Further, the prescription for self-correlations given in this paper for Cattaneo noise can be straightforwardly extended to the cases of shear and bulk viscosity and thermal conductivity, the details of which are deferred to future work. Since the baryon charge conductivity diverges near a critical point, this study can be extended to investigate charge fluctuations near the purported QCD critical point, which is also deferred to future work. Another possible direction of future work would be to study the higher order cumulants in the presence of colored noise. Since two-point correlations and higher order cumulants are expected to diverge near a QCD critical point, the ultimate culmination of the present work would be to characterize the noisy hydrodynamics of near-critical-point behavior in heavy ion collisions.

## VIII Acknowledgements

A. D. thanks Gaurav Nirala for enlightening discussions. We thank Chun Shen for suggesting the jackknife method. This work was supported by the U.S. DOE Grant No. DE-FG02-87ER40328. C. P. acknowledges support from the CLASH project (KAW 2017-0036). The authors acknowledge the Minnesota Supercomputing Institute (MSI) at the University of Minnesota for providing resources that contributed to the research results reported within this paper.

## Appendix A Numerical Simulation of SDEs

In this appendix, we show our procedure for representing the Dirac delta function and its derivatives on a discrete lattice. White noise is defined by $\langle f(x)f(x^{\prime})\rangle=\delta(x-x^{\prime})$ and $\langle f(x)\rangle=0$. This implies that

$\int dx\>g(x)\langle f(x)f(x^{\prime})\rangle=g(x^{\prime})\,.$ (52)

On a discrete lattice, the integral becomes a sum over lattice points and $dx$ becomes the lattice spacing $\Delta x$.
Hence

$g(x_{i})=\sum_{i^{\prime}}g(x_{i^{\prime}})\langle f(x_{i})f(x_{i^{\prime}})\rangle\Delta x=\sum_{i^{\prime}}g(x_{i^{\prime}})\left(\frac{\delta_{ii^{\prime}}}{\Delta x}\right)\Delta x\,.$ (53)

From the above, we can conclude that

$\langle f(x_{i})f(x_{i^{\prime}})\rangle=\frac{\delta_{ii^{\prime}}}{\Delta x}\,.$ (54)

The quantity $\delta_{ii^{\prime}}/\Delta x$ becomes a Dirac delta function in the continuum limit $\Delta x\rightarrow 0$. Therefore we sample the white noise function $f$ from a normal distribution with mean $0$ and standard deviation $1/\sqrt{\Delta x}$. We use a random number generator for a large number of instances ($10^{6}$) and compute the two-point function. The result is a Kronecker delta peak at $x=0$ whose height is the variance. This is illustrated in the following figure. We used $\Delta x=0.09$.

Figure 20: Two-point function of $f$ for 1 million events.

Next, we investigate the correlation between white noise $f$ and its derivative $df/dx$. The two-point function $\langle f(x)f^{\prime}(x^{\prime})\rangle$ must then satisfy

$\int dx\,g(x)\langle f(x)f^{\prime}(x^{\prime})\rangle=\int dx\,g(x)\frac{\partial}{\partial x^{\prime}}\delta(x-x^{\prime})=\frac{\partial}{\partial x^{\prime}}\int dx\,g(x)\delta(x-x^{\prime})=g^{\prime}(x^{\prime})\,.$ (55)

On a discrete lattice the derivative is $g^{\prime}(x)=(g_{i+1}-g_{i})/\Delta x$. Replacing the integral by the sum, we get

$g^{\prime}(x_{i})=\sum_{i^{\prime}}g(x_{i^{\prime}})\langle f(x_{i})f^{\prime}(x_{i^{\prime}})\rangle\Delta x=\frac{g_{i+1}-g_{i}}{\Delta x}=\sum_{i^{\prime}}g(x_{i^{\prime}})\left(\frac{\delta_{i+1,i^{\prime}}-\delta_{i,i^{\prime}}}{\Delta x^{2}}\right)\Delta x\,.$ (56)

Hence

$\langle f(x_{i})f^{\prime}(x_{i^{\prime}})\rangle=\frac{\delta_{i+1,i^{\prime}}-\delta_{i,i^{\prime}}}{\Delta x^{2}}\,.$ (57)

If we again use the previous random number generator and calculate the two-point function, we get the results shown in Fig. 21.

Figure 21: The correlation between $f$ and its derivative for 1 million events.

Similarly, we can look into the correlation of the derivative of white noise with itself:

$\langle f^{\prime}(x)f^{\prime}(x^{\prime})\rangle=\frac{\partial^{2}}{\partial x\partial x^{\prime}}\delta(x-x^{\prime})$ (58)

$\int dx\,g(x)\langle f^{\prime}(x)f^{\prime}(x^{\prime})\rangle=\int dx\,g(x)\frac{\partial^{2}}{\partial x\partial x^{\prime}}\delta(x-x^{\prime})=-\int dx\,g(x)\frac{\partial^{2}}{\partial x^{2}}\delta(x-x^{\prime})=-g^{\prime\prime}(x^{\prime})\,.$ (59)

In the second step we performed an integration by parts. The second derivative is defined in the discrete case as $g^{\prime\prime}(x)=(g_{i+1}+g_{i-1}-2g_{i})/\Delta x^{2}$. Substituting the discrete sum in place of the integral, we get

$-g^{\prime\prime}(x^{\prime})=-\sum_{i}g_{i}\frac{\delta_{i,i^{\prime}+1}+\delta_{i,i^{\prime}-1}-2\delta_{i,i^{\prime}}}{\Delta x^{3}}\Delta x=\sum_{i^{\prime}}g(x_{i^{\prime}})\langle f^{\prime}(x_{i})f^{\prime}(x_{i^{\prime}})\rangle\Delta x$ (60)

$\langle f^{\prime}(x_{i})f^{\prime}(x_{i^{\prime}})\rangle=-\frac{\delta_{i,i^{\prime}+1}+\delta_{i,i^{\prime}-1}-2\delta_{i,i^{\prime}}}{\Delta x^{3}}\,.$ (61)

Figure 22 shows what we get numerically.

Figure 22: The two-point correlation of the derivative of $f$ for 1 million events.
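These discrete representations can be verified directly. The following is a minimal NumPy sketch of such a check, our own illustration: it uses fewer events than the $10^{6}$ quoted above to keep the memory footprint modest, and the periodic wrap-around at the array edge is an artifact of the toy test, not of the procedure itself.

```python
import numpy as np

dx, n_cells, n_events = 0.09, 32, 100_000
f = np.random.normal(0.0, 1.0 / np.sqrt(dx), (n_events, n_cells))
fp = (np.roll(f, -1, axis=1) - f) / dx        # discrete derivative of the noise

c_ff   = (f[:, :1]  * f ).mean(axis=0)        # compare with Eq. (54)
c_ffp  = (f[:, :1]  * fp).mean(axis=0)        # compare with Eq. (57)
c_fpfp = (fp[:, :1] * fp).mean(axis=0)        # compare with Eq. (61)
```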
The integral of white noise is called a random walk, $W(z)$, which is a succession of random steps as a function of $z$. It is defined by

$W=\int_{z_{i}}^{z}f(z^{\prime})dz^{\prime}\,.$ (62)

We can easily calculate the variance of $W$:

$\langle W^{2}(z)\rangle=\int_{z_{i}}^{z}dz^{\prime}\int_{z_{i}}^{z}dz^{\prime\prime}\langle f(z^{\prime})f(z^{\prime\prime})\rangle=\int_{z_{i}}^{z}dz^{\prime}\int_{z_{i}}^{z}dz^{\prime\prime}\delta(z^{\prime}-z^{\prime\prime})=\int_{z_{i}}^{z}dz^{\prime}=z-z_{i}\,.$ (63)

On a discrete lattice, $W_{i+1}=W_{i}+f_{i}\Delta z$, where we source $f$ from a normal distribution with mean $0$ and standard deviation $1/\sqrt{\Delta z}$:

$W_{n}=\sum_{i=1}^{n}f_{i}\Delta z\,.$ (64)

Hence the variance is (the cross terms vanish because the $f_{i}$ are independent)

$\displaystyle\langle W^{2}\rangle=\langle(\sum_{i=1}^{n}\Delta zf_{i})^{2}\rangle=\sum_{i=1}^{n}(\Delta z)^{2}\langle f_{i}^{2}\rangle=\sum_{i=1}^{n}(\Delta z)^{2}\frac{1}{\Delta z}=\sum_{i=1}^{n}\Delta z=z-z_{i}\,.$

Figure 23 shows the numerical results.

Figure 23: Two-point correlation of $W$ for 1 million events with $z-z_{i}=5$.

We are ready to take up a simple stochastic differential equation to solve. Consider

$\frac{dX}{dz}=-\frac{\partial f}{\partial\xi}$ (66)

where $z$ has dimensions of time and $\xi$ is dimensionless. Let us define the following two-point function:

$\langle f(z_{1},\xi_{1})f(z_{2},\xi_{2})\rangle=M\delta(z_{1}-z_{2})\delta(\xi_{1}-\xi_{2})\,.$

Here $M$ has dimensions of time to make $f$ dimensionless. We calculate the two-point function in $\xi$:

$\displaystyle\langle X(z_{f},\xi_{1})X(z_{f},\xi_{2})\rangle$ $\displaystyle=$ $\displaystyle\left\langle\int^{z_{f}}_{z_{i}}\frac{\partial f}{\partial\xi}(\xi_{1})dz\int^{z_{f}}_{z_{i}}\frac{\partial f}{\partial\xi}(\xi_{2})dz^{\prime}\right\rangle$ (67) $\displaystyle=$ $\displaystyle\int^{z_{f}}_{z_{i}}\int^{z_{f}}_{z_{i}}dzdz^{\prime}\left\langle\frac{\partial f}{\partial\xi}(z,\xi_{1})\frac{\partial f}{\partial\xi}(z^{\prime},\xi_{2})\right\rangle$ $\displaystyle=$ $\displaystyle-\int^{z_{f}}_{z_{i}}\int^{z_{f}}_{z_{i}}dzdz^{\prime}M\left(\frac{\delta_{i+1}+\delta_{i-1}-2\delta_{i}}{\Delta\xi^{3}}\right)\delta(z-z^{\prime})$ $\displaystyle=$ $\displaystyle-M(z_{f}-z_{i})\left(\frac{\delta_{i+1}+\delta_{i-1}-2\delta_{i}}{\Delta\xi^{3}}\right)$

We used Eq. (61) in the above calculation. The two-point function has dimensions of time squared, as does the expression on the right. On a discrete lattice,

$X(z+\Delta z)=X(z)-\Delta z\times\frac{\Delta f}{\Delta\xi}\,.$ (68)

Figure 24 shows the numerical results using $z_{f}-z_{i}=10$.

Figure 24: Two-point correlation of $X$ for a million events.
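A minimal NumPy sketch of this last check is given below; it is our own illustration, with illustrative grid values and, again, periodic boundaries.

```python
import numpy as np

dz, d_xi, M = 0.05, 0.09, 1.0
n_z, n_xi, n_events = 200, 64, 10_000   # z_f - z_i = n_z * dz = 10

corr = np.zeros(n_xi)
for _ in range(n_events):
    X = np.zeros(n_xi)
    for _ in range(n_z):
        # f sampled so that <ff> -> M delta(z - z') delta(xi - xi') on the lattice
        f = np.random.normal(0.0, np.sqrt(M / (dz * d_xi)), n_xi)
        X -= dz * (np.roll(f, -1) - f) / d_xi   # discrete update, Eq. (68)
    corr += X[0] * X / n_events
# corr should reproduce -M (z_f - z_i) times the discrete second derivative
# of a Kronecker delta, i.e. the last line of Eq. (67).
```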
## Appendix B Analytical Solution of the Langevin Equation

The Langevin equation can be written as

$\frac{df(\tau)}{d\tau}=-\frac{1}{\tau_{Q}}f(\tau)+\frac{1}{\tau_{Q}}\zeta(\tau)\,.$ (69)

Here $\zeta$ is white noise and $f$ is the Cattaneo noise:

$\langle\zeta(\tau)\rangle=0\qquad\langle\zeta(\tau_{1})\zeta(\tau_{2})\rangle=N(\tau_{1})\delta(\tau_{1}-\tau_{2})\,.$ (70)

It does not matter whether we use $N(\tau_{1})$ or $N(\tau_{2})$ because of the Dirac delta function. Let us multiply both sides by the factor $e^{\tau/\tau_{Q}}$ and integrate:

$\int_{\tau_{i}}^{\tau}\frac{d}{d\tau^{\prime}}\left(e^{\tau^{\prime}/\tau_{Q}}f(\tau^{\prime})\right)d\tau^{\prime}=\int_{\tau_{i}}^{\tau}\frac{e^{\tau^{\prime}/\tau_{Q}}}{\tau_{Q}}\zeta(\tau^{\prime})d\tau^{\prime}$ (71)

$f(\tau)-e^{-(\tau-\tau_{i})/\tau_{Q}}f(\tau_{i})=\frac{1}{\tau_{Q}}\int^{\tau}_{\tau_{i}}\zeta(\tau^{\prime})e^{(\tau^{\prime}-\tau)/\tau_{Q}}d\tau^{\prime}$ (72)

Let us set $f(\tau_{i})=0$. Another way to see this is that an equilibrated system retains no memory of its initial conditions. Any fluctuations in $f(\tau)$ will then be solely due to the action of $\zeta(\tau)$. Now we consider two separate times $\tau_{1}$ and $\tau_{2}$:

$\displaystyle\begin{aligned}\langle f(\tau_{1})f(\tau_{2})\rangle&=\frac{N}{\tau_{Q}^{2}}\int_{\tau_{i}}^{\tau_{1}}d\tau^{\prime\prime}\,e^{(\tau^{\prime\prime}-\tau_{1})/\tau_{Q}}\int_{\tau_{i}}^{\tau_{2}}d\tau^{\prime}\,e^{(\tau^{\prime}-\tau_{2})/\tau_{Q}}\,\delta(\tau^{\prime\prime}-\tau^{\prime})\\ &=\frac{N}{\tau_{Q}^{2}}\int_{\tau_{i}}^{\text{min}(\tau_{1},\tau_{2})}e^{(2\tau^{\prime\prime}-\tau_{1}-\tau_{2})/\tau_{Q}}d\tau^{\prime\prime}=\frac{N}{2\tau_{Q}}\left[e^{-|\tau_{1}-\tau_{2}|/\tau_{Q}}-e^{(2\tau_{i}-\tau_{1}-\tau_{2})/\tau_{Q}}\right]\end{aligned}$ (73)

## Appendix C Constructing the Self-Correlations

Self-correlations are defined by Eq. (37). As discussed above, their dynamics can be modeled by an equation whose (Fourier-transformed) Green's function is related to the original Green's function by

$\tilde{G}_{\mathrm{self}}(k,\tau,\tau^{\prime})=\frac{\tilde{G}(k,\tau,\tau^{\prime})}{ik}\,.$ (74)

The original Green's function is defined schematically by the stochastic differential equation

$D_{1}X(\tau,\xi)=D_{2}\frac{\partial f}{\partial\xi}(\tau,\xi)\,,$ (75)

where $D_{1}$ and $D_{2}$ are differential operators which contain no explicit $\xi$-dependence (other than $\xi$-derivatives) and $f$ is the noisy source. Fourier transforming the $\xi$-dependence to $k$ as before, this equation becomes

$\tilde{D}_{1}\tilde{X}(\tau,k)=ik\tilde{D}_{2}\tilde{f}(\tau,k)$ (76)

and its solution is written in terms of the original Green's function as

$\tilde{X}(k,\tau)=-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\tilde{G}(k;\tau,\tau^{\prime})\tilde{f}(k,\tau^{\prime})\,.$ (77)

We therefore seek an 'unphysical' field $X_{\mathrm{self}}$ whose two-point function corresponds to the self-correlations which need to be subtracted out. This field is generated by the expression

$\displaystyle\tilde{X}_{\mathrm{self}}(k,\tau)$ $\displaystyle=$ $\displaystyle-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\tilde{G}_{\mathrm{self}}(k;\tau,\tau^{\prime})\tilde{f}(k,\tau^{\prime})$ (78) $\displaystyle=$ $\displaystyle-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\frac{\tilde{G}(k;\tau,\tau^{\prime})}{ik}\tilde{f}(k,\tau^{\prime})$ $\displaystyle=$ $\displaystyle-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\tilde{G}(k;\tau,\tau^{\prime})\left(\frac{\tilde{f}(k,\tau^{\prime})}{ik}\right)$

The self-correlations can therefore be straightforwardly generated by replacing $\tilde{f}\to\tilde{f}/(ik)$ in $k$-space, which amounts to discarding the $\xi$-derivative in Eq. (75). Thus, the unphysical self-correlation field is generated by solving the modified equation

$D_{1}X_{\mathrm{self}}(\tau,\xi)=D_{2}f(\tau,\xi)\,.$ (79)

## References

* (1) J. I. Kapusta and J. M. Torres-Rincon, Phys. Rev. C 86, 054911 (2012).
* (2) Y. Akamatsu, D. Teaney, F. Yan, and Y. Yin, Phys. Rev. C 100, 044901 (2019).
* (3) M. Stephanov and Y. Yin, Phys. Rev. D 98, 036006 (2018).
* (4) M. Nahrgang, M. Bluhm, T. Schäfer, and S. A. Bass, Phys. Rev. D 99, 116015 (2019).
* (5) M. Bluhm et al., arXiv:2001.08831 [nucl-th].
* (6) M. Stephanov, K. Rajagopal, and E. Shuryak, Phys. Rev. D 60, 114028 (1999).
* (7) M. A. Stephanov, Phys. Rev. Lett. 102, 032301 (2009).
* (8) B. Berdnikov and K. Rajagopal, Phys. Rev. D 61, 105017 (2000).
* (9) J. I. Kapusta, B. Müller, and M. Stephanov, Phys. Rev. C 85, 054906 (2012).
* (10) S. Pratt, Phys. Rev. Lett. 108, 212301 (2012).
* (11) J. I. Kapusta and C. Plumberg, Phys. Rev. C 97, 014906 (2018).
* (12) C. Cattaneo, Atti del Semin. Mat. e Fis. Univ. Modena 3, 3 (1948); C. R. Acad. Sci. 247, 431 (1958).
* (13) J. I. Kapusta and C. Young, Phys. Rev. C 90, 044902 (2014).
* (14) M. E. Gurtin and A. C. Pipkin, Arch. Ration. Mech. Anal. 31, 113 (1968).
* (15) G. Aarts, C. Allton, A. Amato, P. Giudice, S. Hands, and J.-I. Skullerud, JHEP 02, 186 (2015).
* (16) R. Courant, K. Friedrichs, and H. Lewy, Mathematische Annalen 100, 32 (1928).
* (17) L. D. Landau and E. M. Lifshitz, Statistical Physics: Part 1, Vol. 5, Course of Theoretical Physics (Pergamon, 1980).
* (18) E. M. Lifshitz and L. P. Pitaevskii, Statistical Physics: Part 2, Vol. 9, Course of Theoretical Physics (Pergamon, 1980).
* (19) B. Abelev, et al. (STAR Collaboration), Phys. Rev. Lett. 105, 022301 (2010).
* (20) A. R. Timmins (for the ALICE Collaboration), J. Phys. G: Nucl. Part. Phys. 38, 124093 (2011).
* (21) M. Aggarwal, et al. (STAR Collaboration), Phys. Rev. C 82, 024905 (2010).
* (22) B. Abelev, et al. (ALICE Collaboration), Phys. Lett. B 723, 267 (2013).
* (23) B. Ling, T. Springer, and M. Stephanov, Phys. Rev. C 89, 064901 (2014).
* (24) F. Cooper and G. Frye, Phys. Rev. D 10, 186 (1974).
* (25) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 82, 014903 (2010).
* (26) https://github.com/audide12/HeavyIon
# SQUIRL: Robust and Efficient Learning from Video Demonstration of Long-Horizon Robotic Manipulation Tasks

Bohan Wu, Feng Xu, Zhanpeng He, Abhi Gupta, and Peter K. Allen

This work is supported by NSF Grant CMMI-1734557. Authors are with Columbia University Robotics Group, New York, NY, 10027, USA.

###### Abstract

Recent advances in deep reinforcement learning (RL) have demonstrated its potential to learn complex robotic manipulation tasks. However, RL still requires the robot to collect a large amount of real-world experience. To address this problem, recent works have proposed learning from expert demonstrations (LfD), particularly via inverse reinforcement learning (IRL), given its ability to achieve robust performance with only a small number of expert demonstrations. Nevertheless, deploying IRL on real robots is still challenging due to the large number of robot experiences it requires. This paper aims to address this scalability challenge with a robust, sample-efficient, and general meta-IRL algorithm, SQUIRL, that performs a new but related long-horizon task robustly given only a single video demonstration. First, this algorithm bootstraps the learning of a task encoder and a task-conditioned policy using behavioral cloning (BC). It then collects real-robot experiences and bypasses reward learning by directly recovering a Q-function from the combined robot and expert trajectories. Next, this algorithm uses the Q-function to re-evaluate all cumulative experiences collected by the robot to improve the policy quickly. In the end, the policy performs more robustly (90%+ success) than BC on new tasks while requiring no trial-and-errors at test time. Finally, our real-robot and simulated experiments demonstrate our algorithm's generality across different state spaces, action spaces, and vision-based manipulation tasks, e.g., pick-pour-place and pick-carry-drop.

## I Introduction

We aspire for robots to become generalists who acquire new complex skills robustly and quickly. The robotic system, whether planned or learned, needs to leverage its existing knowledge to solve a new but related task in an efficient yet high-performance manner. Thanks to recent advances in machine learning and sim-to-real transfer mechanisms, short-horizon robotic manipulation such as grasping has improved in performance. However, many real-world robotic manipulation tasks are long-horizon, diverse, and abundant in volume. In the absence of a scalable and systematic way to construct simulation environments for a large number of tasks, the robot needs to learn a new task directly in the physical world from only a handful of trials, due to the high cost of collecting real-robot trial-and-errors and experiences.

Figure 1: Learning from a single video demonstration of a long-horizon manipulation task via Soft Q-functioned Meta-IRL (SQUIRL). In the pick-pour-place example above, the robot needs to approach, pick-up and carry the grey bottle, pour the iron pebble inside the bottle into a specific container, and finally place the bottle back on the table. During training, the robot is given a single video demonstration for each of the 117 training tasks. After learning from these 117 videos, the robot also practices 90 trial-and-errors in total on these tasks. From such combined expert and robot trajectories, the robot learns the general skills of pouring robustly.
At test time, given a single video demonstration of pouring into a new, unseen red container at a new position, the robot successfully replicates this new task without the need for any trial-and-errors.

We observe that real-world robotic skill acquisition can become more sample-efficient in several important ways. First, we notice that humans learn tasks quickly by watching others perform similar tasks. Among many forms of task representations such as rewards, goal images, and language instructions, human demonstrations guide exploration effectively and can lead to significant sample efficiency gains. Furthermore, learning from video demonstrations sidesteps hand-designing a proper reward function for every new task. In the case of a vision-based task, video demonstrations also conveniently provide the same pixel state space for the robot. In learning from demonstrations (LfD), the robot should be sample-efficient in two dimensions: it should use as few expert demonstrations ("demonstrations" hereafter) as possible and take as few trial-and-errors (practices) as possible on its own to learn a robust policy. Among LfD methods, behavioral cloning ("BC" hereafter) is sample-efficient but susceptible to compounding errors. Here, compounding errors refer to the problem in which every time a behavioral-cloned robot makes a small error, it makes a larger error down the road as it drifts away from the expert state distribution. In contrast, IRL alleviates compounding errors by allowing the robot to try the tasks out in the real world and measure its behavior against the expert. However, due to the need to learn a reward function, IRL can require many trial-and-errors in the real world, while BC does not require such robot experiences. We posit that leveraging off-policy experiences of trial-and-errors is essential to making IRL sample-efficient enough for real robots. Here, "off-policy experiences" refer to the cumulative experiences that the robot has collected thus far during training. In contrast, "on-policy experiences" are the most recent experiences that the robot has collected using its current policy. Humans leverage lifelong, cumulative experiences to learn quickly at present. We envision robots acquiring new skills more quickly by learning from off-policy (i.e., cumulative) experiences. Finally, many real-world tasks are related and share structures and knowledge that can be exploited to solve a new but similar task later. For example, humans can quickly learn to pick and place a new object after learning to pick and place many known objects. Meta-learning, explicitly utilizing this property, aims to learn a new but related task quickly if it has already learned several similar tasks in the past.

Figure 2: Pick-Pour-Place Robot Setup at Test Time. Given an RGB image from the top-down (black) or 45° camera (also black), the UR5-Seed robot is tasked to approach and pick-up the grey cylindrical bottle, pour the iron pebble already inside the bottle into a specific container on the table, and finally place the bottle back on the table.

With these motivations, we introduce SQUIRL, a meta-IRL algorithm that learns long-horizon tasks quickly and robustly by learning from 1) video demonstrations, 2) off-policy robot experiences, and 3) a set of related tasks. Fig.1 explains this algorithm using the example of a set of long-horizon pick-pour-place tasks, using the UR5-Seed robot setup (see www.seedrobotics.com/rh8d-dexterous-hand.html) shown in Fig.2.
In this task, we have the containers (green, yellow, orange, and red), a cylindrical bottle (grey), and an iron pebble inside the bottle. The robot needs to first approach and pick-up the grey bottle, pour the iron pebble inside the bottle into a specific container on the table, and then finally place the bottle back on the table, as shown in each row of images in Fig.1. At the beginning of each task, the bottle is not yet in hand, but the iron pebble is already in the bottle. At training time, the robot is given a single video demonstration for each of the 117 pick-pour-place tasks, as shown in the first two rows of images in Fig.1. Every new combination of container positions represents a different pick-pour-place task. Furthermore, the robot only needs to pour into one of the containers in a single task. Therefore, pouring into different containers also represents different tasks. After learning from these 117 demonstrations, the robot also practices 90 trial-and-errors on these tasks in total. From such a combination of expert and robot trajectories, the robot learns the general skills of pick-pour-place robustly. In all 117 training tasks, only two of the four containers appear on the table: the green and yellow containers, as shown in the first two rows of images in Fig.1. The orange and red containers are excluded during training and only appear at test time, as shown in the last row of images in Fig.1. We do so to evaluate our algorithm's generalizability to unseen containers at test time. As shown in the last row of images in Fig.1, the robot successfully pours into a new container (red) at test time, at a new position never seen before during training, without the need for any trials or practices.

To achieve such fast generalization to new tasks, our algorithm learns a task encoder network and a task-conditioned policy. The task encoder generates a 32-dimensional task embedding vector that encodes task-specific information. The policy network then learns to generalize to new tasks by accepting this task embedding vector as input, thus becoming "task-conditioned". During training, our algorithm first bootstraps learning by training both the task encoder and the policy jointly via the BC loss. The robot then collects 10 trials across 10 tasks using the warmed-up policy and the task encoder. Next, using the combined experiences of the expert and the robot, our algorithm bypasses reward learning by directly learning a task-conditioned Q-function. Using this Q-function, our algorithm then reuses and re-evaluates all cumulative experiences of the robot to improve the policy quickly. This cycle repeats until the $90^{th}$ trial. Finally, at test time, the task encoder generates a new task embedding from a single video demonstration of a new task. This embedding is then fed into the task-conditioned policy to solve the new task without any trial-and-errors and yet in a high-performance manner. In summary, our contributions are:

1. A robust meta-IRL algorithm that outperforms ($90\%$+ success) its behavioral cloning counterpart in real-robot and simulated vision-based long-horizon manipulation;
2. A novel Q-functioned IRL formulation that circumvents reward learning and improves IRL sample efficiency;
3. An efficient method that leverages off-policy robot experiences for training and requires no trials at test time;
4. A general approach that tackles various long-horizon robotic manipulation tasks and works with both vision and non-vision observations and different action spaces.

## II Related Work

### II-A Inverse Reinforcement Learning (IRL) and Meta-IRL

Inverse reinforcement learning (IRL) models another agent's (typically the expert's) reward function, given its policy or observed behavior. Previous works have approached the IRL problem with maximum margin methods [1][2] and maximum entropy methods [3][4][5]. In particular, maximum entropy methods recover a distribution of trajectories that have maximum entropy among all distributions and match the demonstrated policy's behaviors. While these methods have shown promising results in continuous control problems, they suffer from low sample efficiency due to the need for evaluating the robot's policy, which can be alleviated by meta-learning (i.e., meta-IRL). SMILe [6] and PEMIRL [7] are two meta-IRL algorithms based on AIRL [8] that leverage a distribution of tasks to learn a continuous task-embedding space to encode task information and achieve fast adaptation to a new but similar task. Our work differs from [6][7] in four crucial ways. First, our meta-IRL algorithm works with real robots and image observations. Second, instead of a reward function, we directly model a Q-function that the policy can optimize, in order to increase IRL sample efficiency. Third, we train the task encoder with the behavioral cloning (BC) gradient as opposed to the IRL gradient, for more stable and efficient learning. Lastly, we bootstrap policy and task encoder learning using BC before training via meta-IRL.

### II-B Real-robot Learning from Demonstrations (LfD)

Our work is related to real-robot LfD [9], such as [10][11][12]. In particular, [13] developed IRL on real robots without learning from raw pixels. Other works (e.g., [14][15][16][17]) used BC for real-robot LfD. Another work [18] developed goal-conditioned BC on a simulated robot to learn long-horizon tasks by playing with objects in the scene. While enjoying efficient learning by casting imitation learning into a supervised learning problem, BC suffers from the covariate shift between the training and test data. In comparison, IRL achieves robust performance by modeling the state-action joint distribution instead of the conditional action distribution in BC [19]. Different from previous works, our meta-IRL algorithm works on real-robot vision-based tasks, and its Q-functioned IRL policy gradient can be directly combined with the BC gradient signal to approach both the sample efficiency of BC and the robustness of IRL.

### II-C One-shot Meta-imitation Learning on Real Robots

Our algorithm attempts to enable robots to quickly and robustly imitate a single unseen video demonstration by learning from a distribution of tasks with shared structure, i.e., one-shot robot meta-imitation learning. For example, [20] combines gradient-based meta-learning and BC on a real robot to learn quickly from video demonstrations. [21] then extends [20] to enable robots to learn from human-arm demonstrations directly. [22] then improves [21] to meta-imitation-learn multi-stage real-robot visuomotor tasks in a hierarchical manner. However, constrained by the covariate shift problem of BC, these works show limited task performance (e.g., around $50\%$ success rate for the training tasks).
In contrast, our algorithm learns a vision-based manipulation task robustly ($90\%+$ success rates) and efficiently (117 videos, 90 trials) by utilizing the generalization ability of task embeddings [23] and a novel Q-functioned IRL formulation. ## III Preliminaries ### III-A Off-policy Reinforcement Learning via Soft Actor-Critic Standard RL models a task $\mathcal{M}$ as an MDP defined by a state space $\mathcal{S}$, an initial state distribution $\rho_{0}\in\Pi(\mathcal{S})$, an action space $\mathcal{A}$, a reward function $\mathcal{R}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$, a dynamics model $\mathcal{T}:\mathcal{S}\times\mathcal{A}\to\Pi(\mathcal{S})$, a discount factor $\gamma\in[0,1)$, and a horizon $H$. Here, $\Pi(\cdot)$ defines a probability distribution over a set. The robot acts according to a stochastic policy $\pi:\mathcal{S}\to\Pi(\mathcal{A})$, which specifies action probabilities for each $s$. Each policy $\pi$ has a corresponding $Q^{\pi}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ function that defines the expected discounted cumulative reward for taking an action $a$ from $s$ and following $\pi$ onward. Off-policy RL, particularly Soft Actor-Critic (SAC) [24], reuses historical experiences to improve learning sample efficiency by recovering a “soft” Q-function estimator $Q_{\theta}$. A policy can then be learned by minimizing the KL divergence between the policy distribution and the exponential-Q distribution: $\pi^{*}=\operatorname*{arg\,min}_{\pi\in\Pi}D_{KL}(\pi(a\mid s)\;\|\;\frac{\exp(Q^{\pi_{old}}_{\theta}(s,a))}{Z(s)})$ ### III-B Timestep-centric IRL as Adversarial Imitation Learning The purpose of IRL is to learn the energy function $f_{\theta}$ implicit in the provided expert demonstrations and use $f_{\theta}$ to learn a policy that robustly matches the expert performance. In particular, timestep-centric IRL aims to recover an energy function $f_{\theta}(s,a)$ that rationalizes and matches the demonstrated expert’s conditional action distribution: $p_{\theta}(a\mid s)=\frac{\exp(f_{\theta}(s,a))}{Z_{\theta}}\propto\exp(f_{\theta}(s,a))=\overline{p_{\theta}}(a\mid s)$, where $Z_{\theta}$ is the partition function, an integral over all possible actions given state $s$. In other words, IRL minimizes the KL divergence between the actual and predicted expert conditional action distributions: $\pi_{E}(a\mid s)$ and $p_{\theta}(a\mid s)$. Adversarial IRL [8][25] provides a sampling-based approximation to MaxEntIRL [4] in an adversarial manner. Specifically, AIRL [8] learns a generative policy $\pi_{\psi}$ and a binary discriminator $D_{\theta}$ derived from the energy function $f_{\theta}$: $\displaystyle D_{\theta}(s,a)=P((s,a)\text{ is generated by expert})$ $\displaystyle=\frac{\overline{p_{\theta}}(a\mid s)}{\overline{p_{\theta}}(a\mid s)+\pi_{\psi}(a\mid s)}=\frac{\exp(f_{\theta}(s,a))}{\exp(f_{\theta}(s,a))+\pi_{\psi}(a\mid s)}$ (1) and $\theta$ is trained to distinguish state-action pairs sampled from the expert vs. the policy, using the binary cross-entropy loss: $\mathcal{L}^{IRL}=-\mathbb{E}_{(s,a)\sim\pi_{\psi},\pi_{E}}[y(s,a)\log(D_{\theta}(s,a))+(1-y(s,a))\log(1-D_{\theta}(s,a))]$ (2) where $y(s,a)=\mathds{1}\\{(s,a)\text{ is generated by expert }\pi_{E}\\}$. Meanwhile, the policy is trained to maximize the MaxEntIRL objective [4], or equivalently, to match the expert’s state-action joint distribution via reverse KL divergence [19].
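To make Eqs. (1) and (2) concrete, here is a minimal PyTorch-style sketch of this discriminator update — our own illustration, not the authors’ released code. The network architecture and the helper `policy_log_prob` (returning $\log\pi_{\psi}(a\mid s)$) are assumptions; the key identity the sketch uses is that the logit of $D_{\theta}$ equals $f_{\theta}(s,a)-\log\pi_{\psi}(a\mid s)$, which is numerically more stable than forming Eq. (1) directly.

```python
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Illustrative energy function f_theta(s, a); the architecture is an assumption."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def discriminator_logits(f_theta, policy_log_prob, s, a):
    # Eq. (1): D = exp(f) / (exp(f) + pi(a|s)), hence logit(D) = f(s,a) - log pi(a|s).
    return f_theta(s, a) - policy_log_prob(s, a)

def irl_loss(f_theta, policy_log_prob, expert_batch, robot_batch):
    # Eq. (2): binary cross-entropy with expert pairs labeled 1 and robot pairs 0.
    bce = nn.BCEWithLogitsLoss()
    s_e, a_e = expert_batch
    s_r, a_r = robot_batch
    logits_e = discriminator_logits(f_theta, policy_log_prob, s_e, a_e)
    logits_r = discriminator_logits(f_theta, policy_log_prob, s_r, a_r)
    return (bce(logits_e, torch.ones_like(logits_e)) +
            bce(logits_r, torch.zeros_like(logits_r)))
```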
### III-C One-shot Meta-imitation Learning from A Single Video In one-shot meta-imitation learning, the robot is trained to solve a large number of tasks drawn from a task distribution $p(\mathcal{M})$. The total number of tasks in this task distribution can be finite or infinite. Each imitation task $\mathcal{M}_{train}^{i}$ consists of a single video demonstration $\mathcal{D}^{i}_{\pi_{E}}$. During training, the robot can also generate limited practice trajectories (e.g., 90). For example, in the Pick-Pour-Place experiment in Fig.1, the robot receives a single video demonstration for each of the 117 tasks. Each task is characterized by a different combination of container positions, or pouring into the green vs. the yellow container. At test time, the robot receives a single video of a new task $\mathcal{M}_{test}^{i}$ drawn from $p(\mathcal{M})$. For example, a new Pick-Pour-Place test task can be a new combination of container positions or pouring into a new container (e.g., the red or orange container). The robot then needs to solve this task on the first attempt, without trial-and-error. ### III-D Embedding-based Meta-learning Embedding-based meta-learning [7][23] learns a task-specific embedding vector $z$ that contains task-level abstraction to adapt to a new but related task quickly. This method aims to learn a task-conditioned policy $\pi(a|s,z)$ that maximizes task-conditioned expected returns: $\max_{\pi}\mathbb{E}_{(s_{t},a_{t})\sim\pi,\rho_{0}}[\sum_{t=1}^{T}r(s_{t},a_{t}|c)+\alpha\mathcal{H}(\pi(a_{t}|s_{t},c))]$, by learning an embedding space $Z$ that maximizes the mutual information between $z$ and the task context $c$. The goal is to make this learned embedding space generalizable to new tasks so that at test time, the policy can quickly adapt to unseen tasks with no or few practices. A key advantage of embedding-based meta-learning is the ability to learn from off-policy experiences. However, current methods have mostly, if not exclusively, been demonstrated on non-vision tasks in simulation. ## IV Mathematical Formulation for SQUIRL ### IV-A SQUIRL: Timestep-centric IRL as Soft Q-Learning Previous works in timestep-centric IRL such as [6][7][8] have interpreted the energy function $f_{\theta}$ in Section III-B as a reward function $r_{\theta}$ and later recover a Q or advantage function based on the reward $r_{\theta}$ for policy improvement. To improve IRL sample efficiency, we propose to bypass this reward learning and directly interpret $f_{\theta}(s,a)$ as the soft Q-function [24] $Q^{\pi_{mix}}_{\theta}(s,a)$. This soft Q-function models the expert’s behavior as maximizing both the Q-value and its entropy (i.e., randomness) simultaneously. It also encourages the robot to explore the real world to imitate the expert more robustly. Under this formulation, approximating the expert’s conditional action distribution is equivalent to recovering a soft Q-function under which the expert is soft Q-optimal: $\displaystyle\operatorname*{arg\,min}_{\theta}D_{KL}(\pi_{E}(a\mid s)\;\|\;p_{\theta}(a\mid s))$ $\displaystyle=$ $\displaystyle\operatorname*{arg\,max}_{\theta}\mathbb{E}_{a\sim\pi_{E}(a\mid s)}[Q^{\pi_{mix}}_{\theta}(s,a)]-\log Z_{\theta}$ (3) Eq.3 rationalizes the expert behavior intuitively because the expert should be optimal with respect to the cumulative reward [3], not the immediate reward. Here, $Q^{\pi_{mix}}_{\theta}$ is under a mixture policy $\pi_{mix}$ between the robot’s and expert’s policies.
### IV-B SQUIRL as Expert Imitation and Adversarial Learning Under SQUIRL, the policy learning objective (Eq.4) is also equivalent (derivations on website) to matching: 1) the exponential-Q distribution of the discriminator $\theta$ (Eq.5), 2) the generator’s objective in Generative Adversarial Networks (GANs) [26] (Eq.6), and 3) the joint state-action distribution of the expert [19] (Eq.7): $\pi^{*}=\operatorname*{arg\,min}_{\pi\in\Pi}\mathcal{L}^{RL}(\pi)$, where $\displaystyle\mathcal{L}^{RL}(\pi)=D_{KL}(\pi_{\psi}(a\mid s)\;\|\;\frac{\exp{Q^{\pi_{mix}}_{\theta}(s,a)}}{Z(s)})$ (4) $\displaystyle=D_{KL}(\pi_{\psi}(a\mid s)\;\|\;p_{\theta}(a\mid s))$ (5) $\displaystyle=\mathbb{E}_{(s,a)\sim\pi_{mix}}[\log(1-D_{\theta}(s,a))-\log(D_{\theta}(s,a))]$ (6) $\displaystyle=D_{KL}(\rho_{\pi_{\psi}}(s,a)\;\|\;\rho_{\pi_{E}}(s,a))$ (7) Meanwhile, the discriminator $\theta$ is matching its Q-function to the log-distribution of the expert’s conditional action distribution (Section III-B). Therefore, when this Q-function is optimal: $Q^{\pi_{mix}}_{\theta}=Q^{\pi_{mix}}_{\theta^{*}}$, the robot’s policy objective (Eq.4) is also matching the expert’s conditional action distribution: $\psi^{*}=\operatorname*{arg\,min}_{\psi}E_{\rho_{\pi_{mix}}(s)}[D_{KL}(\pi_{\psi}(a\mid s)\;\|\;\pi_{E}(a\mid s))]$ (8) ### IV-C Comparison to the Behavioral Cloning (BC) Objective While BC attempts to learn a policy that also matches the expert’s conditional action distribution [19], the fundamental difference is that the KL-divergence in BC’s case is computed under the expert’s narrow state distribution $\rho_{\pi_{E}}(s)$: $\psi_{BC}^{*}=\operatorname*{arg\,min}_{\psi}E_{\rho_{\pi_{E}}(s)}[D_{KL}(\pi_{E}(a\mid s)\;\|\;\pi_{\psi}(a\mid s))]$. In contrast, ours (Eq.8) is computed under $\rho_{\pi_{mix}}(s)$: the state distribution of the combined cumulative experience of the robot and the expert, which is a much wider distribution than the expert distribution. We hypothesize that this, along with matching the joint state-action distribution of the expert (Eq.7), makes our algorithm less susceptible to compounding errors than BC, as experimentally tested in Section VI.

Figure 3: SQUIRL: Soft Q-functioned Meta-IRL. To begin, our algorithm bootstraps learning for the policy (orange) and the task encoder (yellow) via behavioral cloning (the left third of Fig.3). Next, our algorithm uses the warmed-up policy and task encoder to generate 10 trials in the physical world (not in simulation). Using the combined expert and robot trajectories, our algorithm learns a task-conditioned soft Q-function (green) that rationalizes the expert’s behaviors as maximizing both cumulative reward and entropy (i.e., randomness). Using this Q-function, our algorithm then quickly improves the policy using all cumulative robot and expert timesteps. This cycle repeats until convergence, totaling 90 trials (the middle third of Fig.3). Finally, at test time (the right third of Fig.3), our algorithm generates a new embedding $z$ for the new task, and inputs this embedding into the task-conditioned policy to solve the new task without any practices.

## V SQUIRL: Soft Q-functioned Meta-IRL As shown in Fig.3, our algorithm learns three neural networks jointly – a task encoder (yellow), a task-conditioned policy (orange), and a task-conditioned soft Q-function (green): 1.
$\Psi_{\phi}(c)$: a task encoder that encodes a sampled batch of $C=64$ expert state-action pairs $c=\\{s^{i}_{1:C},a^{i}_{1:C}\\}$ from a task $i$ into a single 32-dim embedding vector $z^{i}\in\mathbb{R}^{32}$ (by computing the mean vector across the 64 embeddings) that enables generalization to new tasks. This batch of expert state-action pairs is randomly sampled and thus does not encode time information. Both the policy and the Q-function accept this embedding vector as input. 2. $\pi_{\psi}(s,z^{i})$: a task-conditioned policy the robot uses to perform a task $i$ given state $s$ and the task embedding vector $z^{i}\in\mathbb{R}^{32}$ outputted by the task encoder $\Psi_{\phi}(c)$. 3. $Q_{\theta}(s,a,z^{i})$: a task-conditioned soft Q-function used to train the policy $\pi_{\psi}(s,z^{i})$ to more robustly mimic the expert’s behavior for the robotic manipulation task $i$. To begin, the robot is given an expert trajectory of state-action pairs $\mathcal{D}_{\pi_{E}}$ for each of the 117 training tasks. The robot first uses these expert trajectories to bootstrap training for both its policy $\pi_{\psi}$ and the task encoder $\Psi_{\phi}$ via behavioral cloning (Eq.9). This way, the robot can distinguish the train tasks better and learn more quickly in the real world. Next, the robot generates 10 trials (state-action pairs) $\overline{\mathcal{D}}_{\pi_{\psi}}$ in the physical world (not simulation) using its warmed-up policy and task encoder. Then, the robot uses both the expert’s and its own state-action pairs to train a discriminator $\theta$. This discriminator classifies which state-action pairs come from the expert $\pi_{E}$ vs. the robot $\pi_{\psi}$. At first, the robot is distinctly worse than the expert at performing the tasks. This makes it easy for the discriminator to classify. By doing so, the discriminator learns a Q-function $Q^{\pi_{mix}}_{\theta}$ using Eq.3. Using the learned Q-function $Q^{\pi_{mix}}_{\theta}$, the robot trains its policy $\pi_{\psi}$ via Eq.4. Meanwhile, the robot also has the option to continue updating its task-conditioned policy and task encoder via behavioral cloning (Eq.9). Since training the policy via Eq.4 is equivalent to indirectly imitating the expert (Eq.7 and 8), as derived in Section IV-B, the trajectories generated by the policy gradually become more similar to the expert’s. This makes the state-action pairs more difficult for the discriminator to classify. This difficulty, in turn, forces the discriminator to learn a more precise Q-function, which then encourages the policy to mimic the expert even more closely. This cycle repeats until convergence (90 trials in total), at which point: 1) the policy matches the expert performance, 2) the task encoder learns to generalize to new tasks, and 3) the discriminator continues to struggle to distinguish state-action pairs correctly despite having learned an accurate Q-function. ### V-A Rationale for Bypassing Reward Learning via SQUIRL SQUIRL learns a Q-function without rewards because 1) the policy is ultimately trained by the Q-function, not rewards, so bypassing reward learning improves IRL sample efficiency, and 2) circumventing reward learning avoids off-policy Q-learning from a constantly changing reward function, which makes training easier and more stable empirically. ### V-B Architectures for Policy, Task Encoder, and Q-function For all non-vision tasks, we parameterize $\pi_{\psi},\Psi_{\phi},Q_{\theta}$ with five fully-connected (FC) layers.
For vision tasks, we use a 5-layer CNN followed by a spatial-softmax activation layer for the RGB image. This activation vector is then concatenated with the non-vision input vector and together passed through five FC layers. Our algorithm is general and works with many other network architectures, state, and action spaces. ### V-C Incorporating BC to Bootstrap and Accelerate Learning Since our algorithm’s IRL objective (Eq.8) is compatible with BC, as explained in Section IV-C, our algorithm can be jointly trained with BC to stabilize and accelerate learning without conflicting gradient issues (line 16 in Algorithm 1): $\mathcal{L}^{BC}=\mathbb{E}_{(s,a)\sim\pi_{E}}[\left\lVert\pi_{\psi}(s,\Psi_{\phi}(c))-a\right\rVert^{2}]$ (9) This, combined with the off-policy nature of our algorithm, also allows the robot to bootstrap learning by first “pre-training” via BC (Eq.9) using the expert demonstrations, before improving performance further via meta-IRL training.

Algorithm 1 SQUIRL: Soft Q-functioned Meta-IRL (Train)

Input: One expert video demonstration trajectory of state-action pairs $\mathcal{D}^{i}_{\pi_{E}}=\\{s^{i}_{1:H},a^{i}_{1:H}\\}$ for each of the $n$ training tasks $i=1:n$, where $H$ is the horizon of the task (e.g., $n=117,H=100$)

1: Initialize soft Q-function $Q_{\theta}$, policy $\pi_{\psi}$, task encoder $\Psi_{\phi}$, and an empty buffer of off-policy robot trajectories $\mathcal{D}^{i}_{\pi_{\psi}}\leftarrow\\{\\}$ for each training task $i=1:n$
2: Warm up the policy and task encoder via $\mathcal{L}^{BC}$ (Eq.9)
3: while not converged do
4:   Sample a batch of $m$ task indices $\\{i^{1:m}\\}$ from all training tasks $i=1:n$ (e.g., $m=10$)
5:   for $i=i^{1:m}$ do
6:     Infer task embedding $z^{i}\in\mathbb{R}^{\mathcal{Z}}\leftarrow\Psi_{\phi}(c)$, where $c=\\{s^{i}_{1:C},a^{i}_{1:C}\\}\sim\mathcal{D}^{i}_{\pi_{E}}$ (e.g., $\mathcal{Z}=32,C=64$)
7:     Generate a robot trajectory of state-action pairs $\overline{\mathcal{D}}^{i}_{\pi_{\psi}}=\\{s^{i}_{1:H},a^{i}_{1:H}\\}$ from task $i$ using $\pi_{\psi},z^{i}$
8:     $\mathcal{D}^{i}_{\pi_{\psi}}\leftarrow\mathcal{D}^{i}_{\pi_{\psi}}\cup\overline{\mathcal{D}}^{i}_{\pi_{\psi}}$
9:   end for
10:  for $j=1:J$ (e.g., $J=400$) do
11:    Sample another batch of $m$ task indices $\\{i^{1:m}\\}$
12:    $\theta\leftarrow\theta-\nabla_{\theta}\mathcal{L}^{IRL}$ (Eq.2) using a combined batch of $\mathcal{B}=128$ robot and expert timesteps: $\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\cup\overline{\mathcal{D}}^{i}_{\pi_{E}}$ and $z^{i}$, where $\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\sim\mathcal{D}^{i}_{\pi_{\psi}}$, $\overline{\mathcal{D}}^{i}_{\pi_{E}}\sim\mathcal{D}^{i}_{\pi_{E}}$, $i=\\{i^{1:m}\\}$
13:  end for
14:  for $k=1:K$ (e.g., $K=2000$) do
15:    Sample another batch of $m$ task indices $\\{i^{1:m}\\}$
16:    if necessary then $\\{\psi,\phi\\}\leftarrow\\{\psi,\phi\\}-\nabla_{\psi,\phi}\mathcal{L}^{BC}$ (Eq.9) using a batch of $\mathcal{B}$ expert timesteps $\overline{\mathcal{D}}^{i}_{\pi_{E}}\sim\mathcal{D}^{i}_{\pi_{E}},z^{i}$, $i=\\{i^{1:m}\\}$ end if
17:    $\psi\leftarrow\psi-\nabla_{\psi}\mathcal{L}^{RL}$ (Eq.4) using a combined batch of $\mathcal{B}$ robot and expert timesteps: $\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\cup\overline{\mathcal{D}}^{i}_{\pi_{E}}$ and $z^{i}$, where $\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\sim\mathcal{D}^{i}_{\pi_{\psi}}$, $\overline{\mathcal{D}}^{i}_{\pi_{E}}\sim\mathcal{D}^{i}_{\pi_{E}}$, $i=\\{i^{1:m}\\}$
18:  end for
19: end while
20: return soft Q-function $Q_{\theta}$, policy $\pi_{\psi}$, task encoder $\Psi_{\phi}$
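For concreteness, the following is a minimal PyTorch-style sketch of the two gradient steps in lines 16-17 of Algorithm 1 — the BC update of Eq. (9) and the soft-Q policy improvement of Eq. (4). This is our own illustration under stated assumptions: the policy is a reparameterized stochastic policy exposing the hypothetical helpers `rsample_with_log_prob` and `mean_action` (as in common SAC implementations), and `q_theta` is the task-conditioned soft Q-network.

```python
import torch

def policy_loss(policy, q_theta, states, z):
    """Eq. (4): KL(pi || exp(Q)/Z) reduces, up to the psi-independent log Z,
    to E_pi[log pi(a|s,z) - Q(s,a,z)], estimated with reparameterized samples."""
    actions, log_probs = policy.rsample_with_log_prob(states, z)  # assumed helper
    return (log_probs - q_theta(states, actions, z)).mean()

def bc_loss(policy, task_encoder, expert_states, expert_actions, context):
    """Eq. (9): squared error between the policy's action and the expert's,
    with the 32-dim task embedding inferred from expert context pairs."""
    z = task_encoder(context)
    predicted = policy.mean_action(expert_states, z)  # assumed helper
    return ((predicted - expert_actions) ** 2).sum(dim=-1).mean()

# One inner iteration of Algorithm 1, lines 16-17 (optimizers assumed):
# bc_opt.zero_grad(); bc_loss(...).backward(); bc_opt.step()
# pi_opt.zero_grad(); policy_loss(...).backward(); pi_opt.step()
```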
Algorithm 2 SQUIRL: Soft Q-functioned Meta-IRL (Test)

Input: $\pi_{\psi}$, $\Psi_{\phi}$, $Q_{\theta}$, and a single expert video demonstration of state-action pairs $\mathcal{D}^{i}_{\pi_{E}}=\\{s^{i}_{1:H},a^{i}_{1:H}\\}$ from a new task $i$ unseen during training

1: Infer the task embedding vector $z^{i}\in\mathbb{R}^{\mathcal{Z}}\leftarrow\Psi_{\phi}(c)$, where $c=\\{s^{i}_{1:C},a^{i}_{1:C}\\}\sim\mathcal{D}^{i}_{\pi_{E}}$ (e.g., $\mathcal{Z}=32,C=64$)
2: Roll out the robot trajectory in the real world using $\pi_{\psi}$, $z^{i}$

Figure 4: Pick-Carry-Drop Experiment (panels: Approach Box, Lower to Box, Grasp Box, Pick up Box, Carry Box, Drop Box). The robot needs to approach, lower to, grasp, pick up, carry, and drop the box to solve the task.

### V-D Using Expert Demonstration as Both the Input Task Context Variables and Training Signal for the Task Encoder Learning robust task embeddings enables robots to generalize to new tasks quickly [23]. To this end, our algorithm uses 64 expert timesteps as the input task context variable $c$ for the task encoder, as opposed to 64 robot timesteps. This is because context variables should explore the task and environment sufficiently well to expose the key information of the task, and expert demonstration timesteps are an ideal candidate compared to timesteps from the robot’s suboptimal policy. As a result, the context variable $c$ input into the task encoder only includes the states and actions of the expert, but not the rewards or the next states. In addition, we choose the BC loss $\mathcal{L}^{BC}$ in Eq.9 as the training loss for learning the task encoder $\Psi_{\phi}$. This BC loss is stable since the expert timesteps are fixed. In contrast, the IRL loss $\mathcal{L}^{IRL}$ (Eq.2) and the policy loss $\mathcal{L}^{RL}$ (Eq.4) are less stable because the training data distributions for both losses are non-stationary. This design choice also allows us to learn robust task embeddings first via BC pre-training before performing meta-IRL training via SQUIRL. We empirically observe that such pre-training can improve the training stability and the sample efficiency of SQUIRL, but the final policy performance is similar with or without BC pre-training. In summary, our algorithm is detailed in Algorithm 1 (train) and Algorithm 2 (test), with the following hyperparameters for Algorithms 1 and 2: policy gradient batch size $\mathcal{B}$: 1024 (non-vision), 128 (vision); task embedding batch size $C$: 64; all learning rates: $3e^{-4}$; starting SAC alpha: $1e^{-5}$; SAC target entropy: $-300$; IRL updates per epoch $J$: $400$; policy updates per epoch $K$: $2000$; task embedding size $\mathcal{Z}$: 32; meta-batch size $m$: 10; discount rate $\gamma$: 0.99. ## VI Experiments and Results Analysis We evaluate the generality and robustness of our algorithm across long-horizon vision and non-vision tasks with continuous state and action spaces, in both simulation (Pick-Carry-Drop, a horizon of 1024 timesteps, 30 train tasks) and the real world (Pick-Pour-Place, a horizon of 100 timesteps, 117 train tasks). There is only a single expert demonstration for each of the train or test tasks. We compare with the PEARL-BC baseline, which is the behavioral cloning version of PEARL [23]. Evaluation: We evaluate real-robot and simulation experiments on 50 and 500 trials respectively, across 50 seen and unseen tasks. We report means and standard deviations (“stdev” hereafter).
We consider the performance difference between two experiments statistically significant if the difference in means is at least as large as either experiment’s standard deviation. Experimental video is at http://crlab.cs.columbia.edu/squirl.

Figure 5: Pick-Pour-Place at Test Time (panels: Approach Bottle, Grasp Bottle, Carry Bottle, Pour Orange Cup, Carry Bottle, Place Bottle). To solve this task, the robot needs to first approach, grasp and carry the grey bottle, pour the iron pebble inside the bottle into a specific container, and carry and place the bottle back on the table. At the beginning of each task, the bottle is not in hand, but the iron pebble is already in the bottle. Top row: top-down camera images. Bottom row: 45° camera images.

### VI-A Simulation Experiment: Pick-Carry-Drop Description. We modify the planar Stacker task [27] to create “Pick-Carry-Drop”. As shown in Fig.4, a robot is tasked to approach, pick, carry, and drop the black box into the stack marked in green. The task is successful if the box is dropped into the stack within 1024 timesteps, and failed otherwise. State Space. We evaluate our algorithm on both the vision and the non-vision version of the task, to demonstrate that SQUIRL is general across different state space modalities. The state space for the vision version includes 1) the joint angles and velocities for its 5 DOFs, 2) a one-hot vector indicating the current stage of the task, and 3) an RGB image shown in Fig.4. The non-vision version’s state space replaces the RGB image with the position of the black box. Action Space. The robot controls its 5-DOF joint torques. Task Definition. There are a total of 30 training tasks in this experiment, each corresponding to a different drop location: $x\in\\{-0.15,-0.14,\ldots,0.14\\}$. During test time, we randomly sample a new, real-valued drop location from the maximum valid range: $x\in[-0.25,0.25]$. The green drop location is invisible in both the vision and the non-vision version of the task. Therefore, the robot needs to infer the green drop location (i.e., task information) solely from the provided expert video demonstration. On the other hand, the starting pose of the robot and the location of the black box are all initialized randomly at the beginning of each task. Robot Trials. The robot uses 150 training trials in total. Expert Demonstration. We trained an expert policy from scratch via RL to provide expert demonstrations. The reward function used to train the expert policy comprises six stages, each with a reward of 10. Designing this reward function took significant human effort, which highlights the value of directly learning from video demonstrations.

TABLE I: Pick-Carry-Drop Results (% Drop Success$\pm$Stdev)

Tasks | Vision Seen | Vision Unseen | Non-Vision Seen | Non-Vision Unseen
---|---|---|---|---
SQUIRL (BC + IRL) | 95.8$\pm$1.7 | 95.0$\pm$1.5 | 97.3$\pm$3.0 | 96.9$\pm$2.0
Baseline (PEARL-BC) | 77.8$\pm$1.6 | 76.5$\pm$0.7 | 90.8$\pm$2.5 | 89.5$\pm$1.6
Ablation (no BC joint training or BC pre-training): | | | |
SQUIRL (IRL Only) | 93.8$\pm$1.8 | 93.2$\pm$1.6 | 94.7$\pm$1.7 | 93.9$\pm$1.4

Simulation Results and Analysis. As shown in Table I, our algorithm, “SQUIRL (BC + IRL)”, pre-trains via BC and then trains the policy using both the BC loss (Eq.9) and the IRL policy gradient loss (Eq.4). It statistically significantly outperforms the PEARL-BC baseline in both the vision (95.8%$\pm$1.7 vs. 77.8%$\pm$1.6) and non-vision (97.3%$\pm$3.0 vs. 90.8%$\pm$2.5) versions of the task on seen tasks.
For unseen tasks, we observed similar outperformance (95.0%$\pm$1.5 vs. 76.5%$\pm$0.7 in the vision case and 96.9%$\pm$2.0 vs. 89.5%$\pm$1.6 in the non-vision case). Qualitatively, in PEARL-BC’s case, the robot sometimes misses the drop location as it attempts to drop the box, or fails to pick up the box when the box gets stuck against the walls of the stack (kindly see website). The performance drop of the baseline from the non-vision version (90.8%$\pm$2.5 and 89.5%$\pm$1.6 for seen and unseen tasks) to the vision version (77.8%$\pm$1.6 and 76.5%$\pm$0.7 for seen and unseen tasks) is mainly because vision-based manipulation tends to suffer from larger compounding errors. Nevertheless, as evident in the statistical similarities between seen and unseen tasks for SQUIRL (95.8%$\pm$1.7 vs. 95.0%$\pm$1.5 for vision) and PEARL-BC (77.8%$\pm$1.6 vs. 76.5%$\pm$0.7 for vision), both algorithms can generalize to unseen tasks, due to the generalizability of task embeddings. Ablation: IRL Gradient Only. To compare the performance contribution of SQUIRL’s core meta-IRL training procedure directly against PEARL-BC, we created “SQUIRL (IRL only)”, which trains the policy using only the policy gradient loss in Eq.4 (no BC joint training or pre-training). This ablated version still outperforms the PEARL-BC baseline (93.8%$\pm$1.8 vs. 77.8%$\pm$1.6 for seen vision tasks, 93.2%$\pm$1.6 vs. 76.5%$\pm$0.7 for unseen vision tasks). Nevertheless, by combining BC and IRL gradients, “SQUIRL (BC + IRL)” improves performance slightly further (95.8%$\pm$1.7 and 95.0%$\pm$1.5). Intuitively, while BC only matches the expert’s conditional action distribution under the expert’s state distribution, BC’s supervised learning signal is more stable than IRL’s. Joint training with BC and IRL gradients can be interpreted as combining the stability of BC and the robustness of Q-functioned IRL, by matching the conditional action distribution of the expert under the broader state distribution of the expert-robot mixture experience (Eq.8), in addition to matching the expert’s joint state-action distribution (Eq.7). ### VI-B Real-Robot Experiment: Pick-Pour-Place Description. We evaluated our algorithm on the UR5-Seed robot (Fig.2) on a set of long-horizon pick-pour-place tasks. As shown in Fig.2, in each task, there is a grey cylindrical bottle, an iron pebble that is already in the bottle, and more than one container on the table. The robot is tasked to approach and pick up the grey bottle, pour the iron pebble into a specific container, and place the bottle back on the table. The task is a success only if the pebble is poured into the correct container and the bottle is placed upright on the table within $H=100$ timesteps, and a failure otherwise. State Space. The state space contains a top-down or 45° camera’s RGB image (Fig.5), and two binary indicators for whether the robot has poured or closed the hand, respectively. Action Space. The action space includes the Cartesian unit directional vector for the end-effector movement. During each timestep, the robot can adjust the end-effector by 2cm along any 3D direction. The action space also includes a binary indicator to control the arm vs. the hand, and a trinary indicator to close, open, or rotate the hand for pouring. Orthogonality to State and Action Representations.
While Pick-Pour-Place can be tackled by first localizing the correct container via object detection (alternative state space) and then executing motion-planning trajectories to pour (alternative action space), our algorithm is general across, and orthogonal to, alternative state and action spaces. Task Definition. As shown in each row of images in Fig.1, each task is defined by the positions and colors of the containers, and by the correct container to pour into. Only the green and yellow containers appear in the 117 train tasks. 25 of the 50 test tasks have the green and yellow containers at new positions. The remaining 25 test tasks add the unseen red and orange containers, or one of the two. Since there is always more than one container in the RGB image, the robot will not know which container to pour into without the expert demonstration. Therefore, the robot needs to depend solely on the task encoder’s ability to extract the correct task information from the expert demonstration. Robot Trials. The robot collects 90 training trials in total. Expert Demonstration. We collect demonstrations via teleoperation using a Flock of Birds sensor (a 6D pose tracker from Ascension Technologies Corp.). Using the human wrist pose detected by the sensor in real time, we move, open, close, or rotate the robot hand for pouring. We collected $117$ video demonstrations across the 117 tasks for training. It takes 1-2 minutes to collect one demonstration.

TABLE II: Pick-Pour-Place Results (% Pour Success$\pm$Stdev)

Tasks | RGB Image | Seen | Unseen
---|---|---|---
SQUIRL (BC + IRL) | Top-Down (90°) | 92.0$\pm$4.5 | 90.0$\pm$7.1
Baseline (PEARL-BC) | Top-Down (90°) | 70.0$\pm$7.1 | 68.0$\pm$11.0
Baseline (Standard-BC) | Top-Down (90°) | 60.0$\pm$10.0 | 56.0$\pm$11.4
SQUIRL (BC + IRL) | $45\degree$ (Ablation) | 90.0$\pm$7.1 | 88.0$\pm$8.4

Real-robot Results and Analysis. As shown in Table II, our algorithm outperforms the PEARL-BC baseline statistically significantly on both seen tasks (92.0%$\pm$4.5 vs. 70.0%$\pm$7.1) and unseen tasks (90.0%$\pm$7.1 vs. 68.0%$\pm$11.0). This observed outperformance mainly originates from our soft Q-functioned IRL formulation, which forces the robot to imitate the expert under a much wider state distribution provided by the expert-robot mixture trajectories, instead of the narrow state distribution of the expert demonstrations. This helps reduce compounding errors during task execution. The low performance of the PEARL-BC baseline is mainly due to additional compounding errors induced by real-world sensory noise such as unstable lighting conditions and small perturbations to camera positions. Qualitatively, the PEARL-BC baseline sometimes pours into the wrong container, misses the target container by a few centimeters, or moves past the target container while failing to pour in time (kindly see website for examples). Nevertheless, from the statistical similarity between seen and unseen tasks for both our algorithm (92.0%$\pm$4.5 vs. 90.0%$\pm$7.1) and PEARL-BC (70.0%$\pm$7.1 vs. 68.0%$\pm$11.0), we see that the learned task encoder is still effectively generalizing to a new, related task. Comparison to the “Standard-BC” Baseline. We also compared to “Standard-BC” (60.0%$\pm$10.0 and 56.0%$\pm$11.4 for seen and unseen tasks), which performs no meta-learning and learns every train or test task independently from scratch via BC. As a result, the neural network overfits to the single demonstration and fails to generalize to real-world sensory (camera) noise at test time.
Note that Standard-BC’s performance on unseen tasks is slightly lower than on seen tasks, since the unseen tasks are more challenging, with up to 4 containers on the table compared to only 2 containers in seen tasks. Ablation: Non-top-down Camera. We also tested our algorithm with a $45\degree$ RGB image (90.0%$\pm$7.1 and 88.0%$\pm$8.4 for seen and unseen tasks) against a top-down RGB image (92.0%$\pm$4.5 and 90.0%$\pm$7.1 for seen and unseen tasks). The statistical similarity between the two shows that SQUIRL is general and can accept a non-top-down RGB input image. ## VII Conclusion We introduced SQUIRL, a robust, efficient, and general soft Q-functioned meta-IRL algorithm, towards enabling robots to learn from limited expert (one per task) and robot (90 in total) trajectories. This algorithm is statistically significantly more robust than behavioral cloning and requires no trial-and-error at test time. Finally, this general algorithm has been shown to work across various long-horizon manipulation tasks, and across vision and non-vision state and action spaces. In the future, we will extend this algorithm to learn from direct human-arm demonstrations instead of teleoperation. This will further lower the cost of collecting real-world expert demonstrations. We also aim to incorporate hierarchical learning into SQUIRL to solve much longer-horizon manipulation tasks by reusing low-level subpolicies. ## References * [1] P. Abbeel and A. Ng, “Apprenticeship learning via inverse reinforcement learning,” _International Conference on Machine Learning_ , 2004. * [2] N. Ratliff, J. A. Bagnell, and M. Zinkevich, “Maximum margin planning,” _International Conference on Machine Learning (ICML)_ , 2006. * [3] B. Ziebart, “Modeling purposeful adaptive behavior with the principle of maximum causal entropy,” _PhD Thesis_ , 2010. * [4] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey, “Maximum entropy inverse reinforcement learning,” in _Proc. AAAI_ , 2008. * [5] A. Boularias, J. Kober, and J. Peters, “Relative entropy inverse reinforcement learning,” in _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_ , ser. Proceedings of Machine Learning Research, vol. 15. PMLR, 11–13 Apr 2011. * [6] S. K. Seyed Ghasemipour, S. S. Gu, and R. Zemel, “SMILe: Scalable meta inverse reinforcement learning through context-conditional policies,” in _Advances in Neural Information Processing Systems_ , 2019. * [7] L. Yu, T. Yu, C. Finn, and S. Ermon, “Meta-inverse reinforcement learning with probabilistic context variables,” in _NeurIPS_ , 2019. * [8] J. Fu, K. Luo, and S. Levine, “Learning robust rewards with adverserial inverse reinforcement learning,” in _ICLR_ , 2018. * [9] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, “A survey of robot learning from demonstration,” _Robotics and Autonomous Systems_ , vol. 57, no. 5, pp. 469–483, 2009. * [10] D. Xu, S. Nair, Y. Zhu, J. Gao, A. Garg, L. Fei-Fei, and S. Savarese, “Neural task programming: Learning to generalize across hierarchical tasks,” in _International Conference on Robotics and Automation_ , 2018. * [11] D.-A. Huang, S. Nair, D. Xu, Y. Zhu, A. Garg, L. Fei-Fei, S. Savarese, and J. C. Niebles, “Neural task graphs: Generalizing to unseen tasks from a single video demonstration,” in _CVPR_ , 2019. * [12] D.-A. Huang, Y.-W. Chao, C. Paxton, X. Deng, L. Fei-Fei, J. C. Niebles, A. Garg, and D. Fox, “Motion reasoning for goal-based imitation learning,” in _ICRA_ , 2020. * [13] C. Finn, S. Levine, and P.
Abbeel, “Guided cost learning: Deep inverse optimal control via policy optimization,” in _ICML_ , 2016. * [14] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel, “Deep imitation learning for complex manipulation tasks from virtual reality teleoperation,” in _ICRA_ , 2018. * [15] J. Kober and J. Peters, “Imitation and reinforcement learning - practical algorithms for motor primitive learning in robotics,” _IEEE Robotics and Automation Magazine_ , vol. 17, no. 2, pp. 55–62, 2010. * [16] P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal, “Learning and generalization of motor skills by learning from demonstration,” in _International Conference on Robotics and Automation (ICRA)_ , 2009. * [17] P. Sermanet, C. Lynch, Y. Chebotar, J. Hsu, E. Jang, S. Schaal, S. Levine, and G. Brain, “Time-contrastive networks: Self-supervised learning from video,” in _ICRA_ , 2018. * [18] C. Lynch, M. Khansari, T. Xiao, V. Kumar, J. Tompson, S. Levine, and P. Sermanet, “Learning latent plans from play,” in _CoRL_ , 2019. * [19] S. K. S. Ghasemipour, R. Zemel, and S. Gu, “A divergence minimization perspective on imitation learning methods,” in _CoRL_ , 2019. * [20] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine, “One-shot visual imitation learning via meta-learning,” in _CoRL_ , 2017. * [21] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine, “One-shot imitation from observing humans via domain-adaptive meta-learning,” in _Robotics: Science and Systems (RSS)_ , 2018. * [22] T. Yu, P. Abbeel, S. Levine, and C. Finn, “One-shot hierarchical imitation learning of compound visuomotor tasks,” in _IROS_ , 2019. * [23] K. Rakelly, A. Zhou, D. Quillen, C. Finn, and S. Levine, “Efficient off-policy meta-reinforcement learning via probabilistic context variables,” _International Conference on Machine Learning (ICML)_ , 2019. * [24] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” _International Conference on Machine Learning (ICML)_ , 2018. * [25] J. Ho and S. Ermon, “Generative adversarial imitation learning,” in _Advances in neural information processing systems_ , 2016. * [26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in _Advances in neural information processing systems_ , 2014. * [27] Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. de Las Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, T. Lillicrap, and M. Riedmiller, “DeepMind control suite,” Tech. Rep., Jan. 2018.
2024-09-04T02:54:58.885703
2020-03-10T20:41:24
2003.04960
{ "authors": "Sanmit Narvekar and Bei Peng and Matteo Leonetti and Jivko Sinapov and\n Matthew E. Taylor and Peter Stone", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26145", "submitter": "Sanmit Narvekar", "url": "https://arxiv.org/abs/2003.04960" }
arxiv-papers
# Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey Sanmit Narvekar<EMAIL_ADDRESS> Department of Computer Science University of Texas at Austin Bei Peng<EMAIL_ADDRESS> Department of Computer Science University of Oxford Matteo Leonetti<EMAIL_ADDRESS> School of Computing University of Leeds Jivko Sinapov<EMAIL_ADDRESS> Department of Computer Science Tufts University Matthew E. Taylor<EMAIL_ADDRESS> Alberta Machine Intelligence Institute Department of Computing Science University of Alberta Peter Stone<EMAIL_ADDRESS> Department of Computer Science University of Texas at Austin and Sony AI ###### Abstract Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback. Despite many advances over the past three decades, learning in many domains still requires a large amount of interaction with the environment, which can be prohibitively expensive in realistic scenarios. To address this problem, transfer learning has been applied to reinforcement learning such that experience gained in one task can be leveraged when starting to learn the next, harder task. More recently, several lines of research have explored how tasks, or data samples themselves, can be sequenced into a _curriculum_ for the purpose of learning a problem that may otherwise be too difficult to learn from scratch. In this article, we present a framework for curriculum learning (CL) in reinforcement learning, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals. Finally, we use our framework to find open problems and suggest directions for future RL curriculum learning research. Keywords: curriculum learning, reinforcement learning, transfer learning ## 1 Introduction Curricula are ubiquitous throughout early human development, formal education, and life-long learning all the way to adulthood. Whether learning to play a sport, or learning to become an expert in mathematics, the training process is organized and structured so as to present new concepts and tasks in a sequence that leverages what has previously been learned. In a variety of human learning domains, the quality of the curricula has been shown to be crucial in achieving success. Curricula are also present in animal training, where it is commonly referred to as shaping (Skinner, 1958; Peterson, 2004). As a motivating example, consider the game of Quick Chess (shown in Figure 1), a game designed to introduce children to the full game of chess, by using a sequence of progressively more difficult “subgames.” For example, the first subgame is played on a 5x5 board with only pawns, where the player learns how pawns move, get promoted, and take other pieces. Next, in the second subgame, the king piece is added, which introduces a new objective: keeping the king alive. In each successive subgame, new elements are introduced (such as new pieces, a larger board, or different configurations) that require learning new skills and building upon knowledge learned in previous games. The final game is the full game of chess. The idea of using such curricula to train artificial agents dates back to the early 1990s, where the first known applications were to grammar learning (Elman, 1993; Rohde and Plaut, 1999), robotics control problems (Sanger, 1994), and classification problems (Bengio et al., 2009). 
Results showed that the order of training examples matters and that generally, incremental learning algorithms can benefit when training examples are ordered in increasing difficulty. The main conclusion from these and subsequent works in curriculum learning is that starting small and simple and gradually increasing the difficulty of the task can lead to faster convergence as well as increased performance on a task. Recently, research in reinforcement learning (RL) (Sutton and Barto, 1998) has been exploring how agents can leverage transfer learning (Lazaric et al., 2008; Taylor and Stone, 2009) to re-use knowledge learned from a source task when attempting to learn a subsequent target task. As knowledge is transferred from one task to the next, the sequence of tasks induces a curriculum, which has been shown to improve performance on a difficult problem and/or reduce the time it takes to converge to an optimal policy. Figure 1: Different subgames in the game of Quick Chess, which are used to form a curriculum for learning the full game of Chess. Many groups have been studying how such a curriculum can be generated automatically to train reinforcement learning agents, and many approaches to do so now exist. However, what exactly constitutes a curriculum and what precisely qualifies an approach as being an example of curriculum learning is not clearly and consistently defined in the literature. There are many ways of defining a curriculum: for example, the most common way is as an ordering of tasks. At a more fundamental level, a curriculum can also be defined as an ordering of individual experience samples. In addition, a curriculum does not necessarily have to be a simple linear sequence. One task can build upon knowledge gained from multiple source tasks, just as courses in human education can build off of multiple prerequisites. Methods for curriculum generation have separately been introduced for areas such as robotics, multi-agent systems, human-computer and human-robot interaction, and intrinsically motivated learning. This body of work, however, is largely disconnected. In addition, many landmark results in reinforcement learning, from TD-Gammon (Tesauro, 1995) to AlphaGo (Silver et al., 2016) have implicitly used curricula to guide training. In some domains, researchers have successfully used methodologies that align with our definition of curriculum learning without explicitly describing it that way (e.g., self-play). Given the many landmark results that have utilized ideas from curriculum learning, we think it is very likely that future landmark results will also rely on curricula, perhaps more so than researchers currently expect. Thus, having a common basis for discussion of ideas in this area is likely to be useful for future AI challenges. ### Overview The goal of this article is to provide a systematic overview of curriculum learning (CL) in RL settings and to provide an over-arching framework to formalize this class of methods. We aim to define classification criteria for computational models of curriculum learning for RL agents, that describe the curriculum learning research landscape over a broad range of frameworks and settings. The questions we address in this survey include: * • What is a _curriculum_ , and how can it be represented for reinforcement learning tasks? At the most basic level, a curriculum can be thought of as an ordering over experience samples. 
However, it can also be represented at the task level, where a set of tasks can be organized into a sequence or a directed acyclic graph that specifies the order in which they should be learned. We address this question in detail in Section 3.1. * • What is the _curriculum learning_ method, and how can such methods be evaluated? We formalize this class of methods in Section 3.2 as consisting of three parts, and extend metrics commonly used in transfer learning (introduced in Section 2) to the curriculum setting to facilitate evaluation in Section 3.3. * • How can tasks be constructed for use in a curriculum? The quality of a curriculum is dependent on the quality of tasks available to select from. Tasks can either be generated in advance, or dynamically and on-the-fly with the curriculum. Section 4.1 surveys works that examine how to automatically generate good intermediate tasks. * • How can tasks or experience samples be sequenced into a curriculum? In practice, most curricula for RL agents have been manually generated for each problem. However, in recent years, automated methods for curriculum sequencing have been proposed. Each makes different assumptions about the tasks and transfer methodology used. In Section 4.2, we survey these different automated approaches, as well as describe how humans have approached curriculum generation for RL agents. * • How can an agent transfer knowledge between tasks as it learns through a curriculum? Curriculum learning approaches make use of transfer learning methods when moving from one task to another. Since the tasks in the curriculum can vary in state/action space, transition function, or reward function, it’s important to transfer relevant and reusable information from each task, and effectively combine information from multiple tasks. Methods to do this are enumerated and discussed in Section 4.3. The next section provides background in reinforcement learning and transfer learning. In Section 3, we define the curriculum learning method, evaluation metrics, and the dimensions along which we will classify curriculum learning approaches. Section 4, which comprises the core of the survey, provides a detailed overview of the existing state of the art in curriculum learning in RL, with each subsection considering a different component of the overall curriculum learning approach. Section 5 discusses paradigms related to curriculum learning for RL, such as curriculum learning for supervised learning and for human education. Finally, in Section 6, we identify gaps in the existing literature, outline the limitations of existing CL methods and frameworks, and provide a list of open problems. ## 2 Background In this section, we provide background on Reinforcement Learning (RL) and Transfer Learning (TL). ### 2.1 Reinforcement Learning Reinforcement learning considers the problem of how an agent should act in its environment over time, so as to maximize some scalar reward signal. We can formalize the interaction of an agent with its environment (also called a _task_) as a Markov Decision Process (MDP). In this article, we restrict our attention to _episodic_ MDPs (in continuing tasks, a discount factor $\gamma$ is often included; for simplicity, and because tasks typically terminate in curriculum learning settings, we present the undiscounted case, but unless otherwise noted, our definitions and discussions can easily apply to the discounted case as well):
###### Definition 1 An episodic MDP $M$ is a 6-tuple $(\mathcal{S},\mathcal{A},p,r,\Delta s_{0},\mathcal{S}_{f})$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $p(s^{\prime}|s,a)$ is a transition function that gives the probability of transitioning to state $s^{\prime}$ after taking action $a$ in state $s$, and $r(s,a,s^{\prime})$ is a reward function that gives the immediate reward for taking action $a$ in state $s$ and transitioning to state $s^{\prime}$. In addition, we shall use $\Delta s_{0}$ to denote the initial state distribution, and $\mathcal{S}_{f}$ to denote the set of terminal states. We consider time in discrete time steps. At each time step $t$, the agent observes its state and chooses an action according to its _policy_ $\pi(a|s)$. The goal of the agent is to learn an _optimal policy_ $\pi^{*}$, which maximizes the expected _return_ $G_{t}$ (the cumulative sum of rewards $R$) until the episode ends at timestep $T$: $G_{t}=\sum_{i=1}^{T-t}R_{t+i}$ There are three main classes of methods to learn $\pi^{*}$: value function approaches, policy search approaches, and actor-critic methods. In _value function approaches_ , a value $v_{\pi}(s)$ is first learned for each state $s$, representing the expected return achievable from $s$ by following policy $\pi$. Through policy evaluation and policy improvement, this value function is used to derive a policy better than $\pi$, until convergence towards an optimal policy. Using a value function in this process requires a model of the reward and transition functions of the environment. If the model is not known, one option is to learn an action-value function instead, $q_{\pi}(s,a)$, which gives the expected return for taking action $a$ in state $s$ and following $\pi$ after: $q_{\pi}(s,a)=\sum_{s^{\prime}}p(s^{\prime}|s,a)[r(s,a,s^{\prime})+q_{\pi}(s^{\prime},a^{\prime})]\textrm{ , where }a^{\prime}\sim\pi(\cdot|s^{\prime})$ The action-value function can be iteratively improved towards the optimal action-value function $q_{*}$ with on-policy methods such as SARSA (Sutton and Barto, 1998). The optimal action-value function can also be learned directly with off-policy methods such as $Q$-learning (Watkins and Dayan, 1992). An optimal policy can then be obtained by choosing action $\text{argmax}_{a}q_{*}(s,a)$ in each state. If the state space is large or continuous, the action-value function can instead be estimated using a function approximator (such as a neural network), $q(s,a;\bm{w})\approx q_{*}(s,a)$, where $\bm{w}$ are the weights of the network. In contrast, _policy search methods_ directly search for or learn a parameterized policy $\pi_{\bm{\theta}}(a|s)$, without using an intermediary value function. Typically, the parameter $\bm{\theta}$ is modified using search or optimization techniques to maximize some performance measure $J(\bm{\theta})$. For example, in the episodic case, $J(\bm{\theta})$ could correspond to the expected value of the policy parameterized by $\bm{\theta}$ from the starting state $s_{0}\sim\Delta s_{0}$: $v_{\pi_{\theta}}(s_{0})$. A third class of methods, _actor-critic methods_ , maintain a parameterized representation of both the current policy and value function. The actor is a parameterized policy that dictates how the agent selects actions. The critic estimates the (action-)value function for the actor using a policy evaluation method such as temporal-difference learning. The actor then updates the policy parameter in the direction suggested by the critic. 
An example of actor-critic methods is Deterministic Policy Gradient (Silver et al., 2014). ### 2.2 Transfer Learning In the standard reinforcement learning setting, an agent usually starts with a random policy, and directly attempts to learn an optimal policy for the target task. When the target task is difficult, for example due to adversarial agents, poor state representation, or sparse reward signals, learning can be very slow. Transfer learning is one class of methods and area of research that seeks to speed up training of RL agents. The idea behind transfer learning is that instead of learning on the _target task_ tabula rasa, the agent can first train on one or more _source task_ MDPs, and _transfer_ the knowledge acquired to aid in solving the target. This knowledge can take the form of samples (Lazaric et al., 2008; Lazaric and Restelli, 2011), options (Soni and Singh, 2006), policies (Fernández et al., 2010), models (Fachantidis et al., 2013), or value functions (Taylor and Stone, 2005). As an example, in value function transfer (Taylor et al., 2007), the parameters of an action-value function $q_{source}(s,a)$ learned in a source task are used to initialize the action- value function in the target task $q_{target}(s,a)$. This biases exploration and action selection in the target task based on experience acquired in the source task. Some of these methods assume that the source and target MDPs either share state and action spaces, or that a _task mapping_ (Taylor et al., 2007) is available to map states and actions in the target task to known states and actions in the source. Such mappings can be specified by hand, or learned automatically (Taylor et al., 2008; Ammar et al., 2015). Other methods assume the transition or reward functions do not change between tasks. The best method to use varies by domain, and depends on the relationship between source and target tasks. Finally, while most methods assume that knowledge is transferred from one source task to one target task, some methods have been proposed to transfer knowledge from several source tasks directly to a single target (Svetlik et al., 2017). See Taylor and Stone (2009) or Lazaric (2012) for a survey of transfer learning techniques. ### 2.3 Evaluation Metrics for Transfer Learning There are several metrics to quantify the benefit of transferring from a source task to a target task (Taylor and Stone, 2009). Typically, they compare the learning trajectory on the target task for an agent after transfer, with an agent that learns directly on the target task from scratch (see Figure 2a). One metric is _time to threshold_ , which computes how much faster an agent can learn a policy that achieves expected return $G_{0}\geq\delta$ on the target task if it transfers knowledge, as opposed to learning the target from scratch, where $\delta$ is some desired performance threshold. Time can be measured in terms of CPU time, wall clock time, episodes, or number of actions taken. Another metric is _asymptotic performance_ , which compares the final performance after convergence in the target task of learners when using transfer versus no transfer. The _jumpstart_ metric instead measures the initial performance increase on the target task as a result of transfer. Finally, the _total reward_ ratio compares the total reward accumulated by the agent during training up to a fixed stopping point, using transfer versus not using transfer. 
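To ground the value function transfer example and the time-to-threshold metric above, here is a small illustrative sketch — ours, not code from the cited papers — of tabular Q-learning in which the target task’s Q-table is initialized from a source task’s table. It assumes the two tasks share state and action spaces, and it assumes a simple environment interface (`env.reset()`, `env.step()`, and the counts `n_states`/`n_actions` are our own names); time to threshold is measured, as a simplification, by the first training episode whose return reaches $\delta$.

```python
import numpy as np

def q_learning(env, episodes, alpha=0.1, epsilon=0.1, q_init=None, threshold=None):
    """Tabular Q-learning for an episodic (undiscounted) task. If q_init is a
    Q-table learned on a source task, it initializes the target task's table
    (value function transfer); otherwise learning starts from scratch."""
    q = np.zeros((env.n_states, env.n_actions)) if q_init is None else q_init.copy()
    time_to_threshold = None
    for ep in range(episodes):
        s, done, ret = env.reset(), False, 0.0
        while not done:
            # epsilon-greedy action selection
            a = (np.random.randint(env.n_actions)
                 if np.random.rand() < epsilon else int(q[s].argmax()))
            s2, r, done = env.step(a)
            target = r if done else r + q[s2].max()  # undiscounted episodic case
            q[s, a] += alpha * (target - q[s, a])
            s, ret = s2, ret + r
        if threshold is not None and ret >= threshold and time_to_threshold is None:
            time_to_threshold = ep  # episodes until return first reaches delta
    return q, time_to_threshold

# Weak-transfer comparison (source-task training treated as a sunk cost):
# q_src, _ = q_learning(source_env, 500)
# _, t_transfer = q_learning(target_env, 500, q_init=q_src, threshold=delta)
# _, t_scratch  = q_learning(target_env, 500, threshold=delta)
```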
Figure 2: Performance metrics for transfer learning using (a) weak transfer and (b) strong transfer with offset curves.

An important evaluation question is whether to include time spent _learning in source tasks_ in the cost of using transfer. The transfer curve in Figure 2a shows performance on the target task, and starts at time 0, even though time has already been spent learning one or more source tasks. Thus, it does not reflect time spent training in source tasks before transferring to the target task. This is known in transfer learning as the _weak transfer_ setting, where time spent training in source tasks is treated as a sunk cost. On the other hand, in the _strong transfer_ setting, the learning curves must account for time spent in all source tasks. One way to do this is to offset the curves to reflect time spent in source tasks, as shown in Figure 2b. Another option is to freeze the policy while learning on source tasks, and plot that policy’s performance on the target task. ## 3 The Curriculum Learning Method A _curriculum_ serves to sort the experience an agent acquires over time, in order to accelerate or improve learning. In the rest of this section we formalize this concept and the methodology of _curriculum learning_ , and describe how to evaluate the benefits and costs of using a curriculum. Finally, we provide a list of attributes which we will use to categorize curriculum learning approaches in the rest of this survey. ### 3.1 Curricula A curriculum is a general concept that encompasses both schedules for organizing past experiences, and schedules for acquiring experience by training on tasks. As such, we first propose a fully general definition of curriculum, and then follow it with refinements that apply to special cases common in the literature. We assume a _task_ is modeled as a Markov Decision Process, and define a curriculum as follows: ###### Definition 2 (Curriculum) Let $\mathcal{T}$ be a set of tasks, where $m_{i}=(\mathcal{S}_{i},\mathcal{A}_{i},p_{i},r_{i})$ is a task in $\mathcal{T}$. Let $\mathcal{D}^{\mathcal{T}}$ be the set of all possible transition samples from tasks in $\mathcal{T}$: $\mathcal{D}^{\mathcal{T}}=\\{(s,a,r,s^{\prime})\>|\>\exists\,m_{i}\in\mathcal{T}\;\mathrm{s.t.}\;s\in\mathcal{S}_{i},a\in\mathcal{A}_{i},s^{\prime}\sim p_{i}(\cdot|s,a),r\leftarrow r_{i}(s,a,s^{\prime})\\}$. A _curriculum_ $C=(\mathcal{V},\mathcal{E},g,\mathcal{T})$ is a directed acyclic graph, where $\mathcal{V}$ is the set of vertices, $\mathcal{E}\subseteq\\{(x,y)\;|\;(x,y)\in\mathcal{V}\times\mathcal{V}\>\land x\neq y\\}$ is the set of directed edges, and $g:\mathcal{V}\to\mathcal{P}(\mathcal{D}^{\mathcal{T}})$ is a function that associates vertices to subsets of samples in $\mathcal{D}^{\mathcal{T}}$, where $\mathcal{P}(\mathcal{D}^{\mathcal{T}})$ is the power set of $\mathcal{D}^{\mathcal{T}}$. A directed edge $\langle v_{j},v_{k}\rangle$ in $C$ indicates that samples associated with $v_{j}\in\mathcal{V}$ should be trained on before samples associated with $v_{k}\in\mathcal{V}$. All paths terminate on a single sink node $v_{t}\in\mathcal{V}$. (In theory, a curriculum could have multiple sink nodes corresponding to different target tasks; for the purpose of exposition, we assume a separate curriculum is created and used for each task.)
A curriculum can be created online, where edges are added dynamically based on the learning progress of the agent on the samples at a given vertex. It can also be designed completely offline, where the graph is generated before training, and edges are selected based on properties of the samples associated with different vertices.

Creating a curriculum graph at the sample level can be computationally difficult for large tasks, or large sets of tasks. Therefore, in practice, a simplified representation for a curriculum is often used. There are 3 common dimensions along which this simplification can happen. The first is the single-task curriculum, where all samples used in the curriculum come from a single task:

###### Definition 3 (Single-task Curriculum)

A _single-task curriculum_ is a curriculum $C$ where the cardinality of the set of tasks considered for extracting samples $|\mathcal{T}|=1$, and consists of only the target task $m_{t}$.

A single-task curriculum essentially considers how best to organize and train on experience acquired from a single task. This type of curriculum is common in experience replay methods (Schaul et al., 2016).

A second common simplification is to learn a curriculum at the task level, where each vertex in the graph is associated with samples from a single task. At the task level, a curriculum can be defined as a directed acyclic graph of _intermediate_ tasks:

###### Definition 4 (Task-level Curriculum)

For each task $m_{i}\in\mathcal{T}$, let $\mathcal{D}^{\mathcal{T}}_{i}$ be the set of all samples associated with task $m_{i}$: $\mathcal{D}^{\mathcal{T}}_{i}=\{(s,a,r,s^{\prime})\mid s\in\mathcal{S}_{i},a\in\mathcal{A}_{i},s^{\prime}\sim p_{i}(\cdot|s,a),r\leftarrow r_{i}(s,a,s^{\prime})\}$. A _task-level curriculum_ is a curriculum $C=(\mathcal{V},\mathcal{E},g,\mathcal{T})$ where each vertex is associated with samples from a single task in $\mathcal{T}$. Thus, the mapping function $g$ is defined as $g:\mathcal{V}\to\{\mathcal{D}^{\mathcal{T}}_{i}\mid m_{i}\in\mathcal{T}\}$.

In reinforcement learning, the entire set of samples from a task (or multiple tasks) is usually not available ahead of time. Instead, the samples experienced in a task depend on the agent’s behavior policy, which can be influenced by previous tasks learned. Therefore, while generating a task-level curriculum, the main challenge is how to order tasks such that the behavior policy learned is useful for acquiring good samples in future tasks. In other words, selecting and training on a task $m$ induces a mapping function $g$, and determines the set of samples $\mathcal{D}_{i}^{\mathcal{T}}$ that will be available at the next vertex based on the agent’s behavior policy as a result of learning $m$. The same task is allowed to appear at more than one vertex, similar to how in Definition 2 the same set of samples can be associated with more than one vertex. Therefore, tasks can be revisited when the agent’s behavior policy has changed. Several works have considered learning task-level curricula over a graph of tasks (Svetlik et al., 2017; MacAlpine and Stone, 2018). An example can be seen in Figure 3b.

Finally, another simplification of the curriculum is the linear _sequence_. This is the simplest and most common structure for a curriculum in existing work:

###### Definition 5 (Sequence Curriculum)

A _sequence curriculum_ is a curriculum $C$ where the indegree and outdegree of each vertex $v$ in the graph $C$ is at most 1, and there is exactly one source node and one sink node.
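The sequence simplification is easy to check mechanically. The sketch below (names are ours) verifies the conditions of Definition 5 on a vertex/edge representation such as the one above:

```python
def is_sequence_curriculum(vertices, edges):
    """Definition 5: every vertex has indegree and outdegree at most 1,
    with exactly one source and one sink. Acyclicity is assumed to hold
    already, as required by Definition 2."""
    indeg = {v: 0 for v in vertices}
    outdeg = {v: 0 for v in vertices}
    for (u, v) in edges:
        outdeg[u] += 1
        indeg[v] += 1
    if any(d > 1 for d in indeg.values()) or any(d > 1 for d in outdeg.values()):
        return False
    sources = [v for v in vertices if indeg[v] == 0]
    sinks = [v for v in vertices if outdeg[v] == 0]
    return len(sources) == 1 and len(sinks) == 1
```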
These simplifications can be combined to simplify a curriculum along multiple dimensions. For example, the sequence simplification and task-level simplification can be combined to produce a task-level sequence curriculum. This type of curriculum can be represented as an ordered list of tasks $[m_{1},m_{2},\ldots,m_{n}]$. An example can be seen in Figure 3a (Narvekar et al., 2017).

A final important question when designing curricula is determining the stopping criteria: that is, how to decide _when_ to stop training on samples or tasks associated with a vertex, and move on to the next vertex. In practice, training is typically stopped when performance on the task or set of samples has converged. Training to convergence is not always necessary, so another option is to train on each vertex for a fixed number of episodes or epochs. Since more than one vertex can be associated with the same samples/tasks, this experience can be revisited later on in the curriculum.

### 3.2 Curriculum Learning

_Curriculum learning_ is a methodology to _optimize_ the order in which experience is accumulated by the agent, so as to increase performance or training speed on a set of final tasks. Through generalization, knowledge acquired quickly in simple tasks can be leveraged to reduce the exploration of more complex tasks. In the most general case, where the agent can acquire experience from multiple intermediate tasks that differ from the final MDP, there are 3 key elements to this method:

* Task Generation. The quality of a curriculum is dependent on the quality of tasks available to choose from. Task generation is the process of creating a good set of intermediate tasks from which to obtain experience samples. In a task-level curriculum, these tasks form the nodes of the curriculum graph. This set of intermediate tasks may either be pre-specified, or dynamically generated during the curriculum construction by observing the agent.

* Sequencing. Sequencing examines how to create a partial ordering over the set of experience samples $\mathcal{D}$: that is, how to generate the edges of the curriculum graph. Most existing work has used manually defined curricula, where a human selects the ordering of samples or tasks. However, recently automated methods for curriculum sequencing have begun to be explored. Each of these methods makes different assumptions about the tasks and transfer methodology used. These methods will be the primary focus of this survey.

* Transfer Learning. When creating a curriculum using multiple tasks, the intermediate tasks may differ in state/action space, reward function, or transition function from the final task. Therefore, transfer learning is needed to extract and pass on reusable knowledge acquired in one task to the next. Typically, work in transfer learning has examined how to transfer knowledge from one or more source tasks directly to the target task. Curriculum learning extends the transfer learning scenario to consider training sessions in which the agent must repeatedly transfer knowledge from one task to another, up to a set of final tasks.

[Figure omitted: two panels, (a) img/sequences.png and (b) img/dag.png.]

Figure 3: Examples of structures of curricula from previous work. (a) Linear sequences in a gridworld domain (Narvekar et al., 2017). (b) Directed acyclic graphs in Block Dude (Svetlik et al., 2017).
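Putting the three elements above together, the overall method can be summarized as a simple driver loop. The sketch below is purely schematic; all callables are placeholders for the concrete approaches surveyed in Section 4:

```python
def run_curriculum_learning(target_task, agent, generate_tasks, sequence,
                            transfer, reached_threshold):
    """Schematic driver loop tying the three elements together. The callables
    stand in for concrete approaches surveyed later: `generate_tasks`
    (Section 4.1), `sequence` (Section 4.2), and `transfer` (Section 4.3)."""
    tasks = generate_tasks(target_task)            # 1. task generation
    knowledge = None
    while not reached_threshold(agent, target_task):
        task = sequence(tasks, agent, target_task) # 2. sequencing: pick next task
        agent.initialize_from(transfer(knowledge, task))  # 3. transfer in
        knowledge = agent.train(task)              # train until stopping criterion
    return agent
```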
### 3.3 Evaluating Curricula

Curricula can be evaluated using the same metrics as for transfer learning (cf. Section 2.3), by comparing performance on the target task after following the complete curriculum, versus performance following no curriculum (i.e., learning from scratch). If there are multiple final tasks, the metrics can easily be extended: for example, by comparing the average asymptotic performance over a set of tasks, or the average time to reach a threshold performance level over a set of tasks. Similarly, it is possible to distinguish between weak and strong transfer. However, in curriculum learning, there is the additional expense required to _build_ the curriculum itself, in addition to training on intermediate tasks in the curriculum, which can also be factored in when evaluating the cost of the curriculum. As in the transfer learning case, cost can be measured in terms of wall clock time, or data/sample complexity.

Most existing applications of curricula in reinforcement learning have used curricula created by humans. In these cases, it can be difficult to assess how much time, effort, and prior knowledge was used to design the curriculum. Automated approaches to generate a curriculum also typically require some prior knowledge or experience in potential intermediate tasks, in order to guide the sequencing of tasks. Due to these difficulties, these approaches have usually treated curriculum generation as a sunk cost, focusing on evaluating the performance of the curriculum itself, and comparing it versus other curricula, including those designed by people.

The best set of evaluation criteria to use ultimately depends on the specific problem and settings being considered. For example, how expensive is it to collect data on the final task compared to intermediate tasks? If intermediate tasks are relatively inexpensive, we can treat time spent in them as sunk costs. Is it more critical to improve initial performance, final performance, or reaching a desired performance threshold? If designing the curriculum will require human interaction, how will this time be factored into the cost of using a curriculum? Many of these questions depend on whether we wish to evaluate the utility of a specific curriculum (compared to another curriculum), or whether we wish to evaluate the utility of using a curriculum design approach versus training without one.

### 3.4 Dimensions of Categorization

We categorize curriculum learning approaches along the following seven dimensions, organized by attributes (in bold) and the values (in italics) they can take. We use these dimensions to create a taxonomy of surveyed work in Section 4.

1. **Intermediate task generation**: _target / automatic / domain experts / naive users_. In curriculum learning, the primary challenge is how to sequence a set of tasks to improve learning speed. However, finding a good curriculum depends on first having useful source tasks to select from. Most methods assume the set of possible source tasks is fixed and given ahead of time. In the simplest case, only samples from the _target_ task are used. When more than one intermediate task is used, typically they are manually designed by humans. We distinguish such tasks as designed by either _domain experts_, who have knowledge of the agent and its learning algorithm, or _naive users_, who do not have this information. On the other hand, some works consider _automatically_ creating tasks online using a set of rules or generative process.
These approaches may still rely on some human input to control/tune hyper-parameters, such as the number of tasks generated, or to verify that generated tasks are actually solvable.

2. **Curriculum representation**: _single / sequence / graph_. As we discussed previously, the most general form of a curriculum is a directed acyclic graph over subsets of samples. However, in practice, simplified versions of this representation are often used. In the simplest case, a curriculum is an ordering over samples from a _single_ task. When multiple tasks can be used in a curriculum, curricula are often created at the task level. These curricula can be represented as a linear chain, or _sequence_. In this case, there is exactly one source for each intermediate task in the curriculum. It is up to the transfer learning algorithm to appropriately retain and combine information gathered from previous tasks in the chain. More generally, they can be represented as a full directed acyclic _graph_ of tasks. This form supports transfer learning methods that transfer from many-to-one, one-to-many, and many-to-many tasks.

3. **Transfer method**: _policies / value function / task model / partial policies / shaping reward / other / no transfer_. Curriculum learning leverages ideas from transfer learning to transfer knowledge between tasks in the curriculum. As such, the transfer learning algorithm used affects how the curriculum will be produced. The type of knowledge transferred can be low-level knowledge, such as an entire _policy_, an _(action-)value function_, or a full _task model_, which can be used to directly initialize the learner in the target task. It can also be high-level knowledge, such as _partial policies_ (e.g., options) or _shaping rewards_. This type of information may not fully initialize the learner in the target task, but it could be used to guide the agent’s learning process in the target task. We use partial policies as an umbrella term to represent closely related ideas such as options, skills, and macro-actions. When samples from a single task are sequenced, _no transfer_ learning algorithm is necessary. Finally, we use _other_ to refer to other types of transfer learning methods. We categorize papers along this dimension based on what is transferred between tasks in the curriculum in each paper’s experimental results.

4. **Curriculum sequencer**: _automatic / domain experts / naive users_. Curriculum learning is a three-part method, consisting of task generation, sequencing, and transfer learning. While much of the attention of this survey is on automated sequencing approaches, many works consider the other parts of this method, and assume the sequencing is done by a human or oracle. Thus, we identify and categorize the type of sequencing approach used in each work similar to task generation: it can be done _automatically_ by a sequencing algorithm, or manually by humans who are either _domain experts_ or _naive users_.

5. **Curriculum adaptivity**: _static / adaptive_. Another design question when creating a curriculum is whether it should be generated in its entirety before training, or dynamically adapted during training. We refer to the former type as _static_ and to the latter as _adaptive_. Static approaches use properties of the domain, and possibly of the learning agent, to generate a curriculum before any task is learned.
Adaptive methods, on the other hand, are influenced by properties that can only be measured during learning, such as the learning progress of the agent on the task it is currently facing. For example, learning progress can be used to guide whether subsequent tasks should be easier or harder, as well as how relevant a task is for the agent at a particular point in the curriculum.

6. **Evaluation metric**: _time to threshold / asymptotic / jumpstart / total reward_. We discussed four metrics to quantify the effectiveness of learned curricula in Section 3.3. When calculating these metrics, one can choose whether to treat time spent generating the curriculum and training on the curriculum as a sunk cost, or whether to account for both of these costs. Specifically, there are three ways to measure the cost of learning and training via a curriculum. 1) The cost of generating and using the curriculum is treated as a sunk cost, and the designer is only concerned with performance on the target task after learning. This case corresponds to the weak transfer setting. 2) The cost of training on intermediate tasks in the curriculum is accounted for, when comparing to training directly on the target task. This case is most common when it is hard to evaluate the cost of generating the curriculum itself, for example if it was hand-designed by a human. 3) Lastly, the most comprehensive case accounts for the cost of generating the curriculum as well as training via the curriculum. We will refer to the last two as strong transfer, and indicate it by bolding the corresponding metric. Note that achieving asymptotic performance improvements implies strong transfer.

7. **Application area**: _toy / sim robotics / real robotics / video games / other_. Curriculum learning methods have been tested in a wide variety of domains. _Toy_ domains consist of environments such as grid worlds, cart-pole, and other low-dimensional environments. _Sim robotics_ environments simulate robotic platforms, such as in MuJoCo. _Real robotics_ papers test their method on physical robotic platforms. _Video games_ consist of game environments such as Starcraft or the Arcade Learning Environment (Atari). Finally, _other_ is used for custom domains that do not fit in these categories. We list these so that readers can better understand the scalability and applicability of different approaches, and use these to inform what methods would be suitable for their own problems.

## 4 Curriculum Learning for Reinforcement Learning Agents

In this section, we systematically survey work on each of the three central elements of curriculum learning: task generation (Section 4.1), sequencing (Section 4.2), and transfer learning (Section 4.3). For each of these subproblems, we provide a table that categorizes work surveyed according to the dimensions outlined in Section 3. The bulk of our attention will be devoted to the subproblem most commonly associated with curriculum learning: sequencing.

### 4.1 Task Generation

Task generation is the problem of creating intermediate tasks specifically to be part of a curriculum. In contrast to the life-long learning scenario, where potentially unrelated tasks are constantly proposed to the agent (Thrun, 1998), the aim of task generation is to create a set of tasks such that knowledge transfer through them is beneficial. Therefore, all the generated tasks should be relevant to the final task(s) and avoid _negative transfer_, where using a task for transfer hurts performance.
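Before surveying specific methods, the following minimal sketch illustrates the assumption that underlies all of them: that tasks can be generated by varying a parameterized task descriptor. The degrees of freedom shown are hypothetical, loosely modeled on the quick chess example:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TaskDescriptor:
    """Hypothetical degrees of freedom for a quick-chess-like domain."""
    board_size: int = 8
    num_pawns: int = 8
    piece_types: int = 6

def simplify(target: TaskDescriptor):
    """Task simplification: reduce one degree of freedom at a time to
    propose candidate source tasks; a designer (or a solvability check)
    must still verify that each candidate is actually solvable."""
    candidates = []
    if target.board_size > 4:
        candidates.append(replace(target, board_size=target.board_size - 2))
    if target.num_pawns > 0:
        candidates.append(replace(target, num_pawns=target.num_pawns // 2))
    if target.piece_types > 1:
        candidates.append(replace(target, piece_types=target.piece_types - 1))
    return candidates
```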
The properties of the research surveyed in this section are reported in Table 1. Very limited work has been dedicated to formally studying this subproblem in the context of reinforcement learning. All known methods assume the domain can be parameterized using some kind of representation, where different instantiations of these parameters create different tasks.

For instance, Narvekar et al. (2016) introduce a number of methods to create intermediate tasks for a specific final task. The methods hinge on a definition of a domain as a set of MDPs identified by a _task descriptor_, which is a vector of parameters specifying the _degrees of freedom_ in the domain. For example, in the quick chess example (see Section 1), these parameters could be the size of the board, number of pawns, etc. By varying the degrees of freedom and applying task _restrictions_, the methods define different types of tasks. Methods introduced include: _task simplification_, which directly changes the degrees of freedom to reduce the task dimensions; _promising initialization_, which modifies the set of initial states by adding states close to high rewards; _mistake learning_, which rewinds the domain to a state a few steps before a mistake is detected and resumes learning from there; and several other methods. The set of methods determines different kinds of possible tasks, which form a space of tasks in which appropriate intermediate tasks can be chosen.

Da Silva and Reali Costa (2018) propose a similar partially automated task generation procedure in their curriculum learning framework, based on Object-Oriented MDPs. Each task is assumed to have a class _environment_ parameterized by a number of attributes. A function, which must be provided by the designer, creates simpler versions of the final task by instantiating the attributes with values that make the tasks easier to solve. For example, continuing the quick chess example, the attributes could be the types of pieces, and the values are the number of each type of piece. The presence of different kinds and numbers of objects provides a range of tasks with different levels of difficulty. However, since the generation is mostly random, the designer has to make sure that the tasks are indeed solvable.

Citation | Intermediate Task Generation | Curriculum Representation | Transfer Method | Curriculum Sequencer | Curriculum Adaptivity | Evaluation Metric | Application Area
---|---|---|---|---|---|---|---
Da Silva and Reali Costa (2018) | automatic | graph | value function | automatic | static | time to threshold, total reward | toy, video games
Narvekar et al. (2016) | automatic | sequence | value function | domain experts | adaptive | asymptotic | video games
Schmidhuber (2013) | automatic | sequence | partial policies | automatic | adaptive | asymptotic | other
Stone and Veloso (1994) | automatic | sequence | other | domain experts | adaptive | time to threshold | other

Table 1: The papers discussed in Section 4.1, categorized along the dimensions presented in Section 3.4. Bolded values under evaluation metric indicate strong transfer.

Generating auxiliary intermediate tasks is a problem that has been studied in non-RL contexts as well. For instance, Stone and Veloso (1994) consider how to semiautomatically create subproblems to aid in learning to solve difficult _planning_ problems.
Rather than using a static analysis of the domain’s properties, they propose to use a partially completed search trajectory of the target task to identify what makes a problem difficult, and suggest auxiliary tasks. For example, if the task took too long and there are multiple goals in the task, try changing the order of the goals. Other methods they propose include reducing the number of goals, creating tasks to solve difficult subgoals, and changing domain operators and objects available for binding.

Lastly, Schmidhuber (2013) introduced Powerplay, a framework that focuses on inventing new problems to train a more and more general problem solver in an unsupervised fashion. The system searches for both a new task and a modification of the current problem solver, such that the modified solver can solve all previous tasks, plus the new one. The search acts on a domain-dependent encoding of the problem and the solver, and has been demonstrated on pattern recognition and control tasks (Srivastava et al., 2013). The generator of the task and new solver is given a limited computational budget, so that it favors the generation of the simplest tasks that could not be solved before. Furthermore, a possible task is to solve all previous tasks, but with a more compact representation of the solver. The resulting iterative process makes the system increasingly competent at different tasks. The task generation process effectively creates a curriculum, although in this context there are no final tasks, and the system continues to generate pairs of problems and solvers indefinitely, without any specific goal.

### 4.2 Sequencing

Given a set of tasks, or samples from them, the goal of sequencing is to order them in a way that facilitates learning. Many different sequencing methods exist, each with their own set of assumptions. One of the fundamental assumptions of curriculum learning is that we can configure the environment to create different tasks. For the practitioner attempting to use curriculum learning, the amount of control one has to shape the environment affects the type of sequencing methods that could be applicable. Therefore, we categorize sequencing methods by the degree to which intermediate tasks may differ. Specifically, they form a spectrum, ranging from methods that simply reorder experience in the final task without modifying any property of the corresponding MDP, to ones that define entirely new intermediate tasks, by progressively adjusting some or all of the properties of the final task.

In this subsection, we discuss the different sequencing approaches. First, in Section 4.2.1, we consider methods that reorder samples in the target task to derive a curriculum. Experience replay methods are one such example. In Section 4.2.2, we examine multi-agent approaches to curriculum generation, where the cooperation or competition between two (typically evolving) agents induces a sequence of progressively challenging tasks, like a curriculum. Then, in Section 4.2.3, we begin describing methods that explicitly use intermediate tasks, starting with ones that vary in limited ways from the target task. In particular, these methods only change the reward function and/or the initial and terminal state distributions to create a curriculum. In Section 4.2.4, we discuss methods that relax this assumption, and allow intermediate tasks that can vary in any way from the target task MDP. Finally, in Section 4.2.5, we discuss work that explores how humans sequence tasks into a curriculum.
#### 4.2.1 Sample Sequencing

First we consider methods that reorder samples from the final task, but do not explicitly change the domain itself. These ideas are similar to curriculum learning for supervised learning (Bengio et al., 2009), where training examples are presented to a learner in a specific order, rather than completely randomly. Bengio et al. (2009) showed that ordering these examples from simple to complex can improve learning speed and generalization ability. An analogous process can be used for reinforcement learning. For example, many current reinforcement learning methods, such as Deep Q Networks (DQN) (Mnih et al., 2015), use a replay buffer to store past state-action-reward experience tuples. At each training step, experience tuples are sampled from the buffer and used to train DQN in minibatches. The original formulation of DQN performed this sampling uniformly at random. However, as in the supervised setting, samples can be reordered or “prioritized,” according to some measure of usefulness or difficulty, to improve learning.

Citation | Intermediate Task Generation | Curriculum Representation | Transfer Method | Curriculum Sequencer | Curriculum Adaptivity | Evaluation Metric | Application Area
---|---|---|---|---|---|---|---
_Sample Sequencing (Section 4.2.1)_ | | | | | | |
Andrychowicz et al. (2017) | target | single | no transfer | automatic | adaptive | asymptotic | sim robotics
Fang et al. (2019) | target | single | no transfer | automatic | adaptive | asymptotic | sim robotics
Kim and Choi (2018) | target | single | no transfer | automatic | adaptive | asymptotic | toy, other
Lee et al. (2019) | target | single | no transfer | automatic | adaptive | time to threshold | toy, video games
Ren et al. (2018) | target | single | no transfer | automatic | adaptive | asymptotic | video games
Schaul et al. (2016) | target | single | no transfer | automatic | adaptive | asymptotic | video games
_Co-learning (Section 4.2.2)_ | | | | | | |
Baker et al. (2020) | automatic | sequence | policies | automatic | adaptive | asymptotic, time to threshold | other
Bansal et al. (2018) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Pinto et al. (2017) | automatic | sequence | policies | automatic | adaptive | time to threshold | sim robotics
Sukhbaatar et al. (2018) | automatic | sequence | policies | automatic | adaptive | time to threshold, asymptotic | toy, video games
Vinyals et al. (2019) | automatic | sequence | policies | automatic | adaptive | asymptotic | video games
_Reward and Initial/Terminal State Distribution Changes (Section 4.2.3)_ | | | | | | |
Asada et al. (1996) | domain experts | sequence | value function | automatic | adaptive | asymptotic | sim/real robotics
Baranes and Oudeyer (2013) | automatic | sequence | partial policies | automatic | adaptive | asymptotic | sim/real robotics
Florensa et al. (2017) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Florensa et al. (2018) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Ivanovic et al. (2019) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Racaniere et al. (2019) | automatic | sequence | policies | automatic | adaptive | asymptotic | toy, video games
Riedmiller et al. (2018) | domain experts | sequence | policies | automatic | adaptive | time to threshold | sim/real robotics
Wu and Tian (2017) | domain experts | sequence | task model | automatic | both | asymptotic | video games
_No Restrictions (Section 4.2.4)_ | | | | | | |
Bassich et al. (2020) | domain experts | sequence | policies | automatic | adaptive | asymptotic, time to threshold | toy
Da Silva and Reali Costa (2018) | automatic | graph | value function | automatic | static | time to threshold, total reward | toy, video games
Foglino et al. (2019a) | domain experts | sequence | value function | automatic | static | time to threshold, asymptotic, total reward | toy
Foglino et al. (2019b) | domain experts | sequence | value function | automatic | static | total reward | toy
Foglino et al. (2019c) | domain experts | sequence | value function | automatic | static | total reward | toy
Jain and Tulabandhula (2017) | domain experts | sequence | value function | automatic | adaptive | time to threshold, total reward | toy
Matiisen et al. (2017) | domain experts | sequence | policies | automatic | adaptive | asymptotic | toy, video games
Narvekar et al. (2017) | automatic | sequence | value function | automatic | adaptive | time to threshold | toy
Narvekar and Stone (2019) | domain experts | sequence | value function, shaping reward | automatic | adaptive | time to threshold | toy, video games
Svetlik et al. (2017) | domain experts | graph | shaping reward | automatic | static | asymptotic, time to threshold | toy, video games
_Human-in-the-loop Curriculum Generation (Section 4.2.5)_ | | | | | | |
Hosu and Rebedea (2016) | target | single | no transfer | automatic | adaptive | asymptotic | video games
Khan et al. (2011) | domain experts | sequence | no transfer | naive users | static | N/A | other
MacAlpine and Stone (2018) | domain experts | graph | policies | domain experts | static | asymptotic | sim robotics
Peng et al. (2018) | domain experts | sequence | task model | naive users | static | time to threshold | other
Stanley et al. (2005) | domain experts | sequence | partial policies | domain experts | adaptive | asymptotic | video games

Table 2: The papers discussed in Section 4.2, categorized along the dimensions presented in Section 3.4. Bolded values under evaluation metric indicate strong transfer.

The first to do this type of sample sequencing in the context of deep learning were Schaul et al. (2016). They proposed Prioritized Experience Replay (PER), which prioritizes and replays _important_ transitions more. Important transitions are those with high expected learning progress, which is measured by their temporal difference (TD) error. Intuitively, replaying samples with larger TD errors allows the network to make stronger updates. As transitions are learned, the distribution of important transitions changes, leading to an implicit curriculum over the samples.

Alternative metrics for priority/importance have been explored as well. Ren et al. (2018) propose to sort samples using a complexity index (CI) function, which is a combination of a self-paced prioritized function and a coverage penalty function. The self-paced prioritized function selects samples that would be of appropriate difficulty, while the coverage function penalizes transitions that are replayed frequently. They provide one specific instantiation of these functions, which are used in experiments on the Arcade Learning Environment (Bellemare et al., 2013), and show that it performs better than PER in many cases.
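As a concrete illustration of priority-based sample sequencing, the following is a minimal sketch of proportional prioritization in the spirit of PER; the published algorithm additionally uses importance-sampling corrections and a sum-tree for efficient sampling, both omitted here, and the class and method names are ours:

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritization in the spirit of PER (Schaul et al., 2016).
    Real implementations add importance-sampling weights and a sum-tree for
    O(log n) sampling; both are omitted here for clarity."""
    def __init__(self, alpha=0.6, eps=1e-6):
        self.alpha, self.eps = alpha, eps
        self.transitions, self.priorities = [], []

    def add(self, transition, td_error):
        self.transitions.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        p = np.asarray(self.priorities)
        p = p / p.sum()                      # larger TD error => sampled more often
        idx = np.random.choice(len(self.transitions), size=batch_size, p=p)
        return idx, [self.transitions[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        for i, err in zip(idx, td_errors):   # re-prioritize after each update
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

Under this view, the complexity index of Ren et al. (2018) amounts to swapping the TD-error priority above for a self-paced, coverage-penalized score.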
However, these functions must be designed individually for each domain, and designing a broadly applicable domain-independent priority function remains an open problem.

Kim and Choi (2018) consider another extension of prioritized experience replay, where the weight/priority of a sample is jointly learned with the main network via a secondary neural network. The secondary network, called ScreenerNet, learns to predict weights according to the error of the sample by the main network. Unlike PER, this approach is memoryless, which means it can directly predict the significance of a training sample even if that particular example was not seen. Thus, the approach could potentially be used to actively request experience tuples that would provide the most information or utility, creating an online curriculum.

Instead of using sample importance as a metric for sequencing, an alternative idea is to restructure the training process based on trajectories of samples experienced. For example, when learning, typically easy-to-reach states are encountered first, whereas harder-to-reach states are encountered later on in the learning cycle. However, in practical settings with sparse rewards, these easy-to-reach states may not provide a reward signal. Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) is one method to make the most of these early experiences. HER is a method that learns from “undesired outcomes,” in addition to the desired outcome, by replaying each episode with a goal that was actually achieved rather than the one the agent was trying to achieve. The problem is set up as learning a Universal Value Function Approximator (UVFA) (Schaul et al., 2015), which is a value function $v_{\pi}(s,g)$ defined over states $s$ and goals $g$. The agent is given an initial state $s_{1}$ and a desired goal state $g$. Upon executing its policy, the agent may not reach the goal state $g$, and instead land on some other terminal state $s_{T}$. While this trajectory does not help to learn to achieve $g$, it does help to learn to achieve $s_{T}$. Thus, this trajectory is added to the replay buffer with the goal state substituted with $s_{T}$, and used with an off-policy RL algorithm. HER forms a curriculum by taking advantage of the implicit curriculum present in exploration, where early episodes are likely to terminate on easy-to-reach states, and more difficult-to-reach states are found later in the training process.

One of the issues with vanilla HER is that all goals in seen trajectories are replayed evenly, but some goals may be more useful at different points of learning. Thus, Fang et al. (2019) later proposed Curriculum-guided HER (CHER) to adaptively select goals based on two criteria: curiosity, which leads to the selection of diverse goals, and proximity, which selects goals that are closer to the true goal. Both of these criteria rely on a measure of distance or similarity between goal states. At each minibatch optimization step, the objective selects a subset of goals that maximizes the weighted sum of a diversity and proximity score. They manually impose a curriculum that starts biased towards diverse goals and gradually shifts towards proximity-based goals using a weighting factor that is exponentially scaled over time.

Other than PER and HER, there are other works that reorder/resample experiences in a novel way to improve learning. One example is the episodic backward update (EBU) method developed by Lee et al. (2019).
In order to speed up the propagation of delayed rewards (e.g., a reward might only be obtained at the end of an episode), Lee et al. (2019) proposed to sample a whole episode from the replay buffer and update the values of all transitions within the sampled episode in a backward fashion. Starting from the end of the sampled episode, the $\max$ Bellman operator is applied recursively to update the target $Q$-values until the start of the sampled episode. This process basically reorders all the transitions within each sampled episode from the last timestep of the episode to the first, leading to an implicit curriculum. Updating highly correlated states in a sequence while using function approximation is known to suffer from cumulative overestimation errors. To overcome this issue, a diffusion factor $\beta\in(0,1)$ was introduced to update the current $Q$-value using a weighted sum of the new bootstrapped target value and the pre-existing $Q$-value estimate. Their experimental results show that in 49 Atari games, EBU can achieve the same mean and median human-normalized performance as DQN by using significantly fewer samples.

Methods that sequence experience samples have wide applicability and have found broad success in many applications, since they can be applied directly on the target task without needing to create intermediate tasks that alter the environment. In the following sections, we consider sequencing approaches that progressively alter how much intermediate tasks in the curriculum may differ.

#### 4.2.2 Co-learning

Co-learning is a multi-agent approach to curriculum learning, in which the curriculum emerges from the interaction of several agents (or multiple versions of the same agent) in the same environment. These agents may act either cooperatively or adversarially to drive the acquisition of new behaviors, leading to an implicit curriculum where both sets of agents improve over time. Self-play is one methodology that fits into this paradigm, and many landmark results such as TD-Gammon (Tesauro, 1995) and more recently AlphaGo (Silver et al., 2016) and AlphaStar (Vinyals et al., 2019) fall into this category. Rather than describing every work that uses self-play or co-learning, we describe a few papers that focus on how the objectives of the multiple agents can be set up to facilitate co-learning.

Sukhbaatar et al. (2018) proposed a novel method called asymmetric self-play that allows an agent to learn about the environment without any external reward in an unsupervised manner. This method considers two agents, a teacher and a student, using the paradigm of “the teacher proposing a task, and the student doing it.” The two agents learn their own policies simultaneously by maximizing interdependent reward functions for goal-based tasks. The teacher’s task is to navigate to an environment state that the student will use either as 1) a goal, if the environment is resettable, or 2) a starting state, if the environment is reversible. In the first case, the student’s task is to reach the teacher’s final state, while in the second case, the student starts from the teacher’s final state with the aim of reverting the environment to its original initial state. The student’s goal is to minimize the number of actions it needs to complete the task. The teacher, on the other hand, tries to maximize the difference between the actions taken by the student to execute the task, and the actions spent by the teacher to set up the task.
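These interdependent objectives can be summarized in a few lines. The following is a sketch consistent with the description above, not the exact published formulation (which also includes a scaling constant):

```python
def asymmetric_self_play_rewards(t_teacher, t_student):
    """Interdependent objectives of asymmetric self-play, as described above.
    t_teacher: actions the teacher spent setting up the task.
    t_student: actions the student spent completing it (capped on failure)."""
    r_student = -t_student                     # student: finish as fast as possible
    # Teacher: propose tasks that are cheap to set up but hard to solve.
    # Clipping at zero (so the teacher cannot profit from trivially failed
    # setups) is one reasonable choice, assumed here.
    r_teacher = max(0, t_student - t_teacher)
    return r_teacher, r_student
```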
The teacher, therefore, tries to identify a state that strikes a balance between being the simplest goal (in terms of number of teacher actions) for itself to find, and the most difficult goal for the student to achieve. This process is iterated to automatically generate a curriculum of intrinsic exploration.

Another example of jointly training a pair of agents adversarially for policy learning in single-agent RL tasks is Robust Adversarial RL (RARL) by Pinto et al. (2017). Unlike asymmetric self-play (Sukhbaatar et al., 2018), in which the teacher defines the goal for the student, RARL trains a protagonist and an adversary, where the protagonist learns to complete the original RL task while being robust to the disturbance forces applied by the adversarial agent. RARL is targeted at robotic systems that are required to generalize effectively from simulation, and learn robust policies with respect to variations in physical parameters. Such variations are modeled as disturbances controlled by an adversarial agent, and the adversarial agent’s goal is to learn the optimal sequence of destabilizing actions via a zero-sum game training procedure. The adversarial agent tries to identify the hardest conditions under which the protagonist agent may be required to act, increasing the agent’s robustness. Learning takes place in turns, with the protagonist learning against a fixed antagonist’s policy, and then the antagonist learning against a fixed protagonist’s policy. Each agent tries to maximize its own return, and the returns are zero-sum. The set of “destabilizing actions” available to the antagonist is assumed to be domain knowledge, and given to the adversary ahead of time.

For multi-agent RL tasks, several works have shown how simple interaction between multiple learning agents in an environment can result in emergent curricula. Such ideas were explored early on in the context of evolutionary algorithms by Rosin and Belew (1997). They showed that competition between 2 groups of agents, dubbed hosts and parasites, could lead to an “arms race,” where each group drives the other to acquire increasingly complex skills and abilities. Similar results have been shown in the context of RL agents by Baker et al. (2020). They demonstrated that increasingly complex behaviors can emerge in a physically grounded task. Specifically, they focus on a game of hide and seek, where there are two teams of agents. One team must hide with the help of obstacles and other items in the environment, while the other team needs to find the first team. They were able to show that as one team converged on a successful strategy, the other team was pressured to learn a counter-strategy. This process was repeated, inducing a curriculum of increasingly competitive agents.

A similar idea was explored by Bansal et al. (2018). They proposed to use multi-agent curriculum learning as an alternative to engineering dense shaping rewards. Their method interpolates between dense “exploration” rewards, and sparse multi-agent competitive rewards, with the exploration reward gradually annealed over time. In order to prevent the adversarial agent from getting too far ahead of the learning agent and making the task impossible, the authors propose to additionally sample older versions of the opponent. Lastly, in order to increase robustness, the stochasticity of the tasks is increased over time.

Curriculum learning approaches have also been proposed for cooperative multi-agent systems (Wang et al., 2020; Yang et al., 2020).
In these settings, there is a natural curriculum created by starting with a small number of agents, and gradually increasing them in subsequent tasks. The schedule with which to increase the number of agents is usually manually defined, and the emphasis instead is on how to perform transfer when the number of agents changes. Therefore, we discuss these approaches in more detail in Section 4.3.

Finally, while self-play has been successful in a wide variety of domains, including solving games such as Backgammon (Tesauro, 1995) and Go (Silver et al., 2016), such an approach alone was not sufficient for producing strong agents in a complex, multi-agent, partially-observable game like Starcraft. One of the primary new elements of Vinyals et al. (2019) was the introduction of a Starcraft League, a group of agents that have differing strategies learned from a combination of imitation learning from human game data and reinforcement learning. Rather than have every agent in the league maximize their own probability of winning against all other agents like in standard self-play, there were some agents that did this, and some whose goal was to optimize against the main agent being trained. In effect, these agents were trained to exploit weaknesses in the main agent and help it improve. Training against different sets of agents over time from the league induced a curriculum that allowed the main agents to achieve grandmaster status in the game.

#### 4.2.3 Reward and Initial/Terminal State Distribution Changes

Thus far, the curriculum consisted of ordering experience from the target task or modifying agents in the target environment. In the next two sections, we begin to examine approaches that explicitly create different MDPs for intermediate tasks, by changing some aspect of the MDP. First we consider approaches that keep the state and action spaces the same, as well as the environment dynamics, but allow the reward function and initial/terminal state distributions to vary.

One of the earliest examples of this type of method was _learning from easy missions_. Asada et al. (1996) proposed this method to train a robot to shoot a ball into a goal based on vision inputs. The idea was to create a series of tasks, where the agent’s initial state distribution starts close to the goal state, and is progressively moved farther away in subsequent tasks, inducing a curriculum of tasks. In this work, each new task starts one “step” farther away from the goal, where steps from the goal are measured using a domain-specific heuristic: a state is closer to the terminal state if the goal in the camera image gets larger. The heuristic implicitly requires that the state space can be categorized into “substates,” such as goal size or ball position, where the ordering of state transitions in a substate to a goal state is known. Thus, each substate has a dimension for making the task simpler or more complex. Source tasks are manually created to vary along these dimensions of difficulty.

Recently, Florensa et al. (2017) proposed more general methods for performing this reverse expansion. They proposed reverse curriculum generation, an algorithm that generates a distribution of starting states that get increasingly farther away from the goal. The method assumes at least one goal state is known, which is used as a seed for expansion. Nearby starting states are generated by taking a random walk from existing starting states by selecting actions with some noise perturbation.
In order to select the next round of starting states to expand from, they estimate the expected return for each of these states, and select those that produce a return between a manually set minimum and maximum interval. This interval is tuned to expand states where progress is possible, but not too easy. A similar approach by Ivanovic et al. (2019) considered combining the reverse expansion phase for curriculum generation with physics-based priors to accelerate learning by continuous control agents.

An opposite “forward” expansion approach has also been considered by Florensa et al. (2018). This method allows an agent to automatically discover different goals in the state space, and thereby guide exploration of the space. They do this discovery with a Generative Adversarial Network (GAN) (Goodfellow et al., 2014), where the generator network proposes goal regions (parameterized subsets of the state space) and the discriminator evaluates whether the goal region is of appropriate difficulty for the current ability of the agent. Goal regions are specified using an indicator reward function, and policies are conditioned on the goal in addition to the state, like in a universal value function approximator (Schaul et al., 2015). The agent trains on tasks suggested by the generator. In detail, the approach consists of 3 parts: 1) First, goal regions are labelled according to whether they are of appropriate difficulty. Appropriate goals are those that give a return between hyperparameters $R_{min}$ and $R_{max}$. Requiring at least $R_{min}$ ensures there is a signal for learning progress. Requiring less than $R_{max}$ ensures that it is not too easy. 2) They use the labeled goals to train a Goal GAN. 3) Goals are sampled from the GAN as well as a replay buffer, and used for training to update the policy. The goals generated by the GAN shift over time to reflect the difficulty of the tasks, and gradually move from states close to the starting state to those farther away.

Racaniere et al. (2019) also consider an approach to automatically generate a curriculum of goals for the agent, but for more complex goal-conditioned tasks in dynamic environments where the possible goals vary between episodes. The idea was to train a “setter” model to propose a curriculum of goals for a “solver” agent to attempt to achieve. In order to help the setter balance its goal predictions, they proposed three objectives which lead to a combination of three losses to train the setter model: goal validity (the goal should be valid or achievable by the current solver), goal feasibility (the goal should match the feasibility estimates for the solver with current skill), and goal coverage (encourage the setter to choose more diverse goals to encourage exploration in the space of goals). In addition, a “judge” model was trained to predict the reward the current solver agent would achieve on a goal (the feasibility of a goal) proposed by the setter. Their experimental results demonstrate the necessity of all three criteria for building useful curricula of goals. They also show that their approach is more stable and effective than the goal GAN method (Florensa et al., 2018) on complex tasks.

An alternative to modifying the initial or terminal state distribution is to modify the reward function.
Riedmiller et al. (2018) introduce SAC-X (Scheduled Auxiliary Control), an algorithm for scheduling and executing auxiliary tasks that allow the agent to efficiently explore its environment and also make progress towards solving the final task. Auxiliary tasks are defined to be tasks where the state, action, and transition function are the same as the original MDP, but where the reward function is different. The rewards they use in auxiliary tasks correspond to changes in raw or high-level sensory input, similar to Jaderberg et al. (2017). However, while Jaderberg et al. (2017) only used auxiliary tasks for improving learning of the state representation, here they are used to guide exploration, and are sequenced. The approach is a hierarchical RL method: they need to 1) learn intentions, which are policies for the auxiliary tasks, and 2) learn the scheduler, which sequences intention policies and auxiliary tasks. To learn the intentions, they learn to maximize the action-value function of each intention from a starting state distribution that comes as a result of following each of the other intention policies. This process makes the policies compatible. The scheduler can be thought of as a meta-agent that performs sequencing, whose goal is to maximize the return on the target task MDP. The scheduler selects intentions, whose policy is executed on the extrinsic task, and is used to guide exploration.

Heuristic-based methods have also been designed to sequence tasks that differ in their reward functions. One such approach is SAGG-RIAC (Self-Adaptive Goal Generation - Robust Intelligent Adaptive Curiosity) (Baranes and Oudeyer, 2013). They define _competence_ as the distance between the achieved final state and the goal state, and _interest_ as the change in competence over time for a set of goals. A region of the task space is deemed more _interesting_ than others if the latest tasks in the region have achieved a high increase in competence. The approach repeatedly selects goals by first picking a region with a probability proportional to its interest, and then choosing a goal at random within that region. With a smaller probability the system also selects a goal at random over the whole task set, or a goal close to a previously unsuccessful task. The bias towards interesting regions causes the goals to be more dense in regions where the competence increases the fastest, creating a curriculum. Because of the stochastic nature of the goal-generating process, however, not every task is necessarily beneficial in directly increasing the agent’s ability on the target task, but each contributes to updating the competence and interest measures. Since the intermediate tasks are generated online as the agent learns, in this approach both sequencing and generation result from the same sampling process.

Finally, Wu and Tian (2017) also consider changing the transition dynamics and the reward functions of the intermediate tasks. They propose a novel framework for training an agent in a partially observable 3D Doom environment. Doom is a First-Person Shooter game, in which the player controls the agent to fight against enemies. In their experiment, they first train the agent on some simple maps with several curricula. Each curriculum consists of a sequence of progressively more complex environments with varying domain parameters (e.g., the movement speed or initial health of the agent).
After learning a capable initial task model, the agent is then trained on more complicated maps and more difficult tasks with a different reward function. They also design an adaptive curriculum learning strategy in which a probability distribution over different levels of curriculum is maintained. When the agent performs well on the current distribution, the probability distribution is shifted towards more difficult tasks.

#### 4.2.4 No Restrictions

Next, there is a class of methods that create a curriculum using intermediate tasks, but make no restrictions on the MDPs of these intermediate tasks. We categorize them in three ways by how they address the task sequencing problem: treating sequencing 1) as an MDP/POMDP, 2) as a combinatorial optimization over sequences, and 3) as learning the connections in a directed acyclic task graph. Because there are no limitations on the types of intermediate tasks allowed, some assumptions are usually made about the transfer learning algorithm, and additional information about the intermediate tasks (such as task descriptors) is typically assumed. Finally, we also discuss work on an auxiliary problem to sequencing: how long to spend on each task.

#### MDP-based Sequencing

The first formalization of the sequencing problem is as a Markov Decision Process. These methods formulate curriculum generation as an interaction between 2 types of MDPs. The first is the standard MDP, which models a _learning agent_ (i.e., the student) interacting with a task. The second is a higher-level meta-MDP for the _curriculum agent_ (i.e., the teacher), whose goal is to select tasks for the learning agent.

Narvekar et al. (2017) denote the meta-MDP as a curriculum MDP (CMDP), where the state space $\mathcal{S}$ is the set of policies the learning agent can represent. These can be represented parametrically using the weights of the learning agent. The action space $\mathcal{A}$ is the set of tasks the learning agent can train on next. Learning a task updates the learning agent’s policy, and therefore leads to a transition in the CMDP via a transition function $p$. Finally, the reward function $r$ is the time in steps or episodes that it took to learn the selected task. Under this model, a curriculum agent typically starts in an initial state corresponding to a random policy for the learning agent. The goal is to reach a terminal state, which is defined as a policy that can achieve some desired performance threshold on the target task, as fast as possible.

Matiisen et al. (2017) consider a similar framework, where the interaction is defined as a POMDP. The state and action spaces of the meta-POMDP are the same as in Narvekar et al. (2017), but access to the internal parameters of the learning agent is not available. Instead, an observation of the current score of the agent on each intermediate task is given. The reward is the change in the score on the task from this timestep to the previous timestep when the same task was trained on. Thus, while Narvekar et al. (2017) focused on minimizing time to threshold performance on the target task, the design of Matiisen et al. (2017) aims to maximize the sum of performance in all tasks encountered.

While both approaches can be formalized as (PO)MDPs, learning on these (PO)MDPs is computationally expensive. Thus, both propose heuristics to guide the selection of tasks.
Narvekar et al. (2017) take a sample-based approach, where a small number of experience samples gathered on the target and intermediate tasks are compared to identify relevant intermediate tasks. The task that causes the greatest change in policy, as evaluated on the target task samples, is selected. In contrast, Matiisen et al. (2017) select tasks where the absolute value of the slope of the learning curve is highest. Thus, it selects tasks where the agent is making the most progress or where the agent is forgetting the most about tasks it has already learned. Initially tasks are sampled randomly. As one task starts making progress, it will be sampled more, until the learning curve plateaus. Then another will be selected, and the cycle will repeat until all the tasks have been learned.

Subsequently, Narvekar and Stone (2019) explored whether learning was possible in a curriculum MDP, thus avoiding the need for heuristics in task sequencing. They showed that you can represent a CMDP state using the weights of the knowledge transfer representation. For example, if the agent uses value function transfer, the CMDP state is represented using the weights of the value function. By utilizing function approximation over this state space, they showed it is possible to learn a policy over this MDP, termed a curriculum policy, which maps from the current learning progress of the agent to the task it should learn next. In addition, the approach addresses the question of how long to train on each intermediate task. While most works have trained on intermediate tasks until learning plateaus, this is not always necessary. Narvekar and Stone (2019) showed that training on each intermediate task for a few episodes, and letting the curriculum policy reselect tasks that require additional time, results in faster learning. However, while learning a curriculum policy is possible, doing so independently for each agent and task is still very computationally expensive.

#### Combinatorial Optimization and Search

A second way of approaching sequencing is as a combinatorial optimization problem: given a fixed set of tasks, find the permutation that leads to the best curriculum, where best is determined by one of the CL metrics introduced in Section 3.3. Finding the optimal curriculum is a computationally difficult black-box optimization problem. Thus, typically fast approximate solutions are preferred. One such popular class of methods are metaheuristic algorithms, which are heuristic methods that are not tied to specific problem domains, and thus can be used as black boxes. Foglino et al. (2019a) adapt and evaluate four representative metaheuristic algorithms to the task sequencing problem: beam search (Ow and Morton, 1988), tabu search (Glover and Laguna, 1998), genetic algorithms (Goldberg, 1989), and ant colony optimization (Dorigo et al., 1991). The first two are trajectory-based, which start at a guess of the solution, and search the neighborhood of the current guess for a better solution. The last two are population-based, which start with a set of candidate solutions, and improve them as a group towards areas of increasing performance. They evaluate these methods for 3 different objectives: time to threshold, maximum return (asymptotic performance), and cumulative return. Results showed that the trajectory-based methods outperformed their population-based counterparts on the domains tested.
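As a concrete baseline for this family of methods, the following sketch (names are ours) builds a task-level sequence curriculum by greedy black-box search, essentially a beam search of width 1 over task sequences:

```python
def greedy_task_sequencing(tasks, evaluate, max_length):
    """Build a task-level sequence curriculum by greedy black-box search.
    `evaluate(prefix)` trains a fresh learner through the candidate prefix
    followed by the target task (e.g., in a simulator) and returns the
    chosen CL metric, such as cumulative return."""
    curriculum, best = [], evaluate([])        # baseline: target task from scratch
    for _ in range(max_length):
        remaining = [t for t in tasks if t not in curriculum]
        if not remaining:
            break
        scored = [(evaluate(curriculum + [t]), t) for t in remaining]
        score, task = max(scored, key=lambda pair: pair[0])
        if score <= best:                      # stop when no extension helps
            break
        best, curriculum = score, curriculum + [task]
    return curriculum
```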
While metaheuristic algorithms are broadly applicable, it is also possible to create specific heuristic search methods targeted at particular problems, such as task sequencing with a specific transfer metric objective. Foglino et al. (2019b) introduce one such heuristic search algorithm, designed to optimize for the cumulative return. Their approach begins by computing the transferability between all pairs of tasks, using a simulator to estimate the cumulative return attained by using one task as a source for another. The tasks are then sorted according to their potential of being a good source or target, and iteratively chained in curricula of increasing length. The algorithm is anytime, and eventually exhaustively searches the space of all curricula up to a predefined maximum length.

Jain and Tulabandhula (2017) propose 4 different online search methods to sequence tasks into a curriculum. Their methods also assume a simulator is available to evaluate learning on different tasks, and use the learning trajectory of the agent on tasks seen so far to select new tasks. The 4 approaches are: 1) learn each source task for a fixed number of steps, and add the one that gives the most reward, the intuition being that high-reward tasks are the easiest to make progress on; 2) calculate a transferability matrix for all pairs of tasks, and create a curriculum by chaining tasks backwards from the target task greedily with respect to it; 3) extract a feature vector for each task (as in Narvekar et al., 2016), and learn a regression model to predict transferability from the feature vector; and 4) extract pairwise feature vectors between pairs of tasks, and learn a regression model to predict transferability.

Finally, instead of treating the entire problem as a black box, it has also been treated as a gray box. Foglino et al. (2019c) propose such an approach, formulating the optimization problem as the composition of a white-box scheduling problem and a black-box parameter optimization. The scheduling formulation partially models the effects of a given sequence, assigning a utility to each task and a penalty to each pair of tasks, which captures the effect on the objective of learning two tasks one after the other. The white-box scheduling problem is an integer linear program with a single optimal solution that can be computed efficiently. The quality of the solution, however, depends on the parameters of the model, which are optimized by a black-box optimization algorithm. This external optimization problem searches for the optimal parameters of the internal scheduling problem, so that the output of the two chained optimizers is a curriculum that maximizes cumulative return.

#### Graph-based Sequencing

Another class of approaches explicitly treats the curriculum sequencing problem as connecting nodes with edges into a directed acyclic task graph. Typically, the task-level curriculum formulation is used, where nodes in the graph are associated with tasks. A directed edge from one node to another implies that one task is a source task for another. Existing work has relied on heuristics and additional domain information to determine how to connect different task nodes in the graph. For instance, Svetlik et al. (2017) assume the set of tasks is known in advance, and that each task is represented by a task feature descriptor. These features encode properties of the domain. For example, in a domain like Ms. Pac-Man, features could be the number of ghosts or the type of maze. The approach consists of three parts. First, a binary feature vector is extracted from each task's feature descriptor to indicate which features are present. This binary vector is used to group subsets of tasks that share similar elements. Second, tasks within each group are connected into subgraphs using a novel heuristic called _transfer potential_. Transfer potential is defined for discrete state spaces, and trades off the applicability of a source task against the cost needed to learn it. Applicability is defined as the number of states in the target task to which a value function learned in the source can be applied, while the cost of a source task is approximated as the size of its state space. Finally, once subgraphs have been created, they are linked together using directed edges from subgraphs that have a set of binary features to subgraphs that have a superset of those features.

Da Silva and Reali Costa (2018) follow a similar procedure, but formalize the idea of task feature descriptors using an object-oriented approach. The idea is based on representing the domain as an object-oriented MDP, where states consist of a set of objects. A task OO-MDP is specified by the set of specific objects in this task, and the state, action, transition, and reward functions of the task. With this formulation, source tasks can be generated by selecting a smaller set of objects from the target task to create a simpler task. To create the curriculum graph, they adapt the idea of transfer potential to the object-oriented setting: instead of counting the number of states in which the source task value function is applicable, they compare the sets of objects between the source and target tasks. While the sequencing is automated, human input is still required to make sure the tasks created are solvable.
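For discrete state spaces, the transfer potential heuristic described above admits a very compact implementation. The sketch below is one plausible instantiation of the applicability-versus-cost trade-off (the exact formula of Svetlik et al. (2017) may differ); tasks are assumed to expose their state sets as Python sets.

```python
def transfer_potential(source_states, target_states):
    """One plausible instantiation of the transfer potential heuristic:
    applicability (number of target states where a source value function
    can be applied) divided by cost (size of the source state space)."""
    applicability = len(source_states & target_states)
    cost = len(source_states)        # proxy for the cost of learning source
    return applicability / cost if cost else 0.0

# Hypothetical usage: add an edge from source to target when the
# potential exceeds a chosen threshold.
# if transfer_potential(src.states, tgt.states) > 0.5: graph.add_edge(src, tgt)
```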
#### Auxiliary Problems

Finally, we discuss an additional approach that tackles an auxiliary problem to sequencing: how long to spend on each intermediate task in the curriculum. Most existing work trains on intermediate tasks until performance plateaus. However, as we mentioned previously, Narvekar and Stone (2019) showed that this is unnecessary, and that better results can be obtained by training for a few episodes, and reselecting or changing tasks dynamically as needed. Bassich et al. (2020) consider an alternative method for this problem based on _progression_ functions. Progression functions specify the pace at which the difficulty of the task should change over time. The method relies on the existence of a task-generation function, which maps a desired complexity $c_{t}\in[0,1]$ to a task of that complexity. The most complex task, for which $c_{t}=1$, is the final task. After every episode, the progression function returns the difficulty of the task that the agent should face at that time. The authors define two types of progression functions: fixed progressions, for which the learning pace is predefined before learning takes place; and adaptive progressions, which adjust the learning pace online based on the performance of the agent. Linear and exponential progressions are two examples of fixed progression functions, and increase the difficulty of the task linearly and exponentially, respectively, over a prespecified number of time steps. The authors also introduce an adaptive progression based on a friction model from physics, which increases $c_{t}$ as the agent's performance is increasing, and slows down the learning pace if performance decreases.
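The sketch below illustrates both flavors: a fixed linear progression, and an adaptive update that accelerates when performance improves and slows otherwise. The adaptive rule here is a loose approximation of the friction-based progression, not the exact model of Bassich et al. (2020), and `make_task` and the agent's training API are assumed placeholders.

```python
def linear_progression(t, total_steps):
    """Fixed progression: difficulty c_t rises linearly from 0 to 1."""
    return min(1.0, t / total_steps)

def adaptive_progression(c, perf_delta, gain=0.05):
    """Adaptive progression (loosely friction-like): speed up when
    performance improves (perf_delta > 0), stall when it drops.
    Difficulty never decreases; it only grows more or less quickly."""
    return min(1.0, max(c, c + gain * perf_delta))

def run_with_progression(agent, make_task, episodes):
    """`make_task(c)` maps difficulty c in [0, 1] to a task of that
    complexity; `agent.train_episode(task)` returns a performance score.
    Both are assumed placeholder APIs."""
    c, perf = 0.0, 0.0
    for _ in range(episodes):
        new_perf = agent.train_episode(make_task(c))
        c = adaptive_progression(c, new_perf - perf)
        perf = new_perf
    return agent
```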
Progression functions allow the method to change the task at every episode, solving the problem of deciding how long to spend in each task, while simultaneously creating a continually changing curriculum.

#### 4.2.5 Human-in-the-Loop Curriculum Generation

Thus far, all the methods discussed in Section 4.2 create a curriculum _automatically_ using a sequencing algorithm, which either reorders samples from the final task or progressively alters how much intermediate tasks in the curriculum may differ. Bengio et al. (2009) and Taylor (2009) both emphasize the importance of better understanding how _humans_ approach designing curricula. Humans may be able to design good curricula by considering which intermediate tasks are “too easy” or “too hard” given the learner's current ability, similar to how humans are taught with the zone of proximal development (Vygotsky, 1978). These insights could then be leveraged when designing automated curriculum learning systems. Therefore, in this section, we consider curriculum sequencing approaches that are done _manually_ by humans who are either _domain experts_, who have specialized knowledge of the problem domain, or _naive users_, who do not necessarily know about the problem domain and/or machine learning.

One example of having domain experts manually generate the curriculum is the work done by Stanley et al. (2005), in which they explore how to keep video games interesting by allowing agents to change and to improve through interaction with the player. They use the NeuroEvolving Robotic Operatives (NERO) game, in which simulated robots start the game with no skills and have to learn complicated behaviors in order to play the game. The human player takes the role of a trainer and designs a curriculum of training scenarios to train a team of simulated robots for military combat. The player has a natural interface for setting up training exercises and specifying desired goals. An ideal curriculum would consist of exercises with increasing difficulty, so that the agent can start by learning basic skills and gradually build on them. In their experiments, the curriculum is designed by several NERO programmers who are familiar with the game domain. They show that the simulated robots could successfully be trained to learn different sophisticated battle tactics using the curriculum designed by these domain experts. It is unclear whether a human player who is unfamiliar with the game could design a good curriculum.

A more recent example is by MacAlpine and Stone (2018). They use a very extensive manually constructed curriculum to train agents to play simulated robot soccer. The curriculum consists of a training schedule over 19 different learned behaviors. It encompasses skills such as moving to different positions on the field with different speeds and rotation, variable distance kicking, and accessory skills such as getting up when fallen. Optimizing these skills independently can lead to problems at the intersection of these skills. For example, optimizing for speed in a straight walk can lead to instability if the robot needs to turn or kick due to changing environment conditions. Thus, the authors of this work hand-designed a curriculum to train related skills together using an idea called overlapping layered learning. This curriculum is designed using their domain knowledge of the task and agents.
While domain experts usually generate good curricula to facilitate learning, most existing work does not explicitly explore their curriculum design process. It is unclear what kind of design strategies people follow when sequencing tasks into a curriculum. Published research on Interactive Reinforcement Learning (Thomaz and Breazeal, 2006; Knox and Stone, 2009; Suay and Chernova, 2011; Knox and Stone, 2012; Griffith et al., 2013; Subramanian et al., 2016; Loftin et al., 2016; MacGlashan et al., 2017) has shown that RL agents can successfully speed up learning using human feedback, demonstrating the significant role humans can play in teaching an agent to learn a (near-) optimal policy. This large body of work mainly focuses on understanding how human teachers want to teach the agent and how to incorporate these insights into the standard RL framework. Similarly, much of the curriculum design process is still left to human teachers. As pointed out by Bengio et al. (2009), the notion of simple and complex tasks is often based on human intuition, and there is value in understanding how humans identify “simple” tasks. Along these lines, some work has been done to study whether curriculum design is a prominent teaching strategy that naive users choose when teaching an agent, and how they approach designing curricula.

Figure 4: One example of curricula designed by human users. (a) Given final task. (b) A curriculum designed by one human participant.

To study the teaching strategies followed by naive users, Khan et al. (2011) conduct behavioral studies in which human participants need to teach a robot the concept of whether an object can be grasped with one hand. In their experiment, participants are provided with 31 cards with photos of common objects (e.g., food, furniture, and animals) for them to select. The experiment consists of two subtasks. In the first subtask, participants sort the objects on the table based on their subjective ratings of their graspability. In the second subtask, participants pick up the cards from the table and show them to the robot while teaching the robot the concept of graspability, using as few cards as possible. While teaching the robot the object's graspability, participants can use either any natural language or only the labels “graspable” and “not graspable,” depending on which of two conditions they are randomly assigned to. They observe that participants follow three distinct teaching strategies, one of which is consistent with the curriculum learning principle, i.e., starting simple and gradually increasing the difficulty of the task. Furthermore, they propose a novel theoretical framework as a potential explanation for the teaching strategy that follows the curriculum learning principle, which shows that this strategy results from minimizing the learner's expected per-iteration error.

Peng et al. (2018) also explore how naive users design a curriculum of tasks for an agent, but in a more complex sequential decision-making task. Specifically, a simple simulated home environment is used, where the agent must learn to perform tasks in a variety of environments. The tasks are specified via text commands and the agent is trained to perform the task via reinforcement and punishment feedback from a human trainer.
The agent learns from this human feedback using the goal-directed Strategy-Aware Bayesian Learning (SABL) algorithm (Loftin et al., 2016). In the user study, participants are asked to design a set of training assignments for the agent to help it quickly learn to complete the given final assignment (shown in Figure 4a). A set of source tasks is provided for human participants to select and sequence. One example of curricula designed by human participants is shown in Figure 4b. Their empirical results show that, compared to directly learning the pre-specified final task from scratch, non-expert humans can successfully design curricula that result in better overall agent performance on learning both the entire curriculum and the final task. They also discover that humans are more likely to select commands for intermediate tasks that include concepts that are important for the final task, and that doing so results in curricula that lead to better overall agent performance. Furthermore, they demonstrate that by taking advantage of this type of non-expert guidance, their curriculum-learning algorithm can be adapted to learn the human-generated curricula more efficiently.

There is also some work that does not explicitly ask humans to design a curriculum, but uses human data to help generate the curriculum. One example is the work done by Hosu and Rebedea (2016), in which they propose a deep RL method that combines online agent experiences with offline human experiences to train the agent more efficiently. In some sparse-reward Atari games such as Montezuma's Revenge and Private Eye, the agent needs to execute a long sequence of specific actions to receive the first positive reward from the environment, which makes the exploration problem much harder. Thus, the commonly used $\epsilon$-greedy strategy cannot find any game paths that reach a first state with positive reward, preventing the neural network from learning features relevant to good states. Inspired by curriculum learning and the human starts evaluation metric used for testing Atari agents, they use checkpoints sampled from a human player's game experience as starting points for the learning process. The main intuition behind this approach is that at least some of the checkpoints will be “easier” starting points, closer to states with positive reward, from which the agent can benefit. While this method belongs to the class of sequencing approaches discussed in Section 4.2.1, which reorder samples in the final task to derive a curriculum, it additionally uses more informative samples generated by naive human users to build a more efficient curriculum.

We find that very limited work has been done on investigating how humans design curricula. While the work discussed in this section enriches our empirical understanding of human teaching and gives us some insights into the development of new machine-learning algorithms and interfaces that can better accommodate machine- or human-created curricula, we believe more work needs to be done along this line.

### 4.3 Knowledge Transfer

While we view sequencing, as covered in Section 4.2, as the core concept of curriculum learning, the whole premise of CL depends on an agent's ability to transfer knowledge among tasks.
While a full discussion of transfer learning for RL is beyond the scope of this survey, this subsection is designed to provide the reader a brief introduction to the area so that they can effectively leverage it as part of their own explorations in curriculum learning. In curriculum learning, transfer learning methods are used to allow the agent to reuse knowledge learned from one intermediate task to another within the curriculum. It is worth noting that when creating a curriculum using only samples from the target task (discussed in Section 4.2.1), there is no transfer, as there is only a single task (the target task) and correspondingly no change in the environment. However, when creating a curriculum using multiple intermediate tasks, which may differ in state/action space, reward function, or transition function from the final task, transfer learning is needed to extract and pass on reusable knowledge acquired in one intermediate task to the next.

The type of knowledge transferred also directly affects the type of learner that is applicable to the learning process. Transferred knowledge can be low-level, such as an entire policy, a value function, a full task model, or some training instances, which can be directly used to initialize the learner in the target task. The knowledge can also be high-level, such as partial policies or options, skills, shaping rewards, or subtask definitions. This type of information may not fully initialize the learner in the target task, but it could be used to guide the agent's learning process in the target task. In this subsection, we discuss different transfer learning approaches used in curricula.

In policy transfer, a policy learned in a source or intermediate task is used to initialize the policy in the target task. When transferring policies between different tasks, the tasks may differ in some aspect of the MDP, such as starting states (Florensa et al., 2017), reward functions (Florensa et al., 2018; Riedmiller et al., 2018), or transition functions (Clegg et al., 2017). For instance, Clegg et al. (2017) demonstrate that an arm-like manipulator can successfully learn the control policy for a simulated dressing task, by transferring policies between tasks with different transition functions. In a dressing task, the goal is to achieve a desired relative positioning of the garment and the limb. To do this, they first train a sphere to move through a funnel-like geometry to reach some target location. They then directly apply the learned policy to a different scenario in which a manipulator with arbitrary shape navigates through a simulated garment. The main trick is to train multiple spheres using a curriculum learning strategy and then aggregate them to control the manipulator in the dressing task.

Citation | Intermediate Task Generation | Curriculum Representation | Transfer Method | Curriculum Sequencer | Curriculum Adaptivity | Evaluation Metric | Application Area
---|---|---|---|---|---|---|---
Clegg et al. (2017) | domain experts | sequence | policies | domain experts | static | asymptotic, time to threshold | sim robotics
Fujii et al. (1998) | domain experts | sequence | partial policies | domain experts | static | asymptotic | real robotics
Karpathy and Van De Panne (2012) | domain experts/target | sequence/single | partial policies/no transfer | domain experts/automatic | static/adaptive | time to threshold | sim robotics
Rusu et al. (2016) | domain experts | sequence | policies | domain experts | static | asymptotic | video games
Shao et al. (2018) | domain experts | sequence | task model | domain experts | static | asymptotic, total reward | video games
Sinapov et al. (2015) | automatic | sequence | value function | automatic | static | jump start | video games
Tessler et al. (2017) | domain experts | sequence | partial policies | domain experts | static | asymptotic | video games
Vezhnevets et al. (2016) | automatic | sequence | partial policies | automatic | static | asymptotic, total reward | video games
Wang et al. (2020) | domain experts | sequence | policies | domain experts | static | asymptotic | video games
Yang and Asada (1996) | domain experts | sequence | partial policies | automatic | adaptive | asymptotic, time to threshold | real robotics
Yang et al. (2020) | domain experts | sequence | policies | domain experts | static | asymptotic, time to threshold | toy, other
Zimmer et al. (2018) | domain experts | sequence | partial policies | domain experts | static | asymptotic, total reward | sim robotics

Table 3: The papers discussed in Section 4.3, categorized along the dimensions presented in Section 3.4. Bolded values under evaluation metric indicate strong transfer.
In Shao et al. (2018), a learned task model is transferred between tasks and used to initialize the policy network; it is thus similar to transferring policies. Their work aims to solve the problem of multi-agent decision making in StarCraft micromanagement, where the goal is to control a group of units to destroy the enemy under certain terrain conditions. A parameter sharing multi-agent gradient-descent Sarsa($\lambda$) (PS-MAGDS) method is proposed to train the units to learn an optimal policy, which is parametrized by a feed-forward neural network. PS-MAGDS extends the traditional Sarsa($\lambda$) to multiple units by sharing parameters of the policy network among units to encourage cooperative behaviors. A reward function including small immediate rewards is also designed to accelerate the learning process. When using transfer learning in their experiments, the agents are first trained in some small-scale source scenarios using PS-MAGDS. The well-trained model is then used to initialize the policy network to learn micromanagement in the target scenarios. To scale the combat to a large-scale scenario, they combine curriculum learning and transfer learning, where the agents are trained with a sequence of progressively more complex micromanagement tasks. The difficulty of the micromanagement task is controlled by changing the number and type of units.

Value function transfer is another common method for transferring low-level knowledge between intermediate tasks within a curriculum. In most existing work (Sinapov et al., 2015; Narvekar et al., 2017; Da Silva and Reali Costa, 2018), value function transfer is achieved by using the parameters of a value function learned in one intermediate task to initialize the value function in the next intermediate task in the curriculum, such that the agent learns the final task with some initial policy that is better than random exploration. For example, Sinapov et al. (2015) focus on addressing the task selection problem in curriculum learning using value function transfer, under the assumption that no samples from the final tasks are available. They propose to use meta-data (i.e., a fixed-length feature vector that describes the task) associated with each task to identify suitable intermediate tasks. The main idea is to use such meta-data to learn the benefits of transfer between different 'source-target' task pairs, and to have this generalize to new, unseen task pairs to guide task selection.
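As a minimal illustration of this mechanism, the tabular sketch below carries a single Q-table through an ordered list of tasks, so that each task after the first begins from the values learned on its predecessor. The Gym-like environment interface (`reset`, `step`, `n_actions`) and shared state/action spaces are assumptions of this sketch, not a specific published implementation.

```python
import collections
import random

def q_learning(env, q, episodes=200, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning on one task, starting from the transferred table q."""
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:
                a = random.randrange(env.n_actions)      # explore
            else:
                a = max(range(env.n_actions), key=lambda a2: q[(s, a2)])
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(q[(s2, a2)]
                                             for a2 in range(env.n_actions))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

def train_through_curriculum(curriculum):
    """Value function transfer: the Q-table learned on each task initializes
    learning on the next, so later tasks start better than random."""
    q = collections.defaultdict(float)
    for env in curriculum:         # ordered tasks, ending with the target
        q = q_learning(env, q)
    return q
```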
When transferring low-level policies or value functions across tasks, there are several challenges that arise, particularly in the modern context of deep reinforcement learning. First is the problem of catastrophic forgetting, where knowledge from previously learned tasks is lost as information on a new task is incorporated. This effect occurs because the weights of the neural network optimized for a first task must be changed to meet the objectives of a new task, often resulting in poorer performance on the original task. Typically, in the curriculum setting, we only care about performance in the final tasks. However, if information from two orthogonal tasks needs to be combined (such as two independent skills), this challenge needs to be addressed. One approach is progressive neural networks (Rusu et al., 2016), which train a new network “column” for each new task, and leverage lateral connections to previously learned network columns to achieve transfer. When training subsequent columns, parameters from previous columns are frozen, which prevents catastrophic forgetting. The limitation is that the number of parameters grows with the number of tasks, and at inference time, the task label is needed to know which column to extract output from.

A second problem is the case where the state and action spaces differ between tasks. One alternative is to transfer higher-level knowledge across tasks, such as partial policies or options. A partial policy is a policy that is not necessarily defined for all states in the state space of an MDP. We use partial policies as an umbrella term to represent closely related ideas such as options, skills, and macro-actions. Yang and Asada (1996) transfer learned control parameters, which play a role similar to partial policies, between tasks. To solve the impedance learning problem for high-speed robotic assembly, they allow the system to learn impedance parameters associated with different dynamic motions separately, rather than learning all the control parameters simultaneously. For instance, they first learn only the parameters associated with quasistatic motion by driving the system slowly, leaving other parameters unlearned. After the quasistatic parameters have been learned, they then slightly increase the motion speed, and use the learned values to initialize the quasistatic parameters when learning other parameters.

Another example of transferring partial policies between tasks is the work done by Zimmer et al. (2018). Their main idea is to progressively increase the dimensionality of the tackled problem by increasing the (continuous) state and action spaces of the MDP, while an agent is learning a policy. The agent first learns to solve the source task with reduced state and action spaces until the increase in performance stagnates. Then, the partial policy learned by the agent is used as an initialization to learn the full policy in the target task with full state and action spaces. A developmental layer (like a dropout layer) is added to the network to filter dimensions of the states and actions. Similarly, Fujii et al. (1998) transfer options between tasks.
To train mobile robots to learn collision avoidance behaviors in multi-robot systems more efficiently, they develop a multi-layered RL mechanism. Rather than gradually increasing the level of task complexity based on the learner's performance as in Yang and Asada (1996), their learning process consists of four stages, like a curriculum, in which each stage learns a pre-defined controller. Each controller learns an option to solve a pre-defined sub-task. For instance, the first controller learns to move toward a specific goal. Then the output (goal-directed behavior) of the first controller is used as input for the second controller, which aims to learn to avoid collisions with a single robot, and so on.

Vezhnevets et al. (2016) also transfer high-level macro-actions between tasks, which are simpler instances of options. In their experiment, the agent is trained with a curriculum where the goal state is first set to be very close to the start state and is then moved further away during the learning process. Although the task gets progressively harder, the temporally abstracted macro-actions remain the same. The macro-actions learned early on can also be easily adapted using their proposed architecture. Specifically, a deep recurrent neural network architecture is used to maintain a multi-step action plan. The network learns when to commit to the action plan to generate macro-actions and when to update the plan based on observations.

Another mechanism for transfer are skills. Tessler et al. (2017) propose a deep RL method that effectively retains and transfers learned skills to solve lifelong learning in MineCraft. In their work, a set of $N$ skills are trained a priori on various sub-tasks, which are then reused to solve the harder composite task. In their MineCraft experiment, the agent's action space includes the original primitive actions as well as the set of pre-learned skills (e.g., navigate and pickup). A hierarchical architecture is developed to learn a policy that determines when to execute primitive actions and when to reuse pre-learned skills, by extending the vanilla DQN architecture (Mnih et al., 2015). The skills could be sub-optimal when they are directly reused for more complex tasks, and this hierarchical architecture allows the agent to learn to refine the policy by using primitive actions. They also show the potential for reusing a pre-learned skill to solve related tasks without performing any additional learning.

Rather than selectively reusing pre-learned skills, Karpathy and Van De Panne (2012) focus on learning motor skills in an order of increasing difficulty. They decompose the acquisition of skills into a two-level curriculum: a _high-level_ curriculum specifies the order in which different motor skills should be learned, while the _low-level_ curriculum defines the learning process for a specific skill. The high-level curriculum orders the skills in a way such that each skill is relatively easy to learn, using the knowledge of the previously learned skills. For instance, the Acrobot first learns the Hop (easy to learn from scratch) and Flip (similar to hopping very slowly) skills, and then learns the more complex Hop-Flip skill. The learned skill-specific task parameters for easier skills highly constrain the states that the Acrobot could be in, making it easier to learn more complex skills. For example, the Hop-Flip skill begins from a hopping gait of some speed, which can be reached by repeatedly executing the previously learned Hop skill.
In multi-agent settings, several specific methods have been designed for curricula that progressively scale the number of agents between tasks. In these settings, the state and action spaces often scale based on the number of agents present. One common assumption in many of these methods is that the state space can be factored into elements for the environment $s^{env}$, the agent $s^{n}$, and all other agents $s^{-n}$. For example, Yang et al. (2020) propose CM3, which takes a two-stage approach. In the first stage, a single agent is trained without the presence of other agents. This is done by inducing a new MDP that removes all dependencies on agent interactions (i.e., removing $s^{-n}$) and training a network on this subspace. Then, in the second stage, cooperation is learned by adding the parameters for the other agents into the network. Wang et al. (2020) propose 3 different approaches for multi-agent settings. The first is buffer reuse, which saves the replay buffers from all previous tasks and samples experience from all of them to train in the current task. Samples from lower dimensional tasks are padded with zeros. The second is curriculum distillation, which adds a distillation loss based on the KL divergence between the policies/Q-values of different tasks. The third is transferring the model using a new network architecture called Dynamic Agent-number Network (DyAN). In this architecture, the state space elements related to the agent and environment go through a fully connected network, while the observations for each teammate agent are passed through a graph neural network (GNN) and then aggregated. These networks are subsequently combined to produce Q-values or policies.

## 5 Related Areas and Paradigms

Curriculum learning is an idea that has been studied in other areas of machine learning and human education, and is similar to several existing paradigms in reinforcement learning. In this section, we first relate curriculum learning to approaches in reinforcement learning that aim to improve sample complexity, and that consider learning multiple sets of tasks (Section 5.1). Then we describe approaches to learn curricula in supervised learning (Section 5.2) and for teaching and human education (Section 5.3). We include these approaches with the idea that the insights discovered in these areas could be adapted to apply to the reinforcement learning setting with autonomous agents.

### 5.1 Related Paradigms in Reinforcement Learning

One of the central challenges in applying reinforcement learning to real-world problems is sample complexity. Due to issues such as a sparse reward signal or complex dynamics, difficult problems can take an RL agent millions of episodes to learn a good policy, with many suboptimal actions taken during the course of learning. Many different approaches have been proposed to deal with this issue. To name a few: imitation learning (Schaal, 1997) uses demonstrations from a human as labels for supervised learning to bootstrap the learning process; off-policy learning (Hanna et al., 2017) uses existing data from an observed behavior policy to estimate the value of a desired target policy; and model-based approaches (Sutton and Barto, 1998) first learn a model of the environment, which can then be used for planning the optimal policy. Each of these methods comes with its own advantages and disadvantages. For imitation learning, the assumption is that human demonstrations are available.
However, these are not always easy to obtain, especially when a good policy for the task is not known. In off-policy learning, in order to make full use of existing data, it is assumed that the behavior policy has a nonzero probability of selecting each action, and typically that every action to be evaluated under the target policy has been seen at least once. Finally, model-based approaches typically first learn a model of the environment, and then use it for planning. However, any inaccuracies in the learned model can compound as the planning horizon increases. Curriculum learning takes a different approach, and makes a different set of assumptions. The primary assumption is that the environment can be configured to create different subtasks, and that it is easier for the agent to discover _on its own_ reusable pieces of knowledge in these subtasks that can be used for solving a more challenging task.

Within reinforcement learning, there are also several paradigms that consider learning on a set of tasks so as to make learning more efficient. Multitask learning, lifelong/continual learning, active learning, and meta-learning are four such examples. In _multitask learning_, the goal is to learn how to solve _sets_ of prediction or decision making tasks. Formally, given a set of tasks $m_{1},m_{2},\ldots,m_{n}$, the goal is to _co-learn_ all of these tasks, by optimizing the performance over all $n$ tasks simultaneously. Typically, this optimization is facilitated by learning over some shared basis space. For example, Caruana (1997) considers multitask learning for supervised learning problems, and shares layers of a neural network between tasks. In supervised learning, these tasks are different classification or regression problems. Similar ideas have been applied in a reinforcement learning context by Wilson et al. (2007). In reinforcement learning, different tasks correspond to different MDPs.

_Lifelong learning_ and _continual learning_ can be viewed as an online version of multitask learning. Tasks are presented one at a time to the learner, and the learner must use shared knowledge learned from previous tasks to more efficiently learn the presented task. As in multitask learning, typically the goal is to optimize performance over all tasks given to the learner. Lifelong and continual learning have been examined in both the supervised setting (Ruvolo and Eaton, 2013a) and the reinforcement learning setting (Ring, 1997; Ammar et al., 2014). The distinguishing feature of curriculum learning compared to these works is that in curriculum learning, we have full control over the _order_ in which tasks are selected. Indeed, we may have control over the _creation_ of tasks as well. In addition, the goal is to optimize performance for a specific target task, rather than all tasks. Thus, source tasks in curriculum learning are designed solely to improve performance on the target task; we are not concerned with optimizing performance in a source.

In _active learning_, the learner chooses which task or example to learn or ask about next, from a given set of tasks. Typically, active learning has been examined in a semi-supervised learning setting: a small amount of labeled data exists whereas a larger amount of unlabeled data is present. The labeled data is used to learn a classifier to infer labels for unlabeled data. Unlabeled data that the classifier is not confident about is requested for a label from a human user.
For example, Ruvolo and Eaton (2013b) consider active learning in a lifelong learning setting, and show how a learner can actively select tasks to improve learning speed for all tasks in a set, or for a specific target task. The selection of which task to learn next is similar to the _sequencing_ aspect of curriculum learning. However, the full method of curriculum learning is much broader, as it also encompasses creating the space of tasks to consider. Ruvolo and Eaton (2013b) and similar active learning work typically assume the set of tasks to learn and select from is already given. In addition, active learning has typically been examined for supervised prediction tasks, whereas we are concerned with reinforcement learning tasks.

Finally, in _meta-learning_ (Finn et al., 2017), the goal is to train an agent on a variety of tasks such that it can quickly adapt to a new task within a small number of gradient descent steps. Typically, the agent is not given information identifying the task it is training on. In contrast, in curriculum learning, the learning agent may or may not have information identifying the task. However, the process that designs the curriculum by sequencing tasks usually does have this information. As in the lifelong setting, there is no significance attached to the order in which tasks are presented to the learner. In addition, the objective in meta-learning is to train for fast adaptability, rather than for a specific final task as is the case in curriculum learning.

### 5.2 Curricula in Supervised Machine Learning

In addition to reinforcement learning, curriculum learning has been examined for supervised learning. While it is beyond the scope of this article to extensively survey supervised CL methods, we would like to highlight a few that could inspire ideas and draw parallels to the RL setting. Bengio et al. (2009) first formalized the idea of curriculum learning in the context of supervised machine learning. They conducted case studies examining when and why training with a curriculum can be beneficial for machine learning algorithms, and hypothesized that a curriculum serves as both a continuation method and a regularizer. A continuation method is an optimization method for non-convex criteria, where a smoothed version of the objective is optimized first, with the smoothing gradually reduced over training iterations. Typically, “easy” examples in a curriculum correspond to a smoother objective. Using a simple shape recognition and language domain, they showed that training with a curriculum can improve both learning speed and performance.

While many papers before Bengio et al. (2009) _used_ the idea of a curriculum to improve training of machine learning algorithms, most work considering how to systematically _learn_ a curriculum came after. One recent example is work by Graves et al. (2017). They introduce measures of _learning progress_, which indicate how well the learner is currently improving from the training examples it is being given. They propose 2 main measures based on 1) the rate of increase in prediction accuracy and 2) the rate of increase of network complexity. These serve as the reward for a non-stationary multi-armed bandit algorithm, which learns a stochastic policy for selecting tasks. These signals of learning progress could in theory be applied or adapted to the reinforcement learning setting as well.
Graves et al. (2017) also make an interesting observation, which is that using a curriculum is similar to changing the step size of the learning algorithm. Specifically, in their experiments, they found that a random curriculum still serves as a strong baseline, because all tasks in the set provide a gradient (note, however, that in the reinforcement learning setting, because the policy affects the distribution of states an agent encounters, random training can be significantly worse). Easier tasks provide a stronger gradient, while harder tasks provide a gradient closer to 0. Thus, choosing easy, useful tasks allows the algorithm to take larger steps and converge faster.

More recently, Fan et al. (2018) frame curriculum learning as “Learning to Teach,” where a teacher agent learns to train a learning agent using a curriculum. The process is formulated as an MDP between these two interacting agents, similar to the MDP approaches discussed in Section 4.2.4: the teacher agent selects the training data, loss function, and hypothesis space, while the learning agent trains given the parameters specified by the teacher. The state space of the MDP is represented as a combination of features of the data, features of the student model, and features that represent the combination of both data and learner models. The reward signal is the accuracy on a held-out development set. Training a teacher agent can be computationally expensive. They amortize this cost by using a learned teacher agent to teach a new student with the same architecture. For example, they train the teacher using the first half of MNIST, and use the learned teacher to train a new student from the second half of MNIST. Another way they amortize the cost is to train a new student with a different architecture (e.g., changing from ResNet32 to ResNet110). Similar ideas have been explored in the reinforcement learning setting. However, the test set distribution there is different from the training set distribution, which makes performing these kinds of evaluations more challenging. Nonetheless, showing that the cost of training a teacher can be amortized is an important direction for future work.

Finally, Jiang et al. (2015) explore the idea of self-paced curriculum learning for supervised learning, which unifies and takes advantage of the benefits of self-paced learning and curriculum learning. In their terminology, curriculum learning uses prior knowledge, but does not adapt to the learner. Specifically, a curriculum is characterized by a ranking function, which orders a dataset of samples by priority. This function is usually derived from predetermined heuristics, and cannot be adjusted by feedback from the learner. In contrast, self-paced learning (SPL) adjusts to the learner, but does not incorporate prior knowledge and leads to overfitting. In SPL, the curriculum design is implicitly embedded as a regularization term in the learning objective. However, during learning, the training loss usually dominates over the regularization, leading to overfitting. Their paper proposes a framework that unifies these two ideas into a concise optimization problem, and discusses several concrete implementations. The idea is to replace the regularization term in SPL with a self-paced function, such that the weights lie within a predetermined curriculum region. In short, the curriculum region induces _a weak ordering_ over the samples, and the self-paced function determines the actual learning scheme within that ordering.
The idea has parallels to a task-level curriculum for RL, where the curriculum induces a weak ordering over samples from all tasks, with the learning algorithm determining the actual scheme within that ordering.

### 5.3 Algorithmically Designed Curricula in Education

Curriculum learning has also been widely used for building effective Intelligent Tutoring Systems (ITS) for human education (Iglesias et al., 2003, 2009; Green et al., 2011; Brunskill and Russell, 2011; Doroudi et al., 2016). An ITS involves a student interacting with an intelligent tutor (a computer-based system), with the goal of helping the student to master all skills quickly, using as little learning content as possible. Given that students have different learning needs, styles, and capabilities, the intelligent tutor should be able to provide customized instructions to them. To achieve this goal, one common strategy is called _curriculum sequencing_, which aims to provide the learning materials in a meaningful order that maximizes learning for students with different knowledge levels. The main problem this strategy must solve is to find the most effective lesson to propose next, given the student's current learning needs and capabilities.

Reinforcement learning is one of the machine learning techniques that has been used with intelligent tutors to partially automate construction of the student model and to automatically compute an optimal teaching policy (Woolf, 2007). One advantage of using RL methods in tutoring is that the model can learn adaptive teaching actions based on each individual student's performance in real time, without needing to encode the complex pedagogical rules that the system otherwise requires to teach effectively (e.g., how to sequence the learning content, when and how to provide an exercise). Another advantage is that it is a general, domain-independent technique that can be applied in any ITS. As a concrete example, Iglesias et al. (2003, 2009) adapt $Q$-learning (Watkins, 1989) to an adaptive and intelligent educational system to allow it to automatically learn how to teach each student. They formulate the learning problem as an RL problem, where the state is defined as the description of the student's knowledge, indicating whether the student has learned each knowledge item. The set of actions the intelligent tutor can execute includes selecting and showing a knowledge item to the student. A positive reward is given when all required content has been learned; otherwise, no reward is given. The system evaluates the student's knowledge state through tests, which show how much the student knows about each knowledge item. The $Q$-value estimates the usefulness of executing an action when the student is in a particular knowledge state. The tutoring problem can then be solved using the traditional $Q$-learning algorithm.
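To make the formulation concrete, here is a minimal tabular sketch of such a tutor, with the state as a bit vector of mastered items and a reward only when everything is learned. The `student` simulator (with a `learn(state, item)` method returning the updated knowledge state) is an assumed placeholder, not the system of Iglesias et al.

```python
import random

def tutor_q_learning(items, student, episodes=500,
                     alpha=0.2, gamma=0.9, eps=0.1):
    """Q-learning tutor (sketch): states are bit vectors of mastered items,
    actions present one item, and reward 1 arrives only once all items
    are mastered."""
    q = {}
    n = len(items)
    for _ in range(episodes):
        state = (0,) * n                       # nothing mastered yet
        while sum(state) < n:
            if random.random() < eps:
                a = random.randrange(n)        # explore
            else:
                a = max(range(n), key=lambda i: q.get((state, i), 0.0))
            next_state = student.learn(state, items[a])  # assumed simulator
            r = 1.0 if sum(next_state) == n else 0.0
            best_next = max(q.get((next_state, i), 0.0) for i in range(n))
            q[(state, a)] = q.get((state, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((state, a), 0.0))
            state = next_state
    return q
```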
Green et al. (2011) propose using a multi-layered Dynamic Bayes Net (DBN) to model the teaching problem in an ITS. The main idea is to model the dynamics of a student's skill acquisition using a DBN, which is normally used in RL to represent transition functions for state spaces. More specifically, they formulate the problem as a factored MDP, where the state consists of one factor for each skill, corresponding to the student's proficiency on that particular skill. The actions are to either provide a hint or to pose a problem about a particular skill to the student. From a history of teacher-student interaction, the teacher can model the student's proficiency state, with the goal of teaching the student to achieve the highest possible proficiency value on each skill, using as few problems and hints as possible. Subsequently, the learned DBN model is used by a planning algorithm to search for the optimal teaching policy, mapping proficiency states of student knowledge to the most effective problem or hint to pose next.

To allow the automated teacher to select a sequence of pedagogical actions in cases where the learner's knowledge may be unobserved, a different problem formulation is posed by Rafferty et al. (2016). They formulate teaching as a partially observable Markov decision process (POMDP), where the learner's knowledge state is considered a hidden state, corresponding to the learner's current understanding of the concept being taught. The actions the automated teacher can select are pedagogical choices, such as examples or short quizzes. The learner's next knowledge state depends on her current knowledge state and the pedagogical action the teacher chooses. Changes in the learner's knowledge state reflect learning. In this framework, the automated teacher makes some assumptions about student learning, referred to as the learner model: it specifies the space of possible knowledge states and how the knowledge state changes. The teacher can then update its beliefs about the learner's current knowledge state based on new observations, given this learner model. Using this POMDP framework, they explore how different learner models affect the teacher's selection of pedagogical actions.

While most approaches seek solely to maximize overall learning gains, Ramachandran and Scassellati (2014) propose an RL-based approach in which a personalized social robot tutors children, maximizing both learning gains and sustained engagement over the student-robot interaction. The main goal of the social robot is to learn the ordering of questions presented to a child, based on difficulty level and the child's engagement level in real time. To represent the idea that children with different knowledge levels need different curricula, each child is categorized into a given group based on knowledge level at the start of the one-on-one tutoring interaction. An optimal teaching policy is then learned specific to each group. In particular, their approach consists of a training phase and an interaction phase. In the training phase, participants are asked to complete a tutoring exercise. A pretest and a post-test are used to evaluate the participant's relative learning gains, which also serve as the reward function for learning an optimal policy during the training phase. Subsequently, in the interaction phase, the child's real-time engagement is detected, serving as another reward signal for the RL algorithm to further optimize the teaching policy.

Non-RL-based algorithms have been considered as well. Ballera et al. (2014) leverage the roulette wheel selection algorithm (RWSA) to perform personalized topic sequencing in e-learning systems. RWSA is typically used in genetic algorithms to arrange the chromosomes based on their fitness function, such that individuals with a higher fitness value have a higher probability of being selected (Goldberg, 1989). Similarly, in an e-learning system, a chromosome is denoted by a lesson. Each lesson has a fitness value that dynamically changes based on the student's learning performance.
This fitness value indicates how well the topic was learned by the student, depending on three performance parameters: the exam performance, study performance, and review performance of the learner. A lower fitness value means that the student has a poorer understanding of the topic. Thus, a reversed mechanism of RWSA is implemented, so as to select the lessons with lower fitness values more often for reinforcement. This reversed RWSA algorithm is then combined with a linear ranking algorithm to sort the lessons.

## 6 Open Questions

Through our survey of the literature, we have identified several open problems that have not been sufficiently studied in past work, and could be useful avenues for future research.

### 6.1 Fully Automated Task Creation

Task creation is an important piece of the method of curriculum learning. Whether tasks are created “on-demand” or all in advance, the quality of the pool of tasks generated directly affects the quality of curricula that can be produced. In addition, the _quantity_ of tasks produced affects the search space and efficiency of curriculum sequencing algorithms. Despite this, very limited work (see Section 4.1) has been done on the problem of automatically generating tasks. Existing work either assumes the pool of tasks is manually crafted and specified beforehand, or defines a set of rules for semi-automatically creating tasks. However, these rules often have hyper-parameters that control how many tasks are created, and these are also usually manually tuned. Reducing the amount of manual input required by these methods remains an important area for future work.

### 6.2 Transferring Different Types of Knowledge

Between each pair of tasks in a curriculum, knowledge must be transferred from one task to the subsequent task. In virtually all of the works surveyed, the type of knowledge transferred has been fixed. For example, a value function was always transferred between tasks by Narvekar et al. (2017), while a shaping reward was always transferred by Svetlik et al. (2017). However, this limitation raises the question of whether different tasks could benefit from extracting different types of knowledge. For instance, it may be useful to extract an option from one task, and a model from another. Thus, in addition to deciding _which_ task to transfer from, we could also ask _what_ to extract and transfer from that task. Past transfer learning literature has shown that many forms of transfer are possible. The best type of knowledge to extract may differ by task, and techniques will need to be developed to effectively combine these different types of knowledge.

### 6.3 Reusing Curricula and Sim-to-Real Curriculum Learning

Another limitation of many curriculum learning approaches is that the time to generate a curriculum can be greater than the time to learn the target task outright. This shortcoming stems from the fact that curricula are typically learned independently for each agent and target task. However, in areas such as human education, curricula are used to train multiple students in multiple subjects. Thus, one way to amortize the cost would be to learn a curriculum to train multiple different agents, or to solve multiple different target tasks (Narvekar and Stone, 2020). Another option for amortizing the cost is to learn curricula for a sim-to-real setting on physical robots, where a curriculum is learned in simulation and then used to train a physical robot.
While the exact weights of the policy learned in simulation would not apply in the real world, the semantics of the curriculum tasks might. Therefore, the physical robot could go through the same training regimen, but learn using the physics and dynamics of the real world.

### 6.4 Combining Task Generation and Sequencing

The curriculum learning method can be thought of as consisting of 3 parts: task generation, sequencing, and transfer learning. For the most part, previous work has tackled each of these pieces independently. For example, sequencing methods typically assume the tasks are prespecified, or that a task generation method exists. However, an interesting question is whether the task generation and task sequencing phases can be done simultaneously, by directly generating the next task in the curriculum. Some very preliminary work has been done in this direction in the context of video game level generation. For example, Green et al. (2019) used an evolutionary algorithm to generate maps for a gridworld, where each tile had a different element. The generator was optimized to maximize the loss of a deep RL agent's network, inducing a training curriculum. Combining task generation and sequencing has additional challenges, such as specifying the space of possible maps, ensuring those maps are valid/solvable, and creating maps that are challenging, but not too difficult to solve. In addition, training the generator can be very expensive. However, it promises an end-to-end solution that could reduce the amount of human intervention needed to design curricula.

### 6.5 Theoretical Results

There have been many practical applications of curricula to speed up learning in both supervised and reinforcement learning. However, despite empirical evidence that curricula are beneficial, there is a lack of theoretical results analyzing when and why they are useful, and how they should be created. An initial analysis in the context of supervised learning was done by Weinshall et al. (2018) and Weinshall and Amir (2018). They analyzed whether reordering samples in linear regression and binary classification problems could improve the ability to learn new concepts. They did this analysis by formalizing the idea of an Ideal Difficulty Score (IDS), which is the loss of the example with respect to the optimal hypothesis, and the Local Difficulty Score (LDS), which is the loss of the example with respect to the current hypothesis. These are 2 ways to classify the difficulty of a sample, which can be used as a means to sequence samples. They showed that the convergence of an algorithm like stochastic gradient descent monotonically decreases with the IDS, and monotonically increases with the LDS. An open question is whether similarly grounded metrics for the difficulty of tasks can be identified in reinforcement learning, and what kind of convergence guarantees we can draw from them.

### 6.6 Understanding General Principles for Curriculum Design

Determining the difficulty of a training example for an agent, and ensuring that each example presented to the agent is suitable given its current ability, is a major challenge in curriculum learning. In most existing work, the curriculum is generated either automatically (see Section 4.2), by ordering samples from the target task or iteratively selecting intermediate tasks with increasing difficulty tailored to the current ability of the learner; or manually by domain experts, who will typically have specialized knowledge of the problem domain.
Very limited work (see Section 4.2.5) has been done to better understand how non-expert humans design curricula. The way we define curriculum design strategies still leaves a lot to be defined by human teachers. Can non-expert humans design effective curricula for a given final task? What kinds of curriculum design strategies do they tend to follow when building curricula? If we could identify some general principles non-expert humans follow for designing and/or sequencing more “interesting” intermediate tasks into a curriculum, we could incorporate these insights into the automatic process of generating useful source tasks for any task domain. Furthermore, can we adapt curriculum learning algorithms to better take advantage of this type of non-expert guidance to learn more efficiently? We believe a better understanding of the curriculum-design strategies used by non-expert humans may help us to 1) understand the general principles that make some curriculum strategies work better than others, and 2) inspire the design of new machine-learning algorithms and interfaces that better accommodate the natural tendencies of human trainers.

## 7 Conclusion

This survey formalized the concept of a curriculum, and the method of curriculum learning in the context of reinforcement learning. Curriculum learning is a 3-part approach consisting of 1) task generation, 2) sequencing, and 3) transfer learning. We systematically surveyed existing work addressing each of these parts, with a particular focus on sequencing methods. We broke down sequencing methods into five categories, based on the assumptions they make about intermediate tasks in the curriculum. The simplest of these are sample sequencing methods, which reorder samples from the final task itself, but do not explicitly change the domain. These were followed by co-learning methods, where a curriculum emerges from the interaction of several agents in the same environment. Next we considered methods that explicitly change the MDP to produce intermediate tasks. Some of these assume that the environment dynamics stay the same, but that the initial/terminal state distribution and reward function can change. Others make no restrictions on the differences allowed from the target task MDP. Finally, we also discussed how humans approach sequencing, to shed light on manually designed curricula in existing work. Our survey of the literature concluded with a list of open problems, which we think will serve as worthwhile directions for future work. As a budding area in reinforcement learning, we hope that this survey will provide a common foundation and terminology to promote discussion and advancement in this field.

## Acknowledgments

We would like to sincerely thank Brad Knox, Garrett Warnell, and the anonymous reviewers for helpful comments and suggestions that improved the presentation of many ideas in this article. Part of this work has taken place in the Learning Agents Research Group (LARG) at the Artificial Intelligence Laboratory, The University of Texas at Austin. LARG research is supported in part by grants from the National Science Foundation (CPS-1739964, IIS-1724157, NRI-1925082), the Office of Naval Research (N00014-18-2243), Future of Life Institute (RFP2-000), Army Research Office (W911NF-19-2-0333), DARPA, Lockheed Martin, General Motors, and Bosch. The views and conclusions contained in this document are those of the authors alone. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work.
The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research. Part of this work has taken place in the Sensible Robots Research Group at the University of Leeds, which is partially supported by the Engineering and Physical Sciences Research Council of the UK (EP/R031193/1, EP/S005056/1), and the British Council. Part of this work has taken place in the Control, Robotics, Identification and Signal Processing (CRISP) Laboratory at Tufts University which is partially supported by DARPA (W911NF-19-2-0006), the Verizon Foundation, PTC Inc., and the Center for Applied Brain and Cognitive Sciences (CABCS). Part of this work has taken place in the Whiteson Research Lab at the University of Oxford, which is partially supported by the European Research Council (ERC), under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 637713). Part of this work has taken place in the Intelligent Robot Learning (IRL) Lab at the University of Alberta, which is supported in part by research grants from the Alberta Machine Intelligence Institute. ## References * Ammar et al. (2014) Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew E Taylor. Online multi-task learning for policy gradient methods. In _International Conference on Machine Learning (ICML)_ , pages 1206–1214, 2014. * Ammar et al. (2015) Haitham Bou Ammar, Eric Eaton, José Marcio Luna, and Paul Ruvolo. Autonomous cross-domain knowledge transfer in lifelong policy gradient reinforcement learning. In _International Joint Conference on Artificial Intelligence (IJCAI)_ , pages 3345–3351, 2015. * Andrychowicz et al. (2017) Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 5048–5058, 2017. * Asada et al. (1996) Minoru Asada, Shoichi Noda, Sukoya Tawaratsumida, and Koh Hosoda. Purposive behavior acquisition for a real robot by vision-based reinforcement learning. _Machine Learning_ , 23(2-3):279–303, 1996. * Baker et al. (2020) Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In _International Conference on Learning Representations (ICLR)_ , 2020. * Ballera et al. (2014) Melvin Ballera, Ismail Ateya Lukandu, and Abdalla Radwan. Personalizing e-learning curriculum using reversed roulette wheel selection algorithm. In _International Conference on Education Technologies and Computers (ICETC)_ , pages 91–97. IEEE, 2014. * Bansal et al. (2018) Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. In _International Conference on Learning Representations (ICLR)_ , 2018. * Baranes and Oudeyer (2013) Adrien Baranes and Pierre-Yves Oudeyer. Active learning of inverse models with intrinsically motivated goal exploration in robots. _Robotics and Autonomous Systems_ , 61(1):49–73, 2013. * Bassich et al. (2020) Andrea Bassich, Francesco Foglino, Matteo Leonetti, and Daniel Kudenko. Curriculum learning with a progression function. https://arxiv.org/abs/2008.00511, 2020. * Bellemare et al. (2013) Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. 
_Journal of Artificial Intelligence Research_ , 47:253–279, 2013. * Bengio et al. (2009) Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In _International Conference on Machine Learning (ICML)_ , pages 41–48, 2009. * Brunskill and Russell (2011) Emma Brunskill and Stuart Russell. Partially observable sequential decision making for problem selection in an intelligent tutoring system. In _Poster at International Conference on Educational Data Mining (EDM)_. Citeseer, 2011. * Caruana (1997) Rich Caruana. Multitask learning. _Machine Learning_ , 28(1):41–75, 1997. * Clegg et al. (2017) Alexander Clegg, Wenhao Yu, Zackory Erickson, Jie Tan, C Karen Liu, and Greg Turk. Learning to navigate cloth using haptics. In _International Conference on Intelligent Robots and Systems (IROS)_ , pages 2799–2805, 2017. * Da Silva and Reali Costa (2018) Felipe Leno Da Silva and Anna Reali Costa. Object-oriented curriculum generation for reinforcement learning. In _International Conference on Autonomous Agents & Multiagent Systems (AAMAS)_, 2018. * Dorigo et al. (1991) Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. The ant system: An autocatalytic optimizing process. _Technical Report_ , 1991. * Doroudi et al. (2016) Shayan Doroudi, Kenneth Holstein, Vincent Aleven, and Emma Brunskill. Sequence matters but how exactly? a method for evaluating activity sequences from data. _Grantee Submission_ , 2016. * Elman (1993) Jeffrey L Elman. Learning and development in neural networks: The importance of starting small. _Cognition_ , 48(1):71–99, 1993. * Fachantidis et al. (2013) Anestis Fachantidis, Ioannis Partalas, Grigorios Tsoumakas, and Ioannis Vlahavas. Transferring task models in reinforcement learning agents. _Neurocomputing_ , 107:23–32, 2013. * Fan et al. (2018) Yang Fan, Fei Tian, Tao Qin, Xiang-Yang Li, and Tie-Yan Liu. Learning to teach. In _International Conference on Learning Representations (ICLR)_ , 2018. * Fang et al. (2019) Meng Fang, Tianyi Zhou, Yali Du, Lei Han, and Zhengyou Zhang. Curriculum-guided hindsight experience replay. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 12602–12613, 2019. * Fernández et al. (2010) Fernando Fernández, Javier García, and Manuela Veloso. Probabilistic policy reuse for inter-task transfer learning. _Robotics and Autonomous Systems_ , 58(7):866–871, 2010. * Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In _International Conference on Machine Learning (ICML)_ , pages 1126–1135. JMLR. org, 2017. * Florensa et al. (2017) Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, and Pieter Abbeel. Reverse curriculum generation for reinforcement learning. In _Conference on Robot Learning (CoRL)_ , 2017. * Florensa et al. (2018) Carlos Florensa, David Held, Xinyang Geng, and Pieter Abbeel. Automatic goal generation for reinforcement learning agents. In _International Conference on Machine Learning (ICML)_ , pages 1514–1523, 2018. * Foglino et al. (2019a) Francesco Foglino, Christiano Coletto Christakou, and Matteo Leonetti. An optimization framework for task sequencing in curriculum learning. In _International Conference on Developmental Learning (ICDL-EPIROB)_ , 2019a. * Foglino et al. (2019b) Francesco Foglino, Christiano Coletto Christakou, Ricardo Luna Gutierrez, and Matteo Leonetti. Curriculum learning for cumulative return maximization. 
In _International Joint Conference on Artificial Intelligence (IJCAI)_ , 2019b. * Foglino et al. (2019c) Francesco Foglino, Matteo Leonetti, Simone Sagratella, and Ruggiero Seccia. A gray-box approach for curriculum learning. In _World Congress on Global Optimization_ , 2019c. * Fujii et al. (1998) Teruo Fujii, Yoshikazu Arai, Hajime Asama, and Isao Endo. Multilayered reinforcement learning for complicated collision avoidance problems. In _International Conference on Robotics and Automation (ICRA)_ , volume 3, pages 2186–2191. IEEE, 1998. * Glover and Laguna (1998) Fred Glover and Manuel Laguna. Tabu search. In _Handbook of combinatorial optimization_ , pages 2093–2229. Springer, 1998. * Goldberg (1989) David E Goldberg. _Genetic Algorithms in Search, Optimization and Machine Learning_. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1st edition, 1989. * Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 2672–2680, 2014. * Graves et al. (2017) Alex Graves, Marc G Bellemare, Jacob Menick, Remi Munos, and Koray Kavukcuoglu. Automated curriculum learning for neural networks. In _International Conference on Machine Learning (ICML)_ , 2017. * Green et al. (2011) Derek T Green, Thomas J Walsh, Paul R Cohen, and Yu-Han Chang. Learning a skill-teaching curriculum with dynamic Bayes nets. In _Innovative Applications of Artificial Intelligence (IAAI)_ , 2011\. * Green et al. (2019) Michael Cerny Green, Benjamin Sergent, Pushyami Shandilya, and Vibhor Kumar. Evolutionarily-curated curriculum learning for deep reinforcement learning agents. In _AAAI Reinforcement Learning in Games Workshop_ , 2019. * Griffith et al. (2013) Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles Isbell, and Andrea L Thomaz. Policy shaping: Integrating human feedback with reinforcement learning. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 2625–2633, 2013. * Hanna et al. (2017) Josiah Hanna, Philip Thomas, Peter Stone, and Scott Niekum. Data-efficient policy evaluation through behavior policy search. In _International Conference on Machine Learning (ICML)_ , August 2017\. * Hosu and Rebedea (2016) Ionel-Alexandru Hosu and Traian Rebedea. Playing Atari games with deep reinforcement learning and human checkpoint replay. In _Workshop on Evaluating General-Purpose AI (EGPAI)_ , 2016. * Iglesias et al. (2003) Ana Iglesias, Paloma Martínez, and Fernando Fernández. An experience applying reinforcement learning in a web-based adaptive and intelligent educational system. _Informatics in Education_ , 2:223–240, 2003. * Iglesias et al. (2009) Ana Iglesias, Paloma Martínez, Ricardo Aler, and Fernando Fernández. Learning teaching strategies in an adaptive and intelligent educational system through reinforcement learning. _Applied Intelligence_ , 31(1):89–106, 2009. * Ivanovic et al. (2019) Boris Ivanovic, James Harrison, Apoorva Sharma, Mo Chen, and Marco Pavone. Barc: Backward reachability curriculum for robotic reinforcement learning. In _International Conference on Robotics and Automation (ICRA)_ , pages 15–21. IEEE, 2019. * Jaderberg et al. (2017) Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. 
In _International Conference on Learning Representations (ICLR)_ , 2017. * Jain and Tulabandhula (2017) Vikas Jain and Theja Tulabandhula. Faster reinforcement learning using active simulators. In _NIPS Workshop on Teaching Machines, Robots, and Humans_ , 2017\. * Jiang et al. (2015) Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G Hauptmann. Self-paced curriculum learning. In _Association for the Advancement of Artificial Intelligence (AAAI)_ , 2015. * Karpathy and Van De Panne (2012) Andrej Karpathy and Michiel Van De Panne. Curriculum learning for motor skills. In _Canadian Conference on Artificial Intelligence_ , pages 325–330. Springer, 2012. * Khan et al. (2011) Faisal Khan, Bilge Mutlu, and Xiaojin Zhu. How do humans teach: On curriculum learning and teaching dimension. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 1449–1457, 2011. * Kim and Choi (2018) Tae-Hoon Kim and Jonghyun Choi. Screenernet: Learning self-paced curriculum for deep neural networks. _arXiv preprint arXiv:1801.00904_ , 2018. * Knox and Stone (2009) W Bradley Knox and Peter Stone. Interactively shaping agents via human reinforcement: The TAMER framework. In _International Conference on Knowledge Capture_ , 2009. * Knox and Stone (2012) W Bradley Knox and Peter Stone. Reinforcement learning from simultaneous human and MDP reward. In _International Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , pages 475–482, 2012. * Lazaric (2012) Alessandro Lazaric. Transfer in reinforcement learning: a framework and a survey. In _Reinforcement Learning_ , pages 143–173. Springer, 2012. * Lazaric and Restelli (2011) Alessandro Lazaric and Marcello Restelli. Transfer from multiple MDPs. In _Advances in Neural Information Processing Systems (NIPS)_ , 2011\. * Lazaric et al. (2008) Alessandro Lazaric, Marcello Restelli, and Andrea Bonarini. Transfer of samples in batch reinforcement learning. In _International Conference on Machine Learning (ICML)_ , pages 544–551, 2008. * Lee et al. (2019) Su Young Lee, Choi Sungik, and Sae-Young Chung. Sample-efficient deep reinforcement learning via episodic backward update. In _Advances in Neural Information Processing Systems (NeurIPS)_ , pages 2110–2119, 2019. * Loftin et al. (2016) Robert Loftin, Bei Peng, James MacGlashan, Michael L Littman, Matthew E Taylor, Jeff Huang, and David L Roberts. Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning. _Autonomous Agents and Multi-Agent Systems_ , 30(1):30–59, 2016. * MacAlpine and Stone (2018) Patrick MacAlpine and Peter Stone. Overlapping layered learning. _Artificial Intelligence_ , 254:21–43, 2018. * MacGlashan et al. (2017) James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David L Roberts, Matthew E Taylor, and Michael L Littman. Interactive learning from policy-dependent human feedback. In _International Conferences on Machine Learning (ICML)_ , 2017\. * Matiisen et al. (2017) Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher-student curriculum learning. _IEEE Transactions on Neural Networks and Learning Systems_ , 2017\. * Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. _Nature_ , 518(7540):529, 2015. * Narvekar and Stone (2019) Sanmit Narvekar and Peter Stone. 
Learning curriculum policies for reinforcement learning. In _International Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , May 2019. * Narvekar and Stone (2020) Sanmit Narvekar and Peter Stone. Generalizing curricula for reinforcement learning. In _Lifelong Learning Workshop at ICML_ , 2020. * Narvekar et al. (2016) Sanmit Narvekar, Jivko Sinapov, Matteo Leonetti, and Peter Stone. Source task creation for curriculum learning. In _International Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , Singapore, 2016. * Narvekar et al. (2017) Sanmit Narvekar, Jivko Sinapov, and Peter Stone. Autonomous task sequencing for customized curriculum design in reinforcement learning. In _International Joint Conference on Artificial Intelligence (IJCAI)_ , volume 147, page 149, 2017. * Ow and Morton (1988) Peng Si Ow and Thomas E Morton. Filtered beam search in scheduling. _The International Journal Of Production Research_ , 26(1):35–62, 1988. * Peng et al. (2018) Bei Peng, James MacGlashan, Robert Loftin, Michael L Littman, David L Roberts, and Matthew E Taylor. Curriculum design for machine learners in sequential decision tasks. _IEEE Transactions on Emerging Topics in Computational Intelligence_ , 2(4):268–277, 2018. * Peterson (2004) Gail B Peterson. A day of great illumination: B. F. Skinner’s discovery of shaping. _Journal of the Experimental Analysis of Behavior_ , 82(3):317–328, 2004. * Pinto et al. (2017) Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. In _International Conference on Machine Learning (ICML)_ , pages 2817–2826, 2017. * Racaniere et al. (2019) Sebastien Racaniere, Andrew Lampinen, Adam Santoro, David Reichert, Vlad Firoiu, and Timothy Lillicrap. Automated curriculum generation through setter-solver interactions. In _International Conference on Learning Representations (ICLR)_ , 2019. * Rafferty et al. (2016) Anna N Rafferty, Emma Brunskill, Thomas L Griffiths, and Patrick Shafto. Faster teaching via pomdp planning. _Cognitive Science_ , 40(6):1290–1332, 2016. * Ramachandran and Scassellati (2014) Aditi Ramachandran and Brian Scassellati. Adapting difficulty levels in personalized robot-child tutoring interactions. In _Workshop at the AAAI Conference on Artificial Intelligence_ , 2014\. * Ren et al. (2018) Zhipeng Ren, Daoyi Dong, Huaxiong Li, and Chunlin Chen. Self-paced prioritized curriculum learning with coverage penalty in deep reinforcement learning. _IEEE Transactions on Neural Networks and Learning Systems_ , 29(6):2216–2226, 2018. * Riedmiller et al. (2018) Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom van de Wiele, Vlad Mnih, Nicolas Heess, and Jost Tobias Springenberg. Learning by playing solving sparse reward tasks from scratch. In _International Conference on Machine Learning (ICML)_ , pages 4344–4353, 2018. * Ring (1997) Mark B Ring. Child: A first step towards continual learning. _Machine Learning_ , 28(1):77–104, 1997. * Rohde and Plaut (1999) Douglas LT Rohde and David C Plaut. Language acquisition in the absence of explicit negative evidence: How important is starting small? _Cognition_ , 72(1):67–109, 1999. * Rosin and Belew (1997) Christopher D Rosin and Richard K Belew. New methods for competitive coevolution. _Evolutionary computation_ , 5(1):1–29, 1997\. * Rusu et al. (2016) Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 
Progressive neural networks. _arXiv preprint arXiv:1606.04671_ , 2016. * Ruvolo and Eaton (2013a) Paul Ruvolo and Eric Eaton. ELLA: An efficient lifelong learning algorithm. In _International Conference on Machine Learning (ICML)_ , 2013a. * Ruvolo and Eaton (2013b) Paul Ruvolo and Eric Eaton. Active task selection for lifelong machine learning. In _Association for the Advancement of Artificial Intelligence (AAAI)_ , 2013b. * Sanger (1994) Terence D Sanger. Neural network learning control of robot manipulators using gradually increasing task difficulty. _IEEE Transactions on Robotics and Automation_ , 10(3):323–333, 1994. * Schaal (1997) Stefan Schaal. Learning from demonstration. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 1040–1046, 1997. * Schaul et al. (2015) Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In _International Conference on Machine Learning (ICML)_ , 2015. * Schaul et al. (2016) Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In _International Conference on Learning Representations (ICLR)_ , 2016. * Schmidhuber (2013) Jürgen Schmidhuber. Powerplay: Training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. _Frontiers in Psychology_ , 4:313, 2013. * Shao et al. (2018) Kun Shao, Yuanheng Zhu, and Dongbin Zhao. Starcraft micromanagement with reinforcement learning and curriculum transfer learning. _IEEE Transactions on Emerging Topics in Computational Intelligence_ , 2018. * Silver et al. (2014) David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In _International Conference on Machine Learning (ICML)_ , 2014. * Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. _Nature_ , 529(7587):484, 2016. * Sinapov et al. (2015) Jivko Sinapov, Sanmit Narvekar, Matteo Leonetti, and Peter Stone. Learning inter-task transferability in the absence of target task samples. In _International Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , pages 725–733, 2015. * Skinner (1958) Burrhus F Skinner. Reinforcement today. _American Psychologist_ , 13(3):94, 1958. * Soni and Singh (2006) Vishal Soni and Satinder Singh. Using homomorphisms to transfer options across continuous reinforcement learning domains. In _American Association for Artificial Intelligence (AAAI)_ , 2006\. * Srivastava et al. (2013) Rupesh Kumar Srivastava, Bas R. Steunebrink, and Jürgen Schmidhuber. First experiments with powerplay. _Neural Networks_ , 41:130 – 136, 2013. Special Issue on Autonomous Learning. * Stanley et al. (2005) Kenneth O Stanley, Bobby D Bryant, and Risto Miikkulainen. Evolving neural network agents in the nero video game. In _IEEE Symposium on Computational Intelligence and Games (CIG)_ , Piscataway, NJ, 2005. * Stone and Veloso (1994) Peter Stone and Manuela Veloso. Learning to solve complex planning problems: Finding useful auxiliary problems. In _AAAI Fall Symposium on Planning and Learning_ , pages 137–141, 1994. * Suay and Chernova (2011) Halit Bener Suay and Sonia Chernova. Effect of human guidance and state space size on interactive reinforcement learning. 
In _International Conference on Robot and Human Interactive Communication (RO-MAN)_ , pages 1–6, 2011. * Subramanian et al. (2016) Kaushik Subramanian, Charles L Isbell Jr, and Andrea L Thomaz. Exploration from demonstration for interactive reinforcement learning. In _International Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , pages 447–456, 2016. * Sukhbaatar et al. (2018) Sainbayar Sukhbaatar, Zeming Li, Ilya Kostrikov, Gabriel Synnaeve, Arthur Szlam, and Rob Fergus. Intrinsic motivation and automatic curricula via asymmetric self-play. In _International Conference on Learning Representations (ICLR)_ , 2018. * Sutton and Barto (1998) Richard Sutton and Andrew Barto. _Reinforcement Learning: An Introduction_. MIT Press, 1998. * Svetlik et al. (2017) Maxwell Svetlik, Matteo Leonetti, Jivko Sinapov, Rishi Shah, Nick Walker, and Peter Stone. Automatic curriculum graph generation for reinforcement learning agents. In _Association for the Advancement of Artificial Intelligence (AAAI)_ , pages 2590–2596, 2017. * Taylor (2009) Matthew E Taylor. Assisting transfer-enabled machine learning algorithms: Leveraging human knowledge for curriculum design. In _The AAAI Spring Symposium on Agents that Learn from Human Teachers_ , 2009. * Taylor and Stone (2005) Matthew E Taylor and Peter Stone. Behavior transfer for value-function-based reinforcement learning. In Frank Dignum, Virginia Dignum, Sven Koenig, Sarit Kraus, Munindar P. Singh, and Michael Wooldridge, editors, _International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , pages 53–59, New York, NY, 2005. ACM Press. * Taylor and Stone (2009) Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. _Journal of Machine Learning Research_ , 10(1):1633–1685, 2009. * Taylor et al. (2007) Matthew E Taylor, Peter Stone, and Yaxin Liu. Transfer learning via inter-task mappings for temporal difference learning. _Journal of Machine Learning Research_ , 8(1):2125–2167, 2007. * Taylor et al. (2008) Matthew E Taylor, Gregory Kuhlmann, and Peter Stone. Autonomous transfer for reinforcement learning. In _International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , 2008. * Tesauro (1995) Gerald Tesauro. Temporal difference learning and td-gammon. _Communications of the ACM_ , 38(3):58–68, 1995\. * Tessler et al. (2017) Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. In _Association for the Advancement of Artificial Intelligence (AAAI)_ , pages 1553–1561, 2017. * Thomaz and Breazeal (2006) Andrea Lockerd Thomaz and Cynthia Breazeal. Reinforcement learning with human teachers: Evidence of feedback and guidance with implications for learning performance. In _Association for the Advancement of Artificial Intelligence (AAAI)_ , volume 6, pages 1000–1005, 2006. * Thrun (1998) Sebastian Thrun. Lifelong learning algorithms. In Sebastian Thrun and Lorien Pratt, editors, _Learning to Learn_ , pages 181–209. Kluwer Academic Publishers, Norwell, MA, USA, 1998. * Vezhnevets et al. (2016) Alexander Vezhnevets, Volodymyr Mnih, Simon Osindero, Alex Graves, Oriol Vinyals, John Agapiou, et al. Strategic attentive writer for learning macro-actions. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 3486–3494, 2016. * Vinyals et al. 
(2019) Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. _Nature_ , pages 1–5, 2019. * Vygotsky (1978) Lev Semenovich Vygotsky. _Mind in Society: Development of Higher Psychological Processes_. Harvard University Press, 1978. * Wang et al. (2020) Weixun Wang, Tianpei Yang, Yong Liu, Jianye Hao, Xiaotian Hao, Yujing Hu, Yingfeng Chen, Changjie Fan, and Yang Gao. From few to more: Large-scale dynamic multiagent curriculum learning. In _Association for the Advancement of Artificial Intelligence (AAAI)_ , pages 7293–7300, 2020. * Watkins and Dayan (1992) Christopher JCH Watkins and Peter Dayan. Q-learning. _Machine learning_ , 8(3-4):279–292, 1992. * Watkins (1989) Christopher John Cornish Hellaby Watkins. _Learning from delayed rewards_. PhD thesis, King’s College, Cambridge, 1989. * Weinshall and Amir (2018) Daphna Weinshall and Dan Amir. Theory of curriculum learning, with convex loss functions. _arXiv preprint arXiv:1812.03472_ , 2018. * Weinshall et al. (2018) Daphna Weinshall, Gad Cohen, and Dan Amir. Curriculum learning by transfer learning: Theory and experiments with deep networks. In _International Conference on Machine Learning (ICML)_ , pages 5235–5243, 2018. * Wilson et al. (2007) Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical bayesian approach. In _International Conference on Machine Learning (ICML)_ , pages 1015–1022. ACM, 2007. * Woolf (2007) Beverly Park Woolf. _Building Intelligent Interactive Tutors: Student-centered Strategies for Revolutionizing e-Learning_. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2007. * Wu and Tian (2017) Yuxin Wu and Yuandong Tian. Training agent for first-person shooter game with actor-critic curriculum learning. In _International Conference on Learning Representations (ICLR)_ , 2017. * Yang and Asada (1996) Boo-Ho Yang and Haruhiko Asada. Progressive learning and its application to robot impedance learning. _IEEE Transactions on Neural Networks_ , 7(4):941–952, 1996. * Yang et al. (2020) Jiachen Yang, Alireza Nakhaei, David Isele, Kikuo Fujimura, and Hongyuan Zha. Cm3: Cooperative multi-goal multi-stage multi-agent reinforcement learning. In _International Conference on Learning Representations (ICLR)_ , 2020. * Zimmer et al. (2018) Matthieu Zimmer, Yann Boniface, and Alain Dutech. Developmental reinforcement learning through sensorimotor space enlargement. In _International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)_ , pages 33–38. IEEE, 2018.
# Fake News Detection with Different Models

Sairamvinay Vijayaraghavan, Zhiyuan Guo, Ye Wang, John Voong, Wenda Xu, Armand Nasseri, Jiaru Cai, Linda Li, Kevin Vuong, and Eshan Wadhwa

###### Abstract

Problem: The problem we intend to solve is modelled as a binary classification problem. We intend to find the relation between the words and the context in which the words appear within a text, and how this can be used to classify texts as real (negative cases) or fake (positive cases).

High-level description: Many news sources contain false information and are therefore “fake news.” Because there are many “fake news” articles and much fabricated, misleading information on the web, we would like to determine which texts are legitimate (real) and which are illegitimate (fake). Treating this as a binary classification problem, we investigate the effectiveness of different natural language processing models that convert character-based texts into numeric representations, such as TF-IDF, CountVectorizer, and Word2Vec; we determine which model best preserves the contextual information about the text in a fake news data set, and how helpful and effective it is in detecting whether a text is fake news or not.

Results: We find that, out of the three pre-training vectorizing algorithms, Word2Vec performs comparatively the worst in general, and CountVectorizer performs slightly better than the TF-IDF models in most cases. Out of the five fine-tuning algorithms, the neural networks (ANNs and LSTMs) perform better. A combination of CountVectorizer with an LSTM achieves the best performance.

Contribution to the machine learning field: We present a simple model which can be used to classify a given text as “real” or “fake” fairly accurately. This form of pre-training embedding algorithms and then fine-tuning on the downstream supervised task (of binary classification) proves to be efficient and effective in classifying susceptible news text.

## 1 Introduction

For this report, we are exploring the field of natural language processing, which is the broad study of how computers and machines can understand human-to-human communication and how texts are analyzed by machines based on contextual information. In particular, we are using natural language processing to classify news articles as real news or “fake news.” Fake news is misinformation masked under the guise of a real news article, and is used to deceptively influence people’s beliefs. Classifying news articles as “real” or “fake” is a binary classification problem: each sample is classified as positive (fake news) or negative (not fake news). Many studies have used machine learning algorithms to build classifiers based on features like the content and the author’s name and job title, using models such as the convolutional neural network (CNN), recurrent neural network (RNN), feed-forward neural network (FFNN), long short-term memory (LSTM), and logistic regression to find the most optimal model and report its results. In [1], the author built a classifier using natural language processing with models such as CNN, RNN, FFNN, and logistic regression, and concluded that the CNN classifiers could not be as competitive as the RNN classifiers.
The authors in [2] think that their study can be improved by having more features, such as knowing the history of lies spoken by the news reporter or speaker. Moreover, apart from the traditional machine learning methods, new models have also been developed. One of the newer models, TraceMiner, builds an LSTM-RNN model that infers embeddings of social media users from the social network structure and propagates them along the paths of messages, and has provided high classification accuracy [5]. FAKEDETECTOR is another inference model developed to assess the credibility of fake news, and is considered quite reliable and accurate [7]. There have also been studies that take a different approach: one paper surveys the current state-of-the-art technologies that are imperative when adopting and developing fake news detection, and provides a classification of several accurate assessment methods that analyze the text and detect anomalies [3].

These previous approaches lack the clear contextual analysis used in NLP. We consider the semantic meaning of each word, since we believe that the presence of particular words influences the meaning; we regard this as important because the contextual meaning of the text needs to be preserved and analyzed for better classification. Other studies emphasize the user and features related to them. In [4], “45 features…[were used] for predicting accuracy…across four types: structural, user, content, and temporal,” so features included characteristics beyond the text. Article [6] “learn[s] the representations of news articles, creators and subjects simultaneously.” In our project, we emphasize the content by working with articles whose labels relate only to the text, nothing outside that scope, and we use SVM, Logistic Regression, ANN, LSTM, and Random Forest.

We divided this problem into three phases: pre-processing, conversion of the texts into numeric representations using pre-trained algorithms, and evaluation with state-of-the-art machine learning models. We analysed the data set, and in particular how the text is distributed; we then converted each text into a numeric representation using the pre-training models TF-IDF, CountVectorizer (CV), and Word2Vec (W2V); finally, we evaluated the resulting representations with significant machine learning algorithms, such as neural networks and classical classification algorithms.

## 2 Methods

### 2.1 The Dataset

The training data set has five features: ID, title, author, text, and label. The ID uniquely identifies the news article. The title and author are the title and author of the news article, respectively. The text is the content of the article, and may be incomplete. The label indicates whether the article is reliable (real) or not (fake):

label = $\begin{cases}0&\textrm{if reliable news}\\ 1&\textrm{if fake news}\end{cases}$

The training data set contains about 20,800 samples. The test data set does not have labels, so we do not use it; instead, test data are selected at random from the training data set when we evaluate our models. Since we hypothesized that the text, and the words used within the text, are the key to distinguishing between real and fake news samples, we decided to investigate only the text column.
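As a concrete illustration, a minimal loading sketch is shown below. The file name train.csv and the use of pandas are assumptions; the paper only specifies the column layout and that the text column is the focus.

```python
import pandas as pd

# Columns: id, title, author, text, label (0 = reliable/real, 1 = fake).
# "train.csv" is an assumed file name; the paper only describes the layout.
df = pd.read_csv("train.csv").dropna(subset=["text"])

texts = df["text"]    # we investigate only the text column
labels = df["label"]
print(len(df), labels.value_counts().to_dict())  # roughly 20,800 samples in total
```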
### 2.2 Data Pre-processing

#### 2.2.1 Removed numbers

Within the context of a news article title or text, numbers simply quantify claims and do not change the meaning of the text. Therefore, it is best to remove all numbers to minimize noise in our data. We use the string.digits string constant in Python, together with the translate and maketrans methods from Python’s string module, to map all numerical digits to the empty string, effectively removing all digits.

#### 2.2.2 Removed punctuation and special characters

In addition, we removed all characters that are not textual (not alphabetic), such as punctuation and extra delimiters. We used the string.punctuation module in Python to find all punctuation characters, and we removed them from every word in the texts, with the exception of the symbols ‘#’ and ‘@’. Because these characters are used for Twitter hashtags and mentions, we handle them later. Next, we removed an assortment of special characters that do not appear on traditional American keyboards and do not contribute to the meaning of the tweets: the long dash (“–”), single and double Asian quotation marks, ellipsis characters (…), and bullet points (•) were all removed for this reason.

After removing all special characters, there are still a couple of pre-processing cases we account for, using regular expressions to detect the patterns we wish to remove. One such pattern is Twitter hashtags and mentions. In a news setting, hashtags and mentions are often added to try to obtain more search results and relevance, but they often distract from the overall meaning of the news content itself. Since we are primarily concerned with words and their contextual meanings within the text, we assumed that these characters add noise rather than meaning. To detect hashtags and mentions, we use regular expressions to remove all text from a hashtag (#) or @ symbol up to the next space. We also use regular expressions to handle em dashes (—) and runs of two or more consecutive spaces. Em dashes are used in various linguistic contexts, such as joining independent clauses; they do not add to the meaning of the text, but they are surrounded by words of different clauses, so we replaced each em dash with a single space to maintain the integrity of each phrase. Lastly, we replace any run of two or more consecutive spaces with a single space.

Proceeding further, we lowercase all texts and then remove all rows whose text contains foreign-language characters, since we are only interested in identifying fake news in English. To do this, we used the Python package langid to identify the language of each text, and removed all rows with foreign characters. This ensures that the text we preserve consists only of English words with no non-alphabetic characters.

#### 2.2.3 Removed stop words

Stop words are the most common words in a language, such as “a”, “be”, “quite”, and “should”. They are often void of meaning and do not add anything to the content, yet they appear frequently in every text. Hence, we presumed that removing stop words has multiple advantages. For one, it decreases memory overhead, since we cut a large amount of text (and hence narrow down the number of features to train our models on). Second, it reduces noise, since eliminating stop words lets us focus on the more meaningful content (the more distinctive features between the two classes). Removing stop words is not always optimal: sometimes the information we are looking for is contained in the stop words themselves, as in most cases of language modeling or translation, where it is important to keep all of them. In our circumstances, however, we use the semantics of the text to make a decision, so we can safely remove stop words and focus on the more meaningful context words.
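Putting the steps of Sections 2.2.1 through 2.2.3 together, a minimal sketch of the cleaning pipeline might look as follows. The NLTK stop-word list is an assumption (the paper does not name its source of stop words); string, re, and langid are the modules the paper mentions, and texts and labels carry over from the loading sketch above.

```python
import re
import string
from langid import classify            # language identification, as described above
from nltk.corpus import stopwords      # assumed list; run nltk.download("stopwords") once

STOP = set(stopwords.words("english"))
# Keep '#' and '@' out of the removal table; hashtags and mentions are handled by regex.
PUNCT = "".join(c for c in string.punctuation if c not in "#@")
SPECIAL = "–“”‘’…•"                     # long dash, Asian quotes, ellipses, bullets

def clean(text):
    text = text.translate(str.maketrans("", "", string.digits))    # 2.2.1: drop numbers
    text = text.translate(str.maketrans("", "", PUNCT + SPECIAL))  # 2.2.2: drop punctuation
    text = re.sub(r"[#@]\S*", "", text)     # drop Twitter hashtags and mentions
    text = text.replace("—", " ")           # em dash separates clauses; keep a word boundary
    text = re.sub(r"\s{2,}", " ", text).strip().lower()
    return " ".join(w for w in text.split() if w not in STOP)      # 2.2.3: drop stop words

# Keep only English rows, then clean.
mask = texts.map(lambda t: classify(t)[0] == "en")
texts, labels = texts[mask].map(clean), labels[mask]
```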
### 2.3 Data Distribution

We performed some data analysis on the text to understand how it is distributed, examining the data from a few different perspectives. We first analyzed the data by graphing its sentiment polarity and its most popular unigrams and bigrams, as well as by looking at the distribution of word types. We compare the graphs before and after pre-processing, which includes stop word removal and the removal of punctuation, special characters, and numbers.

#### 2.3.1 Sentiment Polarity

*(Figures: polarity distributions of real and fake news, before and after pre-processing.)*

Both before and after pre-processing, the distributions of sentiment polarity for fake news and real news are mostly the same: for both classes, there are slightly more positive articles than negative ones. However, there is a noticeable difference in polarity. Although not by much, fake news is a little more polar than real news: there are more outliers, and the data are a little more spread out.

#### 2.3.2 Part of Speech Distribution

*(Figures: part-of-speech distributions of real and fake news, before and after pre-processing.)*

Although the differences are slight, there is a difference in the part-of-speech distribution between real and fake news. In fake news, there is a higher percentage of adverbs and adjectives compared to all the other parts of speech, and a lower percentage of proper nouns; in real news, by contrast, there is a higher percentage of pronouns. We can interpret this as fake news using more adverbs and adjectives to embellish its sentences, while real news uses more pronouns to establish references that support its legitimacy.
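The paper does not name the tools behind these plots; one plausible sketch, assuming TextBlob for both sentiment polarity and part-of-speech tagging, is:

```python
from textblob import TextBlob   # assumed tooling; requires TextBlob's NLTK corpora

def polarity(text):
    # Sentiment polarity in [-1, 1]; values above 0 indicate positive sentiment.
    return TextBlob(text).sentiment.polarity

def pos_counts(text):
    # Count Penn Treebank POS tags, e.g. RB (adverb), JJ (adjective), PRP (pronoun).
    counts = {}
    for _, tag in TextBlob(text).tags:
        counts[tag] = counts.get(tag, 0) + 1
    return counts

fake_polarity = [polarity(t) for t in texts[labels == 1]]
real_polarity = [polarity(t) for t in texts[labels == 0]]
```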
#### 2.3.3 Unigram and Bigram

Top unigrams:

| Real News (Before) | Real News (After) | Fake News (Before) | Fake News (After) |
|---|---|---|---|
| the | nt | the | nt |
| to | trump | to | trump |
| of | people | of | people |
| and | clinton | and | clinton |
| in | hillary | in | hillary |
| that | said | that | said |
| for | like | is | like |
| on | new | for | new |
| he | time | it | time |
| is | world | on | world |
| it | state | as | state |
| was | election | with | election |
| said | government | are | government |
| mr | president | this | president |
| with | war | by | war |
| as | years | before | years |
| his | states | was | states |
| at | american | you | american |
| by | obama | have | obama |
| from | media | they | media |

Top bigrams:

| Real News (Before) | Real News (After) | Fake News (Before) | Fake News (After) |
|---|---|---|---|
| of the | mr trump | of the | hillary clinton |
| in the | united states | in the | donald trump |
| to the | new york | to the | united states |
| on the | mr trumps | on the | white house |
| mr trump | white house | and the | new york |
| at the | donald trump | that the | hillary clintons |
| and the | mrs clinton | to be | clinton campaign |
| that the | said mr | for the | clinton foundation |
| to be | york times | it is | secretary state |
| he said | islamic state | with the | nt know |
| with the | mr obama | from the | american people |
| from the | breitbart news | by the | mainstream media |
| by the | president trump | at the | foreign policy |
| it was | years ago | hillary clinton | bill clinton |

The comparison between the top unigrams and bigrams before and after pre-processing demonstrates that our decision to remove stop words was correct. Before pre-processing, the top unigrams and bigrams consist almost entirely of stop words: filler words that do not supply any information. After removing the stop words, the top unigrams and bigrams become much more specific.

### 2.4 Unsupervised Pre-training to encode our texts into numeric representations

#### 2.4.1 Natural Language Processing Models

After the texts have been cleaned, they are mapped into numeric vector representations using three pre-training algorithms (CountVectorizer, TfidfVectorizer, and Word2Vec). Each sample, originally consisting of all text, is converted into a vector of features. Since only the text is passed into these pre-training algorithms, this stage is unsupervised. In the cases of CountVectorizer and TfidfVectorizer, the number of features is clipped at 10000 to avoid memory overrun and overfitting (which could otherwise result from the large vocabulary).

#### 2.4.2 CountVectorizer

CountVectorizer provides a simple way both to tokenize a collection of text documents and build a vocabulary of known distinct words, and to encode new documents using that vocabulary [13]. Given a collection of text documents $S$, CountVectorizer generates a sparse matrix $A$ of size $m$ by $n$, where $m$ is the total number of documents and $n$ is the total number of distinct words used in $S$:

$A=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\ \vdots&\vdots&\vdots&\vdots\\ a_{m1}&a_{m2}&\cdots&a_{mn}\end{pmatrix}$

This matrix is the one-hot-encoded representation of the different words present in the corpus; entry $a_{ij}$ is the number of times the $j$th word appears in the $i$th document. Since there are many distinct words in the corpus that are absent from any given sample, most of the entries are zeros; we converted the sparse matrix into a dense one using the todense() method call, which returns a dense representation of the sparse matrix.

#### 2.4.3 TF-IDFVectorizer

Although TF-IDF is an old algorithm, it is simple and effective in the pre-training phase [11]. TfidfVectorizer computes the product of term frequency and inverse document frequency. As the name implies, TF-IDF calculates a value for each word in a document through an inverse proportion of the frequency of the word in that particular document to the percentage of documents in which the word appears [12]. The term frequency $tf(t,d)$ is the proportion of times that the term $t\in V(d)$ appears in the document $d$, where $V(d)=\sum_{t}n(t,d)$ is the total number of word occurrences in $d$. Thus, if a word $w^{\prime}$ does not appear in a document $d^{\prime}$, the term frequency $tf(t^{\prime},d^{\prime})$ is zero. The idea of the term frequency is essentially the same as in CountVectorizer.

$tf(t,d)=\frac{n(t,d)}{V(d)},\qquad n(t,d)=\textrm{ occurrences of the word }t\textrm{ in the document }d$

Given a document collection $D$, the inverse document frequency $idf(t,D)$ is the log of the number of documents $N$ divided by $df(t,D)$, the number of documents $d\in D$ containing the term $t$:

$idf(t,D)=\log\Big{(}\frac{N}{df(t,D)}\Big{)}$

As a result, common words in $D$ will have a low inverse document frequency, while infrequent words will have a high one. This factor is thus likely to help separate fake news, which often contains less common (even ungrammatical) words, from real news, which usually consists of common words.

In summary, the TF-IDF score $w(t,d)$ for a word increases with its count, but is counteracted if the word appears in too many documents:

$w(t,d)=tf(t,d)\times idf(t,D)$

As with CountVectorizer, we found that most of the entries of the matrix were 0, and we used the todense() call to obtain the dense representation of the sparse TF-IDF matrix.
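A sketch of both vectorizers, using the scikit-learn API and the 10000-feature cap described above (variable names carried over from the earlier sketches):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Vocabulary clipped at 10000 features to bound memory use and overfitting.
cv = CountVectorizer(max_features=10000)
X_cv = cv.fit_transform(texts).todense()        # sparse count matrix A -> dense

tfidf = TfidfVectorizer(max_features=10000)
X_tfidf = tfidf.fit_transform(texts).todense()  # w(t, d) = tf x idf scores -> dense
```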
#### 2.4.4 Word2Vec

Word2Vec is another state-of-the-art model used to represent words as vectors. Word2Vec is a simple neural network that tries to predict a word given a set of surrounding context words. The vector representation of each word is given by the weights of the connections from that word’s input-layer node to the hidden-layer neurons; this representation mainly encodes the contextual information of the word within the corpus (collection of texts) on which the Word2Vec model is trained.

In this project, we trained the Word2Vec model on our own corpus. We did this because we felt that the corpus contained very specific words whose contextual meaning differs from their general usage, so we chose to train on the texts of our corpus rather than use pre-trained Word2Vec models such as the Google models. For training, we set the minimum count to the average number of words per text, since we believed that texts shorter than the mean length carry less context, and we rejected those sentences for training. We used the default number of features (100), since we wanted to analyze a small number of features.

For this project, we adopted a very simple and plain approach: we obtained the vector for each text by summing the vector representations of each word in the text, provided the word belongs to the Word2Vec vocabulary. The summed vector is then divided by the number of words in the text; this normalization ensures that the length of a text does not affect its embedding.
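A sketch of the training and averaging steps, assuming gensim (which the paper does not name) and reading the paper's "minimum count" as a filter on short texts:

```python
import numpy as np
from gensim.models import Word2Vec

tokenized = [t.split() for t in texts]
mean_len = np.mean([len(t) for t in tokenized])

# Train only on texts at least as long as the mean, per the filtering described above.
w2v = Word2Vec(sentences=[t for t in tokenized if len(t) >= mean_len],
               vector_size=100)                  # 100 features (gensim >= 4.0 API)

def embed(tokens):
    # Sum the vectors of in-vocabulary words, then divide by the word count so the
    # length of a text does not affect its embedding.
    vecs = [w2v.wv[w] for w in tokens if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

X_w2v = np.vstack([embed(t) for t in tokenized])
```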
### 2.5 Outlier Removal

For outlier removal, the Isolation Forest algorithm isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. An anomaly score can then be calculated as the number of such splits required to isolate a given observation. We applied Isolation Forest to each of the three feature sets generated by TF-IDF, CountVectorizer, and Word2Vec, and calculated the percentage of outliers in each feature set.

*(Figure: bar graph of the percentage of training outliers per feature set.)*
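A minimal sketch of the filtering with scikit-learn's IsolationForest, applied here to the CountVectorizer features from the earlier sketch (the same call works for X_tfidf and X_w2v):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def remove_outliers(X, y):
    # fit_predict returns +1 for inliers and -1 for anomalies; keep the inliers.
    mask = IsolationForest(random_state=0).fit_predict(np.asarray(X)) == 1
    print(f"outliers removed: {100 * (1 - mask.mean()):.2f}%")
    return np.asarray(X)[mask], np.asarray(y)[mask]

X_cv_clean, y_cv = remove_outliers(X_cv, labels)
```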
### 2.6 Fine-tuning

Once the representations of the text have been pre-trained by the preceding unsupervised learning, they are fed into 5 different models to perform supervised learning on the downstream task; in this case, the downstream task is the binary classification of the news as either real or fake. A k-fold prediction error is obtained from each of the 5 models, and since we have 3 different pre-training models, we have a total of 15 models to compare.

#### 2.6.1 Artificial Neural Network (ANN)

We trained simple artificial neural networks containing an input layer, a particular number of hidden layers (specified by a hyperparameter), in which each hidden layer has the same number of neurons and the same activation function, and an output layer with just one node for the classification (real or fake) that uses sigmoid as its activation function. We chose sigmoid for the output layer and binary_crossentropy as the loss because this is a binary classification problem: softmax would merely normalize the results, which is unnecessary with a single output node, so we applied sigmoid to the output activation. We performed a grid search to find the best hyperparameters, namely the activations, optimizers, and number of hidden layers and hidden neurons. We used the Keras Sequential model with Dense layers, which connect every node to all nodes of the next layer.

Due to the limitation of computing resources, the grid search for the neural networks was divided into three sequential steps. Instead of performing the grid search over all the hyperparameters at once, we searched first over the activations of the hidden layers, then over the optimizers, and finally over the number of hidden layers and hidden neurons (done together). We coupled the number of hidden layers with the number of neurons because we believed these hyperparameters interact with each other in improving the model training. At each step we used a K-fold split with 3 splits and picked the hyperparameters that yielded the highest accuracy.

#### 2.6.2 Long Short Term Memory networks (LSTMs)

Long Short Term Memory networks (LSTMs) are a special kind of recurrent neural network (RNN) introduced by Hochreiter and Schmidhuber (1997) [8]. The chain-like nature of an RNN allows information to be passed from the beginning all the way to the end: the prediction at time step $t$ depends on all previous predictions at time steps $t^{\prime}<t$. However, when a typical RNN is used in a larger context (i.e., with a relatively large number of time steps), it suffers from the vanishing gradient problem [9]. LSTMs, a special kind of RNN, solve this long-term dependency problem: each cell in a typical LSTM network contains 3 gates (forget, input, and output gates) that decide whether or not information should be retained in the cell state $C_{t}$ (Christopher Olah, “Understanding LSTM Networks”).

For CountVectorizer and TfidfVectorizer, each sample of text is converted into a 1-d feature vector of size 10000. As a result, the number of time steps (i.e., the maximum number of word vectors per sample) for these two can only be set to 1, as the pre-trained representations are produced at the sample level. By contrast, the number of time steps for Word2Vec can either be 1, if we simply take an average of the word embeddings, or the length of the sentence, where each word has its own embedding and the pre-trained representations are produced at the word level. We choose the approach with 1 time step in our model because it requires less computational power. We also tried sentence-length sequences with 200 time steps, since 200 is close to the mean number of words per sample and is a fairly common choice in practice; however, since we do not have enough computational power to fine-tune (grid search) this variant, we leave it out of our model and include it only in the final section.

In the LSTM layer, a dropout rate of 0.2, a common choice in practice [10], is used to prevent overfitting. Grid search is performed to pick decent values of the hyperparameters, including the number of hidden units in the LSTM layer, the number of hidden layers, the activation functions and number of nodes in the hidden layers, and the optimizer. Relatively small numbers of hidden layers (i.e., {0, 1, 2}) and nodes (i.e., {200, 400, 600}) were selected as the basis for the grid search, because this is a simple binary classification task and too many of them would cause overfitting. Due to the limitation of computing resources, the grid search for the LSTMs was divided into four sequential steps: instead of searching over all hyperparameters at once, we first searched over the number of hidden layers with all other hyperparameters randomly selected from their subsets; we then searched over the number of nodes in the hidden layer(s), using the best number of hidden layers found in step 1; and so on until all four steps were finished. In each step we used K-fold cross validation with $K=3$.
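As an illustration, the two builders below reproduce the best CountVectorizer configurations reported in Section 3, assuming the Keras Sequential API described above; the grid search itself would loop these builders over the hyperparameter values listed in the text.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

def build_ann(input_dim, hidden_layers=2, neurons=600, activation="relu"):
    model = Sequential([Dense(neurons, activation=activation, input_dim=input_dim)])
    for _ in range(hidden_layers - 1):
        model.add(Dense(neurons, activation=activation))
    model.add(Dense(1, activation="sigmoid"))           # single sigmoid output node
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def build_lstm(input_dim, memcells=200, neurons=200, activation="sigmoid"):
    # One time step per sample: each pre-trained vector is fed as a length-1 sequence,
    # so inputs must be reshaped to (n_samples, 1, input_dim).
    model = Sequential([LSTM(memcells, input_shape=(1, input_dim), dropout=0.2)])
    for _ in range(2):                                  # two hidden layers (Section 3)
        model.add(Dense(neurons, activation=activation))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Example usage on the CountVectorizer features:
# build_lstm(10000).fit(np.asarray(X_cv_clean).reshape(-1, 1, 10000), y_cv, epochs=5)
```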
#### 2.6.3 Random Forest

A random forest is an ensemble classifier that makes its estimate by combining different decision trees, fitting a number of decision tree classifiers on various subsamples of the dataset. Each tree in the forest is built on a random subset of the features, and in the end the forest surfaces the best subset of features among all the random subsets. In our project, 3 random forest models have been fit, on the CountVectorizer, TF-IDF, and Word2Vec representations. The random forest algorithm has 4 hyperparameters to tune: the number of trees in the forest (i.e., {200, 400, 800}); the maximum depth of a tree (i.e., {1, 5, 9}); the minimum number of samples required to be at a leaf node (i.e., {2, 4}), which has the effect of smoothing the model, especially during regression; and the minimum number of samples required to split an internal node (i.e., {5, 10}). All parameters are tuned by grid search, and the best set of parameters is determined using K-fold cross validation with $K=3$.

#### 2.6.4 Logistic Regression

Logistic regression is a statistical machine learning algorithm that classifies the data by considering outcome variables on extreme ends and drawing a discriminatory line between the classes. Compared to another simple model, linear regression, which requires a hard threshold for classification, logistic regression avoids this issue and scales to large datasets. Logistic regression produces a logistic curve, limited to values between 0 and 1, by applying a sigmoid function at the end. In our project, three logistic regressions have been fit, on the CountVectorizer, TF-IDF and Word2Vec representations. We did grid search on the solvers, including newton-cg, sag, lbfgs and liblinear. Grid search is also performed on the inverse regularization parameter $C$, over 10 values log-spaced between $10^{0}$ and $10^{4}$ (numpy.logspace(0, 4, 10), consistent with the best values of $C$ reported in the Results). The best parameter sets are determined using K-fold cross validation with $K=3$.

#### 2.6.5 Support Vector Machine (SVM)

An SVM is a supervised machine learning algorithm in which a hyperplane is created in order to separate and categorize features. The optimal hyperplane is usually calculated by creating support vectors on both sides of the hyperplane, with each vector maximizing the distance to the other side. In other words, the larger the distance between the vectors around the hyperplane, the more accurate the decision boundary between the categories of features will be. In regards to our project, we fit 3 support vector machines, on the CountVectorizer, TfidfVectorizer, and Word2Vec representations. An SVM requires specific parameters such as a kernel type, $C$, maximum iterations, etc. In our case, we needed to determine the optimal $C$ as well as the optimal kernel for each fit. A grid search over kernel types and $C$ was performed in order to give us the most accurate SVM model: the kernels searched were linear and rbf, while the values used for $C$ were 0.25, 0.5, and 0.75. Once the grid search was completed, the model was evaluated with the most optimal hyperparameters using K-fold cross validation with 3 splits.
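A minimal sketch of this SVM grid search with scikit-learn, using the grid values stated above; the feature-matrix names are placeholders:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"kernel": ["linear", "rbf"], "C": [0.25, 0.5, 0.75]}
search = GridSearchCV(SVC(), param_grid, cv=3, scoring="accuracy")
# search.fit(X_tfidf, y)  # repeat for CountVectorizer and Word2Vec features
# search.best_params_     # e.g. {"C": 0.25, "kernel": "linear"} on CV features
```

The same pattern, with the solver and $C$ grids described in Section 2.6.4, applies to the logistic regression search.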
## 3 Results

Grid search results:

Model | CountVectorizer | TF-IDF | Word2Vec
---|---|---|---
SVM | Kernel = linear, C = 0.25 | Kernel = linear, C = 0.75 | Kernel = linear, C = 0.75
Logistic Regression | Solver = sag, C = 21.54 | Solver = sag, C = 7.74 | Solver = newton-cg, C = 3593.81
Random Forest | Max depth = 9, min samples leaf = 2, min samples split = 10, n estimators = 200 | Max depth = 9, min samples leaf = 4, min samples split = 5, n estimators = 400 | Max depth = 9, min samples leaf = 2, min samples split = 10, n estimators = 400
ANN | Activation = relu, optimizer = Adam, hidden layers = 2, neurons = 600 | Activation = sigmoid, optimizer = Adam, hidden layers = 3, neurons = 400 | Activation = relu, optimizer = Adam, hidden layers = 1, neurons = 600
LSTM | Activation = sigmoid, optimizer = Adam, hidden layers = 2, memcells = 200, neurons = 200 | Activation = sigmoid, optimizer = Adam, hidden layers = 2, memcells = 200, neurons = 600 | Activation = relu, optimizer = Adam, hidden layers = 2, memcells = 200, neurons = 600

Mean test scores:

 | SVM | ANNs | LSTMs | Logistic | Random Forest
---|---|---|---|---|---
CV | 93.06% | 94.29% | 94.88% | 94.45% | 87.64%
TFIDF | 94.58% | 93.73% | 93.89% | 94.79% | 87.64%
Word2Vec | 91.17% | 93.06% | 92.29% | 91.30% | 88.60%

Figure: ANN loss and accuracy curves. Figure: LSTM loss and accuracy curves.

The models are evaluated using 3-fold cross validation. Out of the fifteen models, CountVectorizer with LSTMs performs the best. Word2Vec performs the worst among the three pre-training algorithms, and random forest performs the worst among the five fine-tuning algorithms.

## 4 Discussion

Among our three pre-training models, CountVectorizer achieves in general the best performance, and Word2Vec performs relatively poorly among the three. The essential idea behind both CountVectorizer and TF-IDF is computing a score which depends on the frequency of a word from the vocabulary. However, compared to CountVectorizer, TF-IDF includes an extra inverse document frequency term that “penalizes” words appearing more frequently across documents, partially masking their contextual meaning, in order to represent the importance of a word within a single document. The results may imply that even though the penalization is smoothed by a log function, the penalty may be too strong.

The results also show that in general the neural networks do the best consistently, as neural networks serve as powerful universal approximators. However, the loss and accuracy plots show that we are using too many epochs and thus have an overfitting problem. This is because our pre-training models are already very strong, so they learn a good contextual representation of the text; as a result, few epochs are needed for the downstream task. In addition, one thing to note is that logistic regression also performs very well, which implies that our data are mostly linearly separable. While neural networks can fit the data very well, they run the risk of overfitting it; as a result, the neural networks are not as good as SVM and logistic regression for TF-IDF.

A combination of CountVectorizer and LSTMs is the best among all the models. While LSTMs with one time step are very similar to an ANN in terms of architecture, LSTMs have gates and a tanh activation function inside the module; this different design may let LSTMs perform slightly better than the ANN.

Word2Vec does not perform well. One reason is that we are simply taking an average of the word embedding vectors to get a generalized vector representation of each sample paragraph; taking an average fails to represent the dependencies between words.
Another reason is that we do not use pre-trained Word2Vec embeddings available online from a huge corpus, but instead build our own from the dataset. While we thought that building our own Word2Vec would make the model specific to this task, the results show that Word2Vec may need to be built from a larger dataset.

## 5 Conclusion

This report provides a fairly simple approach to encode texts, and shows how the presence of words impacts the classification of texts as real or fake. We achieved high accuracy results with most of our algorithms, and the neural networks in particular generally do better than the others. What is worth noting is that our LSTMs use a time step of only 1 and are thus essentially multi-layer perceptrons. Still, as mentioned in the LSTMs method section, LSTMs with real recurrence can be built by using Word2Vec representations at the word level. In this case, each word has its own vector, and a sample becomes a collection of vectors and thus a 2-D matrix; each vectorized word becomes a time step, and a total of 200 time steps is used (if a paragraph has more than 200 words, only the first 200 words are kept). We ran this model and the results seem solid, but the approach is not included in our final model because it takes too much time to run and we did not have time to fine-tune its hyperparameters. In future work, we believe that using LSTMs with real recurrence will give even better results.

While we achieve great performance on this dataset, the question remains as to whether our best model (LSTMs on CountVectorizer features) can still perform well in tasks that classify news into more than two categories, such as the Fake News Challenge. In that case, a simple unidirectional LSTM may not suffice and may need to be replaced by a bidirectional one. In addition, it would be interesting to know how well our pre-trained models perform in other downstream tasks, such as spam detection. Lastly, in our model, the pre-training is done on the given dataset (which makes the model specific to the task), instead of on the large corpora available online, such as Google’s pre-trained Word2Vec model. If the task were a classification into four or eight categories, a model pre-trained on a large corpus may perform better as it is pre-trained on more words. We could also try to improve training by using different word embeddings: while we chose only 3 types of embeddings, we could have tried others such as GloVe, whose features depend entirely on context words. Different forms of text encoding could likewise be trained with these algorithms to achieve a better model. Alternatively, state-of-the-art pre-trained models can be used if the task is no longer a binary classification. Models like the Transformer and BERT are strong candidates, as they learn a very strong representation that takes the context into account when computing an embedding for a word. Unlike LSTMs, whose sequential nature prohibits parallelization, the Transformer and BERT achieve parallelization by replacing recurrence with the attention mechanism; thus, they require less computation power and can be easily fine-tuned on downstream tasks.
## 6 Appendix ## Github Repo https://github.com/Sairamvinay/Fake-News-Dataset ## Author Contributions Sairamvinay Vijayaraghavan: Project Planning, Problem Formation, DataSet Search, POS Distribution graph, Code for CountVectorizer, Word2Vec, ANN, Randomforest,To parse csv files (readdata), Code integration for TextVectorizer, Grid Search model running, ROC model running, Code Base Cleanup and management (further cleanup), PowerPoint Checking, Report Analysis for W2V, ANN, Report editing Zhiyuan Guo: Project Planning, DataSet Search, Polarity Graphs, Code for LSTM, RandomForest, Adding Functionality and Readability in each of the scripts, Code Integration, Grid Search model running, ROC model running, PowerPoint Development, Report Analysis for TFIDF and LSTM, Report Analysis for the Abstract, the Discussion, Conclusion, Pipeline Diagram, Report editing Ye Wang: Project Planning, DataSet Search, Code for TFIDF, PCA, Grid Search model running, ROC model running, Report Integration into Latex, Report Analysis of the Results (table creations), Report Analysis for the Outlier Removal, Random Forest, Report editing John Voong: Word2Vec, DataCleanup (StopWord Cleanup), Grid Search model running, ROC model running, PowerPoint Development, Report Analysis for W2V, Pipeline Diagram, Report editing, Paper structure Wenda Xu: Code for PCA, ROC model running, Code Base Cleanup and management, PowerPoint Development, Report Analysis about Count Vectorizer, Report Analysis about Logistic Regression Armand Nasseri: Project Planning, Dataset search, Code for SVM, Data Cleanup (StopWord Cleanup), ROC model running, PowerPoint Development, Report Analysis about SVM Jiaru Cai: Outlier Removal, Accuracy and Loss Plots for Neural Network, PowerPoint Framework Kevin Vuong: DataCleanup (remove punctuations), Code for Logistic Regression, Grid Search model running, PowerPoint Cleanup, Report Analysis about Data Cleanup, Introduction and Abstract Linda Li: Unigram and Bigram analysis, Code for ROC plots, Report Analysis of the Data Cleanup section, Graph analysis Eshan Wadhwa: Related Work, References and Citation (Introduction and Field research), Report Editing, PowerPoint slides, ## References [1] Samir Bajaj, “The Pope Has a New Baby!” Fake News Detection Using Deep Learning”, Winter 2017, https://pdfs.semanticscholar.org/19ed/b6aa318d70cd727b3cdb006a782556ba657a.pdf [2] Arjun Roy, Kingshuk Basak, Asif Ekbal, and Pushpak Bhattacharyya, “A Deep Ensemble Framework for Fake News Detection and Classification”, 12 November 2018, https://arxiv.org/pdf/1811.04670.pdf [3] Niall J. Conroy, Victoria L. Rubin, and Yimin Chen, “Automatic Deception Detection: Methods for Finding Fake News”, November 2015, https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/pra2.2015.145052010082. [4] Liang Wu and Huan Liu, “Tracing Fake-News Footprints: Characterizing Social Media Messages by How They Propagate”, February 2018, http://www.public.asu.edu/~liangwu1/WSDM18_TraceMiner.pdf [5] Adrian Colyer, “Tracing fake news footprints: characterizing social media messages by how they propagate”,the morning paper, February 2018, https://blog.acolyer.org/2018/02/19/tracing-fake-news-footprints- characterizing-social-media-messages-by-how-they-propagate/ [6] Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang and Huan Liu, “Fake News Detection on Social Media: A Data Mining Perspective”, August 2017, https://arxiv.org/abs/1708.01967 [7] Jiawei Zhang, Bowen Dong and Philip S. 
Yu, “FAKEDETECTOR: Effective Fake News Detection with Deep Diffusive Neural Network”, August 2019, https://arxiv.org/pdf/1805.08751.pdf [8] Sepp Hochreiter and Jurgen Schmidhuber, “Long short-term memory”, November 1997, http://www.bioinf.jku.at/publications/older/2604.pdf [9] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. “Learning long-term dependencies with gradient descent is difficult”, March 1994, http://www.comp.hkbu.edu.hk/~markus/teaching/comp7650/tnn-94-gradient.pdf [10] Gaofeng Cheng, Vijayaditya Peddinti, Daniel Povey, et al., “An Exploration of Dropout with LSTMs”. August 2017, https://www.danielpovey.com/files/2017_interspeech_dropout.pdf [11] Juan Ramos. “Using tf-idf to determine word relevance in document queries”, December 2003, https://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/ramos.pdf [12] Gerard Salton and Christopher Buckley. “Term-weighting approaches in automatic text retrieval”, January 1988, https://www.sciencedirect.com/science/article/abs/pii/0306457388900210 [13] Jason Brownlee. “How to Prepare Text Data for Machine Learning with scikit-learn”, August 2019, https://machinelearningmastery.com/prepare-text-data-machine-learning-scikit- learn/
2024-09-04T02:54:58.923713
2020-02-18T13:58:06
2003.04986
{ "authors": "Vukosi Marivate, Tshephisho Sefara, Vongani Chabalala, Keamogetswe\n Makhaya, Tumisho Mokgonyane, Rethabile Mokoena, Abiodun Modupe", "full_text_license": null, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "arxiv-papers-0000.json.gz:26147", "submitter": "Vukosi Marivate", "url": "https://arxiv.org/abs/2003.04986" }
arxiv-papers
# Investigating an approach for low resource language dataset creation, curation and classification: Setswana and Sepedi

###### Abstract
The recent advances in Natural Language Processing have been a boon for well represented languages in terms of available curated data and research resources. One of the challenges for low-resourced languages is the lack of clear guidelines on the collection, curation and preparation of datasets for different use-cases. In this work, we take on the task of creating two datasets that are focused on news headlines (i.e., short text) for Setswana and Sepedi, and the creation of a news topic classification task from them. We document our work and also present baselines for classification. We investigate an approach to data augmentation, better suited to low-resource languages, to improve the performance of the classifiers.

Vukosi Marivate${}^{1}{}^{,}{}^{2}$, Tshephisho Sefara2, Vongani Chabalala3, Keamogetswe Makhaya4, Tumisho Mokgonyane5, Rethabile Mokoena6, Abiodun Modupe${}^{7}{}^{,}{}^{1}$
University of Pretoria1, CSIR2, University of Zululand3, University of Cape Town4, University of Limpopo5, North-West University6, University of the Witwatersrand7
vukosi.marivate@cs.up.ac.za, tsefara@csir.co.za

## 1\. Introduction

The most pressing issue with low-resource languages is the lack of sufficient language resources. In this study, we introduce an investigation of a low-resource language setting that provides automatic formulation and customization of new capabilities from existing ones. While there are more than six thousand languages spoken globally, the availability of resources for each is extraordinarily unbalanced [Nettle, 1998]. For example, if we focus on language resources annotated in the public domain, as of November 2019 the AG corpus had released about 496,835 news articles, only in English, from more than 200 sources (http://groups.di.unipi.it/~gulli); the Reuters News Dataset [Lewis, 1997] comprises roughly 10,788 annotated texts from the Reuters financial newswire; and the New York Times Annotated Corpus [Sandhaus, 2008] holds over 1.8 million articles. There are no comparable standard annotated corpora for low-resource languages, and Google Translate only supports around 100 languages [Johnson et al., 2017]. A significant amount of research focuses on a small number of languages, neglecting the 17% of the world's languages labelled as low-resource [Strassel and Tracey, 2016], for which it is challenging to develop mechanisms for Natural Language Processing (NLP).

In South Africa, several of the news websites (private and public) publish in English, even though there are 11 official languages (including English). We list the top premium newspapers by circulation, as of the first quarter of 2019 [Bureau of Circulations, 2019], in Table 1. Most of the reported datasets exist in English, Afrikaans and isiZulu; we do not have a distinct collection covering a diversity of the official languages. In this work, we aim to provide a general framework that enables us to create an annotated linguistic resource for Setswana and Sepedi news headlines. We use news headlines from the South African Broadcasting Corporation (SABC) (http://www.sabc.co.za/), their social media streams and a few acoustic (radio) news sources.
Unfortunately, we do not have any direct access to the news reports, so we hope this study will promote collaboration between the national broadcaster and NLP researchers.

Table 1: Top newspapers in South Africa with their languages

Paper | Language | Circulation
---|---|---
Sunday Times | English | 260132
Soccer Laduma | English | 252041
Daily Sun | English | 141187
Rapport | Afrikaans | 113636
Isolezwe | isiZulu | 86342
Sowetan | English | 70120
Isolezwe ngeSonto | isiZulu | 65489
Isolezwe ngoMgqibelo | isiZulu | 64676
Son | Afrikaans | 62842

The rest of the work is organized as follows. Section 2. discusses prior work that has gone into building local corpora in South Africa and how they have been used. Section 3. presents the proposed approach to build a local news corpus and annotate it with categories. From there, we focus on ways to gather data for vectorization and building word embeddings (which need an expanded corpus). We also release pre-trained word embeddings for 2 local languages as part of this work [Marivate and Sefara, 2020a]. Section 4. investigates building classification models for the Setswana and Sepedi news and improving those classifiers using a 2-step augmentation approach inspired by work on hierarchical language models [Yu et al., 2019]. Finally, Section 5. concludes and proposes a path forward for this work.

## 2\. Prior Work

Creating sizeable language resources for low-resource languages is important for improving the data available for study [Zoph et al., 2016] and for cultural preservation. If we focus our attention on the African continent, we note that there are few annotated datasets openly available for tasks such as classification. In South Africa, the South African Centre for Digital Language Resources (SADILAR) (www.sadilar.org) has worked to curate datasets of local South African languages. There remain gaps, such as access to large corpora and data from sources such as broadcasters and news organizations, which have sizeable catalogues that are yet to make it into the public domain. In this work, we work to fill such a gap by collecting, annotating and training classifier models for news headlines in Setswana and Sepedi. As the data that we find publicly is still small, we also have to deal with the challenges of machine learning on small data.

Machine learning systems perform poorly in the presence of small training sets due to overfitting. To avoid this problem, data augmentation can be used. The technique is well known in the field of image processing [Cubuk et al., 2019]. Data augmentation refers to the augmentation of the training set with artificial, generated training examples. The technique is used less frequently in NLP, but a few studies have applied it. Silfverberg et al. (2017) use data augmentation to counteract overfitting in a recurrent neural network (RNN) encoder-decoder implementation geared specifically toward a low-resource setting. The authors augment the data by finding words that share a word stem (for example, fizzle and fizzling share the stem fizzl) and then replacing the stem with another string. Zhang et al. (2015) apply data augmentation by using synonyms as substitute words for the original words. However, Kobayashi (2018) notes that synonyms are very limited and that synonym-based augmentation cannot produce numerous different patterns from the original texts. Hence, Kobayashi (2018) proposes contextual data augmentation, replacing words with others predicted by a language model given the context surrounding the original words to be augmented.
Wei and Zou (2019) state that while these techniques are valid, they are not often used in practice because they have a high cost of implementation relative to performance gain. They propose easy data augmentation (EDA) techniques for boosting performance on text classification tasks. These techniques involve synonym replacement, random insertion, random swap, and random deletion of a word. The authors observed good performance when training on fractions of the dataset (1%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%): as the data size increases, the accuracy also increases for both augmented and original data. The original data obtained its highest accuracy of 88.3% at 100% of the data, while the augmented data obtained an accuracy of 88.6% at only 50%.

In this work, we investigate the development of a 2-step text augmentation method in order to improve classification models for Setswana and Sepedi. To do this, we first had to identify a suitable data source, collect the data, and then annotate the datasets with news categories. Once the data was collected and annotated, we worked to create classification models from the data as is, and then applied a word embedding and document embedding augmentation approach.

## 3\. Developing news classification models for Setswana and Sepedi

Here we discuss how the data was collected, as well as the approach we use to build classification models.

### 3.1. Data Collection, Cleaning and Annotation

Before we can train classification models, we first have to collect data for 2 distinct processes. First, we present our collected news dataset as well as its annotation. We then discuss how we collected larger datasets for better vectorization.

#### 3.1.1. News data collection and annotation

The news data we collected is from the SABC (http://www.sabc.co.za/) Facebook pages. The SABC is the public broadcaster of South Africa. Specifically, data was collected from Thobela FM (an SABC Sepedi radio station, https://www.facebook.com/thobelafmyaka/) and Motsweding FM (an SABC Setswana radio station, https://www.facebook.com/MotswedingFM/). We scraped the news headlines that are published as posts on both Facebook pages. We claim no copyright for the content and used the data for research purposes only. We summarize the datasets in Table 2 and visualize the token distributions for Sepedi and Setswana in Figures 1 and 2, respectively.

Table 2: News Data Sets

 | Setswana | Sepedi
---|---|---
Corpus Size (headlines) | 219 | 491
Number of Tokens (words) | 1561 | 3018

Figure 1: Setswana Wordcloud
Figure 2: Sepedi Wordcloud

As can be seen, the datasets are relatively small, and as such we have to look at other ways to build vectorizers that can generalize better, since the word token diversity is very low. We annotated the datasets by categorizing the news headlines into: _Legal_, _General News_, _Sports_, _Other_, _Politics_, _Traffic News_, _Community Activities_, _Crime_, _Business_ and _Foreign Affairs_. Annotation was done after reading the headlines and coming up with categories that fit both datasets. We show the distribution of the labels in the Setswana and Sepedi datasets in Figures 3 and 4, respectively. For this work, we only explore single-label categorization for each article; it remains future work to look at the multi-label case. As such, there might be some noise in the labels.
Examples from the Sepedi annotated news corpus are shown next:

> _Tsela ya N1 ka Borwa kgauswi le Mantsole Weighbridge ka mo Limpopo ebe e tswaletswe lebakanyana ka morago ga kotsi yeo e hlagilego._ Traffic
>
> _Tona ya toka Michael Masutha,ore bahlankedi ba kgoro ya ditirelo tsa tshokollo ya bagolegwa bao ba tateditswego dithieeletsong tsa khomisene ya go nyakisisa mabarebare a go gogwa ga mmuso ka nko,ba swanetse go hlalosa gore ke ka lebaka la eng ba sa swanelwa go fegwa mesomong_ Legal

Figure 3: Setswana news title category distribution
Figure 4: Sepedi news title category distribution

The full dataset is made available online [Marivate and Sefara, 2020b] for further research use and improvements to the annotation (https://zenodo.org/record/3668495). As previously discussed, we used larger corpora to create language vectorizers for downstream NLP tasks. We discuss this next.

#### 3.1.2. Vectorizers

Before we get into the annotated dataset, we needed to create pre-trained vectorizers in order to be able to build classifiers that generalize better later on. For this reason we collected different corpora for each language, in such a way that we could create Bag-of-Words, TFIDF, Word2Vec [Mikolov et al., 2013] and FastText [Bojanowski et al., 2017] vectorizers (Table 3). We also make these vectorizers available for other researchers to use.

Table 3: Vectorizer corpora sizes in number of lines (number of tokens)

Source | Setswana | Sepedi
---|---|---
Wikipedia | 478 (21924) | 300 (10190)
JW300 | 874464 (70251) | 618275 (53004)
Bible | 3110 (40497) | 29723
Constitution | 7077 (3940) | 6564 (3819)
SADILAR | 33144 (61766) | 67036 (87838)
Total | 946264 (152027) | 721977 (149355)

(Sources: Wikipedia, https://tn.wikipedia.org/ and https://nso.wikipedia.org/; JW300, http://opus.nlpl.eu/JW300.php; Constitution, https://www.justice.gov.za/legislation/constitution/pdf.html; SADILAR, https://www.sadilar.org/index.php/en/resources.)

### 3.2. News Classification Models

We explore the use of a few classification algorithms to train news classification models. Specifically, we train:

* • Logistic Regression,
* • Support Vector Classification,
* • XGBoost, and
* • MLP Neural Network.

To deal with the challenge of having a small amount of data on short text, we use data augmentation methods, specifically a word embedding based augmentation [Wang and Yang, 2015] approach that has been shown to work well on short text [Marivate and Sefara, 2019]. We use this approach since we are not able to use other augmentation methods such as synonym-based augmentation (which requires developed Wordnet synsets [Kobayashi, 2018]), language models (larger corpora are needed to train them) and back-translation (not readily available for South African languages). We develop and present the use of both word and document embeddings (the latter as an augmentation quality check), inspired by a hierarchical approach to augmentation [Yu et al., 2019].

## 4\. Experiments and Results

This section presents the experiments and results. As this is still work in progress, we present some avenues explored in both training classifiers and evaluating them for the task of news headline classification for Setswana and Sepedi.

### 4.1. Experimental Setup

For each classification problem, we perform 5-fold cross validation. For the bag-of-words and TFIDF vectorizers, we use a maximum token size of 20,000. For word embeddings and language embeddings we use size 50. All vectorizers were trained on the large corpora presented earlier.
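As a rough sketch of how such vectorizers can be built from the collected corpora (assuming scikit-learn and gensim >= 4, where vector_size replaced the older size argument; load_lines is a hypothetical loader over the Wikipedia, JW300, Bible, Constitution and SADILAR text):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from gensim.models import Word2Vec, FastText

corpus = load_lines()  # hypothetical: one string per line of corpus text

bow = CountVectorizer(max_features=20000).fit(corpus)    # Bag-of-Words
tfidf = TfidfVectorizer(max_features=20000).fit(corpus)  # TFIDF

tokenized = [line.split() for line in corpus]
w2v = Word2Vec(sentences=tokenized, vector_size=50)      # Word2Vec, size 50
ft = FastText(sentences=tokenized, vector_size=50)       # FastText, size 50
```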
#### 4.1.1. Baseline Experiments

We run the baseline experiments on the original data using 5-fold cross validation. We show the performance (in terms of weighted F1 score) in Figures 5 and 6; the baseline results are labelled _orig_. For both Bag-of-Words (TF) and TFIDF, the MLP performs very well compared to the other methods, and in general TFIDF performs better.

Figure 5: Baseline classification model performance for Setswana news title categorisation
Figure 6: Baseline classification model performance for Sepedi news title categorisation

#### 4.1.2. Augmentation

We applied augmentation in different ways. First, for the Sepedi and Setswana word embeddings (word2vec), we use word embedding-based augmentation. We augment each dataset 20 times on the training data while the validation data is left intact, so as to be comparable to the earlier baselines. We show the effect of augmentation in Figures 5 and 6 (performance labelled _aug_). The contextual, word2vec-based word augmentation improves the performance of most of the classifiers. If we now introduce a quality check using doc2vec (Algorithm 1; a runnable sketch is given at the end of this subsection), we also notice the impact on the performance for Sepedi (Figure 6, _aug qual_). We were not able to complete experiments with Setswana for the contextual augmentation with a quality check, but will continue working to better understand the impact of such an algorithm in general. For example, it remains further work to investigate the effects of different similarity thresholds on the overall performance, how such an algorithm behaves on highly-resourced versus low-resourced languages, how the algorithm can be made efficient, etc.

Input: $s$: a sentence, $run$: maximum number of attempts at augmentation
Output: $\hat{s}$: a sentence with words replaced

def Augment($s$, $run$):
    Let $\vv{V}$ be a vocabulary
    for $i$ in range($run$):
        $w_{i}\leftarrow$ randomly select a word from $s$
        $\vv{w}\leftarrow$ find similar words of $w_{i}$
        $s_{0}\leftarrow$ randomly select a word from $\vv{w}$, weighted by distance
        $\hat{s}\leftarrow$ replace $w_{i}$ with the similar word $s_{0}$
        $\vv{s}\leftarrow Doc2vec(s)$
        $\vv{\hat{s}}\leftarrow Doc2vec(\hat{s})$
        $similarity\leftarrow$ Cosine Similarity($\vv{s}$, $\vv{\hat{s}}$)
        if $similarity>threshold$:
            return $\hat{s}$

Algorithm 1: Contextual (word2vec-based) augmentation algorithm with a doc2vec quality check

Figure 7: Word2Vec feature based performance for news headline classification
Figure 8: Confusion Matrix of News headline classification models

It is also interesting to look at how classifiers trained only with word2vec features would fare. Deep neural networks are not used in this current work, so we did not use recurrent neural networks; but we can create sentence features from word2vec by using either the mean of all word vectors in a sentence, the median of all word vectors in a sentence, or the concatenated power means [Rücklé et al., 2018]. We show the performance of this approach, with the classifiers used for Bag-of-Words and TFIDF earlier, in Figure 7. The performance of this approach is slightly worse, with the best results for Sepedi news headline classification coming from XGBoost on the augmented data. We hope to improve this performance using word2vec feature vectors with recurrent neural networks, but we are currently of the view that increasing the sizes and the diversity of the corpora for the pre-trained word embeddings may yield even better results.
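A minimal Python sketch of Algorithm 1, assuming trained gensim Word2Vec and Doc2Vec models; the topn, runs and threshold values are illustrative and not specified in the paper, and cosine similarity scores are used as the sampling weights:

```python
import random
import numpy as np

def augment(sentence, w2v, d2v, runs=10, threshold=0.8):
    """Word2vec-based word replacement with a doc2vec quality check."""
    words = sentence.split()
    for _ in range(runs):
        i = random.randrange(len(words))
        if words[i] not in w2v.wv:
            continue
        # Candidate replacements, sampled with similarity as the weight.
        cands, sims = zip(*w2v.wv.most_similar(words[i], topn=5))
        repl = random.choices(cands, weights=sims, k=1)[0]
        new_words = words[:i] + [repl] + words[i + 1:]
        # Quality check: accept only if the doc2vec embedding of the new
        # sentence stays close to that of the original sentence.
        v1 = d2v.infer_vector(words)
        v2 = d2v.infer_vector(new_words)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        if cos > threshold:
            return " ".join(new_words)
    return sentence  # fall back to the original if no candidate passes
```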
Finally, we show the confusion matrix of the best model for Sepedi on a test set in Figure 8. The classifier categorises _General News_, _Politics_ and _Legal_ news headlines best; for the other categories there is more error. A larger news headline dataset is required, and classification performance will also need to be compared to models trained on full news data (with the article body). For the Setswana classifiers, the confusion matrix shows that the data skew results in models that can mostly only discriminate between the categories _General News_ and _Other_. We need to look at re-sampling techniques to improve this performance, as well as increasing the initial dataset size.

## 5\. Conclusion and Future Work

This work introduced the collection and annotation of Setswana and Sepedi news headline data. It remains a challenge that in South Africa, 9 of the 11 official languages have very little data of this kind available to researchers for building downstream models that can be used in different applications. Through this work we hope to provide an example of what may be possible even when we have a limited annotated dataset. We exploit the availability of other free text data in Setswana and Sepedi in order to build pre-trained vectorizers for the languages (which are released as part of this work) and then train classification models for news categories. It remains future work to collect more local-language news headlines and text to train more models; we have identified other government news sources that can be used. Having trained embedding models with the data we have collected, further studies are needed to look at how augmentation using these embedding models improves the quality of the resulting classifiers.

## 6\. Bibliographical References

## References

* Bojanowski et al., 2017 Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.
* Bureau of Circulations, 2019 Bureau of Circulations, A. (2019). Newspaper circulation statistics for the period january–march 2019 (abc q1 2019).
* Cubuk et al., 2019 Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. (2019). Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 113–123.
* Johnson et al., 2017 Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G., et al. (2017). Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351.
* Kobayashi, 2018 Kobayashi, S. (2018). Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 452–457.
* Lewis, 1997 Lewis, D. D. (1997). Reuters-21578 text categorization collection data set.
* Marivate and Sefara, 2019 Marivate, V. and Sefara, T. (2019). Improving short text classification through global augmentation methods. arXiv preprint arXiv:1907.03752.
* Marivate and Sefara, 2020a Marivate, V. and Sefara, T. (2020a). African embeddings [nlp]. https://doi.org/10.5281/zenodo.3668481, February.
* Marivate and Sefara, 2020b Marivate, V. and Sefara, T. (2020b). South African news data dataset. https://doi.org/10.5281/zenodo.3668489.
* Mikolov et al., 2013 Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119.
* Nettle, 1998 Nettle, D. (1998). Explaining global patterns of language diversity. Journal of anthropological archaeology, 17(4):354–374.
* Rücklé et al., 2018 Rücklé, A., Eger, S., Peyrard, M., and Gurevych, I. (2018). Concatenated power mean word embeddings as universal cross-lingual sentence representations. arXiv preprint arXiv:1803.01400.
* Sandhaus, 2008 Sandhaus, E. (2008). The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.
* Silfverberg et al., 2017 Silfverberg, M., Wiemerslage, A., Liu, L., and Mao, L. J. (2017). Data augmentation for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 90–99.
* Strassel and Tracey, 2016 Strassel, S. and Tracey, J. (2016). Lorelei language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 3273–3280.
* Wang and Yang, 2015 Wang, W. Y. and Yang, D. (2015). That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2557–2563.
* Wei and Zou, 2019 Wei, J. and Zou, K. (2019). Eda: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6383–6389.
* Yu et al., 2019 Yu, S., Yang, J., Liu, D., Li, R., Zhang, Y., and Zhao, S. (2019). Hierarchical data augmentation and the application in text classification. IEEE Access, 7:185476–185485.
* Zhang et al., 2015 Zhang, X., Zhao, J., and LeCun, Y. (2015). Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657.
* Zoph et al., 2016 Zoph, B., Yuret, D., May, J., and Knight, K. (2016). Transfer learning for low-resource neural machine translation. arXiv preprint arXiv:1604.02201.
2024-09-04T02:54:58.932182
2020-03-04T06:40:14
2003.04991
{ "authors": "Jitin Krishnan, Hemant Purohit and Huzefa Rangwala", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26148", "submitter": "Jitin Krishnan", "url": "https://arxiv.org/abs/2003.04991" }
arxiv-papers
# Unsupervised and Interpretable Domain Adaptation to Rapidly Filter Tweets for Emergency Services Jitin Krishnan Department of Computer Science George Mason University Fairfax, VA, USA <EMAIL_ADDRESS>Hemant Purohit Department of Information Sciences & Technology George Mason University Fairfax, VA, USA <EMAIL_ADDRESS>Huzefa Rangwala Department of Computer Science George Mason University Fairfax, VA, USA <EMAIL_ADDRESS> ###### Abstract During the onset of a natural or man-made crisis event, public often share relevant information for emergency services on social web platforms such as Twitter. However, filtering such relevant data in real-time at scale using social media mining is challenging due to the short noisy text, sparse availability of relevant data, and also, practical limitations in collecting large labeled data during an ongoing event. In this paper, we hypothesize that unsupervised domain adaptation through multi-task learning can be a useful framework to leverage data from past crisis events for training efficient information filtering models during the sudden onset of a new crisis. We present a novel method to classify relevant social posts during an ongoing crisis without seeing any new data from this event (fully unsupervised domain adaptation). Specifically, we construct a customized multi-task architecture with a multi-domain discriminator for crisis analytics: multi-task domain adversarial attention network (MT-DAAN). This model consists of dedicated attention layers for each task to provide model interpretability; critical for real-word applications. As deep networks struggle with sparse datasets, we show that this can be improved by sharing a base layer for multi-task learning and domain adversarial training. The framework is validated with the public datasets of TREC incident streams that provide labeled Twitter posts (tweets) with relevant classes (Priority, Factoid, Sentiment) across 10 different crisis events such as floods and earthquakes. Evaluation of domain adaptation for crisis events is performed by choosing one target event as the test set and training on the rest. Our results show that the multi-task model outperformed its single-task counterpart. For the qualitative evaluation of interpretability, we show that the attention layer can be used as a guide to explain the model predictions and empower emergency services for exploring accountability of the model, by showcasing the words in a tweet that are deemed important in the classification process. Finally, we show a practical implication of our work by providing a use-case for the COVID-19 pandemic. ###### Index Terms: Social Media, Crisis Analytics, Text Classification, Unsupervised Domain Adaptation, Interpretability Figure 1: Problem Statement: Interpretably predict labels for tweets collected during an ongoing crisis using only the past crisis data, given a) unavailability of labeled data in the ongoing event, and b) need for interpretability of machine reasoning behind data filtering for emergency managers. ## I Introduction During the sudden onset of a crisis situation, social media platforms such as Twitter provide valuable information to aid crisis response organizations in gaining real-time situational awareness [1]. Effective analysis of important information such as affected individuals, infrastructure damage, medical emergencies, or food and shelter needs can help emergency responders make time-critical decisions and allocate resources in the most effective manner [2, 3, 4]. 
Several machine learning systems have been deployed to help towards this humanitarian goal of converting real-time social media streams into actionable knowledge. Classification being the most common task, researchers have designed models [5, 6, 7, 3] that classify tweets into various crisis-dependent categories such as priority, affected individuals, type of damage, type of assistance needed, usefulness of the tweet, etc. Social media streams contain short, informal, and abbreviated content that may include linguistic errors and contextual ambiguity. These inherently challenging properties of tweets make their classification task and formulation less trivial when compared to traditional text classification tasks. In this paper, we address two practically important and underdeveloped aspects of current research in social media mining for crisis analytics to classify relevant social web posts: a) fully unsupervised domain adaptation, and b) interpretability of predictions. A fully unsupervised domain adaptation uses no data from the ongoing crisis to train the model. Nguyen et al., 2016 [5] showed that their convolutional neural network (CNN) model does not require feature engineering and performed better than the state-of-the-art methods, one of their models being completely unsupervised [5]. Similarly, Alam et al., 2018 [6] designed a CNN architecture with adversarial training on graph embeddings, but utilizing unlabeled target data. Our goal is to construct an unsupervised model that does not require any unlabeled target data and that is capable of being interpretable. We specifically address the problem of data sparsity and limited labels by designing a multi-task classification model with domain adversarial training, which, to the best of our knowledge, has not been explored in social media mining for crisis analytics. Another crucial component of our model is interpretability. In prior works, when a top performing model produces an accuracy of $78\%$, for instance, it is unclear how trustworthy it is and what features are deemed important in the model’s decision-making process. An interpretable model like ours can present convincing evidence of which words the classifier deems important when making a certain prediction, and helps ensure reliability for domain users, e.g., emergency managers. Contributions: a) To address the problems of data sparsity and limited labels, we construct a customized multi-task learning architecture (MT-DAAN) to filter tweets for crisis analytics by training four different classification tasks (cf. examples in Fig. 3) across ten different crisis events under domain shift. This multi-task domain adversarial model consists of dedicated attention layers for each task for interpretability and a domain classifier branch to promote the model to be domain-agnostic. b) We demonstrate that the attention layers provide interpretability for the predictions made by the classifiers, with the goal of aiding emergency services in a more meaningful way. c) We empirically validate the performance of the underlying single-task attention-based neural network architecture by comparing it to the state-of-the-art methods, for improving generalizability and interpretability for domain adaptation in unsupervised tweet classification tasks in general. d) Additionally, through experiments, we show that deep networks struggle with small datasets, and that this can be improved by sharing the base layer for multi-task learning and domain adversarial training.
## II Related Work and Background

### II-A Domain Adaptation

Domain adaptation in text classification has a long line of fruitful research [8, 9, 10] that tries to minimize the difference between domains, so that a model trained solely on one domain generalizes to unseen test data from a completely different domain. With the introduction of Domain-Adversarial training of Neural Networks (DANN) [11], many state-of-the-art models now utilize unlabeled target data to train classifiers that are indiscriminate toward different domains. The speciality of this architecture is that it consists of an extra branch, which performs domain classification using unlabeled data from different domains. Thus, both task and domain classifiers share some bottom layers but have separate layers towards the top. A negative gradient (gradient reversal) from the domain classifier branch is back-propagated to make the features at the lower layers of the network incapable of discriminating between the domains. Recent works such as the Adversarial Memory Network (AMN) [12] utilize attention, in addition to DANN, to bring interpretability by capturing the pivots for sentiment classification. The Hierarchical Attention Network (HATN) [13] expands upon AMN by first extracting pivots and then jointly training networks for both pivots and non-pivots. For filtering social web data for crisis analytics, these models do not suffice and need customized expansions, for the following reasons: a) Collecting and using large unlabeled target data from the new/ongoing crisis event may not be practically viable; thus, we aim for fully unsupervised modeling. b) Having access to unlabeled data from multiple crisis events can alleviate the above problem to an extent, by using it to train the domain classifier branch to push the model to be domain-independent. c) Due to the low-resource nature of the dataset, binary classifiers may miss important lower-level features, which can potentially be improved by a multi-task model that shares the lower layers of the network across all the tasks. This is also evident from our results in Tables III and IV, which show that deep models that perform much better than simple models on Amazon reviews do not significantly outperform them on the TREC tweet dataset for crises.

### II-B Multi-Task Learning

Multi-Task Learning (MTL) solves multiple tasks at the same time with the goal of improving the overall generalization capability of the model [14]. Within the context of deep learning, MTL is performed by sharing (or constraining) lower-level layers and using dedicated upper-level layers for the various tasks. A rich overview of MTL in deep neural networks is presented by Ruder (2017) [15]. MTL has been a successful strategy over the past few years for many research explorations, such as relationship networks [16] in computer vision and Sluice networks [17] in natural language processing. Similar problems in domain adaptation of semantic classification and information retrieval were addressed by jointly learning to leverage large amounts of cross-task data [18]. On low-resource datasets such as those for crises, the chance of overfitting is very high. Thus, it seems intuitively better for the model to find a shared representation capturing different tasks and not just one, such that feature commonalities across tasks can be exploited.
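As background for the adversarial component used later, the following is a minimal sketch of the gradient reversal mechanism described in Section II-A, assuming TensorFlow 2.x (this is an illustration, not the authors' implementation):

```python
import tensorflow as tf

class GradientReversal(tf.keras.layers.Layer):
    """Identity in the forward pass; scales the gradient by -lambda backward."""
    def __init__(self, lam=1.0, **kwargs):
        super().__init__(**kwargs)
        self.lam = lam  # strength of the reversal (lambda in the text)

    def call(self, x):
        @tf.custom_gradient
        def _reverse(x):
            def grad(dy):
                return -self.lam * dy  # negative gradient toward shared layers
            return tf.identity(x), grad
        return _reverse(x)

# Usage: features = shared_encoder(inputs)
#        domain_logits = domain_classifier(GradientReversal(lam=1.0)(features))
```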
### II-C Attention Mechanism

The attention mechanism [19], originally designed for machine translation problems, has become one of the most successful and widely used methods in deep learning, able to look at one part of a sentence at a time the way humans do. It is particularly useful because of its ability to construct a context vector by weighting the entire input sequence, unlike previous sequence-to-sequence models [20] that used only the last hidden state of the encoder network (typically a BiLSTM [21], LSTM [22], or GRU [23]). For example, in a sentence, the context vector is a dot product of the word activations and the weights associated with each word, leading to improved contextual memorization, especially for long sentences. Our method incorporates such attention mechanisms to enhance the interpretability of the classifier.

## III Methodology

### III-A Problem Statement: Unsupervised Domain Adaptation for Crisis Tweet Classification

Using the notations in Table I, consider a set $C$ of all crisis events, such as Guatemala Earthquake or Typhoon Yolanda. The task of unsupervised domain adaptation for crisis analytics is to train a classifier for a specific target crisis ($c_{t}$) using labeled ($L_{C-c_{t}}$) and unlabeled ($U_{C-c_{t}}$) data from all other crises, where $C-c_{t}$ denotes the set of all crisis events minus the target crisis. We assume that no data record from the target crisis is available for training. Following the traditional domain adaptation terminology, $X_{s}=L_{C-c_{t}}$ represents the labeled data from the source domain $S$, and $Y_{s}=y_{C-c_{t}}$ represents the ground truth labels on which the classifier is trained. $X_{t}=L_{c_{t}}$ represents the labeled data from the target domain $T$, and $Y_{t}=y_{c_{t}}$ represents its ground truth labels; both are only used for testing the classifier. $X_{d}=U_{C-c_{t}}$ represents the unlabeled data from the different domains minus the target. To summarize:

Input: $X_{s}$, $Y_{s}$, $X_{d}$
Output: $Y_{t}^{pred}\leftarrow predict(X_{t})$

Notation | Definition
---|---
$C$ | Set of all crisis events $\{c_{1},c_{2},...\}$
$L_{c_{k}}$ | Set of labeled data from the event $c_{k}$
$y_{c_{k}}$ | Set of ground truth labels for $L_{c_{k}}$
$m$ | Number of tasks (number of bits in each label)
$U_{c_{k}}$ | Set of unlabeled data from the event $c_{k}$
$T_{x}$ | Number of words in a sentence
$x^{<k>}$ | $k$-th word of a sentence
$\alpha^{<k>}$ | Attention from the $k$-th word
$a^{<k>}$ | BiLSTM activation from the $k$-th word

TABLE I: Notations

### III-B Overview

In the following sections, we describe three models: the Single-Task Attention Network (ST), the Single-Task Domain Adversarial Attention Network (ST-DAAN), and the Multi-Task Domain Adversarial Attention Network (MT-DAAN). ST is the model we adopt from [24] to build the single-task attention-based baseline. ST-DAAN is constructed on top of ST to make the model domain-agnostic by performing adversarial training using gradient reversal. Finally, MT-DAAN is constructed on top of ST-DAAN with dedicated attention layers for each task on a shared BiLSTM layer. This is shown in Figure 2.

Figure 2: Fully Unsupervised Domain Adaptation Set-up for Multi-Task Crisis Tweet Classification.

### III-C Single-Task Attention Network (ST)

We first describe the single-task attention network [24] on top of which we build our models. This model aligns with our goals of interpretability and unsupervised domain adaptation.
This BiLSTM-based model with attention gives us three main advantages:

1. 1. Unlike several existing domain adaptation methods that use unlabeled target data to train the domain adversarial component via gradient reversal, this method is a fully unsupervised baseline which can also be customized for multi-task learning.
2. 2. The method uses an attention mechanism which weighs each word in a sentence based on its importance. This can be directly utilized for interpretability.
3. 3. The method also runs much faster (only a few minutes) than top-performing semi-supervised models such as HATN [13] (hours), which is highly useful in crisis times.

This model [24] consists of a BiLSTM layer which produces $T_{x}$ activations, each corresponding to a word in the sentence. These activations are passed through dense and softmax layers and are combined by dot product to produce the context vector $\sum_{k=1}^{T_{x}}\alpha^{<k>}a^{<k>}$, where $a^{<k>}$ is the BiLSTM activation from the $k$-th word and $\alpha^{<k>}$ is the attention weight of the $k$-th word. Sentences with more than $T_{x}$ words are stripped, and those with fewer than $T_{x}$ words are padded. This single-task ($m=1$) attention network is the building block with which the following models are constructed. The single-task binary cross-entropy loss function is shown below:

$L_{T}=-\frac{1}{N}\sum_{i=1}^{N}[y_{i}\log\hat{y_{i}}+(1-y_{i})\log(1-\hat{y_{i}})]$ (1)

where $T$ represents the task, $y$ is the true label, and $\hat{y}$ is the predicted label.

### III-D Single-Task Domain Adversarial Attention Network (ST-DAAN)

To study the specific contribution of domain adversarial training, we construct a secondary baseline over the ST architecture by adding a branch with a gradient reversal layer, represented by the green blocks in Figure 2. This is a single-task binary classifier with $m=1$. Domain-Adversarial Training of Neural Networks (DANN) [11] was introduced with the goal of confusing the classifier by back-propagating a negative gradient from a separate domain classifier branch (the right-most branch in Figure 2). This makes the classifier agnostic to differences in domains. The back-propagation is implemented using a gradient reversal layer [11], which does nothing during the forward pass but pushes a negative gradient ($-\lambda\frac{\partial L_{d}}{\partial\theta_{f}}$) during the backward (gradient update) pass. $L_{d}$ is the domain classification loss, $\lambda$ is the strength of the reversal, and $f$ represents the lower-level layers or features over which the negative gradient update is performed. In our architecture, the goal is to make the BiLSTM layer indiscriminate toward the various crisis domains, so that the multi-task classification does not depend on the domain from which a tweet/sentence comes. The ST-DAAN loss function is shown below:

$L_{T}^{\prime}=L_{T}+w_{d}L_{d}$ (2)

where $w_{d}$ is the domain adversarial loss weight. $L_{d}$ is the categorical cross-entropy loss of the multi-domain discriminator:

$L_{d}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{|C-c_{t}|}[y_{ij}\log\hat{y_{ij}}]$ (3)

where $C-c_{t}$ is the set of all crisis events without the target event.
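To make the ST building block concrete, the following is a minimal Keras sketch of the BiLSTM-with-attention classifier (not the authors' code; layer sizes are illustrative). The attention weights $\alpha$ are exposed as a second output so they can be inspected for interpretability, and the ST-DAAN domain branch would attach a gradient reversal layer to the same context vector:

```python
import tensorflow as tf
from tensorflow.keras import layers

Tx, vocab_size, emb_dim, hidden = 30, 20000, 100, 64  # illustrative sizes

inp = layers.Input(shape=(Tx,))
emb = layers.Embedding(vocab_size, emb_dim)(inp)
a = layers.Bidirectional(layers.LSTM(hidden, return_sequences=True))(emb)

# One score per word, softmax-normalized over the Tx positions -> alpha^<k>.
scores = layers.Dense(1)(a)                 # shape (Tx, 1)
alpha = layers.Softmax(axis=1)(scores)      # attention weights over words
# Context vector: sum_k alpha^<k> * a^<k>.
context = layers.Dot(axes=1)([alpha, a])    # shape (1, 2*hidden)
context = layers.Flatten()(context)

y_hat = layers.Dense(1, activation="sigmoid")(context)  # task prediction
model = tf.keras.Model(inp, [y_hat, alpha])  # alpha exposed for inspection
model.compile(optimizer="adam",
              loss=["binary_crossentropy", None])  # Eq. (1) on the task head
```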
### III-E Multi-Task Domain Adversarial Attention Network (MT-DAAN)

Building on top of ST-DAAN, we construct MT-DAAN, which is intended for classification problems with multiple tasks or labels. For each task, a dedicated attention layer is allocated, from which binary labels are predicted. The BiLSTM layer remains exactly the same as in the single-task model, but multiple attention blocks are added, one per task, along with a domain classifier. In the architecture decision process, we first investigated a multi-label classifier where all layers are shared and the final softmax layer makes multi-label predictions. In low-resource settings, constructing a multi-label classifier using a shared architecture is challenging for two reasons: a) jointly balancing positive and negative samples across all classes is not trivial, and it is potentially hard to keep the model extensible when new classes need to be added; and b) the attention layer may not always produce class-specific insights, as the weights are trained for the combination of labels. On the other hand, in the multi-task architecture with separate attention layers, it is easy to add more classes, and if some classes require more training, it is trivial to further tune a model specific to that class. More importantly, the $context^{<t_{j}>}$ vector of the $j$-th task identifies the influential words in each sentence for that specific task. The complete architecture is shown in Figure 2. The MT-DAAN loss function is shown below:

$L_{MT-DAAN}=\sum_{k=1}^{m}(w_{k}L_{T_{k}})+w_{d}L_{d}$ (4)

where $m$ is the number of tasks, $w_{k}$ is the loss weight and $L_{T_{k}}$ the loss term for each task, $w_{d}$ is the domain adversarial loss weight, and $L_{d}$ is the domain adversarial loss term.

### III-F Model Interpretability

The output ($\alpha$) of the attention layer ($ATT$) of each task is a $T_{x}$-dimensional vector, $T_{x}$ being the number of words in the sentence. The context vector ($\sum_{k=1}^{T_{x}}\alpha^{<k>}a^{<k>}$) is the product of these attention weights and the $T_{x}$ activations ($a$) from the $BiLSTM$ layer. $\alpha$ essentially weighs how much each word in the sentence contributes to the classification result; thus, $\alpha$ is the component that is evaluated for model interpretability.

## IV DATASETS

CRISIS EVENTS | Total Tweets | Vocab | Avg #words | P | F | S | I
---|---|---|---|---|---|---|---
2012 Guatemala Earthquake | 154 | 422 | 18.74 | 104 | 108 | 12 | 15
2013 Typhoon Yolanda | 564 | 1746 | 19.47 | 249 | 46 | 119 | 51
2013 Australia Bushfire | 677 | 2102 | 20.21 | 152 | 213 | 167 | 36
2013 Boston Bombings | 535 | 1755 | 19.30 | 147 | 28 | 234 | 198
2013 Queensland Floods | 713 | 2301 | 19.08 | 293 | 54 | 173 | 215
2014 Chile Earthquake | 311 | 919 | 16.54 | 48 | 26 | 50 | 10
2014 Typhoon Hagupit | 1470 | 2893 | 15.36 | 469 | 375 | 276 | 101
2015 Nepal Earthquake | 2048 | 4026 | 13.77 | 1067 | 377 | 741 | 133
2015 Paris Attacks | 2066 | 4152 | 18.62 | 306 | 183 | 782 | 429
2018 Florida School Shooting | 1118 | 2940 | 21.40 | 329 | 64 | 206 | 70

TABLE II: TREC dataset statistics, showing the number of positive samples for each of the 4 classes. $P$=Priority, $F$=Factoid, $S$=Sentiment, and $I$=Irrelevant.

### IV-A TREC Dataset

TREC-IS (Text Retrieval Conference - Incident Streams, http://dcs.gla.ac.uk/~richardm/TREC_IS/) is a program that encourages research in information retrieval from social media posts, with the goal of improving the state-of-the-art social media based crisis analytics solutions. We use the dataset from the 2018 track proposal. Statistics of this curated Twitter dataset are shown in Table II. The original dataset consisted of 15 crisis events.
## IV DATASETS

CRISIS EVENTS | Total Tweets | Vocab | Avg #words | P | F | S | I
---|---|---|---|---|---|---|---
2012 Guatemala Earthquake | 154 | 422 | 18.74 | 104 | 108 | 12 | 15
2013 Typhoon Yolanda | 564 | 1746 | 19.47 | 249 | 46 | 119 | 51
2013 Australia Bushfire | 677 | 2102 | 20.21 | 152 | 213 | 167 | 36
2013 Boston Bombings | 535 | 1755 | 19.30 | 147 | 28 | 234 | 198
2013 Queensland Floods | 713 | 2301 | 19.08 | 293 | 54 | 173 | 215
2014 Chile Earthquake | 311 | 919 | 16.54 | 48 | 26 | 50 | 10
2014 Typhoon Hagupit | 1470 | 2893 | 15.36 | 469 | 375 | 276 | 101
2015 Nepal Earthquake | 2048 | 4026 | 13.77 | 1067 | 377 | 741 | 133
2015 Paris Attacks | 2066 | 4152 | 18.62 | 306 | 183 | 782 | 429
2018 Florida School Shooting | 1118 | 2940 | 21.40 | 329 | 64 | 206 | 70

TABLE II: TREC Dataset Statistics, showing the number of positive samples for each of the 4 classes. $P$=Priority, $F$=Factoid, $S$=Sentiment, and $I$=Irrelevant.

### IV-A TREC Dataset

TREC-IS (Text Retrieval Conference - Incident Streams; http://dcs.gla.ac.uk/~richardm/TREC_IS/) is a program that encourages research in information retrieval from social media posts, with the goal of improving the state-of-the-art social media based crisis analytics solutions. We use the dataset from the 2018 track proposal. Statistics of this curated Twitter dataset downloaded from TREC are shown in Table II. The original dataset consisted of 15 crisis events. However, due to very low data, we trimmed the events and tasks such that there are at least 10 positive samples for each task. The four tasks used in our experiments are shown below:

1. Priority: Different priority levels are assigned for each tweet: low, medium, high, critical. We convert this into a binary classification problem where $low=0$ and $\{medium, high, critical\}=1$.
2. Factoid: ‘Factoid’ is a categorical label that represents whether a tweet is stating a fact. E.g., ‘death toll rises ...’
3. Sentiment: ‘Sentiment’ is a categorical label that represents whether a tweet expresses a sentiment. E.g., ‘Worried.. Thoughts and prayers.’
4. Irrelevant: ‘Irrelevant’ is a categorical label for tweets that do not provide any relevant information.

### IV-B Amazon Reviews Dataset

The standard benchmark dataset of Amazon reviews [25] (http://www.cs.jhu.edu/~mdredze/datasets/sentiment/) is widely used for cross-domain sentiment analysis. We chose four domains: Books (B), Kitchen (K), DVD (D), and Electronics (E). The raw data used in this work, a part of Blitzer’s original raw dataset, is from HATN [13] (https://github.com/hsqmlzno1/HATN/tree/master/raw_data). This dataset consists of $3000$ positive and $3000$ negative samples for each of the $4$ domains. This dataset is used for two purposes: 1) to validate the performance of the state-of-the-art methods, including the single-task baseline, and 2) to compare and contrast the performance of deep models when trained with rich versus sparse datasets.

### IV-C COVID-19 Tweet Dataset

For the COVID-19 use-case, we use Twitter posts collected using the CitizenHelper [26] system in March 2020, for the geo-bounding box of the Washington D.C. Metro region. These tweets were annotated by volunteers of regional Community Emergency Response Teams (CERTs), with a ‘Relevant’ label denoting how relevant a tweet is for crisis response operations. The label values range on a scale of $1$-$4$. We convert them into binary classes by considering values $1$ and $2$ as the $-$ve ($0$) class and values $3$ and $4$ as the $+$ve ($1$) class. This dataset consists of $4911$ tweets with the $-$ve ($Relevant$=$0$) label and $637$ tweets with the $+$ve ($Relevant$=$1$) label. Following the unsupervised domain adaptation criteria, the filtering models are trained using only the TREC dataset and evaluated on the COVID-19 tweets. For each independent run of the experiment, a balanced subset of size $637$ for both classes is selected for testing.
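For reference, the two label binarizations described above reduce to one-line mappings; a minimal sketch (function names are ours):

```python
def binarize_priority(level: str) -> int:
    # Section IV-A: "low" -> 0; "medium", "high", "critical" -> 1.
    return 0 if level == "low" else 1

def binarize_relevance(score: int) -> int:
    # Section IV-C: CERT scores {1, 2} -> 0 (-ve), {3, 4} -> 1 (+ve).
    return 1 if score >= 3 else 0
```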
S $\rightarrow$ T | LR | SVM | CNN | BiLSTM | AMN | HATN | ST
---|---|---|---|---|---|---|---
B $\rightarrow$ K | 76.40 | 75.95 | 81.20 | 84.45 | 81.88 | 87.03 | 87.22
B $\rightarrow$ E | 75.53 | 74.05 | 80.44 | 84.61 | 80.55 | 85.75 | 85.51
B $\rightarrow$ D | 81.08 | 81.43 | 82.94 | 83.52 | 85.62 | 87.07 | 86.32
K $\rightarrow$ B | 76.12 | 75.78 | 78.78 | 80.67 | 79.05 | 84.88 | 81.85
K $\rightarrow$ E | 80.37 | 81.20 | 85.17 | 87.37 | 86.68 | 89.00 | 87.09
K $\rightarrow$ D | 73.32 | 74.98 | 76.41 | 78.49 | 79.50 | 84.72 | 81.13
E $\rightarrow$ B | 74.85 | 74.18 | 78.08 | 81.18 | 77.52 | 84.03 | 81.50
E $\rightarrow$ K | 81.85 | 81.85 | 86.59 | 89.00 | 87.83 | 90.08 | 89.21
E $\rightarrow$ D | 75.82 | 75.83 | 78.35 | 78.46 | 85.03 | 84.32 | 81.37
D $\rightarrow$ B | 81.17 | 82.20 | 82.26 | 84.83 | 84.53 | 87.78 | 87.02
D $\rightarrow$ K | 76.42 | 77.58 | 81.09 | 85.21 | 81.67 | 87.47 | 86.37
D $\rightarrow$ E | 72.47 | 73.68 | 79.56 | 83.66 | 80.42 | 86.32 | 85.63
AVG | 77.12 | 77.39 | 80.91 | 83.45 | 82.52 | 86.54 | 85.02

TABLE III: Performance comparison (accuracy) of various models on the standard benchmark dataset of Amazon reviews. Methods in blue do not use any unlabeled target data; hence relevant in our context. Each reported score is an average of 10 independent runs of each experiment.

Target | LR | SVM | CNN | BiLSTM | ST
---|---|---|---|---|---
Guatemala Earthquake | 60.14 | 56.76 | 60.47 | 65.54 | 59.97
Typhoon Yolanda | 65.39 | 65.97 | 63.05 | 65.49 | 65.53
Australia Bushfire | 65.61 | 63.23 | 62.10 | 60.10 | 62.44
Boston Bombings | 71.47 | 75.45 | 69.72 | 71.43 | 72.08
Queensland Floods | 65.56 | 64.81 | 64.13 | 66.01 | 66.21
Chile Earthquake | 43.09 | 37.94 | 43.37 | 35.45 | 39.23
Typhoon Hagupit | 49.86 | 46.22 | 49.21 | 54.13 | 52.61
Nepal Earthquake | 57.11 | 55.39 | 58.61 | 60.49 | 61.35
Paris Attacks | 71.43 | 71.72 | 72.50 | 72.14 | 71.31
Florida School Shooting | 58.79 | 63.02 | 58.82 | 59.71 | 60.55
AVG | 60.85 | 60.05 | 60.20 | 61.05 | 61.13

TABLE IV: Performance comparison (accuracy) of unsupervised models on the TREC-Priority (tweet) dataset, showing that deep models are not strictly superior to simpler models due to data sparsity. Each reported score is an average of 10 independent runs of each experiment. $Source$ = $Everything$ $-$ $Target$.
TARGET | Priority | Factoid ---|---|--- | ST | ST-DAAN | MT-DAAN | ST | ST-DAAN | MT-DAAN | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 Guatemala Earthquake | 59.97 | 62.39 | 69.07 | 69.66 | 69.05 | 69.34 | 68.92 | 68.47 | 79.90 | 80.76 | 84.05 | 97.01 Typhoon Yolanda | 65.53 | 65.47 | 66.07 | 63.73 | 67.42 | 67.30 | 80.50 | 84.42 | 82.71 | 85.61 | 84.36 | 86.93 Australia Bushfire | 62.44 | 66.69 | 61.07 | 63.42 | 61.93 | 64.28 | 64.58 | 60.69 | 65.64 | 60.53 | 65.04 | 60.13 Boston Bombings | 72.08 | 74.29 | 72.34 | 73.37 | 73.80 | 74.74 | 83.10 | 88.51 | 81.42 | 85.90 | 85.82 | 88.82 Queensland Floods | 66.21 | 65.94 | 67.19 | 66.97 | 66.74 | 66.46 | 37.56 | 48.90 | 50.46 | 59.82 | 49.52 | 59.21 Chile Earthquake | 39.23 | 40.92 | 38.91 | 42.37 | 41.80 | 46.33 | 30.38 | 33.97 | 39.87 | 48.68 | 45.28 | 54.58 Typhoon Hagupit | 52.61 | 50.59 | 58.97 | 58.94 | 57.50 | 57.52 | 68.98 | 70.79 | 71.42 | 72.44 | 69.49 | 70.08 Nepal Earthquake | 61.35 | 59.44 | 60.18 | 57.80 | 61.65 | 59.49 | 74.04 | 76.08 | 80.72 | 81.00 | 81.04 | 81.02 Paris Attacks | 71.31 | 76.26 | 70.42 | 74.08 | 74.44 | 77.21 | 75.78 | 80.35 | 82.35 | 84.89 | 82.52 | 85.63 Florida School Shooting | 60.55 | 61.75 | 65.47 | 64.07 | 62.51 | 63.24 | 76.73 | 82.67 | 84.55 | 87.51 | 85.80 | 88.15 AVG | 61.13 | 62.37 | 62.97 | 63.44 | 63.68 | 64.59 | 66.06 | 69.49 | 71.90 | 74.71 | 73.29 | 77.16 TARGET | Sentiment | Irrelevant ---|---|--- | ST | ST-DAAN | MT-DAAN | ST | ST-DAAN | MT-DAAN | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 Guatemala Earthquake | 96.96 | 97.03 | 96.45 | 96.68 | 96.76 | 92.73 | 89.36 | 89.03 | 91.22 | 91.06 | 93.11 | 92.73 Typhoon Yolanda | 75.81 | 77.62 | 77.54 | 79.01 | 76.82 | 78.35 | 76.05 | 79.77 | 78.49 | 80.59 | 80.46 | 82.31 Australia Bushfire | 75.95 | 77.58 | 78.80 | 79.12 | 78.54 | 78.92 | 35.42 | 47.164 | 53.78 | 65.11 | 51.76 | 63.36 Boston Bombings | 81.39 | 81.11 | 80.73 | 80.70 | 82.13 | 82.10 | 58.15 | 55.73 | 58.15 | 57.43 | 61.49 | 61.45 Queensland Floods | 81.69 | 80.39 | 81.05 | 81.39 | 81.53 | 81.32 | 65.68 | 65.36 | 67.26 | 65.72 | 67.88 | 67.27 Chile Earthquake | 92.69 | 92.91 | 93.10 | 93.21 | 93.62 | 93.68 | 75.16 | 84.98 | 80.46 | 86.38 | 80.64 | 86.56 Typhoon Hagupit | 84.98 | 85.86 | 85.15 | 86.14 | 85.43 | 86.38 | 63.21 | 75.04 | 71.50 | 78.25 | 70.22 | 77.27 Nepal Earthquake | 67.75 | 68.42 | 70.20 | 70.51 | 69.96 | 70.31 | 31.79 | 42.10 | 36.97 | 47.41 | 41.49 | 52.87 Paris Attacks | 76.01 | 76.63 | 73.65 | 73.98 | 74.47 | 74.60 | 33.91 | 35.25 | 44.52 | 48.32 | 47.17 | 51.32 Florida School Shooting | 68.77 | 71.77 | 67.06 | 70.03 | 68.14 | 71.05 | 32.66 | 40.90 | 44.22 | 55.27 | 47.64 | 58.65 AVG | 80.20 | 80.93 | 80.37 | 81.08 | 80.74 | 80.94 | 56.14 | 61.53 | 62.66 | 67.55 | 64.19 | 69.38 TABLE V: Unsupervised domain adaptation results on TREC dataset showing performance boost for Priority, Factoid, and Irrelevant tasks. However, Sentiment task did not show a significant improvement. See performance evaluation section for details. Each reported score is an average of 10 independent runs of each experiment. TARGET | Relevant ---|--- | ST | ST-DAAN | MT-DAAN | Acc | F1 | Acc | F1 | Acc | F1 COVID-19 | 73.25 | 77.36 | 74.55 | 77.51 | 77.00 | 78.09 TABLE VI: Unsupervised domain adaptation results for COVID-19 tweets using only the TREC dataset for training. Each reported score is an average of 10 independent runs of each experiment. 
## V Results & Discussion

We first validate the performance of the adopted unsupervised ST model [24] by comparing it with the following standard neural network architectures and state-of-the-art models used for domain adaptation in text. We use the standard benchmark dataset of Amazon reviews. Following the traditional domain adaptation experimental setup, each experiment, represented as S $\rightarrow$ T, consists of a source domain (S) on which the model is trained and a target domain (T) on which the model is tested. We use the Keras deep learning library for our implementations, with $T_{x}$=$200$ for Amazon reviews and $30$ for tweets. We use the Adam optimizer with a dropout of $0.4$, a maximum of $50$ epochs, an early-stopping patience of $3$, a batch size of $32$, and a validation split of $0.15$.

1. Simple Baselines: We construct simple baseline classifiers [27]: Logistic Regression (LR) and Support Vector Machines (SVM). The inputs to these models are constructed by aggregating the $300$-dimensional word embeddings of the words in each review.
2. CNN: A standard Convolutional Neural Network inspired by Kim, 2014 [28] is constructed with the following architecture: $Word\ Embeddings(T_{x},300)\rightarrow Conv1D(128,5)\rightarrow MaxPooling1D(5)\rightarrow Conv1D(128,5)\rightarrow MaxPooling1D(5)\rightarrow Conv1D(128,5)\rightarrow GlobalMaxPooling1D()\rightarrow Dense(128)\rightarrow Dense(2)\rightarrow y$. This is combined with dropouts and relu activations, ending with a softmax activation producing labels for binary classification (a Keras sketch of this stack is given after this list). State-of-the-art deep learning methods for existing social media mining approaches to crisis analytics [6, 5] use a similar architecture.
3. BiLSTM: This is the bottom-most layer in Figure 2, with the activation $a^{<T_{x}>}$ passed through the following: $Dense(10)\rightarrow Dense(2)\rightarrow y$, also including dropouts and relu activation, and ending with softmax.
4. AMN and HATN: AMN [12] and HATN [13] are attention-based methods which use gradient reversal to perform domain adversarial training on the unlabeled data from source and target domains. HATN is an extension of AMN, adding the hierarchical component and jointly training pivot and non-pivot networks.

The inputs to all the models are word vectors (https://code.google.com/archive/p/word2vec/) [29].
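For concreteness, the CNN baseline from item 2 can be written in a few lines of Keras. This is a sketch under the stated hyper-parameters (dropout $0.4$, Adam, $T_{x}=200$ for reviews); the vocabulary size is a placeholder, and in practice the embedding layer would be initialized from the $300$-dimensional pretrained word vectors:

```python
import tensorflow as tf
from tensorflow.keras import layers

T_X, EMB_DIM, VOCAB_SIZE = 200, 300, 20000  # VOCAB_SIZE is a placeholder

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T_X,)),
    layers.Embedding(VOCAB_SIZE, EMB_DIM),  # load pretrained 300-d vectors here
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(5),
    layers.Dropout(0.4),
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(5),
    layers.Dropout(0.4),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```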
The evaluation on Amazon reviews shows how well the single-task (ST) model performs when compared to the existing top-performing domain adaptation models on a benchmark dataset. Table III shows accuracy scores on the Amazon cross-domain sentiment analysis dataset. HATN uses unlabeled target data, gradient reversal, explicit pivot extraction, and joint training, making it a computationally expensive method. As shown in the experimental evaluation, we use the same Amazon dataset and GoogleNews word vectors for our experiments. ST, being unsupervised with no need for unlabeled target data, performed competitively with an overall accuracy of 85.02%, thus establishing a strong, fully unsupervised building block for us to build upon.

### V-A Crisis Tweets vs Amazon Reviews

Tables III and IV show that deep models struggle with small datasets such as TREC-IS tweets. While the ST model outperformed Logistic Regression by $\sim 8\%$ on the Amazon reviews dataset, the difference was less than $1\%$, with no statistical significance, on the TREC-Priority dataset. Note that we conducted experiments with various parameter combinations on the deep models when using tweets. For example, $T_{x}=200$ for Amazon reviews and $T_{x}=30$ for tweets, due to the difference in their average word length. The Books domain of Amazon reviews has an average of $182$ tokens per review, with a vocabulary size of $105920$. On the other hand, the event with the highest number of tweets in the TREC dataset (Paris Attacks) has an average of only 18.62 tokens per tweet, with a vocabulary size of $4152$. This difference makes it intuitively challenging to train deep models with several parameters, as they may memorize the entire dataset, resulting in poor generalization. Multi-task learning and domain adversarial training try to alleviate this problem by training the shared BiLSTM layer with much more data from different tasks and unlabeled data.

### V-B MT-DAAN Performance Evaluation

The primary purpose of the MT-DAAN model is to show that sharing the bottom layer of the model (i.e., the shared representation) across different tasks, along with domain adversarial training, can help improve the generalizability of some of the tasks that are otherwise trained alone in the single-task model. The experiments for MT-DAAN are set up in the same unsupervised way as for the single-task model. No data from the test crisis is used for training. For example, if we are testing our model for the event ‘Typhoon Yolanda’, no data from this crisis is used for training. Note that the domain classifier component uses unlabeled data only from the rest of the crises, making it a fully unsupervised domain adaptation approach. Performance scores of the four tasks (Priority, Factoid, Sentiment, and Irrelevant) are shown in Table V. The results show clear performance improvements for the Priority, Factoid, and Irrelevant tasks. However, the Sentiment task did not show significant improvement. We speculate that this is because the other tasks do not generalize the bottom layer enough to boost the sentiment classification performance. These results show the usefulness of multi-task learning as well as domain adversarial training, where different tasks in multiple domains help each other when the data is sparse and labels are limited.

Figure 3: Examples of interpretable results using attention; the darker the shade, the higher the attention. Recall that no data from the crisis event used for testing is used for training the model. Even then, relevant keywords such as ‘police urging’, ‘death toll rises’, ‘worried’, and ‘thoughts with people’ are correctly picked up by the attention layers of their respective tasks.

### V-C Word Vectors

We use fastText [30] as our word embeddings for tweets because of its sub-word usage and its ability to create vectors for arbitrary and out-of-vocabulary words. Although many alternatives exist, picking the one that works well for a specific dataset is not trivial. We conducted experiments using four choices of word embeddings: fastText [30], GoogleNews [29], Glove [31], and CrisisNLP [32]. Averaging over 10 crises, we obtained the following accuracy scores (in %), respectively, for the above word embeddings: {$80.20$, $81.82$, $81.88$, $80.73$}. Unlike fastText, we fine-tune these pre-trained vectors to create vectors for out-of-vocabulary words. Vectors for words that are already in the vocabulary are locked while tuning, for consistency in evaluation. The tweet-based embeddings such as Glove or CrisisNLP did not significantly outperform the other models. Glove vectors are 200-dimensional while the rest are 300-dimensional, which makes the experiment favour Glove word vectors. This experiment shows that the problem of finding a strictly superior word vector model for tweets remains challenging.
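The out-of-vocabulary behaviour that motivated our choice of fastText can be exercised directly with the official fasttext Python bindings. A minimal sketch, assuming a pretrained English model such as cc.en.300.bin has already been downloaded:

```python
import fasttext

ft = fasttext.load_model("cc.en.300.bin")  # pretrained 300-d English vectors

# Sub-word n-grams let fastText compose a vector even for a token it has
# never seen, e.g. a crisis-specific hashtag fragment.
vec = ft.get_word_vector("stormsurge2015")
print(vec.shape)  # (300,)
```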
Figure 4: Examples of interpretable results using attention for relevancy prediction of COVID-19 tweets. With $77\%$ accuracy, although the highly attended words in the ‘Relevant’ tweets provide some intuitive sense of interpretability, the highlighted words in the ‘Irrelevant’ tweets are somewhat ambiguous, because it is unclear whether those words are chosen due to their specific or generic nature. This shows both the benefits and challenges of unsupervised and interpretable domain adaptation.

### V-D Interpretability: Attention Visualization

The attention weights used to create the context vector by the dot product operation with the word activations represent the interpretable layer in our architecture. These weights represent the importance of each word in the classification process. Some examples are shown in Figures 3 and 4. The stronger the color intensity, the stronger the word attention. In the first example, ‘boston police urging’ is the reason why the tweet is classified as $+$ve priority. Similarly, ‘death toll rises’ in the Factoid example, ‘worried, prayers’ in the Sentiment example, and ‘thoughts with people’ in the Irrelevant example are clear intuitive indicators of $+$ve predictions. These examples show the importance of having interpretability as a key criterion in crisis domain adaptation tasks for social media. To the best of our knowledge, in social media mining for crisis analytics, there does not exist a ground-truth dataset that highlights the words that explain the labels of tweets. Using our model as a guide, we hope to build a robust evaluation dataset as our immediate next step, so that the models can be quantitatively evaluated using rigorous trust-evaluation methods such as LIME [33]. It is also crucial to note that binary classification tasks such as sentiment analysis of Amazon reviews have a clear class divide that produces intuitive keywords such as ‘good’, ‘excellent’, or ‘great’ for $+$ve reviews and ‘bad’, ‘poor’, or ‘horrible’ for $-$ve reviews. However, for short texts such as the tweets shown in Figure 4, ‘relevancy’ can depend on the context, and it is unclear which keywords truly represent the examples in the ‘irrelevant’ class.

## VI COVID-19 Use-Case

We show a practical implication of our work by applying it to the COVID-19 tweets described in Section IV-C. Our goal is to interpretably predict whether a COVID-19 tweet is relevant or not; a binary classification task. The models are trained using only the TREC dataset and evaluated on the COVID-19 tweets (a balanced subset of size $637$ for the $+$ve and $-$ve labels). We found that a combination of the ‘Priority’ and ‘Irrelevant’ labels from TREC performs better at predicting COVID-19’s ‘Relevant’ label (this can be trivially verified by constructing two binary classifiers). We augment all three methods (ST, ST-DAAN, and MT-DAAN) with an additional condition before label prediction: $R_{c}=P_{t}\cap\overline{I_{t}}$, which means that a COVID-19 tweet is ‘Relevant’ only if it is predicted both ‘Priority’ = $1$ and ‘Irrelevant’ = $0$. The scores are reported in Table VI and the attention results are shown in Figure 4, demonstrating the effectiveness of our proposed method.
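The combination rule $R_{c}=P_{t}\cap\overline{I_{t}}$ amounts to a one-line post-processing step on top of the two task predictions; a minimal sketch (the function name is ours):

```python
def covid_relevant(priority_pred: int, irrelevant_pred: int) -> int:
    # R_c = P_t AND (NOT I_t): relevant only when predicted
    # 'Priority' = 1 and 'Irrelevant' = 0.
    return int(priority_pred == 1 and irrelevant_pred == 0)
```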
## VII Conclusion

We presented a novel approach of unsupervised domain adaptation with multi-task learning to classify relevant information from Twitter streams for crisis management, while addressing the problems of data sparsity and limited labels. We showed that a multi-task learning model that shares the lower layers of the neural network, with dedicated attention layers for each task along with a domain classifier branch, can help improve the generalizability and performance of deep models in the setting of limited data. Furthermore, we showed that using an attention-based architecture can help in interpreting the classifier’s predictions by highlighting the important words that justify the predictions. We also presented an in-depth empirical analysis of the state-of-the-art models on both the benchmark dataset of Amazon reviews and the TREC dataset of crisis events. The application of our generic approach for interpretable and unsupervised domain adaptation within a multi-task learning framework can benefit social media mining systems in diverse domains beyond crisis management.

Reproducibility: Source code and instructions for deployment are available at https://github.com/jitinkrishnan/Crisis-Tweet-Multi-Task-DA.

## References

* [1] C. Castillo, _Big crisis data: social media in disasters and time-critical situations_. Cambridge University Press, 2016.
* [2] M. Imran, P. Mitra, and C. Castillo, “Twitter as a lifeline: Human-annotated twitter corpora for nlp of crisis-related messages,” _arXiv preprint arXiv:1605.05894_ , 2016.
* [3] H. Li, D. Caragea, C. Caragea, and N. Herndon, “Disaster response aided by tweet classification with a domain adaptation approach,” _Journal of Contingencies and Crisis Management_ , vol. 26, no. 1, pp. 16–27, 2018.
* [4] S. Vieweg, C. Castillo, and M. Imran, “Integrating social media communications into the rapid assessment of sudden onset disasters,” in _International Conference on Social Informatics_. Springer, 2014, pp. 444–461.
* [5] D. T. Nguyen, K. A. A. Mannai, S. Joty, H. Sajjad, M. Imran, and P. Mitra, “Rapid classification of crisis-related data on social networks using convolutional neural networks,” _arXiv preprint arXiv:1608.03902_ , 2016.
* [6] F. Alam, S. Joty, and M. Imran, “Domain adaptation with adversarial training and graph embeddings,” _arXiv preprint arXiv:1805.05151_ , 2018.
* [7] R. Mazloom, H. Li, D. Caragea, C. Caragea, and M. Imran, “A hybrid domain adaptation approach for identifying crisis-relevant tweets,” _International Journal of Information Systems for Crisis Response and Management (IJISCRAM)_ , vol. 11, no. 2, pp. 1–19, 2019.
* [8] J. Blitzer, R. McDonald, and F. Pereira, “Domain adaptation with structural correspondence learning,” in _Proceedings of the 2006 conference on empirical methods in natural language processing_ , 2006, pp. 120–128.
* [9] S. J. Pan, X. Ni, J.-T. Sun, Q. Yang, and Z. Chen, “Cross-domain sentiment classification via spectral feature alignment,” in _Proceedings of the 19th international conference on World wide web_. ACM, 2010, pp. 751–760.
* [10] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” _Journal of machine learning research_ , vol. 11, no. Dec, pp. 3371–3408, 2010.
* [11] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V.
Lempitsky, “Domain-adversarial training of neural networks,” _The Journal of Machine Learning Research_ , vol. 17, no. 1, pp. 2096–2030, 2016.
* [12] Z. Li, Y. Zhang, Y. Wei, Y. Wu, and Q. Yang, “End-to-end adversarial memory network for cross-domain sentiment classification,” in _IJCAI_ , 2017, pp. 2237–2243.
* [13] Z. Li, Y. Wei, Y. Zhang, and Q. Yang, “Hierarchical attention transfer network for cross-domain sentiment classification,” in _Thirty-Second AAAI Conference on Artificial Intelligence_ , 2018.
* [14] R. Caruana, “Multitask learning,” _Machine learning_ , vol. 28, no. 1, pp. 41–75, 1997.
* [15] S. Ruder, “An overview of multi-task learning in deep neural networks,” _arXiv preprint arXiv:1706.05098_ , 2017.
* [16] M. Long and J. Wang, “Learning multiple tasks with deep relationship networks,” _arXiv preprint arXiv:1506.02117_ , vol. 2, p. 1, 2015.
* [17] S. Ruder, J. Bingel, I. Augenstein, and A. Søgaard, “Sluice networks: Learning what to share between loosely related tasks,” _stat_ , vol. 1050, p. 23, 2017.
* [18] X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y.-Y. Wang, “Representation learning using multi-task deep neural networks for semantic classification and information retrieval,” 2015.
* [19] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” _arXiv preprint arXiv:1409.0473_ , 2014.
* [20] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in _Advances in neural information processing systems_ , 2014, pp. 3104–3112.
* [21] M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” _IEEE Transactions on Signal Processing_ , vol. 45, no. 11, pp. 2673–2681, 1997.
* [22] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” _Neural computation_ , vol. 9, no. 8, pp. 1735–1780, 1997.
* [23] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Gated feedback recurrent neural networks,” in _International conference on machine learning_ , 2015, pp. 2067–2075.
* [24] J. Krishnan, H. Purohit, and H. Rangwala, “Diversity-based generalization for neural unsupervised text classification under domain shift,” _ECML-PKDD_ , 2020.
* [25] J. Blitzer, M. Dredze, and F. Pereira, “Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification,” in _Proceedings of the 45th annual meeting of the association of computational linguistics_ , 2007, pp. 440–447.
* [26] R. Pandey and H. Purohit, “Citizenhelper-adaptive: expert-augmented streaming analytics system for emergency services and humanitarian organizations,” in _2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)_. IEEE, 2018, pp. 630–633.
* [27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine Learning in Python,” _Journal of Machine Learning Research_ , vol. 12, pp. 2825–2830, 2011.
* [28] Y. Kim, “Convolutional neural networks for sentence classification,” _arXiv preprint arXiv:1408.5882_ , 2014.
* [29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in _Advances in neural information processing systems_ , 2013, pp. 3111–3119.
* [30] T. Mikolov, E. Grave, P. Bojanowski, C. Puhrsch, and A.
Joulin, “Advances in pre-training distributed word representations,” in _Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)_ , 2018.
* [31] J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” in _Empirical Methods in Natural Language Processing (EMNLP)_ , 2014, pp. 1532–1543. [Online]. Available: http://www.aclweb.org/anthology/D14-1162
* [32] M. Imran, P. Mitra, and C. Castillo, “Twitter as a lifeline: Human-annotated twitter corpora for nlp of crisis-related messages,” in _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)_. Paris, France: European Language Resources Association (ELRA), May 2016.
* [33] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’ Explaining the predictions of any classifier,” in _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_ , 2016, pp. 1135–1144.
# Self-Sovereign Identity for IoT environments: A Perspective

Geovane Fedrecheski1, Jan M. Rabaey4, Laisa C. P. Costa1, Pablo C. Calcina Ccori1, William T. Pereira1, Marcelo K. Zuffo1. This research was partially funded by CAPES. 1Interdisciplinary Center on Interactive Technologies, Polytechnic School, University of Sao Paulo, Brazil 4Berkeley Wireless Research Center, Electrical Engineering and Computer Science Department, University of California, Berkeley, USA

###### Abstract

This paper analyses the concept of Self-Sovereign Identity (SSI), an emerging approach for establishing digital identity, in the context of the Internet of Things (IoT). We contrast existing approaches for identity on the Internet, such as cloud-based accounts and digital certificates, with SSI standards such as Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs). To the best of our knowledge, this is the first thorough comparison of these approaches. The benefits and challenges of using DIDs and VCs to identify and authenticate IoT devices and their respective users are discussed. In the end, we establish that SSI, with its owner-centric, privacy-aware and decentralized approach, provides a viable and attractive option for secure identification of IoT devices and users.

## I Introduction

The Internet was developed as a research project to interconnect computers [1]. Protocols like TCP/IP, developed as open standards, allowed computers to connect on a global scale. However, even after the world-changing impacts the Internet has had on society over the last decades, it has no pervasive, privacy-preserving, and easy-to-use mechanism to manage digital identities. Where human activity is involved, a common abstraction is to use accounts, i.e. digital records, often containing personally identifiable information (PII), that are protected by a password and saved on a webserver. Although this method has been working for several decades, it has many security drawbacks, such as the use of weak passwords [2] and the potential for privacy violation. Furthermore, the manual approach of password-protected accounts makes it unsuitable for machine-to-machine interactions, a common scenario in the IoT. More automated solutions can be achieved by using Public Key Certificates (PKCs) that bind names to public keys [3]. Widespread use of PKCs, however, is limited to organizations, due to the complexity of current methods. For instance, while websites usually prove their identities to web browsers using certificates, users do not use certificates in the same way, i.e. to prove their identity to the website. Moreover, existing standards were not designed for privacy, as evidenced by the use of real names in known certificate formats such as PGP [4] and X.509 [5]. To aggravate the situation, the assignment of unique names often requires centralized architectures, which is inadequate for distributed IoT applications. A recent development towards online identification of users, organizations, and devices has been referred to as “Self-Sovereign Identity” (SSI). The basic premise of SSI is that subjects should own and control their own identity, instead of having it stored and managed by a third party. This approach brings several benefits, including enhanced privacy, control, and decentralization. Two new standards are being proposed to realize SSI, namely, Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) [6, 7].
While DIDs focus on cryptographic identification, VCs provide a means for privacy-aware and authenticated attribute disclosure. In this paper we analyze existing approaches to identity on the Internet, such as X.509, PGP [4], and SSI. We present a detailed comparison focusing on the data models used to represent identity across different standards. Finally, we discuss the benefits of using SSI in the Internet of Things, and identify challenges that must be overcome.

## II Self-Sovereign Identity

Self-Sovereign Identity is an approach in which subjects are in full control of their own digital identities [8]. SSI is analogous to offline identifiers, which are carried by the owner (within a physical wallet), but contrasts with current digital identity solutions, which are either based on accounts or digital certificates, and have privacy and centralization issues. While initially proposed by members [8] of online communities, a formal definition of SSI was released recently [9]. Considering an identity to be composed of an identifier associated with a set of name-value attributes, the full self-sovereign identity of an individual is the collection of all identities (i.e. identifiers and attributes) that span a range of decentralized domains, such that the individual is in full control of these identities [9]. As digital privacy concerns have been growing in recent years, interest in SSI has intensified. This led to the definition of a set of technical specifications to implement SSI, which we describe below.

### II-A Decentralized Identifiers

Digital identifiers so far have been either centralized or non-resolvable. For example, Uniform Resource Locators (URLs), which can be used to resolve HTML documents, usually depend on domain names assigned by ICANN (Internet Corporation for Assigned Names and Numbers; https://www.icann.org/), a centralized authority. On the other hand, unique, user-generated identifiers such as UUIDs cannot be used to resolve associated metadata. To address this, a new specification for Decentralized Identifiers (DIDs) is being developed with the support of the W3C [6]. A DID has the following syntax: did:btcr:abcdefgh12345678. The did prefix is mandatory, and colons are used to separate a method definition and a method-specific id. A method is a specific set of rules for working with DIDs (the example above uses the Bitcoin method), and the format of the id depends on that method. An open directory of different DID methods is available for public access and open for new submissions (https://w3c-ccg.github.io/did-method-registry/). Each DID is associated with a DID Document (DDo) that contains the DID itself along with public keys, service endpoints, and other metadata. The public key is used to authenticate and encrypt messages, while the endpoint provides a way to message the entity that controls that DID. To control a specific DID, a subject just has to own a private key associated with the public keys in the DDo. A common storage mechanism for DDos is a Blockchain, from which they can be resolved using the referred DID. On the other hand, in some cases individuals may not want to publish their DIDs, e.g. to avoid identity correlation. In this case, the special peer DID method can be used. Thus, DIDs are unique identifiers that can be resolved to DID Documents, and they allow the establishment of an end-to-end secure channel. What DIDs do not provide, however, is a means for entities to prove claims (attributes) about themselves.
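To make the structure concrete, an illustrative DID Document is sketched below as a Python dict. Field names follow the W3C working draft [6]; the DID value, key material, and endpoint are placeholders rather than real identifiers:

```python
# Illustrative sketch only; all values below are placeholders.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:abcdefgh12345678",
    "publicKey": [{
        "id": "did:example:abcdefgh12345678#keys-1",
        "type": "Ed25519VerificationKey2018",
        "publicKeyBase58": "<base58-encoded public key>",
    }],
    "service": [{
        "id": "did:example:abcdefgh12345678#agent",
        "type": "MessagingService",
        "serviceEndpoint": "https://device.example.com/inbox",
    }],
}
```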
### II-B Verifiable Credentials

Verifiable Credentials (VCs) is a W3C recommendation for portable and provable claims about a subject. For instance, a person may claim to have the name Alice, and a device may claim to be of type Camera. The relationship between DIDs and VCs is shown in Figure 1. All VCs refer to the DID of the subject to which they have been assigned (e.g. an IoT device). VCs also contain the DID of their issuer along with a cryptographic proof. This allows a subject to present a VC to a verifier, which can then resolve the DDo of the issuer (and therefore its public key) from a public ledger, e.g. a Blockchain, and check the authenticity of the VC. Figure 2 shows a use case where a user issues a VC to a device. A major incentive for SSI is privacy; therefore, VCs are expected to be private and stored in a personal wallet, to be shared only when necessary. To further improve privacy, the VC specification supports zero-knowledge proofs, i.e. a cryptographic technique “where an entity can prove to another entity that they know a certain value without disclosing the actual value” [7].

Figure 1: A DID is the link between a DDo and a set of VCs, much like a primary key can link different tables in a database. This allows a subject associated with a DID to prove its identity.

Figure 2: An owner-centric scenario using SSI. Each subject generates its own DDo, while the VC is issued by the device owner.

### II-C Decentralization, privacy, and layered authentication

Public key cryptography can be used to derive a shared secret over an insecure channel [10]. However, a known problem is how to trust the origin of the public key. To solve this, a signed certificate that binds a name to a public key was proposed [3]. Two common standards for digital certificates are X.509 [5, 11] and Pretty Good Privacy (PGP) [4]. Although they differ in details, both follow the original definition in which names are tied to public keys and signed by a third party [3]. A crucial challenge faced by certificate-based solutions was ensuring the uniqueness of names. The most common solution was to rely on centralized architectures. For example, the name in the subject field of an X.509 certificate must be enforced by a global authority, and the PGP id uses the name of a person plus her email address, which ultimately depends on DNS, which is centralized as well. More recently, the emergence of Blockchain technology allows decentralized consensus for choosing unique names. One problem, however, is that solutions based on certificates put sensitive information in the identifier, which compromises the privacy of certificate holders, and therefore might not be suitable for storage in public, immutable ledgers. An approach to solve this is to limit the exposure of PII on the ledger by only writing anonymous information to it, e.g., public keys. In particular, this approach enables public key storage and lookup, which can be used to create a confidential and non-repudiable channel. Higher-level abstractions can then be used to implement authentication, since the attributes necessary to authenticate users are usually application-specific. This is the solution that results from combining the DID and VC specifications. Containing only pseudonymous information, such as public keys and service endpoints, DID Documents can be used to establish a cryptographically secure channel between two entities.
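For illustration, the credential a user might issue to her device (the scenario of Figure 2) could look like the following sketch; field names follow the VC data model [7], and all identifiers, the date, and the signature value are placeholders:

```python
# Illustrative sketch only; DIDs, date, and signature value are placeholders.
verifiable_credential = {
    "@context": "https://www.w3.org/2018/credentials/v1",
    "type": ["VerifiableCredential"],
    "issuer": "did:example:owner-alice",       # DID of the issuer (the owner)
    "issuanceDate": "2020-03-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:device-1234",       # DID of the subject (the device)
        "type": "Camera",
        "owner": "Alice",
    },
    "proof": {
        "type": "Ed25519Signature2018",
        "verificationMethod": "did:example:owner-alice#keys-1",
        "jws": "<detached JWS over the credential>",
    },
}
```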
After the confidential channel is created, the entities can exchange VCs, according to the levels of trust necessary for each application. In other words, while DIDs are lower-level and pseudonymous, VCs are application-specific and can be used to authenticate attributes such as name or device type. Finally, it is worth noting that as each DID is usually a high-entropy random string, name collisions actually stop being a concern.

## III Data models for digital identity

This section provides an analysis and comparison of existing data models for digital identity. We start by discussing the limitations of password-based accounts, and then proceed to compare data models based on public key cryptography.

### III-A Accounts

The most basic method to identify subjects in computer systems is the account: a digital record, usually composed of at least a user name and a password, that identifies a user. Accounts are commonly stored in a server controlled by the service provider. For example, popular IoT vendors require that a device owner have a cloud-based account, so that she can use this virtual identity to configure her devices. While accounts have been used for decades in a variety of systems, they are among the most primitive solutions for digital identities. Among the problems related to account-based authentication are privacy and the use of passwords. With respect to privacy, issues arise because the user is forced to store plaintext PII in a third-party system. Regarding passwords, the literature indicates common problems such as password reuse and the difficulty of enforcing strong passwords, and points out that the most widespread solution is the use of “recommendations” [2], which depend on human factors and are difficult to enforce.

TABLE I: Comparison of standardized data models for digital identity. PGP Key belongs to PGP; PKC and AC to X.509; DDo and VC to Self-Sovereign Identity.

| | PGP Key | Public Key Certificate (PKC) | Attribute Certificate (AC) | DID Document (DDo) | Verifiable Credential (VC)
---|---|---|---|---|---
Goal | Prove control of public keys and identifier (plus optional attributes); publish public keys | Prove control of public keys and identifier (plus optional attributes); publish public keys | Prove possession of attributes | Prove control of identifier; publish public keys and service endpoints | Prove possession of attributes
Identifier | Name and Email | Qualified Name | Same as PKC | Method-specific DID | Same as DDo
Uniqueness of identifier | Global authority (DNS) | Global authority (CA) | Same as PKC | Ledger consensus / Random number gen. | Same as DDo
Public Key(s) | 1 primary, N subkeys | 1 | n/a (points to PKC) | N | n/a (points to DDo)
Attribute(s) | Attributes field | Extensions field | Attributes field | - | subjectCredential field
Endorsement | Signature of many peers | Signature of a CA | Signature of a CA | Self-signed (optional); indirect through VC | Signature of an Issuer
Service endpoints | - | - | n/a | Yes | n/a
Semantic schemas | - | - | - | Yes | Yes

### III-B Models based on public key cryptography

Pretty Good Privacy (PGP) [4] was created to allow individuals to prove a binding between a public key and an identifier, the latter being composed of a real name and an email address. This binding, along with optional attributes and signatures, is stored in a document called a PGP Key.
PGP was conceived as a distributed solution: individuals in the PGP scheme can sign the keys of other individuals, so as to give an endorsement that they are who they say they are, i.e. that they are not impersonating someone or using a fake id. This scheme of peer signatures is often referred to as the Web of Trust. X.509, created by the X.500 working group, defines a format for Public Key Certificates (PKCs) that binds public keys to qualified names [5]. PKCs are widely used on the Internet to authenticate domain names and protect communications. Although technically nothing prevents peer-to-peer signing of X.509 certificates, the vast majority of its usage is under centralized architectures, in which a trusted authority signs the certificate to make it trustworthy. Finally, in certain cases it is useful to have a separate document that, instead of holding a public key, contains only a name associated with signed attributes. To meet this demand, X.509 proposed a new standard called the Attribute Certificate (AC), which contains no public key, but links to a PKC through its subject field [11]. Finally, as previously mentioned, Self-Sovereign Identity is a novel approach that uses Decentralized Identifiers [6] and Verifiable Credentials [7] to prove possession of identifiers and attributes, respectively.

### III-C High-level comparison

The following paragraphs compare the models used by the PGP, X.509, and SSI standards, according to Table I.

#### Goal Both PGP Keys and PKCs are used to publish and prove control of public keys that are tied to identifiers. Also, in these approaches, attributes can be provided either in the same document as the public keys (PGP Key and PKC) or, in the case of X.509, in a separate document (AC). On the other hand, documents in the SSI paradigm have decoupled goals: DDos are used to prove control of an identifier and to provide a means for establishing secure communication; and VCs are used to prove possession of attributes.

#### Identifier (and uniqueness) While PGP and X.509 use names and other identifiers that depend on centralized entities, in SSI the identifiers are completely decentralized and can be auto-generated, for example by using strong random number generators. Not only does this enable easy global uniqueness, but the pseudonymous characteristic of DIDs also enhances privacy, when compared to previous approaches based on real names or email addresses. Pseudonymous identifiers are also better suited to the IoT, since devices do not have names or email addresses by default.

#### Public Key(s) PKCs are limited to only one public key, while PGP Keys and DDos can have many. PGP still differs from DDos, as the former uses a primary key that is tied to an identifier and allows more subkeys to be included, while the latter supports multiple public keys without assumptions other than the key type, which usually encodes its purpose, e.g. sign or encrypt.

#### Attribute(s) Both PGP Keys and X.509 certificates support arbitrary attributes, either via PKC extensions or dedicated ACs. In self-sovereign identity, a DDo does not support attributes, in order to stay anonymous. Instead, all PII is handled only by VCs, which are private by default.

#### Endorsement(s) PGP Keys can be signed by one or more peers, but X.509 certificates and VCs can only be signed by a single issuer. DDos are not signed by external entities, and may be self-signed. When a DDo is written to a ledger, however, the transaction will be signed, which can be used to attest the validity of the DDo.
Another way of proving endorsement of a DID is to check the signature of a VC associated with that DID. If the VC is signed by a trusted issuer, the DID can be trusted. Furthermore, with respect to who can sign the endorsements, technically it can be anyone, but there are philosophical differences. X.509, for example, was devised to work within a centralized architecture, where only trusted authorities can sign certificates. On the other end of the spectrum, PGP expects peer-to-peer signatures, which ultimately creates a Web of Trust. Finally, VCs do not make strong assumptions about the network structure, although decentralized approaches, especially the ones based on Blockchain, may be favorable.

#### Service endpoints A novelty introduced by DDos is the association of a built-in mechanism to reach the owner of a public key. This facilitates the establishment of secure interactions between peers, from web to IoT environments.

#### Semantic schemas Only SSI-based data models allow extensibility through semantic annotations over JSON documents. The main reason for this is that these technologies only became popular after X.509 and PGP were developed.

### III-D Public key distribution

| | Raw Pub Key | PKC | DDo
---|---|---|---
Associates key material to metadata | | X | X
Privacy: no PII disclosed | X | | X
Key rotation does not require re-signing | n/a | | X
Serialization formats | Binary, Base64 | DER, PEM | JSON-LD, JWT
Semantic schemas | | | X
Decentralized: user generates the artifact | X | | X
Decentralized: user carries the artifact | X | X | X
Service endpoint | | | X

TABLE II: Comparison of data models for key distribution.

| | PKC | AC | VC
---|---|---|---
Signed attributes about a subject | X | X | X
Key rotation does not require re-signing | | X | X
Identifier differs from key material | X | X | X
Attributes decoupled from key material | | X | X
Selective disclosure | | | X
Zero-knowledge proofs | | | X
Delegation | | X |
Revocation | X | X | X
Serialization formats | DER, PEM | DER, PEM | JSON-LD, JWT
Semantic schemas | | | X
Decentralized: user carries the artifact | X | X | X
Decentralized: Verifier decoupled from Issuer | | | X

TABLE III: Comparison of data models for attributes.

An important aspect in the design of systems based on asymmetric encryption is the data model used to support key distribution. In the following, we compare three approaches, as shown in Table II: Raw Public Key, Public Key Certificates, and DID Document.

#### Raw public key This is the simplest approach, and consists of sharing a public key as a raw array of bytes, often encoded in some ASCII-compatible format, such as Base64. Although this approach is decentralized and discloses no personal information, it does not allow associated metadata.

#### Public Key Certificate As previously discussed, PKCs bind a name and other attributes to a public key, which allows subjects to prove their identity. Created before privacy was a major concern, X.509 PKCs always carry PII in the main identifier, and may carry PII in other attributes. Finally, other drawbacks of PKCs include the imposition of specialized serialization formats (DER and PEM), the tight coupling of keys and data (which makes key rotation more difficult), and a centralized architecture, i.e. the artifact is not self-generated.

#### DID Document DDos associate public keys with pseudonymous metadata, while also allowing key rotation without re-signing any associated metadata.
The latter is possible because all signed metadata actually lives only in the associated VCs. An important difference to highlight is that DDos are not signed by third parties, thus they cannot authenticate the origin of a public key. If this is necessary, DDos can be composed with VCs to increase security. DDos support JSON-based serialization formats, which are available in most programming languages and platforms, and can benefit from publicly available semantic schemas. As each user auto-generates their own DIDs and DDos, the management of the identifier is decentralized. Finally, service endpoints in DDos provide a novel way for peers to establish secure channels.

### III-E Attribute distribution

Four out of the five previously described formats can be used to prove control over attributes: PGP Keys, Public Key Certificates, Attribute Certificates, and Verifiable Credentials. Since PGP Keys are less widely used, we only compare the latter three, as shown in Table III.

#### Public Key Certificates The encoding of attributes in PKCs leverages the X.509 PKC extension field. Although the reuse of an existing format may seem advantageous in terms of compatibility, the whole certificate must be re-signed when a key is rotated or when selective disclosure of attributes is necessary. An important drawback not mentioned so far is that it is impossible to disclose only a subset of the attributes in a PKC without contacting the issuer for a new signature.

#### Attribute Certificates Differing from PKCs, ACs contain a name and a list of attributes, but no public key, which simplifies key rotation. Finally, while ACs support delegation, in general they have the same drawbacks as PKCs.

#### Verifiable Credentials Similar to an AC, a VC does not contain public keys, as it focuses on binding identifiers to attributes. Among the novelties in the VC standard is the support for selective disclosure without contacting the issuer, which is realized using zero-knowledge cryptography. VCs also leverage JSON, a serialization format that is both human-readable and lightweight to parse. VCs can be further specialized into two formats: JSON Linked Data (JSON-LD; https://json-ld.org/), a format to serialize linked data; and JSON Web Token (JWT; https://jwt.io/), a widely used format to express security claims.

## IV Benefits and Challenges of SSI for IoT

As the IoT continues to evolve, new paradigms that allow spontaneous machine-to-machine interactions have started to appear [12, 13]. Necessarily decentralized, the future IoT will require users to be the root of trust of their devices, leading to an owner-centric IoT. As privacy concerns rise in importance, solutions that minimize personal data sharing become paramount. Full realization of these and other features will require novel, open, and secure standards for identity in the IoT. The next paragraphs discuss aspects of self-sovereign identity that are likely to improve decentralized IoT security, while also pointing out the factors that will require innovation to bring SSI to the IoT, such as support for constrained devices.

### IV-A Benefits

The benefits of SSI for IoT, such as privacy and decentralization, are discussed below.

#### Owner-Centric The user can be the root of trust of her devices. Once a user is the owner and controller of her identity, it is straightforward to create a network of devices that belong to her, for example by provisioning an “owner=Alice” credential to each device.
One interesting consequence of this is that no third party is needed to enforce security and administration of devices, as the user herself will be able to do it. Note that in this approach devices can have their own identity as well, and may only use the owner attribute to facilitate the creation of trust relationships, i.e. devices that share the same owner can automatically trust each other.

#### Privacy-preserving Personal information is protected. By having the identities of owners and things stored locally, sensitive data that would otherwise be stored in a service provider will now live closer to the owner (usually in a digital wallet). While the user can choose to back up her data for various reasons, she will be able to do so in an encrypted way, as only she will possess the decryption keys. Users and devices will also get to choose with whom they share their credentials, and will even be able to do so employing selective disclosure and zero-knowledge proof techniques, further improving privacy.

#### Decentralized No single point of failure. While identity providers may have been a convenient way to authenticate users and devices so far, it is not clear what happens when a provider stops providing, e.g. when it goes out of business. In the self-sovereign approach, the user decides when her identity starts or stops being valid, and she will have similar control over her devices. Finally, data breaches, information sharing without user consent, and other issues are minimized when identities are not stored in a high-value data silo that acts as a honeypot for hackers.

#### End-to-end security Communications between two endpoints are secure. By exchanging DID Documents and applying asymmetric cryptography, IoT devices can mutually authenticate, derive short-lived symmetric keys, send encrypted messages, and enforce non-repudiation. This approach can also be implemented in a transport-agnostic way, enabling secure communication even among different protocols.

#### Layered authentication Separates cryptographic and application-specific authentication. In the former, two devices prove to each other that they are in possession of specific public keys, while in the latter the devices prove different attributes about themselves. This approach allows endpoints to always be cryptographically protected, and leaves higher-level trust requirements to be handled at the application layer.
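The key-agreement step behind the end-to-end security property above can be sketched in a few lines with the Python cryptography library. X25519 is our assumed curve choice (the text does not prescribe one), and in practice each peer would read the other's public key from the exchanged DDo rather than generating both keys locally:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Stand-ins for two devices; real peers exchange public keys via their DDos.
device_a = X25519PrivateKey.generate()
device_b = X25519PrivateKey.generate()

# Each side derives the same shared secret from its private key
# and the peer's public key.
secret = device_a.exchange(device_b.public_key())
assert secret == device_b.exchange(device_a.public_key())

# Derive a short-lived symmetric session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"ssi-session").derive(secret)
```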
#### Standardized and open approach Fosters interoperability and robustness. Since both DIDs and VCs are being developed as open W3C specifications, companies and researchers are free to build solutions that are interoperable and rely on well-tested data models.

#### JSON-based encoding Using JSON enables more applications to handle data extracted from DID Documents and credentials, even if they were not originally designed to work with SSI.

### IV-B Challenges

We now discuss some challenges in applying SSI to IoT environments.

#### Constrained devices Fully adopting SSI means that devices need to be able to run asymmetric cryptography and cope with the communication overhead of transmitting metadata, such as DID Documents and Verifiable Credentials.

#### Asymmetric Cryptography SSI demands the execution of encryption algorithms based on asymmetric keys, which can be challenging on devices with limited processing and energy resources. While authors point out that constrained processors such as the 32-bit Cortex M0 are well equipped to execute Elliptic Curve Cryptography (ECC) [14], the number of operations must still be controlled to avoid draining the battery. A common tactic is to use long-lived session keys that are less frequently updated, e.g. once a day.

#### Communication overhead Depending on the communication protocol, the size of DDos and VCs may impose a barrier. For example, low-energy protocols such as LoRa and BLE have maximum packet sizes of 222 and 244 bytes, respectively, while DDos and VCs easily reach 500 bytes or more. Therefore, strategies such as compression, fragmentation, and infrequent document transmission will be necessary. In extreme cases, SSI may not be possible at all, which will require proxy approaches [15].

#### DID Resolution Highly constrained devices may not be able to connect to the Internet to download DID Documents at all. A possible solution is to create a local cache of known DIDs, either managed by the device itself or by its gateway. On the other hand, if both devices use peer DIDs, they can simply exchange their DIDs directly, which shifts the problem to securely delivering the DIDs in the first place.

#### Software availability The SSI ecosystem is new, and there is limited software available for embedded devices. Given the foundational importance of secure cryptographic algorithms and protocols, applications based on SSI should rely on existing libraries that encapsulate complexity and are well tested, which reduces the chances of vulnerabilities. Although reference implementations exist [16], they are focused on cloud and mobile use cases. To fully incorporate SSI into the IoT, portable and lightweight libraries tailored for constrained devices must be created and made widely available.

## V Conclusion and perspective

As the primary motivation for the development of the Internet was to remotely connect computers, the problem of secure identification of users and devices was left aside. While identity solutions such as accounts and certificates were eventually developed, they feature critical issues such as weak passwords, lack of privacy, and centralization. As it is common for systems to mature over time, as good (and bad) practices are learned, we argue that the Self-Sovereign Identity approach represents an important step forward in the area of digital identity. Particularly in the context of the IoT, this paper showed how SSI can (1) empower owners to have full control over both their identities and their devices, (2) improve privacy by decoupling pseudonymous and sensitive identity records, and (3) allow decentralized identity management by reducing the dependency on third parties. As for the next steps, the realization of SSI in the IoT will demand implementations that are optimized for constrained devices, both for cryptographic operations and for low-power communication. Furthermore, wide adoption of SSI will depend on the availability of open software libraries to manipulate DIDs and VCs on IoT devices. To conclude, we argue that, if adopted, SSI may significantly benefit the security and privacy of IoT applications, and potentially enable new use cases, such as those that involve cross-owner decentralized interactions.

## References

* [1] B. M. Leiner, V. G. Cerf, D. D. Clark, R. E. Kahn, L. Kleinrock, D. C. Lynch, J. Postel, L. G. Roberts, and S. Wolff, “A brief history of the internet,” _ACM SIGCOMM Computer Communication Review_ , vol. 39, no. 5, pp. 22–31, 2009.
* [2] V.
Taneski, M. Heričko, and B. Brumen, “Systematic overview of password security problems,” _Acta Polytechnica Hungarica_, vol. 16, no. 3, 2019.
* [3] L. M. Kohnfelder, “Towards a practical public-key cryptosystem,” Ph.D. dissertation, Massachusetts Institute of Technology, 1978.
* [4] J. Callas, L. Donnerhacke, H. Finney, D. Shaw, and R. Thayer, “Openpgp message format,” Internet Requests for Comments, RFC Editor, RFC 4880, November 2007. [Online]. Available: http://www.rfc-editor.org/rfc/rfc4880.txt
* [5] D. Cooper, S. Santesson, S. Farrell, S. Boeyen, R. Housley, and W. Polk, “Internet x.509 public key infrastructure certificate and certificate revocation list (crl) profile,” Internet Requests for Comments, RFC Editor, RFC 5280, May 2008. [Online]. Available: http://www.rfc-editor.org/rfc/rfc5280.txt
* [6] M. Sporny, D. Longley, C. Allen, M. Sabadello, and D. Reed, “Decentralized identifiers (DIDs) v1.0,” W3C, W3C Working Draft, Dec. 2019, https://www.w3.org/TR/2019/WD-did-core-20191209/.
* [7] M. Sporny, G. Noble, D. Burnett, B. Zundel, and D. Longley, “Verifiable credentials data model 1.0,” W3C, W3C Recommendation, Nov. 2019, https://www.w3.org/TR/2019/REC-vc-data-model-20191119/.
* [8] C. Allen, “The path to self-sovereign identity,” http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html, accessed: 2020-02-13.
* [9] M. S. Ferdous, F. Chowdhury, and M. O. Alassafi, “In search of self-sovereign identity leveraging blockchain technology,” _IEEE Access_, vol. 7, pp. 103 059–103 079, 2019.
* [10] W. Diffie and M. Hellman, “New directions in cryptography,” _IEEE Transactions on Information Theory_, vol. 22, no. 6, pp. 644–654, 1976.
* [11] S. Farrell, R. Housley, and S. Turner, “An internet attribute certificate profile for authorization,” Internet Requests for Comments, RFC Editor, RFC 5755, January 2010.
* [12] J. M. Rabaey, “The swarm at the edge of the cloud - a new perspective on wireless,” in _VLSI Circuits (VLSIC), 2011 Symposium on_. IEEE, 2011, pp. 6–8.
* [13] L. C. Costa, J. Rabaey, A. Wolisz, M. Rosan, and M. K. Zuffo, “Swarm os control plane: an architecture proposal for heterogeneous and organic networks,” _IEEE Transactions on Consumer Electronics_, vol. 61, no. 4, pp. 454–462, 2015.
* [14] Y. Kortesniemi, D. Lagutin, T. Elo, and N. Fotiou, “Improving the privacy of iot with decentralised identifiers (dids),” _Journal of Computer Networks and Communications_, vol. 2019, 2019.
* [15] D. Lagutin, Y. Kortesniemi, N. Fotiou, and V. A. Siris, “Enabling decentralised identifiers and verifiable credentials for constrained internet-of-things devices using oauth-based delegation,” in _Workshop on Decentralized IoT Systems and Security (DISS)_, 2019.
* [16] Hyperledger Foundation, “Hyperledger Aries,” accessed: 2020-02-15. [Online]. Available: https://www.hyperledger.org/projects/aries
2024-09-04T02:54:59.007056
2020-03-11T04:51:07
2003.05110
{ "authors": "Nicole F. Allard, John F. Kielkopf, Siyi Xu, Gr\\'egoire Guillon, Bilel\n Mehnen, Roberto Linguerri, Muneerah Mogren Al Mogren, Majdi Hochlaf, Ivan\n Hubeny", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26150", "submitter": "John Kielkopf", "url": "https://arxiv.org/abs/2003.05110" }
arxiv-papers
# H–He collision-induced satellite in the Lyman-$\alpha$ profile of DBA white dwarf stars

Nicole F. Allard 1,2, John F. Kielkopf 3, Siyi Xu 4, Grégoire Guillon 5, Bilel Mehnen 6, Roberto Linguerri 6, Muneerah Mogren Al Mogren 7, Majdi Hochlaf 6, Ivan Hubeny 8
1GEPI, Observatoire de Paris, Université PSL, UMR 8111, CNRS, 61, Avenue de l'Observatoire, F-75014 Paris, France
2Sorbonne Université, CNRS, UMR7095, Institut d'Astrophysique de Paris, 98bis Boulevard Arago, Paris, France
3Department of Physics and Astronomy, University of Louisville, Louisville, Kentucky 40292, USA
4Gemini Observatory, 670 N. A'ohoku Place, Hilo, HI 96720, USA
5Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR6303, CNRS, Université de Bourgogne Franche Comté, 21078 Dijon Cedex, France
6Université Gustave Eiffel, COSYS/LISIS, 5 Bd Descartes, 77454 Champs sur Marne, France
7Chemistry Department, Faculty of Science, King Saud University, PO Box 2455, Riyadh 11451, Kingdom of Saudi Arabia
8Department of Astronomy, University of Arizona, 933 N Cherry Ave, Tucson, AZ 85719, USA
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)

###### Abstract

The spectra of helium-dominated white dwarf stars with hydrogen in their atmosphere present a distinctive broad feature centered around 1160 Å in the blue wing of the Lyman-$\alpha$ line. It is extremely apparent in WD 1425+540, recently observed with HST COS. With new theoretical line profiles based on ab initio atomic interaction potentials we show that this feature is a signature of a collision-induced satellite due to an asymptotically forbidden transition. This quasi-molecular spectral satellite is crucial to understanding the asymmetrical shape of Lyman-$\alpha$ seen in this and other white dwarf spectra. Our previous work predicting this absorption feature was limited by molecular potentials that were not adequate to follow the atomic interactions with spectroscopic precision to the asymptotic limit of large separation. A new set of potential energy curves and electronic dipole transition moments for the lowest electronic states of the H–He system were developed to account accurately for the behaviour of the atomic interactions at all distances, from the chemical regime within 1 Å out to where the radiating H atoms are not significantly perturbed by their neighbors. We use a general unified theory of collision-broadened atomic spectral lines to describe a rigorous treatment of hydrogen Lyman-$\alpha$ with these potentials and present a new study of its broadening by radiative collisions of hydrogen and neutral helium. These results enable ab initio modeling of radiative transport in DBA white dwarf atmospheres.

###### keywords: (stars:) white dwarfs < Stars - stars: atmospheres < Stars - atomic data < Physical Data and Processes - atomic processes < Physical Data and Processes - line: profiles < Physical Data and Processes - molecular data < Physical Data and Processes

## 1 Introduction

Theoretical studies of the effects of neutral atom collisions on atomic spectral lines have often been hindered by our ignorance of the atomic potentials. Even for systems as simple as H-H or H-He, the interactions and the electric transition moments are quite difficult to compute with the accuracy which is needed for evaluating a complete line profile.
The fundamental theory of calculating the spectral line profile (Allard et al., 1999) requires knowledge of molecular potentials with high accuracy because the shape and strength of the line profile are very sensitive to the details of the molecular potential curves describing the atom-atom collisions. In Allard & Christova (2009) we made an exhaustive study of the red wing of the Lyman-$\alpha$ line perturbed by H–He collisions, where we used the potentials and electric dipole transition moments of Theodorakopoulos et al. (1984) and Theodorakopoulos et al. (1987). We considered the high He densities met in cool DZ white dwarfs and examined the range of validity of the one-perturber approximation widely used to calculate the line wings. We have shown there that the extension of the red wing of the Lyman-$\alpha$ line seen in DZ white dwarf spectra depends strongly on the stellar temperature, while it is not dependent on the helium density. We also predicted a blue satellite which only very recently has been observed in Hubble Space Telescope Cosmic Origins Spectrograph (HST COS) observations (Xu et al., 2017). The importance of a correct determination of the blue wing of the Lyman-$\alpha$ line for interpreting the asymmetrical shape of the Lyman-$\alpha$ line observed with COS is presented in Sect. 2. An accurate prediction of the satellite, and consequently of the full Lyman-$\alpha$ profile, requires exacting new ab initio calculations to obtain the ground and first excited potential energy curves and the corresponding electric dipole transition moments for the H–He system. The new molecular data in Sect. 3 corroborate the prediction of a line satellite in the Lyman-$\alpha$ profile (Allard & Christova, 2009) that is described in Sect. 4. In Allard et al. (1999) we previously derived a classical path expression for a pressure-broadened atomic spectral line shape that includes the effects of a radiative electric dipole transition moment that is dependent on the position of the radiating atom and its dynamic neighbors. Such a comprehensive unified approach employing the precise molecular data is fundamentally necessary to obtain an accurate absorption line profile that is valid over the full breadth of the spectral line for the range of densities and temperatures found in stellar atmospheres.

Figure 1: COS observation of WD 1425+540. The broad distinctive collision-induced satellite in the blue wing of the Lyman-$\alpha$ line at about 1160 Å is clearly visible (Xu et al., 2017). The strong emission at the center of Lyman-$\alpha$ is from Earth's geocoronal hydrogen above the HST orbit.

## 2 COS observation of WD 1425+540

WD 1425+540 (T=14,490 K, log g=7.95) is the prototype of the DBA white dwarfs: a helium-dominated white dwarf that also has a large amount of hydrogen in its atmosphere (Bergeron et al., 2011). It was observed with HST COS under program 13453, and the details of the observation and data reduction strategy were reported by Xu et al. (2017). Here, we focus on the spectrum of segment B of the G130M grating, which covers 1130-1270 Å, as shown in Fig. 1. As described in Xu et al. (2017), there are two unusual features of the Lyman-$\alpha$ profile in WD 1425+540. First, the line profile is very asymmetric, exhibiting an extended blue wing with the satellite feature as noted. Second, previous white dwarf spectral models cannot reproduce the strength of Lyman-$\alpha$ and Balmer-$\alpha$ simultaneously.
The derived hydrogen abundance is more than a factor of 10 higher from the Lyman-$\alpha$ measurement than from Balmer-$\alpha$. While WD 1425+540 is the most extreme case so far, these peculiarities have been observed in other DBA white dwarfs as well, e.g. Jura et al. (2012). The asymmetry also could not be produced by the white dwarf models of Xu et al. (2017) because the opacity data used for the Lyman-$\alpha$ profile did not take into account the quasi-molecular line satellite predicted in Allard & Christova (2009). Once this feature is included, the observed asymmetry is reproduced (Gänsicke et al., 2018). Accurate data for both Lyman-$\alpha$ and Balmer-$\alpha$ are essential to determine the hydrogen abundance correctly. The goal of this paper is to develop the foundation of the atomic and molecular physics needed to compute a complete profile without making ad hoc assumptions. We emphasize the importance of accurate potentials and electric dipole transition moment data for this purpose, and here we provide that data for Lyman-$\alpha$. With the new potentials of H-He we also compute a model DBA white dwarf spectrum that demonstrates their validity.

## 3 H–He diatomic potentials

### 3.1 Methodology and benchmarks

The lowest electronic excited states of hydrogen and helium are at unusually high energies for neutral atoms (> 10 eV) with respect to their ground states, and close to the corresponding ionization thresholds. Hydrogen with $n=2$ or greater is a Rydberg atom in this sense (Gallagher, 1994). The electronic excited states of the H–He diatomic system of interest in the present work correlate adiabatically to those of these atoms. Therefore, for a correct description of the electronic states of the H–He diatomic system consistent with its isolated atomic fragments, one needs the inclusion of diffuse functions that can flexibly represent the states. In addition, the computation of the possible interactions between these electronic states, and of the subsequent mixing of their wavefunctions that results in an apparent change in electric dipole transition moments, requires post-Hartree-Fock multi-configurational approaches. More specifically, we used the Complete Active Space Self Consistent Field (CASSCF) (Knowles & Werner, 1985; Werner & Knowles, 1985) method followed by the internally contracted Multi-Reference Configuration Interaction (MRCI) (Knowles & Werner, 1988; Werner & Knowles, 1988; Shamasundar et al., 2011) method as implemented in the MOLPRO 2015 package (Werner et al., 2015). In MRCI, the complete CASSCF wave functions are used as a reference. Furthermore, the Davidson correction (MRCI+Q) (Langhoff & Davidson, 1974) has been applied to the resulting energies to account for the lack of size-consistency of the MRCI method. These computations were performed in the $C_{2v}$ point group, where the $B_{1}$ and $B_{2}$ representations were treated on equal footing. Benchmarks on valence-Rydberg electronic states of other molecular systems (Spelsberg & Meyer, 2001; Ndome et al., 2008; Hochlaf et al., 2010) showed the need to use a CASSCF active space larger than the full-valence space. The atomic basis set for the H and He atoms had to be optimized as well. Thus, we performed a series of benchmark computations at different levels of accuracy to find the appropriate states for convergence.
Firstly, at the lowest level of accuracy, we adopted a small active space of 3 electrons in 7 molecular orbitals in conjunction with the aug-cc-pV5Z (Dunning, 1989; Kendall et al., 1992) basis set. With this approach, we found inconsistencies in the calculated energies, especially in the asymptotic region. Indeed, with this simplest choice there is a large energy gap of $\sim 0.45$ eV between the two equivalent dissociation limits H($2p\,^{2}P$) + He($1s^{2}\,{}^{1}S$) and H($2s\,^{2}S$) + He($1s^{2}\,{}^{1}S$). Obviously, this gap is unphysical since these two asymptotes should be strictly degenerate because the two H ($n=2$) states have the same energy apart from the Lamb shift and negligibly small fine and hyperfine structure. Moreover, we found a spurious second potential well ($D_{e}$ $\sim$ 660 cm$^{-1}$) in the $C\,\Sigma$ state of H–He at large internuclear separations (for $R_{\mathrm{H-He}}$ $\sim$ 4.2 Å). Thus, at this level of accuracy, a rather poor chemical description of the H–He molecule is obtained in spite of the relatively large size of the MRCI computations with $\sim 4.3\times 10^{4}$ uncontracted configuration state functions (CSFs) per $C_{2v}$ symmetry. This may be linked to some missing correlation energy in the MRCI wavefunctions that can be recovered by means of larger active spaces in the reference CASSCF vector and by adopting more diffuse atomic basis sets. Secondly, we tried an enlarged CASSCF active space of 3 electrons in 14 molecular orbitals in conjunction with the aug-cc-pV6Z (Dunning, 1989; Kendall et al., 1992) basis set. In the subsequent MRCI treatment, the multi-configuration wave functions included $\sim 2.1\times 10^{5}$ uncontracted CSFs per $C_{2v}$ symmetry. With this ansatz, the energy difference between the above mentioned asymptotes was reduced to $\sim 0.33$ eV but still did not vanish. For modeling based on unified spectral line shape theory an error of this size would be unacceptable. Finally, using the same active space as in the second series of computations, we added a set of diffuse functions to the aug-cc-pV6Z basis set for H and He. Hereafter, this enlarged set will be denoted as aug-cc-pV6Z⋆. The exponents of the added Gaussian primitives, which were left uncontracted, are listed in Table 1 in the Appendix. This approach, compared to the previous ones, solved all the inconsistencies mentioned above. That is, it yielded degenerate H($2p\,^{2}P$) + He($1s^{2}\,{}^{1}S$) and H($2s\,^{2}S$) + He($1s^{2}\,{}^{1}S$) dissociation limits and no spurious potential well in the $C\,\Sigma$ state. We note that convergence was reached at this step since a further expansion of the aug-cc-pV6Z⋆ set by inclusion of more diffuse functions led to almost identical results. In these calculations, the MRCI wave functions included more than $7.5\times 10^{5}$ uncontracted CSFs per $C_{2v}$ symmetry species. These relatively large computations for such a small molecular system were necessary to obtain the precision needed to model the Lyman-$\alpha$ profile accurately.

Figure 2: Top: short-range part of the potential curves of the H–He molecule: $A$ (red dotted), $B$ (green dashed line) and $C$ (blue solid). Bottom: $X$ (black solid). Note the agreement at short distance with data of Theodorakopoulos et al. (1984) that are overplotted in dotted cyan.

Figure 3: Top: long-range part of the $C\,\Sigma$ potential curve correlated with the $2s$ state. This work (full line), Theodorakopoulos et al. (1984) (dotted line).
Bottom: $\Delta V(R)$ (black) and $\tilde{d}(R)$ (blue) at 14500 K for the $C-X$ transition. The atomic separation for the maximum in the $C-X$ difference potential is $R_{\mathrm{max}}\approx 2.2\,$Å as shown in Fig. 4. Note that the $C-X$ transition in this work is forbidden asymptotically as it is a transition between the $2s$ and $1s$ states of the free hydrogen atom at large $R$.

### 3.2 Potential energy curves and transition moments

The electronic states investigated in the present contribution correlate, at large internuclear distances, to the H($1s\,^{2}S$) + He($1s^{2}\,{}^{1}S$), H($2s\,^{2}S$) + He($1s^{2}\,{}^{1}S$), and H($2p\,^{2}P$) + He($1s^{2}\,{}^{1}S$) dissociation limits (see Fig. 2 and Table 2 in the Appendix). The MRCI+Q/aug-cc-pV6Z⋆ potential energy curves of the four lowest electronic states of H–He, obtained with the largest active space and basis set as described in the previous section, are represented in Fig. 2 as a function of the internuclear distance, $R_{\mathrm{H-He}}$. This figure shows that the ground state possesses a repulsive potential correlating to the H($1s\,^{2}S$) + He($1s^{2}\,{}^{1}S$) isolated atom asymptote at large distances. The ground $X\,^{2}\Sigma^{+}$ state is repulsive at short range with a shallow well at $4\,$Å. The excited $A\,^{2}\Sigma^{+}$, $B\,^{2}\Pi$ and $C\,^{2}\Sigma^{+}$ states have rather deep potential wells in the molecular region closer than 1 Å, and complex behavior at longer range that can affect transition probabilities and difference potential energies in subtle ways. We refer to these as the $X\,\Sigma$, $A\,\Sigma$, $B\,\Pi$, and $C\,\Sigma$ states, or more succinctly by the letter designations $X$, $A$, $B$, and $C$ in the following. They correlate adiabatically to the H($n=2$) + He($1s^{2}\,{}^{1}S$) dissociation limits at large internuclear separations (see Table 2 in the Appendix). The labels are ordered with $A\,\Sigma$ the lowest and $C\,\Sigma$ the highest inside this close 1 Å region; the wells in all three states are of the order of $15\,000\;\mathrm{cm}^{-1}$ deep, with minima located at $R_{\mathrm{H-He}}$ = 0.7407, 0.7686, and 0.8095 Å for the $A$, $B$ and $C$ states, respectively (see Table 3 in the Appendix). While the $A$ and $B$ states have potentials with a simple short-range well, the $C$ state also exhibits a potential maximum of $\approx 0.666$ eV at $R_{\mathrm{H-He}}=2.098$ Å. Its presence causes a related maximum in the $C-X$ transition difference potential energy curve which affects the blue wing of Lyman-$\alpha$. Although the $C\,\Sigma$ H-He molecular state shown in Fig. 2 is correlated asymptotically with the $2s$ atomic state, we find that at $R_{\mathrm{H-He}}<7\;$Å the transition probability to the $X\,\Sigma$ ground state is not zero. Detailed electric dipole transition moments between the $X\,\Sigma$ ground state and the $A\,\Sigma$, $B\,\Pi$ and $C\,\Sigma$ excited states as a function of the internuclear distance have been calculated at the MRCI/aug-cc-pV6Z⋆ level. In this calculation almost all the transition moments are rather large, particularly for the $C\,\Sigma$ $\leftarrow$ $A\,\Sigma$ and $B\,\Pi$ $\leftarrow$ $A\,\Sigma$ transitions, where corresponding matrix elements of around -9.2 and -7.5 debye (D or $10^{-18}$ statcoulomb-cm) are calculated, respectively. Fig. 7 in the Appendix offers a detailed view. These transition moments correlate to the correct atomic values at dissociation.
In particular, the $\langle X\,\Sigma|DM|C\,\Sigma\rangle$ matrix element of the electric dipole transition moment (DM) vanishes at large $R_{\mathrm{H-He}}$, where the $1s-2s$ transition in the isolated hydrogen atom is forbidden to one-photon electric dipole transitions by parity conservation.

## 4 Lyman-alpha opacity

The theory of spectral line shapes, especially the unified approach we developed, determines the contributions of specific spectral lines to stellar opacities and may be incorporated into stellar atmosphere models to make accurate synthesis of stellar spectra possible. The line shape theory accounts for neutral atom broadening and shift in both the centers of spectral lines and their extreme wings with one consistent treatment, without ad hoc assumptions about the line shape or potentials. Complete details and the derivation of the theory are provided by Allard et al. (1999). The spectrum, $I(\Delta\omega)$, is the Fourier transform (FT) of an electric dipole transition autocorrelation function, $\Phi(s)$. For a perturber density $n_{p}$, we have

$\Phi(s)=e^{-n_{p}g(s)}\;,$ (1)

where the decay of the autocorrelation function with time leads to atomic line broadening. (See Eq. (121) of Allard et al. (1999).) Our approach introduces the concept of a modulated electric dipole transition moment $\tilde{d}_{if}(R(t))$ into the line shape calculation,

$\tilde{d}_{if}[R(t)]=d_{if}[R(t)]e^{-\frac{V_{i}[R(t)]}{2kT}}\;\;,$ (2)

where the potential energy for the initial state is

$V_{i}(R)=E_{i}(R)-E_{i}^{\infty}\;\;.$ (3)

The difference potential energy $\Delta V(R)$ for a transition $if$ is

$\Delta V(R)=V_{if}(R)=V_{f}(R)-V_{i}(R)\;\;.$ (4)

The Boltzmann factor $e^{-\frac{V_{i}(R)}{2kT}}$ in Eq. (2) appears because the perturbing atoms or ions are in thermal equilibrium with the radiating atom, which affects the probability of finding them initially at a given $R$. This treatment results in Lyman series line wing profiles that exhibit a sensitive dependence on temperature. We had to use electric dipole moments modulated by the Boltzmann factor in the comparison of emission spectra of Lyman-$\alpha$ (Kielkopf & Allard, 1998) and Balmer-$\alpha$ (Kielkopf et al., 2002) measured in the laboratory.

### 4.1 Study of the characteristics of the line satellite

In Allard & Christova (2009) we predicted a line satellite at 1157 Å in spectra computed for the temperature range of cool DZ white dwarfs with the potentials published in Theodorakopoulos et al. (1984). However, we noticed an unexpected well of about 150 cm$^{-1}$ (upper panel of Fig. 3) in the potential energy of the $C\,\Sigma$ state at $R\sim 8$ Å which may be related to the choice of basis states and has no clear physical origin. In this work we use the new ab initio calculations of the potentials over the full range of distances $R$ between the H and He atoms, since convergence at large $R$ is now reached. The long-range well of the $C\,\Sigma$ state of the Theodorakopoulos et al. (1984) and Theodorakopoulos et al. (1987) potentials is not found in these new calculations, as we see in Fig. 3.

Figure 4: Top: variation with temperature of the line satellite. The He density is $1\times 10^{20}$ cm$^{-3}$, the temperatures are 14500 K (full black line), $20\,000$ K (blue stars), and $5\,000$ K (red dashed line). Bottom: for the $C-X$ transition, $\Delta V(R)$ (black solid) and $\tilde{d}(R)$ at 5000 K (black solid), $10\,000$ K (red dotted), 14500 K (green dashed), and 20000 K (blue solid).
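To make Eq. (2) concrete, the minimal sketch below evaluates the Boltzmann-modulated moment on a grid of separations; the potential and dipole curves here are toy placeholder functions standing in for the MRCI data, and only the Boltzmann constant expressed in cm$^{-1}$ K$^{-1}$ is physical.

```python
import numpy as np

# Minimal sketch of Eq. (2): d~_if(R) = d_if(R) * exp(-V_i(R) / (2 k T)).
# The curves below are illustrative placeholders, not the MRCI data.
k_B = 0.6950348  # Boltzmann constant in cm^-1 per kelvin

R = np.linspace(0.5, 10.0, 500)      # internuclear separation (Angstrom)
V_i = 5.0e4 * np.exp(-R / 0.35)      # toy repulsive initial-state potential (cm^-1)
d_if = np.exp(-R / 3.0)              # toy R-dependent transition moment (debye)

def modulated_dipole(T):
    """Boltzmann-modulated transition moment of Eq. (2) at temperature T in K."""
    return d_if * np.exp(-V_i / (2.0 * k_B * T))

for T in (5000.0, 14500.0, 20000.0):
    # Raising T weakens the suppression at small R, where V_i is strongly
    # repulsive, so hotter atmospheres probe the inner parts of the potentials.
    print(T, modulated_dipole(T).max())
```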
At the highest temperatures the He can reach the inner regions of the lower state $X\,^{2}\Sigma$ potential and enhance the transition probability. The prediction of a satellite in the blue wing of the H–He line profile is related to a potential maximum at $R=2.1$ Å (see Sect. 3.2) of the $C\,\Sigma$ state. This leads to a maximum of the potential energy difference $\Delta V(R)$ in Eq. (4) for this transition shown in Fig. 3. The unified theory predicts that line satellites will be centered periodically at frequencies corresponding to integer multiples of the extrema of $\Delta V(R)$. In the quasi-static limit the first satellite on the line would be at $\Delta\omega=5\,000$ cm$^{-1}$, corresponding to $\lambda\sim 1150$ Å on the blue side of Lyman-$\alpha$ (shifting the Lyman-$\alpha$ wavenumber of $82\,259$ cm$^{-1}$ by $5\,000$ cm$^{-1}$ gives $87\,259$ cm$^{-1}$, i.e. $\lambda\approx 1146$ Å). In this case the maximum in $\Delta V$ occurs at rather small internuclear distance, and is quite sharp. The correspondingly short duration of the close collision leads to a broad satellite centered at $\lambda\sim 1160$ Å for T=$14\,500$ K (Fig. 4).

### 4.2 Temperature and density dependence

For a lower temperature, $T=5\,000$ K (Fig. 4), the duration of the collision is longer, and the line satellite at $\lambda\sim 1153$ Å is sharper and closer to the predicted quasi-static position than at higher temperatures. The oscillations which appear on the red side of the quasi-molecular satellite are due to interference effects described by Royer (1971) and Sando & Wormhoudt (1973). They depend on the relative velocity and therefore on temperature. Consequently, velocity averaging would moderate their amplitude in observed spectra. At temperatures below $10\,000$ K the blue wing of Lyman-$\alpha$ shortward of 1150 Å becomes significantly more transparent than at higher temperature, an order of magnitude effect below 1120 Å. Thus this far blue wing is a sensitive indicator of temperature in cool helium-rich WD atmospheres. The satellite amplitude depends on the value of the electric dipole transition moment through the region of the potential extremum responsible for the satellite, and on the position of this extremum. The blue line wings shown in Fig. 4 are unchanged in the range $14\,500$ to $20\,000$ K as there is no change with $T$ of $\tilde{d}_{if}[R(t)]$ at the internuclear distance where the potential difference goes through a maximum. $\tilde{d}_{if}[R(t)]$ at $14\,500$ K for the $C-X$ transition is also plotted in Fig. 3. In the former work we used electric dipole transition moments of Theodorakopoulos et al. (1987), where the $C-X$ transition was allowed. Nevertheless the amplitude and position of the line satellite are unchanged as they are due to a range of internuclear distance where the potentials and the dipole moments are almost identical, as we see in Fig. 5. The main difference between the two calculations concerns the red wing, which is lowered using the dipole moments of Theodorakopoulos et al. (1987), where the $A-X$ transition was forbidden.

Figure 5: Comparison of the unified line profile using the dipole moments of this work (black line) with the line profile using dipole moments of Theodorakopoulos et al. (1987) (red dashed line). The He density is $10^{20}$ cm$^{-3}$ and the temperature is 14500 K.

In summary, the unified line profile calculation leads to a flat blue wing due to a line satellite. The resulting asymmetry of the Lyman-$\alpha$ line can be easily appreciated in Fig. 5: the blue side of the line is wider than the red side.
Measured at the strength of the broad collision-induced 1160 Å satellite, the asymmetry ratio of the width on the blue side to that on the red is as large as 2.2. Consequently, the near wing is clearly far different from a symmetric Lorentzian because the satellite is rather close to the isolated atom line center. This was also the case for the Mg b triplet perturbed by He (Allard et al., 2016). The existence of the asymmetrical shape of these line profiles depends strongly on the maximum value of the potential energy difference $\Delta V(R)$, which predicts the position of the line satellite, and on the atomic collision energies at the temperatures of interest. These results enable computing atmosphere models and synthetic spectra, which we compare to an HST COS observation of WD 1425+540 in Section 5.

## 5 Model atmosphere and synthetic white dwarf spectrum

To demonstrate the importance of a proper treatment of He perturbers on hydrogen lines, synthetic spectra of the white dwarf WD 1425+540 were computed using the stellar atmosphere code TLUSTY (version 207) to compute the atmospheric structure, and a companion program SYNSPEC (version 53) to generate detailed synthetic spectra. For a description of the previous versions (205 and 51) see the works of Hubeny & Lanz (2017) and Hubeny & Lanz (2011a, b). This procedure allows us to study the effect of the H/He ratio on the spectrum, and the development of the line wings, though it is not fully self-consistent with the stellar atmosphere model since that would require a treatment of He I optical lines as well. We have computed a number of H-He models, with the basic model parameters, $T_{\rm eff}=14,410$ K and $\log g=7.89$, from Gänsicke et al. (2018), and with varying He/H ratio. For treating the electron and proton broadening of the hydrogen lines we used the Tremblay & Bergeron (2009) data. The He/H ratio was adjusted to obtain a reasonable agreement by eye with the observed spectrum, and we found that a nominal ratio of $4\times 10^{3}$ ($\log(N_{\mathrm{H}}/N_{\mathrm{He}})\approx-3.6$) fitted the observed profile well. Liebert et al. (1979) found 3.7 from a ground-based H$\beta$ profile, and recently Gänsicke et al. (2018) analyzed the L$\alpha$ profile and adopted a somewhat larger $\log(N_{\mathrm{H}}/N_{\mathrm{He}})\approx-4.0\pm 0.20$. The potential energies for the $n=1$ and $n=2$ electronic states of H-He that were used in our models are the ones described in this paper. Stellar opacities were computed using H-He electric dipole moments from the previous work of Theodorakopoulos et al. (1987), in which the $A-X$ transition is forbidden, and also using the new dipole transition moments from this work, in which the $A-X$ transition is allowed. As shown in Fig. 6, the observed red wing of Lyman-$\alpha$ is consistent with a suppressed $A-X$ transition probability in the region of atomic separation with difference potential energy that would contribute. We conclude that the additional basis states used for the new ab initio potentials improve the calculation of the potential energy curves, but may not capture the dipole transition moments of the real H-He system correctly for the $A-X$ transition. However, the combination of this work's potentials and the dipole moments of Theodorakopoulos et al. (1987) achieves a remarkable fit in Fig. 6 to the HST COS spectrum of WD 1425+540 when incorporated into the unified line shape theory we described here. Figure 6: The observed spectrum of WD 1425+540 (also see Fig.
1) compared with a synthetic white dwarf spectrum in the Lyman-$\alpha$ region. The synthetic spectrum is computed with TLUSTY and SYNSPEC for a temperature of 14 500 K and a He/H ratio of $4\times 10^{3}$ using the unified line profile with the potentials of this work. For the dipole moments of Theodorakopoulos et al. (1987) (red solid line) the $A-X$ transition is forbidden and its contributions to the opacity are suppressed. For the dipole moments of this work (blue dashed line), the $A-X$ transition contributes in the red wing of the model but is absent in the observed spectrum. ## 6 Conclusions The Lyman-$\alpha$ region of the spectrum of a helium-rich white dwarf with hydrogen in its atmosphere is determined by the changes in transition energy and transition probability during the H-He collisions that broaden the atomic spectral line. We developed new H-He potential energies and transition dipole moments for the hydrogen $1s$, $2s$, and $2p$ states as input data for a unified theory calculation of the profile of WD 1425+540 to test the potentials and dipole moments, and to confirm the origin of the short- wavelength “blue” satellite. We found that the spectral line profile from the new molecular data has a satellite feature in the blue wing that agrees with previous work. These results provide a benchmark implementation of ab initio atomic and molecular potentials for the most basic neutral non-resonant atom- atom pair relevant to stellar atmosphere models. The new calculations show how the profile depends on the variation of the electric dipole transition moment and interaction potential energy with atomic separation. A comparison with the observed spectrum of WD 1425+540 was made by using these theoretical opacities in a stellar atmosphere and spectrum synthesis code. While it was not our goal to refine the stellar model based on the new theoretical data, the profiles reproduce the observed spectrum with a reasonable He/H ratio. Further, the absence of an extended red wing of Lyman-$\alpha$ in the observed spectrum suggests that the states of the difference potential that could contribute to that region have the reduced transition dipole moment that was found in previous molecular models. The new work presented here shows clearly that there is an opportunity to use stellar spectra to improve the atomic and molecular physics, ultimately to yield better models for astrophysical applications. For H–He, the $A-X$ transition dipole moment remains uncertain. The blue wing of Lyman-$\alpha$ is sensitive to He density and the structure and temperature of the stellar atmosphere, with a profile that for wavelengths shortward of $1150\,$Å will have reduced opacity from regions with temperatures under $10\,000\,$K. Profiles computed with a unified theory of collision broadening based on accurate data from ab initio molecular physics take into account the strong dependence of the amplitude of the electric dipole transition moment on atom-atom separation ($R$) where the potential energy change $\Delta V(R)$ is an extremum. Incorporated into model atmospheres, this dependence may be used to probe white dwarf or stellar atmospheres for density and temperature. This emphasizes the importance of the accuracy of both the potential energies and the electric dipole transition moments for the line shape calculations that have traditionally assumed electric dipole transition moments are constant (Allard & Kielkopf, 1982; Allard & Koester, 1992; Allard et al., 1994). 
The effect of collision broadening is central to understanding the opacity of stellar atmospheres, yet there have been only a few definitive comparisons with experimental work for atomic H (Kielkopf & Allard, 1995, 1998; Kielkopf et al., 2004). This is because of the difficulty of creating an environment in a laboratory experiment simulating a stellar atmosphere with accurate diagnostics. On the theoretical side, the maturing capability of ab initio methods now offers the possibility of accurately computing the interaction of H with H (Drira, 1999; Spielfiedel, 2003; Spielfiedel et al., 2004) and of H with He atoms (this work). While an accurate determination of the broadening of Balmer-$\alpha$ by high density atomic hydrogen (that is, H–H) has been done by Allard et al. (2008), nothing comparable exists for H–He. Our calculations reported in Allard et al. (2008) support the results of Barklem et al. (2000); Barklem et al. (2002) that the Ali & Griem (1966) theory underestimates the actual line width. Recent laboratory measurements show a similar result at high density in environments comparable to white dwarf atmospheres (Kielkopf & Allard, 2014). It would now be possible to similarly improve the calculation of Balmer-$\alpha$ broadening and its contribution to the full white dwarf opacity model. A major improvement to comprehensive theoretical models for DBA white dwarf spectra is within reach: determining H-He molecular data for the $n=3$ excited states, and using those to compute accurate Balmer-$\alpha$ profiles under white dwarf atmosphere conditions. Such results would help in understanding the differences in stellar parameters that are found from Balmer and Lyman line profiles. In conclusion, complete unified line profiles based on accurate atomic and molecular physics for both the Lyman-$\alpha$ and Balmer-$\alpha$ lines should be incorporated into the analysis of DBA white dwarf spectra to derive the hydrogen abundance.

## acknowledgements

The paper was based on observations made with the NASA/ESA Hubble Space Telescope under program 13453, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. We thank the COST Action CM1405 MOLecules in Motion (MOLIM) of the European Community for support. The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for funding the research through the Research Group Project No. RGP-333. This work was supported by the CNRS program Physique et Chimie du Milieu Interstellaire (PCMI) co-funded by the Centre National d'Etudes Spatiales (CNES).

## References

* Ali & Griem (1966) Ali A. W., Griem H. R., 1966, Physical Review, 144, 366
* Allard & Christova (2009) Allard N. F., Christova M., 2009, New Astron. Rev., 53, 252
* Allard & Kielkopf (1982) Allard N. F., Kielkopf J. F., 1982, Rev. Mod. Phys., 54, 1103
* Allard & Koester (1992) Allard N. F., Koester D., 1992, A&A, 258, 464
* Allard et al. (1994) Allard N. F., Koester D., Feautrier N., Spielfiedel A., 1994, A&A Suppl., 108, 417
* Allard et al. (1999) Allard N. F., Royer A., Kielkopf J. F., Feautrier N., 1999, Phys. Rev. A, 60, 1021
* Allard et al. (2008) Allard N. F., Kielkopf J. F., Cayrel R., van 't Veer-Menneret C., 2008, A&A, 480, 581
* Allard et al. (2016) Allard N. F., Leininger T., Gadéa F. X., Brousseau-Couture V., Dufour P., 2016, A&A, 588, A142
* Barklem et al. (2000) Barklem P.
S., Piskunov N., O'Mara B. J., 2000, A&A, 363, 1091
* Barklem et al. (2002) Barklem P. S., Stempels H. C., Allende Prieto C., Kochukhov O. P., Piskunov N., O'Mara B. J., 2002, A&A, 385, 951
* Bergeron et al. (2011) Bergeron P., et al., 2011, ApJ, 737, 28
* Drira (1999) Drira I., 1999, Journal of Molecular Spectroscopy, 198, 52
* Dunning (1989) Dunning Jr. T. H., 1989, J. Chem. Phys., 90, 1007
* Gallagher (1994) Gallagher T. F., 1994, Rydberg Atoms. Cambridge University Press, Cambridge, U.K.
* Gänsicke et al. (2018) Gänsicke B. T., Koester D., Farihi J., Toloza O., 2018, MNRAS, 481, 4323
* Hochlaf et al. (2010) Hochlaf M., Ndome H., Hammoutène D., Vervloet M., 2010, Journal of Physics B Atomic Molecular Physics, 43, 245101
* Hubeny & Lanz (2011a) Hubeny I., Lanz T., 2011a, TLUSTY: Stellar Atmospheres, Accretion Disks, and Spectroscopic Diagnostics (ascl:1109.021)
* Hubeny & Lanz (2011b) Hubeny I., Lanz T., 2011b, Synspec: General Spectrum Synthesis Program (ascl:1109.022)
* Hubeny & Lanz (2017) Hubeny I., Lanz T., 2017, A brief introductory guide to TLUSTY and SYNSPEC (arXiv:1706.01859)
* Jura et al. (2012) Jura M., Xu S., Klein B., Koester D., Zuckerman B., 2012, ApJ, 750, 69
* Kendall et al. (1992) Kendall R. A., Dunning Jr. T. H., Harrison R. J., 1992, J. Chem. Phys., 96, 6796
* Kielkopf & Allard (1995) Kielkopf J. F., Allard N. F., 1995, ApJ, 450, L75
* Kielkopf & Allard (1998) Kielkopf J. F., Allard N. F., 1998, Phys. Rev. A, 58, 4416
* Kielkopf & Allard (2014) Kielkopf J. F., Allard N. F., 2014, Journal of Physics B Atomic Molecular Physics, 47, 155701
* Kielkopf et al. (2002) Kielkopf J. F., Allard N. F., Decrette A., 2002, European Physical Journal D, 18, 51
* Kielkopf et al. (2004) Kielkopf J. F., Allard N. F., Huber J., 2004, ApJ, 611, L129
* Knowles & Werner (1985) Knowles P. J., Werner H.-J., 1985, Chemical Physics Letters, 115, 259
* Knowles & Werner (1988) Knowles P. J., Werner H.-J., 1988, Chemical Physics Letters, 145, 514
* Kramida (2010) Kramida A. E., 2010, Atomic Data and Nuclear Data Tables, 96, 586
* Langhoff & Davidson (1974) Langhoff S. R., Davidson E. R., 1974, J. Quant. Chem., 8, 61
* Liebert et al. (1979) Liebert J., Gresham M., Hege E. K., Strittmatter P. A., 1979, Astronomical Journal, 84, 1612
* Ndome et al. (2008) Ndome H., Hochlaf M., Lewis B. R., Heays A. N., Gibson S. T., Lefebvre-Brion H., 2008, J. Chem. Phys., 129, 164307
* Royer (1971) Royer A., 1971, Phys. Rev. A, 43, 499
* Sando & Wormhoudt (1973) Sando K. M., Wormhoudt J. G., 1973, Phys. Rev. A, 7, 1889
* Shamasundar et al. (2011) Shamasundar K. R., Knizia G., Werner H.-J., 2011, J. Chem. Phys., 135, 054101
* Spelsberg & Meyer (2001) Spelsberg D., Meyer W., 2001, J. Chem. Phys., 115, 6438
* Spielfiedel (2003) Spielfiedel A., 2003, J. Mol. Spectrosc., 217, 162
* Spielfiedel et al. (2004) Spielfiedel A., Palmieri P., Mitrushenkov A., 2004, Molec. Phys., 102, 2249
* Theodorakopoulos et al. (1984) Theodorakopoulos G., Farantos S. C., Buenker R. J., Peyerimhoff S. D., 1984, Journal of Physics B Atomic Molecular Physics, 17, 1453
* Theodorakopoulos et al. (1987) Theodorakopoulos G., Petsalakis I. D., Nicolaides C. A., Buenker R. J., 1987, J. Phys. B, 20, 2339
* Tremblay & Bergeron (2009) Tremblay P. E., Bergeron P., 2009, Astrophysical Journal, 696, 1755
* Werner & Knowles (1985) Werner H.-J., Knowles P. J., 1985, J. Chem. Phys., 82, 5053
* Werner & Knowles (1988) Werner H.-J., Knowles P. J., 1988, J. Chem. Phys., 89, 5803
* Werner et al. (2015) Werner H.-J., Knowles P. J., Knizia G., Manby F.
R., Schütz M., et al., 2015, MOLPRO, version 2015.1, a package of ab initio programs
* Xu et al. (2017) Xu S., Zuckerman B., Dufour P., Young E. D., Klein B., Jura M., 2017, ApJ, 836, L7

## appendix

Parameters of the H–He molecular potentials are given in Tables 1 and 2. Figure 7 shows the dependence on $R$ of the radiative transition moments between the excited states and the perturbations of those states as the H and He atoms approach from large $R$.

Table 1: Exponents of the diffuse uncontracted Gaussian primitives added to the aug-cc-pV6Z basis set to form the presently used aug-cc-pV6Z⋆ basis sets for the H and He atoms.

State | 1 | 2 | 3
---|---|---|---
H(_s_) | 0.00690204 | 0.002520537 | 0.000920468
H(_p_) | 0.026565598 | 0.010533298 | 0.004176468
H(_d_) | 0.055406537 | 0.024364162 | 0.010713761
H(_f_) | 0.106396067 | 0.046204584 | 0.020065249
H(_g_) | 0.168703345 | 0.069928301 | 0.028985598
H(_h_) | 0.175320015 | 0.045069073 | 0.011585793
He(_s_) | 0.017177900 | 0.006596920 | 0.002533450
He(_p_) | 0.050416903 | 0.019858313 | 0.007821833
He(_d_) | 0.094209988 | 0.036827891 | 0.014396494
He(_f_) | 0.151890237 | 0.056684629 | 0.021154402
He(_g_) | 0.232902520 | 0.079072280 | 0.026845675
He(_h_) | 0.248198125 | 0.060632194 | 0.014811808

Table 2: Dissociation fragments, experimental and calculated relative dissociation asymptotic energies, and molecular states for the four lowest electronic states of H–He. Experimental data are from Kramida (2010).

H | He | Observed (cm$^{-1}$) | Calculated (cm$^{-1}$) | Molecular states
---|---|---|---|---
$1s\,^{2}S_{g}$ | $1s^{2}\,{}^{1}S_{g}$ | 0$^{a}$ | 0$^{a}$ | $X\,^{2}\Sigma^{+}$
$2p\,^{2}P_{u}$ | $1s^{2}\,{}^{1}S_{g}$ | 82259 | 82308 | $A\,^{2}\Sigma^{+}$, $B\,^{2}\Pi$
$2s\,^{2}S_{g}$ | $1s^{2}\,{}^{1}S_{g}$ | 82259 | 82308 | $C\,^{2}\Sigma^{+}$

$^{a}$Reference.

Table 3: Spectroscopic constants and dissociation energies for the three lowest excited electronic states of H–He as deduced from the MRCI+Q/aug-cc-pV6Z⋆ potential energy curves. $R_{e}$ corresponds to the equilibrium distance. $\omega_{e}$ and $\omega_{e}x_{e}$ are the vibrational constants. $\beta_{e}$ and $\alpha_{e}$ are the rotational constants. $D_{e}$ is the dissociation energy.

State | $R_{e}$ (Å) | $\omega_{e}$ (cm$^{-1}$) | $\omega_{e}x_{e}$ (cm$^{-1}$) | $\beta_{e}$ (cm$^{-1}$) | $\alpha_{e}$ (cm$^{-1}$) | $D_{e}$ (eV)
---|---|---|---|---|---|---
$A\,^{2}\Sigma^{+}$ | 0.74074 | 3697.2 | 149.5 | 38.16 | 2.608 | 2.563
$B\,^{2}\Pi$ | 0.76863 | 3313.4 | 149.8 | 35.44 | 2.629 | 2.218
$C\,^{2}\Sigma^{+}$ | 0.80953 | 2906.3 | 144.0 | 31.95 | 2.551 | 1.638

Figure 7: Potential energy differences in cm$^{-1}$ and electric dipole transition moments in debye (D or $10^{-18}$ statcoulomb-cm) between the four lowest electronic states of H–He calculated at the MRCI/aug-cc-pV6Z⋆ level. Note that the $C\,\Sigma$ $\leftarrow$ $X\,\Sigma$ transition is asymptotically forbidden, while transitions between excited states may occur. Upper panel: energy differences $A\,\Sigma-B\,\Pi$ (blue) and $A\,\Sigma-C\,\Sigma$ (red). Lower panel: electric dipole transition moments for H in the presence of He for states contributing to H Lyman-$\alpha$.
2024-09-04T02:54:59.017865
2020-03-11T06:04:09
2003.05124
{ "authors": "Yiying Yan, Zhiguo L\\\"u, JunYan Luo, Hang Zheng", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26151", "submitter": "Yiying Yan", "url": "https://arxiv.org/abs/2003.05124" }
arxiv-papers
# Role of generalized parity in the symmetry of fluorescence spectrum from two-level systems under periodic frequency modulation

Yiying Yan<EMAIL_ADDRESS>Department of Physics, School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
Zhiguo Lü<EMAIL_ADDRESS>Key Laboratory of Artificial Structures and Quantum Control (Ministry of Education), Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China Collaborative Innovation Center of Advanced Microstructures, Nanjing 210093, China
JunYan Luo Department of Physics, School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
Hang Zheng<EMAIL_ADDRESS>Key Laboratory of Artificial Structures and Quantum Control (Ministry of Education), Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China Collaborative Innovation Center of Advanced Microstructures, Nanjing 210093, China

###### Abstract

We study the origin of the symmetry of the fluorescence spectrum from the two-level system subjected to a low-frequency periodic modulation and a near-resonant high-frequency monochromatic excitation by using analytical and numerical methods based on the Floquet theory. We find that the fundamental origin of the symmetry of the spectrum can be attributed to the presence of the generalized parity of the Floquet states, which depends on the driving parameters. The absence of the generalized parity can lead to the asymmetry of the spectrum. Based on the generalized parity, the conditions for the symmetry and asymmetry of the spectrum can be derived, which succeed in predicting the symmetry and asymmetry of the spectrum for the harmonic, biharmonic, and multiharmonic modulations. Moreover, we find that the secular approximation widely used in the analytical calculation may lead to artifact symmetry of the spectrum that vanishes when such an approximation is avoided. The present study provides a significant perspective on the origin of the symmetry of the spectrum.

###### Index Terms: UAV Deployment? (none)

## I Introduction

Resonance fluorescence, arising from a quantum emitter driven by an external field and coupled to a radiative reservoir Mollow (1969); Scully and Zubairy (1997); Cohen-Tannoudji _et al._ (1998), is not only an important concept in quantum optics but also has potential application in quantum information technology; for instance, it plays an important role in realizing single-photon sources He _et al._ (2013); Santana _et al._ (2017); Kiršanskė _et al._ (2017). Particularly, the resonance fluorescence of two-level systems has attracted much interest and been studied in various aspects such as the spectrum Ficek and Freedhoff (1993); Agarwal _et al._ (1991); Ficek and Freedhoff (1996); Ficek and Rudolph (1999); Peiris _et al._ (2014); Konthasinghe _et al._ (2014); He _et al._ (2015); Toyli _et al._ (2016), squeezing Carmichael (1985); Grünwald and Vogel (2012, 2013), photon statistics Kimble _et al._ (1977); D'Souza _et al._ (1990); Nazir (2008); Pastukhov _et al._ (2014), photon antibunching Itano _et al._ (1988); Ficek _et al._ (1984); Damanet _et al._ (2018), and so on. The line shape of the spectrum is found to depend strongly on the external field that interacts with the quantum emitters as well as the reservoirs to which the quantum emitters are coupled. As is well-known, for a sufficiently strong monochromatic field, the spectrum has a symmetric three-peak structure, known as the Mollow triplet Mollow (1969).
More recently, bi- and multi-chromatically driven quantum systems have been of interest Kryuchkyan _et al._ (2017); Antón _et al._ (2017); Yan _et al._ (2018); Saiko _et al._ (2018). In such systems, the spectrum turns out to have a complicated multipeak structure Ficek and Freedhoff (1993); Agarwal _et al._ (1991); Ficek and Freedhoff (1996); Ficek and Rudolph (1999); Peiris _et al._ (2014); Konthasinghe _et al._ (2014); He _et al._ (2015), which can be either symmetric or asymmetric. In principle, the physical origin of the triplet and multipeak structures can be understood in terms of the transitions between the quantum dressed states Cohen-Tannoudji _et al._ (1998) or in terms of the transitions between the semiclassical Floquet states Breuer and Petruccione (1997); Yan _et al._ (2016a). The studies on resonance fluorescence have enriched the physics concerning the light-matter interaction. The origin of the symmetry of the spectrum has been identified in the case of a monochromatic field. Specifically, it is the detailed balance condition that guarantees the symmetry of the Mollow triplet Cohen-Tannoudji _et al._ (1998). As is well-known, the breakdown of such a condition leads to the asymmetry of the spectrum, for instance, in the presence of a pure dephasing reservoir Roy and Hughes (2012); McCutcheon and Nazir (2013) or the counter-rotating terms of the external field under certain conditions Browne and Keitel (2000); Yan _et al._ (2013, 2016a). The dephasing-induced asymmetric Mollow triplet has been experimentally observed in quantum dots (the pure dephasing arises because of the interaction between the quantum dot and its solid-state environment) Ulrich _et al._ (2011); Ulhaq _et al._ (2013). For bi- and multi-chromatic fields, the origin of the symmetry of the spectrum is rarely discussed and has not been comprehensively understood, owing to the fact that a physically transparent spectrum is hard to derive analytically. Recent studies show that the fluorescence spectrum from a driven two-level system with a modulated transition frequency is symmetrically multipeaked for vanishing detuning while asymmetrically multipeaked for finite detuning Yan _et al._ (2016b); Kryuchkyan _et al._ (2017); Antón _et al._ (2017); Yan _et al._ (2018). Such an exotic bichromatically driven two-level system with coexisting longitudinal and transversal coupling between the system and the applied fields has been experimentally studied in superconducting qubits Li _et al._ (2013); Pan _et al._ (2017), a single molecule Brunel _et al._ (1998), and nitrogen-vacancy spin qubits Rohr _et al._ (2014). Quantum systems under frequency modulation are also of interest in theoretical studies Kibis _et al._ (2009); Macovei and Keitel (2014); Zhao _et al._ (2015); Silveri _et al._ (2013); Macovei _et al._ (2015), the intriguing phenomena of which were reviewed recently Silveri _et al._ (2017). It is worthwhile to note that the bichromatically driven two-level system with frequency modulation differs from those considered in Refs. Agarwal _et al._ (1991); Ficek and Freedhoff (1993), where the two-level systems are transversely driven by a bichromatic field.
In such a case, the symmetry of the fluorescence spectrum is found to depend on the average detuning if the strengths of the two components of the bichromatic field are the same; a pronounced asymmetry of the spectrum is revealed when the average detuning is finite and/or the strengths of the two components of the field are unequal Agarwal _et al._ (1991); Ficek and Freedhoff (1993). For a bichromatically amplitude-modulated field, the spectrum is also found to be symmetric and asymmetric for vanishing and finite detuning, respectively Wilkens and Rzążewski (1989). So far the fundamental origin of such a detuning-dependent symmetry remains obscure. In this work, we use both analytical and numerical methods based on the Floquet theory to study the fundamental origin of the symmetry of the fluorescence spectrum from the two-level system under a low-frequency periodic modulation and a near-resonant monochromatic excitation. We address the symmetry and asymmetry of the spectrum by considering the generalized parity of the Floquet states rather than the behaviors of the bare-state or dressed-state populations as considered in Refs. Das and Macovei (2013); Macovei _et al._ (2015); Antón _et al._ (2017). The generalized parity is found to guarantee the symmetry of the spectrum, while the breaking of such a parity can yield a pronouncedly asymmetric spectrum even in the vanishing detuning case. Based on the generalized parity, the conditions for the symmetric and asymmetric spectra are derived, which are not given in the previous works and cannot be derived from the behaviors of the bare- or dressed-state populations. The generalized-parity-induced symmetry of the spectrum is verified and illustrated in the context of the biharmonic modulation by the comparison between the analytical and numerical results. The analytical results are found to be in agreement with the numerically exact results in the regimes where the perturbation theory and secular approximation can be justified. In addition, we find that the spectrum with the secular approximation may have artifact symmetry under certain conditions, i.e., the spectrum with the secular approximation is symmetric while the numerically exact calculation shows asymmetric spectra because of the broken parity. The present finding straightforwardly explains the detuning-dependent symmetry in the harmonic modulation case and can also be extended to analyze the symmetry and asymmetry of the spectrum in the multiharmonic modulation cases. Our results suggest that it is feasible to control the symmetry and asymmetry of the spectrum via engineering the generalized parity of the Floquet states. The rest of the paper is organized as follows. In Sec. II, we first discuss the generalized-parity-induced symmetry of the fluorescence spectrum without the secular approximation and further elucidate the symmetry of the spectrum with a physically transparent formal spectrum with the secular approximation. In Sec. III, we analytically and numerically calculate the fluorescence spectrum in the context of the biharmonic modulation to verify the symmetry and asymmetry of the spectrum predicted based on the generalized parity. In the last section, the conclusions are given.
## II Fluorescence spectrum and generalized parity

We consider that the transition frequency of the two-level system is modulated periodically via a low-frequency external field $f(t)$ and the two-level system is also excited by a near-resonant monochromatic field, which is described by the following Hamiltonian ($\hbar=1$)

$H(t)=\frac{1}{2}[\omega_{0}+f(t)]\sigma_{z}+\frac{\Omega_{x}}{2}(\sigma_{+}e^{-i\omega_{x}t}+\sigma_{-}e^{i\omega_{x}t}),$ (1)

where $\sigma_{z(x,y)}$ is the usual Pauli matrix, $\omega_{0}+f(t)$ is the modulated transition frequency, $\sigma_{\pm}=(\sigma_{x}\pm i\sigma_{y})/2$ are the raising and lowering operators, and $\Omega_{x}$ ($\omega_{x}$) is the strength (frequency) of the monochromatic driving. Here we choose $f(t)=f(t+T)$ with $T$ being the fundamental period of the modulation and much greater than $2\pi/\omega_{x}$. This is a generalization of the model considered previously in Refs. Yan _et al._ (2016b); Kryuchkyan _et al._ (2017); Antón _et al._ (2017). To study the emission processes, we need to take into account the spontaneous decay. Thus, the time evolution of the driven two-level system under study is modeled by the Lindblad master equation. In the frame rotating at the frequency $\omega_{x}$, the Lindblad master equation takes the form

$\frac{d}{dt}\tilde{\rho}(t)={\cal L}(t)\tilde{\rho}(t),$ (2)

where $\tilde{\rho}(t)$ is the reduced density matrix in the rotating frame and the superoperator ${\cal L}(t)$ is given by ${\cal L}(t)\tilde{\rho}(t)=-i[\tilde{H}(t),\tilde{\rho}(t)]-\kappa/2[\\{\sigma_{+}\sigma_{-},\tilde{\rho}(t)\\}-2\sigma_{-}\tilde{\rho}(t)\sigma_{+}]$ with $\kappa$ being the radiative decay rate. $\tilde{H}(t)$ is the effective Hamiltonian and reads

$\tilde{H}(t)=\frac{\Omega_{x}}{2}\sigma_{x}+\frac{1}{2}[\delta+f(t)]\sigma_{z},$ (3)

with $\delta=\omega_{0}-\omega_{x}$ being the detuning between the bare transition frequency and the monochromatic excitation frequency. This master equation is actually a set of first-order differential equations with periodic coefficients. It can be directly solved by the so-called Floquet-Liouville (FL) approach with a desired accuracy Ho _et al._ (1986); Yan _et al._ (2016b). Although such a Floquet-theory-based numerical method is simple and efficient, it is not physically transparent for analyzing the role of the generalized parity of the Floquet states in the symmetry of the fluorescence spectrum. We use an alternative method which is developed in our previous works Yan _et al._ (2016a, 2018) to solve the master equation and calculate the fluorescence spectrum. We first calculate the Floquet states for $\tilde{H}(t)$ and use them as the bases to reformulate Eq. (2) and derive its analytical formal solutions with the aid of the secular approximation in the Floquet picture.

### II.1 The symmetry of fluorescence spectrum without secular approximation

The steady-state fluorescence spectrum is given by the Fourier transform of the time-averaged first-order correlation function Mollow (1969); Ho _et al._ (1986)

$S(\Delta)\propto{\rm Re}\frac{1}{T}\int_{0}^{\infty}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\left\langle\tilde{\sigma}_{+}(t^{\prime}+\tau)\tilde{\sigma}_{-}(t^{\prime})\right\rangle e^{-i\Delta\tau}dt^{\prime}d\tau,$ (4)

where $\Delta=\omega-\omega_{x}$, $\left\langle\tilde{\sigma}_{+}(t^{\prime}+\tau)\tilde{\sigma}_{-}(t^{\prime})\right\rangle$ is the first-order correlation function, and the tilde indicates that it is evaluated in the rotating frame.
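As a concrete starting point for evaluating Eq. (4), the master equation (2) can be integrated numerically and the two-time correlator then follows from the quantum regression theorem. The sketch below uses QuTiP and assumes a single harmonic modulation $f(t)=\Lambda\cos(\omega_{z}t)$; all parameter values are illustrative placeholders rather than those used in the paper.

```python
import numpy as np
import qutip as qt

# Rotating-frame Hamiltonian (3) with an assumed harmonic f(t) = Lam*cos(wz*t).
Omega_x, delta, Lam, wz, kappa = 1.0, 0.0, 2.0, 0.5, 0.05  # illustrative values

H = [0.5 * Omega_x * qt.sigmax() + 0.5 * delta * qt.sigmaz(),  # static part
     [0.5 * qt.sigmaz(), "Lam * cos(wz * t)"]]                 # periodic part
c_ops = [np.sqrt(kappa) * qt.sigmam()]                         # radiative decay

rho0 = qt.fock_dm(2, 1)                    # start in the lower state
tlist = np.linspace(0.0, 400.0, 4000)
result = qt.mesolve(H, rho0, tlist, c_ops,
                    e_ops=[qt.sigmaz(), qt.sigmam()],
                    args={"Lam": Lam, "wz": wz})
# result.expect[1] is <sigma_-(t)>; the correlator in Eq. (4) follows from
# such solutions via the quantum regression theorem.
```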
In general, it is difficult to derive an exact analytical spectrum. Nevertheless, we find that it is possible to show that the spectrum is exactly symmetric about $\Delta=0$ when $\delta+f(t)=-[\delta+f(t+T/2)]$ by realizing the fact that the driven two-level system possesses a generalized parity symmetry, i.e., $\sigma_{x}\tilde{H}(t+T/2)\sigma_{x}=\tilde{H}(t).$ (5) Here, the generalized parity transformation consists of an exchange between the up and down states of the two-level system ($\sigma_{z}\rightarrow-\sigma_{z}$) and a time shift of half a period of the modulation ($t\rightarrow t+T/2$). We state briefly how the generalized parity guarantees the symmetry of the spectrum. Owing to Eq. (5), we can construct a generalized parity transformation in the Liouville space, the details of which can be found in Appendix A. When $\delta+f(t)=-[\delta+f(t+T/2)]$, the superoperator ${\cal L}(t)$ is similarly found to be invariant under the generalized parity transformation. Based on this property, it can be derived from the master equation (2) without the secular approximation that, in the steady-state limit, the time-averaged first-order correlation function is a real-valued function in the rotating frame. As a result, the fluorescence spectrum is symmetric about $\Delta=0$. This finding shows that the symmetry of the spectrum occurs when $\delta+f(t)=-[\delta+f(t+T/2)]$ and results from the generalized parity. We will numerically verify the generalized-parity-induced symmetry in Sec. III. ### II.2 The symmetry of fluorescence spectrum with secular approximation To further elucidate the role of the generalized parity in determining the symmetry of the spectrum, we calculate the spectrum in the Floquet picture, which allows us to derive a physically transparent formal spectrum with the aid of the secular approximation. According to the Floquet theory Shirley (1965); Sambe (1973), the time-dependent Schrödinger equation governed by $\tilde{H}(t)$ possesses a set of formal solutions $|\tilde{\psi}_{\alpha}(t)\rangle=|\tilde{u}_{\alpha}(t)\rangle e^{-i\tilde{\varepsilon}_{\alpha}t}$, where $|\tilde{u}_{\alpha}(t)\rangle=|\tilde{u}_{\alpha}(t+T)\rangle$ is a Floquet state and $\tilde{\varepsilon}_{\alpha}$ is the corresponding real-valued quasienergy. The index $\alpha$ labels independent Floquet states. Substituting the formal solution into the Schrödinger equation, one readily finds that $[\tilde{H}(t)-i\partial_{t}]|\tilde{u}_{\alpha}(t)\rangle=\tilde{\varepsilon}_{\alpha}|\tilde{u}_{\alpha}(t)\rangle.$ (6) On solving this equation, one obtains the Floquet states and quasienergies of the driven two-level system.
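In practice, Eq. (6) can be solved by propagating the Schrödinger equation over one period and diagonalizing the one-period (monodromy) propagator $U(T,0)$, whose eigenvalues are $e^{-i\tilde{\varepsilon}_{\alpha}T}$. The sketch below continues the previous snippet; the piecewise-constant stepping and the labeling of the two Floquet states by increasing quasienergy are conventions of this illustration.

```python
from scipy.linalg import expm

N = 4096                          # time steps per modulation period
dt = T/N
Us = [np.eye(2, dtype=complex)]   # propagators U(t_k, 0) on the grid t_k = k*dt
for k in range(N):
    Us.append(expm(-1j*Heff((k + 0.5)*dt)*dt) @ Us[-1])

evals, evecs = np.linalg.eig(Us[-1])   # U(T,0); eigenvalues exp(-i*eps_alpha*T)
quasi = -np.angle(evals)/T             # quasienergies folded into one Brillouin zone
order = np.argsort(quasi)              # label alpha = -, + by increasing quasienergy
quasi, evecs = quasi[order], evecs[:, order]

# periodic Floquet modes |u_alpha(t_k)> = exp(+i*eps_alpha*t_k) U(t_k,0)|u_alpha(0)>
tgrid = np.arange(N)*dt
modes = np.array([Us[k] @ evecs for k in range(N)])   # shape (N, 2, 2), column alpha
modes = modes*np.exp(1j*np.outer(tgrid, quasi))[:, None, :]
print("quasienergies (units of kappa):", quasi)
```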
We use $|\tilde{u}_{\alpha}(t)\rangle$ ($\alpha=\pm$) as the basis to reformulate the master equation (2) and invoke the secular approximation Yan _et al._ (2016a, 2018), yielding $\displaystyle\frac{d}{dt}\tilde{\rho}_{++}(t)$ $\displaystyle=$ $\displaystyle-\Gamma_{{\rm rel}}\tilde{\rho}_{++}(t)+\Gamma_{{\rm s}},$ (7) $\displaystyle\frac{d}{dt}\tilde{\rho}_{+-}(t)$ $\displaystyle=$ $\displaystyle-(i\Delta_{+-}+\Gamma_{{\rm deph}})\tilde{\rho}_{+-}(t),$ (8) where $\tilde{\rho}_{\alpha\beta}(t)=\langle\tilde{u}_{\alpha}(t)|\tilde{\rho}(t)|\tilde{u}_{\beta}(t)\rangle$ is the element of the density operator, $\Delta_{+-}=\tilde{\varepsilon}_{+}-\tilde{\varepsilon}_{-}$ is the difference of the two quasienergies, and $\Gamma_{{\rm s}}=\kappa\sum_{l}|x_{-+,l}^{(+)}|^{2}$, where $x^{(+)}_{\alpha\beta,l}$ is a time-averaged transition matrix element defined as follows: $x^{(\pm)}_{\alpha\beta,l}=\frac{1}{T}\int^{T}_{0}\langle\tilde{u}_{\alpha}(t)|\sigma_{\pm}|\tilde{u}_{\beta}(t)\rangle e^{-i2\pi lt/T}dt.$ (9) The relaxation rate $\Gamma_{{\rm rel}}$ and dephasing rate $\Gamma_{{\rm deph}}$ are given by $\displaystyle\Gamma_{{\rm rel}}$ $\displaystyle=$ $\displaystyle\kappa\sum_{l}(|x_{+-,l}^{(+)}|^{2}+|x_{-+,l}^{(+)}|^{2}),$ (10) $\displaystyle\Gamma_{{\rm deph}}$ $\displaystyle=$ $\displaystyle\frac{\kappa}{2}\sum_{l}(|x_{+-,l}^{(+)}|^{2}+|x_{-+,l}^{(+)}|^{2}+4|x_{++,l}^{(+)}|^{2}).$ (11) The analytical formal solutions in the Floquet picture can be easily found as follows: $\displaystyle\tilde{\rho}_{++}(t)$ $\displaystyle=$ $\displaystyle\tilde{\rho}_{++}(0)e^{-\Gamma_{{\rm rel}}t}+\tilde{\rho}_{++}^{{\rm ss}}(1-e^{-\Gamma_{{\rm rel}}t}),$ (12) $\displaystyle\tilde{\rho}_{+-}(t)$ $\displaystyle=$ $\displaystyle\tilde{\rho}_{+-}(0)e^{-(\Gamma_{{\rm deph}}+i\Delta_{+-})t},$ (13) where $\tilde{\rho}_{++}^{{\rm ss}}=\frac{\Gamma_{{\rm s}}}{\Gamma_{{\rm rel}}}=\frac{\sum_{l}|x_{-+,l}^{(+)}|^{2}}{\sum_{l}(|x_{+-,l}^{(+)}|^{2}+|x_{-+,l}^{(+)}|^{2})}$ (14) is the steady-state population of the Floquet state. These solutions together with the quantum regression theory enable us to derive a physically transparent spectrum function Yan _et al._ (2016a, 2018) $\displaystyle S(\Delta)$ $\displaystyle\propto$ $\displaystyle\sum_{l}\bigg{\\{}\pi|x_{++,l}^{(+)}|^{2}(\tilde{\rho}_{++}^{{\rm ss}}-\tilde{\rho}_{--}^{{\rm ss}})^{2}\delta(\Delta-l\omega_{z})$ $\displaystyle+4|x_{++,l}^{(+)}|^{2}\tilde{\rho}_{++}^{{\rm ss}}\tilde{\rho}_{--}^{{\rm ss}}\frac{\Gamma_{{\rm rel}}}{\Gamma_{{\rm rel}}^{2}+(\Delta-l\omega_{z})^{2}}$ $\displaystyle+|x_{+-,l}^{(+)}|^{2}\tilde{\rho}_{++}^{{\rm ss}}\frac{\Gamma_{{\rm deph}}}{\Gamma_{{\rm deph}}^{2}+(\Delta-l\omega_{z}-\Delta_{+-})^{2}}$ $\displaystyle+|x_{-+,l}^{(+)}|^{2}\tilde{\rho}_{--}^{{\rm ss}}\frac{\Gamma_{{\rm deph}}}{\Gamma_{{\rm deph}}^{2}+(\Delta-l\omega_{z}+\Delta_{+-})^{2}}\bigg{\\}},$ (15) It is evident that the accuracy of Eq. (15) is limited by the secular approximation when the transition matrix elements $x^{(+)}_{\alpha\beta,l}$ and quasienergies are exactly calculated. As is well-known, the secular approximation can be justified under the strong driving condition, i.e., $\Delta_{+-}\gg\kappa$. In general, we can calculate the quasienergies and transition matrix elements based on both analytical and numerical diagonalization (ND) of the Floquet Hamiltonian $\tilde{H}(t)-i\partial_{t}$ in the Sambe space Shirley (1965); Sambe (1973), yielding the analytical and semianalytical spectra, respectively.
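On a uniform time grid, the matrix elements (9) are discrete Fourier coefficients of $\langle\tilde{u}_{\alpha}(t)|\sigma_{+}|\tilde{u}_{\beta}(t)\rangle$, so the rates (10) and (11), the population (14), and the incoherent part of the spectrum (15) follow directly. Below is an illustrative sketch reusing `modes`, `quasi`, `N`, `sp`, and `kappa` from the previous snippets; the harmonic cutoff $|l|\leq 20$ is an assumption that is ample for the parameters considered here.

```python
def x_plus(alpha, beta, lmax=20):
    """Fourier coefficients x^(+)_{alpha beta, l} of Eq. (9), computed by FFT."""
    g = np.array([modes[k][:, alpha].conj() @ (sp @ modes[k][:, beta])
                  for k in range(N)])
    c = np.fft.fft(g)/N        # c[l] = (1/T) * int_0^T g(t) exp(-i l w_z t) dt
    return {l: c[l % N] for l in range(-lmax, lmax + 1)}

xpp = x_plus(1, 1)             # (alpha, beta) = (+, +)
xpm = x_plus(1, 0)             # (+, -)
xmp = x_plus(0, 1)             # (-, +)

S_pm = sum(abs(v)**2 for v in xpm.values())
S_mp = sum(abs(v)**2 for v in xmp.values())
S_pp = sum(abs(v)**2 for v in xpp.values())
G_rel = kappa*(S_pm + S_mp)                 # relaxation rate, Eq. (10)
G_deph = 0.5*kappa*(S_pm + S_mp + 4*S_pp)   # dephasing rate, Eq. (11)
rho_pp = S_mp/(S_pm + S_mp)                 # steady-state population, Eq. (14)
rho_mm = 1.0 - rho_pp
D = quasi[1] - quasi[0]                     # quasienergy splitting Delta_{+-}

# incoherent part of the spectrum, Eq. (15) without the delta-function term
Delta = np.linspace(-3*omega_z, 3*omega_z, 6001)
S = np.zeros_like(Delta)
for l, v in xpp.items():
    S += 4*abs(v)**2*rho_pp*rho_mm*G_rel/(G_rel**2 + (Delta - l*omega_z)**2)
for l, v in xpm.items():
    S += abs(v)**2*rho_pp*G_deph/(G_deph**2 + (Delta - l*omega_z - D)**2)
for l, v in xmp.items():
    S += abs(v)**2*rho_mm*G_deph/(G_deph**2 + (Delta - l*omega_z + D)**2)
```

When `modes` and `quasi` come from the numerical diagonalization, this route corresponds to the "semianalytical" spectrum referred to in the text.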
Next, we discuss the parity phenomenon of the Floquet states resulting from Eq. (5). We consider the behavior of the Floquet states under the generalized parity transformation ${\cal P}_{G}$, which is defined as ${\cal P}_{G}|\tilde{u}_{\alpha}(t)\rangle:=\sigma_{x}|\tilde{u}_{\alpha}(t+T/2)\rangle.$ (16) By differentiating $\sigma_{x}|\tilde{u}_{\alpha}(t+T/2)\rangle$ with respect to $t$, we readily obtain $\left[\sigma_{x}\tilde{H}\left(t+T/2\right)\sigma_{x}-i\partial_{t}\right]\sigma_{x}\left|\tilde{u}_{\alpha}\left(t+T/2\right)\right\rangle=\tilde{\varepsilon}_{\alpha}\sigma_{x}\left|\tilde{u}_{\alpha}\left(t+T/2\right)\right\rangle.$ (17) When $\delta+f(t)=-[\delta+f(t+T/2)]$, $\sigma_{x}|\tilde{u}_{\alpha}(t+T/2)\rangle$ satisfies the same differential equation as $|\tilde{u}_{\alpha}(t)\rangle$ because of Eq. (5). Recalling the uniqueness of solutions of differential equations, in such cases we must have $\sigma_{x}\left|\tilde{u}_{\alpha}\left(t+T/2\right)\right\rangle=\lambda_{\alpha}|\tilde{u}_{\alpha}(t)\rangle,$ (18) where $\lambda_{\alpha}$ is a constant. Furthermore, we have $\lambda_{\alpha}=\pm 1$ because of ${\cal P}_{G}^{2}|\tilde{u}_{\alpha}(t)\rangle=\lambda_{\alpha}^{2}|\tilde{u}_{\alpha}(t)\rangle=|\tilde{u}_{\alpha}(t)\rangle.$ Specifically, when $\delta+f(t)=-[\delta+f(t+T/2)]$, the Floquet states may be even or odd functions under the generalized parity transformation, which is referred to as the generalized parity of the Floquet states. The generalized parity has been previously investigated in other phenomena such as the coherent destruction of tunneling Grossmann _et al._ (1991) and the laser-induced electronic transport Lehmann _et al._ (2003). Clearly, if $\delta+f(t)\neq-\left[\delta+f\left(t+T/2\right)\right]$, Eq. (18) cannot hold as $\sigma_{x}\tilde{H}\left(t+T/2\right)\sigma_{x}\neq\tilde{H}(t)$, i.e., the effective Hamiltonian is no longer invariant under the generalized parity transformation. Consequently, the Floquet states also do not have the generalized parity. Figure 1: The incoherent components of the fluorescence spectrum for $p=3$, $\Omega_{x}=10\kappa$, $\delta=0$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$, and various phases. “Ana.” and “Num.” denote the analytical and the FL numerical results, respectively. We show that the symmetry of the spectrum may be a consequence of the generalized parity of the Floquet states. By using Eq. (18) and $x_{\alpha\beta,l}^{(+)}=\left[x_{\beta\alpha,-l}^{(-)}\right]^{\ast}$, it is straightforward to show the following identity for arbitrary integer $l$ from the definition (9) of the transition matrix element: $x_{\alpha\beta,l}^{(+)}=(-1)^{l}\lambda_{\alpha}\lambda_{\beta}\left[x_{\beta\alpha,-l}^{(+)}\right]^{\ast},$ (19) provided $\delta+f(t)=-[\delta+f(t+T/2)]$. It follows that $|x_{\alpha\beta,l}^{(+)}|=|x_{\beta\alpha,-l}^{(+)}|$ (20) also holds for any integer $l$. We emphasize that relation (20) can be deduced from relation (19); however, relation (19) cannot be derived from relation (20). With the relation (20), it is straightforward to show that the spectrum (15) is symmetric about $\Delta=0$ Yan _et al._ (2018). Specifically, since $|x_{++,l}^{(+)}|=|x_{++,-l}^{(+)}|$, the emission lines at $\Delta=\pm l\omega_{z}$ (the positions are symmetric about $\Delta=0$) have equal weights. Moreover, since $|x_{+-,l}^{(+)}|=|x_{-+,-l}^{(+)}|$, we also have $\tilde{\rho}_{++}^{{\rm ss}}=\tilde{\rho}_{--}^{{\rm ss}}$ according to Eq.
(14), leading to $|x_{+-,l}^{(+)}|^{2}\tilde{\rho}_{++}^{{\rm ss}}=|x_{-+,-l}^{(+)}|^{2}\tilde{\rho}_{--}^{{\rm ss}}$. That is to say, the emission lines at $\Delta=\pm(l\omega_{z}+\Delta_{+-})$ (the positions are symmetric about $\Delta=0$) have the same weights. It turns out that the symmetry of the spectrum fundamentally originates from the generalized parity of the Floquet states when $\delta+f(t)=-[\delta+f(t+T/2)]$. Conversely, one may expect that the symmetry of the spectrum may break when such a parity is absent. However, it is a formidable task to analytically prove that the spectrum is asymmetric in the absence of the generalized parity. Let us discuss what happens to the formal spectrum if $\delta+f(t)\neq-[\delta+f(t+T/2)]$. Under such a condition, the generalized parity is absent, and thus we cannot have the relation (19). In principle, the absence of the generalized parity will result in two possible situations. One is that the spectrum becomes asymmetric about $\Delta=0$ because the relation $|x^{(+)}_{\alpha\beta,l}|\neq|x^{(+)}_{\beta\alpha,-l}|$ can be derived at least for a certain $l$. The other is that the spectrum is symmetric because the equality $|x^{(+)}_{\alpha\beta,l}|=|x^{(+)}_{\beta\alpha,-l}|$ still holds for any $l$, originating from other kinds of identities between the transition matrix elements rather than the generalized-parity-induced identity (19). Apparently the first situation is more trivial than the second one. Most importantly, the present analysis suggests that the formal spectrum may be symmetric even without the generalized parity. Consequently, we cannot conclude from the formal spectrum (15) that the symmetry of the spectrum breaks as long as the generalized parity is absent. To end this section, we give some remarks on the above findings based on the formal spectrum. First, we find that the symmetry of the spectrum may result from the generalized parity and requires $\delta+f(t)=-[\delta+f(t+T/2)]$. This is consistent with the analysis above without the secular approximation. Moreover, the generalized parity is found to be an important underlying cause of the relation (20), which was numerically found in the harmonic modulation case Yan _et al._ (2018). It turns out here that the relation (20) can be established due to the generalized parity in the bi- and multi-harmonic cases. Second, without the generalized parity, namely, when $\delta+f(t)\neq-[\delta+f(t+T/2)]$, the formal spectrum can be either trivially asymmetric or nontrivially symmetric. In the latter case, the symmetry requires that relation (20) be established in the absence of the generalized-parity-induced identity (19). Third, the formal spectrum is derived with the secular approximation and thus the present analysis needs further verification. In what follows we consider a concrete biharmonic modulation to verify whether the generalized parity guarantees the symmetry of the spectrum when the secular approximation is not invoked. We also check whether the relation (20) can be established without the generalized parity and whether such relations lead to the symmetry of the spectrum without the secular approximation. Figure 2: The incoherent components of the fluorescence spectrum for $p=3$, $\delta=5\kappa$, $\Omega_{x}=10\kappa$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$, and various phases.
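Before turning to concrete modulations, we note that the relation (20) is easy to probe with the numerical machinery sketched above: for $p=3$ and $\delta=0$ the moduli $|x^{(+)}_{+-,l}|$ and $|x^{(+)}_{-+,-l}|$ coincide up to discretization error, while rerunning the same pipeline with $p=2$ and a generic phase yields finite differences. For instance, reusing the dictionaries `xpm` and `xmp` from the previous sketch:

```python
for l in range(-4, 5):
    lhs, rhs_ = abs(xpm[l]), abs(xmp[-l])   # rhs_ avoids shadowing rhs() above
    print(f"l={l:+d}:  |x(+-,l)| = {lhs:.8f}   |x(-+,-l)| = {rhs_:.8f}"
          f"   diff = {lhs - rhs_:+.1e}")
# repeating the calculation with p = 2, phi = 0 produces nonzero differences,
# signalling the broken generalized parity
```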
## III Verification of symmetry and asymmetry of the spectrum To calculate the fluorescence spectrum, without loss of generality, we mainly consider the biharmonic modulation in this work, namely, the modulation consists of two harmonics $f(t)=\Omega_{z}[\cos(\omega_{z}t)+r\cos(p\omega_{z}t+\phi)],$ (21) where $\Omega_{z}$ and $\omega_{z}=2\pi/T$ are the amplitude and fundamental frequency of the modulation, respectively, $p$ is a positive integer, $r$ is the ratio of the amplitude of the second harmonic to that of the first one, and $\phi$ is a relative phase. Since $\frac{1}{T}\int^{T}_{0}f(t)dt=0$, the condition for the presence of the generalized parity $\delta+f(t)=-[\delta+f(t+T/2)]$ is equivalent to $\delta=0$ and $f(t)=-f(t+T/2)$. The condition for the absence of the generalized parity $\delta+f(t)\neq-[\delta+f(t+T/2)]$ is simply divided into three cases: $\left\\{\begin{array}[]{c}\delta\neq 0\,{\rm and}\,f(t)=-f(t+T/2);\\\ \delta=0\,{\rm and}\,f(t)\neq-f(t+T/2);\\\ \delta\neq 0\,{\rm and}\,f(t)\neq-f(t+T/2).\end{array}\right.$ (22) It is noted that for the biharmonic modulation (21), both $f(t)=-f(t+T/2)$ and $f(t)\neq-f(t+T/2)$ can be realized by setting $p$ to odd and even numbers, respectively. To verify the above analysis, we calculate the numerically exact fluorescence spectrum from the master equation (2) with the FL formalism Ho _et al._ (1986); Yan _et al._ (2016b), which is compared with the analytical and semianalytical results from Eq. (15). The analytical and semianalytical results are obtained by using the transition matrix elements and quasienergies calculated with the Van Vleck perturbation theory and the ND of the Floquet Hamiltonian, respectively. The detailed analytical calculation is presented in Appendix B. In addition, we focus only on the incoherent components of the fluorescence spectrum, which is of interest in experiments. In principle, a similar analysis is applicable to the coherent components. In this work, we mainly consider the parameter regime $\omega_{z}\sim\Omega_{z}\gg\Omega_{x}\gg\kappa$, in which case both the Van Vleck perturbation theory (up to second order in $\Omega_{x}$) and the secular approximation can be justified. Importantly, this regime is experimentally accessible in artificial atoms, e.g., the transmon qubit Li _et al._ (2013). We should emphasize that if the perturbation theory is inapplicable, we can obtain the transition matrix elements and quasienergies by the ND of the Floquet Hamiltonian. We first verify whether the generalized parity guarantees the symmetry of the spectrum. In Fig. 1, we display the incoherent component of the fluorescence spectra obtained by the FL numerical method (solid line) and the analytical result (dashed line) for $p=3$, $\delta=0$, and various values of $\phi$. Clearly, the spectra are symmetric, as expected. The analytical results are in agreement with the FL results. These results also show that the spectrum depends weakly on the relative phase $\phi$. In addition, it is straightforward to verify that for other driving parameters, the spectrum is symmetric as well when $p$ is an odd number and $\delta=0$. In Appendix C, we show that when $\delta=0$ and $p$ is odd, the transition matrix elements indeed satisfy Eq. (19), which guarantees the symmetry of the spectrum. The present results suggest that the symmetry of the spectrum appears as long as $\delta=0$ and $f(t)=-f(t+T/2)$ and fundamentally originates from the generalized parity of the Floquet states in such a situation.
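The classification (22) rests on whether $f(t)=-f(t+T/2)$; for the biharmonic drive (21) this reduces to the parity of $p$, which can be confirmed in a few lines (illustrative check, with parameters as in the earlier snippets):

```python
tt = np.linspace(0.0, T, 1001)
for p_test in (2, 3):
    ft = Omega_z*(np.cos(omega_z*tt) + r*np.cos(p_test*omega_z*tt + phi))
    ft_shift = Omega_z*(np.cos(omega_z*(tt + T/2))
                        + r*np.cos(p_test*omega_z*(tt + T/2) + phi))
    print(f"p={p_test}: max|f(t)+f(t+T/2)| = {np.abs(ft + ft_shift).max():.3e}")
# the residual vanishes only for odd p, in agreement with Eq. (22)
```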
Figure 3: The incoherent components of the fluorescence spectrum for $p=2$, $\delta=0$, $\Omega_{x}=10\kappa$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$, and various phases. We move to examine whether the symmetry of the spectrum breaks when the generalized parity is absent, namely, under the conditions $\delta+f(t)\neq-[\delta+f(t+T/2)]$. We calculate the spectra with the parameters being the same as in Fig. 1 except for the detuning $\delta=5\kappa$, corresponding to the case of $\delta\neq 0$ and $f(t)=-f(t+T/2)$. In Fig. 2, the analytical and FL numerical spectra agree with each other and are found to be asymmetric for the finite detuning, indicating that in spite of $f(t)=-f(t+T/2)$, the asymmetry of the spectrum appears when $\delta\neq 0$. Let us consider the case of $\delta=0$ and $f(t)\neq-f(t+T/2)$ by setting $p$ to be even. We calculate the spectrum for $p=2$ and the other parameters being the same as in Fig. 1. Figure 3 displays that the analytical and numerical spectra are pronouncedly asymmetric even though $\delta=0$, except for $\phi=\pi/2$, in which case the analytical spectrum is found to be strictly symmetric (see discussion below) while the numerical spectrum is slightly asymmetric [in particular, the intensities of emission lines at $\Delta=\pm\omega_{z}$ are unequal as shown in Fig. 6(a)]. These results confirm that the formal spectrum (15) may be symmetric without the generalized parity of the Floquet states. However, the numerically exact spectrum is asymmetric in the absence of the generalized parity. This shows that the generalized parity plays an important role in determining the symmetry of the exact spectrum. We will further analyze such a discrepancy between the analytical and numerical results later. In addition, in contrast with $p=3$, the spectrum is found to depend strongly on the relative phase $\phi$ when $p=2$. Finally, we calculate the spectra for $\delta\neq 0$ and $f(t)\neq-f(t+T/2)$. Figure 4 shows the spectra obtained for the detuning $\delta=5\kappa$ and the other parameters being the same as in Fig. 3. The spectra are still asymmetric. In general, it is straightforward to verify the asymmetry of the spectrum under the condition that $\delta+f(t)\neq-[\delta+f(t+T/2)]$. All in all, it turns out that the symmetry of the spectrum breaks in the absence of the generalized parity. Conversely, we can say that the symmetry of the spectrum can be fully attributed to the presence of the generalized parity. In contrast to the previous studies, we ascribe the asymmetry to the breaking of the generalized parity rather than the unequal populations of dressed states Antón _et al._ (2017) or the breakdown of relation (20) Yan _et al._ (2018). Figure 4: The incoherent components of the fluorescence spectrum for $p=2$, $\delta=5\kappa$, $\Omega_{x}=10\kappa$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$, and various phases. Let us explore how the analytical spectrum becomes symmetric in the absence of the generalized parity of the Floquet states. To this end, we show that the relation (20) can originate from identities different from Eq. (19). Based on the results from the Van Vleck perturbation theory, we analytically derive the identities for the transition matrix elements in the case of vanishing detuning and even $p$. The derivations are given in Appendix C.
When $p$ is even, $\delta=0$, and $\phi=\left(1/2+n\right)\pi$ ($n=0,\pm 1,\pm 2,\ldots$), we find that the following relations hold for arbitrary integer $l$: $\displaystyle x^{(+)}_{++,-l}$ $\displaystyle=$ $\displaystyle(-1)^{l}x^{(+)}_{++,l},$ (23) $\displaystyle x^{(+)}_{-+,-l}$ $\displaystyle=$ $\displaystyle-(-1)^{l}e^{-i2\theta_{0}}x^{(+)}_{+-,l},$ (24) where $\theta_{0}$ is a phase defined in Eq. (86). Although the relations (23) and (24) are derived based on the perturbation theory, it is straightforward to show that they hold in the nonperturbative regimes. In Fig. 5, we calculate $x^{(+)}_{++,l}$ $(l=\pm 1,\pm 2)$ as $\Omega_{x}$ varies by using the analytical and ND methods. We see that the deviation between the analytical and numerical results becomes larger and larger as $\Omega_{x}$ increases, which is due to the breakdown of the perturbation calculation. Nevertheless, $x^{(+)}_{++,l}$ obtained by the ND method still satisfies Eq. (23). This suggests that the relations (23) and (24) are not limited to the perturbative regimes. More importantly, it follows from the identities (23) and (24) that $|x_{\alpha\beta,l}^{(+)}|=|x_{\beta\alpha,-l}^{(+)}|$, which leads to the symmetry of the formal spectrum (15). That is to say, without the generalized parity of the Floquet states, the relation (20) can also be established from other kinds of identities for the transition matrix elements instead of the generalized-parity-induced identity (19) under certain conditions. The discrepancy in the symmetry predicted by the analytical and numerical methods shown in Fig. 3(b) indicates that the relations (23) and (24) cannot guarantee the symmetry of the spectrum without the secular approximation. To further verify this, in Fig. 6, we use semianalytical and FL numerical methods to calculate the weights of the emission lines at $\Delta=\pm\omega_{z}$ with increasing $\Omega_{x}$ for $p=2$, $\delta=0$, and two values of $\phi$. It is evident that the weights calculated from the semianalytical method (solid and dashed lines) are the same while the weights from the numerical method (dot-dashed and dotted lines) are unequal, indicating that the semianalytical spectrum is symmetric but the numerical spectrum is not symmetric. The present results illustrate that, provided the relation (20) is established in the absence of the generalized parity, the secular approximation can induce artifact symmetry that vanishes if such an approximation is not invoked. Figure 5: Transition matrix elements $x^{(+)}_{++,l}$ versus driving strength $\Omega_{x}$, calculated from the analytical method and the numerical method based on the ND of the Floquet Hamiltonian for $p=2$, $\delta=0$, $\Omega_{z}=\omega_{z}=40\kappa$, $\phi=\pi/2$, and $r=1$. Apart from the biharmonic modulation, we find that the conditions for the symmetry and asymmetry of the spectrum, which are derived based on the generalized parity, are applicable to the simple harmonic and multiharmonic modulation cases. For the simple harmonic modulation $f(t)=\Omega_{z}\cos(\omega_{z}t)$, $f(t)=-f(t+T/2)$ is met. Therefore, the symmetry and asymmetry of the spectrum are uniquely controlled by the detuning $\delta$, which simply explains the detuning-dependent symmetry of the spectrum. Specifically, the spectrum is expected to be symmetric when $\delta=0$ and asymmetric when $\delta\neq 0$. This is consistent with the findings of previous studies Yan _et al._ (2016b); Antón _et al._ (2017); Yan _et al._ (2018).
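Relation (23) can be probed outside the perturbative regime in the same way: recompute the Floquet modes (second snippet) with $p=2$, $\delta=0$, and $\phi=\pi/2$, and compare the complex coefficients directly. A hypothetical driver, reusing `x_plus` from the earlier sketch:

```python
# after recomputing Us, quasi, and modes with p = 2, delta = 0, phi = np.pi/2:
xpp = x_plus(1, 1)
for l in range(1, 5):
    print(l, xpp[-l], (-1)**l*xpp[l])  # the two columns agree up to grid error
```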
For the multiharmonic modulation $f(t)=\sum_{p=1}^{N}\Omega_{z,p}\cos(p\omega_{z}t+\phi_{p})$, where $\Omega_{z,p}$ and $\phi_{p}$ are the amplitude and phase of the $p$th harmonic, respectively, either $f(t)=-f(t+T/2)$ or $f(t)\neq-f(t+T/2)$ can be met, similarly to the biharmonic case. We have calculated the spectrum with the FL and semianalytical methods for the cases of $N=3$, $N=4$, and $N=5$. The results (not shown here) further confirm that the symmetry and asymmetry of the spectrum fundamentally originate from the presence and absence of the generalized parity of the Floquet states, respectively. Figure 6: Weights of emission lines at $\Delta=\pm\omega_{z}$ versus driving strength $\Omega_{x}$, calculated from the semianalytical method and the FL method, for $p=2$, $\delta=0$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$, and two values of $\phi$. “Semiana.” denotes the semianalytical result. ## IV Conclusions In summary, we have studied the fundamental origin of the symmetry of the resonance fluorescence from the two-level system subjected to a periodic frequency modulation and a near-resonant high-frequency monochromatic excitation by using both analytical and numerical methods based on the Floquet theory. In such a driven two-level system, we have found that the generalized parity of Floquet states plays a fundamental role in the symmetry of the spectrum. Specifically, the generalized parity guarantees the symmetry of the spectrum. On the other hand, when the generalized parity is broken, the spectrum becomes asymmetric. This has been illustrated in the context of the biharmonic modulation, the parameters of which can be tuned to induce or break the generalized parity. For the biharmonic modulation, we find that when $\delta=0$ and $f(t)=-f(t+T/2)$, the generalized parity exists and the spectrum is symmetric. When $\delta+f(t)\neq-[\delta+f(t+T/2)]$, the generalized parity is broken and the spectrum is found to be asymmetric. Interestingly, we can obtain a pronouncedly asymmetric spectrum by requiring the modulation $f(t)\neq-f(t+T/2)$ even though $\delta=0$. Moreover, these conditions for the symmetry and asymmetry of the spectrum are found to be applicable to the simple harmonic and multiharmonic modulation cases. In addition, we illustrated that, under certain conditions, the secular approximation may induce artifact symmetry that vanishes if the secular approximation is avoided. The present study gives deep insight into the origin of the symmetry of the spectrum and reveals a simple relation between the symmetry of the spectrum and the generalized parity of the Floquet states. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Grants No. 11647082, No. 11774311, No. 11774226, and No. 11874260). ## Appendix A Derivation of symmetry of the spectrum without the secular approximation The master equation can be rewritten in matrix form $\frac{d}{dt}\vec{\tilde{\rho}}(t)={\cal L}(t)\vec{\tilde{\rho}}(t).$ (25) Here the vector is defined as $\vec{\tilde{\rho}}(t)=(\langle\tilde{\sigma}_{+}(t)\rangle,\langle\tilde{\sigma}_{-}(t)\rangle,\langle\tilde{\pi}_{+}(t)\rangle,\langle\tilde{\pi}_{-}(t)\rangle)^{{\rm T}},$ (26) where $\pi_{\pm}=(1\pm\sigma_{z})/2$ and $\langle\tilde{\hat{o}}(t)\rangle\equiv{\rm Tr}[\hat{o}\tilde{\rho}(t)]$.
The superoperator ${\cal L}(t)$ in the Liouville space spanned by the matrix bases $\\{\sigma_{\pm},\pi_{\pm}\\}$ is given by ${\cal L}(t)=\left(\begin{array}[]{cccc}i[\delta+f(t)]-\frac{\kappa}{2}&0&-\frac{i\Omega_{x}}{2}&\frac{i\Omega_{x}}{2}\\\ 0&-i[\delta+f(t)]-\frac{\kappa}{2}&\frac{i\Omega_{x}}{2}&\frac{-i\Omega_{x}}{2}\\\ \frac{-i\Omega_{x}}{2}&\frac{i\Omega_{x}}{2}&-\kappa&0\\\ \frac{i\Omega_{x}}{2}&\frac{-i\Omega_{x}}{2}&\kappa&0\end{array}\right).$ (27) If $\delta+f(t)=-[\delta+f(t+T/2)]$, in which case the Hamiltonian is invariant under the generalized parity transformation, one readily finds that ${\cal T}{\cal L}(t+T/2){\cal T}={\cal L}(t),$ (28) where the transformation matrix is given by ${\cal T}=\left(\begin{array}[]{cccc}0&1&0&0\\\ 1&0&0&0\\\ 0&0&-1&0\\\ 0&0&0&-1\end{array}\right),$ (29) and ${\cal T}^{2}=I$ with $I$ being the identity matrix. Similarly to the Hamiltonian, the matrix ${\cal L}(t)$ is invariant under the transformation defined in Eq. (28), which can be regarded as the generalized parity transformation in the Liouville space, similarly to that defined in Eq. (16) of the main text. Let us derive the specific property of the steady state in the long-time limit [as $\det{\cal L}(t)=0$, there exists a nontrivial steady state]. It follows from Eq. (25) that $\frac{d}{dt}\vec{\tilde{\rho}}(t+T/2)={\cal L}(t+T/2)\vec{\tilde{\rho}}(t+T/2),$ (30) which leads to $\displaystyle\frac{d}{dt}{\cal T}\vec{\tilde{\rho}}(t+T/2)$ $\displaystyle=$ $\displaystyle{\cal T}{\cal L}(t+T/2){\cal T}{\cal T}\vec{\tilde{\rho}}(t+T/2)={\cal L}(t){\cal T}\vec{\tilde{\rho}}(t+T/2),$ (31) which means that ${\cal T}\vec{\tilde{\rho}}(t+T/2)=c\vec{\tilde{\rho}}(t)$, owing to the uniqueness of solutions of the differential equation. On using the fact that $\vec{\tilde{\rho}}(t)=\vec{\tilde{\rho}}(t+T)$ as $t\rightarrow\infty$ because of ${\cal L}(t)={\cal L}(t+T)$, we find that $c$ may be either $+1$ or $-1$. It is easy to prove by contradiction that $c=-1$. Suppose that $c=1$, yielding $\langle\tilde{\pi}_{+}(t+T/2)\rangle=-\langle\tilde{\pi}_{+}(t)\rangle$. However, if one considers $\delta+f(t)=0$ in which case ${\cal L}(t)$ is time independent while Eq. (28) still holds, the steady state becomes time independent and one gets $\langle\tilde{\pi}_{+}(t)\rangle=\langle\tilde{\pi}_{+}(t+T/2)\rangle$. By contradiction, one finds that $c=-1$. Consequently, in the steady-state limit, we have ${\cal T}\vec{\tilde{\rho}}(t+T/2)=-\vec{\tilde{\rho}}(t)\quad(t\rightarrow\infty).$ (32) Next, let us derive the property of the principal matrix solution $\Pi(t,t^{\prime})$ of the master equation, which solves the differential equation $\frac{d}{dt}\Pi(t,t^{\prime})={\cal L}(t)\Pi(t,t^{\prime}),$ (33) with the initial condition $\Pi(t^{\prime},t^{\prime})=I$. It is straightforward to show that $\displaystyle\frac{d}{dt}{\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}$ $\displaystyle=$ $\displaystyle{\cal T}{\cal L}(t+T/2){\cal T}{\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}={\cal L}(t){\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T},$ (34) namely, ${\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}$ satisfies the same differential equation and the same initial condition as $\Pi(t,t^{\prime})$. 
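The invariance (28) can be verified numerically by constructing the matrix (27) explicitly. A short illustrative check (with $f(t)$ and the parameters of the earlier snippets, taking $\delta=0$ and odd $p$ so that the generalized parity is present):

```python
def Lmat(t):  # superoperator matrix of Eq. (27) in the basis {s+, s-, pi+, pi-}
    d = delta + f(t)
    O = 0.5j*Omega_x
    return np.array([
        [1j*d - kappa/2, 0.0,             -O,     O  ],
        [0.0,            -1j*d - kappa/2,  O,    -O  ],
        [-O,              O,             -kappa, 0.0],
        [O,              -O,              kappa, 0.0]], dtype=complex)

Tm = np.array([[0, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, -1, 0],
               [0, 0, 0, -1]], dtype=float)   # transformation matrix of Eq. (29)

err = max(np.abs(Tm @ Lmat(t + T/2) @ Tm - Lmat(t)).max()
          for t in np.linspace(0.0, T, 13))
print("max deviation from Eq. (28):", err)    # ~1e-13 for delta = 0 and odd p
```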
As a result of Eq. (34), we simply have ${\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}=\Pi(t,t^{\prime}).$ (35) According to the quantum regression theory Mollow (1969), the two-time correlation functions $\vec{\tilde{g}}(t,t^{\prime})=(\langle\tilde{\sigma}_{+}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle,\langle\tilde{\sigma}_{-}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle,\langle\tilde{\pi}_{+}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle,\langle\tilde{\pi}_{-}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle)^{{\rm T}}$ (36) satisfy the same equation as $\vec{\tilde{\rho}}(t)$, albeit with a different initial condition $\vec{\tilde{g}}(t^{\prime},t^{\prime})=(\langle\tilde{\pi}_{+}(t^{\prime})\rangle,0,0,\langle\tilde{\sigma}_{-}(t^{\prime})\rangle)^{{\rm T}}.$ (37) Similarly, another set of two-time correlation functions $\vec{\tilde{G}}(t,t^{\prime})=(\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\sigma}_{+}(t)\rangle,\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\sigma}_{-}(t)\rangle,\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\pi}_{+}(t)\rangle,\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\pi}_{-}(t)\rangle)^{{\rm T}}$ (38) also satisfy the same differential equation as $\vec{\tilde{g}}(t,t^{\prime})$ but with the initial condition $\vec{\tilde{G}}(t^{\prime},t^{\prime})=(0,\langle\tilde{\pi}_{+}(t^{\prime})\rangle,0,\langle\tilde{\sigma}_{+}(t^{\prime})\rangle)^{{\rm T}}.$ (39) Using Eq. (32), we have ${\cal T}\vec{\tilde{g}}(t^{\prime},t^{\prime})=\left(\begin{array}[]{c}0\\\ \langle\tilde{\pi}_{+}(t^{\prime})\rangle\\\ 0\\\ -\langle\tilde{\sigma}_{-}(t^{\prime})\rangle\end{array}\right)=\left(\begin{array}[]{c}0\\\ \langle\tilde{\pi}_{+}(t^{\prime}+T/2)\rangle\\\ 0\\\ \langle\tilde{\sigma}_{+}(t^{\prime}+T/2)\rangle\end{array}\right)=\vec{\tilde{G}}\left(t^{\prime}+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)\quad(t^{\prime}\rightarrow\infty).$ (40) In the steady-state limit, the correlation functions are found to have the following relation $\displaystyle\vec{\tilde{g}}(t,t^{\prime})$ $\displaystyle=$ $\displaystyle\Pi(t,t^{\prime})\vec{\tilde{g}}(t^{\prime},t^{\prime})$ (41) $\displaystyle=$ $\displaystyle{\cal T}\Pi\left(t+\frac{T}{2},t^{\prime}+\frac{T}{2}\right){\cal T}\vec{\tilde{g}}(t^{\prime},t^{\prime})$ $\displaystyle=$ $\displaystyle{\cal T}\Pi\left(t+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)\vec{\tilde{G}}\left(t^{\prime}+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)$ $\displaystyle=$ $\displaystyle{\cal T}\vec{\tilde{G}}\left(t+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)\quad(t^{\prime}\rightarrow\infty).$ It follows that as $t^{\prime}\rightarrow\infty$, $\displaystyle\langle\tilde{\sigma}_{+}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle$ $\displaystyle=$ $\displaystyle\langle\tilde{\sigma}_{+}(t^{\prime}+T/2)\tilde{\sigma}_{-}(t+T/2)\rangle$ (42) $\displaystyle=$ $\displaystyle\langle\tilde{\sigma}_{+}(t+T/2)\tilde{\sigma}_{-}(t^{\prime}+T/2)\rangle^{\ast}.$ In the steady-state limit, the first-order correlation function depends explicitly on time $t^{\prime}$; however, the $t^{\prime}$ dependence can be eliminated by setting $t=\tau+t^{\prime}$ and integrating over $t^{\prime}$ (because the contributions of $t^{\prime}$-dependent terms are negligible for a long-time observation), yielding the $\tau$-dependent first-order correlation function $\displaystyle\bar{\tilde{g}}_{1}(\tau)$ $\displaystyle\equiv$ $\displaystyle\frac{1}{T}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle dt^{\prime}$ (43) $\displaystyle=$
$\displaystyle\frac{1}{T}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime}+T/2)\tilde{\sigma}_{-}(t^{\prime}+T/2)\rangle^{\ast}dt^{\prime}$ $\displaystyle=$ $\displaystyle\frac{1}{T}\int_{T/2}^{T+T/2}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle^{\ast}dt^{\prime}$ $\displaystyle=$ $\displaystyle\frac{1}{T}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle^{\ast}dt^{\prime}$ $\displaystyle=$ $\displaystyle\bar{\tilde{g}}_{1}^{\ast}(\tau),$ where we used relation (42) and the fact that $\langle\tilde{\sigma}_{+}(\tau+t^{\prime}+T)\tilde{\sigma}_{-}(t^{\prime}+T)\rangle^{\ast}=\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle^{\ast}$ as $t^{\prime}\rightarrow\infty$. This means that the generalized parity guarantees that the correlation function is a real-valued function of $\tau$ in the rotating frame and thus results in the symmetry of the spectrum when $\delta+f(t)=-[\delta+f(t+T/2)]$. This is consistent with the prediction from the spectrum (15). In general, it is a formidable task to show that the spectrum is asymmetric when $\delta+f(t)\neq-[\delta+f(t+T/2)]$ with or without the secular approximation. Nevertheless, from the above derivation, one readily notes that the generalized parity plays an important role in determining the symmetry of the spectrum. Consequently, if such a parity breaks, it is not difficult to imagine that the symmetry of the spectrum also breaks trivially if there is no other symmetry-inducing mechanism. ## Appendix B Analytical calculation of quasienergies and transition matrix elements in the biharmonic modulation case We use the Van Vleck perturbation theory Cohen-Tannoudji _et al._ (1998); Hausinger and Grifoni (2010) to analytically calculate the quasienergies and transition matrix elements $x_{\alpha\beta,l}^{(+)}$ for the biharmonic modulation, which leads to the analytical fluorescence spectrum. Since we are interested in the regime of $\Omega_{z},\,\omega_{z}\gg\Omega_{x}$, which is accessible in experiments Li _et al._ (2013), we use $\Omega_{x}$ as the perturbation parameter. We first transform Eq.
(6) with the unitary transformation $e^{S(t)}[\tilde{H}(t)-i\partial_{t}]e^{-S(t)}e^{S(t)}|\tilde{u}_{\alpha}(t)\rangle=\tilde{\varepsilon}_{\alpha}e^{S(t)}|\tilde{u}_{\alpha}(t)\rangle,$ (44) where $S(t)=i\frac{\Omega_{z}}{2\omega_{z}}\left\\{\sin(\omega_{z}t)+\frac{r}{p}[\sin(p\omega_{z}t+\phi)-\sin\phi]\right\\}\sigma_{z}.$ (45) We can define the transformed Floquet states and transformed Hamiltonian as follows: $|u_{\alpha}^{\prime}(t)\rangle=e^{S(t)}|\tilde{u}_{\alpha}(t)\rangle,$ (46) $\displaystyle H^{\prime}(t)$ $\displaystyle=$ $\displaystyle e^{S(t)}[\tilde{H}(t)-i\partial_{t}]e^{-S(t)}$ (47) $\displaystyle=$ $\displaystyle\frac{1}{2}\delta\sigma_{z}+\frac{1}{2}\sum_{l}(f_{l}\sigma_{+}+f_{-l}^{\ast}\sigma_{-})e^{il\omega_{z}t},$ where $f_{l}=\Omega_{x}F_{l},$ (48) and $\displaystyle F_{l}$ $\displaystyle=$ $\displaystyle\frac{1}{T}\int_{0}^{T}e^{i\frac{\Omega_{z}}{\omega_{z}}\left\\{\sin(\omega_{z}t)+\frac{r}{p}[\sin(p\omega_{z}t+\phi)-\sin\phi]\right\\}-il\omega_{z}t}dt$ (49) $\displaystyle=$ $\displaystyle e^{-i\Theta}\sum_{k}J_{k}\left(\frac{r\Omega_{z}}{p\omega_{z}}\right)J_{l-kp}\left(\frac{\Omega_{z}}{\omega_{z}}\right)e^{ik\phi},$ with $\Theta=\frac{r\Omega_{z}}{p\omega_{z}}\sin\phi$ and $J_{k}(z)$ being the Bessel function of the first kind. To proceed, we introduce an extended Hilbert space in which the time-dependent Floquet Hamiltonian $H^{\prime}(t)-i\partial_{t}$ becomes time independent Sambe (1973). One readily introduces the Fourier basis $|l\rangle\equiv\exp(il\omega_{z}t)$ and the inner product $\langle l|n\rangle\equiv\frac{1}{T}\int_{0}^{T}\exp[i(n-l)\omega_{z}t]dt=\delta_{l,n}$, where $\delta_{l,n}$ is the Kronecker delta function. Denoting $|\uparrow\rangle$ and $|\downarrow\rangle$ as the eigenstates for $\sigma_{z}$ with the eigenvalues $+1$ and $-1$, respectively, one gets the composite bases $|\uparrow(\downarrow),l\rangle=|\uparrow(\downarrow)\rangle\otimes|l\rangle$. In the extended Hilbert space spanned by such bases, we can obtain the explicit form of the Floquet Hamiltonian, which is written as $\displaystyle H_{{\cal F}}^{\prime}$ $\displaystyle=$ $\displaystyle H^{\prime}(t)-i\partial_{t}$ (50) $\displaystyle=$ $\displaystyle\frac{1}{2}\delta\sigma_{z}+\sum_{n}n\omega_{z}|n\rangle\langle n|+\frac{1}{2}\sum_{n,l}(f_{l}\sigma_{+}+f_{-l}^{\ast}\sigma_{-})$ $\displaystyle\otimes|n+l\rangle\langle n|.$ The Floquet Hamiltonian is of infinite size and is difficult to diagonalize exactly in an analytical calculation. To carry out the perturbation calculation, we transform the Floquet Hamiltonian with a further unitary transformation with the Hermitian generator $K$, leading to $\displaystyle H_{{\cal F}}^{\prime\prime}$ $\displaystyle=$ $\displaystyle e^{iK}H_{{\cal F}}^{\prime}e^{-iK}$ (51) $\displaystyle=$ $\displaystyle H_{\cal F}^{\prime}+[iK,H_{{\cal F}}^{\prime}]+\frac{1}{2!}[iK,[iK,H_{{\cal F}}^{\prime}]]+\ldots,$ where the explicit form of $K$ is to be determined by requiring $H_{{\cal F}}^{\prime\prime}$ to be block diagonal. The generator is expanded as $K=K^{(1)}+K^{(2)}+K^{(3)}+\ldots,$ (52) where the superscripts indicate the orders in the perturbation. We use $H_{0}=\frac{1}{2}\delta\sigma_{z}+\sum_{n}n\omega_{z}|n\rangle\langle n|$ and $V=\frac{1}{2}\sum_{n,l}(f_{l}\sigma_{+}+f_{-l}^{\ast}\sigma_{-})\otimes|n+l\rangle\langle n|$ as the dominant and perturbation components, respectively.
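The coefficients (49) admit a direct numerical cross-check: the Bessel series can be compared against quadrature of the defining integral, and the symmetry of $F_{-l}$ exploited later in Appendix C can be inspected at the same time. An illustrative sketch with SciPy; the series truncation $|k|\leq 40$ is an assumption that is safe for the parameters considered here.

```python
from scipy.special import jv

a, b = Omega_z/omega_z, r*Omega_z/(p*omega_z)
Theta = b*np.sin(phi)

def F_series(l, kmax=40):   # Bessel-series form of Eq. (49)
    k = np.arange(-kmax, kmax + 1)
    return np.exp(-1j*Theta)*np.sum(jv(k, b)*jv(l - k*p, a)*np.exp(1j*k*phi))

def F_quad(l, n=20000):     # direct quadrature of the defining integral
    t = (np.arange(n) + 0.5)*T/n
    ph = a*np.sin(omega_z*t) + b*(np.sin(p*omega_z*t + phi) - np.sin(phi))
    return np.mean(np.exp(1j*ph - 1j*l*omega_z*t))

for l in range(-3, 4):
    print(l, F_series(l), F_quad(l))   # agreement to near machine precision

# symmetry used in Appendix C: for odd p, F_{-l} = (-1)^l exp(-2i*Theta) F_l^*
l = 2
print(F_series(-l), (-1)**l*np.exp(-2j*Theta)*np.conj(F_series(l)))
```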
Up to the second order in $\Omega_{x}$, we have $\displaystyle H_{{\cal F}}^{\prime\prime}$ $\displaystyle\simeq$ $\displaystyle H_{0}+V+[iK^{(1)},H_{0}]+[iK^{(1)},V]+[iK^{(2)},H_{0}]$ (53) $\displaystyle+\frac{1}{2}[iK^{(1)},[iK^{(1)},H_{0}]].$ Next, we discuss under which condition the transformed Hamiltonian may reasonably be block diagonal. For the dominant component $H_{0}$, we simply have $H_{0}|\uparrow(\downarrow),n\rangle=[+(-)\delta/2+n\omega_{z}]|\uparrow(\downarrow),n\rangle\equiv\tilde{\varepsilon}^{(0)}_{+(-),n}|\uparrow(\downarrow),n\rangle$. Provided that $\tilde{\varepsilon}^{(0)}_{+,n}-\tilde{\varepsilon}^{(0)}_{-,n+m}=\delta-m\omega_{z}\approx 0$, we have a subspace spanned by two almost degenerate unperturbed states $|\uparrow,n\rangle$ and $|\downarrow,n+m\rangle$, where $n$ is an arbitrary integer and $m$ is the integer nearest to $\delta/\omega_{z}$. The projection onto such a subspace is realized by the operator: $\Pi_{n}=|\uparrow,n\rangle\langle\uparrow,n|+|\downarrow,n+m\rangle\langle\downarrow,n+m|.$ (54) The eigenvalues of the dominant component $H_{0}$ in the $n$th subspace are well-separated from those in the $(n+l)$th subspace as long as $|l\omega_{z}|\gg|\delta-m\omega_{z}|$ for any $l\neq 0$. Moreover, if we assume that $|\langle\uparrow,n|V|\downarrow,n+l+m\rangle|\ll|\tilde{\varepsilon}^{(0)}_{+,n}-\tilde{\varepsilon}^{(0)}_{-,n+l+m}|,$ (55) which is simply $|f_{-l-m}/2|\ll|l\omega_{z}|$, the transitions between the states in the different subspaces can be neglected up to a certain order in the perturbation Cohen-Tannoudji _et al._ (1998), yielding the following condition $\Pi_{n}H_{{\cal F}}^{\prime\prime}\Pi_{l}=0,$ (56) for $n\neq l$. Therefore, $H_{{\cal F}}^{\prime\prime}$ is block diagonal. The second condition that $K$ cannot have matrix elements inside each subspace of two almost degenerate states is also assumed, i.e., $\Pi_{n}K\Pi_{n}=0.$ (57) The generator can now be fully determined via Eqs. (56) and (57). The nonvanishing elements of $K^{(1)}$ and $K^{(2)}$ are given by $\langle\uparrow,n|iK^{(1)}|\downarrow,l\rangle=\frac{1}{2}\frac{f_{n-l}}{\delta+(n-l)\omega_{z}},$ (58) $\langle\downarrow,l|iK^{(1)}|\uparrow,n\rangle=-\frac{1}{2}\frac{f_{n-l}^{\ast}}{\delta+(n-l)\omega_{z}},$ (59) for $n-l\neq-m$, and $\displaystyle\langle\uparrow,n|iK^{(2)}|\uparrow,l\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{4(n-l)\omega_{z}}\left\\{\sum_{k\neq n+m,l+m}\frac{f_{n-k}f_{l-k}^{\ast}}{2}\left[\frac{1}{\delta+(n-k)\omega_{z}}+\frac{1}{\delta+(l-k)\omega_{z}}\right]\right.$ (60) $\displaystyle\left.+\frac{f_{l-n-m}^{\ast}f_{-m}}{\delta+(l-n-m)\omega_{z}}+\frac{f_{n-l-m}f_{-m}^{\ast}}{\delta+(n-l-m)\omega_{z}}\right\\},$ $\displaystyle\langle\downarrow,n|iK^{(2)}|\downarrow,l\rangle$ $\displaystyle=$ $\displaystyle-\frac{1}{4(n-l)\omega_{z}}\left\\{\sum_{k\neq l-m,n-m}\frac{f_{k-n}^{\ast}f_{k-l}}{2}\left[\frac{1}{\delta+(k-n)\omega_{z}}+\frac{1}{\delta+(k-l)\omega_{z}}\right]\right.$ (61) $\displaystyle+\left.\frac{f_{l-n-m}^{\ast}f_{-m}}{\delta+(l-n-m)\omega_{z}}+\frac{f_{n-l-m}f_{-m}^{\ast}}{\delta+(n-l-m)\omega_{z}}\right\\},$ for $n\neq l$. The remaining elements of $K^{(1)}$ and $K^{(2)}$ vanish.
The transformed Hamiltonian has the $2\times 2$ submatrix $H_{{\cal F}}^{\prime\prime(n)}$ on the diagonal, which reads Cohen-Tannoudji _et al._ (1998) $\displaystyle H_{{\cal F}}^{\prime\prime(n)}$ $\displaystyle=$ $\displaystyle H_{0}\Pi_{n}+\Pi_{n}V\Pi_{n}+\frac{1}{2}\Pi_{n}[iK^{(1)},V]\Pi_{n}$ (64) $\displaystyle=$ $\displaystyle\left(\begin{array}[]{cc}\frac{\delta}{2}+n\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{4(\delta+j\omega_{z})}&\frac{f_{-m}}{2}\\\ \frac{f_{-m}^{\ast}}{2}&-\frac{\delta}{2}+(n+m)\omega_{z}-\sum_{j\neq-m}\frac{|f_{j}|^{2}}{4(\delta+j\omega_{z})}\end{array}\right).$ One can diagonalize the submatrix $H_{{\cal F}}^{\prime\prime(n)}$ analytically. Its eigenvalues (quasienergies) are $\tilde{\varepsilon}_{\pm,n}=\frac{1}{2}\left(m\omega_{z}\pm\Omega_{m}\right)+n\omega_{z},$ (65) where $\Omega_{m}=\sqrt{\left[\delta-m\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{2(\delta+j\omega_{z})}\right]^{2}+|f_{-m}|^{2}}.$ (66) The eigenvectors are given by $\displaystyle|\Psi_{+,n}^{\prime\prime}\rangle$ $\displaystyle=$ $\displaystyle u|\uparrow,n\rangle+v|\downarrow,n+m\rangle,$ (67) $\displaystyle|\Psi_{-,n}^{\prime\prime}\rangle$ $\displaystyle=$ $\displaystyle v|\uparrow,n\rangle-u^{\ast}|\downarrow,n+m\rangle,$ (68) with $\displaystyle u$ $\displaystyle=$ $\displaystyle\frac{f_{-m}}{|f_{-m}|}\sqrt{\frac{1}{2}\left[1+\frac{1}{\Omega_{m}}\left(\delta-m\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{2(\delta+j\omega_{z})}\right)\right]},$ (69) $\displaystyle v$ $\displaystyle=$ $\displaystyle\sqrt{\frac{1}{2}\left[1-\frac{1}{\Omega_{m}}\left(\delta-m\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{2(\delta+j\omega_{z})}\right)\right]}.$ (70) The eigenvectors for $H_{{\cal F}}^{\prime}$ can be derived as follows: $|\Psi_{\pm,n}^{\prime}\rangle=e^{-iK}|\Psi_{\pm,n}^{\prime\prime}\rangle\simeq\left(1-iK^{(1)}-iK^{(2)}+\frac{1}{2!}iK^{(1)}iK^{(1)}\right)|\Psi_{\pm,n}^{\prime\prime}\rangle.$ (71) It is straightforward to derive the explicit form of the eigenvectors, which reads $\displaystyle|\Psi_{+,n}^{\prime}\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{{\cal{\cal N}}}\left\\{uB|\uparrow,n\rangle-\sum_{j\neq 0}P_{j}|\uparrow,n+j\rangle+vB|\downarrow,n+m\rangle+\sum_{j\neq 0}Q_{j}|\downarrow,n+m+j\rangle\right\\},$ (72) $\displaystyle|\Psi_{-,n}^{\prime}\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{{\cal{\cal N}}}\left\\{vB|\uparrow,n\rangle+\sum_{j\neq 0}Q_{-j}^{\ast}|\uparrow,n+j\rangle-u^{\ast}B|\downarrow,n+m\rangle+\sum_{j\neq 0}P_{-j}^{\ast}|\downarrow,n+m+j\rangle\right\\},$ (73) where $B=1-\frac{1}{8}\sum_{l\neq-m}\frac{|f_{l}|^{2}}{(\delta+l\omega_{z})^{2}},$ (74) $\displaystyle P_{j}$ $\displaystyle=$ $\displaystyle\frac{f_{j-m}}{2[\delta+(j-m)\omega_{z}]}\left(v+\frac{uf_{-m}^{\ast}}{2j\omega_{z}}\right)+\frac{u}{4j\omega_{z}}\sum_{k\neq-m}\frac{f_{k+j}f_{k}^{\ast}}{\delta+k\omega_{z}},$ (75) $\displaystyle Q_{j}$ $\displaystyle=$ $\displaystyle\frac{f_{-j-m}^{\ast}}{2[\delta-(j+m)\omega_{z}]}\left(u+\frac{vf_{-m}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq-m}\frac{f_{k-j}^{\ast}f_{k}}{\delta+k\omega_{z}},$ (76) and ${\cal N}=\sqrt{B^{2}+\sum_{j\neq 0}(|P_{j}|^{2}+|Q_{j}|^{2})}$ is the normalization factor. The Floquet states $|u_{\alpha,n}^{\prime}(t)\rangle$ with the quasienergy $\tilde{\varepsilon}_{\alpha,n}$ can be derived from $|\Psi_{\alpha,n}^{\prime}\rangle$ by replacing $|n\rangle$ with $e^{in\omega_{z}t}$.
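As a consistency check on the perturbative results, the splitting (66) can be compared with the exact quasienergy difference obtained from the monodromy matrix (second snippet). For $\delta=0$ (hence $m=0$), and assuming `F_series` from the previous sketch and `quasi` from the monodromy calculation, a sketch reads as follows (the comparison assumes the exact splitting is not folded across the Brillouin-zone edge, which holds for these parameters):

```python
ls = [l for l in range(-40, 41) if l != 0]
shift = sum(abs(Omega_x*F_series(l))**2/(2*l*omega_z) for l in ls)
Omega_m = np.sqrt(shift**2 + abs(Omega_x*F_series(0))**2)   # Eq. (66) at delta = 0
print("Van Vleck splitting:", Omega_m)
print("exact splitting    :", quasi[1] - quasi[0])  # close for w_z ~ W_z >> W_x
```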
With the above results at hand, we can analytically calculate the transition matrix element $\displaystyle x_{\alpha\beta,l}^{(+)}$ $\displaystyle=$ $\displaystyle\frac{1}{T}\int_{0}^{T}\langle\tilde{u}_{\alpha}(t)|\sigma_{+}|\tilde{u}_{\beta}(t)\rangle e^{-il\omega_{z}t}dt=\frac{1}{T}\int_{0}^{T}\langle u_{\alpha}^{\prime}(t)|e^{S(t)}\sigma_{+}e^{-S(t)}|u_{\beta}^{\prime}(t)\rangle e^{-il\omega_{z}t}dt$ (77) $\displaystyle=$ $\displaystyle\sum_{n}\frac{1}{T}\int_{0}^{T}F_{n}\langle u_{\alpha}^{\prime}(t)|\sigma_{+}|u_{\beta}^{\prime}(t)\rangle e^{i(n-l)\omega_{z}t}dt=\sum_{n}F_{n+l}\langle\Psi_{\alpha,0}^{\prime}|\sigma_{+}|\Psi_{\beta,n}^{\prime}\rangle,$ and $\displaystyle\langle\Psi_{+,0}^{\prime}|\sigma_{+}|\Psi_{+,n}^{\prime}\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{{\cal N}^{2}}\left\\{u^{\ast}vB^{2}\delta_{n,-m}-\sum_{j\neq 0,n+m}P_{j}^{\ast}Q_{j-n-m}+(u^{\ast}Q_{-n-m}-vP_{n+m}^{\ast})B(1-\delta_{n,-m})\right\\},$ (78) $\displaystyle\langle\Psi_{+,0}^{\prime}|\sigma_{+}|\Psi_{-,n}^{\prime}\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{{\cal N}^{2}}\left\\{-(u^{\ast})^{2}B^{2}\delta_{n,-m}-\sum_{j\neq 0,n+m}P_{j}^{\ast}P_{n+m-j}^{\ast}+2u^{\ast}P_{n+m}^{\ast}B(1-\delta_{n,-m})\right\\},$ (79) $\displaystyle\langle\Psi_{-,0}^{\prime}|\sigma_{+}|\Psi_{+,n}^{\prime}\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{{\cal N}^{2}}\left\\{v^{2}B^{2}\delta_{n,-m}+\sum_{j\neq 0,n+m}Q_{-j}Q_{j-n-m}+2vQ_{-n-m}B(1-\delta_{n,-m})\right\\},$ (80) $\displaystyle\langle\Psi_{-,0}^{\prime}|\sigma_{+}|\Psi_{-,n}^{\prime}\rangle$ $\displaystyle=$ $\displaystyle\frac{1}{{\cal N}^{2}}\left\\{-u^{\ast}vB^{2}\delta_{n,-m}+\sum_{j\neq 0,n+m}P_{j}^{\ast}Q_{j-n-m}+(vP_{n+m}^{\ast}-u^{\ast}Q_{-n-m})B(1-\delta_{n,-m})\right\\},$ (81) where $(1-\delta_{n,-m})$ indicates that the term vanishes for $n=-m$. Clearly, the validity of the perturbation theory is limited to the condition (55). For $\delta\approx 0$, roughly speaking, the above results can be justified when $r\sim 1$ and $\omega_{z}\sim\Omega_{z}\gg\Omega_{x}$. ## Appendix C Equalities for transition matrix elements in the vanishing detuning case For the biharmonic modulation, we show the equalities that the transition matrix elements satisfy under the vanishing detuning condition ($\delta=0$) using the above analytical results, which helps us to understand the symmetry of the spectrum in the main text. It follows from Eq. (49) that $\displaystyle F_{-l}$ $\displaystyle=$ $\displaystyle e^{-i\Theta}\sum_{k}J_{k}\left(\frac{r\Omega_{z}}{p\omega_{z}}\right)J_{-l-kp}\left(\frac{\Omega_{z}}{\omega_{z}}\right)e^{ik\phi}$ (82) $\displaystyle=$ $\displaystyle(-1)^{l}e^{-i\Theta}\sum_{k}J_{k}\left(\frac{r\Omega_{z}}{p\omega_{z}}\right)(-1)^{k(p+1)}$ $\displaystyle\times J_{l-kp}\left(\frac{\Omega_{z}}{\omega_{z}}\right)e^{-ik\phi},$ where we used the relation $J_{-n}(z)=(-1)^{n}J_{n}(z)$. It is evident that when $p$ is an odd number, $p+1$ is even and thus $(-1)^{k(p+1)}=1$, leading to $F_{-l}=(-1)^{l}e^{-i2\Theta}F_{l}^{\ast}.$ (83) When $p$ is an even number, $(-1)^{k(p+1)}=(-1)^{k}$ may be either $+1$ or $-1$. Nevertheless, we can obtain a simple relation between $F_{l}$ and $F_{-l}$ by setting $(-1)^{k}e^{-ik\phi}=e^{ik\phi},$ (84) which yields $\phi=\left(1/2+n\right)\pi$ $(n=0,\pm 1,\pm 2,\ldots)$. With an even $p$ and such values of phase, we have $F_{l}=(-1)^{l}F_{-l}.$ (85) We should emphasize that Eqs. (83) and (85) hold under different conditions.
The former holds when $p$ is odd, regardless of $\phi$, while the latter is established when $p$ is even and $\phi=(1/2+n)\pi$. Provided that $\delta=0$, we get $m=\delta/\omega_{z}=0$. We define the phase of $F_{0}$ via $F_{0}=e^{-i\theta_{0}}|F_{0}|.$ (86) Together with Eqs. (69) and (70), we simply have $v=ue^{i\theta_{0}}$ (87) with the aid of Eq. (83) or (85). Such an equality between $u$ and $v$ is valid only for $\delta=0$ and within the regime of validity of Eq. (83) or (85). ### C.1 Odd $p$ We consider that $p$ is an odd number. It follows from Eq. (49) that $\theta_{0}=\Theta$. Using $\delta=0$ and Eqs. (83) and (87), one readily gets from Eqs. (75) and (76) that $\displaystyle Q_{j}$ $\displaystyle=$ $\displaystyle-\frac{f_{-j}^{\ast}}{2j\omega_{z}}\left(u+\frac{vf_{0}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq 0}\frac{f_{k-j}^{\ast}f_{k}}{k\omega_{z}}$ (88) $\displaystyle=$ $\displaystyle\frac{(-1)^{j+1}e^{i2\Theta}f_{j}}{2j\omega_{z}}\left(u+\frac{vf_{0}^{\ast}e^{-i2\Theta}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq 0}\frac{f_{-k-j}^{\ast}f_{-k}}{-k\omega_{z}}$ $\displaystyle=$ $\displaystyle\frac{(-1)^{j+1}e^{i\Theta}f_{j}}{2j\omega_{z}}\left(v+\frac{uf_{0}^{\ast}}{2j\omega_{z}}\right)+\frac{e^{i\Theta}u}{4j\omega_{z}}\sum_{k\neq 0}\frac{(-1)^{j+1}f_{k+j}f_{k}^{\ast}}{k\omega_{z}}$ $\displaystyle=$ $\displaystyle(-1)^{j+1}e^{i\Theta}P_{j}.$ From this relation and Eqs. (77)-(80), it is straightforward to show that $\displaystyle\left[x_{-+,-l}^{(+)}\right]^{\ast}$ $\displaystyle=$ $\displaystyle\sum_{n}\frac{F_{n-l}^{\ast}}{{\cal N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{n\neq 0,n}Q_{-j}^{\ast}Q_{j-n}^{\ast}+2vBQ_{-n}^{\ast}(1-\delta_{n,0})\right\\}$ (89) $\displaystyle=$ $\displaystyle\sum_{n}\frac{F_{-n-l}^{\ast}}{{\cal N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{j\neq 0,-n}Q_{-j}^{\ast}Q_{j+n}^{\ast}+2vBQ_{n}^{\ast}(1-\delta_{n,0})\right\\}$ $\displaystyle=$ $\displaystyle\sum_{n}\frac{(-1)^{n+l}F_{n+l}e^{i2\Theta}}{{\cal N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{j\neq 0,n}Q_{j}^{\ast}Q_{n-j}^{\ast}+2vBQ_{n}^{\ast}(1-\delta_{n,0})\right\\}$ $\displaystyle=$ $\displaystyle\sum_{n}\frac{(-1)^{n+l}F_{n+l}e^{i2\Theta}}{{\cal N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{j\neq 0,n}(-1)^{n}e^{-i2\Theta}P_{j}^{\ast}P_{n-j}^{\ast}+2vB(-1)^{n+1}e^{-i\Theta}P_{n}^{\ast}(1-\delta_{n,0})\right\\}$ $\displaystyle=$ $\displaystyle(-1)^{l}\sum_{n}\frac{F_{n+l}}{{\cal N}^{2}}\left\\{(u^{\ast})^{2}B^{2}\delta_{n,0}+\sum_{j\neq 0,n}P_{j}^{\ast}P_{n-j}^{\ast}-2u^{\ast}BP_{n}^{\ast}(1-\delta_{n,0})\right\\}$ $\displaystyle=$ $\displaystyle-(-1)^{l}x_{+-,l}^{(+)}.$ Similarly, we find that $\left[x^{(+)}_{++,-l}\right]^{\ast}=(-1)^{l}x^{(+)}_{++,l}$. Not surprisingly, due to the generalized parity of the Floquet states, the transition matrix elements satisfy Eq. (19) as long as $\delta+f(t)=-[\delta+f(t+T/2)]$. For the biharmonic modulation, such equalities are established when $p$ is odd and $\delta=0$. ### C.2 Even $p$ We now turn to the case where $p$ is an even number. In such a case, the generalized parity of the Floquet states is broken even if $\delta=0$. Thus, we cannot expect that the transition matrix elements satisfy Eq. (19). However, we have another type of equality. With Eqs.
(85) and (87), one gets $\displaystyle Q_{j}$ $\displaystyle=$ $\displaystyle\frac{f_{-j}^{\ast}}{-2j\omega_{z}}\left(u+\frac{vf_{0}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq 0}\frac{f_{k-j}^{\ast}f_{k}}{k\omega_{z}}$ (90) $\displaystyle=$ $\displaystyle\frac{(-1)^{j+1}f_{j}^{\ast}}{2j\omega_{z}}\left(u+\frac{vf_{0}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq 0}\frac{(-1)^{j+1}f_{j-k}^{\ast}f_{-k}}{-k\omega_{z}}$ $\displaystyle=$ $\displaystyle\frac{(-1)^{j+1}e^{-i\theta_{0}}f_{j}^{\ast}}{2j\omega_{z}}\left(v+\frac{u^{\ast}f_{0}}{2j\omega_{z}}\right)+\frac{e^{-i\theta_{0}}u^{\ast}}{4j\omega_{z}}\sum_{k\neq 0}\frac{(-1)^{j+1}f_{j+k}^{\ast}f_{k}}{k\omega_{z}}$ $\displaystyle=$ $\displaystyle(-1)^{j+1}e^{-i\theta_{0}}P_{j}^{\ast}.$ It is straightforward to derive Eqs. (23) and (24) in the main text via Eqs. (77)-(80) and (90). We stress that the conditions for establishing such relations require that $p$ is even, $\phi=(1/2+n)\pi$, and $\delta=0$. ## References * Mollow (1969) B. R. Mollow, Phys. Rev. 188, 1969 (1969). * Scully and Zubairy (1997) M. O. Scully and M. S. Zubairy, _Quantum optics_ (Cambridge University Press, 1997). * Cohen-Tannoudji _et al._ (1998) C. Cohen-Tannoudji, J. Dupont-Roc, G. Grynberg, and P. Thickstun, _Atom-photon interactions: basic processes and applications_ (Wiley, 1998). * He _et al._ (2013) Y.-M. He, Y. He, Y. Wei, D. Wu, M. Atature, C. Schneider, S. Hofling, M. Kamp, C. Lu, and J. Pan, Nat. Nanotech. 8, 213 (2013). * Santana _et al._ (2017) T. S. Santana, Y. Ma, R. N. E. Malein, F. Bastiman, E. Clarke, and B. D. Gerardot, Phys. Rev. B 95, 201410(R) (2017). * Kiršanskė _et al._ (2017) G. Kiršanskė, H. Thyrrestrup, R. S. Daveau, C. L. Dreeßen, T. Pregnolato, L. Midolo, P. Tighineanu, A. Javadi, S. Stobbe, R. Schott, A. Ludwig, A. D. Wieck, S. I. Park, J. D. Song, A. V. Kuhlmann, I. Söllner, M. C. Löbl, R. J. Warburton, and P. Lodahl, Phys. Rev. B 96, 165306 (2017). * Ficek and Freedhoff (1993) Z. Ficek and H. S. Freedhoff, Phys. Rev. A 48, 3092 (1993). * Agarwal _et al._ (1991) G. S. Agarwal, Y. Zhu, D. J. Gauthier, and T. W. Mossberg, J. Opt. Soc. Am. B 8, 1163 (1991). * Ficek and Freedhoff (1996) Z. Ficek and H. S. Freedhoff, Phys. Rev. A 53, 4275 (1996). * Ficek and Rudolph (1999) Z. Ficek and T. Rudolph, Phys. Rev. A 60, R4245 (1999). * Peiris _et al._ (2014) M. Peiris, K. Konthasinghe, Y. Yu, Z. C. Niu, and A. Muller, Phys. Rev. B 89, 155305 (2014). * Konthasinghe _et al._ (2014) K. Konthasinghe, M. Peiris, and A. Muller, Phys. Rev. A 90, 023810 (2014). * He _et al._ (2015) Y. He, Y.-M. He, J. Liu, Y.-J. Wei, H. Y. Ramírez, M. Atatüre, C. Schneider, M. Kamp, S. Höfling, C.-Y. Lu, and J.-W. Pan, Phys. Rev. Lett. 114, 097402 (2015). * Toyli _et al._ (2016) D. M. Toyli, A. W. Eddins, S. Boutin, S. Puri, D. Hover, V. Bolkhovsky, W. D. Oliver, A. Blais, and I. Siddiqi, Phys. Rev. X 6, 031004 (2016). * Carmichael (1985) H. J. Carmichael, Phys. Rev. Lett. 55, 2790 (1985). * Grünwald and Vogel (2012) P. Grünwald and W. Vogel, Phys. Rev. Lett. 109, 013601 (2012). * Grünwald and Vogel (2013) P. Grünwald and W. Vogel, Phys. Rev. A 88, 023837 (2013). * Kimble _et al._ (1977) H. J. Kimble, M. Dagenais, and L. Mandel, Phys. Rev. Lett. 39, 691 (1977). * D’Souza _et al._ (1990) R. D’Souza, A. S. Jayarao, and S. V. Lawande, Phys. Rev. A 41, 4083 (1990). * Nazir (2008) A. Nazir, Phys. Rev. B 78, 153309 (2008). * Pastukhov _et al._ (2014) V. M. Pastukhov, Y. V. Vladimirova, and V. N. Zadkov, Phys. Rev. A 90, 063831 (2014). * Itano _et al._ (1988) W. 
M. Itano, J. C. Bergquist, and D. J. Wineland, Phys. Rev. A 38, 559 (1988). * Ficek _et al._ (1984) Z. Ficek, R. Tanaś, and S. Kielich, Phys. Rev. A 29, 2004 (1984). * Damanet _et al._ (2018) F. Damanet, J. Kübler, J. Martin, and D. Braun, Phys. Rev. A 97, 023832 (2018). * Kryuchkyan _et al._ (2017) G. Y. Kryuchkyan, V. Shahnazaryan, O. V. Kibis, and I. A. Shelykh, Phys. Rev. A 95, 013834 (2017). * Antón _et al._ (2017) M. A. Antón, S. Maede-Razavi, F. Carreño, I. Thanopulos, and E. Paspalakis, Phys. Rev. A 96, 063812 (2017). * Yan _et al._ (2018) Y. Yan, Z. Lü, J. Y. Luo, and H. Zheng, Phys. Rev. A 97, 033817 (2018). * Saiko _et al._ (2018) A. P. Saiko, S. A. Markevich, and R. Fedaruk, Phys. Rev. A 98, 043814 (2018). * Breuer and Petruccione (1997) H.-P. Breuer and F. Petruccione, Phys. Rev. A 55, 3101 (1997). * Yan _et al._ (2016a) Y. Yan, Z. Lü, and H. Zheng, Ann. Phys. 371, 159 (2016a). * Roy and Hughes (2012) C. Roy and S. Hughes, Phys. Rev. B 85, 115309 (2012). * McCutcheon and Nazir (2013) D. P. S. McCutcheon and A. Nazir, Phys. Rev. Lett. 110, 217401 (2013). * Browne and Keitel (2000) D. E. Browne and C. H. Keitel, J. Mod. Opt. 47, 1307 (2000). * Yan _et al._ (2013) Y. Yan, Z. Lü, and H. Zheng, Phys. Rev. A 88, 053821 (2013). * Ulrich _et al._ (2011) S. M. Ulrich, S. Ates, S. Reitzenstein, A. Löffler, A. Forchel, and P. Michler, Phys. Rev. Lett. 106, 247402 (2011). * Ulhaq _et al._ (2013) A. Ulhaq, S. Weiler, C. Roy, S. M. Ulrich, M. Jetter, S. Hughes, and P. Michler, Opt. Express 21, 4382 (2013). * Yan _et al._ (2016b) Y. Yan, Z. Lü, H. Zheng, and Y. Zhao, Phys. Rev. A 93, 033812 (2016b). * Li _et al._ (2013) J. Li, M. P. Silveri, K. S. Kumar, J. M. Pirkkalainen, A. Vepsäläinen, W. C. Chien, J. Tuorila, M. A. Sillanpää, P. J. Hakonen, E. V. Thuneberg, _et al._, Nat. Commun. 4, 1420 (2013). * Pan _et al._ (2017) J. Pan, H. Z. Jooya, G. Sun, Y. Fan, P. Wu, D. A. Telnov, S.-I. Chu, and S. Han, Phys. Rev. B 96, 174518 (2017). * Brunel _et al._ (1998) C. Brunel, B. Lounis, P. Tamarat, and M. Orrit, Phys. Rev. Lett. 81, 2679 (1998). * Rohr _et al._ (2014) S. Rohr, E. Dupont-Ferrier, B. Pigeau, P. Verlot, V. Jacques, and O. Arcizet, Phys. Rev. Lett. 112, 010502 (2014). * Kibis _et al._ (2009) O. V. Kibis, G. Y. Slepyan, S. A. Maksimenko, and A. Hoffmann, Phys. Rev. Lett. 102, 023601 (2009). * Macovei and Keitel (2014) M. Macovei and C. H. Keitel, Phys. Rev. A 90, 043838 (2014). * Zhao _et al._ (2015) Y.-J. Zhao, Y.-L. Liu, Y.-x. Liu, and F. Nori, Phys. Rev. A 91, 053820 (2015). * Silveri _et al._ (2013) M. Silveri, J. Tuorila, M. Kemppainen, and E. Thuneberg, Phys. Rev. B 87, 134505 (2013). * Macovei _et al._ (2015) M. Macovei, M. Mishra, and C. H. Keitel, Phys. Rev. A 92, 013846 (2015). * Silveri _et al._ (2017) M. P. Silveri, J. A. Tuorila, E. V. Thuneberg, and G. S. Paraoanu, Rep. Prog. Phys. 80, 056002 (2017). * Wilkens and Rzążewski (1989) M. Wilkens and K. Rzążewski, Phys. Rev. A 40, 3164 (1989). * Das and Macovei (2013) S. Das and M. A. Macovei, Phys. Rev. B 88, 125306 (2013). * Ho _et al._ (1986) T.-S. Ho, K. Wang, and S.-I. Chu, Phys. Rev. A 33, 1798 (1986). * Shirley (1965) J. H. Shirley, Phys. Rev. 138, B979 (1965). * Sambe (1973) H. Sambe, Phys. Rev. A 7, 2203 (1973). * Grossmann _et al._ (1991) F. Grossmann, T. Dittrich, P. Jung, and P. Hänggi, Phys. Rev. Lett. 67, 516 (1991). * Lehmann _et al._ (2003) J. Lehmann, S. Kohler, P. Hänggi, and A. Nitzan, J. Chem. Phys. 118, 3283 (2003). * Hausinger and Grifoni (2010) J. Hausinger and M. Grifoni, Phys. Rev. A 81, 022117 (2010).
2024-09-04T02:54:59.030750
2020-03-11T06:05:47
2003.05126
{ "authors": "Sergey P. Shary", "full_text_license": null, "license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/", "provenance": "arxiv-papers-0000.json.gz:26152", "submitter": "Sergey Shary", "url": "https://arxiv.org/abs/2003.05126" }
arxiv-papers
# A variability measure for estimates of parameters in interval data fitting

Sergey P. Shary

Institute of Computational Technologies SB RAS and Novosibirsk State University, Novosibirsk, Russia
E-mail: <EMAIL_ADDRESS>

###### Abstract

The paper presents a construction of a quantitative measure of variability for parameter estimates in the data fitting problem under interval uncertainty. It shows the degree of variability and ambiguity of the estimate, and the need for its introduction is dictated by the non-uniqueness of answers to problems with interval data. A substantiation of the new variability measure is given, its application and motivations are discussed. Several examples and a series of numerical tests are considered, showing the features of the new characteristic and the specifics of its use.

Keywords: data fitting problem, linear regression, interval data uncertainty, maximum compatibility method, strong compatibility, variability measure.

MSC 2010: 65G40, 62J10, 90C90

## 1 Introduction and problem statement

The purpose of this work is to present a quantitative variability measure for estimates of parameters of functional dependencies in the statistics of interval data. This is a relatively young branch of modern data science that does not rely on probability theory, but makes extensive use of interval analysis methods (see, e. g., the surveys in [4, 7, 10]).

Fig. 1: A variability measure can be an estimate of the size of the set of possible solutions.

By the term "variability", we understand the degree of variation and ambiguity of the estimate, and the need for its introduction is dictated by the fact that, in processing interval data, the answer is typically not unique. Usually, we get a whole set of different estimates that are equally consistent (compatible) with the source data and, thus, suitable as solutions to the problem. The extent to which this set is large or small is, partly, characterized by the term "variability".

In traditional probabilistic statistics, estimates of parameters are known to be random variables themselves, and the measure of their variability can be the variance of the estimates, mean absolute difference, median absolute deviation, average absolute deviation, and the like. What could be their analogues in the statistics of interval data?

At first glance, the answer to this question seems quite obvious: it can be any value that characterizes the size of the set of solutions to the problem, if it is non-empty. We can even take an enclosure of the solution set obtained by an interval method. A certain disadvantage of this variant is the excessive detailing of the answer given as a box in $\mathbb{R}^{n}$, a large amount of information that still needs to be "digested" and reduced to a compact and expressive form. Sometimes, an interval estimate in the form of an axes-aligned box may inadequately represent the solution set. Another disadvantage is the complexity of finding such an estimate.

It is desirable to have a relatively simple and efficiently computable quantity, expressed in a single number, because it would give a general aggregate view of the subject of interest. Similarly to variance and other probabilistic measures, it can serve as an approximate characteristic of the quality of parameter estimation. The greater the variability of an estimate, the less its certainty and the worse its quality, and this can serve as a basis for conclusions about the quality of the estimate.
At the same time, the introduced variability measure should not be simply the “size of the solution set”. If this solution set, for example, is unstable and changes abruptly with arbitrarily small changes in the data, then its size is, to some extent, misleading and disorienting (see example in Section 4). A practically useful variability measure should take into account this possible instability of the solution set to the problem and give us a robust value. Fig. 2: An illustration for the data fitting problem under interval uncertainty. In our article, we are within the framework of the data fitting problem (often called regression analysis problem): given results of measurements or observations, it is required to construct a functional dependence of a fixed type that “best fit” these data. Specifically, we need to determine the parameters $x_{1}$, $x_{2}$, …, $x_{n}$ of a linear function of the form $b=x_{1}a_{1}+\ldots+x_{n}a_{n}$ (1) from a number of values of the independent variables $a_{1}$, $a_{2}$, …, $a_{n}$ (also called _exogenous_ , _explanatory_ , _predictor_ or _input_ variables), and the corresponding values of the dependent variable $b$ (also called _endogenous_ , _response_ , _criterion_ or _output_ variable). Both $a_{1}$, $a_{2}$, …, $a_{n}$ and $b$ are not known precisely, and we only have intervals of their possible values (see Fig. 2). To find estimates of the coefficients $x_{1}$, $x_{2}$, …, $x_{n}$, we use the so-called maximum compatibility method (previously called “maximum consistency method”), which was proposed and developed in the works [6, 16, 17, 19] and others. After the estimates for $x_{1}$, $x_{2}$, …, $x_{n}$ are found, we need to somehow evaluate their variability. Our article presents a construction of the variability measure in the above data fitting problem. Note that traditional methods of data fitting and regression analysis, such as the least squares method and its modifications, the least modulus method, etc., cannot be applied to the solution of our problem, since they are unsuitable for situations where the source data are intervals rather than points. ## 2 Formulation of the main results ### 2.1 Maximum compatibility method and tolerable solution set The initial data for our problem is a set of values of independent and dependent variables for function (1), which are obtained as a result of $m$ measurements (observations): $\begin{array}[]{ccccc}\text{\boldmath$a$}_{11},&\text{\boldmath$a$}_{12},&\ldots&\text{\boldmath$a$}_{1n},&\text{\boldmath$b$}_{1},\\\ \text{\boldmath$a$}_{21},&\text{\boldmath$a$}_{22},&\ldots&\text{\boldmath$a$}_{2n},&\text{\boldmath$b$}_{2},\\\ \vdots&\vdots&\ddots&\vdots&\vdots\\\ \text{\boldmath$a$}_{m1},&\text{\boldmath$a$}_{m2},&\ldots&\text{\boldmath$a$}_{mn},&\text{\boldmath$b$}_{m}.\end{array}$ (2) These are intervals as we assume that these data are inaccurate and have interval uncertainty due to measurement errors, etc. Both the data (2) and other interval values throughout the text are highlighted in bold mathematical font according to the informal international standard [5]. The first index of the interval values from (2) means the measurement number, and the second one, at $\text{\boldmath$a$}_{ij}$’s, is the number of the independent variable that takes the corresponding value in this measurement. 
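Since the code sketches later in this paper need a machine representation of data of the form (2), we fix a simple convention here: an interval quantity is stored as a pair of real NumPy arrays holding its lower and upper endpoints. The numbers below are a made-up illustrative dataset, not data taken from the paper.

```python
import numpy as np

# m = 3 measurements of n = 2 independent variables, in the format of (2);
# [a_ij] and [b_i] are kept as lower/upper endpoint arrays (made-up numbers)
A_inf = np.array([[0.9, 1.8],
                  [1.9, 0.8],
                  [2.9, 2.8]])
A_sup = np.array([[1.1, 2.2],
                  [2.1, 1.2],
                  [3.1, 3.2]])
b_inf = np.array([4.5, 3.5, 8.5])
b_sup = np.array([5.5, 4.5, 9.5])

mid_b = (b_inf + b_sup) / 2   # componentwise midpoints of [b]
rad_b = (b_sup - b_inf) / 2   # componentwise radii of [b]
```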
To find an estimate $(\hat{x}_{1},\hat{x}_{2},\ldots,\hat{x}_{n})$ of the parameters of the linear function (1), we “substitute” data (2) into equality (1), thus getting an interval system of linear algebraic equations $\left\\{\ \begin{array}[]{ccccccccc}\text{\boldmath$a$}_{11}x_{1}&+&\text{\boldmath$a$}_{12}x_{2}&+&\ldots&+&\text{\boldmath$a$}_{1n}x_{n}&=&\text{\boldmath$b$}_{1},\\\\[1.0pt] \text{\boldmath$a$}_{21}x_{1}&+&\text{\boldmath$a$}_{22}x_{2}&+&\ldots&+&\text{\boldmath$a$}_{2n}x_{n}&=&\text{\boldmath$b$}_{2},\\\\[1.0pt] \vdots&&\vdots&&\ddots&&\vdots&&\vdots\\\\[1.0pt] \text{\boldmath$a$}_{m1}x_{1}&+&\text{\boldmath$a$}_{m2}x_{2}&+&\ldots&+&\text{\boldmath$a$}_{mn}x_{n}&=&\text{\boldmath$b$}_{m},\end{array}\right.$ (3) or, briefly, $\text{\boldmath$A$}x=\text{\boldmath$b$}$ (4) with an interval $m\times n$-matrix $\text{\boldmath$A$}=(\text{\boldmath$a$}_{ij})$ and interval $m$-vector $\text{\boldmath$b$}=(\text{\boldmath$b$}_{i})$ in the right-hand side. The sets of parameters which are compatible, in this or that sense, with the measurement data (2) form various solution sets for the equations system (3). The most popular of them are the _united solution set_ and _tolerable solution set_. The united solution set, defined as $\varXi_{uni}(\text{\boldmath$A$},\text{\boldmath$b$})=\bigl{\\{}\,x\in\mathbb{R}^{n}\mid\text{ $Ax=b\,$ for some $A\in\text{\boldmath$A$}$ and $b\in\text{\boldmath$b$}$}\,\bigr{\\}},$ corresponds to the so-called weak compatibility between the parameters of function (1) and data (2) (see [6, 16, 17]). The tolerable solution set, defined as $\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})=\bigl{\\{}\,x\in\mathbb{R}^{n}\mid\text{ $Ax\in\text{\boldmath$b$}\,$ for each matrix $A\in\text{\boldmath$A$}$}\,\bigr{\\}},$ corresponds to the so-called strong compatibility between the parameters of function (1) and data (2) (see [19]). Fig. 3: An illustration of the strong compatibility between interval data and a linear function. Further, we assume that the solution to the data fitting problem for function (1) is found by the maximum compatibility method (see [16, 17, 19]). As an estimate of the parameters of function (1), it takes the maximum point of the _recognizing functional_ , a special function that gives a quantitative “compatibility measure” of this estimate with empirical data (2). The maximum compatibility method has two versions, “weak” and “strong”, that differ in understanding how exactly the interval data should be “compatible” with the function that we construct on them. Weak and strong compatibility reflect two different situations that may occur in data processing. In the weak version, it is required that the graph of the constructed function just intersects the measurement uncertainty boxes (see [16, 17]). The strong version implies more stringent condition: it requires that the function graph passes within the “corridors” specified by the intervals $\text{\boldmath$b$}_{i}$, $i=1,2,\ldots,m$, for _any_ values of the independent variables $a_{1}$, $a_{2}$, …, $a_{n}$ from the respective intervals $\text{\boldmath$a$}_{i1}$, $\text{\boldmath$a$}_{i2}$, …, $\text{\boldmath$a$}_{in}$ obtained in the $i$-th measurement (see [19]). This is illustrated in Fig. 3, where the straight line of the function graph goes through the vertical faces of the measurement uncertainty boxes. The weak compatibility is shown in Fig. 2 by two upper straight lines. The lower line in Fig. 
2 does not satisfy the compatibility condition at all, neither weak nor strong, since it does not intersect some boxes.

The "strong version" of the maximum compatibility method has a number of theoretical and practical advantages over the "weak version". These are polynomial complexity, robustness of estimates and their finite variability, the fact that the strong compatibility partially overcomes the so-called Demidenko paradox, etc. (see details in [19]). Hence, we consider below the strong version of the maximum compatibility method, which corresponds to the tolerable solution set $\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$ for the interval system of equations (4). Its recognizing functional is usually denoted by "Tol",

$\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})\;=\,\min_{1\leq i\leq m}\left\\{\,\mathrm{rad}\,\text{\boldmath$b$}_{i}-\left|\;\mathrm{mid}\,\text{\boldmath$b$}_{i}-\sum_{j=1}^{n}\,\text{\boldmath$a$}_{ij}x_{j}\,\right|\,\right\\},$ (5)

where

$\mathrm{rad}\,\text{\boldmath$b$}_{i}=\tfrac{1}{2}(\overline{\text{\boldmath$b$}}_{i}-\underline{\text{\boldmath$b$}}_{i}),\qquad\mathrm{mid}\,\text{\boldmath$b$}_{i}=\tfrac{1}{2}(\overline{\text{\boldmath$b$}}_{i}+\underline{\text{\boldmath$b$}}_{i})$

are the radii and midpoints of the components of the right-hand side $b$, the arithmetic operations inside the modulus in (5) are those of the classical interval arithmetic (see, e. g., [4, 8, 9]), and the modulus is understood as the maximum absolute value of the points from the interval,

$|\text{\boldmath$a$}|=\max\,\\{\,|a|\mid a\in\text{\boldmath$a$}\,\\}=\max\,\bigl\\{\,|\underline{\text{\boldmath$a$}}|,|\overline{\text{\boldmath$a$}}|\,\bigr\\}.$

Typical graphs of the functional Tol for the one-dimensional case are shown in Fig. 4 and Fig. 5.

To solve the data fitting problem for the linear function (1) and data set (2), it is necessary to find the unconstrained maximum, over all $x\in\mathbb{R}^{n}$, of the functional $\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})$,

$\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})\rightarrow\max,$

and the vector $\hat{x}=\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})$ at which this maximum is attained provides an estimate of the parameters of function (1).

If $\max\,\mathrm{Tol}\,\geq 0$, then the solution set $\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$, i. e., the set of parameters strongly compatible with the data, is non-empty, and $\hat{x}\in\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$. If $\max\,\mathrm{Tol}\,<0$, then the solution set $\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$ is empty and there do not exist parameters that are strongly compatible with data (2). However, the argument $\hat{x}$ of $\max\,\mathrm{Tol}\,$ still provides the best compatibility of the constructed linear function with data (2) (more precisely, the least incompatibility).

To conclude this subsection, we give a useful result on the tolerable solution set that allows us to investigate whether it is bounded or unbounded, i. e., whether the tolerable solution set is finite in size or extends infinitely.

Irene Sharaya's boundedness criterion [13]. Let the tolerable solution set to an interval linear system $\text{\boldmath$A$}x=\text{\boldmath$b$}$ be nonempty. It is unbounded if and only if the matrix $\text{\boldmath$A$}$ has linearly dependent noninterval columns.
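For a hands-on feel of the functional (5), note that by the midpoint-radius rules of interval arithmetic the magnitude of the interval $\mathrm{mid}\,\text{\boldmath$b$}_{i}-\sum_{j}\text{\boldmath$a$}_{ij}x_{j}$ equals $|\mathrm{mid}\,\text{\boldmath$b$}_{i}-(\mathrm{mid}\,\text{\boldmath$A$}\,x)_{i}|+(\mathrm{rad}\,\text{\boldmath$A$}\,|x|)_{i}$, so Tol can be evaluated with ordinary vector operations. The following Python/NumPy sketch (our own minimal code, with our own names) evaluates Tol and maximizes it with a derivative-free Nelder-Mead search; this is only a crude stand-in for the dedicated nonsmooth optimization methods behind the author's program tolsolvty2.m mentioned in Sections 4 and 5.

```python
import numpy as np
from scipy.optimize import minimize

def tol(x, A_inf, A_sup, b_inf, b_sup):
    """Recognizing functional (5); the interval matrix [A] and vector [b]
    are stored as pairs of real arrays of their lower/upper endpoints."""
    Am, Ar = (A_inf + A_sup) / 2, (A_sup - A_inf) / 2   # mid A, rad A
    bm, br = (b_inf + b_sup) / 2, (b_sup - b_inf) / 2   # mid b, rad b
    return np.min(br - np.abs(bm - Am @ x) - Ar @ np.abs(x))

def max_tol(A_inf, A_sup, b_inf, b_sup, x0=None):
    """Crude unconstrained maximization of the concave piecewise linear Tol."""
    if x0 is None:
        x0 = np.zeros(A_inf.shape[1])
    res = minimize(lambda x: -tol(x, A_inf, A_sup, b_inf, b_sup), x0,
                   method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 50000})
    return res.x, -res.fun   # estimate of arg max Tol and of max Tol
```

Since Tol is concave and piecewise linear, any local maximum found this way is global; a nonsmooth-aware method, such as the $r$-algorithms referred to later in the paper, converges far more reliably than Nelder-Mead, which is used here only for brevity.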
The criterion of boundedness shows that the tolerable solution set is unbounded, in fact, only under exceptional circumstances, which are almost never fulfilled in practice, when working with actual interval data. That is, the tolerable solution set is mostly bounded, and the estimates obtained by the strong version of the maximum compatibility method almost always have finite variability.

### 2.2 Variability measures

As a quantity characterizing the variability of the estimate of the parameter vector $\hat{x}=(\hat{x}_{1},\hat{x}_{2},\ldots,\hat{x}_{n})$ in the linear function (1), which is obtained by the maximum compatibility method from data (2), we propose

$\mathrm{IVE}\,(\text{\boldmath$A$},\text{\boldmath$b$})\;=\;\sqrt{n}\;\max_{\mathbb{R}^{n}}\,\mathrm{Tol}\,\cdot\Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}\,A\,\Bigr{)}\cdot\frac{\displaystyle\bigl{\|}\,\arg\max_{\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$ (6)

In this formula, $n$ is the dimension of the parameter vector of function (1) under construction, $\|\cdot\|_{2}$ is the Euclidean norm (2-norm) of vectors from $\mathbb{R}^{n}$, defined as

$\|x\|_{2}\;=\;\left(\;\sum_{i=1}^{n}|x_{i}|^{2}\,\right)^{1/2},$

$\mathrm{cond}_{2}\,A$ is the spectral condition number of the matrix $A$, defined as

$\mathrm{cond}_{2}\,A\;=\;\frac{\sigma_{\max}(A)}{\sigma_{\min}(A)},$

i. e., the ratio of the maximal $\sigma_{\max}(A)$ and minimal $\sigma_{\min}(A)$ singular values of $A$; it is an extension, to the rectangular case, of the concept of the condition number from computational linear algebra (see e. g. [2, 24]); $\hat{\text{\boldmath$b$}}$ is a certain "most representative" point from the interval vector $b$, which is taken as

$\hat{\text{\boldmath$b$}}\;=\;\tfrac{1}{2}(|\mathrm{mid}\,\text{\boldmath$b$}+\mathrm{rad}\,\text{\boldmath$b$}|+|\mathrm{mid}\,\text{\boldmath$b$}-\mathrm{rad}\,\text{\boldmath$b$}|),$ (7)

where the operations "mid" and "rad" are applied in a componentwise manner.

Fig. 4: The maximum value of the recognizing functional gives an idea of the size of the tolerable solution set $\varXi_{tol}$.

Despite the definite formula (7) for $\hat{\text{\boldmath$b$}}$, it should be noted that the introduction of this point is, to a large extent, a matter of common sense. The general approach to the definition of $\hat{\text{\boldmath$b$}}$ is that it must be a kind of "most representative" point from the right-hand side vector $b$, and in some situations this choice may be different from formula (7). For example, $\hat{\text{\boldmath$b$}}$ can be a point result of the measurement, around which the uncertainty interval is built later, based on information about the accuracy of the measuring device.

Apart from (6), as a measure of relative variability of the parameter estimate, the value

$n\;\Bigl{(}\,\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A\,\Bigr{)}\cdot\frac{\max_{\mathbb{R}^{n}}\,\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}},$ (8)

can have a certain significance.

Both IVE and value (8) are defined for interval linear systems (4) with nonzero right-hand sides. They can take either positive real values or be infinite. The latter occurs only in the case of $\,\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A=\infty$, when all the point matrices $A\in\text{\boldmath$A$}$ have incomplete rank, i. e., when $\sigma_{\min}(A)=0$ for every $A\in\text{\boldmath$A$}$. Then the variability measures are set to be infinite.
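To make the recipe concrete, the following sketch assembles IVE from its ingredients: $\hat{\text{\boldmath$b$}}$ by formula (7), and a crude random-sampling estimate of $\min_{A\in\text{\boldmath$A$}}\mathrm{cond}_{2}\,A$ from above (an upper estimate of this minimum is enough for practical purposes, as discussed in Section 5, where a global search such as simulated annealing is used instead of sampling). All function names are ours, and $\max\,\mathrm{Tol}$ together with its argument are assumed to come from a separate maximization, e. g. the one sketched in Section 2.1.

```python
import numpy as np

def b_hat(b_inf, b_sup):
    """The 'most representative' point of the right-hand side, formula (7)."""
    bm, br = (b_inf + b_sup) / 2, (b_sup - b_inf) / 2
    return 0.5 * (np.abs(bm + br) + np.abs(bm - br))

def min_cond2(A_inf, A_sup, n_samples=20000, seed=0):
    """Crude upper estimate of min cond_2(A) over point matrices A in [A],
    obtained by random sampling; a stand-in for a real global search."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_samples):
        A = rng.uniform(A_inf, A_sup, size=A_inf.shape)
        s = np.linalg.svd(A, compute_uv=False)       # singular values, descending
        best = min(best, s[0] / s[-1] if s[-1] > 0 else np.inf)
    return best

def ive(max_t, x_hat, A_inf, A_sup, b_inf, b_sup):
    """The variability measure IVE, formula (6)."""
    n = len(x_hat)
    return (np.sqrt(n) * max_t * min_cond2(A_inf, A_sup)
            * np.linalg.norm(x_hat, 2) / np.linalg.norm(b_hat(b_inf, b_sup), 2))
```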
The symbol IVE is built as an abbreviation of the phrase "interval variability of the estimate". Below, we show that the value IVE adequately characterizes the size of a non-empty tolerable solution set for a large class of practically important situations. But it is useful to discuss informal motivations that lead to the estimate IVE and to demonstrate that IVE has an intuitive, clear and even visual meaning.

Fig. 5: In addition to the maximum of the recognizing functional, the size of the tolerable solution set is also affected by the "steepness" of the graph.

The tolerable solution set of an interval system of linear algebraic equations is the zero level set of the recognizing functional Tol (see details in [15]), or, in other words, the intersection of the hypograph of this functional with the coordinate plane $\mathrm{Tol}\,=0$ (this is illustrated in Fig. 4). As a consequence, the magnitude of the maximum of the recognizing functional can, with other things being equal, be a measure of how extensive or narrow the tolerable solution set is. The larger $\max\,\mathrm{Tol}\,$, the larger the size of the tolerable solution set, and vice versa.

An additional factor that provides "other things being equal" is the slope (steepness) of the pieces of hyperplanes of which the polyhedral graph of the functional Tol is composed (these are straight lines in the 1D case in Fig. 4 and Fig. 5). The slope of the hyperplanes is determined by the coefficients of the equations that define them, which are the endpoints of the data intervals (2). The value of this slope is summarized in terms of the condition number of point matrices from the interval data matrix $A$.

Finally, the multiplier

$\frac{\|\arg\max\,\mathrm{Tol}\,\|_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}\ =\ \frac{\|\hat{x}\|_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}$

is a scaling coefficient that helps to provide the commensurability of the final value with the magnitudes of the solution, $\arg\max\,\mathrm{Tol}\,$, and of the right-hand side vector of the equations system. Thus, formula (6) is obtained.

## 3 A justification of the variability measure

Considering the most general case, we should assume that the number of measurements $m$ may not coincide with the number $n$ of unknown parameters of the linear function (1). In this section, we consider only the case $m\geq n$. In other words, the number of measurements (observations) made is not less than the number of function parameters. Then the interval system of linear equations (4) is either square or tall (overdetermined). Of course, the data fitting problem makes sense for $m<n$ too, the maximum compatibility method also works for this case, and the variability measure IVE is then also applicable (see Section 4), but the latter still needs a separate substantiation.

### 3.1 Estimates of perturbations of the solution to rectangular linear systems

The starting point of our constructions justifying the choice of (6) exactly in the form described above is the well-known inequality that estimates the perturbation $\Delta x$ of a nonzero solution $x$ to the system of linear algebraic equations $Ax=b$ depending on the change $\Delta b$ of the right-hand side $b$ (see, e.
g., [2, 24]):

$\frac{\|\Delta x\|_{2}}{\|x\|_{2}}\ \leq\ \mathrm{cond}_{2}\,A\,\cdot\frac{\|\Delta b\|_{2}}{\|b\|_{2}}.$ (9)

It is usually considered for square systems of linear equations, when $m=n$, but in the case of the Euclidean vector norm and the spectral condition number of matrices, this inequality holds true in the more general case with $m\geq n$. Naturally, estimate (9) makes sense only for $\sigma_{\min}(A)\neq 0$, when $\mathrm{cond}_{2}A<\infty$, i. e., when the matrix $A$ has full column rank. Let us briefly recall its derivation for this case. Given

$Ax=b\quad\text{ and }\quad A(x+\Delta x)=b+\Delta b,$

we have $A\Delta x=\Delta b$. Further,

$$\begin{aligned}
\frac{\|\Delta x\|_{2}/\|x\|_{2}}{\|\Delta b\|_{2}/\|b\|_{2}}\ &=\ \frac{\|\Delta x\|_{2}\,\|b\|_{2}}{\|x\|_{2}\,\|\Delta b\|_{2}}\ =\ \frac{\|\Delta x\|_{2}\,\|Ax\|_{2}}{\|x\|_{2}\,\|A\Delta x\|_{2}}\ =\ \frac{\|\Delta x\|_{2}}{\|A\Delta x\|_{2}}\;\frac{\|Ax\|_{2}}{\|x\|_{2}}\\
&\leq\;\max_{\Delta x\neq 0}\frac{\|\Delta x\|_{2}}{\|A\Delta x\|_{2}}\ \max_{x\neq 0}\frac{\|Ax\|_{2}}{\|x\|_{2}}\ =\ \left(\min_{\Delta x\neq 0}\frac{\|A\Delta x\|_{2}}{\|\Delta x\|_{2}}\right)^{-1}\ \max_{x\neq 0}\frac{\|Ax\|_{2}}{\|x\|_{2}}\\
&=\ \bigl(\sigma_{\min}(A)\bigr)^{-1}\,\sigma_{\max}(A)\ =\ \mathrm{cond}_{2}(A)
\end{aligned}$$

by virtue of the properties of the singular values (see e. g. [3, 24]). A comparison of the beginning and the end of this calculation leads to the inequality (9), which, as is easy to understand, is attainable for some $x$ and $\Delta x$, or, equivalently, for some right-hand sides $b$ and their perturbations $\Delta b$. Naturally, the above calculations and the resulting estimate make sense only for $\sigma_{\min}(A)\neq 0$.

### 3.2 Interval systems with point matrices

Let us consider an interval system of linear algebraic equations

$Ax=\text{\boldmath$b$}$ (10)

with a point (noninterval) $m\times n$-matrix $A$, $m\geq n$, and an interval $m$-vector $\text{\boldmath$b$}$ in the right-hand side. We assume that $A$ has full column rank and, therefore, $\mathrm{cond}_{2}\,A<\infty$. Suppose also that the tolerable solution set for system (10) is non-empty, i. e. $\varXi_{tol}(A,\text{\boldmath$b$})=\bigl\\{\,x\in\mathbb{R}^{n}\mid Ax\in\text{\boldmath$b$}\,\bigr\\}\neq\varnothing$.

We need to quickly and with little effort estimate the size of this solution set, and our answer will be a "radius type" estimate for $\varXi_{tol}(A,\text{\boldmath$b$})$. More precisely, we are going to evaluate $\max\|x^{\prime}-\hat{x}\|_{2}$ over all $x^{\prime}\in\varXi_{tol}(A,\text{\boldmath$b$})$ and for a special fixed point $\hat{x}\in\varXi_{tol}(A,\text{\boldmath$b$})$, which is taken as

$\hat{x}\ =\ \arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\text{\boldmath$b$}).$

Recall that the argument $\hat{x}$ of the maximum of the recognizing functional for system (10) is an estimate of the parameters of the linear function (1) from empirical data. Strictly speaking, this point can be determined non-uniquely, but then let $\hat{x}$ be any one of the points at which the maximum is reached.

Let $x^{\prime}$ be a point in the tolerable solution set $\varXi_{tol}(A,\text{\boldmath$b$})$. How to evaluate $\|x^{\prime}-\hat{x}\|_{2}$?
It is clear that $x^{\prime}$ and $\hat{x}$ are solutions of systems of linear algebraic equations with the matrix $A$ and some right-hand sides $b^{\prime}$ and $\hat{b}$, respectively, from the interval vector $\text{\boldmath$b$}$. If $\hat{x}\neq 0$ and $\hat{b}\neq 0$, then we can apply inequality (9), considering a perturbation of the solution $\hat{x}$ to the system of linear algebraic equations $Ax=\hat{b}$. Then $\Delta x=x^{\prime}-\hat{x}$, $\Delta b=b^{\prime}-\hat{b}$, and we get

$\frac{\|x^{\prime}-\hat{x}\|_{2}}{\|\hat{x}\|_{2}}\ \leq\;\mathrm{cond}_{2}\,A\cdot\frac{\|b^{\prime}-\hat{b}\|_{2}}{\|\hat{b}\|_{2}},$

whence the absolute estimate is obtained:

$\|x^{\prime}-\hat{x}\|_{2}\ \leq\;\mathrm{cond}_{2}\,A\cdot\|\hat{x}\|_{2}\cdot\frac{\|b^{\prime}-\hat{b}\|_{2}}{\|\hat{b}\|_{2}}.$ (11)

The point $\hat{x}$ is found as the result of maximization of the recognizing functional Tol, the point $\hat{b}$ coincides with $A\hat{x}$, and the condition number $\mathrm{cond}_{2}\,A$ can be computed by well-developed standard procedures. Therefore, for practical work with inequality (11), one needs to somehow evaluate $\|b^{\prime}-\hat{b}\|_{2}$. But first, bearing in mind the further application of the deduced estimate in a situation where the matrix $A$ may vary, we somewhat roughen (11) by taking $\|\hat{b}\|_{2}$ approximately, namely, as the norm of the "most representative" point $\hat{\text{\boldmath$b$}}$ of the interval vector $\text{\boldmath$b$}$, which we defined in Section 2.2:

$\|\hat{b}\|_{2}\,\approx\,\|\hat{\text{\boldmath$b$}}\|_{2},\qquad\text{ where }\ \hat{\text{\boldmath$b$}}\,=\,\tfrac{1}{2}\,\bigl{(}\,|\mathrm{mid}\,\text{\boldmath$b$}+\mathrm{rad}\,\text{\boldmath$b$}|+|\mathrm{mid}\,\text{\boldmath$b$}-\mathrm{rad}\,\text{\boldmath$b$}|\,\bigr{)}.$

In doing this, some coarsening is allowed, so instead of (11) we write

$\|x^{\prime}-\hat{x}\|_{2}\ \lessapprox\;\mathrm{cond}_{2}\,A\cdot\|\hat{x}\|_{2}\cdot\frac{\|\Delta b\|_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$ (12)

Now it is necessary to determine the increment of the right-hand side $\Delta b=b^{\prime}-\hat{b}$. Its obvious upper bound is $2\,\mathrm{rad}\,\text{\boldmath$b$}$, but it is too crude. To get a more accurate estimate of $\Delta b$, we also consider, along with system (10), a system of linear algebraic equations

$Ax=\tilde{\text{\boldmath$b$}},$ (13)

for which the right-hand side is obtained by uniformly "compressing" the interval vector $\text{\boldmath$b$}$:

$\tilde{\text{\boldmath$b$}}\,:=\,\bigl{[}\,\underline{\text{\boldmath$b$}}+M,\overline{\text{\boldmath$b$}}-M\,\bigr{]},$ (14)

where

$M\;:=\;\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\text{\boldmath$b$})\ \geq\ 0.$

Since the maximum $M$ is reached for a certain value of the argument, $\hat{x}$, then

$M=\,\min_{1\leq i\leq m}\left\\{\,\mathrm{rad}\,\text{\boldmath$b$}_{i}-\left|\;\mathrm{mid}\,\text{\boldmath$b$}_{i}-\sum_{j=1}^{n}\,\text{\boldmath$a$}_{ij}\hat{x}_{j}\,\right|\,\right\\}\ \leq\,\min_{1\leq i\leq m}\,\mathrm{rad}\,\text{\boldmath$b$}_{i}.$

As a result, $\underline{\text{\boldmath$b$}}+M\leq\overline{\text{\boldmath$b$}}-M$ in the componentwise sense, and the endpoints in the interval vector (14) do not "overlap" each other.
But the properties of the recognizing functional imply that, for the interval system of linear algebraic equations (13) with the right-hand side (14), the maximum of the recognizing functional is zero:

$\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\tilde{\text{\boldmath$b$}})\ =\ 0.$

Indeed, the values of $\mathrm{rad}\,\text{\boldmath$b$}_{i}$ are summands in all expressions in (5), over which we take the minimum for $i=1,2,\ldots,m$. Hence, if we simultaneously increase or decrease all $\mathrm{rad}\,\text{\boldmath$b$}_{i}$ by the same value, keeping the midpoints $\mathrm{mid}\,\text{\boldmath$b$}_{i}$ unchanged, then the total value of the recognizing functional will increase or decrease by exactly the same value. In other words, if we take a constant $C\geq 0$ and the interval $m$-vector $\text{\boldmath$e$}=([-1,1],\ldots,[-1,1])^{\top}$, then, for the system $Ax=\text{\boldmath$b$}+C\text{\boldmath$e$}\,$ with all the right-hand sides expanded by $[-C,C]$, we have

$\mathrm{Tol}\,(x,A,\text{\boldmath$b$}+C\text{\boldmath$e$})\ =\ \mathrm{Tol}\,(x,A,\text{\boldmath$b$})+C.$ (15)

Therefore,

$\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\text{\boldmath$b$}+C\text{\boldmath$e$})\ =\ \max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\text{\boldmath$b$})+C.$ (16)

The uniform narrowing of the right-hand side vector acts on the tolerable solution set and the recognizing functional in a completely similar way. If we narrow down all the components by the same value $M$, then the maximum of the recognizing functional of the new interval system also decreases by $M$.

By virtue of the properties of the recognizing functional, the tolerable solution set $\varXi_{tol}(A,\tilde{\text{\boldmath$b$}})$ for system (13) has empty interior (such sets are often called "non-solid" or "meager"), which we will consider equivalent to "having zero size". Naturally, this is a simplifying assumption, since in reality the tolerable solution set corresponding to the zero maximum of the recognizing functional may not be a single-point set. But we still accept that. This simplification is also supported by the fact that the situation with the zero maximum of the recognizing functional is unstable: the corresponding tolerable solution set can become empty with an arbitrarily small data perturbation (see Section 4).

Another fact concerning the auxiliary system (13) with the narrowed right-hand side, which follows from (15)-(16), is that the point $\hat{x}$ remains the argument of the maximum of the recognizing functional:

$\hat{x}\ =\ \arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\tilde{\text{\boldmath$b$}}).$

For this reason, the point $\hat{b}=A\hat{x}$ lies in the interval vector $\tilde{\text{\boldmath$b$}}$ defined by (14).

From what has been said, it follows that the solution set for the system $Ax=\text{\boldmath$b$}$ is obtained from the solution set of the system $Ax=\tilde{\text{\boldmath$b$}}$, which has "negligible size" and for which $\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,(x,\text{\boldmath$A$},\tilde{\text{\boldmath$b$}})=0$, through expanding the right-hand side vector $\tilde{\text{\boldmath$b$}}$ in each component simultaneously by $[-M,M]$, where

$M=\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$}).$

The interval vector $\tilde{\text{\boldmath$b$}}\ni b$ may have non-zero size, but we put $[-\Delta b,\Delta b]=([-M,M],\ldots,[-M,M])^{\top}$ in order to make our estimate (12) attainable.
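The shift property (15)-(16) and the narrowing construction (14) are easy to check numerically. The snippet below is a small self-contained illustration with our own names and randomly generated data; the closed-form evaluation of Tol follows the midpoint-radius reformulation used in the sketch of Section 2.1, and the Nelder-Mead search is again only a crude substitute for a proper nonsmooth optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def tol(x, A_inf, A_sup, b_inf, b_sup):
    Am, Ar = (A_inf + A_sup) / 2, (A_sup - A_inf) / 2
    bm, br = (b_inf + b_sup) / 2, (b_sup - b_inf) / 2
    return np.min(br - np.abs(bm - Am @ x) - Ar @ np.abs(x))

def max_tol(A_inf, A_sup, b_inf, b_sup, x0):
    r = minimize(lambda x: -tol(x, A_inf, A_sup, b_inf, b_sup), x0,
                 method="Nelder-Mead", options={"xatol": 1e-12, "fatol": 1e-12})
    return r.x, -r.fun

rng = np.random.default_rng(1)
A_inf = rng.uniform(-1, 1, (4, 3)); A_sup = A_inf + rng.uniform(0, 0.2, (4, 3))
b_inf = rng.uniform(-1, 1, 4);      b_sup = b_inf + rng.uniform(0.5, 1.0, 4)

# property (15): expanding all components of [b] by [-C, C] shifts Tol by +C
x, C = rng.uniform(-1, 1, 3), 0.37
assert abs(tol(x, A_inf, A_sup, b_inf - C, b_sup + C)
           - (tol(x, A_inf, A_sup, b_inf, b_sup) + C)) < 1e-10

# construction (14): narrowing [b] by M = max Tol makes the new maximum zero, (16)
x_hat, M = max_tol(A_inf, A_sup, b_inf, b_sup, np.zeros(3))
_, M_tilde = max_tol(A_inf, A_sup, b_inf + M, b_sup - M, x_hat)
print(M_tilde)   # should be numerically zero
```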
Accordingly, in inequality (12) we take $\|\Delta b\|=\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$}),$ if the Chebyshev norm ($\infty$-norm) is considered, or a value that differs from it by a corrective factor from the equivalence inequality for vector norms, if we take any other norm. As is known, for any vector $y\in\mathbb{R}^{n}$ (see [2]) $\|y\|_{\infty}\leq\|y\|_{2}\leq\sqrt{n}\;\|y\|_{\infty}.$ (17) Then $\|x^{\prime}-\hat{x}\|_{2}\ \lessapprox\,\sqrt{n}\ \,\mathrm{cond}_{2}A\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$ (18) What happens if the matrix $A$ does not have a full column rank? Then, by virtue of the Irene Sharaya criterion, the nonempty tolerable solution set to the system (10) is unbounded. This is completely consistent with the fact that then $\mathrm{cond}_{2}A=\infty$ and the value of the variability measure IVE is infinite too. ### 3.3 General interval systems Finally, we consider a general interval system of linear equations $\text{\boldmath$A$}x=\text{\boldmath$b$}$, with an essentially interval matrix, i. e., when $\mathrm{rad}\,\text{\boldmath$A$}\neq 0$. In view of the properties of the tolerable solution set (see, e. g., [15]), it can be represented as $\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})\ =\ \bigcap_{A\in\text{\boldmath$A$}}\;\bigl{\\{}\,x\in\mathbb{R}^{n}\mid Ax\in\text{\boldmath$b$}\,\bigr{\\}}\ =\ \bigcap_{A\in\text{\boldmath$A$}}\varXi_{tol}(A,\text{\boldmath$b$}),$ (19) i. e., as the intersection of the solution sets to the individual systems $Ax=b$ with point matrices $A\in\text{\boldmath$A$}$. For each interval linear system $Ax=\text{\boldmath$b$}$ with $A\in\text{\boldmath$A$}$, we have estimate (18), if $A$ has full column rank. Otherwise, if the point matrix $A$ has incomplete column rank and the corresponding solution set $\varXi_{tol}(A,\text{\boldmath$b$})$ is unbounded, then we do not take it into account. Consequently, for the tolerable solution set of the system $\text{\boldmath$A$}x=\text{\boldmath$b$}$, which is the intersection of the solution sets $\varXi_{tol}(A,\text{\boldmath$b$})$ for all $A\in\text{\boldmath$A$}$, the following should be true: $\|x^{\prime}-\hat{x}\|_{2}\ \lessapprox\ \min_{A\in\text{\boldmath$A$}}\,\left\\{\;\sqrt{n}\;\,\mathrm{cond}_{2}A\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}\,\right\\}.$ (20) The transition from representation (19) to inequality (20) can be both very accurate and rather crude (as can be seen from considering the intersection of two 1D intervals). It all depends on the size of the intersection of the solution sets of individual systems $Ax=\text{\boldmath$b$}$. On the other hand, the amount of this intersection is indirectly characterized by the magnitude of $\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,$. Taking the above facts into account, we perform approximate estimation of the right-hand side of inequality (20) by moving the minimum over $A\in\text{\boldmath$A$}$ through the curly brackets. 
First of all, we evaluate the factor $\|\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\|_{2}$, which changes to the smallest extent, by the constant available to us after the numerical solution of the data fitting problem:

$\bigl{\|}\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\text{\boldmath$b$})\bigr{\|}_{2}\,\approx\;\mathrm{const}\;=\;\bigl{\|}\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})\bigr{\|}_{2}.$ (21)

Next, the minimum of $\mathrm{cond}_{2}A$ naturally turns into $\min\mathrm{cond}_{2}A$, and the most important factor $\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\text{\boldmath$b$})$ will be changed to $\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})$. This choice (as well as (21)) is rather rigidly determined by the following considerations. The expression for our variability measure should preserve its simplicity and be uniform for all cases and situations. In particular, if the interval matrix $\text{\boldmath$A$}$ shrinks to a point matrix $A$, then our measure should turn into the estimate (18) for the point case. Finally, if $\max\,\mathrm{Tol}\,=0$, then our measure must be zero too, since the size of the (stable) tolerable solution set is also zero, and our variability measure should reliably detect such situations. All this taken together leads to the estimate

$\|x^{\prime}-\hat{x}\|_{2}\;\,\lessapprox\ \sqrt{n}\ \Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A\,\Bigr{)}\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$ (22)

The same estimate as (22), by virtue of the equivalence inequality (17), is also true for the Chebyshev norm:

$\max_{x^{\prime}\in\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})}\|x^{\prime}-\hat{x}\|_{\infty}\ \lessapprox\ \sqrt{n}\;\,\Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}\,A\,\Bigr{)}\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$

This completes the rationale for (6).

If we want to evaluate the relative size of the tolerable solution set, expressing it in ratio to the norm of its points, then it is reasonable to take $\hat{x}=\arg\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,$ as the "most typical" point from the tolerable solution set $\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$. Using (17) again, we obtain

$\frac{\max_{x^{\prime}\in\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})}\|x^{\prime}-x^{\prime\prime}\|_{\infty}}{\|\hat{x}\|_{\infty}}\ \lessapprox\ n\;\Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A\,\Bigr{)}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$

This gives value (8).

## 4 Numerical examples and some tests

First of all, we consider an example of an unstable tolerable solution set that changes abruptly with small perturbations in the system of equations. For all interval $2\times 2$-systems of linear algebraic equations of the form

$\begin{pmatrix}[-1,1]&[-1,1]\\\\[2.0pt] 1&-1\\\\[2.0pt] \end{pmatrix}\begin{pmatrix}x_{1}\\\\[2.0pt] x_{2}\end{pmatrix}=\begin{pmatrix}{[-1,1]}\\\\[2.0pt] {[1,1+\eta]}\end{pmatrix},\qquad\eta\geq 0,$ (23)

the tolerable solution sets are the same: this is the straight line segment joining the points $(0,-1)$ and $(1,0)$, depicted in Fig. 6.
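A quick numerical check of this example with the helpers sketched earlier (our own code, not the author's tolsolvty2.m): the maximum of Tol over such systems is zero, attained at $(0.5,-0.5)$, and an arbitrarily small increase of the lower endpoint of the second right-hand side component drives the maximum below zero, emptying the tolerable solution set.

```python
import numpy as np
from scipy.optimize import minimize

def tol(x, A_inf, A_sup, b_inf, b_sup):
    Am, Ar = (A_inf + A_sup) / 2, (A_sup - A_inf) / 2
    bm, br = (b_inf + b_sup) / 2, (b_sup - b_inf) / 2
    return np.min(br - np.abs(bm - Am @ x) - Ar @ np.abs(x))

eta = 0.1   # any eta >= 0 gives the same tolerable solution set
A_inf = np.array([[-1.0, -1.0], [1.0, -1.0]])
A_sup = np.array([[ 1.0,  1.0], [1.0, -1.0]])
b_inf = np.array([-1.0, 1.0])
b_sup = np.array([ 1.0, 1.0 + eta])

print(tol(np.array([0.5, -0.5]), A_inf, A_sup, b_inf, b_sup))  # 0.0, the maximum

# raise the lower endpoint of the second right-hand side component a little:
b_inf_pert = b_inf.copy(); b_inf_pert[1] += 0.01
res = minimize(lambda x: -tol(x, A_inf, A_sup, b_inf_pert, b_sup),
               [0.5, -0.5], method="Nelder-Mead")
print(-res.fun)   # negative (about -5e-3 here): the tolerable set is now empty
```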
The diameter of the solution set is essentially non-zero (namely, $\sqrt{2}$), but the unconstrained maximum of the recognizing functional Tol for all such systems is zero, and it is attained at the point $(0.5,-0.5)$.

Fig. 6: The tolerable solution set for the interval equations systems (23).

At the same time, any arbitrarily small increase in the lower endpoint of the interval $[1,1+\eta]$ in the right-hand side of the second equation makes the tolerable solution set empty. An arbitrarily small reduction of the upper endpoint of the interval $[-1,1]$, located in the first component of the right-hand side vector, produces a similar effect. It turns out that the maximum value of the recognizing functional Tol characterizes very precisely the instability of the original solution set and the zero size of the solution sets of perturbed systems.

As the second example, we consider the problem of constructing a linear function of two variables $a_{1}$ and $a_{2}$,

$b=x_{1}a_{1}+x_{2}a_{2},$ (24)

from the interval data obtained in 3 measurements:

$\begin{array}[]{c|ccc}&\text{\boldmath$a$}_{1}&\text{\boldmath$a$}_{2}&\text{\boldmath$b$}\\\ \hline\cr\\\\[-8.53581pt] 1&[98,100]&[99,101]&[190,210]\\\\[3.0pt] 2&[97,99]&[98,100]&[200,220]\\\\[3.0pt] 3&[96,98]&[97,99]&[190,210]\end{array}$

Note that in these data the three-dimensional uncertainty boxes of measurements 1 and 2, as well as 2 and 3, substantially "overlap" each other: their intersections are boxes with non-empty interiors, the sizes of which are comparable to the sizes of the original data boxes.

Fig. 7: The tolerable solution set of the system of equations (25) in comparison with the box constructed by using the estimate IVE.

To determine the coefficients $x_{1}$ and $x_{2}$, we compose an interval $3\times 2$-system of linear algebraic equations

$\begin{pmatrix}[98,100]&[99,101]\\\\[2.0pt] [97,99]&[98,100]\\\\[2.0pt] [96,98]&[97,99]\end{pmatrix}\begin{pmatrix}x_{1}\\\\[2.0pt] x_{2}\end{pmatrix}=\begin{pmatrix}{[190,210]}\\\\[2.0pt] {[200,220]}\\\\[2.0pt] {[190,210]}\end{pmatrix}.$ (25)

Its matrix has incomplete rank, since it contains a point matrix with rank 1:

$\begin{pmatrix}98&99\\\ 98&99\\\ 98&99\end{pmatrix}.$ (26)

The united solution set for system (25) is unbounded, therefore it is hardly possible to determine, with certainty, the coefficients of the linear function (24) satisfying the weak compatibility between parameters and data (see Section 2). However, the interval matrix of system (25) does not contain linearly dependent point columns, and therefore, according to the Irene Sharaya criterion [13] (see Section 2.1), the tolerable solution set is bounded. It is depicted in Fig. 7, which is drawn by the procedure EqnTol2D from the package IntLinInc2D [14].

The minimum spectral condition number of the point matrices contained in the interval matrix of (25) is $103.83$, and it is reached on the matrix

$\begin{pmatrix}100&99\\\ 97&100\\\ 96&99\end{pmatrix}.$

This result can be obtained, for example, using the simulated annealing algorithm on the set of point matrices contained in the interval matrix of (25).

Numerical solution of the maximization problem for the recognizing functional Tol can be carried out within the MATLAB environment, using the free program tolsolvty2.m (available from the author's page at ResearchGate [21]). It implements a modified version of the so-called $r$-algorithms for non-differentiable optimization proposed and developed by N.Z.
Shor and N.G. Zhurbenko [20]. Using the precision settings specified in it "by default", we get $\max\,\mathrm{Tol}\,=1.9095$, which is reached at $\hat{x}=(5.1857\cdot 10^{-7},2.0603)^{\top}$. Then,

$\mathrm{IVE}\,=\sqrt{2}\cdot 1.9095\cdot 103.83\cdot\frac{\|\hat{x}\|_{2}}{\sqrt{200^{2}+210^{2}+200^{2}}}=1.6399.$

The interval hull of the tolerable solution set for system (25) (that is, its optimal interval enclosure) is the box

$\begin{pmatrix}[-0.9620,3.0227]\\\\[2.0pt] [-0.9320,3.0127]\end{pmatrix},$

which can also be found by the procedure EqnTol2D. We see that the value of IVE is in satisfactory agreement with the radii of the components of the optimal estimate of the solution set, equal to $1.9924$ and $1.9724$ respectively.

In the maximum compatibility method, the argument $\hat{x}=\arg\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,$ of the unconstrained maximum of the recognizing functional plays a crucial role, and, in fact, our variability estimate IVE relies heavily on it. This is why it makes sense to look at the box $\hat{\text{\boldmath$x$}}$ with the components $[\hat{x}_{i}-\mathrm{IVE}\,,\hat{x}_{i}+\mathrm{IVE}\,]$, $i=1,2$. It is also depicted in Fig. 7, and the substantial asymmetry of its location relative to the solution set is, of course, explained by the specific position of the center, the point $\hat{x}$, as well as by the ill-conditioning of the point systems from (25). With other data, the box $\hat{\text{\boldmath$x$}}$ estimates the tolerable solution sets significantly better (see further).

Next, we give an example of the opposite type (in a sense, dual to the previous example), where a linear function of three variables

$b=x_{1}a_{1}+x_{2}a_{2}+x_{3}a_{3}$ (27)

is to be constructed from the data of two experiments summarized below:

$\begin{array}[]{c|cccc}&\text{\boldmath$a$}_{1}&\text{\boldmath$a$}_{2}&\text{\boldmath$a$}_{3}&\text{\boldmath$b$}\\\ \hline\cr\\\\[-8.53581pt] 1&[98,100]&[97,99]&[96,98]&[190,210]\\\\[3.0pt] 2&[99,101]&[98,100]&[97,99]&[200,220]\end{array}$

To find the parameters of function (27), we come to an underdetermined interval system of linear algebraic equations

$\begin{pmatrix}[98,100]&[97,99]&[96,98]\\\\[2.0pt] [99,101]&[98,100]&[97,99]\end{pmatrix}\begin{pmatrix}x_{1}\\\\[1.0pt] x_{2}\\\\[1.0pt] x_{3}\end{pmatrix}=\begin{pmatrix}{[190,210]}\\\\[2.0pt] {[200,220]}\end{pmatrix}.$ (28)

Its matrix is the transposed matrix of system (25), so $\min_{A\in\text{\boldmath$A$}}\mathrm{cond}_{2}\,A$ is the same for it. Also, the matrix of system (28) contains a point matrix of the incomplete rank 1, which is the transpose of (26) (and many more such matrices).

Fig. 8: The tolerable solution set for the interval equations system (28).

Again, the united solution set for system (28) is unbounded, and it is difficult (if at all possible) to determine the coefficients of the linear function (27) relying on the weak compatibility between parameters and data, due to the "infinite variability" of the resulting estimate. Nevertheless, in these adverse conditions, the nonempty tolerable solution set to the interval system of equations (28) is bounded by virtue of the Irene Sharaya criterion [13] (see Section 2.1). In Fig. 8, the tolerable solution set is depicted as a thin hexagonal plate.
Computation of the maximum of the recognizing functional for this system using the code tolsolvty2.m gives the value $\max\mathrm{Tol}\,=3.9698$, which is reached at the point

$\hat{x}=\arg\max\mathrm{Tol}\,=\,\bigl{(}\,2.0603,3\cdot 10^{-6},2.1\cdot 10^{-6}\,\bigr{)}^{\top}.$

It can be taken as an estimate of the coefficients in (27). Then the variability measure of the above estimate is

$\mathrm{IVE}\,=\sqrt{2}\cdot 3.9698\cdot 103.83\cdot\frac{\|\hat{x}\|_{2}}{\sqrt{200^{2}+210^{2}}}=4.1413.$

The interval hull (optimal interval enclosure) of the tolerable solution set for system (28) is the box

$\begin{pmatrix}[-1.9747,4.0302]\\\\[2.0pt] [-1.9899,4.0759]\\\\[2.0pt] [-1.9949,4.1071]\end{pmatrix},$

which can also be computed by the procedure EqnTolR3. The radii of the components of this interval vector are $3.0024$, $3.0329$, $3.0510$ respectively, which is also not very different from the value of IVE.

The example shows that the value IVE works even in the case of $m<n$, when the number of measurements is less than the number of parameters to be determined. But a rigorous justification of this fact awaits further study.

To conclude the section, we present, in Table 1, the results of numerical tests for the interval linear $n\times n$-system

$\left(\begin{array}[]{cccc}\theta&{[0,2]}&\cdots&{[0,2]}\\\\[1.0pt] {[0,2]}&\theta&\cdots&{[0,2]}\\\\[1.0pt] \vdots&\vdots&\ddots&\vdots\\\\[1.0pt] {[0,2]}&{[0,2]}&\cdots&\theta\end{array}\right)\;\left(\begin{array}[]{@{\,}c@{\,}}x_{1}\\\\[1.0pt] x_{2}\\\\[1.0pt] \vdots\\\\[1.0pt] x_{n}\end{array}\right)=\left(\begin{array}[]{@{\;}c@{\;}}{[1,K]}\\\\[1.0pt] {[1,K]}\\\\[1.0pt] \vdots\\\\[1.0pt] {[1,K]}\end{array}\right),$ (29)

with various $n$ and $K$. System (29) resembles that proposed in [9], having exactly the same matrix. But the right-hand sides were taken as positive intervals $[1,K]$, since the original balanced intervals $[-1,1]$ in the system from [9] make the tolerable solution set "too symmetric".

Table 1: Results of the computational tests with system (29)

| $\theta$ | $\mathrm{IVE}\,$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$ | $\theta$ | $\mathrm{IVE}\,$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$ |
|---|---|---|---|---|---|---|---|
| **$n=5$, $K=10$** | | | | **$n=10$, $K=10$** | | | |
| 2.0 | 1.019 | 1.25 | 2.795 | 6.0 | 0.894 | 0.5 | 1.581 |
| 4.0 | 1.081 | 0.875 | 1.957 | 9.0 | 1.491 | 0.389 | 1.230 |
| 6.0 | 0.786 | 0.639 | 1.429 | 12.0 | 0.582 | 0.313 | 0.988 |
| 8.0 | 0.681 | 0.5 | 1.118 | 15.0 | 0.495 | 0.26 | 0.822 |
| 10.0 | 0.534 | 0.41 | 0.917 | 20.0 | 0.396 | 0.203 | 0.640 |
| **$n=5$, $K=20$** | | | | **$n=10$, $K=20$** | | | |
| 2.0 | 2.953 | 3.75 | 8.385 | 6.0 | 2.489 | 1.333 | 4.216 |
| 4.0 | 2.698 | 2.125 | 4.752 | 9.0 | 1.831 | 0.944 | 2.987 |
| 6.0 | 2.015 | 1.472 | 3.292 | 12.0 | 1.432 | 0.729 | 2.306 |
| 8.0 | 1.591 | 1.125 | 2.516 | 15.0 | 1.255 | 0.593 | 1.876 |
| 10.0 | 1.378 | 0.91 | 2.035 | 20.0 | 0.985 | 0.453 | 1.431 |

The interval matrix of system (29) is known to be regular if and only if $\theta>n$ for even $n$ and $\theta>\sqrt{n^{2}-1}$ for odd $n$ [9]. Consequently, in Table 1, the first two rows that correspond to each separate case of $n$ and $K$ refer to systems with singular matrices.
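For readers who wish to rerun such tests, the sketch below builds system (29) for given $n$, $\theta$, $K$ in the inf/sup array format used in the earlier snippets; feeding the result to the Tol maximization and IVE helpers sketched above should approximately reproduce the IVE column of Table 1, up to the accuracy of the crude derivative-free search and of the sampled condition-number minimum. The function name is ours.

```python
import numpy as np

def system29(n, theta, K):
    """Interval n x n system (29): theta on the diagonal (a point value),
    [0, 2] off the diagonal, right-hand sides [1, K]."""
    A_inf = np.zeros((n, n))
    A_sup = 2.0 * np.ones((n, n))
    np.fill_diagonal(A_inf, theta)
    np.fill_diagonal(A_sup, theta)
    return A_inf, A_sup, np.ones(n), K * np.ones(n)

A_inf, A_sup, b_inf, b_sup = system29(5, 8.0, 10)   # one configuration of Table 1
```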
As the parameter $\theta$ grows, the matrix of the system becomes regular and better conditioned. The values of IVE are compared with $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ and $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$, that is, with the Chebyshev norm (max-norm) and the Euclidean norm of the radius of the interval hull of the tolerable solution set (denoted as $\Box\,\varXi_{tol}$). We can see that, with the exception of two cases corresponding to $n=5$ and $K=10,20$, the values of IVE are always within the two-sided bounds given by $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ (lower bound) and $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$ (upper bound). And that is reasonable.

Overall, our numerical experiments confirm the adequacy of the new measure of variability, which gives quite satisfactory approximate estimates of the size of the tolerable solution sets in various situations.

## 5 Discussion

IVE is the first measure of variability proposed in the statistics of interval data for estimation using the maximum compatibility method, and we cannot compare IVE with similar other measures, since they simply do not exist. However, it is useful to correlate the estimate IVE with the ideal mathematical characteristics of the solution set, such as its diameter, in terms of computational convenience and laboriousness.

First of all, IVE reflects instabilities in the solution set better than the diameter (see the first example in Section 4). An instability of the tolerable solution set for an interval linear system arises in the case when the maximum value of the recognizing functional is zero, $\max\mathrm{Tol}\,=0$. Then the tolerable solution set can be either a single-point set or an extended set with non-zero diameter and empty interior [15]. After an arbitrarily small perturbation of the data, the latter situation can abruptly turn into the empty solution set. In any case, this phenomenon is signaled by the zero value of the maximum of the recognizing functional. The corresponding variability measure IVE is also zero, which is quite natural: it makes sense to show only the "stable size" of the solution set. The equality of IVE to zero or "almost zero" thus allows us to diagnose unstable cases.

Next, the problem of computing the diameter, in 2-norm, of the tolerable solution set to an interval linear system of equations is NP-hard in general. This follows from its reducibility to the quadratic programming problem with linear constraints (see [22]). Indeed, the membership of a point in the tolerable solution set to an interval $m\times n$-system of equations is determined by a system of linear inequalities of the size $2m\times 2n$ (the Rohn theorem [12]). To compute the diameter of the tolerable solution set in 2-norm, we have to maximize the quadratic objective function $\|x^{\prime}-x^{\prime\prime}\|^{2}_{2}$ over all pairs of points $x^{\prime}$, $x^{\prime\prime}$ from the tolerable solution set, i. e. satisfying $2m\times 2n$-systems of linear inequalities. So, computing the diameter of the tolerable solution set is not easy.

The diameter of the interval hull of the tolerable solution set can be computed more simply, but it is not better than IVE in any case, since the interval hull is not the solution set itself, and the coarsening resulting from such a replacement may be large.
Calculation of IVE by formula (6) requires solving the data fitting problem, that is, finding $\max\,\mathrm{Tol}\,$ and $\arg\max\,\mathrm{Tol}\,$, and then we need to evaluate the minimum of the condition number of the matrices from the interval data matrix $\text{\boldmath$A$}$. In turn, the recognizing functional Tol is a concave piecewise linear function [15], so computing its maximum is a problem of polynomial complexity. The author efficiently solves it by nonsmooth optimization methods developed in recent decades, in particular, using the $r$-algorithms proposed by N.Z. Shor [20], or using separating plane algorithms (see, e. g., [11, 23]).

The most difficult part in calculating IVE is thus evaluating the minimum condition number of point matrices from a given interval matrix. Computing the exact minimum of the condition number is not simple, but for the practical problems in which the value IVE is applied, it is sufficient to have an approximate estimate of $\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}\,A$ from above. This follows from our considerations in Section 3.3. Sometimes, it is not necessary to compute $\min\,\mathrm{cond}_{2}\,A$ at all, if we have to compare, with each other, the variability of the estimates obtained for the same data matrix $\text{\boldmath$A$}$.

If the interval matrix is "sufficiently narrow", being not very different from a point matrix, then we can approximate

$\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}\,A\;\approx\;\mathrm{cond}_{2}(\mathrm{mid}\,\text{\boldmath$A$}).$ (30)

But in general, this recipe may work poorly, since the left and right sides of the approximate equality (30) can be quite different. In the examples with systems (25) and (28) from Section 4, the condition number of the midpoint matrix is $2.38\cdot 10^{4}$, and, using the simplified formula (30), we are mistaken in evaluating the variability measure IVE by more than 20 times. In the general case, for a more accurate evaluation of $\min\,\mathrm{cond}_{2}\,A$, we can use popular evolutionary optimization methods, such as a genetic algorithm, simulated annealing, particle swarm optimization, etc., within the interval matrix $\text{\boldmath$A$}$. In the numerical experiments from Section 4, the minimum of the condition number was found using the standard simulated annealing program from the free computer mathematics system Scilab.

Note that there is a fundamental difference between computing the variability measure IVE and computing the diameter of the tolerable solution set: in the first case, we calculate a minimum, while in the second we have to find a maximum. Using traditional optimization methods and various heuristics, in the first case we compute an approximation to the minimum from above, and in the second case we find an approximation to the maximum from below. If we want to get, with our variability measure, a guaranteed outer estimation of the solution set, then the upper estimate, which is obtained by calculating the minimum in IVE, is preferable.

There exists one more viewpoint on the variability measure IVE. In traditional probabilistic statistics, the phenomenon of collinearity of data (also called "multicollinearity") plays a large role. It is the presence of a linear dependence between the input (predictor) variables of the regression model. The $k$ variables of the model in question are usually called _collinear_ if the vectors representing them lie in a linear space of dimension less than $k$ [1], so that one of these vectors is a linear combination of the others.
In practice, such exact collinearity of data is rare, but real computational problems in data fitting often start when the data vectors are "almost linearly dependent". Then the parameter estimates are unstable, which leads to increased statistical uncertainty, i. e., an increase in the variance of the estimates. According to modern views, the collinearity of data is most adequately described by the condition number of the matrix made up of these data (see, e. g., [1], Chapter 3).

In this sense, our IVE is, in fact, a measure of the collinearity of the data, corrected with the help of the actual value of the estimate and the compatibility of this estimate with the data (which is indicated by the maximal value of the recognizing functional). The minimum over all $\mathrm{cond}_{2}A$ for $A\in\text{\boldmath$A$}$ is taken due to the specifics of the strong compatibility of parameters and data, and it agrees well with the regularizing role of the tolerable solution set (see [18]).

With this interpretation, IVE makes sense even with a negative maximum of the recognizing functional, $\max\,\mathrm{Tol}$, when the tolerable solution set is empty and parameters of the linear function (1) that are strongly compatible with the data do not exist. The absolute value of IVE still shows, up to a certain scaling coefficient, a measure of the collinearity of the data (a measure of their ill-conditioning), and the negative sign indicates the status of the solution to the problem, i. e., that the parameter vector computed is not strongly compatible with the data, but only provides the best possible approximation for the input data of the problem.

The immediate goal of further research is to justify the use of IVE for underdetermined situations, where the number $m$ of observations is less than the number $n$ of parameters to be determined. The maximum compatibility method works well in this case too, we get parameter estimates and we can calculate their values of IVE, but its application needs to be justified, at least at the same level of rigor as was done in this work for $m\geq n$.

#### Acknowledgements

The author is grateful to Alexander Bazhenov, Sergey Kumkov, and Sergei Zhilin for stimulating discussions and valuable comments on the work. Also, the author thanks the anonymous reviewers for their constructive criticism and good suggestions.

## References

* [1] D.A. Belsley, E. Kuh, R.E. Welsch, Regression Diagnostics, Wiley-Interscience, Hoboken, N.J., 1980, 2004.
* [2] G.H. Golub, Ch.F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, 1996, 2013.
* [3] R.A. Horn, Ch.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994.
* [4] L. Jaulin, M. Kieffer, O. Didrit, E. Walter, Applied Interval Analysis, Springer, London, 2001.
* [5] R.B. Kearfott, M. Nakao, A. Neumaier, S. Rump, S.P. Shary, P. van Hentenryck, Standardized notation in interval analysis, Computational Technologies 15 (2010), No. 1, pp. 7–13.
* [6] V. Kreinovich, S.P. Shary, Interval methods for data fitting under uncertainty: a probabilistic treatment, Reliable Computing 23 (2016), pp. 105–140. URL: http://interval.louisiana.edu/reliable-computing-journal/volume-23/reliable-computing-23-pp-105-140.pdf (accessed 10 March 2020).
* [7] M. Milanese, J. Norton, H. Piet-Lahanier, E. Walter (Eds.), Bounding Approaches to System Identification, Plenum Press, New York, 1996. DOI: 10.1007/978-1-4757-9545-5
* [8] R.E. Moore, R.B. Kearfott, M.J.
Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009. * [9] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990. * [10] H.T. Nguyen, V. Kreinovich, B. Wu, G. Xiang, Computing Statistics under Interval and Fuzzy Uncertainty. Applications to Computer Science and Engineering, Springer, Berlin-Heidelberg, 2012. * [11] E.A. Nurminski, Separating plane algorithms for convex optimization, Mathematical Programming 76 (1997), pp. 373–391. DOI: 10.1007/BF02614389 * [12] J. Rohn, Inner solutions of linear interval systems, in: Nickel K. (Ed.), Interval Mathematics 1985, Lecture Notes in Computer Science 212, Springer, New York, 1986, pp. 157–158. * [13] I.A. Sharaya, On unbounded tolerable solution sets, Reliable Computing 11 (2005), pp. 425–432. DOI: 10.1007/s11155-005-0049-9 * [14] I.A. Sharaya, IntLinInc2D, a MATLAB software package for visualization of solution sets to interval linear 2D systems. Novosibirsk, 2014. URL: http://www.nsc.ru/interval/sharaya * [15] S.P. Shary, Solving the linear interval tolerance problem, Mathematics and Computers in Simulation 39 (1995), pp. 53–85. DOI: 10.1016/0378-4754(95)00135-K * [16] S.P. Shary, Solvability of interval linear equations and data analysis under uncertainty, Automation and Remote Control 73 (2012), pp. 310–322. DOI: 10.1134/S0005117912020099 * [17] S.P. Shary, Maximum consistency method for data fitting under interval uncertainty, Journal of Global Optimization 66 (2016), pp. 111–126. DOI: 10.1007/s10898-015-0340-1 * [18] S.P. Shary, Interval regularization for imprecise linear algebraic equations. Deposited in arXiv.org on 27 Sep 2018, No. arXiv:1810.01481. * [19] S.P. Shary, Weak and strong compatibility in data fitting under interval uncertainty, Advances in Data Science and Adaptive Analysis 12 (2020) (in press). * [20] N.Z. Shor, N.G. Zhurbenko, A minimization method using space dilation in the direction of difference of two successive gradients, Cybernetics 7(3) (1971), pp. 450–459. DOI: 10.1007/BF01070454 * [21] TOLSOLVTY2, a MATLAB code for examining solvability of the interval linear tolerance problem. URL: https://www.researchgate.net/publication/294889566_TOLSOLVTY2 * [22] S.A. Vavasis, Complexity theory: Quadratic programming, in: C.A. Floudas and P.M. Pardalos (Eds.), Encyclopedia of Optimization. Second Edition, New York, Springer, 2009, pp. 451–453. * [23] E. Vorontsova, Extended separating plane algorithm and NSO-solutions of PageRank problem, in: Y. Kochetov, M. Khachay, V. Beresnev, E. Nurminski, P. Pardalos (Eds.), Discrete Optimization and Operations Research. Proceedings of 9th International Conference DOOR 2016, Vladivostok, Russia, September 19-23, 2016, Lecture Notes in Computer Science 9869, Cham, Switzerland, Springer International, 2016, pp. 547–560. DOI: 10.1007/978-3-319-44914-2_43 * [24] D.S. Watkins, Fundamentals of Matrix Computations, Wiley-Interscience, New York, 2002.
2024-09-04T02:54:59.042857
2020-03-11T06:31:10
2003.05133
{ "authors": "B\\\"ulent Karas\\\"ozen", "full_text_license": null, "license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/", "provenance": "arxiv-papers-0000.json.gz:26153", "submitter": "Bulent Karas\\\"ozen", "url": "https://arxiv.org/abs/2003.05133" }
arxiv-papers
# Model Order Reduction in Neuroscience Bülent Karasözen Institute of Applied Mathematics & Department of Mathematics, Middle East Technical University, Ankara-Turkey<EMAIL_ADDRESS> ###### Abstract The human brain contains approximately $10^{9}$ neurons, each with approximately $10^{3}$ connections, synapses, with other neurons. Most sensory, cognitive and motor functions of our brains depend on the interaction of a large population of neurons. In recent years, many technologies have been developed for recording large numbers of neurons either sequentially or simultaneously. Increases in computational power and algorithmic developments have enabled advanced analyses of neuronal populations, parallel to the rapid growth in quantity and complexity of the recorded neuronal activity. Recent studies made use of dimensionality and model order reduction techniques to extract coherent features which are not apparent at the level of individual neurons. It has been observed that the neuronal activity evolves on low-dimensional subspaces. The aim of model reduction of large-scale neuronal networks is accurate and fast prediction of patterns and their propagation in different areas of the brain. Spatiotemporal features of the brain activity are identified on low dimensional subspaces with methods such as dynamic mode decomposition (DMD), proper orthogonal decomposition (POD), discrete empirical interpolation (DEIM) and combined parameter and state reduction. In this paper, we give an overview of the dimensionality reduction and model order reduction techniques currently used in neuroscience. Keywords: neuroscience, dimensionality reduction, proper orthogonal decomposition, discrete empirical interpolation, dynamic mode decomposition, state and parameter estimation. Classification [MSC 2010]: 93A15, 92C55, 37M10, 37M99, 37N40, 65R32. ## 1 Introduction Due to the advances in recording and imaging technologies, the number of recorded signals from brain cells has increased significantly in the last few years. The recorded spatio-temporal neural activity gives rise to networks with complex dynamics. In neuroscience, molecular and cellular level details are incorporated in large-scale models of the brain in order to reproduce phenomena such as learning and behavior. The rapid growth of simultaneous neuronal recordings in scale and resolution brings challenges to the analysis of the neuronal population activity. New computational approaches have to be developed to analyze, visualize, and understand large-scale recordings of neural activity. While algorithmic developments and the availability of significantly more computing power have enabled the analysis of larger neuronal networks, these advances cannot keep pace with the increasing size and complexity of the recorded activity. The activity of complex networks of neurons can often be described by relatively few distinct patterns, and model order reduction techniques enable us to identify these coherent spatial–temporal patterns. The presence or absence of a neural mechanism can then be analyzed for neuronal populations. Dimensionality reduction methods [6], summarized in Section 2, are data-driven statistical techniques for forming and evaluating hypotheses about population activity structure. One of the goals of neuroscience is fast and accurate prediction of the potential propagation in neurons. The differential equations describing the propagation of the potential in neurons were developed by Hodgkin and Huxley [12]. 
They consist of a coupled system of ordinary and partial differential equations (ODEs and PDEs). The dimension of the associated discretized systems is very large for accurately simulating neurons with realistic morphological structure and synaptic inputs. In Section 3 we present two model order reduction approaches based on POD and DEIM [5] which can accurately predict the potential propagation in large scale neuronal networks, leading to important speedups [17, 16, 2]. Using functional neuroimaging data from electroencephalography (EEG) or functional magnetic resonance imaging (fMRI), the effective connectivity among different regions of the brain can be inferred by dynamic causal modeling (DCM) [7]. Effective connectivity is parameterised in terms of coupling among unobserved brain states and neuronal activity in different regions. In Section 4 we describe a combined state and parameter reduction for parameter estimation and identification [10] to extract effective connectivity in neuronal networks from measured data, such as EEG or fMRI recordings. In Section 5 the data-driven, equation-free model order reduction method dynamic mode decomposition (DMD) is described for identifying sleep spindle networks [3]. Reduced order models with POD and DEIM and four variants of them are presented for neuronal synaptic plasticity and neuronal spiking networks in Section 6. ## 2 Dimensionality reduction methods Coordination of responses across neurons exists only at the level of the population and not at the level of single neurons. The presence or absence of a neural mechanism can be analyzed for neuronal populations. Dimensionality reduction methods are data-driven statistical techniques for forming and evaluating hypotheses about population activity structure. They produce low-dimensional representations of high-dimensional data with the aim of extracting coherent patterns which preserve or highlight some feature of interest in the data [6]. The recorded neurons, of dimension $D$, are likely not independent of each other, because they belong to a common network of neuronal populations. From the high-dimensional data of neuronal recordings, a smaller number of explanatory variables $K$ ($K<D$) are extracted with the help of dimensionality reduction methods. The explanatory variables are not directly observed, therefore they are referred to as latent variables. The latent variables define a $K$-dimensional space representing coherent patterns of the noisy neural activity of $D$ neurons. There exist several dimensionality reduction methods, which differ in the statistical interpretation of the preserved and discarded features of the neuronal populations. We summarize the commonly used statistical methods for dimensionality reduction following [6], where further references about the methods can be found. Principal component and factor analysis: The most widely known technique to extract coherent patterns from high dimensional data is the modal decomposition. A particularly popular modal decomposition technique is principal component analysis (PCA), which derives modes ordered by their ability to account for energy or variance in the data. In particular, PCA is a static technique and does not model temporal dynamics of time-series data explicitly, so it often performs poorly in reproducing dynamic data, such as recordings of neural activity. 
The low-dimensional space identified by PCA captures variance of all types, including firing rate variability and spiking variability, whereas factor analysis (FA) discards the independent variance of each neuron and preserves variance that is shared across neurons. Time series methods: The temporal dynamics of the population activity can be identified if the data come from a time series. The most commonly used time series methods for dimensionality reduction of neural recordings are: hidden Markov models (HMM) [18], kernel smoothing followed by a static dimensionality reduction method, Gaussian process factor analysis (GPFA) [35], latent linear dynamical systems (LDS) [4] and latent nonlinear dynamical systems (NLDS) [26]. They produce latent neural trajectories that capture the shared variability across neurons. The HMM is applied when a jump between discrete states of neurons exists; the other methods identify smooth changes in firing rates over time. Methods with dependent variables: In many neuronal recordings the high-dimensional firing rate space is associated with labels of one or more dependent variables, like stimulus identity, decision identity or a time index. The dimensionality reduction aims in this case to project the data such that differences in these dependent variables are preserved. Linear discriminant analysis (LDA) can be used to find a low-dimensional projection in which the $G$ groups to which the data points belong are well separated. Nonlinear dimensionality reduction methods: All the previous methods assume a linear relationship between the latent and observed variables. When the data lie on a low-dimensional, nonlinear manifold in the high-dimensional space, a linear method may require more latent variables than the number of true dimensions of the data. The most frequently used methods to identify nonlinear manifolds are Isomap [31] and locally linear embedding (LLE) [28]. Because the nonlinear methods use local neighborhoods to estimate the structure of the manifold, and population responses may not evenly explore the high-dimensional space, these methods should be used with care. ## 3 Proper orthogonal decomposition (POD) and discrete empirical interpolation (DEIM) for the Hodgkin-Huxley model One of the goals of neuroscience is fast and accurate prediction of the potential propagation in neurons. The differential equations describing the propagation of the potential in neurons are the Hodgkin and Huxley (HH) cable equations [12]. They consist of a coupled system of ordinary (ODEs) and partial differential equations (PDEs). Accurate simulation of the morphology, kinetics and synaptic inputs of neurons requires the solution of large systems of nonlinear ODEs. The complexity of the models is determined by the synapse density along the dendritic length (about one synapse per micron, $1\mu$m). In simulations, one synapse per micron on a cell with a $5$ mm dendrite requires $5,000$ compartments, each with $10$ variables, which results in a coupled nonlinear system of $50,000$ ODEs [17, 16]. To recover the complex dynamics, efficient reduced order neuronal methods have been developed using POD and DEIM applied to snapshots of the coupled PDEs and ODEs discretized in space and time [17, 16, 2]. In this section we describe two of them. They differ in the formulation of the HH cable equation and of the equations for the gating variables. 
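Before turning to the two formulations, a minimal generic sketch of the POD–DEIM workflow may help fix ideas. The following Python/NumPy code (an illustration with synthetic snapshots; all names and dimensions are hypothetical and not taken from [17, 16, 2]) computes a POD basis via the singular value decomposition of a snapshot matrix and selects DEIM interpolation indices with the greedy procedure of [5]:

```python
import numpy as np

def pod_basis(snapshots, k):
    """POD basis: the first k left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]

def deim_indices(U):
    """Greedy DEIM point selection for a basis U of the nonlinear term [5]."""
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # interpolation coefficients matching column j at the selected points
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c           # residual of the new basis vector
        p.append(int(np.argmax(np.abs(r))))  # next point: largest residual entry
    return np.array(p)

# toy usage: a 1000-dimensional state with 200 snapshots
rng = np.random.default_rng(1)
X = rng.random((1000, 200))        # columns play the role of snapshots v(t_i)
V = pod_basis(X, 15)               # reduced state basis
W = pod_basis(np.sin(3 * X), 30)   # basis for a (synthetic) nonlinear term
idx = deim_indices(W)              # DEIM interpolation points
```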
### 3.1 Morphologically accurate reduced order modeling The neuronal full order models (FOMs) in [17, 16] consist of $D$ branched dendritic neurons with $B=\sum_{d=1}^{D}B_{d}$ branches meeting at the soma, where the $d^{th}$ neuron has $B_{d}$ branches. It is assumed that branch $b$ carries $C$ distinct ionic currents with associated densities $G_{bc}(x)$ and reversal potentials $E_{c},c=1,\ldots,C$. The kinetics of current $c$ on branch $b$ is governed by the $F_{c}$ gating variables $w_{bcf},f=1,\ldots,F_{c}$. When subjected to input at $S_{b}$ synapses, the nonlinear HH cable equation for the transmembrane potential $v_{b}(x,t)$, together with the equations for the gating variables $w_{bcf}$, is given by (see [2] Fig. 1, model network with three cables) $\displaystyle a_{b}C_{m}\frac{\partial v_{b}}{\partial t}=$ $\displaystyle\frac{1}{2R_{i}}\frac{\partial}{\partial x}\left(a_{b}^{2}\frac{\partial v_{b}}{\partial x}\right)$ (1) $\displaystyle- a_{b}\sum_{c=1}^{C}G_{bc}(x)(v_{b}-E_{c})\prod_{f=1}^{F_{c}}w_{bcf}^{q_{cf}}$ $\displaystyle-\frac{1}{2\pi}\sum_{s=1}^{S_{b}}g_{bs}(t)\delta(x-x_{bs})(v_{b}-E_{bs})$ $\displaystyle\frac{\partial w_{bcf}}{\partial t}$ $\displaystyle=$ $\displaystyle\frac{w_{cf,\infty}(v_{b})-w_{bcf}}{\tau_{cf}(v_{b})},\quad 0<x<l_{b},\;t>0,$ (2) where $g_{bs}(t)$ (in nS) is the time course, $x_{bs}$ is the spatial location and $E_{bs}$ is the reversal potential of the $s$th synapse on branch $b$. The variables and parameters in (1) are described in [17, 16]. These branch potentials interact at $J$ junction points, where junction $J$ denotes the soma. The $D$ dendrites join at the soma. Continuity of the potential and current balance at the soma lead to a common somatic potential denoted by $v_{\sigma}(t)$. The networked form of (1) then becomes $\displaystyle a_{b}C_{m}\frac{\partial v_{\sigma}}{\partial t}=$ $\displaystyle\frac{\pi}{A_{\sigma}R_{i}}\sum_{d=1}^{D}\frac{\partial}{\partial x}\left(a_{b_{J}^{d}}^{2}(l_{b_{J}^{d}})\frac{\partial v_{b_{J^{d}}}(l_{b_{J^{d}}},t)}{\partial x}\right)$ (3) $\displaystyle- a_{b}\sum_{c=1}^{C}G_{\sigma c}(x)(v_{\sigma}-E_{c})\prod_{f=1}^{F_{c}}w_{\sigma cf}^{q_{cf}}(t)$ $\displaystyle-\frac{1}{A_{\sigma}}\sum_{s=1}^{S_{b}}g_{\sigma s}(t)(v_{\sigma}(t)-E_{\sigma s})$ $\displaystyle\frac{\partial w_{\sigma cf}(t)}{\partial t}$ $\displaystyle=$ $\displaystyle\frac{w_{cf,\infty}(v_{\sigma}(t))-w_{\sigma cf}(t)}{\tau_{cf}(v_{\sigma}(t))},\quad t>0.$ (4) When the cell is partitioned into $N$ compartments, with $C$ distinct ionic currents per compartment and with $F$ gating variables per current, the following nonlinear ODEs are obtained $\displaystyle v^{\prime}(t)=$ $\displaystyle Hv(t)-(\Phi(w(t))e).v(t)+\Phi(w(t))E_{i}$ (5) $\displaystyle-G(t).(v(t)-E_{s}),\quad v(t)\in\mathbb{R}^{N}$ $\displaystyle w^{\prime}(t)=$ $\displaystyle(A(v(t))-w(t))./B(v(t)),\quad w(t)\in\mathbb{R}^{N\times C\times F}$ (6) where $H\in\mathbb{R}^{N\times N}$ is the Hines matrix [11], $e=[1\;1\cdots 1]^{T}\in\mathbb{R}^{C}$, and the ‘dot’ operator, $a.b$, denotes element-wise multiplication. $E_{i}$ and $E_{s}$ are the vectors of channel reversal potentials and of synaptic reversal potentials, respectively. Eq. (5) is discretized in time by a second order discretized Euler scheme [11]. In [16], the POD and DEIM modes are constructed from the snapshots of $v(t)$ and of the nonlinear term $N(v(t),w(t))\equiv(\Phi(w(t))e).v(t)-\Phi(w(t))E_{i}$ taken at times $t_{1},t_{2},\ldots,t_{n}$. 
The reduced membrane potential $v_{r}$ is constructed using the POD basis, and the reduced gating variables $w_{r}$ are obtained after applying DEIM to the nonlinear terms. The reduced order model in [16] preserves the spatial precision of the synaptic input and accurately captures the subthreshold and spiking behaviors. In [17] a linearized quasi-active reduced neuronal model is constructed using balanced truncation and ${\mathcal{H}}_{2}$ approximation of transfer functions in time. These ROMs preserve the input-output relationship but reproduce only subthreshold dynamics. ### 3.2 Energy stable neuronal reduced order modeling In [1, 2] a different form of the HH cable equation and of the ODEs for the gating variables is considered. The unknowns are the intracellular potential $v(x,t)$ and the three gating variables $m(x,t),\;h(x,t)$, and $n(x,t)$, which describe the activation and inactivation of the sodium channels and the activation of the potassium channels, respectively. For a single cable in the computational domain $(x,t)\in[0,L]\times(0,T]$, the distribution of the potential $u(x,t)$ is governed by [1, 2] $\frac{\partial u}{\partial t}=\frac{\mu}{a(x)}\left(a(x)^{2}u_{x}\right)_{x}-\frac{1}{C_{m}}g(m,h,n)u+\frac{1}{C_{m}}f(m,h,n,x,t),$ (7) where $a(x)$ is the radius of the neuron, $C_{m}$ is the specific membrane capacitance, and $\mu=\frac{1}{2C_{m}R_{i}}>0$, with $R_{i}$ the axial resistivity. The conductance $g(x,t)$ is a polynomial in the gating variables, $g(x,t)=g_{1}m^{3}h+g_{2}n^{4}+g_{3}>0,$ (8) with the source term $f(m,h,n,x,t)=g_{1}E_{1}m^{3}h+g_{2}E_{2}n^{4}+g_{3}E_{3}-i(x,t),$ (9) where $E_{l},\;l=1,2,3$ are the equilibrium potentials and $i(x,t)$ is the input current at $x$, $i(x,t)=\sum_{s=1}^{N_{s}}i_{s}(x,t),\quad x\in[0,L].$ (10) The nonlinear ODEs for the gating variables are given by $\displaystyle\frac{\partial m}{\partial t}$ $\displaystyle=$ $\displaystyle\alpha_{m}(v(x,t))(1-m(x,t))-\beta_{m}(v(x,t))m(x,t),$ $\displaystyle\frac{\partial h}{\partial t}$ $\displaystyle=$ $\displaystyle\alpha_{h}(v(x,t))(1-h(x,t))-\beta_{h}(v(x,t))h(x,t),$ (11) $\displaystyle\frac{\partial n}{\partial t}$ $\displaystyle=$ $\displaystyle\alpha_{n}(v(x,t))(1-n(x,t))-\beta_{n}(v(x,t))n(x,t).$ Expressions for $\alpha_{m},\;\alpha_{h},\;\alpha_{n},\;\beta_{m},\;\beta_{h},\;\beta_{n}$ and the boundary conditions can be found in [2]. In [1, 2], a model network with three cables connected to a soma is used. The equations governing the potential propagation in a network of $N_{c}$ neuron cables (dendrites and/or axons), labeled with the superscript ${}^{(c)},\;c=1,\ldots,N_{c}$, are given as $\displaystyle\frac{\partial v^{(c)}}{\partial t}=$ $\displaystyle\frac{\mu}{a^{(c)}(x^{(c)})}\left(\left(a^{(c)}(x^{(c)})\right)^{2}v^{(c)}_{x}\right)_{x}-\frac{1}{C_{m}}g\left(m^{(c)},h^{(c)},n^{(c)}\right)v^{(c)}$ $\displaystyle+$ $\displaystyle\frac{1}{C_{m}}f\left(m^{(c)},h^{(c)},n^{(c)},x^{(c)},t\right)$ (12) $\displaystyle\frac{\partial m^{(c)}}{\partial t}$ $\displaystyle=$ $\displaystyle\alpha_{m}(v^{(c)})(1-m^{(c)})-\beta_{m}(v^{(c)})m^{(c)},$ $\displaystyle\frac{\partial h^{(c)}}{\partial t}$ $\displaystyle=$ $\displaystyle\alpha_{h}(v^{(c)})(1-h^{(c)})-\beta_{h}(v^{(c)})h^{(c)},$ (13) $\displaystyle\frac{\partial n^{(c)}}{\partial t}$ $\displaystyle=$ $\displaystyle\alpha_{n}(v^{(c)})(1-n^{(c)})-\beta_{n}(v^{(c)})n^{(c)},$ for $x^{(c)}\in\Omega^{(c)}=[0,L^{(c)}]$, together with boundary conditions. The semi-discrete forms of these equations are approximated using energy stable summation by parts operators [1, 2] for the model network. 
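As a small illustration of the gating dynamics (11), the sketch below advances the gates at a clamped potential with an explicit Euler step. The rate functions are the classic squid-axon Hodgkin-Huxley expressions, used here only as a stand-in (an assumption; the expressions actually used in [1, 2] are given in those references):

```python
import numpy as np

# Classic Hodgkin-Huxley rate functions (assumed stand-ins), v in mV.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def euler_gate_step(gates, v, dt):
    """One explicit Euler step of the gating ODEs (11) at fixed potential v."""
    m, h, n = gates
    m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
    return m, h, n

# relax the gates toward their steady state at a clamped potential of -65 mV
gates = (0.0, 1.0, 0.0)
for _ in range(10000):
    gates = euler_gate_step(gates, -65.0, 0.01)   # dt = 0.01 ms
print(gates)   # approaches (m_inf, h_inf, n_inf) at v = -65 mV
```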
The reduced order bases (ROB) for multiple cables of identical lengths are assembled into a network in block form. The block structure of the ROB allows a flexible structure-preserving model reduction approach with an independent approximation in each cable; the energy stability and accuracy properties follow from this block structure. Computation of the time varying reduced variables in the gating equations at every time $t$ is costly because it scales with the dimension of the FOM. A nonnegative variant of the discrete empirical interpolation method (NNDEIM) is developed in [2] to preserve the structure and energy stability properties of the equations. The capability of the greedy-based approach to generate accurate predictions in large-scale neuronal networks is demonstrated for systems with more than $15,000$ degrees of freedom (dofs). A state variable ROB of dimension $l=15$ built from POD modes, together with nonnegative ROBs of dimension $p=60$ built from NNDEIM modes, is constructed using a greedy approach to predict the potential variation at the soma. The speedup of the simulations is about $20$, whereas with Galerkin projection alone, without using the NNDEIM, it is only $1.3$. ## 4 Combined state and parameter reduction for dynamic causal modelling In neuroscience, the effective connectivity among different regions of the brain is inferred from neuroimaging data, such as EEG or fMRI recordings, using the method of dynamic causal modeling (DCM) [7]. Effective connectivity is parameterised in terms of coupling among unobserved brain states and neuronal activity in different regions. In DCM the neuronal activity of the observed brain regions is represented as a SISO (single input single output) linear state-space system $\dot{x}=A_{\mathrm{d}yn}(\mu)x+B_{\mathrm{d}yn}u$ (14) with the parameterized connectivity matrix $A_{\mathrm{d}yn}(\mu)$ and the external input matrix $B_{\mathrm{d}yn}$. The nonlinear DCM hemodynamic forward sub-model (balloon model) [7] transforms the neuronal activity into the measured BOLD (blood oxygen level dependent) response. Its linearization around the equilibrium results in the following single input, single output (SISO) system: $\displaystyle B_{obs}$ $\displaystyle:=$ $\displaystyle(1\;0\;0\;0)^{T},\quad C_{obs}=(0\;0\;V_{0}k_{1}\;V_{0}k_{2}),$ (15) $\displaystyle\dot{z}_{i}$ $\displaystyle=$ $\displaystyle A_{obs}z_{i}+B_{obs}x_{i},$ (16) $\displaystyle y_{i}$ $\displaystyle=$ $\displaystyle C_{obs}z_{i},$ (17) $\displaystyle z_{0}$ $\displaystyle=$ $\displaystyle(0\;0\;0\;0)^{T},$ (18) $A_{\mathrm{o}bs}:=\left(\begin{array}[]{cccc}\frac{1}{\tau_{s}}&\frac{1}{\tau_{f}}&0&0\\\ 1&0&0&0\\\ 0&\frac{1}{\tau_{0}E_{0}}(1-(1-E_{0})(1-\ln(1-E_{0})))&\frac{1}{\tau_{0}}&\frac{1-\alpha}{\tau_{0}\alpha}\\\ 0&\frac{1}{\tau_{0}}&0&\frac{1}{\tau_{0}\alpha}\end{array}\right).$ (19) The fMRI measurements at the $i^{th}$ brain region are reflected by the output variables $y_{i}$. For the meaning of the variables and parameters in (15) and (19) we refer to [10, 9]. 
The linearized forward sub-models are embedded into the fMRI connectivity model $\left(\begin{array}[]{c}\dot{x}\\\ \dot{z}_{1}\\\ \dot{z}_{2}\\\ \vdots\\\ \dot{z}_{N_{dyn}}\end{array}\right)=\left(\begin{array}[]{ccccc}A_{dyn}(\mu)&0&0&\cdots&0\\\ \delta_{1,1}&A_{obs}&0&&\\\ \delta_{1,2}&0&A_{obs}&&\\\ \vdots&&\ddots&\\\ \delta_{1,N_{dyn}}&&&A_{obs}\end{array}\right)\left(\begin{array}[]{c}x\\\ z_{1}\\\ z_{2}\\\ \vdots\\\ z_{N_{dyn}}\end{array}\right)+\left(\begin{array}[]{c}B_{dyn}\\\ 0\\\ 0\\\ \vdots\\\ 0\end{array}\right)v,$ (20) $y=\left(0\left(\begin{array}[]{ccc}C_{obs}&&\\\ &\ddots&\\\ &&C_{obs}\end{array}\right)\right)\left(\begin{array}[]{c}x\\\ z_{1}\\\ z_{2}\\\ \vdots\\\ z_{N_{dyn}}\end{array}\right),$ (21) where $\delta_{ij}\in\mathbb{R}^{4\times N_{\mathrm{d}yn}}$ denotes the Kronecker matrix with a single non-zero element located at the $(i,j)^{th}$ component. The linearized state-space forward model (20) and (21) corresponds to a multiple input, multiple output (MIMO) system $\dot{x}(t)=A(\mu)x(t)+Bu(t),\qquad y(t)=Cx(t),$ (22) where $x\in\mathbb{R}^{N}$ is the internal state, $u\in\mathbb{R}^{J}$ the external input, $y\in\mathbb{R}^{O}$ the observed output, and $\mu$ are the parameters describing the different conditions. For a large number $M:=N^{2}$ of parameters, the computational cost of inferring the parameters and states is very high. In [10, 8] a combined state and parameter model order reduction is developed for parameter estimation and identification to extract the effective connectivity. The inversion procedure consists of two phases, an offline and an online phase. In the offline phase, the underlying parameterized model is reduced jointly in states and parameters. In the online phase, the reduced order model's parameters are estimated to fit the observed experimental data. Because the reduction is carried out in the offline phase, the computational cost of the subsequent inversion is low. The simultaneous reduction of state and parameter space is based on Galerkin projections with the orthogonal matrices $V\in\mathbb{R}^{N\times n}$ for the states and $P\in\mathbb{R}^{M\times m}$ for the parameters. The reduced model is of much lower order, $n\ll N$ and $m\ll M$, than the original full order model. The reduced states $x_{r}(t)\in\mathbb{R}^{n}$ and the reduced parameters $\mu_{r}\in\mathbb{R}^{m}$ are computed from $\dot{x}_{r}(t)=A_{r}(\mu_{r})x_{r}(t)+B_{r}u(t),\qquad y_{r}(t)=C_{r}x_{r}(t)$ (23) with a reduced initial condition $x_{r,0}=V^{T}x_{0}$ and the reduced components $\displaystyle\mu_{r}$ $\displaystyle=$ $\displaystyle P^{T}\mu\in\mathbb{R}^{m},$ $\displaystyle A_{r}(\mu_{r})$ $\displaystyle=$ $\displaystyle V^{T}A(P\mu_{r})V\in\mathbb{R}^{n\times n},$ $\displaystyle B_{r}$ $\displaystyle=$ $\displaystyle V^{T}B\in\mathbb{R}^{n\times J},$ $\displaystyle C_{r}$ $\displaystyle=$ $\displaystyle CV\in\mathbb{R}^{O\times n}.$ In the online phase, the optimization based inverse problem is combined with the reduction of the state and parameter space. The inversion is based on the generalized data-driven optimization approach of [23] to construct the ROMs, enhanced with a Monte-Carlo method to speed up the computations. The state projection $V\in\mathbb{R}^{N\times n}$ and the parameter projection $P\in\mathbb{R}^{M\times m}$ are determined iteratively based on a greedy algorithm by maximizing the error between the high-fidelity original and the low-dimensional reduced model in a Bayesian setting. 
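To illustrate the offline projection step, here is a minimal sketch (Python/NumPy; the dimensions, the random operators and the parameter-to-matrix mapping are hypothetical stand-ins, not the actual implementation of [10, 8]) of assembling the reduced components in (23) from given projection matrices $V$ and $P$:

```python
import numpy as np

N, n = 100, 8        # full and reduced state dimensions (illustrative)
M, m = N * N, 6      # full and reduced parameter dimensions, M = N^2

rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((N, n)))[0]  # orthogonal state projection
P = np.linalg.qr(rng.standard_normal((M, m)))[0]  # orthogonal parameter projection
B = rng.standard_normal((N, 1))                   # input matrix
C = rng.standard_normal((1, N))                   # output matrix

def A(mu):
    """Connectivity matrix assembled from the full parameter vector (a stand-in)."""
    return mu.reshape(N, N)

def reduced_operators(mu_r):
    """Reduced components of (23): A_r = V^T A(P mu_r) V, B_r = V^T B, C_r = C V."""
    return V.T @ A(P @ mu_r) @ V, V.T @ B, C @ V

A_r, B_r, C_r = reduced_operators(rng.standard_normal(m))
print(A_r.shape, B_r.shape, C_r.shape)   # (8, 8) (8, 1) (1, 8)
```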
Numerical experiments with the DCM model of [23] show that the high-dimensional neuronal network system is inverted efficiently, owing to the short offline durations. In the offline phase, the Monte-Carlo enhanced methods require more than an order of magnitude less offline time than the original and data-misfit enhanced methods. In the online phase the reduced order model achieves a speedup of about an order of magnitude compared to the full-order inversion. The output error of the data-misfit enhanced method is close to that of the full order method. The output errors of the Monte-Carlo method decrease with an increasing number of simulations but do not reach the output error of the full order system. The MATLAB source code is available [8]. ## 5 Dynamic mode decomposition Dynamic mode decomposition (DMD) is a data-driven, equation-free ROM technique [20]. It was initially developed to reduce the high dimensional dynamic data obtained from experiments and simulations in fluid mechanics into a small number of coupled spatial–temporal modes [29, 30]. In [3], DMD was applied to explore spatial–temporal patterns in large-scale neuronal recordings. DMD can be interpreted as a combination of the discrete Fourier transform (DFT) in time and principal component analysis (PCA) [15] in space. Both PCA and independent component analysis (ICA) [13] are static techniques, which perform poorly in reproducing dynamic data, such as recordings of neural activity. The data are taken from electrocorticography (ECoG) recordings: voltages from $n$ channels of an electrode array are sampled every $\Delta t$. The measurements at snapshot $k$ are arranged into the column vector ${\mathbf{x}}_{k}$. The $m$ snapshots in time are collected into two data matrices, shifted in time by $\Delta t$, ${\mathbf{X}}=\left[\begin{array}[]{cccc}|&|&&|\\\ {\mathbf{x}}_{1}&{\mathbf{x}}_{2}&\cdots&{\mathbf{x}}_{m-1}\\\ |&|&&|\end{array}\right],\quad{\mathbf{X}}^{\prime}=\left[\begin{array}[]{cccc}|&|&&|\\\ {\mathbf{x}}_{2}&{\mathbf{x}}_{3}&\cdots&{\mathbf{x}}_{m}\\\ |&|&&|\end{array}\right]$ (24) These matrices are assumed to be related linearly in time, ${\mathbf{X}}^{\prime}={\mathbf{A}}{\mathbf{X}}.$ (25) The DMD of the data matrix pair ${\mathbf{X}}$ and ${\mathbf{X}}^{\prime}$ is given by the eigendecomposition of ${\mathbf{A}}$, computed via the singular value decomposition (SVD) of the data matrix ${\mathbf{X}}=U\Sigma V^{*}$ and the pseudoinverse ${\mathbf{A}}\approx{\mathbf{X}}^{\prime}{\mathbf{X}}^{\dagger}\equiv{\mathbf{X}}^{\prime}{\mathbf{V}}{\mathbf{\Sigma}}^{-1}{\mathbf{U}}^{*}.$ The spatio-temporal modes are computed by the exact DMD algorithm [32]. Because DMD does not encode explicit spatial relationships between neighboring measurements, traveling waves occurring in neuronal networks cannot be captured well with a few coherent modes. DMD is also used as a windowed technique, with a temporal window size constrained by a lower bound, as for the discrete Fourier transform (DFT). In contrast to fluid dynamics, where $n\gg m$, in neuroscience the electrode arrays have only tens of channels $n$, while the windowed recordings contain a larger number $m$ of snapshots per second, so that $n<m$. The number of singular values of ${\mathbf{X}}$ is then at most $\min(n,m-1)$, which restricts the maximum number of DMD modes and eigenvalues to $n$ and limits the temporal dynamics that can be captured over the $m$ snapshots. This rank mismatch is resolved by appending $h-1$ time-shifted versions of the data matrices to the snapshot measurements. 
The augmented data matrix ${\mathbf{X}}_{\mathrm{a}ug}$ is given as ${\mathbf{X}}_{\mathrm{a}ug}=\left[\begin{array}[]{cccc}|&|&&|\\\ {\mathbf{x}}_{1}&{\mathbf{x}}_{2}&\cdots&{\mathbf{x}}_{m-h}\\\ |&|&&|\\\ |&|&&|\\\ {\mathbf{x}}_{2}&{\mathbf{x}}_{3}&\cdots&{\mathbf{x}}_{m-h+1}\\\ |&|&&|\\\ &&\cdots&\\\ |&|&&|\\\ {\mathbf{x}}_{h}&{\mathbf{x}}_{h+1}&\cdots&{\mathbf{x}}_{m-1}\\\ |&|&&|\\\ \end{array}\right].$ (26) The augmented matrices ${\mathbf{X}}_{{\mathrm{a}ug}}$ and ${\mathbf{X}}^{\prime}_{{\mathrm{a}ug}}$ are Hankel matrices, which are constant along the skew diagonals, as in the Eigensystem Realization Algorithm (ERA) [14]. The number of stacks $h$ is chosen such that $hn>2m$. A measure to determine the optimal number of stacks $h$ is the approximation error $E=\frac{||{\mathbf{X}}-\hat{\mathbf{X}}||_{F}}{||{\mathbf{X}}||_{F}},$ where $||\cdot||_{F}$ is the Frobenius norm. The approximation error $E$ decreases with an increasing number of stacks $h$ and reaches a plateau, beyond which the DMD accuracy does not increase significantly. DMD is applied in [3] as an automated approach to reliably detect and analyze the spatial localization and frequencies of sleep spindle networks from human ECoG recordings. A MATLAB implementation is available at github.com/bwbrunton/dmd-neuro/. ## 6 Reduced order modeling of biophysical neuronal networks Recently, reduced order models for ODEs of the form $\dot{x}(t)=A(t)x(t)+f(x(t))+Bu(t)$ (27) have been constructed using POD and DEIM to investigate the input-output behavior of neuronal networks in the brain [22, 21], where $x(t)$ are the state and $u(t)$ the input variables. The model in [22] is based on the chemical reactions of molecules in synapses, which are the intercellular information transfer points of neurons. The signaling pathways in striatal synaptic plasticity are modeled in [19]. This model describes how certain molecules, which are a prerequisite for learning in the brain, act in synapses. The stoichiometric equations obey the law of mass action, which leads to a deterministic system of $44$ ODEs of the form (27). The state $x(t)$ of the control system (27) is a collection of ions, molecules, and proteins that act in neuronal synapses. The linear part of (27) is sparse, and the nonlinearities are quadratic. The time dependent stimulus $u(t)$ consists of molecules that are important for neuronal excitability and plasticity, calcium and glutamate. In [21], a nonlinear biophysical network model is considered, describing the synchronized population bursting behavior of heterogeneous pyramidal neurons in the brain [27]. Neurons communicate by changing their membrane voltage to create action potentials (spikes), propagating from cell to cell. Spiking is the fundamental method of sensory information processing in the brain, and synchronized spiking is an emergent property of biological neuronal networks. The ODE system (27) in [21] describes the states $x(t)$ of a collection of $50$ neurons, each modeled with $10$ ODEs, leading in total to a system of ODEs of dimension $500$. Each cell is modeled with Hodgkin-Huxley equations, where each cell has only two compartments (dendrites and soma) and these compartments have different ion channels. The state variables $x(t)$ include the voltages of the somatic and dendritic compartments, the dendritic calcium concentration, and synaptic and ion channel gating variables; the input $u(t)$ is an injected current. Additionally, the soma compartment voltages are coupled to the dendritic compartments of randomly chosen cells. 
This networking of the output of cells as input to other cells is key for producing synchronized population behavior. The nonlinearities are of Hodgkin-Huxley type, i.e. exponential functions as well as cubic and quartic polynomials. In [22], POD+DEIM was applied to the data-driven biological model of plasticity in the brain (27). The POD-DEIM ROMs reduce the simulation time significantly, and the error between the original and the reduced order solutions can be tuned by adjusting the numbers of POD and DEIM bases independently. When the ROMs are trained on a matching time interval of $10000$ seconds, accurate results are obtained. However, generalizing the reduced model to longer time intervals is challenging, which is characteristic of all nonlinear models. In [21], the network model (27) is reduced with the localized DEIM (LDEIM) [24], discrete adaptive POD (DAPOD) [33, 34], and adaptive DEIM (ADEIM) [25]. DEIM and its variants are used here in combination with POD. The ROMs require a large number of POD and DEIM bases to accurately reproduce the input-output behavior. In this model, every cell is heterogeneous in its parameters and there are also jump/reset conditions, factors that pose additional challenges to the reduced order methods. Nevertheless, the ROMs in [21] were able to replicate the emergent synchronized population activity of the original network model. DAPOD and ADEIM perform best in preserving the spiking activity of the original network model. ADEIM is too slow and does not allow low enough dimensions to offset the computational costs of online adaptivity. DAPOD is able to find a lower dimensional POD basis online than the other methods find offline, but its runtime is close to that of the original model. ## References * [1] D. Amsallem and J. Nordström. High-order accurate difference schemes for the Hodgkin–Huxley equations. Journal of Computational Physics, 252:573 – 590, 2013. * [2] D. Amsallem and J. Nordström. Energy stable model reduction of neurons by nonnegative discrete empirical interpolation. SIAM Journal on Scientific Computing, 38(2):B297–B326, 2016. * [3] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz. Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of Neuroscience Methods, 258:1 – 15, 2016. * [4] L. Buesing, J. H. Macke, and M. Sahani. Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1682–1690. Curran Associates, Inc., 2012. * [5] S. Chaturantabut and D. Sorensen. Nonlinear model reduction via discrete empirical interpolation. SIAM Journal on Scientific Computing, 32(5):2737–2764, 2010. * [6] J. P. Cunningham and B. M. Yu. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11):1500–1509, 2014. * [7] K.J. Friston, L. Harrison, and W. Penny. Dynamic causal modelling. NeuroImage, 19(4):1273 – 1302, 2003. * [8] C. Himpe. optmor - optimization-based model order reduction (version 1.2), 2015\. * [9] C. Himpe. Combined State and Parameter Reduction: For Nonlinear Systems with an Application in Neuroscience. Internationaler Fachverlag für Wissenschaft & Praxis, 2016. * [10] C. Himpe and M. Ohlberger. Data-driven combined state and parameter reduction for inverse problems. Advances in Computational Mathematics, 41(5):1343–1364, 2015. * [11] M. Hines. 
Efficient computation of branched nerve equations. Int J Biomed Comput., 15(1):69–76, 1984. * [12] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Bulletin of Mathematical Biology, 52(1):25–71, Jan 1990. * [13] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4):411 – 430, 2000. * [14] J.N. Juang and R.S. Pappa. An eigensystem realization algorithm for modal parameter identification and model reduction. Journal of Guidance, Control, and Dynamics, 8(5):620–627, 1985. * [15] I. T. Jolliffe. Principal component analysis. Springer Series in Statistics. Springer-Verlag, New York, 2005. * [16] A. R. Kellems, S. Chaturantabut, D. C. Sorensen, and S. J. Cox. Morphologically accurate reduced order modeling of spiking neurons. Journal of Computational Neuroscience, 28(3):477–494, 2010. * [17] A. R. Kellems, D. Roos, N. Xiao, and S. J. Cox. Low-dimensional, morphologically accurate models of subthreshold membrane potential. Journal of Computational Neuroscience, 27(2):161, 2009. * [18] C. Kemere, G. Santhanam, B. M. Yu, A. Afshar, S. I. Ryu, T. H. Meng, and K. V. Shenoy. Detecting neural-state transitions using hidden Markov models for motor cortical prostheses. Journal of Neurophysiology, 100(4):2441–2452, 2008. * [19] B. Kim, S.L. Hawes, F. Gillani, L.J. Wallace, and K.T. Blackwell. Signaling pathways involved in striatal synaptic plasticity are sensitive to temporal pattern and exhibit spatial specificity. PLoS Comput Biol, 9(3):e1002953, 2013. * [20] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor. Dynamic mode decomposition: Data-driven modeling of complex systems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2016. * [21] M. Lehtimäki, L. Paunonen, and M.-L. Linne. Projection-based order reduction of a nonlinear biophysical neuronal network model. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 1–6, 2019. * [22] M. Lehtimäki, L. Paunonen, S. Pohjolainen, and M.-L. Linne. Order reduction for a signaling pathway model of neuronal synaptic plasticity. IFAC-PapersOnLine, 50(1):7687 – 7692, 2017. 20th IFAC World Congress. * [23] C. Lieberman, K. Willcox, and O. Ghattas. Parameter and state model reduction for large-scale statistical inverse problems. SIAM Journal on Scientific Computing, 32(5):2523–2542, 2010. * [24] B. Peherstorfer, D. Butnaru, K. Willcox, and H.-J. Bungartz. Localized discrete empirical interpolation method. SIAM J. Sci. Comput., 36(1):A168–A192, 2014. * [25] B. Peherstorfer and K. Willcox. Online adaptive model reduction for nonlinear systems via low-rank updates. SIAM J. Sci. Comput., 37(4):A2123–A2150, 2015. * [26] B. Petreska, B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Dynamical segmentation of single trials from population neural data. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 756–764. Curran Associates, Inc., 2011. * [27] P. F. Pinsky and J. Rinzel. Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience, 1(1):39–60, 1994. * [28] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000. * [29] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson. Spectral analysis of nonlinear flows. 
Journal of Fluid Mechanics, 641:115–127, 2009. * [30] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010. * [31] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000. * [32] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 1(2):391–421, 2014. * [33] M. Yang and A. Armaou. Dissipative distributed parameter systems on-line reduction and control using DEIM/APOD combination. In 2018 Annual American Control Conference (ACC), pages 2557–2562, 2018. * [34] M. Yang and A. Armaou. Revisiting APOD accuracy for nonlinear control of transport reaction processes: A spatially discrete approach. Chemical Engineering Science, 181:146 – 158, 2018. * [35] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology, 102(1):614–635, 2009.
2024-09-04T02:54:59.054047
2020-03-11T08:03:25
2003.05150
{ "authors": "G. Hasinger", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26154", "submitter": "Guenther Hasinger", "url": "https://arxiv.org/abs/2003.05150" }
arxiv-papers
# Illuminating the dark ages: Cosmic backgrounds from accretion onto primordial black hole dark matter G. Hasinger ###### Abstract The recent interpretation of cold dark matter as the sum of contributions of different mass Primordial Black Hole (PBH) families [1] could explain a number of so far unsolved astrophysical mysteries. Here I assume a realistic $10^{-8}$–$10^{10}$ M⊙ PBH mass distribution providing the bulk of the dark matter, consistent with all observational constraints. I estimate the contribution of baryon accretion onto this PBH population to various cosmic background radiations, concentrating first on the cross-correlation signal between the Cosmic X–ray and the Cosmic infrared background fluctuations discovered in deep Chandra and Spitzer surveys. I assume Bondi capture and advection-dominated disk accretion with reasonable parameters like baryon density and effective relative velocity between baryons and PBH, as well as appropriate accretion and radiation efficiencies, and integrate these over the PBH mass spectrum and cosmic time. The prediction of the PBH contribution to the X–ray background is indeed consistent with the residual X–ray background signal and the X–ray/infrared fluctuations. The predicted flux peaks at redshifts z$\approx$17–30, consistent with other constraints requiring the signal to come from such high redshifts. The PBH contribution to the 2–5 $\mu$m cosmic infrared background fluctuations is only about 1%, so that these likely come from star formation processes in regions associated with the PBH. I discuss a number of other phenomena, which could be significantly affected by the PBH accretion. Magnetic fields are an essential ingredient in the Bondi capture process, and I argue that the PBH can play an important role in amplifying magnetic seed fields in the early universe and maintaining them until the galactic dynamo processes set in. Next I study the contribution of the assumed PBH population to the re-ionization history of the universe and find that it does not conflict with the stringent ionization limits set by the most recent Planck measurements. X–ray heating from the PBH population can provide a contribution to the entropy floor observed in groups of galaxies. The tantalizing redshifted 21-cm absorption line feature observed by EDGES could well be connected to the radio emission contributed by PBH to the cosmic background radiation. Finally, the limits from the number of intermediate-mass black holes and from the diffuse X–ray emission in the Galactic Center region are not violated by the assumed PBH dark matter; on the contrary, some of the discrete sources resolved in the deepest Chandra observations of the Galactic Ridge could indeed be accreting PBH. ## 1 Introduction Recent years saw a revival of the idea originally put forward by S. Hawking [2], that Primordial Black Holes (PBH) could make up the so far elusive Dark Matter. LIGO's first detection of gravitational waves from merging binary black holes of approximately equal masses in the range 10–30 M⊙ [3, 4] led to the suggestion that these could be a signature of stellar mass PBH dark matter [5, 6, 7] in a mass window not yet excluded by other astrophysical constraints. A recent review of the rich literature constraining the possible contributions of PBH to the dark matter is given, e.g., in [8]. 
In a recently published theoretical prediction [1, 9] PBH are created in the QCD phase transitions (around 100 MeV) of different particle families freezing out of the primordial quark-gluon plasma within the first two seconds after the inflationary phase. When W${}^{\pm}$, Z bosons, baryons and pions are created, and e${}^{+}$e${}^{-}$ pairs annihilate, they leave an imprint in the form of a significant reduction of the sound speed at the corresponding phase transitions, and allow regions of high curvature to collapse and form PBH [see also 10]. The typical mass scale of these PBH is defined by the size of the horizon at the time of the corresponding phase transition. In this model four distinct populations of PBH in a wide mass range are formed: planetary mass black holes at the W${}^{\pm}$, Z transition, PBH of around the Chandrasekhar mass when the baryons (protons and neutrons) are formed from 3 quarks, PBH with masses of order 30 M⊙ when pions are formed from two quarks (these correspond to the LIGO black holes), and finally supermassive black holes (SMBH) at the e${}^{+}$e${}^{-}$ annihilation [see also 11]. Another remarkable aspect of this theory is that the gravitational energy released at the PBH collapse locally reheats regions (hot spots) around the black holes to the electroweak transition scale (around 100 GeV), where chiral sphaleron selection effects can introduce the matter/antimatter asymmetry. The PBH in this picture would therefore also be responsible for the baryogenesis and fix the ratio of dark matter to baryons. Clustering of the PBH in a very wide mass distribution could alleviate some of the more stringent observational constraints on the allowed contribution of PBH to the dark matter [7, 12]. The interpretation of cold dark matter as the sum of contributions of different mass PBH families could explain a number of so far unsolved mysteries, like e.g. the massive seed black holes required to create the supermassive black holes in the earliest QSOs [13], the ubiquitous massive LIGO/VIRGO binary black holes [e.g. 6], or even the putative "Planet X" PBH in our own Solar System [14]. The most abundant family of PBH should be around the Chandrasekhar mass (1.4 M⊙). This prediction may already have been vindicated by the recent OGLE/GAIA discovery of a sizeable population of putative black holes in the mass range 1–10 M⊙ [15]. The microlensing survey OGLE has detected $\sim$60 long-duration microlensing events. About 20 of these have GAIA DR2 parallax distances of a few kpc, which break the microlensing mass–distance degeneracy and allow the determination of masses in the few solar mass range, implying that these objects are most likely black holes, since stars at those distances would be directly visible by OGLE. Important fingerprints of a population of PBH may be hidden in the Cosmic infrared and X–ray background radiation (see [16] for a comprehensive review). Indeed, [6] argues that the near-infrared Cosmic background (CIB) anisotropies detected in deep Spitzer [17, 18, 19, 20] and Akari [21] images, which cannot be accounted for by known galaxy populations [22], could be connected to PBH. Similar fluctuations were discovered in the Cosmic X–ray background (CXB) observed in a deep Chandra survey, which are correlated with the CIB anisotropies in the same field [23]. 
The X–ray fluctuations contribute about 20% to the CIB signal, indicating that black hole accretion should be responsible for such highly efficient X–ray emission. Later studies of wider/deeper fields covered by both Chandra and Spitzer [24, 25, 26] have substantially improved the detection significance of the observed signal. Similar studies of deep fields observed with the Hubble Space Telescope in the optical range do not show such a cross-correlation signal down to mAB$\sim$28 [see 16]. The angular scales of the fluctuation power spectra of the CIB and CXB reach values >1000", much larger than expected for the known galaxy populations [27]. All of these findings can be understood if the fluctuation signal comes from a high-redshift (z$\gtrsim$12) population of black holes. The spectral shape of the CXB fluctuations determined from a combination of the deepest/widest fields [26] can be fit either with a very high redshift population of obscured black holes, or with completely unobscured black hole accretion. Original models [28] invoked highly obscured Direct Collapse Black Holes formed in metal-free halos at z>12 to explain the observed CIB and CXB signal. However, accreting massive black holes have recently been firmly ruled out as the source of these fluctuations [29], because they would require an unfeasible amount of black hole accretion at z>6, locking up more mass in massive black holes at high redshift than is contained in the known black hole mass function at z=0. These authors also ruled out local diffuse emission as the source of the X–ray fluctuations. The CXB has been largely resolved into discrete sources in deep X–ray images, either directly [see 30, 31], or by cross-correlating with the deepest Hubble galaxy catalogues [32, 33]. However, [32] show that some marginally significant diffuse CXB still remains after accounting for all discrete contributions. This is consistent with the independent determination of [34]. The residual unresolved flux is about 3 times larger than the X-ray flux associated with the above CXB/CIB fluctuations. Given the difficulties in explaining the CIB/CXB correlation with known classes of sources, and motivated by the notion that the dark matter could be dominated by an extended mass distribution of PBH, I constructed a toy model to explore the potential contribution to the cosmic backgrounds by the accretion of baryons throughout cosmic history onto such a population of early black holes. Assuming a combination of Bondi-Hoyle-Lyttleton quasi-spherical capture at large distances from the PBH, and advection-dominated disk accretion flows (ADAF) in the vicinity of the central object, I can explain the observed residual CXB flux and the CXB/CIB cross-correlation with minimal tuning of the input parameters, and find a maximum contribution to the extragalactic background light in the redshift range 15<z<30. I further estimate that this accretion onto PBH can produce enough flux to significantly contribute to the pre-ionization of the intergalactic medium with UV photons by a redshift z$\gtrsim$15 and to the pre-heating of the baryons with X–ray photons, observed as an "entropy floor" in the X–ray emission of galaxy groups. In section 2 the assumed PBH mass distribution is introduced and contrasted with recent observational limits on the PBH contribution to the dark matter. The basic ingredients of the toy model for the accretion onto PBH are presented in section 3. The assumed radiation mechanism and efficiency is discussed in section 4. The contribution of the PBH emission to the different bands is compared with the observational constraints in section 5. 
Other potential diagnostics of this putative dark matter black hole population are discussed in section 6, and conclusions are presented in section 7. Throughout this work a $\Lambda$CDM cosmology with $\Omega_{M}$=0.315, $\Omega_{\Lambda}$=0.685, and $H_{0}$=67.4 km s${}^{-1}$ Mpc${}^{-1}$ [35] is used. These parameters define the baryon density $\Omega_{bar}$=0.049, the dark matter density $\Omega_{DM}$=0.264, and the critical mass density of the universe $\rho_{crit}$=1.26$\times 10^{20}M_{\odot}~{}{\rm Gpc}^{-3}$. All logarithms in this paper are taken to the base 10. ## 2 The assumed PBH mass distribution The theoretical predictions in [1, 9, 11, 36] yield a broad distribution of PBH masses with a number of peaks corresponding to the particle families freezing out from the Big Bang. Depending on the spectral index $n_{s}$ of the primordial curvature fluctuation power spectrum, the PBH mass distribution has a different overall slope. [36] find consistency of these predictions with a number of recent observational limits on the PBH contribution to the dark matter, but there is a tension of their models with the Cosmic Microwave Background (CMB) constraints from accretion at large PBH masses [37, 38]. Recent limits from gravitational lensing of type Ia supernovae on a maximum contribution of stellar-mass compact objects to the dark matter of around 35% [39], and from the LIGO O1 gravitational wave merger rate of black holes in the mass range 10–300 M⊙ [40], are also in tension with these models. An additional important constraint comes from a comparison of the predicted PBH fraction with the measured local mass function of supermassive black holes (SMBH) in the centers of nearby galaxies. Integrating the local SMBH mass function of [41] (see figure 1) in the range $10^{6}$–$10^{10}$ M⊙ yields a local SMBH mass density of $\rho_{SMBH}$=6.3$\times$10${}^{5}$ M⊙ Mpc${}^{-3}$, corresponding to a dark matter fraction of $f_{SMBH}$=1.89$\times$10${}^{-5}$, which is about a factor of 10–100 lower than the $f_{PBH}$ predictions in [1, 36]. Figure 1: The PBH mass spectrum (thick red line) assumed for this work (García-Bellido, 2020, priv. comm.), compared to a number of observational constraints. Microlensing limits from SNe [39], EROS [42], and the Subaru M31 survey [43] are shown as solid, dashed and dotted green lines, respectively. LIGO limits from gravitational merger event rates are shown as a blue solid line for subsolar masses [44], and as a blue dashed line for 10–300 M⊙ [40]. The CMB accretion limits from [37] are shown as an orange dashed line. Multiwavelength limits from the Galactic Center [45] are shown in magenta for X-ray (solid) and radio (dashed) observations. Finally, the local SMBH mass function [41] is shown as a black line at $10^{6}$–$10^{10}$ M⊙. For these reasons, García-Bellido et al. (2020 in prep.) are revising their model parameters in order to predict a steeper PBH mass function at large $M_{PBH}$, and have shared one of their new models, shown as the red curve in figure 1. Here a value of $n_{s}$=0.987 is assumed for the spectral index of the primordial fluctuation power spectrum, as well as a running of the curvature spectrum of d$n_{s}$=$-$0.0006. The integral of this PBH distribution over the whole mass range yields $f_{PBH}$=1. On the other hand, the distribution yields only $\sim$40% of the dark matter in the peak mass range [0.1,10] M⊙, and is thus fully consistent with the microlensing constraints in figure 1. 
In the mass range of the LIGO black hole binaries it predicts just the right amount of dark matter to explain the gravitational wave merger rates, and in the SMBH range it is consistent with the local black hole mass function (taking into account the accretion onto supermassive PBH over cosmic time producing the bulk of the X-ray background [46]). Apart from small sections, the new PBH mass function is thus fully consistent with the most recent observational constraints. ## 3 Baryon accretion onto the PBH In the following I use the PBH mass spectrum presented in section 2 to calculate the accretion of baryons onto PBH over cosmic time, and to predict the electromagnetic emission from this process. As we will see, for most of the cosmic history these black holes move at supersonic speeds among the baryons and will therefore undergo Bondi-Hoyle-Lyttleton quasi-spherical capture [47, 48, 49, 50]. In the Bondi-Hoyle picture of a black hole moving supersonically through a homogeneous gas, the capture happens in the wake of the moving object. Behind the object, material moves in from a wide cone and needs to lose angular momentum before it can fall towards the black hole. The gas is in principle collisionless, so that only the magnetic field trapped in the plasma allows particles to lose angular momentum and start to behave like a fluid. This gas forms the accretion flow, in which it is adiabatically heated. The accreting gas is ionized and embedded in the magnetic field. Any plasma drawn in by the gravitational field will carry along the magnetic field. Shvartsman [51] argues that in the black hole tail, where the matter flow stops, the gravitational and magnetic energy densities become nearly equal. This equipartition is preserved in the infalling flow and thus the magnetic field grows towards the black hole. Just as the heat ultimately has to be radiated away, the magnetic field needs a way to dissipate energy on its way inward. [52] argue that the most likely dissipation mechanism for the magnetic field is reconnection of field lines in narrow current sheets, similar to the processes we observe in solar flares and active galactic nuclei. Magnetic reconnection causes the acceleration and non-thermal heating of a small fraction of the infalling electrons. In parallel, decoupled magnetic field lines can carry some of the amplified magnetic field outward and eject plasma [52]. An important question is whether the accretion flow is spherically symmetric close to the black hole, or whether an accretion disk is formed. Originally most researchers assumed spherical accretion for PBH [e.g. 53, 54, 38]. However, [37] argues that the accreted angular momentum is large enough that an accretion disk is formed, at least close to the black hole. According to these authors, the granularity of the PBH distribution and the formation of PBH binaries at the scale of the Bondi radius will imprint density and velocity gradients into the relative distribution of baryons and PBH, such that ultimately an accretion disk and an advection-dominated accretion flow (ADAF) will form [55]. The formation of an ADAF disk significantly reduces the accretion rate and the radiative efficiency [56], compared to spherical accretion. But to first order the Bondi-Hoyle-Lyttleton mechanism can be used to estimate the accretion rate $\dot{M}$ onto the PBH [37, 8]. 
Bondi [49] discusses two different approximations to the spherical gas accretion problem: (i) the velocity-limited case, where the motion of the accreting object through the gas is dominant and an accretion column is formed in the wake of the moving object, and (ii) the temperature-limited case, where the sound speed of the gas is dominant and a spherical accretion flow forms. In the velocity-limited case (i) the mass accretion rate is given as

$\dot{M}=2.5\pi\rho(GM)^{2}v_{rel}^{-3},$ (3.1)

where $\rho$ is the gas density, $M$ is the PBH mass, and $v_{rel}$ is the relative velocity between object and gas. In the temperature-limited case (ii) with negligible relative velocity, the thermal velocity of the gas particles is dominant and the corresponding accretion rate is given by

$\dot{M}=2.5\pi\rho(GM)^{2}c_{s}^{-3},$ (3.2)

where $c_{s}$ is the sound speed. For intermediate cases, [49] introduces an effective velocity

$v_{eff}=\sqrt{v_{rel}^{2}+c_{s}^{2}}$ (3.3)

and the corresponding mass accretion rate becomes

$\dot{M}=2\lambda\pi\rho(GM)^{2}v_{eff}^{-3},$ (3.4)

where the so-called accretion eigenvalue $\lambda$ is a fudge factor of order unity, dependent on non-gravitational aspects of the problem, like e.g. the gas equation of state or outflows from feedback effects. Different authors have discussed this parameter for the particular application of gas accretion onto PBH in the early universe. [53] find values of $1.12>\lambda>10^{-3}$, depending e.g. on the PBH mass. For masses of order 1 M⊙ they find $\lambda=1.12$. [38] discriminate between isothermal and adiabatic gas with accretion eigenvalues of $\lambda$=1.12 and 0.12, respectively. In this paper I assume an eigenvalue $\lambda$=0.05. The motivation for this choice is discussed in section 4, while sections 5 and 6 show that this choice fits the observational constraints quite well.

Figure 2: Left: Baryon temperature as a function of redshift. Right: Mean relative velocity $\langle v_{rel}\rangle$ between dark matter and baryons, sound speed $c_{s}$ and the effective velocity $v_{eff}$ (eq. 3.8) as a function of redshift.

Let us first look at the thermal history and thus the sound speed of the gas over cosmic history. A nice summary is given in figure 15 of [57]. Despite having decoupled from the CMB at z$\approx$1089, the gas temperature continues to follow the temperature evolution T$\propto$(1+z) of the background photons due to Compton scattering off residual ionized electrons from the recombination era. Below redshifts z$\approx$200 the residual ionization in the gas is low enough that it decouples from the background radiation and cools adiabatically, following the relation T$\propto$(1+z)$^{2}$. When the first objects form and reionization starts around z$\lesssim$20, the gas is heated up to temperatures $\sim 10^{4}$ K. The details of re-ionization are still uncertain and will be discussed below. I have deliberately chosen a redshift of z$\approx$20 for re-ionization to become dominant, with full ionization occurring around z$\approx$7. Finally, at z<3, when the bulk of the cosmic baryons are falling into increasingly larger dark matter halos and become virialized, they are further heated up to form the warm/hot intergalactic medium at temperatures $10^{5-7}$ K [58]. Using figure 2b of that paper I estimate average temperatures for the IGM of 5$\times 10^{4}$, 1.5$\times 10^{5}$, and 8$\times 10^{5}$ K at z=2, 1, 0, respectively. The baryon temperature as a function of redshift assumed in this work is shown in figure 2 (left).
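The piecewise thermal history just described can be condensed into a short numerical sketch. This is a simplified interpolation with sharp transitions, not the exact model behind figure 2 (left); the transition redshifts and the IGM anchor temperatures are the ones quoted above:

```python
import numpy as np

def baryon_temperature(z):
    """Approximate baryon temperature [K] versus redshift, following the
    piecewise history described in the text (simplified sketch)."""
    T_cmb0 = 2.725                      # K, CMB temperature today
    if z >= 200:
        return T_cmb0 * (1 + z)         # Compton coupling: T ~ (1+z)
    elif z >= 20:
        T200 = T_cmb0 * 201             # match the branches at z = 200
        return T200 * ((1 + z) / 201) ** 2   # adiabatic: T ~ (1+z)^2
    elif z >= 3:
        return 1e4                      # re-ionization heating to ~1e4 K
    else:
        # virialized IGM: log-interpolate the anchors quoted in the text
        z_anchor = np.array([0.0, 1.0, 2.0])
        T_anchor = np.array([8e5, 1.5e5, 5e4])
        return 10 ** np.interp(z, z_anchor, np.log10(T_anchor))

for z in (1000, 200, 100, 20, 7, 2, 0):
    print(f"z = {z:5g}:  T = {baryon_temperature(z):9.3g} K")
```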
The sound speed of the gas is given by

$c_{s}=\sqrt{\frac{\gamma kT}{\mu m_{H}}},$ (3.5)

where $\gamma$=5/3 for an ideal monoatomic gas, $\mu$=1.22 is the mean molecular weight including a helium mass fraction of 0.24, $m_{H}$ is the mass of the hydrogen atom, and $T$ is the temperature of the baryons as a function of cosmic history discussed above [59]. The sound speed as a function of redshift is the dotted curve shown in figure 2 (right).

I now discuss the relative velocity $v_{rel}$ between the dark matter PBH and the baryons throughout cosmic history. In the radiation-dominated phase of the universe at z>1089, the dark matter is already hierarchically clustering under the influence of its own gravity. The sound speed of the photon-baryon fluid is very high, of order one third of the velocity of light, and thus the normal matter undergoes baryonic acoustic oscillations [60, 61]. This leads to a spatial separation between baryons and dark matter and thus to a Gaussian distribution of relative velocities with an average around $\langle v_{rel}\rangle\approx$30 km/s [see 59, 62]. At z$\approx$1089, when electrons and protons combine and the universe becomes transparent, the sound speed of the gas dramatically drops to $\sim$6 km/s. The dark matter PBH kinematically decouple from the baryons and their relative velocities become highly supersonic. In the linear growth phase of the universe, at scales larger than the gas Jeans length, the dark matter and the baryons fall into the same gravitational potentials of the cosmic web and thus their relative velocity decreases with the cosmic expansion:

$\langle v_{rel}\rangle_{linear}\approx 30~{\frac{1+z}{1000}}~{\rm km~s}^{-1}.$ (3.6)

This relation is shown as the right branch of the dashed line in figure 2 (right), above redshifts $z\gtrsim 20$. From this figure it becomes apparent that between recombination and re-ionization the PBH move with highly supersonic, but decreasing velocities through the gas, due to the decaying sound waves. As we will see below, in this branch of the velocity curve the contribution of PBH to the cosmic backgrounds has a maximum.

At lower redshifts, at scales smaller than the gas Jeans length, the hierarchical clustering becomes non-linear and baryons falling into growing dark matter halos are virialized. As a consequence, the velocity dispersion between dark matter and gas increases again towards lower redshifts, scaling as $M_{Halo}^{1/3}$, where $M_{Halo}$ is the mass of the dark matter halo becoming non-linear. I used two different methods to estimate the average virial velocity for redshifts z$\lesssim$20. First, the Millennium Simulation run described in [63] gives the mass function of dark matter halos with halo masses $M_{Halo}>10^{10}M_{\odot}$ for five different redshifts between z=10 and z=0. I extrapolated these simulated mass functions to lower masses ($M_{Halo}>10^{3}M_{\odot}$) using the empirical universal halo mass function shape found through simulations by [64]. For every mass bin I determined the virial velocity according to the calibration of the velocity dispersion as a function of halo mass described in [65], and then calculated the average for each epoch. These virial velocities are shown as crosses in figure 2 (right). The extrapolation to halo masses as small as $M_{Halo}>10^{3}M_{\odot}$ is rather uncertain, both for the mass function and the velocity dispersion, because the cosmological simulations do not have a fine enough mass resolution at this scale.
Also, the velocity dispersion relevant for Bondi capture onto PBH is determined by the smallest mass scales becoming non-linear at any redshift. A second possibility to calculate the relative velocities in the non-linear phase is therefore to determine the velocity dispersion directly from the dark matter power spectrum and integrate this over the smallest non-linear scales. This calculation has been performed by M. Bartelmann (2020, priv. comm.), adopting the normalization of the primordial power spectrum of $\sigma_{8}$=0.8. The relative velocity in the non-linear regime can be approximated by

$\langle v_{rel}\rangle_{nonlinear}\approx 620~(1+z)^{-2.3}~{\rm km~s}^{-1},$ (3.7)

and is shown as the left branch ($z\lesssim 20$) of the dashed line in figure 2 (right). At z=2 the cluster velocity dispersion agrees with this estimate, but the halo-based estimate systematically overestimates the small-scale velocity dispersion at larger redshifts.

Since we are interested in the total contribution of PBH to the electromagnetic radiation of the universe, we have to average over the whole Gaussian distribution of relative velocities. The Bondi accretion rate is proportional to $v_{rel}^{-3}$ (see above), and therefore smaller velocities dominate. For this particular case [38] propose to replace the quadratic average of relative velocity and sound speed in Bondi’s formula (3.3) above with their geometric mean:

$v_{eff}=\sqrt{\langle v_{rel}\rangle~c_{s}}.$ (3.8)

This is the assumption I adopt here, and the resulting effective velocity $v_{eff}$ is shown as solid red curve in figure 2 (right). With equation (3.8) the accretion rate becomes

$\dot{M}=2\lambda\pi\rho(GM)^{2}~(\langle v_{rel}\rangle~c_{s})^{-3/2}.$ (3.9)

It is interesting to note that in the range 20<z<200 both relative velocity and sound speed decrease linearly with (1+z). Therefore the mass accretion rate is expected to be constant in this era. It is important to understand that the redshift at which both the sound speed and the relative velocity of the gas turn around (due to re-ionization and virialization, respectively) and rapidly increase towards lower redshift is crucial for our analysis. The redshift where the minimum velocity occurs ultimately determines the maximum flux contribution of PBH accretion to the cosmic backgrounds.

The calculation of the Bondi accretion rate in equation (3.9) requires the density $\rho$ as a function of redshift. With $\Omega_{bar}$=0.049 and $\rho=n\cdot m_{H}$, where $n$ is the number density of particles, I find

$n=250~\left(\frac{1+z}{1000}\right)^{3}~{\rm cm}^{-3}.$ (3.10)

I define $\dot{m}$ as the normalized mass accretion rate $\dot{m}=\dot{M}/\dot{M}_{Edd}$, with the Eddington accretion rate $\dot{M}_{Edd}$=1.44$\times 10^{17}M/M_{\odot}$ g s$^{-1}$. Then I can rewrite equation (3.9) into normalized quantities:

$\dot{m}=\lambda\cdot 0.321\left(\frac{1+z}{1000}\right)^{3}~\left(\frac{M}{M_{\odot}}\right)\left(\frac{v_{eff}}{1~{\rm km~s}^{-1}}\right)^{-3}.$ (3.11)

With a very broad PBH mass spectrum, including intermediate-mass and supermassive black holes ($M_{PBH}>1000$ M⊙), it is important to include the effective viscosity due to the Hubble expansion in the early universe [53]. The Bondi radius determines the amount of mass captured by the PBH:

$r_{B}={\frac{G~M}{v_{eff}^{2}}}\approx 1.34\cdot 10^{16}\left(\frac{M}{M_{\odot}}\right)\left(\frac{v_{eff}}{1~{\rm km~s}^{-1}}\right)^{-2}~{\rm cm}.$ (3.12)

This is shown for two different PBH masses in figure 8 (left).
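Equations (3.5)–(3.12) can be combined into a compact numerical sketch of the normalized accretion rate. The Hubble-viscosity correction of [53] is omitted for brevity, and the temperature in the example is only an indicative value from the adiabatic branch of the thermal history:

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs constants
gamma, mu   = 5.0 / 3.0, 1.22
lam         = 0.05               # accretion eigenvalue assumed here

def sound_speed_kms(T):
    """Eq. (3.5), result in km/s."""
    return np.sqrt(gamma * k_B * T / (mu * m_H)) / 1e5

def v_rel_kms(z):
    """Mean baryon-PBH relative velocity [km/s]: eq. (3.6) for z > 20,
    the non-linear approximation eq. (3.7) below that."""
    return 30.0 * (1 + z) / 1000.0 if z > 20 else 620.0 * (1 + z) ** -2.3

def mdot_normalized(z, M, T):
    """Eq. (3.11), with v_eff from eq. (3.8); the Hubble-viscosity
    correction for very massive PBH is not included in this sketch."""
    v_eff = np.sqrt(v_rel_kms(z) * sound_speed_kms(T))   # eq. (3.8)
    return lam * 0.321 * ((1 + z) / 1000.0) ** 3 * M * v_eff ** -3

def bondi_radius_cm(M, v_eff_kms):
    """Eq. (3.12): r_B = G M / v_eff^2."""
    return G * M * 1.989e33 / (v_eff_kms * 1e5) ** 2

# example: a 1 M_sun PBH at z = 100, adiabatic gas temperature ~138 K
print(mdot_normalized(z=100.0, M=1.0, T=138.0))
```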
The characteristic time scale for accretion is the Bondi crossing time $t_{cr}=r_{B}/v_{eff}$, which can be compared to the Hubble time $t_{H}$ at the corresponding redshift. If $t_{cr}<t_{H}$ there will be stationary accretion, while for Bondi crossing times larger than the Hubble time the accretion is suppressed. For every redshift we can calculate a critical PBH mass $M_{cr}$, below which the steady-state Bondi assumption can be applied. For redshifts z=1000, 200, 20 this critical mass corresponds to $\log(M_{cr}/M_{\odot})$=5.3, 4.8, 3.4, respectively. At redshifts below z=20, $M_{cr}$ rapidly increases to values above $10^{6}$ M⊙. For PBH masses close to and above $M_{cr}$ the Bondi accretion rate can be scaled by the Hubble viscosity loss given in the dashed curve in figure 3 (left) of [53].

Inserting $v_{eff}$ from equation (3.8) and figure 2 (right) into equation (3.11), assuming an accretion eigenvalue $\lambda$=0.05 and applying the above Hubble viscosity correction, I can finally calculate the normalized accretion rate as a function of redshift and PBH mass. The results are shown in figure 3 (left). For PBH masses smaller than $\sim$1000 M⊙ the normalized accretion rate is roughly constant in the redshift range 20<z<200, due to the fact that the density and velocity dependence on redshift in equation (3.9) roughly cancel out (see also the lower panel of figure 4 in [38]). At z<20 $\dot{m}$ drops dramatically because of the effective velocity increase. PBH masses larger than $\sim 10^{4}$ M⊙ reach accretion rates close to the Eddington limit at z$\gtrsim$100, but are significantly affected by the Hubble viscosity at z$\gtrsim$20. For all PBH masses the accretion rate is small enough that the growth of the PBH population can be neglected over cosmic time (PBH with masses in the range $10^{5-7}$ M⊙ accrete about 0.5–2% of their mass down to z=20).

Figure 3: Left: Normalized accretion rate onto PBH with masses in the range 0.1–$10^{7}$ M⊙ as a function of redshift. Right: Radiative efficiencies derived from the accretion rates, assuming the hot accretion flow model of [56] with a viscous heating parameter $\delta$=0.5.

Figure 4: Spectra of the hot disk accretion flow (ADAF) from [55] with a viscous heating parameter $\delta$=0.5, divided by the normalized accretion rate. Left: accretion onto a 10 M⊙ black hole for different accretion rates, as indicated. Right: same for an accretion rate of $\log(\dot{m})=-1.6$ but different black hole masses (as indicated).

## 4 Accretion spectrum and radiative efficiency

For the accretion flow and the electromagnetic emission mechanism I follow [37, 8] and assume the formation of an accretion disk. Accretion rates close to the Eddington limit will lead to the formation of a standard Shakura-Sunyaev disk [66], which has a canonical radiative efficiency $\eta\approx 0.1$. For much lower accretion rates $\dot{m}\ll$1 an advection-dominated hot accretion flow [55] is expected, with a significantly lower radiation efficiency [56], roughly scaling according to $\eta\propto\dot{m}$. Figure 4 shows hot accretion flow spectra from [55] with a viscous heating parameter $\delta$=0.5 for black holes, normalized by Eddington luminosity and mass accretion rate. The left graph shows radiation from a 10 M⊙ black hole at different mass accretion rates. The right graph shows the spectrum from black holes with different masses in the range 10–$10^{9}$ M⊙ and a mass accretion rate $\log(\dot{m})=-1.6$.
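For the efficiency scaling just outlined, a crude stand-in is a linear ramp in $\dot{m}$ that saturates at $\eta\approx$0.08 above $\log(\dot{m})=-1.6$ (the value quoted below); the actual analysis uses the digitized efficiency curve of [56]:

```python
import numpy as np

def radiative_efficiency(mdot):
    """Crude stand-in for the hot-accretion-flow efficiency of [56]
    (delta = 0.5): eta scales roughly linearly with mdot in the ADAF
    regime and saturates at ~0.08 for log(mdot) > -1.6."""
    return 0.08 * np.minimum(mdot / 10 ** -1.6, 1.0)

for log_mdot in (-6.0, -4.0, -1.6, -1.0):
    eta = radiative_efficiency(10 ** log_mdot)
    print(f"log(mdot) = {log_mdot:5.1f}:  eta ~ {eta:.2e}")
```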
It is important to understand that for advection-dominated accretion flows not all the matter entering the Bondi radius will actually reach the black hole. This is due to feedback mechanisms acting on the accreted gas, e.g. producing outflows or jets. The advection-dominated flow models of [56, 55] therefore find a radial dependence of the mass accretion rate $\dot{M}\propto R^{\alpha}$, typically with $\alpha\sim 0.4$. Within a radius of about 10 $R_{S}$, where $R_{S}=2GM/c^{2}$ is the Schwarzschild radius, the accretion flow more closely follows the standard Shakura-Sunyaev description of a constant accretion rate with radius down to the last stable orbit ($\sim 3R_{S}$). In terms of the classical Bondi description of quasi-spherical capture, the loss of accreted matter can be associated with the accretion eigenvalue:

$\lambda\approx\left(\frac{10R_{S}}{R_{D}}\right)^{\alpha},$ (4.1)

where $R_{D}$ is the outer radius of the accretion disk formed. For $\alpha$=0.4, the value of $\lambda$=0.05 chosen for the analysis in this paper therefore corresponds to an outer disk radius of $R_{D}\sim 2\times 10^{4}~R_{S}$, about 8 orders of magnitude smaller than the Bondi radius. In this picture the accretion flow on large scales follows the Bondi quasi-spherical flow for most of the radial distance, until the advection-dominated accretion disk is formed.

The radiative efficiency for the ADAF spectra in figure 4 is the integral over these curves and has been calculated through numerical simulations by [56]. For this work I use a digitized version of the highest efficiency curve in their figure 1, with a viscous heating parameter $\delta$=0.5 (note that the definition of $\dot{m}$ between these authors and the analysis presented here differs by a factor of 10). A maximum radiative efficiency of $\eta\sim$0.08 is achieved for $\log(\dot{m})>-1.6$. We can now calculate the radiative efficiency for every mass and redshift bin from the normalized accretion rate in figure 3 (left). The result is shown in figure 3 (right). It turns out that above redshifts z$\gtrsim$20 and black hole masses $M_{PBH}>100$ M⊙, which dominate the contribution to the extragalactic background light, the radiative efficiencies are relatively large (typically >3%).

Figure 5: Density-weighted bolometric luminosity of single PBH as a function of mass for different redshifts indicated (left), and as a function of redshift for different mass bins indicated (right).

Figure 6: Density-weighted bolometric flux of single PBH as a function of mass for different redshifts indicated (left), and as a function of redshift for different mass bins indicated (right).

We now have the ingredients to calculate the bolometric luminosity and flux expected from the baryon accretion onto the assumed PBH mass spectrum over cosmic time. For every black hole of mass $M_{PBH}$ I calculate the expected bolometric luminosity $L_{bol}=\dot{m}~\eta~L_{Edd}$, where $L_{Edd}$=1.26$\times 10^{38}~M_{PBH}/M_{\odot}$ erg/s is the Eddington luminosity, and the normalized mass accretion rate $\dot{m}$ as well as the radiation efficiency $\eta$ are taken from the data in figure 3. In every mass bin, the relative number density of PBH compared to those of 1 M⊙ is $n_{PBH}=f_{PBH}/M_{PBH}$, where $f_{PBH}$ is the PBH mass function from figure 1. For every mass and redshift bin I thus multiply the bolometric luminosity with this relative number density in order to obtain the density-weighted luminosity $\langle L_{bol}\rangle^{*}$ for an equivalent PBH of 1 M⊙.
This quantity is shown in figure 5 as a function of PBH mass (left) and redshift (right). It shows that the largest contribution to the PBH luminosity over cosmic time comes from PBH in the mass range $M_{PBH}=10^{3-7}$ M⊙ at redshifts z>100. The Chandrasekhar PBH mass peak is subdominant in this representation. The total PBH luminosity deposited in the baryonic gas at high redshifts is important for the pre-ionization and X–ray heating of the intergalactic medium discussed in section 6. To calculate the contribution of PBH accretion to the extragalactic background light we need to convert the density-weighted luminosities in figure 5 to bolometric fluxes using the luminosity distance $D_{L}$ at the corresponding redshift: $\langle F_{bol}\rangle^{*}=\langle L_{bol}\rangle^{*}/(4\pi~D_{L}^{2})$. This quantity is shown in figure 6 as a function of PBH mass (left) and redshift (right). It shows that the largest contribution to the extragalactic surface brightness is produced at a redshift z$\approx$20 from PBH in the mass range $M_{PBH}=10^{2-5}$ M⊙, and a similar contribution comes from the Chandrasekhar mass peak. SMBH at $M_{PBH}\sim 10^{6.5}$ M⊙ have a notable contribution around z$\sim$10.

## 5 The contribution of PBH to the extragalactic background light

To calculate the surface brightness per redshift shell in a particular observed frequency band [$\nu_{1}$;$\nu_{2}$] of the electromagnetic spectrum, I take into account the spectral shape and the fraction of the radiation falling into the rest frame frequency band [$\nu_{1}$/(1+z);$\nu_{2}$/(1+z)]. The exact spectral shape is not so important for this derivation; it is mainly used to calculate the bolometric corrections, i.e. the fraction of the total luminosity falling into the various frequency bands as a function of redshift. The ADAF spectra in figure 4, in particular those at high $\dot{m}$ values, can be approximated by power laws with an exponential cutoff at $\sim$200 keV. Following [37] and [8], I assume a power law slope of $-$1 (corresponding to a flat line in figure 4). Below a critical frequency $\nu_{c}$ the power law spectrum is cut off by synchrotron self-absorption into a power law with a steep slope of approximately +1.86. As can be seen in figure 4 (right), $\nu_{c}$ depends on $M_{PBH}$ and can be approximated by $\log(\nu_{c})\approx 14.82-0.4\log(M_{PBH}/M_{\odot})$. The bolometric corrections are then obtained by integrating the analytic normalized spectra over the observed frequency bands. For the 2–5 $\mu$m band we have to consider in addition the Lyman-$\alpha$ break, which produces a sharp cutoff at z$\gtrsim$30 (see e.g. [28, 67]). These bolometric corrections are shown in figure 7 (left) for the 2–5 $\mu$m NIR band, the 0.5–2 keV and the 2–10 keV X–ray bands, respectively.

To predict the surface brightness of all PBH across cosmic time in these observed frequency bands, the total flux per PBH in figure 6 (right) has to be multiplied with the bolometric correction and the PBH surface density in a particular redshift shell. Using the critical mass density of the universe $\rho_{crit}$=1.26$\times 10^{20}M_{\odot}~{\rm Gpc}^{-3}$ and the dark matter density $\Omega_{DM}$=0.264, as well as the reference mass 1 M⊙, a proper PBH space density of $n_{PBH}=3.32\times 10^{19}(1+z)^{3}~{\rm Gpc}^{-3}$ is obtained. For every redshift shell [z+$\Delta$z] the PBH space density is multiplied with the volume of the shell [V(z+$\Delta$z)–V(z)] and divided by the full sky area of 4$\pi$ sr in units of deg$^{2}$ to obtain the number of PBH per deg$^{2}$.
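Schematically, the bolometric-correction bookkeeping described above can be organized as in the sketch below. It uses the analytic spectral approximations quoted in this section (flat $\nu L_{\nu}$ between $\nu_{c}$ and a hard 200 keV cutoff) and, for brevity, neglects the shape of the exponential cutoff, the steep low-frequency tail below $\nu_{c}$, and the Lyman-$\alpha$ break relevant for the NIR band:

```python
import numpy as np

def band_fraction(nu1, nu2, z, M):
    """Fraction of the bolometric luminosity falling into the observed
    frequency band [nu1, nu2] (Hz) for a PBH of mass M [M_sun] at
    redshift z.  Flat nu*L_nu implies equal power per logarithmic
    frequency interval between nu_c and the ~200 keV cutoff."""
    nu_c  = 10 ** (14.82 - 0.4 * np.log10(M))  # synchrotron cutoff [Hz]
    nu_hi = 200e3 * 2.418e14                   # 200 keV in Hz
    lo, hi = nu1 * (1 + z), min(nu2 * (1 + z), nu_hi)  # rest-frame band
    if hi <= lo:
        return 0.0
    lo = max(lo, nu_c)
    return max(0.0, np.log10(hi / lo)) / np.log10(nu_hi / nu_c)

# example: observed 0.5-2 keV band (1 keV = 2.418e17 Hz), 100 M_sun, z = 20
print(band_fraction(0.5 * 2.418e17, 2.0 * 2.418e17, z=20.0, M=100.0))
```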
Figure 7 (right) shows the derived surface brightness as a function of redshift (per $\Delta$z=1 interval) for the three spectral bands considered here. The emission in all three bands peaks around z$\approx$20 with a FWHM of $\Delta$z$\approx$[$-$3;+6].

Figure 7: Left: The bolometric correction, i.e. the fraction of the total luminosity falling into the respective observed frequency band as a function of redshift, for the 2–5 $\mu$m NIR band, as well as the 0.5–2 and 2–10 keV X–ray bands. Right: Predicted surface brightness of the PBH in the same observed bands as a function of redshift (per $\Delta$z=1).

The curves in figure 7 (right) can now be integrated to predict the total PBH contribution to the extragalactic background light as SB$_{2-5\mu m}\approx 10^{-13}$, SB$_{0.5-2\,keV}\approx 1.9\times 10^{-13}$, and SB$_{2-10\,keV}\approx 1.3\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, respectively. The minimum amount of X–ray surface brightness necessary to explain the CXB/CIB cross-correlation signal observed by [23] in the 0.5–2 keV band has been discussed by [29]. This is $9\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, corresponding to roughly 1% of the total CXB signal in this band. The 0.5–2 keV PBH contribution predicted for an accretion eigenvalue of $\lambda$=0.05 in equation (3.11) is thus about a factor of 2 larger than the observed CXB fluctuation signal, which could well be consistent, given the coherence between the CXB and CIB signals. As discussed above, there is a marginally significant diffuse CXB remaining after accounting for all discrete source contributions [31, 34]. Extrapolating into the X–ray bands considered here, this residual flux corresponds to $\approx$(7$\pm$3) and $\approx$(9$\pm$20)$\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ in the 0.5–2 keV and 2–10 keV band, respectively. Assuming the $\lambda$=0.05 value, the predicted PBH contribution is therefore well below the upper limit (15–25%) of any unresolved component left in the CXB. The main result of this paper is therefore that the assumed PBH population for the dark matter can indeed explain the X–ray fluctuation signal, with a Bondi accretion eigenvalue of $\lambda$=0.05.

The flux measured in the 2–5 $\mu$m CIB fluctuations at angular scales >100″ is about 1 nW m$^{-2}$ sr$^{-1}$ [68], or 3$\times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$. The cross-correlated CIB/CXB fluctuations contribute about 10% to the total CIB fluctuations [23], i.e. 3$\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$. Therefore the predicted PBH contribution to these CIB fluctuations is only about 0.5% for $\lambda$=0.05. It is argued in [6] that PBH in the early universe could amplify the cosmic power spectrum at small spatial scales (see below). Together with the pre-ionization of the intergalactic medium discussed below, the PBH can therefore significantly increase the associated star formation. The NIR emission in this picture would then be dominated by early star formation associated with PBH instead of direct PBH emission.

## 6 Discussion

### 6.1 Linear versus post-linear growth

In this simplified treatment I only consider the linear evolution of the power spectrum above the virialization redshift around z$\approx$20 (see figure 2 right). On sufficiently large scales the initial power spectrum has been very precisely determined as nearly scale invariant with overdensities of $10^{-4}$ [35], and the PBH density field is expected to follow the standard adiabatic perturbations.
On small scales the power spectrum is only poorly constrained and could be significantly amplified by the discrete nature of the PBH population itself [6, 69, 70]. Poisson variations in the density of PBH will introduce non-linear growth of density fluctuations and the corresponding velocity dispersion already well before the virialization redshift z$\sim$20 discussed above. From numerical simulations, however, [70] conclude that the non-linear velocity perturbations introduced by >20 M⊙ PBH are too small to dominate the relative velocities between baryons and PBH at z$\gtrsim$100 [see also 71]. Non-linear effects definitely become more important at lower redshifts (see above) and could effectively reduce the Bondi capture rate.

### 6.2 Magnetic fields in the early universe

The accretion mechanism assumed in the Bondi capture model only works if there is a rough equipartition between the kinetic energy and magnetic fields in the accreted gas, as is the case in the turbulent interstellar medium of our Galaxy. It is therefore justified to ask whether this mechanism can also work at high redshifts, where the existence and magnitude of magnetic fields is still unclear. Magnetic fields are present at almost every scale of the low-redshift universe, from stars and planets to galaxies and clusters of galaxies, and possibly even in the intergalactic medium in voids of the cosmic web, as well as in high-redshift galaxies. [72] and [73] review the observations and possible origin of magnetic fields. There is a surprising similarity between the relatively strong magnetic fields measured in our own Galaxy (0.3–0.4 nT) and other nearby galaxies ($\sim$1 nT) with magnetic fields measured in clusters of galaxies (0.1–1 nT), as well as in high redshift galaxies ($\sim$1 nT), when the universe was only about 1/3 of its current age. There are even indications of magnetic fields of order $\gtrsim 10^{-20}$ T in cosmic voids derived from the gamma ray emission of blazars [74]. One can conclude that the origin of cosmic magnetism on the largest scales of galaxies, galaxy clusters and the general intergalactic medium is still an open problem [75]. It is usually assumed that primordial or cosmic seed fields are amplified over time through the galactic dynamo effect to produce the rather strong fields observed in local galaxies. In this picture it is, however, unclear how similar fields can be created in such different settings (e.g. clusters) and at different cosmic times (high-redshift galaxies). An interesting possibility is therefore that cosmic magnetic fields could be remnants from the early universe, or created in a process without galactic dynamos. Assuming equipartition, the energy density in the CMB photons would correspond to a magnetic field of about 0.3 nT. Magnetic fields of $10^{-20}$ T, as observed in cosmic voids today, would only require a minute fraction of $10^{-10}$ of this energy density in the early universe to be channeled into magnetic fields.

Figure 8: Left: The Bondi radius for a $10^{4}$ M⊙ (thin blue) and 1 M⊙ (thick blue) PBH compared to the proton (red) and electron (green) Larmor radius, assuming a magnetic field of B=$10^{-20}$ T, as observed in local galaxy voids. Right: Baryon ionization/heating fraction $\chi_{e}$ as a function of redshift. The thin dash-dotted line shows the residual ionization left over from the radiation dominated era [76]. The red curve shows the ionization fraction from UV photons produced by accreting PBH.
The blue curve shows the corresponding heating fraction by >1 keV X–ray photons. The thick dashed black line shows one of the models consistent with the Planck satellite data [35] (see text). The green hatched area shows the range of high-redshift ionization fractions considered in [16].

Here I argue that PBH could play a role in amplifying early magnetic seed fields and sustaining them until the epoch of galaxy formation. I compare the Bondi radius in eq. (3.12) and figure 8 (left) with the Larmor radius

$r_{L}={\frac{m~v_{\bot}}{|q|~B}},$ (6.1)

which determines the gyro motion of particles moving through a magnetic field. Here $m$ is the mass of the particle (either proton or electron), $v_{\bot}$ is the velocity component of the particle perpendicular to the magnetic field, $|q|$ is the absolute electric charge of the particle, and $B$ is the magnetic field. Assuming a seed field of $B$=$10^{-20}$ T and approximating the velocity with the sound speed $v_{\bot}\approx c_{s}$ yields the gyro radius for both protons and electrons. The proton gyro radius is about a factor of 2000 larger than the electron gyro radius. Figure 8 (left) shows the Bondi radius as well as the proton and electron Larmor radii as a function of redshift. If the gyro radius is smaller than the Bondi radius, the respective particle is easily accreted by the PBH. If, however, the gyro radius is larger than the Bondi radius, the particle will at first not be easily accreted, but rather spiral around the PBH. From figure 8 (left) we see that, for our assumed magnetic field strength, at redshifts z$\gtrsim 70$ and PBH masses in the range $M_{PBH}\approx$0.3–500 M⊙ the proton Larmor radius is larger than the Bondi radius, while the electron Larmor radius is smaller than the Bondi radius. There is still a substantial fraction of residual electrons and protons/helium ions from the era before recombination (see the dash-dotted curve in figure 8 (right), from [76]). The electrons therefore have no problem being accreted, while for certain PBH masses protons resist the accretion. This will create a net electric current, which in turn will increase the average magnetic field strength until the proton gyro radius becomes smaller than the Bondi radius. In this way the PBH can amplify the average magnetic field. The supersonic motion between baryon gas and PBH discussed above is expected to be coherent over large scales (of the order of Mpc) and can therefore induce large-scale ordered magnetic fields. A further magnetic field amplification occurs, as discussed above, in the accretion funnel, when magnetic fields are dissipated through reconnection and ejected with the plasma. In a sense, the ubiquitous PBH can possibly create their own magnetic field and distribute it throughout the universe. It is, however, plausible to assume that magnetic fields in the early universe should be smaller than today, and that the fraction of ionized baryons is smaller as well. This could also explain the rather small Bondi accretion eigenvalue required to match the observations.

### 6.3 Re-Ionization

Next I turn to the contribution of PBH accretion to the re-ionization and re-heating history of the universe. At z$\approx$1089, when the photons decoupled from the baryons and formed the CMB radiation, the universe became predominantly neutral.
Afterwards the universe entered a long period of “darkness”, in which the residual ionization left over from the primordial photon-baryon fluid diminished (see figure 8 right), the background photons cooled down, and any higher-frequency emission was quickly absorbed in the atomic gas. In the model described here the first sources to illuminate the “dark ages” would be the PBH accreting from the surrounding gas. Their ultraviolet emission, above the hydrogen ionization energy of 13.6 eV, would start to re-ionize small regions around each PBH. However, in the beginning the density of the surrounding gas is still so high that the ionized matter quickly recombines. As long as the recombination time is much shorter than the Hubble time at the corresponding epoch, UV photons from the PBH cannot penetrate the surrounding medium, but instead produce an ionized bubble growing with time. In this type of ionization equilibrium the number of ionizing photons $N_{ion}$ required to overcome recombination is given as the ratio between the Hubble time $t_{H}(z)$ and the recombination time $t_{rec}(z)$ at the particular epoch, and can be derived from equations (2) and (3) in [77] as

$N_{ion}=t_{H}/t_{rec}=\max\left[1,~0.59~\left(\frac{1+z}{7}\right)^{1.5}\right].$ (6.2)

At a redshift z=1000, $N_{ion}$ is about 1000, and reaches a level of unity at z$\lesssim$10 for the assumed set of parameters. For this calculation I ignore clumping of the ionized gas. In reality the effective clumping factor is relatively large for reionization at high redshift, because the ionizing sources are more numerous in the filaments of the cosmic web, but must also ionize a correspondingly larger fraction of dense gas in the filaments, and thus ionization is slowed down. At lower redshift, when molecular gas and stars have already formed, not all UV photons will escape the dense regions. The effective escape fraction is one of the largest uncertainties in our current understanding of re-ionization. For simplicity, I assume an escape fraction $f_{esc}$=0.1 for UV photons, and $f_{esc}$=1 for X–ray photons, independent of redshift.

To calculate the history of pre-ionization by PBH I integrate the above normalized ADAF model for frequencies $\log(\nu/{\rm Hz})>15.52$, corresponding to the hydrogen ionization energy of 13.6 eV. To calculate the number of ionizing photons per PBH of reference mass 1 M⊙ I take the density-weighted luminosity $\langle L_{bol}\rangle^{*}$ from figure 5 (right). To determine the average space density of ionizing photons I multiply with the average proper space density of PBH (assuming the reference mass 1 M⊙),

$n_{PBH}=1.06\times 10^{-54}~\left(\frac{1+z}{1000}\right)^{3}~{\rm cm}^{-3},$ (6.3)

and with the escape fraction $f_{esc}$, and finally divide by $N_{ion}$ from eq. (6.2) and the average density of baryons given in equation (3.10) to determine the ionization rate of baryons over cosmic time. The red curve in figure 8 (right) shows the cumulative ionization fraction $\chi_{e}$ as a function of redshift for the accretion eigenvalue $\lambda$=0.05. A maximum cumulative ionization fraction of $\sim$2.8% is reached at a redshift z$\approx$10. This can be compared to one of the recent models determined from the Planck satellite data [35]. Here the 1$\sigma$ upper percentile of the FlexKnot model in their figure 45, which is consistent with the ionization optical depth determined from the most recent Planck data, is shown as dashed curve.
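The ionization bookkeeping of this section can be sketched as follows. The luminosity input and the time-step function are placeholders (in the text the density-weighted ionizing luminosity comes from the data behind figure 5), so the printed number is illustrative only:

```python
import numpy as np

E_ion, f_esc = 13.6 * 1.602e-12, 0.1   # H ionization energy [erg]; UV escape

def N_ion(z):
    """Eq. (6.2): ionizing photons needed per net ionization."""
    return max(1.0, 0.59 * ((1 + z) / 7.0) ** 1.5)

def n_pbh(z):
    """Eq. (6.3): PBH space density [cm^-3], 1 M_sun equivalent."""
    return 1.06e-54 * ((1 + z) / 1000.0) ** 3

def n_baryon(z):
    """Eq. (3.10): baryon number density [cm^-3]."""
    return 250.0 * ((1 + z) / 1000.0) ** 3

def chi_e(zgrid, L_ion, dt):
    """Cumulative ionization fraction: ionizing-photon rate per baryon,
    integrated over cosmic time.  zgrid descending; L_ion(z) [erg/s] per
    1 M_sun-equivalent PBH; dt(z) [s] per redshift bin."""
    chi = 0.0
    for z in zgrid:
        rate = L_ion(z) / E_ion * n_pbh(z) * f_esc   # photons s^-1 cm^-3
        chi += rate * dt(z) / (N_ion(z) * n_baryon(z))
    return min(chi, 1.0)

# illustrative only: constant luminosity, matter-dominated time steps
dt_md = lambda z: 8.2e17 * (1 + z) ** -2.5           # seconds per dz = 1
print(chi_e(np.arange(800, 9, -1), lambda z: 1e31, dt_md))
```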
A high-redshift contribution to the ionization history of the universe has also been discussed by [78] and [16]. The range of $\chi_{e}$ values assumed in the latter work is shown as green hatched region in figure 8 (right). For the choice of $\lambda$=0.05, the UV emission from the PBH population assumed in the toy model is therefore fully consistent with the observational constraints from Planck.

### 6.4 X–ray heating

The role of X–ray heating in shaping the early universe has been discussed by [79]. Compared to UV photons, X–ray photons have a much larger mean free path and can therefore ionize regions far away from the source. In addition, most of the X–ray energy gets deposited into heating up the gas. In order to estimate the amount of X–ray heating of the gas I applied the same mechanism as for the UV photons above, but integrating the above ADAF model for frequencies $\log(\nu/{\rm Hz})>17.68$, corresponding to 2 keV. I assume an escape fraction of $f_{esc}$=1 and $N_{ion}$=1. The blue curve in figure 8 (right) shows the cumulative 2 keV heating fraction per baryon as a function of redshift for the assumed accretion eigenvalue of $\lambda$=0.05. The maximum cumulative heating fraction is $\sim$1.6%. X–rays from PBH therefore have only a small contribution to the pre-ionization of the universe as a whole, but can be responsible for a part of the pre-heating of gas observed in the “entropy floor” of groups of galaxies. [80] reviewed the energetics of groups and clusters of galaxies, which cannot be reproduced by simple models in which the gas density is proportional to the dark matter density. [81] and [82] argued that the gas must have been pre-heated before falling into the cluster potential. X–ray observations of groups of galaxies with ROSAT by [83] confirmed the need for a non-gravitational entropy injection in the group gas. These authors coined the term “entropy floor”, which amounts to an energy of about 2 keV per baryon injected into the group gas. The pre-heating of the gas by PBH, albeit affecting only a small fraction of the total baryon content of the universe, could have played an important role in heating the high-density regions which first formed groups and clusters.

### 6.5 Cosmological 21-cm signal

Figure 9: Density-weighted 1.4 GHz (observed) luminosity of a single PBH as a function of mass for different redshifts indicated.

The redshifted 21-cm line can provide important new constraints on the physical processes in the early universe [see e.g. 84, 8]. The Experiment to Detect the Global EoR Signature (EDGES) has measured a strong, sky-averaged 21-cm absorption line profile after subtracting the Galactic synchrotron emission [85]. The signal is centered at a frequency around 78 MHz and covers a broad range in redshift z=15–20. The signal may be due to ultraviolet light from the first objects in the universe altering the emission of the 21-cm line by lowering the spin temperature of neutral hydrogen relative to the CMB. However, the signal is about three times larger than that expected from the standard $\Lambda$CDM cosmology, which led some authors to suggest new dark matter physics [e.g. 86]. Instead of new physics, an increased 21-cm radio background contribution above the CMB at the epoch around z=15–20 could also explain the EDGES signal. Indeed, [87] estimate the additional 21-cm radio background from accretion onto putative radio-loud intermediate-mass black holes (IMBH) forming in first molecular cooling halos at redshifts z=16–30.
This could be sufficient to explain the EDGES feature; however, it requires extreme assumptions about the radio loudness of the IMBH population. Instead of assuming an interpretation in terms of mini-QSOs from IMBH grown through accretion, I estimate here whether PBH accretion could have a significant contribution to the EDGES signal. A full treatment of this effect for the PBH toy model is beyond the scope of this paper, but similar to the treatment of the PBH contribution to the CXB and CIB derived in section 5, I can estimate the PBH contribution to the observed low-frequency cosmic radio background, and thus to the EDGES signal.

The balloon-borne double-nulled Absolute Radiometer for Cosmology, Astrophysics and Diffuse Emission (ARCADE2) instrument has measured the absolute temperature of the sky at frequencies 3, 8, 10, 30, and 90 GHz, and detected a significant excess over the CMB blackbody spectrum at a temperature of 2.731 K [88]. Combining the ARCADE2 measurements with lower frequency data from the literature, the excess brightness temperature can be characterized as a power law $T_{R}$=1.19 ($\nu$/1 GHz)$^{-2.62}$ K, which translates into a radio spectrum with a slope of $-$0.62 and a normalization of 3$\times 10^{-22}$ W m$^{-2}$ Hz$^{-1}$ sr$^{-1}$ at 1.4 GHz. This cosmic radio synchrotron background is substantially larger than that expected from an extrapolation of the observed radio counts [89], and thus presents a major new challenge in astrophysics. [90] found that the global 21-cm signal can be significantly amplified by an excess background radiation compared to the standard $\Lambda$CDM models, especially in absorption. Assuming that only 10% of the radio synchrotron background originates at large redshifts, they predict a 21-cm feature almost an order of magnitude stronger than that expected purely from the CMB. Interpolating between their calculations for 0.1% and 10% excess background I find that an excess high-redshift radiation field of about 5% of the observed radio synchrotron background is sufficient to explain the EDGES findings.

In order to calculate the expected PBH contribution to the radio background I assume that each black hole has a radio emission following the fundamental plane relation between X-ray luminosity, radio luminosity and black hole mass found by [91]. I use the parameterization for radio-quiet AGN from [92]: $\log(L_{R})=0.85\log(L_{X})+0.12\log(M_{PBH})$, where $L_{R}$ is the 1.4 GHz radio luminosity in units of $10^{40}$ erg/s, $L_{X}$ is the 0.1–2.4 keV X–ray luminosity in units of $10^{44}$ erg/s, and $M_{PBH}$ is the PBH mass in units of $10^{8}$ M⊙. The X–ray luminosity is calculated from the bolometric luminosity shown in figure 5 (right). Assuming the ADAF radiation spectrum above, the fraction of the bolometric luminosity radiated in the 0.1–2.4 keV band is 0.23. For the radio spectrum I assume a power law with spectral index $-$0.62. This means that the bolometric correction is $\propto$(1+z)$^{0.38}$. The radio luminosities derived this way as a function of PBH mass and redshift are shown in figure 9. Multiplying these luminosities with the PBH density over cosmic time, converting into observed fluxes and integrating over mass and redshift, I obtain a contribution of radio-quiet PBH to the observed radio background of $\sim$3$\times 10^{-25}$ W m$^{-2}$ Hz$^{-1}$ sr$^{-1}$ at 1.4 GHz, i.e. a fraction of 0.1% of the observed synchrotron radio background. Most of this additional radiation field is accumulated at redshifts z$\gtrsim$20.
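The prescription used for these numbers, i.e. the fundamental-plane radio luminosity and the power-law K-correction, can be written compactly; the bolometric luminosity in the example is a hypothetical value, not one taken from figure 5:

```python
import numpy as np

def radio_luminosity_1p4GHz(L_X, M_pbh):
    """Fundamental-plane parameterization for radio-quiet AGN [92], as
    quoted in the text: log(L_R/1e40) = 0.85 log(L_X/1e44)
    + 0.12 log(M/1e8), with L_R (1.4 GHz) and L_X (0.1-2.4 keV) in
    erg/s and the mass in M_sun."""
    logLR = 0.85 * np.log10(L_X / 1e44) + 0.12 * np.log10(M_pbh / 1e8)
    return 1e40 * 10 ** logLR

def observed_flux(L_R, D_L_cm, z, alpha=-0.62):
    """Observed 1.4 GHz flux density with the power-law K-correction
    (1+z)^(1+alpha), i.e. the (1+z)^0.38 factor quoted in the text."""
    return L_R * (1 + z) ** (1 + alpha) / (4 * np.pi * D_L_cm ** 2)

# example with a hypothetical bolometric luminosity of 1e35 erg/s;
# the 0.23 band fraction for 0.1-2.4 keV is the one quoted above
print(radio_luminosity_1p4GHz(0.23 * 1e35, 100.0))
```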
Following [90], this excess radio flux would increase the depth of the 21-cm absorption line only by about 30%. If, however, some fraction of the PBH were radio-loud (e.g. 5% with 1000 times higher fluxes), as observed in the AGN population, the 5% excess high-redshift radio background flux necessary to explain the EDGES feature could be easily achieved by PBH.

### 6.6 Primordial Black Holes in the Galactic Center

Next I discuss some observational effects of the putative PBH population residing in the Galactic Center region. First, assuming a Milky Way dark matter halo of $\sim 10^{12}M_{\odot}$, the PBH mass spectrum from section 2 (figure 1) indeed predicts about one supermassive PBH with a mass $\gtrsim 10^{6.5}M_{\odot}$, consistent with the Sgr A∗ SMBH in the center of our Galaxy [93]. To estimate the density of dark matter and baryons in the Galactic bulge region itself, I refer to dynamical models of the Milky Way’s center, using the density of red clump giant stars measured in infrared photometric surveys, as well as kinematic radial velocity measurements of M-giant stars in the Galactic bulge/bar, constructed in [94]. From N–body simulations of stellar populations for barred spiral discs in different dark matter halos these authors were able to determine with high precision the mass in a volume of ($\pm 2.2\times\pm 1.4\times\pm 1.2$ kpc$^{3}$) centered on the Galactic Bulge/Bar. The total mass is (1.84$\pm$0.07)$\times 10^{10}$ M⊙. Depending on the assumed model, about 9–30% consists of dark matter, i.e. 1.7–5.5$\times 10^{9}$ M⊙. Applying the above PBH mass spectrum, we thus expect 5–10 intermediate-mass PBH with $M_{PBH}>10^{4}$ M⊙ in the Galactic bulge region, but zero with $M_{PBH}>10^{5}$ M⊙. Recent high-resolution observations of high-velocity compact clouds (HVCC) in the central molecular zone of our Milky Way with the Atacama Large Millimeter/submillimeter Array (ALMA) have indeed identified five promising IMBH candidates, wandering through the patchy ISM in the Galactic Center [see 95]. The most compelling case is HCN–0.044–0.009, which shows two dense molecular gas streams in Keplerian orbits around a dark object with a mass $M_{IMBH}$=(3.2$\pm$0.6)$\times 10^{4}$ M⊙ [96]. The semimajor axes of these Keplerian streams are around 2 and 5$\times 10^{17}$ cm. Another interesting case is the infrared and radio object IRS13E, a star cluster close to the Galactic Center potentially hosting an IMBH [97]. ALMA observations identified a rotating, ionized gas ring around IRS13E [98], with an orbit radius of 6$\times 10^{15}$ cm and a rotation velocity of $\sim$230 km/s. This is thus another promising IMBH candidate, with a mass of $M_{IMBH}$=2.4$\times 10^{4}$ M⊙. Two of the five IMBH candidate sources in [95] are possibly associated with X–ray sources detected in the deep Chandra images of the Galactic Center [99]. IRS13E has the X–ray counterpart CXOGC 174539.7-290029 with an X–ray luminosity $L_{2-10keV}\approx 3\times 10^{30}$ erg/s, and CO–0.31+0.11 has the possible X–ray counterpart CXOGC 174426.3-290816 with an X–ray luminosity $L_{2-10keV}\approx 4\times 10^{29}$ erg/s. The other three sources have X–ray upper limits in the range of several $10^{30}$ erg/s. Assuming a bolometric correction factor of 1/30 for the 2–10 keV range, the combination of the mass accretion eigenvalue $\lambda$ and the radiative efficiency $\eta$ therefore has to be extremely small, on the order of 3$\times 10^{-11}$. This is about a factor of 100 lower than the 2$\times 10^{-9}~L_{Edd}$ derived for the Galactic Center black hole Sgr A∗ [55].
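This number can be verified with a back-of-envelope check from the IRS13E values quoted above (candidate mass, X-ray counterpart luminosity, and the assumed bolometric correction of 30):

```python
# Back-of-envelope Eddington ratio for the IRS13E IMBH candidate,
# using only values quoted in the text.
M_imbh = 2.4e4                  # M_sun, IRS13E candidate mass
L_x    = 3e30                   # erg/s, 2-10 keV counterpart luminosity
L_bol  = 30.0 * L_x             # assumed bolometric correction of ~30
L_edd  = 1.26e38 * M_imbh       # erg/s, Eddington luminosity

print(f"L_bol/L_Edd ~ {L_bol / L_edd:.1e}")   # ~3e-11, as quoted
```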
Even assuming a very low efficiency ADAF model, a steady-state accretion solution is unlikely for these objects. The solution of this puzzle may come from the fact that the velocity and density gradients of the gas in the Galactic Center region are so large that the angular momentum forces any accreted matter into Keplerian motion well outside the classical Bondi radius [see 37]. Indeed, the orbital periods and lifetimes of the Keplerian streams around HVCCs are in the range $10^{4-5}$ years, and thus accretion is expected to be highly variable on very long time scales. Another possibility to halt accretion for a considerable time is the feedback created by outflows during efficient accretion events. Numerical simulations of the gas dynamics in the center of the Galaxy [100] show that the outflows significantly perturb the gas dynamics near the Bondi radius and thus substantially reduce the capture rate. The net result of both these effects would be a highly variable, low duty cycle, bursty accretion onto the IMBH and SMBH in the Galactic Center, consistent with the extremely low accretion efficiencies observed. The accretion limits for black holes in the mass range $M_{PBH}$=20–100 M⊙ derived from deep Chandra and radio observations of the Galactic Center [45] are already shown in figure 1 to be consistent with the assumed PBH mass spectrum. Recent NuSTAR observations of the Galactic Center, including the effects of gas turbulence and the uncertainties related to the dark matter density profile, even further weaken these constraints [101]. At any rate, the assumed PBH mass distribution of section 2 is fully consistent with the observational constraints for all PBH masses >20 M⊙ in the Galactic Center.

Finally, I check the PBH predictions for lower masses against the Galactic ridge X–ray emission (GRXE), an unresolved X–ray glow at energies above a few keV discovered almost 40 years ago and found to be coincident with the Galactic disk. The GRXE in the 2–10 keV band has a background-corrected surface brightness of (7.1$\pm$0.5)$\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, which was largely resolved into discrete sources [102], with the brightest source having an X–ray luminosity of about $10^{32}$ erg s$^{-1}$, and the minimum detectable luminosity around $10^{30}$ erg s$^{-1}$. The integrated emission has a strong iron line from hot plasma at 6.7 keV, and the authors interpret the X–ray emission as coming from a large population of cataclysmic variables and coronally active stars. Using the mass determination in the Galactic bulge/bar above, I find that the average baryon density in this region is in the range 17–22 cm$^{-3}$. However, most of these baryons are locked up in stars. In order to estimate the physical conditions of the gas in the Galactic Bulge/Bar region I follow [103]. According to these authors, there are four phases of the interstellar medium in the Galactic center region: (1) a cold molecular phase in Giant Molecular Clouds with temperatures around 50 K and gas densities $n=10^{3.5-4}$ cm$^{-3}$, covering a volume fraction around 1%; (2) a warm molecular phase with temperatures around 150 K and gas density $n=10^{2.5}$ cm$^{-3}$, covering a volume fraction of $\sim$10%; (3) an atomic phase with temperatures around 500–1000 K and density $\sim$10 cm$^{-3}$, covering a volume fraction around 70%; and (4) ionized gas with temperatures $10^{4-8}$ K and an unknown density. Depending on the temperature of the interstellar medium, the sound speeds are in the range $c_{s}$=1–100 km/s.
The stellar velocity dispersion in the central region of our Galaxy is in the range 100–140 km/s [104], while the dark matter velocity dispersion is about 110 km/s [105]. In the spirit of the discussion leading up to equation (3.9) and figure 2 (right) above, I assume an effective velocity for Bondi accretion of $v_{eff}\approx$50 km/s and $\lambda$=0.1. As shown in figures 5 and 6, the PBH emissivity for the assumed mass spectrum is typically dominated by objects with $M_{PBH}>100$ M⊙, which were already discussed above. Indeed, calculating the Bondi accretion rates and radiative efficiencies for objects with $M_{PBH}<100$ M⊙ for the four ISM phases in the Galactic Center, I obtain negligible PBH contributions to the total GRXE brightness. Some individual $M_{PBH}\sim$100 M⊙ objects in high density regions could in principle have X–ray luminosities up to $L_{2-10keV}$=$10^{33}$ erg/s, more luminous than the brightest X–ray source detected in the Galactic Ridge survey [102], but taking into account the strong variability and small duty cycle expected for this class of objects, their absence in the surveys is understandable. Some of the fainter unidentified sources in the current deep X–ray surveys could indeed be accreting PBH, and future large X–ray observatories like ATHENA [106] or LYNX [107] should be able to identify more. See also [108] for future searches in the IR and sub-mm region.

## 7 Conclusions and Outlook

The interpretation of cold dark matter as the sum of contributions of different mass PBH families [1] could explain a number of so far unsolved mysteries, like e.g. the massive seed black holes required to create the supermassive black holes in the earliest QSOs [13], the ubiquitous massive LIGO/VIRGO binary black holes [e.g. 6], or even the putative “Planet X” PBH in our own Solar System [14]. The most abundant family of PBH should be around the Chandrasekhar mass (1.4 M⊙). This prediction may already have been vindicated by the recent OGLE/GAIA discovery of a sizeable population of putative black holes in the mass range 1–10 M⊙ [15]. Here I estimate the contribution of baryon accretion onto the overall PBH population to various cosmic background radiations, concentrating first on the cross-correlation signal between the CXB and the CIB fluctuations discovered in deep Chandra and Spitzer surveys [23]. Assuming Bondi capture and advection-dominated disk accretion with reasonable parameters like baryon density and the effective relative velocity between baryons and PBH over cosmic time, as well as appropriate accretion and radiation efficiencies, I indeed predict a contribution of PBH consistent with the residual X–ray fluctuation signal. This signal peaks at redshifts z$\approx$17–30. The PBH contribution to the 2–5 $\mu$m CIB fluctuations, however, is only about 1%, so that these would have to come from star formation processes associated with the PBH. I discuss a number of other phenomena which could be significantly affected by the PBH accretion. Magnetic fields are an essential ingredient in the Bondi accretion process, and I argue that the PBH can play an important role in amplifying magnetic seed fields in the early universe and maintaining them until the galactic dynamo processes set in. Next I study the contribution of the assumed PBH population to the re-ionization history of the universe and find that it does not conflict with the stringent ionization limits set by the most recent Planck measurements [35].
X–ray heating from the PBH population can provide a contribution to the entropy floor observed in groups of galaxies [83]. The tantalizing redshifted 21-cm absorption line feature observed by EDGES [85] could well be connected to the radio emission contributed by PBH to the cosmic background radiation. Finally, the number of IMBH and the diffuse X–ray emission in the Galactic Center region are not in conflict with the PBH dark matter; on the contrary, some of the discrete sources in the resolved GRXE could be accreting PBH.

It is obvious that our simple PBH toy model for the dark matter requires significantly more work to turn it into quantitative predictions. Real magnetohydrodynamic simulations of the whole PBH mass spectrum, including their own hierarchical clustering, would be required to obtain the full history of their contribution to the cosmic backgrounds. The exciting EDGES discovery definitely requires a full-blown analysis of the radio contribution of PBH to the cosmic background. Future X–ray observations with eROSITA and ATHENA, infrared wide field surveys with Euclid and WFIRST, and microlensing observations with WFIRST will provide important additional diagnostics in this exciting and dramatically developing PBH field (see [109, 110]).

## Acknowledgments

I am thankful to Juan García-Bellido for sharing a digital copy of the new running spectral index PBH mass distribution model in figure 1 in advance of publication, as well as many very useful discussions about PBH. I am indebted to Matthias Bartelmann for computing the small-scale non-linear relative velocity dispersion (figure 2 right) and providing very valuable comments and corrections to the manuscript. I would like to thank Sergey Karpov for very helpful discussions and inputs about their spherical accretion model. I would also like to thank my colleagues Nico Cappelluti, Sasha Kashlinsky and Alexander Knebe for very helpful discussions and contributions. Finally, I thank an anonymous referee for pointing out a substantial flaw in the first version of the paper, which has been corrected here and led to significant improvements. Throughout this work I made use of Ned Wright’s cosmology calculator [111] and the NASA Astrophysics Data System (ADS), operated by the Smithsonian Astrophysical Observatory under NASA Cooperative Agreement NNX16AC86A.

## References

* [1] J. García-Bellido, _Primordial black holes and the origin of the matter–antimatter asymmetry_, _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_ 377 (2019) 91.
* [2] S. Hawking, _Gravitationally collapsed objects of very low mass_, _Monthly Notices of the RAS_ 152 (1971) 75.
* [3] B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, F. Acernese, K. Ackley et al., _Binary Black Hole Mergers in the First Advanced LIGO Observing Run_, _Physical Review X_ 6 (2016) 041015 [1606.04856].
* [4] B. P. Abbott, R. Abbott, T. D. Abbott, S. Abraham, F. Acernese, K. Ackley et al., _Binary Black Hole Population Properties Inferred from the First and Second Observing Runs of Advanced LIGO and Advanced Virgo_, _Astrophysical Journal, Letters_ 882 (2019) L24 [1811.12940].
* [5] S. Bird, I. Cholis, J. B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E. D. Kovetz et al., _Did LIGO Detect Dark Matter?_, _Physical Review Letters_ 116 (2016) 201301 [1603.00464].
* [6] A.
Kashlinsky, _LIGO Gravitational Wave Detection, Primordial Black Holes, and the Near-IR Cosmic Infrared Background Anisotropies_ , _Astrophysical Journal, Letters_ 823 (2016) L25 [1605.04023]. * [7] S. Clesse and J. García-Bellido, _The clustering of massive Primordial Black Holes as Dark Matter: Measuring their mass distribution with advanced LIGO_ , _Physics of the Dark Universe_ 15 (2017) 142 [1603.05234]. * [8] O. Mena, S. Palomares-Ruiz, P. Villanueva-Domingo and S. J. Witte, _Constraining the primordial black hole abundance with 21-cm cosmology_ , _Physical Review D_ 100 (2019) 043540 [1906.07735]. * [9] J. García-Bellido, B. Carr and S. Clesse, _A common origin for baryons and dark matter_ , _arXiv e-prints_ (2019) arXiv:1904.11482 [1904.11482]. * [10] C. T. Byrnes, M. Hindmarsh, S. Young and M. R. S. Hawkins, _Primordial black holes with an accurate QCD equation of state_ , _Journal of Cosmology and Astroparticle Physics_ 2018 (2018) 041 [1801.06138]. * [11] J. García-Bellido, _Massive Primordial Black Holes as Dark Matter and their detection with Gravitational Waves_ , in _Journal of Physics Conference Series_ , vol. 840 of _Journal of Physics Conference Series_ , p. 012032, May, 2017, DOI [1702.08275]. * [12] K. M. Belotsky, V. I. Dokuchaev, Y. N. Eroshenko, E. A. Esipova, M. Y. Khlopov, L. A. Khromykh et al., _Clusters of Primordial Black Holes_ , _European Physical Journal C_ 79 (2019) 246 [1807.06590]. * [13] Y. Li, L. Hernquist, B. Robertson, T. J. Cox, P. F. Hopkins, V. Springel et al., _Formation of z ~6 Quasars from Hierarchical Galaxy Mergers_, _Astrophysical Journal_ 665 (2007) 187 [astro-ph/0608190]. * [14] J. Scholtz and J. Unwin, _What if Planet 9 is a Primordial Black Hole?_ , _arXiv e-prints_ (2019) arXiv:1909.11090 [1909.11090]. * [15] Ł. Wyrzykowski and I. Mandel, _Constraining the masses of microlensing black holes and the mass gap with Gaia DR2_ , _arXiv e-prints_ (2019) arXiv:1904.07789 [1904.07789]. * [16] A. Kashlinsky, R. G. Arendt, F. Atrio-Barandela, N. Cappelluti, A. Ferrara and G. Hasinger, _Looking at cosmic near-infrared background radiation anisotropies_ , _Reviews of Modern Physics_ 90 (2018) 025006 [1802.07774]. * [17] A. Kashlinsky, R. G. Arendt, J. Mather and S. H. Moseley, _Tracing the first stars with fluctuations of the cosmic infrared background_ , _Nature_ 438 (2005) 45 [astro-ph/0511105]. * [18] A. Kashlinsky, R. G. Arendt, J. Mather and S. H. Moseley, _New Measurements of Cosmic Infrared Background Fluctuations from Early Epochs_ , _Astrophysical Journal, Letters_ 654 (2007) L5. * [19] R. G. Arendt, A. Kashlinsky, S. H. Moseley and J. Mather, _Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis_ , _Astrophysical Journal, Supplement_ 186 (2010) 10 [0909.3816]. * [20] A. Kashlinsky, R. G. Arendt, M. L. N. Ashby, G. G. Fazio, J. Mather and S. H. Moseley, _New Measurements of the Cosmic Infrared Background Fluctuations in Deep Spitzer/IRAC Survey Data and Their Cosmological Implications_ , _Astrophysical Journal_ 753 (2012) 63 [1201.5617]. * [21] T. Matsumoto, H. J. Seo, W. S. Jeong, H. M. Lee, S. Matsuura, H. Matsuhara et al., _AKARI Observation of the Fluctuation of the Near-infrared Background_ , _Astrophysical Journal_ 742 (2011) 124 [1010.0491]. * [22] K. Helgason, M. Ricotti and A. 
Kashlinsky, _Reconstructing the Near-infrared Background Fluctuations from Known Galaxy Populations Using Multiband Measurements of Luminosity Functions_ , _Astrophysical Journal_ 752 (2012) 113 [1201.4398]. * [23] N. Cappelluti, A. Kashlinsky, R. G. Arendt, A. Comastri, G. G. Fazio, A. Finoguenov et al., _Cross-correlating Cosmic Infrared and X-Ray Background Fluctuations: Evidence of Significant Black Hole Populations among the CIB Sources_ , _Astrophysical Journal_ 769 (2013) 68 [1210.5302]. * [24] N. Cappelluti, R. Arendt, A. Kashlinsky, Y. Li, G. Hasinger, K. Helgason et al., _Probing Large-scale Coherence between Spitzer IR and Chandra X-Ray Source-subtracted Cosmic Backgrounds_ , _Astrophysical Journal, Letters_ 847 (2017) L11 [1709.02824]. * [25] Y. Li, N. Cappelluti, R. G. Arendt, G. Hasinger, A. Kashlinsky and K. Helgason, _The SPLASH and Chandra COSMOS Legacy Survey: The Cross-power between Near-infrared and X-Ray Background Fluctuations_ , _Astrophysical Journal_ 864 (2018) 141 [1807.10304]. * [26] Y. Li, N. Cappelluti, G. Hasinger, R. G. Arendt, A. Kashlinsky and F. Pacucci, _Spectral Properties of Populations Behind the Coherence in Spitzer Near-infrared and Chandra X-Ray Backgrounds_ , _Astrophysical Journal_ 883 (2019) 64 [1908.02293]. * [27] K. Helgason, N. Cappelluti, G. Hasinger, A. Kashlinsky and M. Ricotti, _The Contribution of z ≲ 6 Sources to the Spatial Coherence in the Unresolved Cosmic Near-infrared and X-Ray Backgrounds_, _Astrophysical Journal_ 785 (2014) 38 [1311.1254]. * [28] B. Yue, A. Ferrara, R. Salvaterra, Y. Xu and X. Chen, _Infrared background signatures of the first black holes_ , _Monthly Notices of the RAS_ 433 (2013) 1556 [1305.5177]. * [29] A. Ricarte, F. Pacucci, N. Cappelluti and P. Natarajan, _The clustering of undetected high-redshift black holes and their signatures in cosmic backgrounds_ , _Monthly Notices of the RAS_ 489 (2019) 1006 [1907.03675]. * [30] W. N. Brandt and G. Hasinger, _Deep Extragalactic X-Ray Surveys_ , _Annual Review of Astron and Astrophys_ 43 (2005) 827 [astro-ph/0501058]. * [31] R. C. Hickox and M. Markevitch, _Absolute Measurement of the Unresolved Cosmic X-Ray Background in the 0.5-8 keV Band with Chandra_ , _Astrophysical Journal_ 645 (2006) 95 [astro-ph/0512542]. * [32] R. C. Hickox and M. Markevitch, _Resolving the Unresolved Cosmic X-Ray Background in the Chandra Deep Fields_ , _Astrophysical Journal, Letters_ 661 (2007) L117 [astro-ph/0702556]. * [33] L. L. Cowie, A. J. Barger and G. Hasinger, _The Faintest X-Ray Sources from z = 0 TO 8_ , _Astrophysical Journal_ 748 (2012) 50 [1110.3326]. * [34] N. Cappelluti, Y. Li, A. Ricarte, B. Agarwal, V. Allevato, T. Tasnim Ananna et al., _The Chandra COSMOS Legacy Survey: Energy Spectrum of the Cosmic X-Ray Background and Constraints on Undetected Populations_ , _Astrophysical Journal_ 837 (2017) 19 [1702.01660]. * [35] Planck Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi et al., _Planck 2018 results. VI. Cosmological parameters_ , _arXiv e-prints_ (2018) arXiv:1807.06209 [1807.06209]. * [36] B. Carr, S. Clesse, J. Garcia-Bellido and F. Kuhnel, _Cosmic Conundra Explained by Thermal History and Primordial Black Holes_ , _arXiv e-prints_ (2019) arXiv:1906.08217 [1906.08217]. * [37] V. Poulin, P. D. Serpico, F. Calore, S. Clesse and K. Kohri, _CMB bounds on disk-accreting massive primordial black holes_ , _Physical Review D_ 96 (2017) 083524 [1707.04206]. * [38] Y. Ali-Haïmoud and M.
Kamionkowski, _Cosmic microwave background limits on accreting primordial black holes_ , _Physical Review D_ 95 (2017) 043534 [1612.05644]. * [39] M. Zumalacárregui and U. Seljak, _Limits on Stellar-Mass Compact Objects as Dark Matter from Gravitational Lensing of Type Ia Supernovae_ , _Physical Review Letters_ 121 (2018) 141101 [1712.02240]. * [40] Y. Ali-Haïmoud, E. D. Kovetz and M. Kamionkowski, _Merger rate of primordial black-hole binaries_ , _Physical Review D_ 96 (2017) 123523 [1709.06576]. * [41] F. Shankar, _Black hole demography: from scaling relations to models_ , _Classical and Quantum Gravity_ 30 (2013) 244001 [1307.3289]. * [42] P. Tisserand, L. Le Guillou, C. Afonso, J. N. Albert, J. Andersen, R. Ansari et al., _Limits on the Macho content of the Galactic Halo from the EROS-2 Survey of the Magellanic Clouds_ , _Astronomy and Astrophysics_ 469 (2007) 387 [astro-ph/0607207]. * [43] H. Niikura, M. Takada, N. Yasuda, R. H. Lupton, T. Sumi, S. More et al., _Microlensing constraints on primordial black holes with Subaru/HSC Andromeda observations_ , _Nature Astronomy_ 3 (2019) 524 [1701.02151]. * [44] B. P. Abbott, R. Abbott, T. D. Abbott, S. Abraham, F. Acernese, K. Ackley et al., _Search for Subsolar Mass Ultracompact Binaries in Advanced LIGO’s Second Observing Run_ , _Physical Review Letters_ 123 (2019) 161102. * [45] J. Manshanden, D. Gaggero, G. Bertone, R. M. T. Connors and M. Ricotti, _Multi-wavelength astronomical searches for primordial black holes_ , _Journal of Cosmology and Astroparticle Physics_ 2019 (2019) 026 [1812.07967]. * [46] A. Comastri, R. Gilli, A. Marconi, G. Risaliti and M. Salvati, _Mass without radiation: Heavily obscured AGNs, the X-ray background, and the black hole mass density_ , _Astronomy and Astrophysics_ 574 (2015) L10 [1501.03620]. * [47] F. Hoyle and R. A. Lyttleton, _The effect of interstellar matter on climatic variation_ , _Proceedings of the Cambridge Philosophical Society_ 35 (1939) 405. * [48] H. Bondi and F. Hoyle, _On the mechanism of accretion by stars_ , _Monthly Notices of the RAS_ 104 (1944) 273. * [49] H. Bondi, _On spherically symmetrical accretion_ , _Monthly Notices of the RAS_ 112 (1952) 195. * [50] R. Edgar, _A review of Bondi-Hoyle-Lyttleton accretion_ , _New Astronomy Review_ 48 (2004) 843 [astro-ph/0406166]. * [51] V. F. Shvartsman, _Halos around “Black Holes”._ , _Soviet Astronomy_ 15 (1971) 377. * [52] G. M. Beskin and S. V. Karpov, _Low-rate accretion onto isolated stellar-mass black holes_ , _Astronomy and Astrophysics_ 440 (2005) 223 [astro-ph/0403649]. * [53] M. Ricotti, _Bondi Accretion in the Early Universe_ , _Astrophysical Journal_ 662 (2007) 53 [0706.0864]. * [54] M. Ricotti, J. P. Ostriker and K. J. Mack, _Effect of Primordial Black Holes on the Cosmic Microwave Background and Cosmological Parameter Estimates_ , _Astrophysical Journal_ 680 (2008) 829 [0709.0524]. * [55] F. Yuan and R. Narayan, _Hot Accretion Flows Around Black Holes_ , _Annual Review of Astron and Astrophys_ 52 (2014) 529 [1401.0586]. * [56] F.-G. Xie and F. Yuan, _Radiative efficiency of hot accretion flows_ , _Monthly Notices of the RAS_ 427 (2012) 1580 [1207.3113]. * [57] S. Zaroubi, _The Epoch of Reionization_ , vol. 396 of _Astrophysics and Space Science Library_ , p. 45. 2013. 10.1007/978-3-642-32362-1. * [58] R. Cen and J. P. Ostriker, _Where Are the Baryons?_ , _Astrophysical Journal_ 514 (1999) 1 [astro-ph/9806281]. * [59] D. Tseliakhovich and C.
Hirata, _Relative velocity of dark matter and baryonic fluids and the formation of the first structures_ , _Physical Review D_ 82 (2010) 083520 [1005.2416]. * [60] R. A. Sunyaev and Y. B. Zeldovich, _Small-Scale Fluctuations of Relic Radiation_ , _Astrophysics and Space Science_ 7 (1970) 3. * [61] P. J. E. Peebles and J. T. Yu, _Primeval Adiabatic Perturbation in an Expanding Universe_ , _Astrophysical Journal_ 162 (1970) 815. * [62] A. Fialkov, _Supersonic relative velocity between dark matter and baryons: A review_ , _International Journal of Modern Physics D_ 23 (2014) 1430017 [1407.2274]. * [63] V. Springel, S. D. M. White, A. Jenkins, C. S. Frenk, N. Yoshida, L. Gao et al., _Simulations of the formation, evolution and clustering of galaxies and quasars_ , _Nature_ 435 (2005) 629 [astro-ph/0504097]. * [64] W. A. Watson, I. T. Iliev, A. D’Aloisio, A. Knebe, P. R. Shapiro and G. Yepes, _The halo mass function through the cosmic ages_ , _Monthly Notices of the RAS_ 433 (2013) 1230 [1212.0095]. * [65] E. Munari, A. Biviano, S. Borgani, G. Murante and D. Fabjan, _The relation between velocity dispersion and mass in simulated clusters of galaxies: dependence on the tracer and the baryonic physics_ , _Monthly Notices of the RAS_ 430 (2013) 2638 [1301.1682]. * [66] N. I. Shakura and R. A. Sunyaev, _Reprint of 1973A&A....24..337S. Black holes in binary systems. Observational appearance._, _Astronomy and Astrophysics_ 500 (1973) 33. * [67] M. R. Santos, V. Bromm and M. Kamionkowski, _The contribution of the first stars to the cosmic infrared background_ , _Monthly Notices of the RAS_ 336 (2002) 1082 [astro-ph/0111467]. * [68] A. Kashlinsky, R. G. Arendt, J. Mather and S. H. Moseley, _On the Nature of the Sources of the Cosmic Infrared Background_ , _Astrophysical Journal, Letters_ 654 (2007) L1 [astro-ph/0612447]. * [69] B. Carr and J. Silk, _Primordial black holes as generators of cosmic structures_ , _Monthly Notices of the RAS_ 478 (2018) 3756 [1801.00672]. * [70] D. Inman and Y. Ali-Haïmoud, _Early structure formation in primordial black hole cosmologies_ , _Physical Review D_ 100 (2019) 083528 [1907.08129]. * [71] G. Hütsi, M. Raidal and H. Veermäe, _Small-scale structure of primordial black hole dark matter and its implications for accretion_ , _Physical Review D_ 100 (2019) 083016 [1907.06533]. * [72] D. Grasso and H. R. Rubinstein, _Magnetic fields in the early Universe_ , _Physics Reports_ 348 (2001) 163 [astro-ph/0009061]. * [73] K. Subramanian, _The origin, evolution and signatures of primordial magnetic fields_ , _Reports on Progress in Physics_ 79 (2016) 076901 [1504.02311]. * [74] K. Takahashi, M. Mori, K. Ichiki, S. Inoue and H. Takami, _Lower Bounds on Magnetic Fields in Intergalactic Voids from Long-term GeV-TeV Light Curves of the Blazar Mrk 421_ , _Astrophysical Journal, Letters_ 771 (2013) L42 [1303.3069]. * [75] K. Subramanian, _Magnetic Fields in the Universe_ , _arXiv e-prints_ (2018) arXiv:1809.03543 [1809.03543]. * [76] M. Bruscoli, A. Ferrara and E. Scannapieco, _How is the reionization epoch defined?_ , _Monthly Notices of the RAS_ 330 (2002) L43 [astro-ph/0201094]. * [77] M. Ricotti and J. P. Ostriker, _Reionization, chemical enrichment and seed black holes from the first stars: is Population III important?_ , _Monthly Notices of the RAS_ 350 (2004) 539 [astro-ph/0310331]. * [78] C. Heinrich and W. Hu, _Does Planck 2015 polarization data favor high redshift reionization?_ , _Physical Review D_ 98 (2018) 063514 [1802.00791]. * [79] A. Mesinger, A. Ferrara and D. S.
Spiegel, _Signatures of X-rays in the early Universe_ , _Monthly Notices of the RAS_ 431 (2013) 621 [1210.7319]. * [80] S. Dos Santos and O. Doré, _Competition between shocks and entropy floor: Unifying groups and clusters of galaxies_ , _Astronomy and Astrophysics_ 383 (2002) 450 [astro-ph/0106456]. * [81] N. Kaiser, _Evolution of Clusters of Galaxies_ , _Astrophysical Journal_ 383 (1991) 104. * [82] A. E. Evrard and J. P. Henry, _Expectations for X-Ray Cluster Observations by the ROSAT Satellite_ , _Astrophysical Journal_ 383 (1991) 95. * [83] T. J. Ponman, D. B. Cannon and J. F. Navarro, _The thermal imprint of galaxy formation on X-ray clusters_ , _Nature_ 397 (1999) 135 [astro-ph/9810359]. * [84] A. Fialkov and A. Loeb, _Precise Measurement of the Reionization Optical Depth from the Global 21 cm Signal Accounting for Cosmic Heating_ , _Astrophysical Journal_ 821 (2016) 59 [1601.03058]. * [85] J. D. Bowman, A. E. E. Rogers, R. A. Monsalve, T. J. Mozdzen and N. Mahesh, _An absorption profile centred at 78 megahertz in the sky-averaged spectrum_ , _Nature_ 555 (2018) 67 [1810.05912]. * [86] A. Fialkov, R. Barkana and A. Cohen, _Constraining Baryon-Dark-Matter Scattering with the Cosmic Dawn 21-cm Signal_ , _Physical Review Letters_ 121 (2018) 011101 [1802.10577]. * [87] A. Ewall-Wice, T. C. Chang, J. Lazio, O. Doré, M. Seiffert and R. A. Monsalve, _Modeling the Radio Background from the First Black Holes at Cosmic Dawn: Implications for the 21 cm Absorption Amplitude_ , _Astrophysical Journal_ 868 (2018) 63 [1803.01815]. * [88] D. J. Fixsen, A. Kogut, S. Levin, M. Limon, P. Lubin, P. Mirel et al., _ARCADE 2 Measurement of the Absolute Sky Brightness at 3-90 GHz_ , _Astrophysical Journal_ 734 (2011) 5 [0901.0555]. * [89] J. Singal, J. Haider, M. Ajello, D. R. Ballantyne, E. Bunn, J. Condon et al., _The Radio Synchrotron Background: Conference Summary and Report_ , _Publications of the ASP_ 130 (2018) 036001 [1711.09979]. * [90] C. Feng and G. Holder, _Enhanced Global Signal of Neutral Hydrogen Due to Excess Radiation at Cosmic Dawn_ , _Astrophysical Journal, Letters_ 858 (2018) L17 [1802.07432]. * [91] A. Merloni, S. Heinz and T. di Matteo, _A Fundamental Plane of black hole activity_ , _Monthly Notices of the RAS_ 345 (2003) 1057 [astro-ph/0305261]. * [92] R. Wang, X.-B. Wu and M.-Z. Kong, _The Black Hole Fundamental Plane from a Uniform Sample of Radio and X-Ray-emitting Broad-Line AGNs_ , _Astrophysical Journal_ 645 (2006) 890 [astro-ph/0603514]. * [93] R. Genzel, F. Eisenhauer and S. Gillessen, _The Galactic Center massive black hole and nuclear star cluster_ , _Reviews of Modern Physics_ 82 (2010) 3121 [1006.0064]. * [94] M. Portail, C. Wegg, O. Gerhard and I. Martinez-Valpuesta, _Made-to-measure models of the Galactic box/peanut bulge: stellar and total mass in the bulge region_ , _Monthly Notices of the RAS_ 448 (2015) 713 [1502.00633]. * [95] S. Takekawa, T. Oka, Y. Iwata, S. Tsujimoto and M. Nomura, _The Fifth Candidate for an Intermediate-mass Black Hole in the Galactic Center_ , _Astrophysical Journal_ 890 (2020) 167 [2002.05173]. * [96] S. Takekawa, T. Oka, Y. Iwata, S. Tsujimoto and M. Nomura, _Indication of Another Intermediate-mass Black Hole in the Galactic Center_ , _Astrophysical Journal, Letters_ 871 (2019) L1 [1812.10733]. * [97] R. Schödel, A. Eckart, C. Iserlohe, R. Genzel and T. Ott, _A Black Hole in the Galactic Center Complex IRS 13E?_ , _Astrophysical Journal Letters_ 625 (2005) L111 [astro-ph/0504474]. * [98] M. Tsuboi, Y. Kitamura, T. Tsutsumi, R. Miyawaki, M. 
Miyoshi and A. Miyazaki, _Rotating ionized gas ring around the Galactic center IRS13E3_ , _Publications of the Astronomical Society Japan_ 71 (2019) 105 [1907.12311]. * [99] M. P. Muno, F. E. Bauer, F. K. Baganoff, R. M. Bandyopadhyay, G. C. Bower, W. N. Brandt et al., _A Catalog of X-Ray Point Sources from Two Megaseconds of Chandra Observations of the Galactic Center_ , _Astrophysical Journal, Supplement_ 181 (2009) 110 [0809.1105]. * [100] J. Cuadra, S. Nayakshin and Q. D. Wang, _The role of feedback in accretion on low-luminosity AGN: Sgr A* case study_ , _Monthly Notices of the RAS_ 450 (2015) 277 [1503.02745]. * [101] A. Hektor, G. Hütsi and M. Raidal, _Constraints on primordial black hole dark matter from Galactic center X-ray observations_ , _Astronomy and Astrophysics_ 618 (2018) A139 [1805.06513]. * [102] M. Revnivtsev, S. Sazonov, E. Churazov, W. Forman, A. Vikhlinin and R. Sunyaev, _Discrete sources as the origin of the Galactic X-ray ridge emission_ , _Nature_ 458 (2009) 1142 [0904.4649]. * [103] K. Ferrière, W. Gillard and P. Jean, _Spatial distribution of interstellar gas in the innermost 3 kpc of our galaxy_ , _Astronomy and Astrophysics_ 467 (2007) 611 [astro-ph/0702532]. * [104] E. Valenti, M. Zoccali, A. Mucciarelli, O. A. Gonzalez, F. Surot, D. Minniti et al., _The central velocity dispersion of the Milky Way bulge_ , _Astronomy and Astrophysics_ 616 (2018) A83 [1805.00275]. * [105] P. M. W. Kalberla, J. Kerp and U. Haud, _The Velocity Dispersion of Galactic Dark Matter_ , vol. 276 of _Astronomical Society of the Pacific Conference Series_ , p. 453. 2002. * [106] K. Nandra, D. Barret, X. Barcons, A. Fabian, J.-W. den Herder, L. Piro et al., _The Hot and Energetic Universe: A White Paper presenting the science theme motivating the Athena+ mission_ , _arXiv e-prints_ (2013) arXiv:1306.2307 [1306.2307]. * [107] D. A. Schwartz, A. Vikhlinin, H. Tananbaum, M. Freeman, G. Tremblay, E. D. Schwartz et al., _The Lynx X-ray Observatory: revealing the invisible universe_ , in _Proc. SPIE_ , vol. 11118 of _Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series_ , p. 111180K, Sept., 2019, DOI. * [108] P. B. Ivanov, V. N. Lukash, S. V. Pilipenko and M. S. Pshirkov, _Search for isolated Galactic Centre stellar mass black holes in the IR and sub-mm range_ , _Monthly Notices of the RAS_ 489 (2019) 2038 [1905.04923]. * [109] A. Kashlinsky, Y. Ali-Haïmoud, S. Clesse, J. Garcia-Bellido, L. Amendola, L. Wyrzykowski et al., _Electromagnetic probes of primordial black holes as dark matter_ , _Bulletin of the AAS_ 51 (2019) 51 [1903.04424]. * [110] A. Kashlinsky, R. G. Arendt, N. Cappelluti, A. Finoguenov, G. Hasinger, K. Helgason et al., _Probing the Cross-power of Unresolved Cosmic Infrared and X-Ray Backgrounds with Upcoming Space Missions_ , _Astrophysical Journal, Letters_ 871 (2019) L6 [1812.01535]. * [111] E. L. Wright, _A Cosmology Calculator for the World Wide Web_ , _Publications of the ASP_ 118 (2006) 1711 [astro-ph/0609593].
2024-09-04T02:54:59.069368
2020-03-11T08:05:02
2003.05151
{ "authors": "Jukka Ruohonen and Kalle Hjerppe", "full_text_license": null, "license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/", "provenance": "arxiv-papers-0000.json.gz:26155", "submitter": "Jukka Ruohonen", "url": "https://arxiv.org/abs/2003.05151" }
arxiv-papers
Department of Future Technologies, University of Turku, Turku, Finland # Predicting the Amount of GDPR Fines Jukka Ruohonen Kalle Hjerppe {juanruo<EMAIL_ADDRESS> ###### Abstract The General Data Protection Regulation (GDPR) was enforced in 2018. After this enforcement, many fines have already been imposed by national data protection authorities in the European Union (EU). This paper examines the individual GDPR articles referenced in the enforcement decisions, as well as predicts the amount of enforcement fines with available meta-data and text mining features extracted from the enforcement decision documents. According to the results, articles related to the general principles, lawfulness, and information security have been the most frequently referenced ones. Although the amount of fines imposed varies across the articles referenced, these three particular articles do not stand out. Furthermore, good predictions are attainable even with simple machine learning techniques for regression analysis. Basic meta-data (such as the articles referenced and the country of origin) yields slightly better performance compared to the text mining features. ###### Keywords: Text mining · Legal mining · Data protection · Law enforcement ## 1 Introduction Data protection has a long history in the EU. In particular, the GDPR repealed the earlier Directive 95/46/EC. Although this directive laid down much of the legal groundwork for EU-wide data protection and privacy, its national adaptations, legal interpretations, and enforcement varied both across the member states and different EU institutions [10]. In short: it was a paper tiger. In contrast, Regulation (EU) 2016/679, the GDPR, is a regulation; it is binding throughout the EU with only a minimal space for national adaptations. In practice, only a few Articles (A) in the GDPR provide some but limited room for national maneuvering; these include A6 with respect to relaxation in terms of other legal obligations or public interests, A9 in terms of sensitive data, and A10 regarding criminal matters. Thus, in general, this particular legislation should be interpreted and enforced uniformly throughout the European Union by national data protection authorities whose formal powers are defined in A58. In practice, however, already the resources and thus the actual power for enforcement vary across the member states [1, 7]. Coupled with a lack of previous research on the enforcement of the GDPR, this variance provides a motivation for the present work to examine the recent enforcement fines imposed according to the conditions specified in A83. In addition, the work is motivated by a tangential question: is it also possible to predict these fines by machine learning methods? To answer the question, the paper uses meta-data and text mining features extracted from the decision documents released by the national authorities. As such, only black-box predictions are sought; the goal is not to make any legal interpretations whatsoever. Nevertheless, the answer provided still establishes a solid contribution—especially when considering that the paper is presumably the very first to even examine the GDPR fines. As is discussed in Section 2, the black-box approach also places the paper into a specific branch of existing research dealing with legal documents. This section also refines the question into two more specific research questions.
Afterwards, the structure is straightforward: the dataset and methods are elaborated in Sections 3 and 4, results are presented in Section 5, and conclusions follow in Section 6. As will be noted in the final section, there are also some lessons that should not be learned from this work. ## 2 Background Legal mining—for lack of a better term—has emerged in recent years as a promising but at times highly contested interdisciplinary field that uses machine learning techniques to analyze various aspects related to law [8]. Although the concrete application domains vary, case law and court cases are the prime examples already because these constitute the traditional kernel of legal scholarship. Within this kernel, existing machine learning applications range from the classification of judges’ ideological positions [12], which may be illegal in some European countries [3], to the prediction of decisions of the European Court of Human Rights [16, 17]. These examples convey the traditional functions of applied machine learning: exploratory data mining and the prediction of the future. There is also another closely related application domain. Again for lack of a better term, data extraction could be a label for this domain: by exploiting the nature of law as an art of persuasion [8], the domain uses distinct information retrieval techniques to extract and quantify textual data from legal documents into structured collections with a predefined logic and semantics [2, 24, 28]. To gain a hint about the extraction, one might consider a legal document to contain some facts, rights, obligations, and prohibitions, statements and modalities about these, and so forth. Although the two application domains are complementary in many respects, the underlying rationales exhibit some notable differences. Oftentimes, the legal mining domain is motivated by a traditional rationale for empirical social science research: to better understand trends and patterns in lawmaking and law enforcement; to contrast these with legal philosophies and theories; and so forth. This rationale extends to public administration: machine learning may ease the systematic archiving of legal documents and the finding of relevant documents, and, therefore, it may also reduce administrative costs [4]. These administrative aspects reflect the goal of building “systems that assist in decision-making”, whereas the predictive legal mining applications seek to build “systems that make decision” [21]. Although the data extraction domain can be motivated by the same administrative rationale, providing data to predictive systems is seldom the intention behind the extraction. Instead, there is a further rationale in this domain: to extract requirements for software and systems in order to comply with the laws from which a given extraction is done [24]. Driven by the genuine interest to facilitate collaboration between lawyers and engineers in order to build law-compliant software and systems [26], this rationale has been particularly prevalent in the contexts of data protection and privacy. For instance, previous work has been done to extract requirements from the Health Insurance Portability and Accountability Act in the United States [2]. Against this backdrop, it is no real surprise that data extraction has been applied also for laws enacted in the EU. While there is previous work for identifying requirements from the GDPR manually [13], there also exist more systematic data extraction approaches [25].
However, neither domain has addressed the enforcement of this EU-wide regulation. In fact, a reasonably comprehensive literature search indicates no previous empirical research on the GDPR’s enforcement. Given this pronounced gap in the existing literature, this paper sets out to examine the following two Questions (Q) regarding the enforcement fines: $\textmd{Q}_{1}$: (i) Which GDPR articles have been most often referenced in the recent enforcement cases, and (ii) do the enforcement fines vary across these articles? $\textmd{Q}_{2}$: How well can the recent GDPR fines be predicted in terms of basic available (i) meta-data and (ii) textual traits derived from the enforcement decisions? These two questions place the present work into the legal mining domain. Also the underlying rationales are transferable. For instance, an answer to $\textmd{Q}_{1}$ helps to understand which aspects of the GDPR have been actively enforced during the early roll-out of the regulation. Also $\textmd{Q}_{2}$ carries a practical motivation: by knowing whether the penalties are predictable by machine learning techniques, a starting point is available for providing further insights in different practical scenarios. These scenarios range from the automated archival of enforcement decisions and the designation of preventive measures to litigation preparations. However, it is important to remark that the GDPR’s enforcement is done by national data protection authorities. Although the focus on public administration is maintained nevertheless, documents about the enforcement decisions reached by these authorities should not be strictly equated to law-like legal documents. This point provides an impetus to move forward by elaborating the dataset used. ## 3 Data The dataset is based on a GDPR enforcement tracker that archives the fines and penalties imposed by the European data protection authorities [5]. This tracker is maintained by an international law firm for archiving many of the known enforcement cases. Each case is accompanied by meta-data supplied by the firm as well as a link to the corresponding decision from a national authority. In addition to potentially missing cases due to the lack of publicly available information, the archival material is unfortunately incomplete in many respects. The reason originates from the incoherent reporting practices of the European data protection authorities. Therefore, all cases were obtained from the tracker, but the following four steps were followed to construct a sample for the analysis: 1. To maintain coherence between $\textmd{Q}_{1}$ and $\textmd{Q}_{2}$, only those cases were included that had both meta-data and links to the decisions available. In terms of the former, some cases lacked meta-data about the fines imposed, the particular GDPR articles referenced in the decisions, and even links to the decisions. 2. To increase the quality of the sample, only those cases were included that were accompanied by more or less formal documents supplied on the official websites of the data protection authorities. Thus, those cases are excluded whose archival material is based on online media articles, excerpts collected from annual reports released by the authorities, and related informal sources. 3. If two or more cases were referenced with the same decision, only one decision document was included but the associated meta-data was unified into a single case by merging the article references and totaling the fines imposed. 4.
All national decisions written in languages other than English were translated to English with Google Translate. In general, such machine translation is necessary due to the EU-wide focus of the forthcoming empirical analysis. Given these restrictions, the sample amounts to about 72% of all cases archived in the tracker at the time of data collection. Even with these precautions, it should be stressed that the quality of the sample is hardly optimal. While the accuracy of the meta-data supplied by the firm is taken for granted, there are also some issues with the quality of the publicly available decisions. The authorities in some countries (e.g., Hungary and Spain) have released highly detailed and rigorous documents about their decisions, while some other authorities (e.g., in Germany) have opted for short press releases. Although most of the documents were supplied in the portable document format (PDF) and informally signed by the authorities, it should thus be stressed that the data quality is not consistent across the European countries observed. In addition, it is worth remarking that scanned PDF documents (as used, e.g., in Portugal) had to be excluded due to the automatic data processing. While these data quality issues underline the paper’s exploratory approach, they also carry political and administrative ramifications that are briefly discussed later on in Section 6. ## 4 Methods Descriptive statistics and regression analysis are used for answering the two questions asked. In terms of Question $\textmd{Q}_{1}$, dummy variables for the GDPR articles referenced are simply regressed against the logarithm of the fines imposed by using the conventional analysis-of-variance (ANOVA). As many of the cases reference multiple articles, it should be remarked that these dummy variables are not so-called fixed effects. The methods for answering the second Question $\textmd{Q}_{2}$ require a more thorough elaboration. In addition to (i) the GDPR articles, the meta-data aspects include dummy variables for the following features: (ii) the year of a given enforcement case; (iii) the country in which the given fine was imposed; and (iv) the sector of the violating organization. The last feature was constructed manually by using five categories: individuals, public sector (including associations), telecommunications, private sector (excluding telecommunications), and unknown sector due to the lack of meta-data supplied in the enforcement tracker. In total, these features amount to $49$ dummy variables. The textual aspects for $\textmd{Q}_{2}$ are derived from the translated decisions. Seven steps were used for pre-processing: (a) all translated decision documents were lower-cased and (b) tokenized according to white space and punctuation characters; (c) only alphabetical tokens recognized as English words were included; (d) common and custom stopwords were excluded; (e) tokens with lengths less than three characters or more than twenty characters were excluded; (f) all tokens were lemmatized into their common English dictionary forms; and, finally, (g) those lemmatized tokens were excluded that occurred in the whole decision corpus less than three times. A common natural language processing library [22] was used for this processing together with a common English dictionary [20].
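To make the seven steps concrete, the pipeline can be sketched roughly as follows in Python with NLTK, the library cited above [22]. This is a minimal illustration rather than the exact code used in the paper: the function name is hypothetical, NLTK's word list is only a stand-in for the Hunspell dictionary [20] actually used, and the custom stopwords are passed in as the twelve tokens listed in the next sentence.

```python
from collections import Counter
from nltk.corpus import stopwords, words
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

def preprocess(corpus, custom_stopwords):
    """Steps (a)-(g) as a rough sketch; requires the NLTK data packages
    'punkt', 'words', 'stopwords', and 'wordnet' to be downloaded."""
    english = {w.lower() for w in words.words()}      # stand-in dictionary, step (c)
    stop = set(stopwords.words("english")) | set(custom_stopwords)
    lemmatize = WordNetLemmatizer().lemmatize
    docs = []
    for text in corpus:
        tokens = word_tokenize(text.lower())          # (a) lower-case, (b) tokenize
        tokens = [t for t in tokens if t.isalpha() and t in english]  # (c)
        tokens = [t for t in tokens if t not in stop]                 # (d)
        tokens = [t for t in tokens if 3 <= len(t) <= 20]             # (e)
        docs.append([lemmatize(t) for t in tokens])                   # (f)
    counts = Counter(t for doc in docs for t in doc)
    return [[t for t in doc if counts[t] >= 3] for doc in docs]       # (g)
```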
In addition to the stopwords supplied in the library, the twelve most frequent tokens were used as custom excluded stopwords: data, article, personal, protection, processing, company, authority, regulation, information, case, art, and page. After this pre-processing, the token-based term frequency (TF) and term frequency inverse document frequency (TF-IDF) were calculated from the whole corpus constructed (for the exact formulas used see, e.g., [23]). These common information retrieval statistics are used for evaluating the other part in $\textmd{Q}_{2}$. In general, TF-IDF is often preferred as it penalizes frequently occurring terms. Sparsity is the biggest issue for prediction. There are only $154$ observations but already the meta-data amounts to $49$ independent variables—and the TF and TF-IDF each to $4189$ independent variables. Fortunately, the problem is not uncommon, and well-known solutions exist for addressing it. Genomics is a good example of the application domains riddled with the problem; within this domain, it is not uncommon to operate with datasets containing a few thousand observations and tens of thousands of predictors [6]. Dimension reduction is the generic solution in this domain and other domains with similar problems. Thus, three common dimension reduction methods for regression analysis are used: principal component regression (PCR), partial least squares (PLS), and ridge regression (for a concise overview of these methods see, e.g., [11]). In essence, PCR uses uncorrelated linear combinations as the independent variables; PLS is otherwise similar but also the dependent variable is used for constructing the combinations. Ridge regression is based on a different principle: the dimensionality is reduced by shrinking some of the regression coefficients to zero. In general, all three methods are known to yield relatively similar results in applied work. In terms of practical computation, the number of components for the PCR and PLS models, and the shrinkage parameter for the ridge regression, are optimized during the training while the results are reported with respect to a test set containing 20% of the enforcement cases. Centering (but not scaling) is used prior to the training with a $5$-fold cross-validation. Computation is carried out with the caret package [14] in conjunction with the pls [18] and foba [30] packages. Although root-mean-square errors (RMSEs) are used for optimizing the training, the results are summarized with mean absolute errors (MAEs) due to their straightforward interpretability. These are defined as the arithmetic means of the absolute differences between the observed and predicted fines in the test set. ## 5 Results The GDPR fines imposed vary greatly. As can be seen from Fig. 1, a range from about $e^{6}$ euros to $e^{12}$ euros captures the majority of the enforcement fines observed. This range corresponds roughly to fines between about four hundred and $163$ thousand euros. That said, the distribution has a fairly long tail; also a few large, multi-million euro fines are present in the sample. Therefore, the sample cannot be considered biased even though the restrictions discussed in Section 3 exclude some of the largest enforcement cases, including the announcements about the intention to fine British Airways and Marriott International by the Information Commissioner’s Office in the United Kingdom.
Although these two excluded cases are—at least at the time of writing—preliminary announcements, they are still illuminating in the sense that both were about large-scale data breaches.
Figure 1: Enforcement Fines in the Sample
However, the GDPR’s corresponding A32 for information security has not been the most frequently referenced article in the recent enforcement cases. Instead, A5 and A6, which address the general principles and lawfulness of personal data processing, have clearly been the most referenced individual articles, as can be seen from Fig. 2. These two articles account for as much as 87% of all $252$ references made in the $154$ enforcement cases. More than six references have been made to A13 (informing obligations to data subjects), A15 (right to access), A21 (right to object), and A17 (right to erasure). These references indicate that enforcement has been active also with respect to the rights granted by the GDPR to individual data subjects. Furthermore, less frequent references have been made in the decisions to numerous other articles. These include the obligations to designate data protection officers (A37), conduct impact assessments (A35), and consult supervisory authorities (A36), to name three examples. While the principles, lawfulness, and information security account for the majority, the less frequent but still visible references to more specific articles hint that the regulation’s whole scope is slowly being enforced by the European authorities.
Figure 2: Referenced GDPR Articles in the Enforcement Cases
Turning to the second part of $\textmd{Q}_{1}$, the regression coefficients from the log-linear ANOVA model are visualized in Fig. 3 (the intercept is present in the model but not shown in the figure, and A36 is omitted as the single reference made to the article corresponds with the single reference made to A35 in the same decision; the dummy variable for A35 thus captures the effect of both articles). As can be seen, the confidence intervals (CIs) are quite wide for the articles referenced only infrequently, and only six coefficients are statistically significant at the conventional threshold. Thus, some care is required for interpretation.
Figure 3: Enforcement Fines Across Articles (logarithm, ANOVA, 95% CIs)
When looking at the coefficients with relatively tight CIs, it is evident that variation is present but the magnitude of this variation is not substantial. Most of the coefficients remain in the range $[-5,5]$. However, together all the references do yield a decent model; an $F$-test is statistically significant and the coefficient of determination is large ($R^{2}\simeq 0.44$). Putting aside the statistical insignificance, it is also interesting to observe that some of the coefficients have negative signs, meaning that some references indicate smaller fines compared to the average. Among these are the conditions for consent (A7), sensitive data (A9), transparency (A12), and informing (A13), as well as the already noted right to access (A15), proper notifications about data breaches (A33), and the powers granted for the supervisory authorities (A58). Finally, the coefficient ($1.52$) for the information security article (A32) is statistically significant but does not stand out in terms of magnitude. When compared to cases without a reference to this article, only about $1.5\%$ higher fines have been imposed in cases referencing A32.
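Before turning to the prediction results, the two modeling steps described in Section 4 can be sketched in Python. This is only a rough stand-in, assuming statsmodels and scikit-learn in place of the R packages (caret, pls, and foba) actually used; PCR is omitted for brevity, the function and variable names are illustrative, and caret's exact cross-validation behavior is only approximated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

def anova_q1(article_dummies: pd.DataFrame, fines: np.ndarray):
    """Q1: regress log-fines on article dummies (the model behind Fig. 3)."""
    X = sm.add_constant(article_dummies)
    return sm.OLS(np.log(fines), X).fit()

def predict_q2(features: pd.DataFrame, fines: np.ndarray, seed: int = 0):
    """Q2: RMSE-optimized tuning with 5-fold CV on an 80% training split,
    then MAEs on the 20% test set for the log-transformed fines."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, np.log(fines), test_size=0.2, random_state=seed)
    scoring = "neg_root_mean_squared_error"
    models = {
        "pls": GridSearchCV(PLSRegression(scale=False),
                            {"n_components": range(1, 11)},
                            cv=5, scoring=scoring),
        "ridge": GridSearchCV(Ridge(),
                              {"alpha": np.logspace(-3, 3, 13)},
                              cv=5, scoring=scoring),
    }
    maes = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        # MAE: mean absolute difference between observed and predicted fines.
        maes[name] = float(np.mean(np.abs(y_te - np.ravel(model.predict(X_te)))))
    return maes
```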
Figure 4: Prediction Performance (logarithm, MAEs)
Figure 5: Observed and Predicted Values in the Test Set
The results regarding $\textmd{Q}_{2}$ are summarized in Fig. 4 (the MAEs for the training refer to the best cross-validated models). Three noteworthy observations can be drawn from this summary. First and foremost, the prediction performance is generally decent: the best-performing cases all yield MAEs roughly between $1.3$ and $1.5$ for the log-transformed fines. These average prediction errors also seem reasonable when taking a closer look at the actual predictions—except for the outlying large fines. Take Fig. 5 as a brief example; the figure displays the observed fines and the predicted fines based on the PLS and ridge regression estimators for the first meta-data model. Even though most of the predicted observations are fairly close to the observed fines, the test set also contains one five million euro fine that is quite severely underestimated by both regression estimators. The underestimations amount to over $246$ thousand euros. That said, when a magnitude is measured in millions, it is a matter of interpretation whether an error measured in hundreds of thousands is large, small, or something else. Second, there are some interesting differences between the regression estimators. In particular, PLS and ridge regression exhibit relatively large differences between training and testing. The explanation relates to the RMSE-based optimization during training. For instance, PCR was estimated with only one component for the first meta-data model and three components for the remaining three models, whereas two components were picked for all four PLS models. Last but not least, the smallest MAE for the test set is produced by ridge regression using only the $49$ meta-data variables. The second and third models containing the TF and TF-IDF variables both perform worse. Furthermore, the fourth model, which contains the meta-data and TF-IDF variables, indicates that the text mining features tend to slightly weaken the predictions. It is also worth remarking that some redundancy is present among the meta-data variables; comparable performance is obtained with only $17$ meta-data variables that are left after prior pre-processing with caret’s nearZeroVar function. All this said, the overall interpretation should be less explicit when considering the practical motivation for $\textmd{Q}_{2}$ noted in Section 2. If only the decision documents are available without any prior work to manually construct the meta-data from these, even the simple text mining features could be used for black-box predictions. ## 6 Conclusion This paper explored two questions. The answers to these can be summarized as follows. First: regarding $\textmd{Q}_{1}$, the articles related to the general principles (A5), lawfulness (A6), and information security (A32) have been most frequently referenced by the national data protection authorities during the early enforcement period observed in this paper. Although the enforcement fines also vary across the various GDPR articles referenced in the authorities’ decisions, the effects of these three articles do not stand out in particular. A good corollary question for further work would be to examine the future evolution of these references; a hypothesis is that the regulation’s enforcement is slowly moving from the principles and lawfulness conditions to more specific elements.
Then: regarding $\textmd{Q}_{2}$, it is possible to obtain decent predictions even with standard machine learning techniques for regression analysis. Basic meta-data (i.e., articles referenced, year of enforcement, country of origin, and industry sector) seems to provide slightly better predictive performance compared to basic text mining features (i.e., TF and TF-IDF) extracted from the decision documents. Yet, even the text mining features seem sufficient for blind black-box predictions. There are also many potential ways to improve the predictions reported, including those related to regression analysis (such as using specific sparse-PLS estimators) and text mining (such as using word embeddings). Data mining techniques (such as topic modeling) could also be used for better understanding the nuances behind the decisions. An alternative path forward would be to extend the specific data extraction approaches discussed in Section 2 to the enforcement decisions. However, the motivation to move forward is undermined by practical problems. As was remarked in Section 3, already the quality of data is a problem of its own. Recently, the enforcement of the GDPR has been fiercely criticized by some public authorities and pundits alike. The reasons are many: a lack of transparency and cooperation between national data protection authorities, diverging legal interpretations, cultural conflicts, the so-called “one-stop-shop” system, old-fashioned information systems and poor data exchange practices, and so on and so forth [27]. The data collection used for the present work testifies on behalf of the criticism: the decision documents released by the national authorities have varied wildly in terms of quality and rigor. Some national authorities have even hidden their decisions from public scrutiny. A paradox is present: although A15 grants a right for data subjects to access their personal data, the same subjects may need to exercise their separate freedom of information rights to obtain cues about decisions reached by national authorities. Four legs good, two legs bad. Finally, it is necessary to briefly point out the bigger issues affecting the legal mining and data extraction domains—and, therefore, also the present work. For one thing, the practical usefulness of legal expert systems has been questioned for a long time. The artificial intelligence hype has not silenced the criticism [15]. Like with the “code is law” notion, which has never existed in reality [19], there are also many philosophical counterarguments against the legal mining and data extraction domains [8, 9, 21]. It is problematic at best to codify the methodology of a scholarly discipline into rigid schemas in order to nurse the methodological requirements of another discipline; legal reasoning is distinct from other types of reasoning exercised in empirical sciences; and so forth. Law is not code. But code is increasingly used to predict law enforcement decisions. The legal mining domain, in particular, is frequently involved with a motivation to build “a system that could predict judicial decisions automatically” but with a provision that there is “no intention of creating a system that could replace judges” [17]. Such system-building leads to another delicate paradox. Namely, the GDPR and related laws (such as Directive 2016/680 for data protection in criminal matters) were also designed to provide certain guards against legal mining and the resulting automated decision-making involving human beings [29].
This paper is not immune to criticism originating from this fundamental paradox. If it is seen as undesirable to build systems for making law enforcement decisions, it should be also seen as undesirable to build systems for automatically fining companies. ### Acknowledgements This research was funded by the Strategic Research Council at the Academy of Finland (grant no. 327391). ## References * [1] Bennett, C.J., Raab, C.D.: Revisiting the Governance of Privacy: Contemporary Policy Instruments in Global Perspective. Regulation & Governance (Published online in September) (2018) * [2] Breaux, T.D., Vail, M.W., Anton, A.I.: Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations. In: Proceedings of the 14th IEEE International Requirements Engineering Conference (RE 2006). pp. 49–58. IEEE, Minneapolis (2006) * [3] Calomme, C.: Why Open Legal Data and Analytics Are Not Without Risks (2020), Centre for IT & IP Law (CiTiP) Blog, KU Leuven, available online in April: https://www.law.kuleuven.be/citip/blog/why-open-legal-data-and-analytics-are-not-without-risks/ * [4] Chhatwal, R., Huber-Fliflet, N., Keeling, R., Zhang, J., Zhao, H.: Empirical Evaluations of Active Learning Strategies in Legal Document Review. In: Proceedings of the IEEE International Conference on Big Data (Big Data 2017). pp. 1428–1437. IEEE, Boston (2017) * [5] CMS Law.Tax: GDPR Enforcement Tracker (2020), Data obtained in 24 February from: https://enforcementtracker.com/ * [6] Colombani, C., Croiseau, P., Fritz, S., Guillaume, F., Legarra, A., Ducrocq, V., Robert-Granié, C.: A Comparison of Partial Least Squares (PLS) and Sparse PLS Regressions in Genomic Selection in French Dairy Cattle. Journal of Dairy Science 95(4), 2120–2131 (2012) * [7] Custers, B., Dechesne, F., Sears, A.M., Tani, T., van der Hof, S.: A Comparison of Data Protection Legislation and Policies Across the EU. Computer Law & Security Review 34(2), 234–243 (2018) * [8] Dyevre, A., Wijtvliet, W., Lampach, N.: The Future of European Legal Scholarship: Empirical Jurisprudence. Maastricht Journal of European and Comparative Law 26(3), 348–371 (2019) * [9] Franklin, J.: Discussion Paper: How Much of Commonsense and Legal Reasoning is Formalizable? A Review of Conceptual Obstacles. Law, Probability and Risk 11(2–3), 225–245 (2012) * [10] Fuster, G.G.: The Emergence of Personal Data Protection as a Fundamental Right of the EU. Springer, Cham (2014) * [11] Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York (2011) * [12] Hausladen, C.I., Schubert, M.H., Ash, E.: Text Classification of Ideological Direction in Judicial Opinions. International Review of Law and Economics 62, 105903 (2020) * [13] Hjerppe, K., Ruohonen, J., Leppänen, V.: The General Data Protection Regulation: Requirements, Architectures, and Constraints. In: Proceedings of the 27th IEEE International Requirements Engineering Conference (RE 2019). pp. 265–275. IEEE, Jeju Island (2019) * [14] Kuhn, M., et al.: caret: Classification and Regression Training (2020), R package version 6.0-85, available online in February: https://cran.r-project.org/web/packages/caret/ * [15] Leith, P.: The Rise and Fall of the Legal Expert System. International Review of Law, Computers & Technology 30(3), 94–106 (2016) * [16] Liu, Z., Chen, H.: A Predictive Performance Comparison of Machine Learning Models for Judicial Cases. 
In: Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI 2017). pp. 1–6. IEEE, Honolulu (2017) * [17] Medvedeva, M., Vols, M., Wieling, M.: Using Machine Learning to Predict Decisions of the European Court of Human Rights. Artificial Intelligence and Law (Published online in June), 1–30 (2019) * [18] Mevik, B.H., Wehrens, R.: The pls Package: Principal Component and Partial Least Squares Regression in R. Journal of Statistical Software 18(2), 1–23 (2007) * [19] Mueller, M., Badiei, F.: Requiem for a Dream: On Advancing Human Rights via Internet Architecture. Policy and Internet 11(1), 61–83 (2019) * [20] Németh, L., Hendricks, K., McNamara, C., et al.: Hunspell (2020), Version 1.7.0, available online in February https://github.com/hunspell/hunspell * [21] Nissan, E.: Computer Tools and Techniques for Lawyers and the Judiciary. Cybernetics and Systems 49(4), 201–233 (2018) * [22] The Natural Language Toolkit (NLTK): Version 3.4.5 (2019), available online in January 2020: http://www.nltk.org * [23] Ruohonen, J., Leppänen, V.: Toward Validation of Textual Information Retrieval Techniques for Software Weaknesses. In: Elloumi, M., Granitzer, M., Hameurlain, A., Seifert, C., Stein, B., Tjoa, A.M., Wagner, R. (eds.) Proceedings of the 29th International Conference on Database and Expert Systems Applications (DEXA 2018), Communications in Computer and Information Science (Volume 903). pp. 265–277. Springer, Regensburg (2018) * [24] Sleimi, A., Ceci, M., Sannier, N., Sabetzadeh, M., Briand, L., Dann, J.: A Query System for Extracting Requirements-Related Information from Legal Texts. In: Proceedings of the IEEE 27th International Requirements Engineering Conference (RE 2019). pp. 319–329. IEEE, Jeju Island (2019) * [25] Tamburri, D.A.: Design Principles for the General Data Protection Regulation (GDPR): A Formal Concept Analysis and Its Evaluation. Information Systems 91, 101469 (2020) * [26] van Dijk, N., Tanas, A., Rommetveit, K., Raab, C.: Right Engineering? The Redesign of Privacy and Personal Data Protection. International Review of Law, Computers & Technology 32(2–3), 230–256 (2018) * [27] Vinocur, N.: ‘We Have a Huge Problem’: European Tech Regulator Despairs Over Lack of Enforcement: The World’s Toughest Privacy Law Proves Toothless in the Eyes of Many Critics (2019), Politico. Available online in February 2020: https://www.politico.com/news/2019/12/27/europe-gdpr-technology-regulation-089605 * [28] Wagh, R.S., Anand, D.: Legal Document Similarity: A Multi-Criteria Decision-Making Perspective. PeerJ Computer Science 6, e262 (2020) * [29] Završnik, A.: Criminal Justice, Artificial Intelligence Systems, and Human Rights. ERA Forum 20, 567–583 (2020) * [30] Zhang, T.: foba: Greedy Variable Selection (2008), R package version 0.1, available online in February: https://cran.r-project.org/web/packages/foba/
2024-09-04T02:54:59.080131
2020-03-11T09:12:44
2003.05174
{ "authors": "Jose Blanchet, Renyuan Xu and Zhengyuan Zhou", "full_text_license": null, "license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/", "provenance": "arxiv-papers-0000.json.gz:26156", "submitter": "Renyuan Xu", "url": "https://arxiv.org/abs/2003.05174" }
arxiv-papers
# Delay-Adaptive Learning in Generalized Linear Contextual Bandits Jose Blanchet Department of Management Science and Engineering, Stanford University, USA. Email<EMAIL_ADDRESS>Renyuan Xu Mathematical Institute, University of Oxford, UK. Email<EMAIL_ADDRESS>Zhengyuan Zhou Stern School of Business, New York University, USA. Email<EMAIL_ADDRESS> ###### Abstract In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed. Instead, rewards are available to the decision maker only after some delay, which is unknown and stochastic. We study the performance of two well-known algorithms adapted to this delayed setting: one based on upper confidence bounds, and the other based on Thompson sampling. We describe modifications on how these two algorithms should be adapted to handle delays and give regret characterizations for both algorithms. Our results contribute to the broad landscape of contextual bandits literature by establishing that both algorithms can be made to be robust to delays, thereby helping clarify and reaffirm the empirical success of these two algorithms, which are widely deployed in modern recommendation engines. ## 1 Introduction The growing availability of user-specific data has welcomed the exciting era of personalized recommendation, a paradigm that uncovers the heterogeneity across individuals and provides tailored service decisions that lead to improved outcomes. Such heterogeneity is ubiquitous across a variety of application domains (including online advertising, medical treatment assignment, product/news recommendation ([LCLS2010], [BCN2012], [chapelle2014], [bastani2015online], [SBF2017])) and manifests itself as different individuals responding differently to the recommended items. Rising to this opportunity, contextual bandits ([besbes2009dynamic, rigollet2010nonparametric, goldenshluger2011note, hsu2014taming, agrawal2016efficient]) have emerged as the predominant mathematical formalism that provides an elegant and powerful formulation: its three core components, the features (representing individual characteristics), the actions (representing the recommendation), and the rewards (representing the observed feedback), capture the salient aspects of the problem and provide fertile ground for developing algorithms that balance exploring and exploiting users’ heterogeneity. As such, the last decade has witnessed extensive research efforts in developing effective and efficient contextual bandits algorithms. In particular, two types of algorithms–upper confidence bounds (UCB) based algorithms ([LCLS2010, FCGS2010, chu2011contextual, JBNW2017, LLZ2017]) and Thompson sampling (TS) based algorithms ([AG2013a, AG2013b, RV2014, russo2016information, agrawal2017thompson])–stand out from this flourishing and fruitful line of work: their theoretical guarantees have been analyzed in many settings, often yielding (near-)optimal regret bounds; their empirical performance has been thoroughly validated, often providing insights into their practical efficacy (including the consensus that TS based algorithms, although sometimes suffering from intensive computation for posterior updates, are generally more effective than their UCB counterparts, whose performance can be sensitive to hyper-parameter tuning). To a large extent, these two families of algorithms have been widely deployed in many modern recommendation engines.
However, a key assumption therein–in both the algorithm design and the analyses–is that the reward is immediately available after an action is taken. Although useful as a first-step abstraction, this is a stringent requirement that is rarely satisfied in practice, particularly in large-scale systems where the time-scale of a single recommendation is significantly smaller than the time-scale of a user’s feedback. For instance, in E-commerce, a recommendation is typically made by the engine in milliseconds, whereas a user’s response time (i.e., to buy a product or to convert) is typically much larger, ranging from hours to days, sometimes even to weeks. For instance, a thorough empirical study in [chapelle2014] found that more than 10% of the conversions in Criteo (a real-time bidding company) were at least 2 weeks old. Furthermore, [chapelle2014] found that the delay distribution from the company’s data follows the exponential distribution closely and hence does have heavy tails. Similarly, in clinical trials, it is infeasible to immediately observe and hence take into account the medical outcome after applying a treatment to a patient–collecting medical feedback can be a time-consuming and often random process; and in general, it is common to have applied trial treatments to a large number of patients, with individual medical outcomes only available much later at different, random points in time. In both the E-commerce ([KCW2001, chapelle2014]) and the clinical trials cases ([CC2011]), a random and often significantly delayed reward is present. Further, such delays empirically often follow a heavy tail distribution, and hence a priori can have a substantially negative impact on the learning performance. Consequently, to understand such impact of delays, adjustments in classical formulations must be made, both at the algorithmic level and at the analysis level. ### 1.1 Related Work In the past five years or so, the problem of learning in bandits with delays has received increasing attention and has been studied in several different settings in the existing literature, where most of the efforts have concentrated on the multi-armed bandits setting, including both the stochastic multi-armed bandits and the adversarial multi-armed bandits. For stochastic multi-armed bandits with delays, [JGS2013] show a regret bound $O(\log T+\mathbb{E}[\tau]+\sqrt{\log T\mathbb{E}[\tau]})$ where $\mathbb{E}[\tau]$ is the mean of the iid delays. [DKVB2014] consider Gaussian Process bandits with a bounded stochastic delay. [MLBP2015] follow the work of [JGS2013] and propose a queue-based multi-armed bandit algorithm to handle delays. [PASG2017] match the same regret bound as in [JGS2013] when feedback is not only delayed but also anonymous. For adversarial multi-armed bandits with delays, [NAGS2010] establish the regret bound of $\mathbb{E}[R_{T}]\leq O(\tau_{\text{const}})\times\mathbb{E}[R^{\prime}_{T}(\frac{T}{\tau_{\text{const}}})]$ for Markov decision processes, where $\tau_{\text{const}}$ is the constant delay and $R^{\prime}_{T}$ is the regret without delays. [CGM2019] consider adversarial bandits with fixed constant delays on the network graph, with a minimax regret of the order $\tilde{O}(\sqrt{(K+\tau_{\text{const}})T})$, where $K$ is the number of arms. Another line of work related to adversarial multi-armed bandits is adversarial learning with full information, where the rewards for all arms are observed.
Different variants of this problem in the delayed setting have been studied by [WO2002], [mesterharm2005], [QK2015] and [GST2016].

On the other hand, learning in contextual bandits with delays is much less explored. [JGS2013] consider learning on adversarial contextual bandits with delays and establish an expected regret bound $\mathbb{E}\left[R_{T}\right]\leq(1+\mathbb{E}[M_{T}^{*}])\times\mathbb{E}\left[R^{\prime}_{T}\left(\frac{T}{1+\mathbb{E}[M_{T}^{*}]}\right)\right]$ by using a black-box algorithm, where $M_{T}^{*}$ is the running maximum number of delays up to round $T$. [DHKKLRZ2011] consider stochastic contextual bandits with a fixed constant delay. The reward model they consider is general (i.e., not necessarily parametric); however, they require the policy class to be finite. In particular, they obtain the regret bound $O(\sqrt{K\log N}(\tau_{\text{const}}+\sqrt{T}))$, where $N$ is the number of policies and $\tau_{\text{const}}$ is again the fixed constant delay.

Finally, we also note that there is a growing literature on offline contextual bandits (for a highly incomplete list, see [dudik2011doubly, swaminathan2015batch, athey2017efficient, zhou2018offline, kitagawa2018should, off-policy-evaluation-slate-recommendation, deep-learning-logged-bandit-feedback]). This is a setting where all the data has been collected upfront and a policy needs to be learned from this batch data at once. Although it shares the same primitives (contexts, actions and rewards), this problem differs from the online setting in important ways. In particular, the exploration component is absent, and a separate set of challenges arises in the offline case. In this setting, delays would have no impact since all the rewards will have been collected at the end (except perhaps at the tail of the batch).

### 1.2 Our Contributions

In this paper, we consider learning on generalized linear (stochastic) contextual bandits with stochastic unbounded delays. Our contributions are two-fold.

First, we design two delay-adaptive algorithms for generalized linear contextual bandits, one based on UCB, the other based on TS. We refer to the two variants as Delayed UCB (DUCB, given in Algorithm 1) and Delayed TS (DTS, given in Algorithm 2), respectively. DUCB requires a carefully designed delay-adaptive confidence parameter, which depends on how many rewards are missing up to the current time step. In contrast, DTS is a straightforward adaptation that incorporates the delayed rewards as they become available.

Second, we give regret characterizations of both DUCB and DTS under (1) independent stochastic, unbounded delays that can have heavy tails, (2) unbounded Markov delays that can have near-heavy tails (tails that are arbitrarily close to exponential tails), and (3) unbounded delays with any dependency structure that have light (sub-Gaussian) tails. In particular, as a special case of our results, when the delays are iid with mean $\mu_{I}$, we have a high-probability regret bound of $\tilde{O}\left(\left(\sigma_{G}\sqrt{d}+\mu_{I}d+d\right)\sqrt{T}\right)$ on DUCB, where $\sigma_{G}$ is a parameter characterizing the tail bound of the delays and $d$ is the feature dimension. For comparison, the state-of-the-art regret bound of UCB on generalized linear contextual bandits without delays is $\tilde{O}\left(d\sqrt{T}\right)$ ([FCGS2010, LLZ2017]). For DTS, we have a Bayesian regret bound of $\tilde{O}\left(\left(\sigma_{G}\sqrt{d}+\mu_{I}\sqrt{d}+d\right)\sqrt{T}\right)$.
For comparison, the state-of-the-art Bayesian regret bound of TS on generalized linear contextual bandits without delays is $\tilde{O}\left(d\sqrt{T}\right)$ ([RV2014, russo2016information]).

The regret bounds we obtain highlight the dependence on the delays in two ways: one is how much delay is present on average; the other is how heavy the tail of the delay distribution is. Both factors contribute to the degradation of the regret bounds. That the average delay enlarges the regret is intuitive; that the tail influences the regret is because a large delay (at the far right end of a tail), which is more likely under a heavier tail, can hold back the learning for that context significantly, particularly in the early stages when the decision maker is still unsure about what the underlying parameter is. To the best of our knowledge, these regret bounds provide the first theoretical characterizations for generalized linear contextual bandits with large delays. Our results contribute to the broad landscape of the contextual bandits literature by establishing that both algorithms are robust to delays, thereby helping clarify and reaffirm the empirical success of these two algorithms, which are widely deployed in modern recommendation engines.

Some of the initial results appeared in the conference version [zhou2019]. Our work here provides a comprehensive treatment of learning in generalized linear contextual bandits with large delays that incorporates substantially more in-depth inquiries on several fronts. First, we consider heavier-tailed delays that include exponential distributions, whereas [zhou2019] only dealt with light-tailed delays that are either sub-Gaussian or have a finite $(1+q)$-th moment (for some $q>0$). This relaxation is important from both an empirical and a theoretical standpoint. Empirically, as mentioned earlier, the field study in [chapelle2014] found that the delay distribution from the company's data follows the exponential distribution closely, rather than the sub-Gaussian distributions commonly assumed in the bandits literature. Theoretically, establishing guarantees in this larger-delay regime requires us to develop a new (and arguably more elegant) argument, different from that in [zhou2019], which is not applicable here. We explain the technical difficulty in more detail in Section 3.3. Second, the sole focus of [zhou2019] is on adapting and analyzing UCB-based algorithms. However, as mentioned earlier, it is known that Thompson sampling often achieves superior empirical performance, despite the fact that its theoretical bounds (when no delays are present) may not match exactly those of the UCB algorithms. Furthermore, TS-based algorithms do not suffer from hyper-parameter tuning and can effectively incorporate prior information, and can therefore significantly outperform their UCB counterparts when priors are available and correct. Consequently, in this paper, in addition to adapting and analyzing the UCB-based algorithms, we also discuss (in Section 4) the adaptation of TS-based algorithms in the delayed feedback setting and obtain regret bounds that characterize the corresponding performance. Finally, we move beyond the regime of the independent delay setting studied in [zhou2019], and instead consider (in Section 5) the much more general and realistic setting of history-dependent delays. We give regret bounds for both UCB-based and TS-based algorithms, under both the Markov delays assumption and the general stationary delays assumption.
We also highlight, in this unified presentation, the comparison of the various regret bounds as the assumptions on the delays are progressively weakened.

## 2 Problem Setup

In this section, we describe the formulation for learning in generalized linear contextual bandits (GLCB) in the presence of delays. We start by reviewing the basics of generalized linear contextual bandits, followed by a description of the delay model.

Before proceeding, we first fix some notation. For a vector $x\in\mathbb{R}^{d}$, we use $\|x\|$ to denote its $l_{2}$-norm and $x^{\prime}$ its transpose. $\mathbb{B}^{d}:=\{x\in\mathbb{R}^{d}:\|x\|\leq 1\}$ is the unit ball centered at the origin. The weighted $l_{2}$-norm associated with a positive-definite matrix $A$ is defined by $\|x\|_{A}:=\sqrt{x^{\prime}Ax}$. The minimum and maximum singular values of a matrix $A$ are written as $\lambda_{\min}(A)$ and $\|A\|$, respectively. For two symmetric matrices $A$ and $B$ of the same dimensions, $A\succeq B$ means that $A-B$ is positive semi-definite. For a real-valued function $f$, we use $\dot{f}$ and $\ddot{f}$ to denote its first and second derivatives. Finally, $[n]:=\{1,2,\cdots,n\}$.

### 2.1 Generalized Linear Contextual Bandits

#### Decision procedure.

We consider the generalized linear contextual bandits problem with $K$ actions. At each round $t$, the agent observes a context consisting of a set of $K$ feature vectors $x_{t}:=\{x_{t,a}\in\mathbb{R}^{d}\,|\,a\in[K]\}$, which is drawn iid from an unknown distribution $\gamma$ with $\|x_{t,a}\|\leq 1$. Each feature vector $x_{t,a}$ is associated with an unknown stochastic reward $y_{t,a}\in[0,1]$. If the agent selects action $a_{t}$, the associated reward $y_{t,a_{t}}\in[0,1]$ is incurred. In the standard contextual bandits setting, the reward is immediately observed after the decision is made, and the observed reward can be utilized to make the decision in the next round.

Although it is generally understood in the contextual bandits literature, for completeness we briefly discuss the meaning of the above quantities, as well as where they come from. In general, at each round $t$, an individual characterized by $v_{t}$ (a list of characteristics associated with that individual) is drawn from a population and becomes available. When the decision maker applies action $a_{t}$ (one of the available $K$ actions) to this individual, a reward $y_{t}(v_{t},a_{t})$ is obtained: this reward can depend stochastically on both the individual characteristics $v_{t}$ and the selected action $a_{t}$. However, in practice, for both modelling and computational reasons, one often first featurizes the individual characteristics and the actions. In particular, with sufficient generality, one assumes $\mathbf{E}[y_{t}(v_{t},a_{t})\mid v_{t},a_{t}]=g_{\theta}(\phi(v_{t},a_{t}))$, where $g_{\theta}(\cdot)$ is the parametrized mean reward function and $\phi(v_{t},a_{t})$ extracts the features from the given raw individual characteristics $v_{t}$ and action $a_{t}$. In the above formulation, as is standard in the contextual bandits literature, we assume the feature map $\phi(\cdot)$ is known and given, and $x_{t,a}=\phi(v_{t},a)$. If $v_{t}$ is already a vector in Euclidean space, then a common choice for the feature extractor is $\phi(v_{t},a)=[\mathbf{0},\dots,\mathbf{0},v_{t},\mathbf{0},\dots,\mathbf{0}]$: that is, a $Kd$-dimensional vector with all zeros except at the $a$-th block.
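For concreteness, the following minimal sketch (in Python with NumPy; illustrative only, and not part of the original development) implements this block one-hot feature extractor. The function name `phi` and the example values are our own.

```python
import numpy as np

def phi(v: np.ndarray, a: int, K: int) -> np.ndarray:
    """Block one-hot feature extractor: returns a (K*d)-dimensional
    vector that is zero everywhere except the a-th block (0-indexed),
    which holds the raw characteristics v."""
    d = v.shape[0]
    x = np.zeros(K * d)
    x[a * d:(a + 1) * d] = v
    return x

# Example: K = 3 actions, d = 2 raw characteristics.
v_t = np.array([0.5, -0.2])
x_t = np.stack([phi(v_t, a, K=3) for a in range(3)])  # one row per action
```

With this choice, each action occupies a disjoint block of coordinates, so a single parameter vector $\theta\in\mathbb{R}^{Kd}$ encodes one $d$-dimensional parameter per action.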
#### Relationship between reward $Y$ and context $X$.

In terms of the relationship between $Y_{t,a}$ and $X_{t,a}$, we follow the standard generalized linear contextual bandits literature ([FCGS2010, LLZ2017]). Define $\mathcal{H}^{0}_{t}=\{(s,x_{s},a_{s},y_{s,a_{s}}),s\leq t-1\}\cup\{x_{t}\}$ as the information available at the beginning of round $t$. The agent maximizes the cumulative expected rewards over $T$ rounds with information $\mathcal{H}^{0}_{t}$ at each round $t$ ($t\geq 1$). Suppose the agent takes action $a_{t}$ at round $t$. Denote by $X_{t}=x_{t,a_{t}}$ and $Y_{t}=y_{t,a_{t}}$, and assume the conditional distribution of $Y_{t}$ given $X_{t}$ is from the exponential family. Its density is then given by

$\displaystyle\mathbb{P}_{\theta^{*}}(Y_{t}|X_{t})=\exp\left(\frac{Y_{t}X_{t}^{\prime}\theta^{*}-m(X_{t}^{\prime}\theta^{*})}{h(\eta)}+A(Y_{t},\eta)\right).$ (1)

Here, $\theta^{*}$ is an unknown parameter under the frequentist setting; $\eta\in\mathbb{R}^{+}$ is a given parameter; and $A$, $m$ and $h$ are three normalization functions mapping from $\mathbb{R}$ to $\mathbb{R}$. For exponential families, $m$ is infinitely differentiable, with $\dot{m}(X^{\prime}\theta^{*})=\mathbb{E}[Y|X]$ and $\ddot{m}(X^{\prime}\theta^{*})=\mathbb{V}(Y|X)$. Denoting $g(X^{\prime}\theta^{*})=\mathbb{E}[Y|X]$, one can easily verify that $g(x^{\prime}\theta)=x^{\prime}\theta$ for the linear model, $g(x^{\prime}\theta)=\frac{1}{1+\exp(-x^{\prime}\theta)}$ for the logistic model, and $g(x^{\prime}\theta)=\exp(x^{\prime}\theta)$ for the Poisson model. In the generalized linear model (GLM) literature ([NW1972, McCullagh2018]), $g$ is often referred to as the inverse link function.

Note that (1) can be rewritten in the GLCB form

$\displaystyle Y_{t}=g(X_{t}^{\prime}\theta^{*})+\epsilon_{t},$ (2)

where $\{\epsilon_{t},t\in[T]\}$ are independent zero-mean noise terms satisfying $\mathbb{E}[\epsilon_{t}|{\mathcal{H}^{0}_{t}}]=0$. Data generated from (1) automatically satisfies the sub-Gaussian condition

$\displaystyle\mathbb{E}\left[\exp({\lambda\epsilon_{t}})|{\mathcal{H}^{0}_{t}}\right]\leq\exp\left({\frac{\lambda^{2}\hat{\sigma}^{2}}{2}}\right).$ (3)

Throughout the paper, we denote by $\hat{\sigma}>0$ the sub-Gaussian parameter of the noise $\epsilon_{t}$.

###### Remark 1

In this paper, we focus on the GLM with exponential family (1). In general, one can work with model (2) under the sub-Gaussian assumption (3). Our analysis still holds by considering the maximum quasi-likelihood estimator for (2). See more explanations in Section 3.1.
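To illustrate, here is a small sketch (illustrative Python, not from the paper) of the three inverse link functions above, together with reward draws consistent with model (2); the specific noise choices are standard examples of ours, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse link functions g with E[Y | X] = g(X' theta*):
g_linear = lambda z: z                           # linear model
g_logistic = lambda z: 1.0 / (1.0 + np.exp(-z))  # logistic model
g_poisson = lambda z: np.exp(z)                  # Poisson model

def sample_reward(x, theta_star, model="logistic"):
    """Draw Y with E[Y | X = x] = g(x' theta*), in the spirit of (1)-(2)."""
    z = x @ theta_star
    if model == "linear":
        return z + rng.normal(scale=0.1)        # Gaussian noise (illustrative)
    if model == "logistic":
        return rng.binomial(1, g_logistic(z))   # Bernoulli reward in {0, 1}
    if model == "poisson":
        return rng.poisson(g_poisson(z))
    raise ValueError(model)
```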
### 2.2 The Delay Model

Unlike the traditional setting where each reward is immediately observed, here we consider the case where stochastic and unbounded delays are present in revealing the rewards. Let $T$ be the number of total rounds. At round $t$, after the agent takes action $a_{t}$, the reward $y_{t,a_{t}}$ may not be available immediately. Instead, it is observed at the end of round $t+D_{t}$, where $D_{t}$ is the delay at time $t$. We assume $D_{t}$ is a non-negative random variable independent of $\{D_{s}\}_{s\leq t-1}$ and $\{x_{s},y_{s,a_{s}},a_{s}\}_{s\leq t}$. First, we define the information available to the agent at each round.

#### Information structure under delays.

At any round $t$, if $D_{s}+s\leq t-1$ (the reward generated in round $s$ is available at the beginning of round $t$), then we call $(s,x_{s},y_{s,a_{s}},a_{s})$ a complete information tuple at round $t$. If $D_{s}+s\geq t$, we call $(s,x_{s},a_{s})$ an incomplete information tuple at the beginning of round $t$. Define

$\mathcal{H}_{t}=\left\{(s,x_{s},y_{s,a_{s}},a_{s})\,\,|\,\,s+D_{s}\leq t-1\right\}\cup\left\{(s,x_{s},a_{s})\,\,|\,\,s\leq t-1,s+D_{s}\geq t\right\}\cup\left\{x_{t}\right\};$

then $\mathcal{H}_{t}$ is the information (filtration) available at the beginning of round $t$ for the agent to choose action $a_{t}$. In other words, $\mathcal{H}_{t}$ contains all the incomplete and complete information tuples up to round $t-1$ and the context vector $x_{t}$ at round $t$. Moreover, define

$\displaystyle\mathcal{F}_{t}=\{(s,x_{s},a_{s},y_{s,a_{s}})\,\,|\,\,s+D_{s}\leq t\}.$ (4)

Then $\mathcal{F}_{t}$ contains all the complete information tuples $(s,x_{s},a_{s},y_{s,a_{s}})$ up to the end of round $t$. Denote $\mathcal{I}_{t}=\mathcal{F}_{t}-\mathcal{F}_{t-1}$; then $\mathcal{I}_{t}$ contains the new complete information tuples revealed at the end of round $t$.

#### Performance criterion.

Under the frequentist setting, assume there exists an unknown true parameter $\theta^{*}\in\mathbb{R}^{d}$. The agent's strategy can be evaluated by comparing her rewards to the best possible rewards. To do so, define the optimal action at round $t$ by $a_{t}^{*}=\arg\max_{a\in[K]}g(x_{t,a}^{\prime}\theta^{*})$. Then, the agent's total regret under strategy $\pi$ can be expressed as

$R_{T}(\pi):=\sum_{t=1}^{T}\left(g\left(x_{t,a^{*}_{t}}^{\prime}\theta^{*}\right)-g\left(x_{t,a_{t}}^{\prime}\theta^{*}\right)\right),$

where $a_{t}\sim\pi_{t}$ and policy $\pi_{t}$ maps $\mathcal{H}_{t}$ to the probability simplex $\Delta^{K}:=\{(p_{1},\cdots,p_{K})\,\,|\,\,\sum_{i=1}^{K}p_{i}=1,p_{i}\geq 0\}$. Note that $R_{T}(\pi)$ is in general a random variable due to the possible randomness in $\pi$.

#### Assumptions.

Throughout the paper, we make the following assumption on the distribution $\gamma$ and the function $g$, which is standard in the generalized linear bandit literature ([FCGS2010, LLZ2017, JBNW2017]).

###### Assumption 1 (GLCB)

* • $\lambda_{\min}(\mathbb{E}[\frac{1}{K}\sum_{a\in[K]}x_{t,a}x_{t,a}^{\prime}])\geq\sigma_{0}^{2}$ for all $t\in[T]$.
* • $\kappa:=\inf_{\{\|x\|\leq 1,\|\theta-\theta^{*}\|\leq 1\}}\dot{g}(x^{\prime}\theta)>0$.
* • $g$ is twice differentiable. $\dot{g}$ and $\ddot{g}$ are upper bounded by $L_{g}$ and $M_{g}$, respectively.

In addition, we assume the delay sequence $\{D_{t}\}_{t=1}^{T}$ satisfies the following assumption.

###### Assumption 2 (Delay)

Assume $\{D_{t}\}_{t=1}^{T}$ are independent non-negative random variables with tail-envelope distribution $(\xi,\mu,M)$. That is, there exist a constant $M>0$ and a distribution $\xi$ with mean $\mu<\infty$ such that for any $m\geq M$ and $t\in[T]$,

$\mathbb{P}(D_{t}\geq m)\leq\mathbb{P}(D\geq m),$

where $D\sim\xi$. Furthermore, assume there exists $q\geq 0$ such that

$\mathbb{P}(D-\mu\geq x)\leq\exp\left(\frac{-x^{1+q}}{2\sigma^{2}}\right),$

where $\mathbb{E}[D]=\mu$.

Assumption 2 covers the most common delay patterns in real-world applications: $D$ is sub-Gaussian when $q=1$, and $D$ has an exponential tail when $q=0$. When the $D_{t}$'s are iid, the following condition guarantees Assumption 2:

$\mathbb{P}(D_{t}-\mathbb{E}[D_{t}]\geq x)\leq\exp\left(\frac{-x^{1+q}}{2\tilde{\sigma}^{2}}\right),$

for some $\tilde{\sigma}>0$ and $q\geq 0$. We summarize the parameter definitions in Table LABEL:tab:parameters. (See Section LABEL:app:table.) Note that with Assumption 2, we do not need to assume all delays have identical distributions, as long as they are independent over time. Since there exists an envelope distribution $\xi$ uniformly dominating the tail probabilities of all delays, we can get a handle on the tails of all the delay distributions. This can be viewed as a regularity condition on the delays.
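To make the delay mechanism concrete, here is a small simulation sketch (illustrative Python, not from the paper). A rescaled Weibull distribution with shape $1+q$ is one convenient family whose tail matches the form in Assumption 2; the helper names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_delays(T, q=0.0, sigma=1.0):
    """Independent delays with tail P(D >= x) = exp(-x^(1+q) / (2 sigma^2)):
    q = 0 gives exponential tails, q = 1 gives sub-Gaussian tails."""
    scale = (2.0 * sigma ** 2) ** (1.0 / (1.0 + q))
    return np.floor(scale * rng.weibull(1.0 + q, size=T)).astype(int)

def missing_counts(D):
    """G_t = #{s <= t-1 : s + D_s >= t}: the rewards still unobserved
    when the agent acts at round t (rounds are 1-indexed here)."""
    T = len(D)
    reveal = np.arange(1, T + 1) + D   # reward of round s arrives at end of s + D_s
    return np.array([np.sum(reveal[: t - 1] >= t) for t in range(1, T + 1)])

D = sample_delays(T=1000, q=0.0, sigma=2.0)
G = missing_counts(D)
print(G.max(), G.mean())               # G_T^* and the average backlog
```

The quantity computed by `missing_counts` is exactly the missing-reward count $G_{t}$ that drives the analysis in the next section.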
## 3 Delayed Upper Confidence Bound (DUCB) for GLCB

In this section, we propose a UCB-type algorithm for GLCB that adapts to the delay information in an online fashion. Let us first introduce the maximum likelihood estimator we adopt and then state the main algorithm.

### 3.1 Maximum Likelihood Estimators (MLEs).

Denote $T_{t}=\{s:s\leq t-1,D_{s}+s\leq t-1\}$ as the set containing the timestamps with complete information tuples at the beginning of round $t$. We use data with timestamps in $T_{t}$ to construct the MLE. Suppose we have independent samples of $\{Y_{s}:s\in T_{t}\}$ conditional on $\{X_{s}:s\in T_{t}\}$. The log-likelihood function of $\theta$ under (1) is

$\displaystyle\log l\left(\theta\,\,|\,\,T_{t}\right)$ $\displaystyle=$ $\displaystyle\sum_{s\in T_{t}}\left[\frac{Y_{s}X_{s}^{\prime}\theta-m(X_{s}^{\prime}\theta)}{h(\eta)}+A(Y_{s},\eta)\right]$ $\displaystyle=$ $\displaystyle\frac{1}{h(\eta)}\sum_{s\in T_{t}}\left[Y_{s}X_{s}^{\prime}\theta-m(X_{s}^{\prime}\theta)\right]+\text{constant}.$

Therefore, the MLE can be defined as

$\hat{\theta}_{t}\in\arg\max_{\theta\in\Theta}\sum_{s\in T_{t}}\left[Y_{s}X_{s}^{\prime}\theta-m(X_{s}^{\prime}\theta)\right].$

Since $m$ is differentiable with $\ddot{m}\geq 0$, the MLE can be written as the solution of the following equation:

$\displaystyle\sum_{s\in T_{t}}(Y_{s}-g(X_{s}^{\prime}\theta))X_{s}=0,$ (5)

which is the estimator we use in step 4 of Algorithm 1. Note that the general GLCB, a semi-parametric version of the GLM, is obtained by assuming only that $\mathbb{E}[Y|X]=g(X^{\prime}\theta^{*})$ (see (2)) without further assumptions on the conditional distribution of $Y$ given $X$. In this case, the estimator obtained by solving (5) is referred to as the maximum quasi-likelihood estimator. It is well documented that this estimator is consistent under very general assumptions as long as the matrix $\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$ tends to infinity as $t\rightarrow\infty$ ([CHY1999, FCGS2010]).
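As an illustration, equation (5) can be solved numerically with a few Newton steps when $g$ is the logistic link. The sketch below (illustrative Python; `glm_mle` is our own name, and the small ridge term is a numerical safeguard rather than part of the estimator) does exactly this on the completed samples indexed by $T_{t}$.

```python
import numpy as np

def glm_mle(X, Y, n_iter=50, ridge=1e-8):
    """Solve sum_s (Y_s - g(X_s' theta)) X_s = 0, i.e. equation (5), by
    Newton's method for the logistic link g(z) = 1 / (1 + exp(-z)).
    X: (n, d) contexts with observed rewards (timestamps in T_t); Y: (n,)."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ theta)))
        score = X.T @ (Y - mu)                  # gradient of the log-likelihood
        W = mu * (1.0 - mu)                     # g'(z) for the logistic link
        hess = (X * W[:, None]).T @ X + ridge * np.eye(d)
        step = np.linalg.solve(hess, score)
        theta += step
        if np.linalg.norm(step) < 1e-10:
            break
    return theta
```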
### 3.2 Algorithm: DUCB-GLCB

Denote $G_{t}=\sum_{s=1}^{t-1}\mathbb{I}\{s+D_{s}\geq t\}$ as the number of missing rewards when the agent is making a decision at round $t$. Further denote $W_{t}=\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$ as the matrix consisting of the feature information with timestamps in $T_{t}$, and $V_{t}=\sum_{s=1}^{t-1}X_{s}X_{s}^{\prime}$ as the matrix consisting of all available features at the end of round $t-1$. The main algorithm is then defined as follows.

Algorithm 1 DUCB-GLCB

1: Input: the total rounds $T$, model parameters $d$ and $\kappa$, and tuning parameters $\tau$ and $\delta$.
2: Initialization: randomly choose $a_{t}\in[K]$ for $t\in[\tau]$, set $V_{\tau+1}=\sum_{s=1}^{\tau}X_{s}X_{s}^{\prime}$, $T_{\tau+1}:=\{s\,:\,s\leq\tau,s+D_{s}\leq\tau\}$, $G_{\tau+1}=\tau-|T_{\tau+1}|$ and $W_{\tau+1}=\sum_{s\in T_{\tau+1}}X_{s}X_{s}^{\prime}$
3: for $t=\tau+1,\tau+2,\cdots,T$ do
4: Update Statistics: calculate the MLE $\hat{\theta}_{t}$ by solving $\sum_{s\in T_{t}}(Y_{s}-g(X_{s}^{\prime}\theta))X_{s}=0$
5: Update Parameter: $\beta_{t}=\frac{\hat{\sigma}}{\kappa}\sqrt{\frac{d}{2}\log\left(1+\frac{2(t-G_{t})}{d}\right)+\log(\frac{1}{\delta})}+{\sqrt{G_{t}}}$
6: Select Action: choose $a_{t}=\arg\max_{a\in[K]}\left(x_{t,a}^{\prime}\hat{\theta}_{t}+\beta_{t}\|x_{t,a}\|_{V_{t}^{-1}}\right)$
7: Update Observations: $X_{t}\leftarrow x_{t,a_{t}}$, $V_{t+1}\leftarrow V_{t}+X_{t}X_{t}^{\prime}$, $T_{t+1}\leftarrow T_{t}\cup\{s\,:\,s+D_{s}=t\}$, $G_{t+1}=t-|T_{t+1}|$, and $W_{t+1}\leftarrow W_{t}+\sum_{s:s+D_{s}=t}X_{s}X_{s}^{\prime}$
8: end for

###### Remark 2 (Comparison to the UCB-GLM Algorithm in [LLZ2017])

We make several adjustments to the UCB-GLM algorithm of [LLZ2017]. First, in step 4 (statistics update), we only use data with timestamps in $T_{t}$ to compute the MLE; including data whose rewards are still missing would bias the estimate. Second, the parameter $\beta_{t}$ used in the action selection is updated adaptively at each round (step 5), whereas in [LLZ2017] the corresponding parameter is constant over time. Moreover, in step 6, we choose to use $V_{t}$ rather than $W_{t}$ to normalize the context vector $x_{t,a}$.
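Putting the pieces together, a compact sketch of the DUCB-GLCB loop is given below (illustrative Python; it assumes the logistic model, reuses the hypothetical `glm_mle` from the previous sketch, and the tiny ridge initialization of $V$ is a numerical safeguard, not part of Algorithm 1).

```python
import numpy as np

rng = np.random.default_rng(1)

def ducb_glcb(contexts, reward_fn, delays, d, kappa=0.25, sigma_hat=0.5,
              delta=0.01, tau=20):
    """Sketch of Algorithm 1 (DUCB-GLCB) for the logistic model.
    contexts[t]: (K, d) array of feature vectors at round t (0-indexed);
    reward_fn(t, a): realized reward of playing arm a at round t;
    delays[t]: D_t, so reward t is revealed at the end of round t + D_t."""
    T = len(contexts)
    V = 1e-6 * np.eye(d)        # sum of X_s X_s'; tiny ridge keeps it invertible
    pending = []                # (reveal_round, X_s, Y_s) not yet observed
    Xc, Yc = [], []             # complete tuples, i.e. timestamps in T_t
    actions = []
    for t in range(T):
        # Move rewards whose delay has elapsed into the complete set.
        Xc += [x for (r, x, y) in pending if r <= t - 1]
        Yc += [y for (r, x, y) in pending if r <= t - 1]
        pending = [(r, x, y) for (r, x, y) in pending if r > t - 1]
        G_t = len(pending)      # number of missing rewards at round t
        if t < tau or not Xc:   # exploration period: play uniformly at random
            a = int(rng.integers(contexts[t].shape[0]))
        else:
            theta = glm_mle(np.array(Xc), np.array(Yc))   # step 4: MLE on T_t
            beta = (sigma_hat / kappa) * np.sqrt(         # step 5: adaptive width
                0.5 * d * np.log(1 + 2 * (t - G_t) / d) + np.log(1 / delta)
            ) + np.sqrt(G_t)
            Vinv = np.linalg.inv(V)
            width = np.sqrt(np.einsum("kd,de,ke->k", contexts[t], Vinv, contexts[t]))
            a = int(np.argmax(contexts[t] @ theta + beta * width))  # step 6
        actions.append(a)
        x = contexts[t][a]
        V += np.outer(x, x)                               # step 7
        pending.append((t + delays[t], x, reward_fn(t, a)))
    return actions
```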
### 3.3 Preliminary Analysis

Denote $G_{t}^{*}=\max_{1\leq s\leq t}G_{s}$ as the running maximum number of missing rewards up to round $t$. The properties of $G_{t}$ and $G^{*}_{t}$ are the key to analyzing the regret bounds for both the UCB and the Thompson sampling algorithms. We next characterize the tail behavior of $G_{t}$ and $G^{*}_{t}$.

###### Proposition 1 (Properties of $G_{t}$ and $G_{t}^{\star}$)

Assume Assumption 2. Denote $\sigma_{G}=\sigma\sqrt{2+q}$. Then,

1. $G_{t}$ is sub-Gaussian. Moreover, for all $t\geq 1$, with probability $1-\delta$,
$\displaystyle{G}_{t}\leq 2(\mu+M)+\sigma_{G}\sqrt{2\log\left(\frac{1}{\delta}\right)}+2\sigma_{G}^{2}\log C_{3}+1,$ (6)
where $C_{3}=2\sigma^{2}+1$.

2. With probability $1-\delta$,
$\displaystyle{G}_{T}^{*}$ $\displaystyle\leq$ $\displaystyle 2(\mu+M)+\sigma_{G}\sqrt{2\log T}+2\sigma_{G}^{2}\log C_{3}$ (7) $\displaystyle+\sigma_{G}\sqrt{2\log\left(\frac{1}{\delta}\right)+2\log C_{3}\sigma_{G}\sqrt{2\log T}+2\log C_{3}}+1,$
where $G_{T}^{*}=\max_{1\leq s\leq T}G_{s}$.

3. Define $W_{t}=\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$, where $X_{t}$ is drawn iid from some distribution $\gamma$ with support in the unit ball $\mathbb{B}_{d}$. Furthermore, let $\Sigma:=\mathbb{E}[X_{t}X_{t}^{\prime}]$ be the second moment matrix, and let $B$ and $\delta>0$ be two positive constants. Then there exist positive, universal constants $C_{1}$ and $C_{2}$ such that $\lambda_{\min}(W_{t})\geq B$ with probability at least $1-2\delta$, as long as
$\displaystyle t\geq\left(\frac{C_{1}\sqrt{d}+C_{2}\sqrt{\log(\frac{1}{\delta})}}{\lambda_{\min}(\Sigma)}\right)^{2}+\frac{2B}{\lambda_{\min}(\Sigma)}+2(\mu+M)+\sigma_{G}\sqrt{2\log\left(\frac{1}{\delta}\right)}+2\sigma_{G}^{2}\log C_{3}+1.$ (8)

A special case of Proposition 1-1 arises when the $D_{i}$'s are iid and $q=0$. Assume the $D_{i}\sim D$ are iid with exponentially decaying tails:

$\displaystyle\mathbb{P}(D-\mu_{I}\geq t)\leq\exp\left(-\frac{t}{2\sigma_{I}^{2}}\right),$ (9)

where $\mu_{I}=\mathbb{E}[D]$. Then with probability $1-\delta$, we have

$\displaystyle G_{t}-\mu_{I}\leq 2\sigma_{I}\sqrt{\log\left(\frac{1}{\delta}\right)}+1+4\sigma_{I}^{2}\log(2\sigma_{I}^{2}).$ (10)

At a high level, the proof utilizes the fact that, with high probability, there will be many zero terms in the summation $G_{t}=\sum_{s=1}^{t-1}\mathbb{I}(s+D_{s}\geq t)$ when $t$ is large. This is done by designing a sequence of stopping times for the successes. We highlight the idea by showing result (10) for the special case where the $D_{t}$'s are iid and $q=0$. The full version of the proof is deferred to Appendix LABEL:proof.

###### Sketch of the proof.

Define $V=\sum_{i=1}^{\infty}\mathbb{I}(D_{i}-\mu_{I}\geq i)$, where the $D_{i}\sim D$ are iid and satisfy (9). Now let us define the following sequence of stopping times (with $T(0)=0$), for $k\geq 1$:

$T(k)=\inf\{t>T(k-1):D_{t}-\mu_{I}\geq t\},$

where $T(k)$ is the time of the $k^{\text{th}}$ success. Therefore,

$\displaystyle\mathbb{P}(V\geq j)$ $\displaystyle=$ $\displaystyle\mathbb{P}(T(1)<\infty,T(2)<\infty,\cdots,T(j-1)<\infty,T(j)<\infty)$ (11) $\displaystyle=$ $\displaystyle\Pi_{k=1}^{j}\mathbb{P}\left(T(k)<\infty\,|\,T(i)<\infty\,\,\text{for}\,\,i\leq k-1\right)$ $\displaystyle=$ $\displaystyle\Pi_{k=2}^{j}\mathbb{P}\left(T(k)<\infty\,|\,T(k-1)<\infty\right)\mathbb{P}\left(T(1)<\infty\right)$ (12) $\displaystyle\leq$ $\displaystyle\Pi_{k=1}^{j}\left(\sum_{i=k}^{\infty}\exp\left(-\frac{i}{2\sigma_{I}^{2}}\right)\right)$ (13) $\displaystyle\leq$ $\displaystyle\Pi_{k=1}^{j}\left(2\sigma_{I}^{2}\exp\left(-\frac{k-1}{2\sigma_{I}^{2}}\right)\right)$ (14) $\displaystyle=$ $\displaystyle(2\sigma_{I}^{2})^{j}\exp\left(-\frac{(j-1)j}{4\sigma_{I}^{2}}\right).$ (15)

Here (11) holds by the tower property, and (12) holds since the event $\{T(k)<\infty\,|\,T(k-1)<\infty\}$ is equivalent to the event $\{T(k)<\infty\,|\,T(j)<\infty\,\,\text{for}\,\,j\leq k-1\}$. Conditional on $T(k-1)<\infty$, we have $\mathbb{P}\left(T(k)<\infty\,|\,T(k-1)<\infty\right)\leq\mathbb{P}\left(\cup_{j\geq k}\{D_{j}-\mu_{I}\geq j\}\right)\leq\sum_{i=k}^{\infty}\exp\left(-\frac{i}{2\sigma_{I}^{2}}\right)$, where the last inequality holds by the union bound; therefore (13) holds. Finally, (14) holds by bounding each sum by the corresponding integral.

Given (15), $V$ is sub-Gaussian, and with probability $1-\delta$,

$V\leq 2\sigma_{I}\sqrt{\log\left(\frac{1}{\delta}\right)}+1+4\sigma_{I}^{2}\log(2\sigma_{I}^{2}).$

Similarly, we can show that, for any $t\geq 1$, $G_{t}$ is sub-Gaussian, and with probability $1-\delta$,

$G_{t}-\mu_{I}\leq 2\sigma_{I}\sqrt{\log\left(\frac{1}{\delta}\right)}+1+4\sigma_{I}^{2}\log(2\sigma_{I}^{2}).$ ∎

Note that $G_{t}$ is sub-Gaussian even when $D$ has a near-heavy-tail distribution ($q\in[0,1)$).

###### Remark 3

The proof of Proposition 1 is simple but essential. It fully utilizes the property that the sequence in $V$ has many zero terms (with high probability). In particular, one cannot fully obtain the result by using the standard approach of working directly at the level of "the sum of sub-Gaussians is sub-Gaussian" and then analyzing the sum of the sub-Gaussian constants, which is the method used in [zhou2019]. To drive this point home, we provide an approach in this direction using the Hoeffding bound (Theorem LABEL:thm9); see Appendix LABEL:further_G. With such an approach, one can only handle the case $q>0$, which excludes the most difficult scenario of exponential delays.
With the Hoeffding bound, the sub-Gaussian parameter for $V$ is of the form $\sigma=\sqrt{\sum_{i=1}^{\infty}\sigma_{i}^{2}}$, where $\sigma_{i}$ is the sub-Gaussian parameter of the indicator $\mathbb{I}(D_{i}-\mu_{I}\geq i)$. Intuitively speaking, the Hoeffding bound does not take into consideration the sparsity of the sequence. Therefore, the argument cannot reach the limiting case $q=0$.

### 3.4 Regret Bounds

###### Theorem 1

Assume Assumptions 1-2. Fix any $\delta$. There exists a universal constant $C:=C(C_{1},C_{2},M,\mu,\sigma_{0},\hat{\sigma},\sigma,\kappa)>0$ such that if we run DUCB-GLCB with $\tau:=C\left(d+\log(\frac{1}{\delta})\right)$ and $\beta_{t}=\frac{\hat{\sigma}}{\kappa}\sqrt{\frac{d}{2}\log\left(1+\frac{2(t-G_{t})}{d}\right)+\log(\frac{1}{\delta})}+\sqrt{G_{t}}$, then, with probability at least $1-5\delta$, the regret of the algorithm is upper bounded by

$\displaystyle R_{T}$ $\displaystyle\leq$ $\displaystyle\tau+L_{g}\left[4\sqrt{\mu+M}\sqrt{Td\log\left(\frac{T}{d}\right)}+2^{7/4}\sqrt{\sigma_{G}}(\log T)^{1/4}\sqrt{d\log\left(\frac{T}{d}\right)T}+\frac{2d\hat{\sigma}}{\kappa}\log\left(\frac{T}{d\delta}\right)\sqrt{T}\right.$ (16) $\displaystyle\,\,+\left.2\sqrt{2Td\log\left(\frac{T}{d}\right)}\left(\sqrt{\sigma_{G}}\left({2\log\left(\frac{1}{\delta}\right)+2\log C_{3}\sigma_{G}\sqrt{2\log T}+2\log C_{3}}\right)^{1/4}\right.\right.$ $\displaystyle\,\,\left.\left.+\sqrt{1+2\sigma_{G}^{2}\log C_{3}}\right)\right]$

For the parameter definitions, we refer to Table LABEL:tab:parameters in Section LABEL:app:table.

The proof of Theorem 1 consists of three steps. The first step is to construct a confidence ball associated with the adaptive parameter $\beta_{t}$ and show that the true parameter falls into the confidence ball with high probability. The second step is to upper bound the normalized context sequence $\sum_{t=\tau+1}^{\tau+n}\|X_{t}\|_{V_{t}^{-1}}$. The last step is to utilize the properties of $G_{t}$ and $G^{*}_{t}$ proved in Proposition 1. The details are deferred to Appendix LABEL:proof.

Given the high-probability bound in Theorem 1, one can derive the expected regret bound without much additional work.

###### Corollary 1 (Expected regret)

Assume Assumptions 1-2. The expected regret is bounded by

$\displaystyle\mathbb{E}[R_{T}]={O\left(d\sqrt{T}\log(T)+\sqrt{\sigma_{G}}\sqrt{Td}(\log(T))^{3/4}+(\sqrt{\mu+M}+\sigma_{G})\sqrt{Td\log\left({T}\right)}\right).}$ (17)

Given the result in (16), (17) holds by choosing $\delta=\frac{1}{T}$ and using the fact that $R_{T}\leq T$. The highest-order term $O(d\sqrt{T}\log(T))$ does not depend on the delays. This result is in line with the non-contextual stochastic bandit literature ([JGS2013]). Delay impacts the expected regret bound in two ways. First, the sub-Gaussian parameter $\sigma_{G}$ and the mean-related parameter ${\mu+M}$ appear in the second-highest-order term. Second, the sub-Gaussian parameter ${\sigma_{G}}$ appears in the third-order term. Note that here we include the log factors in deciding the highest-order term, the second-highest-order term, and so on. If we exclude the log terms, then both delay parameters impact the regret bound multiplicatively.

### 3.5 Tighter Regret Bounds for Special Cases

When the sequence $\{D_{s}\}_{s=1}^{T}$ satisfies some specific assumptions, we are able to provide tighter high-probability bounds on the regret.

###### Proposition 2

Given Assumptions 1-2, we have the following results.

1. Suppose there exists a constant $D_{\max}>0$ such that $\mathbb{P}(D_{s}\leq D_{\max})=1$ for all $s\in[T]$.
Fix $\delta$. There exists a universal constant $C>0$ such that by taking $\tau=D_{\max}+C(d+\log(\frac{1}{\delta}))$, with probability $1-3\delta$, the regret of the algorithm is upper bounded by

$\displaystyle R_{T}\leq\tau+L_{g}\left(2{\sqrt{D_{\max}}}\sqrt{2Td\log\left(\frac{T}{d}\right)}+\frac{2d\hat{\sigma}}{\kappa}\log\left(\frac{T}{d\delta}\right)\sqrt{T}\right).$ (18)

2. Assume $D_{1},\cdots,D_{T}$ are iid non-negative random variables with mean $\mu_{I}$. There exists $C>0$ such that by taking $\tau:=C\left(d+\log(\frac{1}{\delta})\right)$, with probability $1-5\delta$, the regret of the algorithm is upper bounded by

$\displaystyle R_{T}\leq$ $\displaystyle\tau+L_{g}\left[{4\sqrt{\mu_{I}}}\sqrt{Td\log\left(\frac{T}{d}\right)}+{2^{7/4}\sqrt{\sigma_{G}}(\log T)^{1/4}}\sqrt{d\log\left(\frac{T}{d}\right)T}+\frac{2d\hat{\sigma}}{\kappa}\log\left(\frac{T}{d\delta}\right)\sqrt{T}\right.$ $\displaystyle\,\,+\left.2\sqrt{2Td\log\left(\frac{T}{d}\right)}\left(\sqrt{\sigma_{G}}\left({2\log\left(\frac{1}{\delta}\right)+2\log C_{3}\sigma_{G}\sqrt{2\log T}+2\log C_{3}}\right)^{1/4}\right.\right.$ $\displaystyle\,\,\left.\left.+\sqrt{1+2\sigma_{G}^{2}\log C_{3}}\right)\right]$

When the delays $\{D_{s}\}_{s=1}^{T}$ are bounded by $D_{\max}$, the delay parameter $D_{\max}$ only appears in the term of order $\sqrt{Td\log{T}}$ and does not affect the highest-order term $d\log(\frac{T}{d\delta})\sqrt{T}$. Compared to (17), there is no regret term of order $O(\sqrt{Td}\left(\log(T)\right)^{3/4})$ in (18). This is because we can provide a smaller threshold on the right-hand side of (8) when delays are bounded. When delays are iid, $\mu+M$ is replaced by $\mu_{I}$, the common expectation of all the random delays. We refer to Appendix LABEL:proof for the proof of Proposition 2.

## 4 Delayed Thompson Sampling (DTS) for GLCB

In Section 3, under the frequentist set-up, we assumed there exists a true parameter $\theta^{*}$ and used UCB to encourage exploration and to construct confidence intervals for $\theta^{*}$. In contrast, posterior sampling does not make use of upper confidence bounds to encourage exploration and instead relies on randomization. In this section, we operate in the Bayesian decision-making setting and assume the decision maker is equipped with a prior distribution on $\theta^{*}$. In this setting, the standard performance metric is the Bayesian regret, defined as follows:

$R^{B}_{T}(\pi)=\mathbb{E}_{\theta^{*},x}[R_{T}(\pi,\theta^{*})]=\sum_{t=1}^{T}\mathbb{E}_{\theta^{*},x}\left[g\left(x_{t,a^{*}_{t}(\theta^{*})}^{\prime}\theta^{*}\right)-g\left(x_{t,a_{t}}^{\prime}\theta^{*}\right)\right],$

where $a_{t}\sim\pi_{t}$. Next, we present the Thompson sampling algorithm adapted to the delayed setting. Algorithm 2 provides a formal description.

Algorithm 2 DTS-GLCB

1: Input: the total rounds $T$, tuning parameter $\tau$, prior $Q_{0}$
2: Initialization: randomly choose $a_{t}\in[K]$ for $t\in[\tau]$
3: Update information: $\mathcal{F}_{\tau}$ according to (4)
4: if $\mathcal{F}_{\tau}=\emptyset$ then
5: $Q_{1}(\theta)=Q_{0}(\theta)$
6: else
7: $Q_{1}(\theta)\propto Q_{0}(\theta)\,\Pi_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{F}_{\tau}}\mathbb{P}(y_{s,a_{s}}|\theta,x_{s,a_{s}})$
8: end if
9: for $t=1,2,\cdots,T-\tau$ do
10: Sample Model: $\hat{\theta}_{t+\tau}\sim Q_{t}$
11: Select Action: $\bar{a}_{t+\tau}\in\arg\max_{a\in[K]}\left\langle x_{t+\tau,a},\hat{\theta}_{t+\tau}\right\rangle$
12: Update information: $\mathcal{F}_{t+\tau}$ according to (4).
Define $\mathcal{I}_{t+\tau}:=\mathcal{F}_{t+\tau}-\mathcal{F}_{t+\tau-1}$ as the new information at round $t+\tau$
13: if $\mathcal{I}_{t+\tau}=\emptyset$ then
14: $Q_{t+1}(\theta)=Q_{t}(\theta)$
15: else
16: $Q_{t+1}(\theta)\propto Q_{t}(\theta)\,\Pi_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{t+\tau}}\mathbb{P}(y_{s,a_{s}}|\theta,x_{s,a_{s}})$
17: end if
18: end for

###### Remark 4

Note that in Algorithm 2 there is an exploration period of length $\tau$. The posterior distribution employed at round $\tau+1$ is conditioned on the observations made over the first $\tau$ rounds. Another point to note is that Algorithm 2 is kept at an abstract level: the exact computation depends on the prior chosen and on the exponential family. Every exponential family has a conjugate prior ([DY1979]), which admits an efficient posterior update. Section 4.1 provides a concrete example on linear contextual bandits, which is a simple special case. We use this special case to illustrate how one can perform efficient incremental updates in the presence of delays.

### 4.1 Delayed Thompson Sampling for Linear Contextual Bandits

When $g(x)=x$ and $m(x)=\frac{x^{2}}{2}$, (1) reduces to

$\displaystyle\mathbb{P}(Y|X)=\exp\left(\frac{YX^{\prime}\theta^{*}-(X^{\prime}\theta^{*})^{2}/2}{h(\eta)}+A(Y,\eta)\right).$ (19)

Recall, from Bayes' theorem, that the posterior distribution is equal to the product of the likelihood function $\theta\rightarrow\mathbb{P}(y|\theta)$ and the prior $\mathbb{P}(\theta)$, normalized by the probability of the data $\mathbb{P}(y)$:

$\mathbb{P}(\theta|y)=\frac{\mathbb{P}(y|\theta)\mathbb{P}(\theta)}{\int\mathbb{P}(y|\theta^{\prime})\mathbb{P}(\theta^{\prime})d\theta^{\prime}}.$

Different choices of the prior distribution $\mathbb{P}(\theta)$ may make the integral more or less difficult to calculate, and the product $\mathbb{P}(y|\theta)\mathbb{P}(\theta)$ may take one form or another. But for certain choices of the prior, the posterior has the same form as the prior, with possibly different parameter values. Such a choice is a conjugate prior. A conjugate prior, giving a closed-form expression for the posterior, makes the Thompson sampling update efficient. Further notice that every exponential family has a conjugate prior ([DY1979]).

Now we consider the normal conjugate prior for the linear model (19). Let $B_{t}=aI_{d}+\sum_{s\in T_{t}}x_{s,a_{s}}x_{s,a_{s}}^{\prime}$ and $\theta_{t}=B_{t}^{-1}\left(\sum_{s\in T_{t}}x_{s,a_{s}}y_{s,a_{s}}\right)$. Given the linear model (19), suppose $Y|X$ is Gaussian with distribution $\mathcal{N}(X^{\prime}{\theta},v^{2})$. If the prior for $\theta$ at round $t$ is $\mathcal{N}(\theta_{t},v^{2}B_{t}^{-1})$, then it is easy to verify that the posterior distribution at round $t+1$ is $\mathcal{N}(\theta_{t+1},v^{2}B_{t+1}^{-1})$. Algorithm 2 then becomes:

Algorithm 3 DTS-LCB

1: Input: the total rounds $T$, constants $v>0$ and $a>0$, tuning parameter $\tau$, conjugate prior $\mathcal{N}(\theta_{0},v^{2}B_{0}^{-1})$ with $\theta_{0}=0$ and $B_{0}=aI_{d}$, $f_{0}=0$
2: Initialization: randomly choose $a_{t}\in[K]$ for $t\in[\tau]$
3: Update information: $\mathcal{F}_{\tau}$ according to (4)
4: if $\mathcal{F}_{\tau}=\emptyset$ then
5: $\theta_{1}=\theta_{0}$, $B_{1}=B_{0}$
6: else
7: $B_{1}=B_{0}+\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{F}_{\tau}}x_{s,a_{s}}x_{s,a_{s}}^{\prime}$, $f_{1}=f_{0}+\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{F}_{\tau}}x_{s,a_{s}}y_{s,a_{s}}$ and $\theta_{1}=B_{1}^{-1}f_{1}$
8: end if
9: for $t=1,2,\cdots,T-\tau$ do
10: Sample Model: $\hat{\theta}_{t+\tau}\sim\mathcal{N}(\theta_{t},v^{2}B_{t}^{-1})$
11: Select Action: $\bar{a}_{t+\tau}\in\arg\max_{a\in[K]}\left\langle x_{t+\tau,a},\hat{\theta}_{t+\tau}\right\rangle$
12: Update information: $\mathcal{F}_{t+\tau}$ according to (4). Define $\mathcal{I}_{t+\tau}:=\mathcal{F}_{t+\tau}-\mathcal{F}_{t+\tau-1}$ as the new information at round $t+\tau$
13: if $\mathcal{I}_{t+\tau}=\emptyset$ then
14: $B_{t+1}=B_{t}$, $f_{t+1}=f_{t}$
15: $\theta_{t+1}=\theta_{t}$
16: else
17: $B_{t+1}=B_{t}+\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{t+\tau}}x_{s,a_{s}}x_{s,a_{s}}^{\prime}$, $f_{t+1}=f_{t}+\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{t+\tau}}x_{s,a_{s}}y_{s,a_{s}}$, and $\theta_{t+1}=B_{t+1}^{-1}f_{t+1}$
18: end if
19: end for

Note that the update in line 17 is in incremental form, which is practically efficient.
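To make this incremental update concrete, the following sketch (illustrative Python; the class name and interface are our own) maintains $(B_{t},f_{t},\theta_{t})$ exactly as in lines 13-17 and samples from the Gaussian posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

class DTSLinear:
    """Sketch of Algorithm 3 (DTS-LCB): conjugate Gaussian posterior
    updates under delayed feedback, with prior N(0, v^2 (a I_d)^{-1})."""

    def __init__(self, d, v=1.0, a=1.0):
        self.B = a * np.eye(d)    # B_t = a I_d + sum of x x' over revealed tuples
        self.f = np.zeros(d)      # f_t = sum of x y over revealed tuples
        self.theta = np.zeros(d)  # theta_t = B_t^{-1} f_t
        self.v = v

    def act(self, contexts):
        """Sample theta_hat ~ N(theta_t, v^2 B_t^{-1}) and play the greedy
        arm (lines 10-11). contexts: (K, d) array."""
        cov = self.v ** 2 * np.linalg.inv(self.B)
        theta_hat = rng.multivariate_normal(self.theta, cov)
        return int(np.argmax(contexts @ theta_hat))

    def update(self, revealed):
        """Incremental update of line 17 with the newly revealed tuples
        I_{t+tau} = [(x, y), ...]; an empty list changes nothing (13-15)."""
        for x, y in revealed:
            self.B += np.outer(x, x)
            self.f += y * x
        if revealed:
            self.theta = np.linalg.solve(self.B, self.f)

# Usage per round: a = agent.act(X_t); later, agent.update(newly_revealed).
```

Because each round only adds rank-one terms to $B_{t}$ and a vector to $f_{t}$, the per-round cost is dominated by one linear solve, regardless of how many rounds of history have accumulated.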
### 4.2 Regret Bounds

Denote by $\pi_{\tau}^{\text{PS}}$ the posterior sampling policy described in Algorithm 2 with an exploration period $\tau$. We have the following result.

###### Theorem 2

Assume Assumptions 1-2. There exists a universal constant $C:=C(C_{1},C_{2},M,\mu,\sigma_{0},\sigma_{G},\sigma,\kappa)>0$ such that if we run the exploration with $\tau:=C\left(d+\log(\frac{1}{\delta})\right)$,

$\displaystyle R^{B}_{T}(\pi_{\tau}^{\text{PS}})={O\left(d\log T\sqrt{T}+\sqrt{\sigma_{G}}\sqrt{Td}(\log(T))^{3/4}+(\sqrt{\mu+M}+\sigma_{G})\sqrt{dT\log\left(T\right)}\right).}$ (20)

For the parameter definitions, we refer to Table LABEL:tab:parameters in Section LABEL:app:table.

We follow the steps in [RV2014] to prove the Bayesian regret bound in Theorem 2. The idea is as follows. We first decompose the Bayesian regret and the UCB regret and build a connection between them. We then establish the Bayesian regret bound by utilizing a sequence of upper confidence bounds. We defer the details to Appendix LABEL:proof.

When $\{D_{s}\}_{s=1}^{T}$ satisfies some specific assumptions, we are able to provide tighter Bayesian regret bounds.

###### Corollary 2

Assume Assumptions 1-2. We have the following results:

1. If there exists a constant $D_{\max}>0$ such that $\mathbb{P}(D_{s}\leq D_{\max})=1$ for all $s\in[T]$, then
$R^{B}_{T}(\pi_{\tau}^{\text{PS}})=O\left(d\log T\sqrt{T}+{\sqrt{D_{\max}}}\sqrt{dT\log T}\right).$

2. If $D_{1},\cdots,D_{T}$ are iid non-negative random variables with mean $\mu_{I}$, then
$R^{B}_{T}(\pi_{\tau}^{\text{PS}})={O\left(d\log T\sqrt{T}+\sqrt{\sigma_{G}}\sqrt{Td}(\log(T))^{3/4}+(\sqrt{\mu_{I}}+\sigma_{G})\sqrt{dT\log\left(T\right)}\right).}$

We defer the proof of Corollary 2 to Appendix LABEL:proof. The results in Theorem 2 and Corollary 2 are comparable to the results in Section 3.

## 5 Extensions: History-dependent Delays

In the previous sections, we analyzed the regret bounds for both DUCB-GLCB and DTS-GLCB when the delays are independent. In practice, such an independence assumption may not hold, and current delays may depend on historical delays. In this section, we explore two types of dependency structures for the delays. In Section 5.1, we discuss Markov delays whose stationary distribution is near-heavy-tailed. In Section LABEL:sec:random_delay, we discuss delays with general dependency structures but under a stronger assumption on the stationary distribution, namely that it is lighter than sub-Gaussian.

### 5.1 Markov Delays

###### Assumption 3 (Markov Delay)

Let $\{D_{t}\}_{t=1}^{T}$ be a stationary Markov chain on the general state space $\mathcal{X}=\mathbb{N}^{+}$ with invariant distribution $\pi$. Given $D\sim\pi$ with $\mu_{M}=\mathbb{E}[D]$, we further assume that

$\mathbb{P}(D-\mu_{M}\geq x)\leq\exp\left(\frac{-x^{1+q}}{2\sigma_{M}^{2}}\right),$

for some $q>0$ and $\sigma_{M}>0$.

Under Assumption 3, the stationary distribution $\pi$ can have a near-heavy tail when $q$ is small. Recall that $G_{t}=\sum_{s=1}^{t-1}\mathbb{I}\{s+D_{s}\geq t\}$ is the number of missing rewards and $G_{t}^{*}=\max_{1\leq s\leq t}G_{s}$ is the running maximum number of missing rewards. Under Assumption 3, $G_{t}$ and $G_{t}^{*}$ have the following properties, which are again the key to analyzing the regret bounds for both DUCB and DTS.

###### Proposition 3 (Properties of $G_{t}$ and $G_{t}^{\star}$ under Markov delays)

Assume Assumption 3, and assume the chain has $l_{2}$-spectral gap $1-\lambda\in(0,1]$. Then,

1. For any $0<\delta<1$ and any $t$, with probability at least $1-\delta$,
$\displaystyle G_{t}-\mu_{M}\leq A_{2}(\lambda)\log\left(\frac{1}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{1}{\delta}\right)},$ (21)
where $A_{1}(\lambda)=\frac{1+\lambda}{1-\lambda}$ and $A_{2}(\lambda)=\frac{1}{3}\mathbb{I}(\lambda=0)+\frac{5}{1-\lambda}\mathbb{I}(\lambda>0)$.

2. With probability at least $1-\delta$,
$\displaystyle G_{T}^{*}\leq\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)},$ (22)
where $G_{T}^{*}=\max_{1\leq t\leq T}G_{t}$.

3. Define $W_{t}=\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$, where $X_{t}$ is drawn iid from some distribution $\gamma$ with support in the unit ball $\mathbb{B}_{d}$. Furthermore, let $\Sigma:=\mathbb{E}[X_{t}X_{t}^{\prime}]$ be the second moment matrix, and let $B$ and $\delta>0$ be two positive constants. Then there exist positive, universal constants $C_{1}$ and $C_{2}$ such that $\lambda_{\min}(W_{t})\geq B$ with probability at least $1-2\delta$, as long as
$\displaystyle t\geq\left(\frac{C_{1}\sqrt{d}+C_{2}\sqrt{\log(\frac{1}{\delta})}}{\lambda_{\min}(\Sigma)}\right)^{2}+\frac{2B}{\lambda_{\min}(\Sigma)}+\mu_{M}+A_{2}(\lambda)\log\left(\frac{1}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{1}{\delta}\right)}.$ (23)

Here $1-\lambda$ is the $l_{2}$-spectral gap of the transition kernel; we refer to [JSF2018, Section 2.2] for the formal concepts and the definition of the $l_{2}$-spectral gap. Proposition 3-1 is proved by utilizing Bernstein's inequality for general Markov chains ([JSF2018, Theorem 1.1]), and Proposition 3-2 is proved by applying the union bound.

###### Proof of Proposition 3.

Recall $G_{t}=\sum_{s=1}^{t-1}\mathbb{I}\{D_{s}\geq t-s\}$. Define $f_{s}(D_{s})=\mathbb{I}\{D_{s}\geq t-s\}-p_{s}$ with $p_{s}=\mathbb{P}(D_{s}\geq t-s)$. Then $\mathbb{E}[f_{s}(D_{s})]=0$, $\mathbb{V}[f_{s}(D_{s})]=p_{s}(1-p_{s})\leq p_{s}$, and $\sum_{s=1}^{t-1}\mathbb{V}[f_{s}(D_{s})]\leq\sum_{s=1}^{t-1}p_{s}\leq\mu_{M}$. From [JSF2018, Theorem 1.1], we have

$\displaystyle\mathbb{P}\left(\sum_{s=1}^{t-1}f_{s}(D_{s})>x\right)\leq\exp\left(-\frac{x^{2}}{2(A_{1}(\lambda)\mu_{M}+A_{2}(\lambda)x)}\right).$ (24)

Note that the right-hand side of (24) is independent of $t$.
Technically speaking, this is because the sum of the variances $\sum_{s=1}^{t-1}\mathbb{V}[f_{s}(D_{s})]$ is upper bounded by $\mu_{M}$, which is independent of $t$. Therefore, Property 1 in Proposition 3 holds for any $t\geq 1$. Property 2 holds by the union bound and Property 1:

$\displaystyle\mathbb{P}\left(\max_{1\leq t\leq T}G_{t}>\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)}\right)$ $\displaystyle\leq\sum_{t=1}^{T}\mathbb{P}\left(G_{t}>\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)}\right)\leq T\times\frac{\delta}{T}=\delta.$

Therefore, with probability at least $1-\delta$,

$\max_{1\leq t\leq T}G_{t}\leq\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)}.$ ∎
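As an empirical sanity check on (22), one can simulate a Markov delay chain and track the running maximum backlog. The birth-death chain below is a hypothetical example of ours (illustrative Python), not a construction from the paper; any stationary chain satisfying Assumption 3 could be substituted.

```python
import numpy as np

rng = np.random.default_rng(3)

def markov_delays(T, cap=30, p_up=0.4, p_down=0.5):
    """A hypothetical birth-death Markov chain of delays on {0, ..., cap};
    the downward drift keeps the stationary mean small."""
    D = np.zeros(T, dtype=int)
    for t in range(1, T):
        u = rng.random()
        if u < p_up:
            D[t] = min(D[t - 1] + 1, cap)
        elif u < p_up + p_down:
            D[t] = max(D[t - 1] - 1, 0)
        else:
            D[t] = D[t - 1]
    return D

T = 5000
D = markov_delays(T)
reveal = np.arange(1, T + 1) + D   # reward of round s arrives at end of s + D_s
G = np.array([np.sum(reveal[: t - 1] >= t) for t in range(1, T + 1)])
print(G.max())                     # empirical G_T^*, to compare with (22)
```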